I have a couple of arguments inside a list (array, e.g. $@) and I'd like to know if the option -v is in the list. In Python I would simply do: verbose = "-v" in sys.argv How do I achieve that in shell without much code?
In a shell script, you call the getopts builtin in a loop. There is a code example in the dash manual. Note that your Python code is not correct except in extremely simple cases: it detects an argument -v anywhere on the line, even if it's the argument of another option or after non-option arguments. The correct way to parse options in Python is with argparse or optparse.
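A minimal sketch of that getopts loop (the variable name verbose is my choice, not from the answer):

```shell
#!/bin/sh
# Sketch: detect whether -v was passed, using getopts.
verbose=false
while getopts v opt; do
  case $opt in
    v) verbose=true ;;
    *) echo "usage: $0 [-v]" >&2; exit 1 ;;
  esac
done
shift $((OPTIND - 1))   # drop the parsed options; "$@" is now the operands

if "$verbose"; then
  echo "verbose mode on"
fi
```

Unlike the Python one-liner, this stops parsing at the first non-option argument, so a stray -v operand is not mistaken for the flag.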
Shell is option given
I am trying to write a control statement that checks whether an option flag was used. For -o output.file: if it is used, I would like to set OUTPUTSUM equal to OUTPUTFILE, and if no -o option was given, default it to a path relative to the script. I am just not sure how to go about checking for the -o option. Pseudo code:

while getopts i:o:h OPTION
do
  case $OPTION in
    i) INPUTFILE=$OPTARG ;;
    o) OUTPUTFILE=$OPTARG ;;
    h) usage
       exit 1 ;;
    ?) usage
       exit ;;
  esac
done

if [-o EXIST]; then
  OUTPUTSUM = OUTPUTFILE
else
  OUTPUTSUM = $SCRIPTPATH/SUMMARY
fi
Use bash parameter expansion, which substitutes a default value if the variable is unset or null:

outputsum=${outputfile:-"$scriptpath/SUMMARY"}

Get out of the habit of using $ALL_CAPS_VARNAMES -- one day you'll use "PATH" and break your script.
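Putting the two pieces together, a sketch (lowercase variable names as the answer suggests; the usage message and summary path are made up):

```shell
#!/bin/sh
# Sketch: parse -o with getopts, then fall back to a default if it was never given.
scriptpath=$(dirname -- "$0")
outputfile=                      # stays empty unless -o is seen
while getopts i:o:h opt; do
  case $opt in
    i) inputfile=$OPTARG ;;
    o) outputfile=$OPTARG ;;
    h|*) echo "usage: $0 [-i input] [-o output]" >&2; exit 1 ;;
  esac
done
shift $((OPTIND - 1))

# ${var:-default}: use the default when outputfile is unset or null
outputsum=${outputfile:-"$scriptpath/SUMMARY"}
echo "$outputsum"
```

Run with -o /tmp/out it prints /tmp/out; run without options it prints the SUMMARY path next to the script.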
bash control statement for when an option flag is used
The man page for install describes the --compare option like so:

-C, --compare
    compare each pair of source and destination files, and in some cases, do not modify the destination at all

However, there is no explanation or further reference as to what "in some cases" exactly means.
The full install manual gives more details: Compare content of source and destination files, and if there would be no change to the destination content, owner, group, permissions, and possibly SELinux context, then do not modify the destination at all. Note this option is best used in conjunction with --user, --group and --mode options, lest install incorrectly determines the default attributes that installed files would have (as it doesn’t consider setgid directories and POSIX default ACLs for example). This could result in redundant copies or attributes that are not reset to the correct defaults. You can see this locally by running info coreutils install.
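As a rough local check of that behaviour (the file names and temp-dir setup are mine; -o/-g are omitted since they need root, so only -m is pinned down here):

```shell
#!/bin/sh
# Sketch: a second identical `install -C` should leave the destination untouched.
set -e
dir=$(mktemp -d)
echo 'hello' > "$dir/src.txt"

install -C -m 644 "$dir/src.txt" "$dir/dst.txt"   # first run: copies
t1=$(stat -c %Y "$dir/dst.txt")                   # destination mtime after first install
sleep 1
install -C -m 644 "$dir/src.txt" "$dir/dst.txt"   # second run: content and mode match
t2=$(stat -c %Y "$dir/dst.txt")

[ "$t1" = "$t2" ] && echo "destination untouched"
```

If the content or the mode had changed between the two runs, install would copy again and the mtimes would differ.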
install --compare is said to, in some cases, not modify the destination at all - but in which cases?
This is about the ps command: adding one or more extra columns/headers to the default headers, according to the option(s) used with ps, through the O option.

Intro

If ps x is executed the output is as follows:

PID TTY STAT TIME COMMAND
1677 ? Ss 0:00 /lib/systemd/systemd --user
1679 ? S 0:00 (sd-pam)
1704 tty1 S+ 0:00 -bash
1961 tty4 S 0:00 -bash
1973 tty4 S+ 0:00 man ps
1983 tty4 S+ 0:00 pager
2227 ? S 0:00 sshd: manueljordan@pts/0
2228 pts/0 Ss 0:00 -bash
2307 ? S 0:01 sshd: manueljordan@pts/1
2308 pts/1 Ss 0:00 -bash
2407 ? S 0:00 sshd: manueljordan@pts/2
2408 pts/2 Ss 0:00 -bash
2437 pts/2 S+ 0:00 less
2846 pts/1 S+ 0:00 man ps
2856 pts/1 S+ 0:00 pager
2968 pts/0 R+ 0:00 ps x

This shows the default set of columns/headers. As a simple confirmation, if ps xO %cpu is executed the output is as follows:

PID %CPU S TTY TIME COMMAND
1677 0.0 S ? 00:00:00 systemd
1679 0.0 S ? 00:00:00 (sd-pam)
1704 0.0 S tty1 00:00:00 bash
1961 0.0 S tty4 00:00:00 bash
1973 0.0 S tty4 00:00:00 man
1983 0.0 S tty4 00:00:00 pager
2227 0.0 S ? 00:00:00 sshd
2228 0.0 S pts/0 00:00:00 bash
2307 0.0 S ? 00:00:01 sshd
2308 0.0 S pts/1 00:00:00 bash
2407 0.0 S ? 00:00:00 sshd
2408 0.0 S pts/2 00:00:00 bash
2437 0.0 S pts/2 00:00:00 less
2846 0.0 S pts/1 00:00:00 man
2856 0.0 S pts/1 00:00:00 pager
2969 0.0 R pts/0 00:00:00 ps

Up to here all is OK; no reason to create this post. Therefore, theoretically, any HEADER can be added based on its respective CODE, according to the STANDARD FORMAT SPECIFIERS section available through man ps. Now, thanks to the experience of this question: What does '-P' mean in the context of the 'ps' command?, the PSR column/header comes into play.

Situation

If ps xO psr is executed the output is as follows:

PID TTY STAT TIME COMMAND
1677 ? Ss 0:00 /lib/systemd/systemd --user
1679 ? S 0:00 (sd-pam)
1704 tty1 S+ 0:00 -bash
1961 tty4 S 0:00 -bash
1973 tty4 S+ 0:00 man ps
1983 tty4 S+ 0:00 pager
2227 ? S 0:00 sshd: manueljordan@pts/0
2228 pts/0 Ss 0:00 -bash
2307 ? S 0:01 sshd: manueljordan@pts/1
2308 pts/1 Ss 0:00 -bash
2407 ? S 0:00 sshd: manueljordan@pts/2
2408 pts/2 Ss 0:00 -bash
2437 pts/2 S+ 0:00 less
2846 pts/1 S+ 0:00 man ps
2856 pts/1 S+ 0:00 pager
2975 pts/0 R+ 0:00 ps xO psr

Notice that PSR does not appear. Why? But if ps xO %cpu,psr is executed the output is:

PID %CPU PSR S TTY TIME COMMAND
1677 0.0 1 S ? 00:00:00 systemd
1679 0.0 2 S ? 00:00:00 (sd-pam)
1704 0.0 1 S tty1 00:00:00 bash
1961 0.0 0 S tty4 00:00:00 bash
1973 0.0 1 S tty4 00:00:00 man
1983 0.0 2 S tty4 00:00:00 pager
2227 0.0 3 S ? 00:00:00 sshd
2228 0.0 3 S pts/0 00:00:00 bash
2307 0.0 3 S ? 00:00:01 sshd
2308 0.0 3 S pts/1 00:00:00 bash
2407 0.0 0 S ? 00:00:00 sshd
2408 0.0 0 S pts/2 00:00:00 bash
2437 0.0 0 S pts/2 00:00:00 less
2846 0.0 0 S pts/1 00:00:00 man
2856 0.0 0 S pts/1 00:00:00 pager
2981 0.0 0 R pts/0 00:00:00 ps

or if ps xO uname,psr is executed the output is:

PID USER PSR S TTY TIME COMMAND
1677 manuelj+ 1 S ? 00:00:00 systemd
1679 manuelj+ 2 S ? 00:00:00 (sd-pam)
1704 manuelj+ 1 S tty1 00:00:00 bash
1961 manuelj+ 0 S tty4 00:00:00 bash
1973 manuelj+ 1 S tty4 00:00:00 man
1983 manuelj+ 2 S tty4 00:00:00 pager
2227 manuelj+ 3 S ? 00:00:00 sshd
2228 manuelj+ 3 S pts/0 00:00:00 bash
2307 manuelj+ 3 S ? 00:00:01 sshd
2308 manuelj+ 3 S pts/1 00:00:00 bash
2407 manuelj+ 0 S ? 00:00:00 sshd
2408 manuelj+ 0 S pts/2 00:00:00 bash
2437 manuelj+ 0 S pts/2 00:00:00 less
2846 manuelj+ 0 S pts/1 00:00:00 man
2856 manuelj+ 0 S pts/1 00:00:00 pager
2982 manuelj+ 0 R pts/0 00:00:00 ps

As you can see, only when at least one other extra header is declared together with the PSR header does PSR appear in the output. Why does the PSR header not appear when it is declared as the only extra header through the O option? Is this expected behavior for some reason?
I'd recommend not using the O option like that, because its behavior changes depending on what it is given, and quite often it is very confusing. You probably see ps xO psr and think it should sort by and show the PSR (last processor used) column? That's not actually what it's doing. @steeldriver above nailed it: it's sorting with the old-style sort keys. You are asking ps to sort by pid ("p"), size ("s") and RSS ("r"). rsp or spr would be just as valid, with a different sort order of course. Why does ps xO psr,%cpu, or even for that matter ps xO psr,psr, change things? ps then realises you are not using the old sort keys but the names of columns to show.
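If the goal is simply to see the PSR column, an explicit output format with -o sidesteps the old-style sort-key parsing entirely; a sketch (the column list here is my choice, not from the answer):

```shell
# Ask for the columns by name; -o takes format specifiers only,
# so "psr" cannot be misread as sort keys.
ps -o pid,psr,stat,tty,time,comm -p $$
```

This prints a PSR column for the current shell process regardless of whether any other extra column is requested.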
ps command: strange behavior with 'O' option, extra column/header does not appear as expected
I would like to get an option value when launching a shell script. I wrote:

optstring=hcnxl:
V=0
while getopts $optstring opt; do
  case $opt in
    h) V=1 ;;
    c) V=2 ;;
    n) V=3 ;;
    x) V=4 ;;
    l) V=$OPTARG ;;
    *) echo -e "wrong syntax" && exit 1 ;;
  esac
done
shift $((OPTIND-1))
echo :: ${OPTARG} : $OPTARG

But when I run the script with -l <n> the option is always empty. What am I getting wrong? Thanks
Change the last line to:

echo "$V"

You don't need to shift anything; while getopts already loops over the options.
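A corrected sketch of the script with that change applied (OPTARG is only meaningful inside the loop, which is why echoing it afterwards printed nothing):

```shell
#!/bin/sh
# Same option parsing as in the question, but echoing $V at the end.
optstring=hcnxl:
V=0
while getopts "$optstring" opt; do
  case $opt in
    h) V=1 ;;
    c) V=2 ;;
    n) V=3 ;;
    x) V=4 ;;
    l) V=$OPTARG ;;   # capture the -l argument while OPTARG is still set
    *) echo "wrong syntax" >&2; exit 1 ;;
  esac
done
shift $((OPTIND - 1))
echo "$V"
```

Running it as `script -l 7` now prints 7, and `script -h` prints 1.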
How to get option value $OPTARG correctly?
I have these aliases in my ~/.bashrc:

alias grep='grep --color=auto -H'
alias fgrep='fgrep --color=auto -H'
alias egrep='egrep --color=auto -H'

but they have no effect when I run find ... -exec grep ..., and I always have to provide those options manually. Is there a way to tell find to rely on aliases in the -exec option's arguments? I'm thinking of configuration files, rather than other aliases. Would it be unsafe in some way?
You can't use aliases like that. Aliases only work when they are the first word of a command; the shell basically replaces the alias text with the actual command. When you enter a command, the shell first searches for an alias, then a function, and so on. Alias substitution doesn't happen when the alias appears in the middle of a command sequence. Furthermore, the -exec action of find always spawns a separate process that executes the binary directly; it can run neither an alias nor a function, and that's hard-coded.
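A common workaround is to put the options in a shell function, export it, and have -exec start a shell that can see it. This is bash-specific, and the function name mygrep below is made up:

```shell
#!/usr/bin/env bash
# Sketch: functions (unlike aliases) can be passed to child bash shells
# with `export -f`.
mygrep() { grep --color=auto -H "$@"; }
export -f mygrep

# find spawns bash, and that bash inherits the exported function:
find . -name '*.txt' -exec bash -c 'mygrep "pattern" "$@"' bash {} +
```

The trailing `bash` argument becomes $0 inside the inner shell, so the matched file names land in "$@".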
Aliasing grep in find's -exec option
I am using zsh 5.4.2. The function causing the issue is:

function zp () {
  zparseopts -E -watch:=o_watch -show=o_show
  echo "show : $o_show"
  echo "watch : $o_watch"
}

Output:

$ zp --show --watch "Watching"
show : --show
watch : --watch Watching
$ zp --watch --show
show :
watch : --watch --show

You can see that if I do not pass a value to --watch (whose argument is mandatory), then it takes the next option, in this case --show, as the argument. It should actually show an error like:

zp:zparseopts:1: missing argument for option: -watch

Why is --watch taking --show as an argument instead of throwing an error?
For comparison, I'm pretty sure that's how the GNU C function getopt_long also works, e.g. with GNU ls:

$ ls --sort --foo
ls: invalid argument ‘--foo’ for ‘--sort’
Valid arguments are:
...

If you made the argument to --watch optional, zparseopts would treat --watch --show as two separate arguments: "In all cases, option-arguments must appear either immediately following the option in the same positional parameter or in the next one. Even an optional argument may appear in the next parameter, unless it begins with a ‘-’." But it seems that the user just needs to know which options take arguments, which also happens with short options, e.g. tar -tzf is quite different from tar -tfz. Using (only) --sort=whatever would, in my opinion, make it clearer, but zparseopts doesn't even really support = directly (--sort=whatever would give =whatever as the argument value). And that doesn't really work for short options.
If no argument is given to mandatory option, zparseopts takes next option as the argument
I was trying to find out what qalter's -r flag does, but I can't search for -r when viewing the man page for it (pattern not found). Yet it's clearly there. If you scroll down a bit you'll see it. Why can't I search for flags in this man page? This online version seems to work fine though.
It sometimes happens that man pages contain formatting commands and settings that lead to certain characters being rendered using various non-ASCII characters. This can be ⎪ instead of |, ­ (soft hyphen) or ‐ (hyphen) or ‑ (non-breaking hyphen) instead of - (ASCII hyphen-minus), ∗ instead of *, etc. Try searching for non-ASCII characters in the man page:

LC_COLLATE=C LESS='+/[^ -~]' man qalter

You can force the man page to be rendered in ASCII by choosing an ASCII character set:

LC_CTYPE=C man qalter

Having shell options rendered with a non-ASCII alternative to - is a bug, probably in the man page source. I don't know enough *roff to know what the bug might be.
Can't search for flags in qalter's man page?
The manpage for objdump states:

--demangle[=style]
    ...
    The optional demangling style argument can be used to choose an appropriate demangling style for your compiler.

Nowhere does it mention what possible styles are recognized by the program. Wherever I have found reference to the --demangle option in forum posts, there is also no mention of the possible style options. How can anyone find out?
I looked in the sources and found the solution, and I also found some misleading information in objdump itself: you should use objdump -H to get the list of available styles. Just running objdump gives you misleading information about -H:

-H Display this information

which it is not: -H gives you much more data. In any case, on my system:

-C, --demangle[=STYLE] Decode mangled/processed symbol names
    STYLE can be "none", "auto", "gnu-v3", "java", "gnat", "dlang", "rust"

Note: not what I expected. I was thinking more of C++ variants (anybody still remember the few ABI changes some years ago?)
What are the possible objdump demangle styles?
I'm looking at an online man page for the sync command and I can't quite figure out the intended use of the -d or --data option. Is it faster? Does it have any noticeable effect? Or is it something legacy?
Many filesystems have checksums for metadata. Ordinary file data, on the other hand, is often not checksummed. sync -d is a faster operation than a complete sync and should be preferred in situations where time matters, like when running on a fail-safe battery. sync -d also needs fewer I/O accesses and thus can increase the lifespan of the device, especially on cheap devices like SD cards and other cheap flash memory technologies. Recommended use: use sync -d within program loops, followed by a full sync at the end of the loop, and use sync or sync -d; sync on the command line.
Could someone explain when I would use sync -d over sync with no options?
Is it possible to invoke some program in a Bash script with complete command line parameters (both the key and the value) stored in variables? I use the following scanimage call in a script:

scanimage -p --mode "True Gray" --resolution 150 -l 0 -t 0 -x 210 -y 297 --format=png -o scan.png

I want to store some of the parameters in variables. I tried this for the --mode switch:

options="--mode \"True Gray\""
scanimage -p $options --resolution 150 -l 0 -t 0 -x 210 -y 297 --format=png -o scan.png

but this doesn't work; scanimage says:

scanimage: setting of option --mode failed (Invalid argument)

Storing only the value for the --mode switch does work:

mode="True Gray"
scanimage -p --mode "$mode" --resolution 150 -l 0 -t 0 -x 210 -y 297 --format=png -o scan.png

but I'd like the switches themselves to be variable, and I'd also like to customize multiple switches without knowing in advance which will be set. So is it possible to store not only the values for command line options in a variable, but also the option switch(es) along with the value(s)?
You can do this if you use an array instead of a string. Try this:

options=( '--mode' "True Gray" )
scanimage -p "${options[@]}" --resolution 150 -l 0 -t 0 -x 210 -y 297 --format=png -o scan.png
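The array also makes it easy to add switches conditionally. A sketch using printf in place of scanimage so it can be run anywhere (the use_png flag is made up):

```shell
#!/usr/bin/env bash
# Build the option list piece by piece, then expand it with "${options[@]}",
# which keeps "True Gray" as a single argument.
options=()
options+=( '--mode' 'True Gray' )
use_png=yes
if [ "$use_png" = yes ]; then
  options+=( '--format=png' )
fi

# printf stands in for scanimage here, printing one argument per line:
printf '%s\n' "${options[@]}"
# prints:
#   --mode
#   True Gray
#   --format=png
```

Each array element becomes exactly one argument to the command, which is what the quoted-string attempt could not achieve.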
Invoke a program in a Bash script with command line parameters stored in a variable
The Debian guide for compiling a kernel says: "Do not forget to select “Kernel module loader” in “Loadable module support” (it is not selected by default). If not included, your Debian installation will experience problems." However, I have downloaded the 3.12.22 kernel, run make xconfig and searched for the “Kernel module loader” option without finding it. Has such an option been discontinued, been included by default, or is it not needed anymore? Thank you.
Parts of this guide are seriously out of date. “Kernel module loader” is the name of the option that enables kmod, the kernel component that calls modprobe to load modules with a symbolic name based on hardware identification. You can see these symbolic names in /lib/modules/VERSION/modules.alias; they're automatically extracted from the kernel sources. For example the line

alias pci:v00001002d00005147sv*sd*bc*sc*i* radeonfb

means that when the kernel requests a module whose name is of the form pci:v00001002d00005147sv*sd*bc*sc*i*, then modprobe will look for a file called radeonfb.ko. The symbolic name corresponds to a particular PCI identifier which is sent by the PCI peripheral (in this case, a video card). The thing is, “Kernel module loader” is the name of the option in kernel 2.4.x. In 2.6, the option was renamed “Automatic kernel module loading” (for the internal name CONFIG_KMOD). In version 2.6.27, the kmod feature became a compulsory part of module support, and the option was removed soon after since it was ignored.
Debian + Linux kernel 3.12.22: “Kernel module loader” option is not available
Wanted to check my understanding: while revisiting the topic of using dd over netcat, I experimented with compressing the data with bzip2. In the man page, there's -c (compress or decompress to standard output) and there's -z (complement to -d: forces compression, regardless of the invocation name). Is -c simply a way to force the output to standard output, and does invoking the program as bzip2 imply you want to compress data if you don't use -d?
From the man page:

-d --decompress
    Force decompression. bzip2, bunzip2 and bzcat are really the same program, and the decision about what actions to take is done on the basis of which name is used. This flag overrides that mechanism, and forces bzip2 to decompress.

As this says, bzip2, bunzip2 and bzcat are really the same binary (oddly, hardlinked binaries rather than symlinks to a single bzip2 binary on my system). When the program is run, it will check the name it was executed under and act appropriately. bzip2 will compress by default, but -d will make it decompress. bunzip2 will decompress by default, but -z will make it compress. bzcat will decompress to stdout by default, while the other invocations require the -c option to output to stdout rather than to a file.

"Is -c simply a way to force the output to standard output, and using bzip2 at invocation implying you want to compress data if you don't use -d?"

So to answer simply - yes.
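A quick round-trip illustrating those equivalences (guarded with a check, since bzip2 may not be installed everywhere):

```shell
#!/bin/sh
# When reading from a pipe, bzip2 already acts as a filter, so both of these
# compress stdin and decompress it back to stdout.
command -v bzip2 >/dev/null || { echo "bzip2 not installed"; exit 0; }

echo 'hello' | bzip2 -z | bzip2 -dc    # -z forces compression, -dc decompresses to stdout
echo 'hello' | bzip2 -c | bunzip2 -c   # same effect under the other invocation name
```

Both pipelines print `hello` again, showing that -z/-d select the direction while -c only selects stdout as the destination.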
bzip2 -c versus -z
I found some code in one of the answers here and adapted it to my needs, but now I have two questions. Q1: How can I display the options text again after the code in the case statement finishes, so the user can see the options again? Q2: Can I make it so that every option is displayed on its own line when I run the script? Currently it is not.

options=(
  "quit/exit"
  "new rational db"
  "run php for rational codebase"
  "run php for playground codebase"
)
select option in "${options[@]}"; do
  case "$REPLY" in
    1) break;;
    2) sudo -i -u db2inst1 bash -c "db2stop force;";;
    3) rm /tmp/createDb2*;;
    4) ;;
  esac
done
Q1: add spaces to the end of any of the options to make it longer than 40 characters, for example:

options=(
  "quit/exit"
  "new rational db"
  "run php for rational codebase"
  "run php for playground codebase           "
)

Q2: Not sure if there is a more elegant way (didn't see one in help select), but this should work:

finished=
while test ! "$finished"; do
  select option in "${options[@]}"; do
    case "$REPLY" in
      1) finished=1;;
      2) sudo -i -u db2inst1 bash -c "db2stop force;";;
      3) rm /tmp/createDb2*;;
      4) ;;
    esac
    break
  done
done
How does "options" in shell scripting work?
Short version: I am searching for a way to get the behaviour of the -f flag in rm when using rmdir. Long version: I am running a parallel process where every command must clean up its working directory after completion. Commands may operate in the same working directory, so rmdir -p --ignore-fail-on-non-empty works perfectly to prevent a worker removing a directory that is still in use by another worker. The only problem seems to arise when the last two workers based in the same directory finish simultaneously-- then one of the workers is beaten to the punch and rmdir returns the error "No such file or directory". Is there a way to make rmdir ignore this non-issue, as rm does with -f? (rm does have the -d option which may be a different solution, but I can't see any way to get the -p --ignore-fail-on-non-empty type behaviour with rm. In any case, -d is not universal to all versions of rm so it is better to avoid that approach.)
You could check if the directory exists first:

[ -d "$tmpdir" ] && rmdir -p --ignore-fail-on-non-empty "$tmpdir"

This could still produce errors if two jobs start to remove the directory at exactly the same time (a TOCTOU race: both test and see the directory, then both try to remove it), but that's probably not too likely. Or you could just ignore every error rmdir might produce:

rmdir -p --ignore-fail-on-non-empty "$tmpdir" 2>/dev/null || true

Or, create a program of your own to remove the directory but ignore only the "No such file or directory" (ENOENT) error. E.g. with Perl:

perl -le 'if (not rmdir($ARGV[0]) and not $!{ENOENT}) { warn "rmdir: $!\n"; exit 1; }' "$tmpdir"
-f option for rmdir
In Linux Ubuntu, about the tar command for these invocations:

tar -tzf /path/to/filename.tar.gz # Show the content
tar -xzf /path/to/filename.tar.gz # Extract the content

Observe that both commands use the z option, and they work as expected. About the z option, man tar indicates:

-z, --gzip, --gunzip, --ungzip
    Filter the archive through gzip(1).

Question: Why does the tar command use the gzip command through the z option?

Extra question: About the "Filter the archive through gzip(1)" part: why is a "filter" needed in the two commands shown above? What is the meaning or context of "filter"?
Archiving and compression are two separate things. Most archiving programs on Windows (e.g. zip, 7z, rar, and many more) combine the two into one program that does both archiving and compression - so people who are used to using Windows tend to think of them as being just one inseparable thing. While many of these programs exist on unix/linux, largely for compatibility with non-unix systems, it is far more common for the compressing and archiving functionality to be done by separate programs. Unlike MS-DOS/Windows archivers, unix-native programs understand and make use of unix file metadata like ownership and permissions, and some even handle ACLs correctly.

tar is an archiving program. It allows one or more files to be stored in a .tar archive. This archive is not compressed. It was originally used for writing a stream containing multiple files and associated metadata (filenames, ownership, perms, etc) to tape. Or to a file, as any stream of bytes can be redirected to a file or piped to another program. tar is not the only archiving program around; there are many others, including cpio, ar, afio, pax, and more.

gzip is a compression program. It can compress any single file to a compressed version of itself. Or it can compress data from stdin and output it to stdout (i.e. it can work as a "filter"). Again, gzip is not the only compression/decompression program around; it is one of many.

tar can use gzip to compress a .tar archive before it is written to disk, and to decompress a compressed archive before reading from it. Depending on what version of tar you have, it may be able to use other compression programs instead of, or as well as, gzip. For example, GNU tar has the following compression-related options:

Compression options

-a, --auto-compress
    Use archive suffix to determine the compression program.
-I, --use-compress-program=COMMAND
    Filter data through COMMAND. It must accept the -d option, for decompression. The argument can contain command line options.
-j, --bzip2
    Filter the archive through bzip2(1).
-J, --xz
    Filter the archive through xz(1).
--lzip
    Filter the archive through lzip(1).
--lzma
    Filter the archive through lzma(1).
--lzop
    Filter the archive through lzop(1).
--no-auto-compress
    Do not use archive suffix to determine the compression program.
-z, --gzip, --gunzip, --ungzip
    Filter the archive through gzip(1).
-Z, --compress, --uncompress
    Filter the archive through compress(1).
--zstd
    Filter the archive through zstd(1).

And, worth noting, the other archiving programs can also be used with compression programs - either through command-line options like -z or -Z, etc.; or by piping the output of the archiver into a compression program before redirecting the compressor's output to a file (or, conversely, piping the output of a decompressing program into an archiver to list or extract its contents).

You can "mix-and-match" the archiving and compression programs as needed, allowing you to take advantage of improvements in archiving and/or compression technology. Most archivers, including GNU tar, support this via pipes, but GNU tar also has several built-in options for some well-known programs AND a convenient -I option for using other compression programs that don't have their own built-in option - perhaps implementing a new compression algorithm or a new implementation of an existing algorithm. For example, programs like pigz, pixz, pbzip2 etc. (instead of gzip, xz, bzip2, etc.), which are parallelised versions of those compression programs and can take advantage of multi-core/multi-thread CPUs to greatly reduce the time needed to compress or decompress the data.

A "filter" is a generic term for a program used in a pipeline to process (and possibly modify in some way) the output of one program before either redirecting it to a file or piping it to the next program in the pipeline.
Some programs (like tar with -z etc) can set up the filtering pipeline themselves, without requiring the user to do it in the shell (e.g. tar xfz filename.tar.gz is basically the same as gzip -dc filename.tar.gz | tar xf -, and tar cfz filename.tar.gz ... is essentially the same as tar cf - ... | gzip > filename.tar.gz). Many unix programs are written so that they can be used as filters in a pipeline - e.g. gzip can compress either an existing file or its input stream (stdin), sending the output to stdout... and a simple program like cat can just pass its stdin directly to stdout, or optionally number the lines (with -n), or make end-of-line, control, and other codes visible (with options like -v, -E, -A, -t). BTW, because pipelines are so useful, it's very common for people to write their own scripts (in awk or perl or whatever) so that they are capable of taking their input from stdin and writing to stdout - i.e. it's common for people to write their own filters.
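A concrete check of that equivalence (the directory and archive names are made up):

```shell
#!/bin/sh
# Create the same archive two ways: tar's built-in -z, and an explicit gzip pipeline.
set -e
dir=$(mktemp -d)
cd "$dir"
mkdir data
echo 'hello' > data/file.txt

tar czf builtin.tar.gz data            # tar drives gzip itself
tar cf - data | gzip > piped.tar.gz    # explicit filter pipeline

# Both archives decompress to the same tar listing:
tar tzf builtin.tar.gz
gzip -dc piped.tar.gz | tar tf -
```

Both listings show `data/` and `data/file.txt`; the bytes of the two .tar.gz files may differ slightly (gzip headers), but their contents are identical.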
Why tar command uses gzip command through 'z' option?
Let's say I want to search a file for a string that begins with a dash, say "-something": grep "-something" filename.txt This throws an error, however, because grep and other executables, as well as built-ins, all want to treat this as a command-line switch that they don't recognize. Is there a way to prevent this from happening?
For grep, use -e to mark the regex pattern:

grep -e "-something" filename.txt

More generally, use --: in many utilities, including GNU grep, it marks the "end of options", so everything after it is treated as an operand even if it starts with -.
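For example, with a throwaway file containing a line that starts with a dash:

```shell
#!/bin/sh
# Search for a pattern beginning with "-" by marking it with -e.
f=$(mktemp)
printf '%s\n' 'plain line' '-something here' > "$f"

grep -e '-something' "$f"
# prints: -something here
```

Without -e (or an equivalent end-of-options marker), grep would try to parse -something as a bundle of option letters and fail.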
Stop executables and built-ins from interpreting a string argument starting with - as a switch? [duplicate]
How can I have some of the options to a unix command come from a file? That is, the file does not contain all the options; other options are specified elsewhere. For example, I have the file lsoptions.txt with the following content:

-F -G

Now I would like to execute ls -a <and all options specified in the file lsoptions.txt>, i.e., I want to execute ls -a -F -G. That is, specify some of the options, but read other options from a file. Note: this is obviously a cooked-up example. My real use case is that my shell script needs some parameters that were provided in a TeX file. So, instead of having the user duplicate the information in multiple places, I will have the TeX write the desired options into a file so that the shell script can also have access to them. One option would be to have the TeX generate the entire script, but it would be much easier if I could just read the options from a file.
For the trivial example listed above:

ls -a $(cat lsoptions.txt)
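If the file might contain options whose values include spaces, reading it into an array (one option per line) is safer than relying on unquoted word splitting. A bash sketch with made-up file contents:

```shell
#!/usr/bin/env bash
# Read one option per line into an array, then expand it with quoting intact.
optfile=$(mktemp)
printf '%s\n' '-F' '-G' > "$optfile"   # stand-in for lsoptions.txt

mapfile -t opts < "$optfile"
ls -a "${opts[@]}"
```

With `"${opts[@]}"` each line of the file becomes exactly one argument, whereas `$(cat ...)` splits on all whitespace and glob-expands the result.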
Options to a command specified in a file
I've got this far: wc --files0-from=FILE lets one get the word count of a list of files. Each entry in this list must be terminated with an ASCII NUL character. Question: is there a way to set a terminating character such as NUL by hand, or something else? I found this (quoted from the info wc output): "... produce a list of ASCII NUL terminated file names is with GNU find, using its -print0 predicate." I also found somebody saying that each file name is terminated by an ASCII NUL character. Is it right that this fits the output of, say, ls? I know one shouldn't parse ls, but writing the output into a file to be read by wc another time would be nice.
ls ends each filename with a newline (\n) and not a NUL (\0) (if its standard output is not a terminal). A way to list the files in the current directory using NUL as a separator is:

find . -maxdepth 1 -print0

This will match the files starting with a period too. To ignore them, use:

find . -maxdepth 1 \! -name '.*' -print0

Other ways could be:

ls | tr '\n' '\0'

or:

printf '%s\0' *

As noted by @ChrisDown in his comment, only the find and printf options will do the job correctly if you have file names containing \n in the current directory. If this is not your case (in fact, I wonder whether there really are people using newlines in file names out there), the three are equivalent.
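Feeding find's NUL-terminated list straight into wc works too, since GNU wc accepts - to mean "read the file list from stdin":

```shell
#!/bin/sh
# Count lines/words/bytes of every regular file in the current directory,
# NUL-safely, without an intermediate file.
find . -maxdepth 1 -type f -print0 | wc --files0-from=-
```

This avoids parsing ls entirely, and file names containing newlines are handled correctly.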
wc - setting a terminating character
I'd like to write a script that reads a file and passes every line as options (or "option arguments") to a command, like this:

command -o "1st line" -o "2nd line" ... -o "last line" args

What's the simplest way of doing this?
Here is one possibility:

$ cat tmp
1st line
2nd line
3rd line
4th line
$ command $(sed 's|.*|-o "&"|' tmp | tr '\n' ' ')

As glennjackman points out in the comments, word splitting can be circumvented by wrapping in eval, though the security implications of doing so should be appreciated:

$ eval "command $(sed 's|.*|-o "&"|' tmp | tr '\n' ' ')"

Edit: Combining my suggestion of using sed to assemble arguments with glenn jackman's mapfile/readarray approach gives the following concise form:

$ mapfile -t args < <(sed 's|.*|-o\n&|' tmp) && command "${args[@]}"

As a trivial demonstration, consider the aforementioned tmp file, the command grep, and the file text:

$ cat text
some text 1st line and
a 2nd nonmatching line
some more text 3rd line end
$ mapfile -t args < <(sed 's|.*|-e\n&|' tmp) && grep "${args[@]}" text
some text 1st line and
some more text 3rd line end
$ printf "%s\n" "${args[@]}"
-e
1st line
-e
2nd line
-e
3rd line
-e
4th line
How to pass every line of a file as options to a command?
What is the -alhF flag in ls? I can't find it in the man page.
From man ls:

-a, --all
    do not ignore entries starting with .
-F, --classify
    append indicator (one of */=>@|) to entries
-h, --human-readable
    with -l, print sizes in human readable format (e.g., 1K 234M 2G)
-l
    use a long listing format

The command ls -alhF is equivalent to ls -a -l -h -F. The ability to combine command line arguments like this is defined by POSIX. Options that do not require arguments can be grouped after a hyphen, so, for example, -lst is equivalent to -t -l -s.
What's the -alhF flag in ls?
What is the difference between quotes wrapped around only the option value, e.g.:

grep --file="grep pattern file.txt" *

versus quotes wrapped around both the option name and the option value, e.g.:

grep "--file=grep pattern file.txt" *

? They produce the same result.
Quotes and backslash in shells are used to remove the specialness of some characters so they are treated as ordinary characters. Double quotes are special in that they still allow expansions to take place within; in other words, within them $, \, and ` are still special. They also affect how those expansions are performed. In that line, the only characters that are special to the shell are:

- space, which in the syntax of the shell (like in many languages) is used to delimit words, and specifically for that line, arguments in simple commands (which is one of several, and the main, construct that the shell knows about)
- the newline character at the end, which is used (among other things) to delimit commands
- *, which is a glob pattern operator; the presence of such a character, when not quoted in a simple command line, triggers a mechanism called filename generation or globbing or pathname expansion (the POSIX wording)

The other characters have no special significance in the shell syntax. Here, what we want is for the shell to execute the /usr/bin/grep file with these arguments:

grep --file=grep pattern file.txt

followed by the list of files in the current working directory. So we do want:

- space to be treated as a word delimiter between those arguments
- newline to delimit that command
- * to be treated as a glob operator and be expanded to the non-hidden files in the current working directory

So those characters above must not be quoted. However, there are two of those spaces that we want to be included in the second argument passed to grep, so those must be quoted. So at the very least, we need:

grep --file=grep' 'pattern' 'file.txt *

Or:

grep --file=grep" "pattern\ file.txt *

(to show different quoting operators). That is where we only quote the 2 characters that are special to the shell and that we don't want treated as such.
But we could also do:

'grep' '--file=grep pattern file.txt' *

And quote all the characters except those we want the shell to treat specially. Quoting those non-special characters makes no difference to the shell¹.

'g'r"ep" \-\-"file="'grep p'atte\r\n' file.txt' *

Here alternating different forms of quotes works just the same. Given that command and option² names rarely contain shell special characters, it is customary not to quote them. You rarely see people writing 'ls' '-l' instead of ls -l, so grep --file="the value" is a common sight even though it makes no difference compared to grep "--file=the value" or grep --file=the" value". See How to use a special character as a normal one in Unix shells? for more details as to which characters are special and ways to quote them in various shells.

Now that still leaves a few problems with that command:

- if the first³ filename expanded by * starts with -, it will be treated as an option
- * expands to all files regardless of their type. That includes directories, symlinks, fifos, devices. Chances are you only want to look in files of type regular (or maybe symlinks to regular files)
- --file is a GNUism. The standard equivalent is -f.
- If * expands to only one file, grep will not include the file name along with the matching lines.
- If * doesn't match any file, in a few shells including bash (by default), a literal * argument will be passed to grep (and it will likely complain that it can't open a file by that name).

So, here, you'd likely want to use the zsh shell for instance, and write:

grep -f 'grep pattern file.txt' -- /dev/null *(-.)

Where:

- -f is used in place of --file.
- -- marks the end of options so that no other argument after it is treated as one even if it starts with -.
- we add /dev/null so grep is passed at least 2 files, guaranteeing that it will always print the file name.
- We use *(-.) so grep only looks in regular files. If that doesn't match any, zsh will abort with a no match error and not run grep.
Since we're passing /dev/null to grep, we could also add the N glob qualifier (*(N-.)), which would cause the glob to expand to nothing when there's no match instead of reporting an error, and grep would only look inside /dev/null (and silently fail).

¹ Beware that quoting keywords in the shell syntax such as while, do, if, time, even in part, does have an influence, as it stops the shell from recognising them as such; similarly, 'v'ar=value would stop the shell from considering it a variable assignment, as 'v'ar is not a valid variable name (and quote handling is performed after parsing the syntax). And your foo alias won't be expanded if you write it \foo or f'oo', unless you also have aliases for those quoted forms (which few shells let you do).

² With the notable exception of -? sometimes found on utilities inspired by Microsoft ones.

³ In the case of GNU grep, that also applies to further arguments, not just the first, as GNU grep (and nowadays a few other GNU-like implementations) accepts options even after non-option arguments.
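A quick way to see that these quoting styles all deliver the identical argument is to let printf print each argument it receives (the <...> markers are only for visibility):

```shell
# Each printf call receives exactly two arguments: the format string and
# one word; the three quoting styles below build the very same word.
printf '<%s>\n' --file=grep' 'pattern' 'file.txt
printf '<%s>\n' --file="grep pattern file.txt"
printf '<%s>\n' '--file=grep pattern file.txt'
# all three print: <--file=grep pattern file.txt>
```

By the time grep (or printf) runs, the quotes are long gone; the program only ever sees the resulting argument list.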
What is the difference between quotes wrap around only the option value vs quotes wrap around the option name and option value?
1,455,096,620,000
About the ps command: consider, for simplicity, that in tty3 the yes command is executed, and in tty4 the yes > /dev/null command. Through ps I need to show in a report the complete command with options, pipes and redirection — in this case redirection. I tried ps aux and ps -ef, and it does not appear as expected; in both cases only yes appears:

... COMMAND
    yes
    yes

I need to see yes and yes > /dev/null:

... COMMAND
    yes
    yes > /dev/null

so currently the > /dev/null part is not included. How do I accomplish this goal? A command with a pipe would be mvn clean ... | tee ..., and a command with option(s) would be tar -xzf /path/to/filename.tar.gz:

... COMMAND
    mvn clean ... | tee ...
    tar -xzf /path/to/filename.tar.gz

or all together: options, pipes and redirection.
You can't. At least, not without deconstructing output from ps, lsof, and a little bit of guesswork. You can use ps -ef or maybe ps -wwef to get the command with its options, but redirections and pipes are not part of a command's argument list, so they will not be shown.
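On Linux, the guesswork mentioned above usually means inspecting the process's open file descriptors, since a redirection survives only as an open fd, not as text in argv. A minimal sketch (Linux-specific, relies on /proc):

```shell
# Start "yes > /dev/null" in the background, then ask the kernel where
# its stdout (fd 1) actually points.
yes > /dev/null &
pid=$!
readlink "/proc/$pid/fd/1"   # prints: /dev/null
kill "$pid"
```

The pipe case is similar: the fd shows up as something like pipe:[12345], and matching the inode on both ends (e.g. with lsof) lets you reconstruct who is piped into whom.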
ps command: how show the complete command with options, pipes and redirection? (or all together)
1,455,096,620,000
About the tar command.

Introduction

Suppose for example:

source
  numbers
    001.txt  # with the 111 content
    002.txt  # with the 222 content
    003.txt  # with the 333 content

If the numbers.tar.gz file is created through the tar -czf numbers.tar.gz numbers command, we now have:

source
  numbers.tar.gz
  numbers
    001.txt  # with the 111 content
    002.txt  # with the 222 content
    003.txt  # with the 333 content

If the mv numbers.tar.gz target command is executed, we have:

target
  numbers.tar.gz

If the tar -xzf numbers.tar.gz command is executed, we have:

target
  numbers.tar.gz
  numbers
    001.txt  # with the 111 content
    002.txt  # with the 222 content
    003.txt  # with the 333 content

So as a general overview we have:

source
  numbers.tar.gz
  numbers
    001.txt  # with the 111 content
    002.txt  # with the 222 content
    003.txt  # with the 333 content
target
  numbers.tar.gz
  numbers
    001.txt  # with the 111 content
    002.txt  # with the 222 content
    003.txt  # with the 333 content

Overriding control

Let's assume the following simple update:

target
  numbers.tar.gz
  numbers
    001.txt  # with the 111 content
    002.txt  # with the 222222 content  <--- updated
    003.txt  # with the 333 content

If the tar -xzf numbers.tar.gz command is executed in the target directory, the 002.txt file is overridden, so its 222222 content returns to 222. Until here I am OK.
To keep the new data safe, or to avoid an undesired override, the --keep-old-files and --skip-old-files options can be used. According to the tar(1) Linux man page:

-k, --keep-old-files
       don't replace existing files when extracting, treat them as errors
--skip-old-files
       don't replace existing files when extracting, silently skip over them

Therefore, for the execution of the two following commands:

tar --keep-old-files -xzf numbers.tar.gz
tar --skip-old-files -xzf numbers.tar.gz

the following happens:

- the former always shows the tar: numbers/002.txt: Cannot open: File exists error message, and the data is kept safe (remains 222222)
- the latter shows nothing — unless the v option is used, in which case it shows the tar: numbers/002.txt: skipping existing file message — and the data is kept safe (remains 222222). Useful for script purposes.

Until here, all is fine. After doing some research I found the --keep-newer-files option, and again according to the tar(1) Linux man page:

--keep-newer-files
       don't replace existing files that are newer than their archive copies

Therefore, for the execution of the following command:

tar --keep-newer-files -xzf numbers.tar.gz

the following happens: the tar: Current ‘numbers/002.txt’ is newer or same age message appears, and the data is kept safe (remains 222222).

Practically, this --keep-newer-files option does the same as --skip-old-files for avoiding overrides, just showing a different message.

Question

When is it mandatory to use the --keep-newer-files option for the tar command over the --keep-old-files and --skip-old-files options? I want to know the specific scenario where this option is mandatory.
--keep-newer-files is useful if you want to keep changes made on the target after the source files were last modified, and replace anything older on the target with newer versions from the source.

To illustrate the difference between the two options, you need another piece of information, the timestamp of each file. Consider the following:

source
  numbers
    001.txt  # timestamp t1, contents 1
    002.txt  # timestamp t1, contents 222
    003.txt  # timestamp t1, contents 333

Those files are copied to target preserving their timestamp.

source
  numbers
    001.txt  # timestamp t1, contents 1
    002.txt  # timestamp t1, contents 222
    003.txt  # timestamp t1, contents 333
target
  numbers
    001.txt  # timestamp t1, contents 1
    002.txt  # timestamp t1, contents 222
    003.txt  # timestamp t1, contents 333

Now you fix 001.txt on the source (observe the t2 part):

source
  numbers
    001.txt  # timestamp t2, contents 111
    002.txt  # timestamp t1, contents 222
    003.txt  # timestamp t1, contents 333
target
  numbers
    001.txt  # timestamp t1, contents 1
    002.txt  # timestamp t1, contents 222
    003.txt  # timestamp t1, contents 333

Someone also edits 002.txt on the target (observe the t3 part):

source
  numbers
    001.txt  # timestamp t2, contents 111
    002.txt  # timestamp t1, contents 222
    003.txt  # timestamp t1, contents 333
target
  numbers
    001.txt  # timestamp t1, contents 1
    002.txt  # timestamp t3, contents 22222
    003.txt  # timestamp t1, contents 333

You create a new archive, and extract it on the target:

- without any option, all files are extracted, so the target ends up with the same contents in 001.txt and 002.txt as on the source, losing the changes made to 002.txt on the target;
- with --keep-old-files, 001.txt and 002.txt aren't extracted, and the target is left with its outdated version of 001.txt;
- with --keep-newer-files, 001.txt is extracted and overwrites the existing target file, because the existing file is older than the file in the archive, but 002.txt is not extracted, because the existing file is newer than the file in the archive.
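The behaviour described above can be reproduced in a throwaway directory; this sketch backdates the source copy so the target copy counts as newer (GNU tar and GNU touch assumed):

```shell
set -e
dir=$(mktemp -d)
mkdir "$dir/src" "$dir/tgt"
echo 111 > "$dir/src/001.txt"
touch -d '2020-01-01' "$dir/src/001.txt"       # archive copy gets an old timestamp
tar -cf "$dir/nums.tar" -C "$dir/src" 001.txt
echo 22222 > "$dir/tgt/001.txt"                # target copy is newer than the archive copy
tar --keep-newer-files -xf "$dir/nums.tar" -C "$dir/tgt" 2>/dev/null || true
cat "$dir/tgt/001.txt"                         # still prints: 22222
```

Swapping the timestamps (making the archive copy the newer one) would let the extraction overwrite the target file, which is exactly the "replace older, keep newer" selectivity that neither --keep-old-files nor --skip-old-files offers.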
When is it mandatory to use the "--keep-newer-files" option for the tar command?
1,657,201,961,000
I have a simple script to search for patterns in my code sources, named prgrep:

#!/usr/bin/bash
grep -irnI --exclude-dir={.git,obj} --exclude=tags --color=auto "$@"

(The reason it is a script and not an alias or function is that I want to be able to call it from inside vim and with any shell.) Note that the search is case insensitive (since I consider this a good default for searching) and the script accepts any flags that grep accepts. I would like grep to have a flag --no-ignore-case so that the caller of the script could override the -i flag of the script, but GNU grep does not provide this. Do you have any simple idea to provide such functionality? Currently I have a separate script named Prgrep which performs case-sensitive searches.

EDIT

Recent versions of GNU grep do provide a --no-ignore-case option, which is exactly what I need. I'm using GNU grep 3.1, which still doesn't have this option.
New versions of grep have the option --no-ignore-case which overrides -i:

--no-ignore-case
       Do not ignore case distinctions in patterns and input data. This is the default. This option is useful for passing to shell scripts that already use -i, to cancel its effects because the two options override each other.

For older versions of grep, you could simply add this as an option to your script:

#!/usr/bin/bash
if [ "$1" = "--no-ignore-case" ]; then
    shift
    grep -rnI --exclude-dir={.git,obj} --exclude=tags --color=auto "$@"
else
    grep -irnI --exclude-dir={.git,obj} --exclude=tags --color=auto "$@"
fi

Note: --no-ignore-case will need to be the first argument when you call your script.
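The same dispatch can be tried out as a shell function before committing it to the script; prgrep here is just an illustrative name and the --exclude options are dropped for brevity:

```shell
# Wrapper: case-insensitive by default, case-sensitive when the caller
# passes --no-ignore-case as the first argument.
prgrep() {
  if [ "$1" = "--no-ignore-case" ]; then
    shift
    grep -rnI "$@"
  else
    grep -irnI "$@"
  fi
}

f=$(mktemp)
printf 'Foo\nfoo\n' > "$f"
prgrep foo "$f"                    # matches both lines
prgrep --no-ignore-case foo "$f"   # matches only the lowercase line
rm -f "$f"
```

Because the wrapper only inspects "$1", the flag really must come first, exactly as the note above says.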
grep flag to NOT ignore case
1,657,201,961,000
I have been using the following command to install the Expo CLI package: sudo npm install expo-cli --global The command above works successfully to install that package. However, I'm wondering if moving the --global before the package name would work equally to the command above. So, doing this instead: sudo npm install --global expo-cli Environment: Ubuntu 18.04. I looked online for a reference but did not find one (even though there must be one out there somewhere).
The man page for npm(1) shows: Synopsis npm <command> [args] It doesn't say much else, so all we can deduce is that install is the <command> and must come before the [args]. The [args] are expo-cli and --global. Let's inspect the install command to see if we can get more details. npm-install(1) says: Synopsis ... npm install [<@scope>/]<name> ... aliases: npm i, npm add common options: [-P|--save-prod|-D|--save-dev|-O|--save-optional] [-E|--save-exact] [-B|--save-bundle] [--no-save] [--dry-run] It doesn't say anything about order. This starts to make us think order doesn't matter. If we scroll down we see things like: The --tag argument will apply to all of the specified install targets. The -g or --global argument will cause npm to install the package globally rather than locally. See npm help folders. Ok... so order is never mentioned in the man page, but we see that --tag applies to all targets. They felt that it was important to mention in the man page because if someone tries to install several packages and specify a tag, they might assume that the --tag flag applies only to the package before or after. That's not the case, options apply to everything. If options apply to everything, then order is probably not important. Note that all of the examples they give in the man page put the package before the flag. You could try it out: npm install sax --global expo-cli Check if they are both installed globally (I bet they are). If you want to install several packages, some local, some global, then I'd suggest taking the safe approach and using two separate commands because it isn't defined in the documentation and therefore behavior could change.
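npm itself is awkward to test safely, but the underlying convention is easy to observe with any GNU-style parser, which permutes options and operands (a strictly POSIX parser would instead stop at the first operand). GNU grep illustrates the permuting behaviour:

```shell
# Both invocations count one matching line; GNU grep moves the trailing
# -c into the option set before parsing.
f=$(mktemp)
echo match > "$f"
grep -c match "$f"    # option before the operand, prints: 1
grep match "$f" -c    # option after the operand, same result: 1
rm -f "$f"
```

This is why "take the safe approach" is still good advice: permutation is a GNU extension, and tools (or npm versions) that follow stricter parsing rules may treat a trailing flag as an operand.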
does it matter where flag appears in a command?
1,657,201,961,000
I recently had to use this command as I got some error due to nvidia package diversion, but I don't exactly know how it is working: LC_MESSAGES=C dpkg-divert --list '*nvidia-340*' | sed -nre 's/^diversion of (.*) to .*/\1/p' | xargs -rd'\n' -n1 -- sudo dpkg-divert --remove sudo apt --fix-broken install I read about LC_MESSAGES and sed, and I know how the pipe operator works, but I can't figure out how exactly this command is working with these specific options.
LC_MESSAGES=C dpkg-divert --list '*nvidia-340*' lists all the diversions matching the glob pattern *nvidia-340*, in English so that the output is of the form “diversion of ... to ... by ...”. sed -nre 's/^diversion of (.*) to .*/\1/p' extracts the text between “diversion of” and “to”, i.e. the name of the diverted files. -nre is equivalent to -n -r -e; -n disables automatic pattern space output, so nothing is output unless requested by a p command (see the end of the sed command); -r enables extended regular expressions; and -e introduces the script we want to run. In the regular expression, ^diversion of matches “diversion of ” (including a space) at the start of a line; (.*) matches any number of characters, and creates a match group; to .* matches “ to ” (including leading and trailing spaces) followed by any character. This is used in a s command to replace the complete text with only \1, the contents of the match group (i.e. the text between “diversion of” and “to”). The final p prints the pattern space if the s command matched. xargs -rd'\n' -n1 -- sudo dpkg-divert --remove runs sudo dpkg-divert --remove on every file output by the previous step, removing the corresponding diversion. sudo apt --fix-broken install tries to fix any broken dependencies.
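The sed stage can be exercised on its own with a fabricated dpkg-divert line (the paths below are made up for illustration):

```shell
# Extract the diverted file name from a typical
# "diversion of A to B by C" line.
printf '%s\n' 'diversion of /usr/lib/libGL.so to /usr/lib/libGL.so.distrib by nvidia-340' |
  sed -nre 's/^diversion of (.*) to .*/\1/p'
# prints: /usr/lib/libGL.so
```

Only the text captured between "diversion of " and " to " survives, which is exactly what xargs then feeds to dpkg-divert --remove, one file name per line.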
What do these options on dpkg-divert and sed do as it relates to Nvidia package diversion?
1,657,201,961,000
I've summarized a list of commands that accepts symbolic link options according to SUSv4-2018ed: cd chgrp chown chmod cp find ln ls pax rm The full list also includes their defaults and other related options supported (such as -h and -d), and I stored it on my HDD for reference. I've previously seen (GNU documents if I was correct) referring to -P -L options as "physical" and "logical" respectively, and I think that's probably where the option letters come from, but the latest docs as of Nov 2019 refer to them as "--no-dereference" and "--dereference" now. My question is: where do -P -L -H come from? Is it SUS, XPG, POSIX, SVID, or vendor documentation? And what do they initially stand for?
P and L indeed refer to the physical symbolic link itself and the logical file the symbolic link refers to. If one goes to section A.3, subsection "symbolic link", of the Rationale volume of the 2018 edition of the Single Unix Specification, all of -P, -L and -H are mentioned, and it says -H stands for "half logical". Thanks go to Don Cragon (from the Austin Group mailing list) for the pointer.
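The physical/logical distinction the letters encode is easy to see with ls, whose default is the physical view while -L dereferences:

```shell
dir=$(mktemp -d)
cd "$dir"
echo data > real.txt
ln -s real.txt link.txt
ls -l link.txt    # physical: the symlink itself, shown as "link.txt -> real.txt"
ls -lL link.txt   # logical: the attributes of real.txt, no "->" arrow
```

For the commands that accept it, -H is the in-between case: dereference only the symlinks named on the command line, but treat any symlinks encountered during traversal physically.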
What do letters for symlink options (-P -L -H) stand for?
1,657,201,961,000
I usually combine options together whenever more than one option is to be used with some command. For example, if I want to create an archive using tar I will write tar -cvf archive.tar file1 file2, but my doubt is how to know the correct order in which to combine the options. If I use tar -cfv archive.tar file1 file2 it shows an error. I have faced this issue with many other commands as well. I know it is a very silly doubt, but I was having a really hard time getting through it. I have checked the man descriptions of the commands too, but there they have specified a particular sequence under the synopsis section. I was not able to find anything related to combining the options in a particular sequence.
The manual for any given command will describe exactly how to use its options. In this case, the -f option takes a filename argument. An option's argument (if it takes one) must be placed just after it. In your first tar command, this filename argument is archive.tar, but in your second it is v. The second command tries to create an archive called v from three files: archive.tar, file1, and file2. Since archive.tar probably does not exist, you would get an error message about this. Again, the tar manual describes this. The GNU tar manual says

tar -c [-f ARCHIVE] [OPTIONS] [FILE...]

so it's clear that -f takes the name of an archive. A bit further down, it says

-f, --file=ARCHIVE
       Use archive file or device ARCHIVE. [...]

The other options that you use, -c and -v, don't take arguments. Also, in general, options come before file operands. Some GNU tools allow you to add options to the very end of the command line, as in

tar -c -f archive.tar file1 file2 -v

but this is (IMHO) bad style, and it would break on many other Unices (-v would be interpreted as a file name). The 100% correct way to write your tar command, following the form in the synopsis, is

tar -c -f archive.tar -v file1 file2
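A quick check in a scratch directory confirms that -f consumes the word immediately after it:

```shell
dir=$(mktemp -d)
cd "$dir"
echo a > file1
echo b > file2
tar -c -f archive.tar -v file1 file2   # -f grabs archive.tar as its argument
tar -tf archive.tar                    # lists: file1, file2
```

Swapping -f and -v in the bundled form (-cfv) would make v the archive name, which is precisely the error described above.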
correct order while combining different options of a command
1,657,201,961,000
I grant the heading of this question is odd, but I do wonder whether in some situations there might be a need to take extra caution and somehow "enforce" non-recursiveness when changing permissions with chmod non-recursively (without the -R argument). Say I have a directory ~/x. This dir has a few files, as well as a sub-dir ~/x/y that also has a few files, and I decided to make all x files executable without affecting the files in y. I could execute:

chmod +x ~/x/*

Surely chmod would do the job, and it's unlikely that in any Bash version (including future versions) the POSIX logic would change such that the above chmod would affect the sub-dir's files as well, but I do wonder if there could be any situations in Bash (or common shells) in which chmod +x ~/x/* would also cover the y files, and how to improve my command to protect against such an undesired change?
You can use find and restrict it to regular files directly in that directory:

find ~/x -maxdepth 1 -type f -exec chmod +x {} +
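Sketched on a throwaway tree, the -maxdepth 1 restriction leaves the sub-directory's files untouched:

```shell
dir=$(mktemp -d)
mkdir -p "$dir/x/y"
touch "$dir/x/a" "$dir/x/y/b"
find "$dir/x" -maxdepth 1 -type f -exec chmod +x {} +
ls -l "$dir/x/a" "$dir/x/y/b"   # only a has gained the x bits
```

The -type f part also answers the other half of the worry: unlike chmod +x ~/x/*, it can never touch the y directory entry itself, only regular files.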
Make all files in a dir executable (non-recursively) while strictly ensuring non-recursiveness
1,657,201,961,000
I saw a tutorial where a user is created with:

useradd -g liao1 lamp

They place the -g liao1 before the lamp. I tested:

useradd lamp -g liao1

with the option after the name, and I can create the user too, but I am not sure if there is a distinction.
As ivanivan noted, the interpretation of the line parameters is done by the program (useradd) in your case. Many programs don't care about the order of the parameters, but some do. Eg. convert (from the imagemagick package) converts images and specifies: convert [input-option] input-file [output-option] output-file So, the input-option(s) have to be specified before the input file, and similar for the output-option(s). There are much more complicated examples, such as compilers, which need options to be in a specific order in order to work correctly. In all cases, it's very advisable to consult the man pages of the command, or the --help (or -h or -? or whatever) of the program you want to run. Things can go wrong...
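useradd happens to accept both orders because it uses a GNU-style parser that permutes arguments. A strictly POSIX getopts loop does not, which this sketch (with a hypothetical parse function standing in for useradd) makes visible:

```shell
# POSIX getopts stops at the first word that is not an option.
parse() {
  OPTIND=1                     # reset between calls
  while getopts g: opt; do
    echo "option $opt = $OPTARG"
  done
  shift $((OPTIND - 1))
  echo "operand: $1"
}
parse -g liao1 lamp   # option g = liao1, then operand: lamp
parse lamp -g liao1   # only operand: lamp -- the -g is never parsed
```

So whether the trailing-option form works depends entirely on the program's parser, which is exactly why checking the man page (or --help) is the safe habit.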
Does the params and the name affect the command in Linux?
1,657,201,961,000
How do I understand what the various options/flags mean? For example: 1) uname -a - What does -a denote here? 2) pyang -f - What does -f denote here? I just want to understand if there is some reference/doc that tells the usage of these? Please clarify.
With almost all Linux commands, I think the fastest and easiest first course of action is to append "--help" to the command. This gives you a good summary, which is often enough. If you need more details, the man command is a good second choice. For example:

$ uname --help
Usage: uname [OPTION]...
Print certain system information.  With no OPTION, same as -s.

  -a, --all                print all information, in the following order,
                             except omit -p and -i if unknown:
  -s, --kernel-name        print the kernel name
  -n, --nodename           print the network node hostname
  -r, --kernel-release     print the kernel release
  -v, --kernel-version     print the kernel version
  -m, --machine            print the machine hardware name
  -p, --processor          print the processor type (non-portable)
  -i, --hardware-platform  print the hardware platform (non-portable)
  -o, --operating-system   print the operating system
      --help     display this help and exit
      --version  output version information and exit
What do the options after a specific command mean?
1,657,201,961,000
I'm busy setting up a backup script to run on my Pi using rsync. I see that a number of people use the -v option in their cron jobs. Why? It's going to be run as root, and not in a terminal where someone will see it. I understand that maybe if something happens you can tail /var/log/syslog, but the chance of that happening is negligible. As I'm running the backup between 2 external hard drives on the same system, I can see the benefit of using -za: the -z for compression, because why not, the CPU is barely taxed at the best of times; the -a to preserve permissions, timestamps, symlinks, owners and groups, and to make it recursive. I might remove the -z and replace it with -W to write whole files instead of blocks, but I don't want it to run for too many hours. Is there a way to output any encountered errors to an error log file? In that case, the -v option might make sense — unless I'm missing something here.
Usually, cron sends the output of the jobs it runs to the relevant user; so -v is useful there because you get an email with the full output of the rsync command. On a correctly-configured system, even mail to root goes to the appropriate user. For this to work you need mail to be setup appropriately on the system running cron; that used to be common on Unix-type systems, not so much nowadays... cron uses sendmail by default to send email; this can be overridden with the -m option to crond. Alternatively, you can configure crond to log job output using syslog, with the -s option. You can also redirect individual cron jobs' output using shell redirection, so > somelog.log 2> errorlog.log would log standard output to somelog.log, and standard error to errorlog.log (you can of course add paths).
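Putting the shell-redirection suggestion together, a crontab entry along these lines (all paths here are illustrative) keeps -v useful even without working mail:

```shell
# m h dom mon dow  command
0 3 * * * rsync -vza /mnt/source/ /mnt/backup/ >> /var/log/backup.log 2>> /var/log/backup.err
```

Standard output (the -v transfer listing) is appended to one file and errors to another, so a quick tail of backup.err shows whether the last run failed.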
Use of Verbose in a cron job
1,657,201,961,000
I have one txt file called sales.txt:

Fred apples 20 April 4
Susy oranges 5 April 7
Mark watermelons 12 April 10
Terry peaches 7 April 15

And when I use this command:

[root@ip-10-0-7-125 bash-tut]# cat sales.txt | cat /dev/stdin | cut -d' ' -f 2,3 | sort
20 April
oranges 5
peaches 7
watermelons 12

The point is, if I remove the -d' ' part, I get all the fields of the text file:

[root@ip-10-0-7-125 bash-tut]# cat sales.txt | cat /dev/stdin | cut -f 2,3 | sort
Mark watermelons 12 April 10
pples 20 April 4
Susy oranges 5 April 7
Terry peaches 7 April 15
[root@ip-10-0-7-125 bash-tut]# man cut

Why was that? I looked up the d option in the man page and it said -d just means field delimiter.
The man page on my system says:

-d, --delimiter=DELIM
       use DELIM instead of TAB for field delimiter

So if you don't specify -d, cut assumes that your fields are separated by TAB characters. Your input file contains no TAB characters. Meanwhile, the man page also says:

-f, --fields=LIST
       select only these fields; also print any line that contains no delimiter character, unless the -s option is specified

The key part there is "also print any line that contains no delimiter character". This is what you have: every line in your file contains "no delimiter character".
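Both behaviours can be checked on a single line from the file:

```shell
line='Fred apples 20 April 4'
printf '%s\n' "$line" | cut -d' ' -f2,3   # prints: apples 20
printf '%s\n' "$line" | cut -f2,3         # no TAB in the line, so it passes through whole
```

Adding -s to the second command would instead suppress the delimiter-less line entirely, printing nothing.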
Why do i need -d option on this cut command?
1,657,201,961,000
On an NFS server, the shares are typically set up in /etc/export, where mount options like rw, root_squash, sync etc. can be set. When mounting the NFS share on the client side, again mount options can be specified. How do these two (possibly opposite) ways to set the options relate to each other? Do the options on one side supersede those on the other side?
The mount options on the NFS client can be more restrictive than those on the server but not the opposite. For example, if a share is exported read/write the client can choose to mount read-only. However, if a share is exported read-only then the client gets read-only no matter how it tries to mount it.
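As an illustrative sketch (the share path and network below are made up), the two sides might look like this; the client's ro narrows the server's rw, while the reverse — exporting ro and mounting rw — would still leave the client unable to write:

```shell
# On the server, in /etc/exports:
#   /srv/share  192.168.1.0/24(rw,sync,root_squash)
# On a client, mounting more restrictively than the export allows:
mount -t nfs -o ro server:/srv/share /mnt/share
```

Note that purely client-side behaviour options (e.g. soft/hard, timeo) are decided by the client's mount options alone; the export options only constrain what the server will permit.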
NFS server mount options vs. client mount options
1,657,201,961,000
I know this is not an exciting question, but I still don't understand why some programs need program -h and others program --help; sometimes it is very tedious to work out which one a program recognises.
In practice, programs should have both options. The -h is the "short form" and --help is "long form". Short form command options are usually one or two characters while long form options are more descriptive (such as yum update -y and yum update --assume-yes meaning "assume yes to all questions"). Programs that don't use both usually are non-standard utilities.
why some programs needs -h and other no
1,657,201,961,000
If the following is executed on Ubuntu Server:

curl https://services.gradle.org/distributions/gradle-7.5.1-bin.zip -O

this appears:

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:--  0:00:19 --:--:--     0
curl: (6) Could not resolve host: services.gradle.org

Observe the curl: (6) Could not resolve host: services.gradle.org part. If the same command is executed on Fedora Server, it shows:

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0

No error, but the gradle-7.5.1-bin.zip file is empty. If you open a web browser and put https://services.gradle.org/distributions/gradle-7.5.1-bin.zip in the address bar, the download starts automatically. So how can this situation be fixed?

Note

Just in case: both Linux systems are running as VMs in VirtualBox.
Apparently there is some problem with DNS resolution on the Ubuntu server. Try setting another DNS server, like 8.8.8.8 or 1.1.1.1: check the contents of /etc/resolv.conf and set

nameserver 8.8.8.8

Update: after fixing the issue with DNS, add the -L option to curl, which allows it to "follow redirects". The problem here was that the URL does not actually provide the file; it redirects to another URL that hosts the file. This is done automatically in a browser, but in curl the -L option is needed. See the output of wget as an example:

% wget https://services.gradle.org/distributions/gradle-7.5.1-bin.zip
--2022-11-07 11:16:12--  https://services.gradle.org/distributions/gradle-7.5.1-bin.zip
Resolving services.gradle.org (services.gradle.org)... 104.18.191.9, 104.18.190.9
Connecting to services.gradle.org (services.gradle.org)|104.18.191.9|:443... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: https://downloads.gradle-dn.com/distributions/gradle-7.5.1-bin.zip [following]
--2022-11-07 11:16:12--  https://downloads.gradle-dn.com/distributions/gradle-7.5.1-bin.zip
Resolving downloads.gradle-dn.com (downloads.gradle-dn.com)... 104.18.164.99, 104.18.165.99
Connecting to downloads.gradle-dn.com (downloads.gradle-dn.com)|104.18.164.99|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 120635534 (115M) [application/zip]
Saving to: ‘gradle-7.5.1-bin.zip.1’

gradle-7.5.1-bin.zip.1  100%[================================>] 115,05M  5,31MB/s    in 21s

2022-11-07 11:16:34 (5,43 MB/s) - ‘gradle-7.5.1-bin.zip.1’ saved [120635534/120635534]

301 Moved Permanently is the redirect, and after that comes the new URL.
curl: (6) Could not resolve host: services.gradle.org
1,657,201,961,000
I read this question: Get SSH server key fingerprint. In the answer and solution appears the following command (adapted here for presentation purposes):

ssh-keyscan 192.168.1.X | ssh-keygen -lf -

I know that the first command on its own shows the public keys of the host. When the complete command — thus the two parts — is executed, according to the final output, the public keys from the first command are used to generate their own fingerprints.

Question

How does - work in the ssh-keygen -lf - command? Is it mandatory? I know l is to show the fingerprint and f to define a filename, but how is - interpreted?
By default, ssh-keygen -l will ask you interactively for what public key file to show the fingerprint of. With -f you give it the pathname of some existing file instead. If the pathname is - (a dash), input is read from standard input instead of from a file. This is a common practice that quite a few other commands also follow, most notably cat (cat - reads from standard input). In your pipeline, the data on standard input is provided by the ssh-keyscan command. The ssh-keyscan command will extract the public key of the mentioned host and pass it on to ssh-keygen -l. The ssh-keygen utility will output the fingerprint. Without -f -, the ssh-keygen utility would try to use the output of ssh-keyscan as the filename to read the key from. This is arguably bad design, as it's easy to programmatically determine whether the input to a program comes from a terminal or something that is not a terminal (like another command or a file). So in a sense, -f - could be made unnecessary.
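A quick sketch with a throwaway key shows that -f - and -f file are interchangeable here; only where the key is read from differs:

```shell
dir=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$dir/key"   # generate a throwaway key pair
ssh-keygen -lf "$dir/key.pub"                  # fingerprint read from a file
ssh-keygen -lf - < "$dir/key.pub"              # same fingerprint, read from stdin
```

In the original pipeline, the data on stdin comes from ssh-keyscan instead of a redirection, but ssh-keygen cannot tell the difference.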
How does "-" work in the "ssh-keygen -lf -" command?
1,657,201,961,000
Please consider the prior discussion as background to this new question. I have modified my script and applied the same filesystem options to my USB drive's ext4 partitions using tune2fs, and mount options specified in the fstab. Those options are all the same as for the previous discussion. I have applied those changes and performed a reboot, but the mount command is not reporting what I would have expected, namely mount options similar to those reported for the internal hard drive partitions. What is being reported is the following:

/dev/sdc3 on /site/DB005_F1 type ext4 (rw,relatime)
/dev/sdc4 on /site/DB005_F2 type ext4 (rw,relatime)
/dev/sdc5 on /site/DB005_F3 type ext4 (rw,relatime)
/dev/sdc6 on /site/DB005_F4 type ext4 (rw,relatime)
/dev/sdc7 on /site/DB005_F5 type ext4 (rw,relatime)
/dev/sdc8 on /site/DB005_F6 type ext4 (rw,relatime)
/dev/sdc9 on /site/DB005_F7 type ext4 (rw,relatime)
/dev/sdc10 on /site/DB005_F8 type ext4 (rw,relatime)
/dev/sdc11 on /site/DB006_F1 type ext4 (rw,relatime)
/dev/sdc12 on /site/DB006_F2 type ext4 (rw,relatime)
/dev/sdc13 on /site/DB006_F3 type ext4 (rw,relatime)
/dev/sdc14 on /site/DB006_F4 type ext4 (rw,relatime)
/dev/sdc15 on /site/DB006_F5 type ext4 (rw,relatime)
/dev/sdc16 on /site/DB006_F6 type ext4 (rw,relatime)
/dev/sdc17 on /site/DB006_F7 type ext4 (rw,relatime)
/dev/sdc18 on /site/DB006_F8 type ext4 (rw,relatime)

These all report the same, but only "rw,relatime", when I expected much more.
The full dumpe2fs report for the first USB partition (same as for all others) is as follows:

root@OasisMega1:/DB001_F2/Oasis/bin# more tuneFS.previous.DB005_F1.20220907-210437.dumpe2fs
dumpe2fs 1.45.5 (07-Jan-2020)
Filesystem volume name:   DB005_F1
Last mounted on:          <not available>
Filesystem UUID:          11c8fbcc-c1e1-424d-9ffe-ad0ccf480128
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file dir_nlink extra_isize metadata_csum
Filesystem flags:         signed_directory_hash
Default mount options:    journal_data user_xattr acl block_validity nodelalloc
Filesystem state:         clean
Errors behavior:          Remount read-only
Filesystem OS type:       Linux
Inode count:              6553600
Block count:              26214400
Reserved block count:     1310720
Free blocks:              25656747
Free inodes:              6553589
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      1017
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
Flex block group size:    16
Filesystem created:       Sat Nov  7 09:57:44 2020
Last mount time:          Wed Sep  7 18:18:32 2022
Last write time:          Wed Sep  7 20:55:33 2022
Mount count:              211
Maximum mount count:      10
Last checked:             Sun Nov 22 13:50:57 2020
Check interval:           1209600 (2 weeks)
Next check after:         Sun Dec  6 13:50:57 2020
Lifetime writes:          1607 MB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               256
Required extra isize:     32
Desired extra isize:      32
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      802d4ef6-daf4-4f68-b889-435a5ce467c3
Journal backup:           inode blocks
Checksum type:            crc32c
Checksum:                 0x21a24a19
Journal features:         journal_checksum_v3
Journal size:             512M
Journal length:           131072
Journal sequence:         0x000000bd
Journal start:            0
Journal checksum type:    crc32c
Journal checksum:         0xf0a385eb

Does anyone know why this is happening?
Can something be done to have both internal and USB hard disk report same options?

In my /etc/default/grub file, I currently use the following definition involving a quirk:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash scsi_mod.use_blk_mq=1 usb-storage.quirks=1058:25ee:u ipv6.disable=1"

Do I need to specify another quirk for the journalling and mount options to take effect as desired? Or is this again an "everything is OK" situation, the same as for the other post?

Modified script:

#!/bin/sh
####################################################################################
###
###  $Id: tuneFS.sh,v 1.3 2022/09/08 03:31:12 root Exp $
###
###  Script to set consistent (local/site) preferences for filesystem treatment at boot-time or mounting
###
####################################################################################

TIMESTAMP=`date '+%Y%m%d-%H%M%S' `
BASE=`basename "$0" ".sh" `

###
### These variables will document hard-coded 'mount' preferences for filesystems
###
count=1
BOOT_MAX_INTERVAL="-c 20"       ### max number of boots before fsck [20 boots]
TIME_MAX_INTERVAL="-i 2w"       ### max calendar time between boots before fsck [2 weeks]
ERROR_ACTION="-e remount-ro"    ### what to do if error encountered
#-m reserved-blocks-percentage

###
### This OPTIONS string should be updated manually to document
### the preferred and expected settings to be applied to ext4 filesystems
###
OPTIONS="-o journal_data,block_validity,nodelalloc"

ASSIGN=0
REPORT=0
VERB=0
SINGLE=0
USB=0

while [ $# -gt 0 ]
do
    case ${1} in
        --default ) REPORT=0 ; ASSIGN=0 ; shift ;;
        --report )  REPORT=1 ; ASSIGN=0 ; shift ;;
        --force )   REPORT=0 ; ASSIGN=1 ; shift ;;
        --verbose ) VERB=1 ; shift ;;
        --single )  SINGLE=1 ; shift ;;
        --usb )     USB=1 ; shift ;;
        * ) echo "\n\t Invalid parameter used on the command line. Valid options: [ --default | --report | --force | --single | --usb | --verbose ] \n Bye!\n" ; exit 1 ;;
    esac
done

workHorse()
{
    reference=`ls -t1 "${PREF}."*".dumpe2fs" 2>/dev/null | tail -1 `
    if [ -n "${reference}" -a -s "${reference}" ]
    then
        if [ ! -f "${PREF}.dumpe2fs.REFERENCE" ]
        then
            mv -v ${reference} ${PREF}.dumpe2fs.REFERENCE
        fi
    fi

    reference=`ls -t1 "${PREF}."*".verify" 2>/dev/null | tail -1 `
    if [ -n "${reference}" -a -s "${reference}" ]
    then
        if [ ! -f "${PREF}.verify.REFERENCE" ]
        then
            mv -v ${reference} ${PREF}.verify.REFERENCE
        fi
    fi

    BACKUP="${BASE}.previous.${PARTITION}.${TIMESTAMP}"

    rm -f ${PREF}.*.tune2fs
    rm -f ${PREF}.*.dumpe2fs

    ### reporting by 'tune2fs -l' is a subset of that from 'dumpe2fs -h'
    if [ ${REPORT} -eq 1 ]
    then
        ### No need to generate report from tune2fs for this mode.
        ( dumpe2fs -h ${DEVICE} 2>&1 ) | awk '{
            if( NR == 1 ){ print $0 } ;
            if( index($0,"revision") != 0 ){ print $0 } ;
            if( index($0,"mount options") != 0 ){ print $0 } ;
            if( index($0,"features") != 0 ){ print $0 } ;
            if( index($0,"Filesystem flags") != 0 ){ print $0 } ;
            if( index($0,"directory hash") != 0 ){ print $0 } ;
        }' >${BACKUP}.dumpe2fs
        echo "\n dumpe2fs REPORT [$PARTITION]:"
        cat ${BACKUP}.dumpe2fs
    else
        ### Generate report from tune2fs for this mode but only as sanity check.
        tune2fs -l ${DEVICE} 2>&1 >${BACKUP}.tune2fs
        ( dumpe2fs -h ${DEVICE} 2>&1 ) >${BACKUP}.dumpe2fs
        if [ ${VERB} -eq 1 ] ; then
            echo "\n tune2fs REPORT:"
            cat ${BACKUP}.tune2fs
            echo "\n dumpe2fs REPORT:"
            cat ${BACKUP}.dumpe2fs
        fi
        if [ ${ASSIGN} -eq 1 ]
        then
            echo " COMMAND: tune2fs ${COUNTER_SET} ${BOOT_MAX_INTERVAL} ${TIME_MAX_INTERVAL} ${ERROR_ACTION} ${OPTIONS} ${DEVICE} ..."
            tune2fs ${COUNTER_SET} ${BOOT_MAX_INTERVAL} ${TIME_MAX_INTERVAL} ${ERROR_ACTION} ${OPTIONS} ${DEVICE}
            rm -f ${PREF}.*.verify
            ( dumpe2fs -h ${DEVICE} 2>&1 ) >${BACKUP}.verify
            if [ ${VERB} -eq 1 ] ; then
                echo "\n Changes:"
                diff ${BACKUP}.dumpe2fs ${BACKUP}.verify
            fi
        else
            if [ ${VERB} -eq 1 ] ; then
                echo "\n Differences:"
                diff ${BACKUP}.tune2fs ${BACKUP}.dumpe2fs
            fi
            rm -f ${BACKUP}.verify
        fi
    fi
}

workPartitions()
{
    case ${PARTITION} in
        1 ) case ${DISK_ID} in
                1 ) DEVICE="/dev/sda3" ; OPTIONS="" ;;
                5 ) DEVICE="/dev/sdc3" ;;
                6 ) DEVICE="/dev/sdc11" ;;
            esac ;;
        2 ) case ${DISK_ID} in
                1 ) DEVICE="/dev/sda7" ;;
                5 ) DEVICE="/dev/sdc4" ;;
                6 ) DEVICE="/dev/sdc12" ;;
            esac ;;
        3 ) case ${DISK_ID} in
                1 ) DEVICE="/dev/sda8" ;;
                5 ) DEVICE="/dev/sdc5" ;;
                6 ) DEVICE="/dev/sdc13" ;;
            esac ;;
        4 ) case ${DISK_ID} in
                1 ) DEVICE="/dev/sda9" ;;
                5 ) DEVICE="/dev/sdc6" ;;
                6 ) DEVICE="/dev/sdc14" ;;
            esac ;;
        5 ) case ${DISK_ID} in
                1 ) DEVICE="/dev/sda12" ;;
                5 ) DEVICE="/dev/sdc7" ;;
                6 ) DEVICE="/dev/sdc15" ;;
            esac ;;
        6 ) case ${DISK_ID} in
                1 ) DEVICE="/dev/sda13" ;;
                5 ) DEVICE="/dev/sdc8" ;;
                6 ) DEVICE="/dev/sdc16" ;;
            esac ;;
        7 ) case ${DISK_ID} in
                1 ) DEVICE="/dev/sda14" ;;
                5 ) DEVICE="/dev/sdc9" ;;
                6 ) DEVICE="/dev/sdc17" ;;
            esac ;;
        8 ) case ${DISK_ID} in
                1 ) DEVICE="/dev/sda4" ;;
                5 ) DEVICE="/dev/sdc10" ;;
                6 ) DEVICE="/dev/sdc18" ;;
            esac ;;
    esac

    PARTITION="DB00${DISK_ID}_F${PARTITION}"
    PREF="${BASE}.previous.${PARTITION}"

    echo "\n\t\t PARTITION = ${PARTITION}"
    echo "\t\t DEVICE = ${DEVICE}"

    count=`expr ${count} + 1 `
    COUNTER_SET="-C ${count}"

    workHorse
}

workPartitionGroups()
{
    if [ ${SINGLE} -eq 1 ]
    then
        for PARTITION in `echo ${ID_SET} `
        do
            echo "\n\t Actions only for DB00${DISK_ID}_F${PARTITION} ? [y|N] => \c" ; read sel
            if [ -z "${sel}" ] ; then sel="N" ; fi
            case ${sel} in
                y* | Y* ) DOIT=1 ; break ;;
                * )       DOIT=0 ;;
            esac
        done
        if [ ${DOIT} -eq 1 ]
        then
            #echo "\t\t PARTITION ID == ${PARTITION} ..."
            workPartitions
            exit
        fi
    else
        for PARTITION in `echo ${ID_SET} `
        do
            #echo "\t\t PARTITION ID == ${PARTITION} ..."
            workPartitions
        done
    fi
}

if [ ${USB} -eq 1 ]
then
    for DISK_ID in 5 6
    do
        echo "\n\n DISK ID == ${DISK_ID} ..."
        ID_SET="1 2 3 4 5 6 7 8"
        workPartitionGroups
    done
else
    DISK_ID="1"
    echo "\n\n DISK ID == ${DISK_ID} ..."
    ID_SET="2 3 4 5 6 7 8"
    workPartitionGroups
fi

exit 0
Some ext4 filesystem options may not take effect if specified in /etc/fstab as they require changes to filesystem structures. Some of those can be simply applied with tune2fs while the filesystem is unmounted, but there are some options that may require running a full filesystem check after tune2fs to take effect properly. As far as I know, there is no mechanism that would affect filesystem options based on whether the disk is connected by USB or not.
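As a sanity check that the stored defaults survive even when mount only lists "rw,relatime", the "Default mount options" field can be pulled out of a dumpe2fs header with awk. Here a captured header (trimmed to three lines) stands in for a live dumpe2fs -h /dev/sdc3 run, so the sketch runs without a real device:

```shell
# Sample dumpe2fs -h header, standing in for output from a real device.
report='Filesystem volume name:   DB005_F1
Default mount options:    journal_data user_xattr acl block_validity nodelalloc
Filesystem state:         clean'

# Extract just the stored default mount options from the header.
defaults=$(printf '%s\n' "$report" | awk -F':[ ]*' '/^Default mount options/ {print $2}')
printf '%s\n' "$defaults"
```

Against a live system you would feed `dumpe2fs -h "$dev"` into the same awk filter and compare the result with the corresponding line of /proc/mounts.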
EXT4 on USB - how to specify journalling behaviour to be same as for root disk partitions
1,657,201,961,000
About less, and according to: Less command Linux / Unix Colored Man Pages With less Command, the help text indicates the following:

  f  ^F  ^V  SPACE  *  Forward  one window (or N lines).
  b  ^B  ESC-v      *  Backward one window (or N lines).
  z                 *  Forward  one window (and set window to N).
  w                 *  Backward one window (and set window to N).

Enabling line numbers - with -N - for example for man less itself, I can see that b/f behaves the same as w/z regarding the amount of content/lines moved up/down by either window or page.

Question

What is the difference between b/f vs w/z? Normally I use the first pair, but when should I use the second pair?

Extra Question

What does "and set window to N" mean? I am assuming it is the expected difference that makes w/z different from b/f.
I'll try my best to explain with an example. Open a long text file with less, something with obvious lines. Now type 4z, and you will see that 4 lines have shifted down. Type z and another 4 lines have moved. That 4z has told less that you want the window size to be set to 4. Once you have set the window size, all options (f, b, z or w) will now use that as the window size when moving through the text. The difference is when f and b are used like this, they do not set the window size, they only move by that N number of lines.

Summing up with an example:

8f: Move through the document 8 lines.
9b: Move backwards through the document 9 lines.
f or z: Move one "window size" through the document.
b or w: Move backwards one "window size" through the document.
6z: Move through the document 6 lines and set the "window size" to 6 lines. Using f, b, z or w after this will shift the document 6 lines.
3w: Move backward through the document 3 lines and set the "window size" to 3 lines. Using f, b, z or w after this will shift the document 3 lines.

To reset the window size, you can type -+z (then enter). Hope that helps.
less command: b/f vs w/s
1,657,201,961,000
In the output of ps --help all, the options related to ttys are described as follows:

 -a               all with tty, except session leaders
  a               all with tty, including other users
  x               processes without controlling ttys

I know the difference between tty[1-6] and pts/[0...N]; quickly, the former is based on a direct connection through a console and the latter on a remote connection (i.e. ssh) - correct me if something is wrong.

Through VirtualBox for Ubuntu Server, having:

the 6 ttys logged in with 3 different users (each user on 2 ttys, for example user1 logged at tty1 and tty3, user2 logged at tty2 and tty4 and so on)
3 SSH connections.

When either ps a or ps -a is executed, the TTY column in the output shows the tty[1-6] values as expected, but pts/[0..N] values appear too. I didn't expect that, because a pts is not a tty.

Now about the x option - I am not sure whether its description means: everything not related to a tty - for example pts. But again both appear: the ttys (not expected) and pts (expected).

If I am understanding these options in a wrong way, please correct me.

Question

How do I generate the report of processes restricted to only either tty or pts, not both together?
pts is used for any UNIX 98 pseudoterminal, not only remote connections. You’ll see it used for graphical terminal emulators, screen or tmux sessions, etc. You mentioned tty[1-6]; tty can also be a prefix for pseudoterminals, when BSD pseudoterminals are used; you’ll then see ttyp0 etc. It is also used as a prefix for terminals connected e.g. over a serial port (ttyS0 etc.), USB (ttyUSB0) and perhaps others I’m forgetting. There can also be more than 6 VTs.

ps doesn’t distinguish between all these. Whatever is a process’ controlling terminal is a terminal. As a result, the only way to select processes in the way you want to is to either specify individual terminals:

ps -t tty1 -t tty2

or more generally,

ttys=(/dev/tty[123456789]*)
ps "${ttys[@]/#/-t}"

etc. (the selection is additive), or post-process ps’s output:

ps -e | awk '$2 ~ /^tty/'

As far as ps x is concerned, x lifts the restriction requiring that processes have a terminal; it doesn’t limit the selection to processes without a terminal. So ps …x will show any processes already selected, plus any other processes which would have been filtered because of their lack of a terminal — in the basic ps x case, this is all your processes (the user restriction is still in place).
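The post-processing approach can be demonstrated on canned ps-style output (the PIDs and commands below are made up), so it runs the same anywhere: keep the header plus any row whose TTY column starts with "tty".

```shell
# Fake `ps -e`-style output; column 2 is the TTY field.
sample='  PID TTY      TIME CMD
 1001 tty1     00:00:00 bash
 1002 pts/0    00:00:00 vim
 1003 tty2     00:00:01 top'

# Keep the header line plus rows whose TTY starts with "tty".
printf '%s\n' "$sample" | awk 'NR==1 || $2 ~ /^tty/'
```

Swapping the pattern to /^pts/ gives the complementary report for pseudoterminals only.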
ps command: How generate the report of processes but according with only either tty or pts?
1,657,201,961,000
I am using a bash script to call rsync commands. Have decided to collect some options in an array called oser. The idea is to look at what's different in the two invocations and put that into the array, instead of putting all of the common options into the array. Now I would like to add the --backup possibility to rsync and getting confused on how to go about with the implementation

oser=()
(( filetr_dryrun == 1 )) && oser=(--dry-run)

if (( filetr_dryrun == 1 )); then
    rsync "${oser[@]}" -av --progress --log-file="$logfl" "$source" "$destin"
elif (( filetr_exec == 1 )); then
    rsync "${oser[@]}" -av --progress --log-file="$logfl" "$source" "$destin"
else
    rsync "${oser[@]}" -av --progress --log-file="$logfl" "$source" "$destin"
fi
How about this:

# "always" options: you can put any whitespace in the array definition
oser=(
    -av
    --progress
    --log-file="$logfl"
)

# note the `+=` below to _append_ to the array
(( filetr_dryrun == 1 )) && oser+=( --dry-run )

# now, `oser` contains all the options
rsync "${oser[@]}" "$source" "$destin"

Now, if you want to add more options, just add them into the initial oser=(...) definition, or if there's some condition, use oser+=(...) to append to the array.
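A runnable sketch of the same pattern (the filetr_dryrun flag and the option values are just placeholders): printing the final word list with printf is a handy way to check exactly what rsync would receive before running it for real.

```shell
filetr_dryrun=1

# Common options, then a conditional append.
oser=( -av --progress )
(( filetr_dryrun == 1 )) && oser+=( --dry-run )

# Inspect the command that would be run, one word per %s.
printf '%s ' rsync "${oser[@]}"; echo
```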
Adding options using bash arrays
1,657,201,961,000
The man page for txt2html says: --make_links Should we try to build links? If this is false, then the links dictionaries are not consulted and only structural text-to-HTML conversion is done. (default: true) I want to set this to false. How do I do this? I could not find this information, and have tried several guesses.
The txt2html manual also says Boolean options can be negated by preceding them with no [...] The manual then refers to the Perl package Getopt::Long. In its manual, one may read the following about boolean options: The option does not take an argument and may be negated by prefixing it with no or no-. [...] This means that to invert the sense of the --make-links option, use --no-make-links or --nomake-links.
How to specify boolean value in argument to external command?
1,657,201,961,000
came across this line in a code base today ln -fs /tmp/Cargo.lock . and I couldn't find the -fs argument anywhere in man ln. What does it do? P.S. The project runs this command inside of a docker container I tried this command on my local machine too by making a file in the same directory path as the given command and it ran.
Standard Unix tools, and tools using the standard way of parsing command line options, allow for combining multiple single letter options into a single string of options (as long as the individual options don't take option-arguments)1.

Because of the way the -f and -s options to the ln utility are defined (as options that don't take arguments), the command ln -fs is the same as ln -f -s.

The -f and the -s options to the ln utility are described separately in the ln(1) manual (see man ln), but in short they are

-f    Force existing destination pathnames to be removed to allow the link.
-s    Create symbolic links instead of hard links.

(The above was taken from the POSIX specification for ln)

1This is a POSIX guideline for command line utilities. See "Guideline 14" in Utility Syntax Guidelines.
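A quick demonstration that the clustered and separate spellings behave identically, run in a throwaway directory (all of the file names below are made up):

```shell
dir=$(mktemp -d); cd "$dir"
echo data > target

ln -fs target combined     # clustered: -f and -s in one argument
ln -f -s target separate   # the same two options written separately

# Both names are now symlinks pointing at "target".
readlink combined
readlink separate
```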
What does the -fs flag do in the ln command?
1,657,201,961,000
It used to be that you could force command line FTP to use IPv4 like so: ftp -4 ftp.example.com However, at some point in the relatively recent past the "-4" (and for that matter, the "-6") option seems to have been removed. Despite exhaustively searching the Web (even for the exact error "ftp: 4: unknown option") I can't find out how to, as the old man page reads, "Use only IPv4 to contact any host" and force use of IPv4. Instead I'm forced to wait for the client to time out on the IPv6 in the DNS before trying IPv4, which is waste of time. Is there any other way to accomplish this? And before I get lectured on the insecurity of FTP, I'm aware of that and my options. However, I'm connecting to a very old server with non-critical log-in credentials to retrieve non-sensitive data. My ftp on Xubuntu 14.04 LTS supports the -4 option, but ftp on CentOS 7.7 doesn't.
-4 and -6 are options added by a patch in the Debian version of netkit-ftp; you’ll find these available in any Debian derivative. Fedora, RHEL and CentOS don’t have an equivalent patch, so their ftp doesn’t support these options. To force IPv4, you could try specifying the target IP address rather than the host name.
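Following the answer's suggestion of connecting by address, one way to obtain an IPv4-only address to hand to ftp is getent ahostsv4 (shown here against localhost as a stand-in for the real server; this is a workaround sketch, not a documented ftp feature):

```shell
host=localhost   # substitute your FTP server's hostname

# ahostsv4 restricts the lookup to IPv4 addresses; take the first one.
addr=$(getent ahostsv4 "$host" | awk 'NR==1 {print $1}')
echo "$addr"
# then connect with: ftp "$addr"
```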
What happened to the "-4" option for command line FTP?
1,657,201,961,000
Could you please explain what each option on this ls command does: ls -td -- */? The result of such a command would look like below:

$ ls
$ ls -al
total 4
drwxr-xr-x   5 root root   68 Jun  4 09:58 .
drwxrwxrwt. 13 root root 4096 Jun  4 10:05 ..
drwxr-xr-x   5 root root   36 May 31 15:48 05-31-2018
drwxr-xr-x   5 root root   36 Jun  4 09:45 06-04-2018
drwxr-xr-x   2 root root    6 Jun  4 09:56 06-05-2018
-rw-r--r--   1 root root    0 Jun  4 09:58 test
$ ls -td -- */
06-05-2018/ 06-04-2018/ 05-31-2018/

# To get latest folder created:
$ ls -td -- */ | head -n 1
06-05-2018/

I have no idea what each option does with the ls command.
-td is the two options -t and -d written together. -t tells ls to sort the output based on time, and -d asks to show directories named on the command line as themselves, instead of their contents.

The -- option is as far as I know not explicitly documented for many commands that do support it, and it has become a slightly obscure syntax. It finds its origins in the getopt function and is used to delimit the end of the options and the start of the parameters.

You would mainly use that -- syntax to use parameters that would otherwise look like options. A good illustration is trying to manipulate files that start their names with a hyphen, such as a file called "-rm -rf".

Create it with touch -- '-rm -rf':

ls -la
total 0
-rw-r--r--  1 herman  wheel  0 Jun  4 16:46 -rm -rf

ls -la *
ls: illegal option --
usage: ls [-ABCFGHLOPRSTUWabcdefghiklmnopqrstuwx1] [file ...]

ls -la -- *
total 0
-rw-r--r--  1 herman  wheel  0 Jun  4 16:46 -rm -rf

and

rm -i *
rm: illegal option -- m
usage: rm [-f | -i] [-dPRrvW] file ...
       unlink file

versus

rm -i -- *

For the meaning of command line options in general, this very basic nugget: nearly all Linux commands come with an online manual explaining their usage and the various options that modify their behaviour. That manual can be accessed with the man command, i.e. man ls. Try man man for an explanation of the manual.
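A small sandbox run showing the sort order (the directory names and timestamps below are arbitrary): the most recently modified directory comes out first, so head -n 1 picks it.

```shell
dir=$(mktemp -d); cd "$dir"
mkdir old mid new

# Give the directories staggered modification times (YYYYMMDDhhmm).
touch -t 202001010000 old
touch -t 202006010000 mid
touch -t 202012010000 new

# -t sorts newest first, -d keeps the directory names themselves,
# -- ends option parsing before the glob results.
ls -td -- */ | head -n 1
```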
What is -- and -td options on ls command?
1,657,201,961,000
I am writing a shell script that takes several args like -l -s -a -f thing ming and append only those starting with -. This is my code:

arrayOfArgs=()
for arg in "$@":
do
  case arg in
    -*)
      arrayofArgs+=($args)
      ;;
  esac
done

Now my arrayOfArgs prints this: -l, -s, -a, -f. The thing I am worried about is that the result is separated by the comma. Is

ls {"$arrayOfArgs"}

equivalent to ls -l -s -a -f?
Rather than trying to solve the question you've asked, this answer offers a solution that attempts to solve the underlying issue.

For this example I've assumed that arguments a and s are booleans (switches) but argument l takes a parameter:

unset -v flagA flagS valueL

while getopts "al:s" OPT
do
    case "$OPT" in
        a) echo "Got a"; flagA=true ;;
        s) echo "Got s"; flagS=true ;;
        l) printf 'Got l with value "%s"\n' "$OPTARG"; valueL="$OPTARG" ;;
    esac
done
shift "$((OPTIND - 1))"

printf '%s\n' "flagA=${flagA-unset}, flagS=${flagS-unset}, valueL=${valueL-unset}"

if [ "$#" -gt 0 ]; then
    printf 'Other arguments:\n'
    printf ' - "%s"\n' "$@"
fi

More information in the bash man page.
passing options from an array to built in ls command in UNIX [closed]
1,657,201,961,000
If you type help set, then - among other things - a list of shell options is displayed. But these options are not the same as those displayed with shopt. And different also from those displayed with set and env. Is there a command which displays options such as errexit and braceexpand and their current values? Also, what is the connection between the different option commands? What does set display that env doesn't / what does shopt display that set doesn't / etc ? (bash 3.2.51 on Mac OS X 10.9.1)
Use set -o:

$ set -o
allexport       off
braceexpand     on
emacs           on
errexit         off
errtrace        off
functrace       off
hashall         on
histexpand      on
history         on
ignoreeof       off
interactive-comments    on
keyword         off
monitor         on
noclobber       off
noexec          off
noglob          off
nolog           off
notify          off
nounset         off
onecmd          off
physical        off
pipefail        off
posix           off
privileged      off
verbose         off
vi              off
xtrace          off

Also see Set and Shopt - Why Two?
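To test a single option programmatically rather than scanning the whole table, the special parameter $- holds the active single-letter flags; noglob, for instance, corresponds to -f. A small sketch:

```shell
set -o noglob

# $- contains "f" while noglob is active.
case $- in
  *f*) state=on ;;
  *)   state=off ;;
esac
echo "noglob is $state"

set +o noglob   # restore the default
```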
Is there a command which displays options like `errexit` and `braceexpand` other than `help set`?
1,657,201,961,000
I have SerNet Samba 4.0.9 installed on CentOS 6.4. How can I tell if it was compiled with CUPS support?
The docs, which say they're valid for Samba 3 and 4, say: "...make sure, that your smbd is compiled with CUPS support:"

# smbd -b | grep CUPS
   HAVE_CUPS_CUPS_H
   HAVE_CUPS_LANGUAGE_H
   HAVE_CUPS
   HAVE_LIBCUPS
Does SerNet compile Samba 4 with CUPS support? (How to tell in general?)
1,657,201,961,000
I would like either of these inputs to work. That is, the -n option itself is optional – I already know how to do that – but it then may have an optional parameter on top. If no parameter is given, a fallback value will be applied.

command -n 100
command -n

I can only make the former input type work or the latter, but not both.

HAS_NICE_THINGS=0
NICE_THINGS=50 # default value.

while getopts n: option; do
#while getopts n option; do # NICE_THINGS would always be that default value.
#while getopts nn: option; do # same.
    case "${option}" in
        n)
            HAS_NICE_THINGS=1
            if [[ ! -z "${OPTARG}" ]] && (( "${OPTARG}" > 0 )) && (( "${OPTARG}" <= 100 )); then
                NICE_THINGS=${OPTARG}
            fi;;
    esac
done

# error message:
# option requires an argument -- n

I'm not entirely sure yet if I would need a boolean for my script, but so far, just in case, I am logging one (HAS_NICE_THINGS). The end goal I had in mind was to set the JPG quality when eventually saving an image. Though, I can imagine this construct being useful elsewhere as well.

I'm using Ubuntu 18.04.5 and GNU bash, version 4.4.20(1)-release (x86_64-pc-linux-gnu).
Not sensibly with Bash's/POSIX getopts, but you could do it with the "enhanced" getopt (without an s) from util-linux or Busybox. (And those only; in particular, many "traditional" getopt implementations are broken wrt. whitespace also.)

The man page says of getopts:

optstring contains the option characters to be recognized; if a character is followed by a colon, the option is expected to have an argument, which should be separated from it by white space.

there's no mention of optional option-arguments.

Of course you could have another optional option to give the non-default value. E.g. let -n take no argument and just enable nice things, and let -N <arg> take the argument, enable nice things and set the value. E.g. something like this:

#!/bin/bash
HAS_NICE_THINGS=0
NICE_THINGS_VALUE=50
while getopts nN: option; do
    case "${option}" in
        n) HAS_NICE_THINGS=1
           shift;;
        N) HAS_NICE_THINGS=1
           NICE_THINGS_VALUE=$OPTARG
           shift; shift;;
    esac
done
if [ "$HAS_NICE_THINGS" = 1 ]; then
    echo "nice things enabled with value $NICE_THINGS_VALUE"
fi

would give

$ bash nice-getotps.sh -n
nice things enabled with value 50
$ bash nice-getopts.sh -N42
nice things enabled with value 42

The util-linux getopt takes optional option-arguments with the double-colon syntax. It's a bit awkward to use, and you need to mess with eval, but done correctly, it seems to work. Man page:

-o shortopts [...] Each short option character in shortopts may be followed by one colon to indicate it has a required argument, and by two colons to indicate it has an optional argument.

With a script to just print the raw values so we can check it works properly (getopt-optional.sh):

#!/bin/bash
getopt -T
if [ "$?" -ne 4 ]; then
    echo "wrong version of 'getopt' installed, exiting..." >&2
    exit 1
fi
params="$(getopt -o an:: -- "$@")"
eval set -- "$params"
while [ "$#" -gt 0 ]; do
    case "$1" in
        -n) echo "option -n with arg '$2'"
            shift 2;;
        -a) echo "option -a"
            shift;;
        --) shift
            break;;
        *)  echo "something else: '$1'"
            shift;;
    esac
done
echo "remaining arguments ($#):"
printf "<%s> " "$@"
echo

we get

$ bash getopt-optional.sh -n -a
option -n with arg ''
option -a
remaining arguments (0):
<>
$ bash getopt-optional.sh -n'blah blah' -a
 -n 'blah blah' -a --
option -n with arg 'blah blah'
option -a
remaining arguments (0):
<>

No argument to -n shows up as an empty argument. Not that you could pass an explicit empty argument anyway, since the option-argument needs to be within the same command line argument as the option itself, and -n is the same as -n"" after the quotes are removed. That makes optional option-arguments awkward to use in that you need to use -nx, as -n x would be taken as the option -n (without an opt-arg), followed by a regular non-option command line argument x. Which is unlike what would happen if -n took a mandatory option-argument.

More about getopt on this Stackoverflow answer to How do I parse command line arguments in Bash? Note that that appears limited to that particular implementation of getopt, one that happens to be common on Linux systems, but probably not on others. Other implementations of getopt might not even support whitespace in arguments (the program has to do shell quoting for them). The "enhanced" util-linux version has the -T option to test if you have that particular version installed.

There's some discussion on the limitations and caveats with getopt here: getopt, getopts or manual parsing - what to use when I want to support both short and long options?

Also, getopt is not a standard tool like getopts is.
Can you make a bash script's option arguments be optional?
1,389,384,877,000
Is there some way of saving all the terminal output to a file with a command?

I'm not talking about redirection: command > file.txt
Not the history: history > file.txt - I need the full terminal text
Not with hotkeys!

Something like terminal_text > file.txt
You can use script. It will basically save everything printed on the terminal in that script session.

From man script:

script makes a typescript of everything printed on your terminal. It is useful for students who need a hardcopy record of an interactive session as proof of an assignment, as the typescript file can be printed out later with lpr(1).

You can start a script session by just typing script in the terminal; all the subsequent commands and their outputs will all be saved in a file named typescript in the current directory. You can save the result to a different file too by just starting script like:

script output.txt

To logout of the script session (stop saving the contents), just type exit.

Here is an example:

$ script output.txt
Script started, file is output.txt
$ ls
output.txt  testfile.txt  foo.txt
$ exit
exit
Script done, file is output.txt

Now if I read the file:

$ cat output.txt
Script started on Mon 20 Apr 2015 08:00:14 AM BDT
$ ls
output.txt  testfile.txt  foo.txt
$ exit
exit

Script done on Mon 20 Apr 2015 08:00:21 AM BDT

script also has many options, e.g. running quietly -q (--quiet) without showing/saving program messages; it can also run a specific command -c (--command) rather than a session; and it has many other options. Check man script to get more ideas.
Save all the terminal output to a file
1,389,384,877,000
There are tools providing coloured output:

dwdiff -c File1 File2   # word level diff
grep --color=always     # we all know this guy
...

The question is: how to convert their colored output of arbitrary program into coloured html file? Other output formats might be suitable as well (LaTeX would be great). I think html is good starting point, as it's easy to convert it to other formats.

(For curious how to keep terminal colour codes, please follow answer: https://unix.stackexchange.com/a/10832/9689

... | unbuffer command_with_colours arg1 arg2 | ...

- tool unbuffer is part of expect)
The answer to this question is probably what you want. It links to these tools, which do the conversion you're looking for: Perl package HTML::FromANSI aha, a C-language program (github repo)
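If installing a converter is not an option, the core idea can be sketched with plain sed, translating just the red and reset escape sequences into HTML spans. Real tools such as aha handle the whole ANSI palette; the style attribute below is my own choice, not anything those tools emit.

```shell
esc=$(printf '\033')   # the literal escape character

# Produce a line with red text, then rewrite the two escape codes as HTML.
html=$(printf '%sred text%s plain\n' "${esc}[31m" "${esc}[0m" |
  sed -e "s|${esc}\[31m|<span style=\"color:red\">|g" \
      -e "s|${esc}\[0m|</span>|g")
printf '%s\n' "$html"
```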
Converting colored output into html
1,389,384,877,000
Currently I have a shell script which logs messages to a log file like this:

log_file="/some/dir/log_file.log"
echo "some text" >> $log_file
do_some_command
echo "more text" >> $log_file
do_other_command

When executing this script, there is no output to screen, and, since I'm connecting to the server via putty, I have to open another connection and do "tail -f log_file_path.log", because I can't terminate the running script and I want to see the output in real time. Obviously, what I want is that the text messages are printed on screen and into file, but I'd like to do it in one line, not two lines, one of which has no redirection to file. How to achieve this?
This works:

command | tee -a "$log_file"

tee saves input to a file (use -a to append rather than overwrite), and copies the input to standard output as well.

Because the command can detect that it's now being run in a non-interactive fashion this may change its behaviour. The most common side effect is that it disables colour output. If this happens (and you want ANSI colour coded output) you have to check the command documentation to see if it has a way to force it to revert to the interactive behaviour, such as grep --color=always.

Beware that this means the log file will also include these escape codes, and you'll need to use less --RAW-CONTROL-CHARS "$log_file" to read it without distracting escape code literals. Also beware that there is no way to make the log file contents different from what is printed to screen when running the above command, so you can't have colour coded output to screen and non-coloured output in the log file.
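If several commands should share one log, they can be grouped into a single block so tee is invoked only once. A minimal sketch, where do_some_command is a placeholder defined inline so the example runs as-is:

```shell
log_file=$(mktemp)

# Everything inside the braces goes to the screen AND appends to the log.
{
    echo "some text"
    do_some_command() { echo "command output"; }   # stand-in for a real command
    do_some_command
    echo "more text"
} | tee -a "$log_file"
```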
how to output text to both screen and file inside a shell script?
1,389,384,877,000
I have a simple bash function dividing two numbers: echo "750/12.5" | bc I'd like to take the output from bc and append /24 and pipe said result to another instance of bc. Something like: echo "750/12.5" | bc | echo $1 + "/24" | bc Where $1 is the piped result. P.S. I realize I could just do echo "750/12.5/24" | bc my question is more in regards to the appending of text to a pipe result.
In the simplest of the options, this does append to the pipe stream:

$ echo "750/12.5" | { bc; echo "/24"; }
60
/24

However that has an unexpected newline; to avoid that you need to either use tr:

$ echo "750/12.5" | { bc | tr -d '\n' ; echo "/24"; }
60/24

Or, given the fact that a command expansion removes trailing newlines:

$ printf '%s' $( echo "750/12.5" | bc ); echo "/24"
60/24

But probably, the correct way should be similar to:

$ echo "$(echo "750/12.5" | bc )/24"
60/24

Which, to be used in bc, could be written as this:

$ bc <<<"$(bc <<<"750/12.5")/24"
2

Which, to get a reasonable floating number precision should be something like:

$ bc <<<"scale=10;$(bc <<<"scale=5;750/12.5")/24"
2.5000000000

Note the need of two scale, as there are two instances of bc. Of course, one instance of bc needs only one scale:

$ bc <<<"scale=5;750/12.5/24"

In fact, what you should be thinking about is in terms of an string:

$ a=$(echo "750/12.5")      # capture first string.
$ echo "$a/24" | bc         # extend the string
2

The comment about scale from above is still valid here.
Append to a pipe and pass on?
1,389,384,877,000
Mail logs are incredibly difficult to read. How could I output a blank line between each line printed on the command line? For example, say I'm grep-ing the log. That way, multiple wrapped lines aren't being confused.
sed G

From the sed manual:

g G    Copy/append hold space to pattern space.

G is not often used, but is nice for this purpose. sed maintains two buffer spaces: the “pattern space” and the “hold space”. The lines processed by sed usually flow through the pattern space as various commands operate on its contents (s///, p, etc.); the hold space starts out empty and is only used by some commands.

The G command appends a newline and the contents of the hold space to the pattern space. The above sed program never puts anything in the hold space, so G effectively appends just a newline to every line that is processed.
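For example, double-spacing some sample lines (fed in with printf here instead of a real mail log):

```shell
# Each input line comes out followed by a blank line.
printf 'line one\nline two\nline three\n' | sed G
```

For a log, the same filter goes at the end of the pipeline: grep pattern mail.log | sed G.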
How do I add newlines between lines printed on the command line?
1,389,384,877,000
I have a command which outputs lots of data (say, strace with lots of syscalls, running for a few minutes). Is there any option (e.g. command wrapper or something similar) that would allow me to pause the output of the command (just the output on the screen, I don't mind the command running in the background), then unpause it after I take a look on its output?
You have three options:

press controlS to stop output, controlQ to resume (this is called XON/XOFF)
redirect your output to a pager such as less, e.g., strace date | less
redirect your output to a file, e.g., strace -o foo date, and browse it later.
Pausing terminal output
1,389,384,877,000
I often use find or locate to find out about paths.

(~) locate foobar.mmpz
/home/progo/lmms/projects/foobar.mmpz

The next step is often to open or otherwise manipulate the files. In a happy case like above, I can do this:

(~) ls `!!`
ls `locate foobar.mmpz`
/home/progo/lmms/projects/foobar.mmpz

But nobody's too happy when there are many lines of output, some of which may not be paths or something else of that kind. Besides, rerunning potentially wasteful commands is not that elegant either.

Would there be a way to hook up zsh to store the stdout into an array for later manipulation? After all, it's the shell's job to redirect the streams to the user. I'm thinking it could store the first N and last N lines in a variable for immediate later use, like $? and others.

Ok so this is pretty cool: https://unix.stackexchange.com/a/59704/5674. I'm now asking about the zsh know-how (and porting the code to zsh) to rig this kind of capture after each run line.
There is no feature to capture the output from the screen on most terminal emulators. I seem to recall the author of xterm (the “reference” terminal emulator) stating that it would be difficult to implement. Even if that was possible, the shell would have to keep track of where the last prompt had been. So you won't escape having to run the command again, unless you use a terminal-specific, manual mechanism such as copy-pasting with the mouse in xterm or with the keyboard in Screen.

It would be highly impractical for the shell to automatically capture the output of commands, because it cannot distinguish between commands that have complex terminal and user interactions from commands that simply output printable characters.

You can rerun the command and capture its output. There are various ways to do each. To rerun the command, you can use:

- !! history substitution (most convenient to type);
- fc -e -, which can be used in a function.

To capture the output, you can use command substitution, or a function like the following:

K () { lines=("${(f@)$(cat)}") }

!! |K

This sets the lines array to the output of the command that's piped into it.
Can zsh access the stdout of last run program?
1,389,384,877,000
I have a bash script that calls various commands and prints some output (both from the called commands themselves, such as git pull, and informative messages generated by the script itself such as Operation took XX minutes. I'd like to capture the whole output to a file from the script itself: basically I'm trying to avoid the need to call ./myscript.sh | tee file.txt for non-relevant-here reasons. Basically I'd like to do something like this: startCapture git pull echo "Text" other-command endCapture I also require the output to be printed on my shell while the script is running. The final goal is to: run ./myscript.sh without additional shell constructs see the output on the terminal as I do now obtain a file on disk with the whole output Is this even possible?
A method I found to capture all output from any session is to start a new bash session and tee to a log file. It's really useful for tracking more than just a script.

bash | tee ~/bash.log # this will save standard output until the bash session is ended

bash 2>&1 | tee ~/bash.log # this will save all output including errors until the bash session is ended

Or you can just tee the script itself:

./myscript.sh | tee ./myscript.log # this will log only the output of the script.
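If the requirement is that the redirection be set up from inside the script, so callers can just run ./myscript.sh with no pipeline at all, one common sketch uses exec with process substitution (bash-specific; the log file name here is a placeholder):

```shell
#!/usr/bin/env bash
# From here on, the script's own stdout and stderr go both to the
# terminal and to myscript.log, with no pipeline needed by the caller.
exec > >(tee -a myscript.log) 2>&1

echo "Text"
echo "an error message" >&2
```

One caveat: the background tee can flush slightly after the script exits, so its last lines may appear interleaved with the next shell prompt.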
Capture all the output of a script to a file (from the script itself) [duplicate]
1,389,384,877,000
INPUT_FILE=`ls -rt $MY_DIR/FILE.*.xml | head -1 | xargs basename` I wanted to execute the second command (head -1) only if the first command is successful. How do I improve this command?
Try this:

INPUT_FILE=`ls -rt "$MY_DIR"/FILE.*.xml | head -1 | xargs -r basename`

Passing xargs the -r flag will cause it to only run basename if it reads at least one item from standard input. If ls fails, head -1 will still run, but you won't see or capture any output from it, so xargs -r simply runs nothing. Also, if you don't want the user to see any error output from ls, you can redirect ls's stderr stream to /dev/null.

INPUT_FILE=`ls -rt "$MY_DIR"/FILE.*.xml 2> /dev/null | head -1 | xargs -r basename`

Also note that I added quotation marks around $MY_DIR. That way, the command will not fail if $MY_DIR contains spaces. If you're using a modern shell such as bash, you should use $( ) command substitution instead of backticks. You should also consider changing the style of your variables. You should generally avoid using all-uppercase variable names in scripts; that style is generally reserved for shell and environment variables.

input_file=$(ls -rt "$my_dir"/FILE.*.xml 2> /dev/null | head -1 | xargs -r basename)
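To see exactly what -r changes (it's a GNU findutils extension, not POSIX), compare the two on deliberately empty input:

```shell
# Without -r, GNU xargs still runs the command once even with no input:
printf '' | xargs echo marker      # prints: marker

# With -r, the command is skipped entirely when there is no input:
printf '' | xargs -r echo marker   # prints nothing
```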
Pipe the output of a command if it is successful
1,389,384,877,000
There are many questions on SE that show how to recover from terminal broken by cat /dev/urandom. For those that are unfamiliar with this issue - here what it is about: You execute cat /dev/urandom or equivalent (for example, cat binary_file.dat). Garbage is printed. That would be okay... except your terminal continues to print garbage even after the command has finished! Here's a screenshot of a misrendered text that is in fact g++ output: I guess people were right about C++ errors sometimes being too cryptic! The usual solution is to run stty sane && reset, although it's kind of annoying to run it every time this happens. Because of that, what I want to focus on in this question is the original reason why this happens, and how to prevent the terminal from breaking after such command is issued. I'm not looking for solutions such as piping the offending commands to tr or xxd, because this requires you to know that the program/file outputs binary before you actually run/print it, and needs to be remembered each time you happen to output such data. I noticed the same behavior in URxvt, PuTTY and Linux frame buffer so I don't think this is terminal-specific problem. My primary suspect is that the random output contains some ANSI escape code that flips the character encoding (in fact, if you run cat /dev/urandom again, chances are it will unbreak the terminal, which seems to confirm this theory). If this is right, what is this escape code? Are there any standard ways to disable it?
No: there is no standard way to "disable it", and the details of breakage are actually terminal-specific, but there are some commonly-implemented features for which you can get misbehavior. For commonly-implemented features, look to the VT100-style alternate character set, which is activated by ^N and ^O (enable/disable). That may be suppressed in some terminals when using UTF-8 mode, but the same terminals have ample opportunity for trashing your screen (talking about GNU screen, Linux console, PuTTY here) with the escape sequences they do recognize. Some of the other escape sequences for instance rely upon responses from the terminal to a query (escape sequence) by the host. If the host does not expect it, the result is trash on the screen. In other cases (seen for instance in network devices with hardcoded escape sequences for the Linux console), other terminals will see that as miscoded, and seem to freeze. So... you could focus on just one terminal, prune out whatever looks like a nuisance (as for instance, some suggest removing the ability to use the mouse for positioning in editors), and you might get something which has no apparent holes. But that's only one terminal.
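To make the ^N/^O mechanism concrete: ^N is byte 0x0E (shift out, switch to the alternate character set) and ^O is byte 0x0F (shift in, switch back). Printing a binary file that happens to contain a 0x0E byte is enough to turn subsequent text into line-drawing glyphs on VT100-style terminals, and emitting 0x0F (or running reset) undoes it. A sketch, best tried on a throwaway terminal:

```shell
# 0x0E (Ctrl-N): select the alternate character set -- following text may
# render as line-drawing garbage on VT100-style terminals.
printf '\016'

# 0x0F (Ctrl-O): back to the normal character set.
printf '\017'
```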
How to prevent random console output from breaking the terminal?
1,389,384,877,000
Say I run some processes: #!/usr/bin/env bash foo & bar & baz & wait; I run the above script like so: foobarbaz | cat as far as I can tell, when any of the processes write to stdout/stderr, their output never interleaves - each line of stdio seems to be atomic. How does that work? What utility controls how each line is atomic?
They do interleave! You only tried short output bursts, which remain unsplit, but in practice it's hard to guarantee that any particular output remains unsplit.

Output buffering

It depends how the programs buffer their output. The stdio library that most programs use when they're writing uses buffers to make output more efficient. Instead of outputting data as soon as the program calls a library function to write to a file, the function stores this data in a buffer, and only actually outputs the data once the buffer has filled up. This means that output is done in batches. More precisely, there are three output modes:

- Unbuffered: the data is written immediately, without using a buffer. This can be slow if the program writes its output in small pieces, e.g. character by character. This is the default mode for standard error.
- Fully buffered: the data is only written when the buffer is full. This is the default mode when writing to a pipe or to a regular file, except with stderr.
- Line-buffered: the data is written after each newline, or when the buffer is full. This is the default mode when writing to a terminal, except with stderr.

Programs can reprogram each file to behave differently, and can explicitly flush the buffer. The buffer is flushed automatically when a program closes the file or exits normally.

If all the programs that are writing to the same pipe either use line-buffered mode, or use unbuffered mode and write each line with a single call to an output function, and if the lines are short enough to write in a single chunk, then the output will be an interleaving of whole lines. But if one of the programs uses fully-buffered mode, or if the lines are too long, then you will see mixed lines.

Here is an example where I interleave the output from two programs. I used GNU coreutils on Linux; different versions of these utilities may behave differently.

- yes aaaa writes aaaa forever in what is essentially equivalent to line-buffered mode.
The yes utility actually writes multiple lines at a time, but each time it emits output, the output is a whole number of lines.

- while true; do echo bbbb; done | grep b writes bbbb forever in fully-buffered mode. It uses a buffer size of 8192, and each line is 5 bytes long. Since 5 does not divide 8192, the boundaries between writes are not at a line boundary in general.

Let's pitch them together.

$ { yes aaaa & while true; do echo bbbb; done | grep b & } | head -n 999999 | grep -e ab -e ba
bbaaaa
bbbbaaaa
baaaa
bbbaaaa
bbaaaa
bbbaaaa
ab
bbbbaaa

As you can see, yes sometimes interrupted grep and vice versa. Only about 0.001% of the lines got interrupted, but it happened. The output is randomized so the number of interruptions will vary, but I saw at least a few interruptions every time. There would be a higher fraction of interrupted lines if the lines were longer, since the likelihood of an interruption increases as the number of lines per buffer decreases.

There are several ways to adjust output buffering. The main ones are:

- Turn off buffering in programs that use the stdio library without changing its default settings with the program stdbuf -o0 found in GNU coreutils and some other systems such as FreeBSD. You can alternatively switch to line buffering with stdbuf -oL.
- Switch to line buffering by directing the program's output through a terminal created just for this purpose with unbuffer. Some programs may behave differently in other ways, for example grep uses colors by default if its output is a terminal.
- Configure the program, for example by passing --line-buffered to GNU grep.

Let's see the snippet above again, this time with line buffering on both sides.

{ stdbuf -oL yes aaaa & while true; do echo bbbb; done | grep --line-buffered b & } | head -n 999999 | grep -e ab -e ba
abbbb
abbbb
abbbb
abbbb
abbbb
abbbb
abbbb
abbbb
abbbb
abbbb
abbbb
abbbb
abbbb

So this time yes never interrupted grep, but grep sometimes interrupted yes. I'll come to why later.
Pipe interleaving

As long as each program outputs one line at a time, and the lines are short enough, the output lines will be neatly separated. But there's a limit to how long the lines can be for this to work. The pipe itself has a transfer buffer. When a program outputs to a pipe, the data is copied from the writer program to the pipe's transfer buffer, and then later from the pipe's transfer buffer to the reader program. (At least conceptually — the kernel may sometimes optimize this to a single copy.) If there's more data to copy than fits in the pipe's transfer buffer, then the kernel copies one bufferful at a time. If multiple programs are writing to the same pipe, and the first program that the kernel picks wants to write more than one bufferful, then there's no guarantee that the kernel will pick the same program again the second time. For example, if P is the buffer size, foo wants to write 2*P bytes and bar wants to write 3 bytes, then one possible interleaving is P bytes from foo, then 3 bytes from bar, and P bytes from foo.

Coming back to the yes+grep example above, on my system, yes aaaa happens to write as many lines as can fit in a 8192-byte buffer in one go. Since there are 5 bytes to write (4 printable characters and the newline), that means it writes 8190 bytes every time. The pipe buffer size is 4096 bytes. It is therefore possible to get 4096 bytes from yes, then some output from grep, and then the rest of the write from yes (8190 - 4096 = 4094 bytes). 4096 bytes leaves room for 819 lines with aaaa and a lone a. Hence a line with this lone a followed by one write from grep, giving a line with abbbb.

If you want to see the details of what's going on, then getconf PIPE_BUF
will tell you the pipe buffer size on your system, and you can see a complete list of system calls made by each program with

strace -s9999 -f -o line_buffered.strace sh -c '{ stdbuf -oL yes aaaa & while true; do echo bbbb; done | grep --line-buffered b & }' | head -n 999999 | grep -e ab -e ba

How to guarantee clean line interleaving

If the line lengths are smaller than the pipe buffer size, then line buffering guarantees that there won't be any mixed line in the output. If the line lengths can be larger, there's no way to avoid arbitrary mixing when multiple programs are writing to the same pipe. To ensure separation, you need to make each program write to a different pipe, and use a program to combine the lines. For example GNU Parallel does this by default.
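The short-write guarantee can also be checked empirically: POSIX requires writes of at most PIPE_BUF bytes (at least 512) to be atomic, so two writers emitting short whole lines into the same pipe should never produce a mixed line. A sketch, where each printf issues one small write:

```shell
# 400 interleaved 5-byte writes from two background loops sharing one pipe.
# Every output line should be exactly aaaa or bbbb, never a mix.
{
  for i in $(seq 200); do printf 'aaaa\n'; done &
  for i in $(seq 200); do printf 'bbbb\n'; done &
  wait
} | grep -c -v -e '^aaaa$' -e '^bbbb$'    # prints 0: no mangled lines
```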
What prevents stdout/stderr from interleaving?
1,389,384,877,000
I have 2 exactly same formatted, same size and same brand SD-cards. I would like to dd image to /dev/disk2 and to /dev/disk3 at the same time. Pseudocode sudo dd bs=1m if=/Users/masi/2016-05-10-raspbian-jessie.img of={/dev/disk2,/dev/disk3} How can you dd from one input to many output SDs?
Borrowing from don_crissti's answer using tee, but without dd or bashisms: sudo tee /dev/disk2 /dev/disk3 > /dev/disk4 < masi.img Using pee from Debian's moreutils package: sudo dd if=masi.img | \ pee "dd of=/dev/disk2" "dd of=/dev/disk3" "dd of=/dev/disk4" With bash, ksh, or zsh, that can be abbreviated to: sudo dd if=masi.img | pee "dd of=/dev/disk"{2..4} Or even, (if there's no need for dd's useful functions): sudo pee "dd of=/dev/disk"{2..4} < masi.img pee is useful; if required one may include, (within each quoted argument), additional distinct dd options, and even other pipes and filters, individually tailored to each output device. With either method the number of output disks can be extended indefinitely.
dd: write to multiple disks?
1,389,384,877,000
I am a relative Linux novice. I am trying to learn how to use at so that I can schedule tasks to begin at a later time, without using sleep. I have been looking at this previous question for help. My question is, in the following sample bash script that I have created, why is "Running" never -- as far as I can tell -- printed to the standard output (i.e., my bash console)? #!/bin/bash echo "Started" at now + 1 minutes <<EOF echo "Running" EOF echo "Finished" The only output I see is, for example: Started warning: commands will be executed using /bin/sh job 3 at Fri Jul 12 17:31:00 2013 Finished Is the answer to my question found in the warning? If so, how does /bin/sh differ from the standard output?
Because at does not execute commands in the context of your logged in user session. The idea is that you can schedule a command to run at an arbitrary time, then log out and the system will take care of running the command at the specified time.

Note that the manual page for at(1) specifically says (my emphasis):

The user will be mailed standard error and standard output from his commands, if any. Mail will be sent using the command /usr/sbin/sendmail.

So you should be checking your local mail spool or, failing that, the local system mail logs. /var/spool/mail/$USER is probably a good place to start.

Also note that the "Started" and "Finished" lines originate from the outer script and in and of themselves have nothing to do with at at all. You could take them out, or take out the at invocation, and you'll get essentially the same result.
Why does this 'at' command not print to the standard output?
1,389,384,877,000
Say I have a Zsh script and that I would like to let it print output to STDOUT, but also copy (dump) its output to a file in disk. Moreover, the script starts with the following option set -o xtrace which forces it to be verbose and print what commands it runs. I would like to capture this output as well in a file in disk. My understanding is that if I do ./my_script.sh > log.txt it will just send STDOUT to log.txt, but what if I want to also be able to see the output in the terminal? I have read about tee and the MULTIOS option in Zsh, but am not sure how to use them. When I do: ./my_script | tee log.txt I can see the output on the terminal, but the file log.txt doesn'tseem to be capturing everything (in fact it captures barely anything).
It could be that your script is producing output to stdout and stderr, and you are only getting one of those streams output to your log file. ./my_script.sh | tee log.txt will indeed output everything to the terminal, but will only dump stdout to the logfile. ./my_script.sh > log.txt 2>&1 will do the opposite, dumping everything to the log file, but displaying nothing on screen. The trick is to combine the two with tee: ./myscript.sh 2>&1 | tee log.txt This redirects stderr (2) into stdout (1), then pipes stdout into tee, which copies it to the terminal and to the log file. The zsh multios equivalent would be: ./myscript.sh >&1 > log.txt 2>&1 That is, redirect stdout both to the original stdout and log.txt (internally via a pipe to something that works like tee), and then redirect stderr to that as well (to the pipe to the internal tee-like process).
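A quick way to convince yourself of the difference, using a stand-in command that writes one line to each stream (mktemp just supplies a scratch log file):

```shell
log=$(mktemp)

# "out" goes to stdout, "err" to stderr; 2>&1 merges them before the pipe,
# so tee sees and records both lines.
{ echo out; echo err >&2; } 2>&1 | tee "$log"

grep -c . "$log"    # prints 2: both lines reached the log
```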
Send copy of a script's output to a file
1,389,384,877,000
Where does standard output from at and cron tasks go, given there is no screen to display to? It's not appearing in the directory the jobs were started from, nor in my home directory. How could I actually figure this out given that I don't know how to debug or trace a background job?
From the cron man page: When executing commands, any output is mailed to the owner of the crontab (or to the user named in the MAILTO environment variable in the crontab, if such exists). The children copies of cron running these processes have their name coerced to uppercase, as will be seen in the syslog and ps output. So you should check your/root's mail, or the syslog (eg. /var/log/syslog).
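If you'd rather collect the output in a file than in mail, the usual approach is to add the redirection to the crontab entry itself; a sketch (all paths are placeholders):

```
# m h dom mon dow  command
*/5 * * * * /home/user/backup.sh >> /home/user/cron-backup.log 2>&1
```

2>&1 folds stderr into the same log, and with nothing left on either stream, cron has nothing to mail.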
Where does the output of `at` and `cron` jobs go?
1,389,384,877,000
I'm tailing a log file using tail -f messages.log and this is part of the output: Lorem ipsum dolor sit amet, consectetur adipiscing elit. Fusce eget tellus sit amet odio porttitor rhoncus. Donec consequat diam sit amet tellus viverra pellentesque. tail: messages.log: file truncated Suspendisse at risus id neque pharetra finibus in facilisis ipsum. It shows tail: messages.log: file truncated when the file gets truncated automatically and that's supposed to happen, but I just want tail to show me the output without this truncate message. I've tried using tail -f messages.log | grep -v truncated but it shows me the message anyway. Is there any method to suppress this message?
That message is output on stderr like all warning and error messages. You can either drop all the error output: tail -f file 2> /dev/null Or to filter out only the error messages that contain truncate: { tail -f file 2>&1 >&3 3>&- | grep -v truncated >&2 3>&-;} 3>&1 That means however that you lose the exit status of tail. A few shells have a pipefail option (enabled with set -o pipefail) for that pipeline to report the exit status of tail if it fails. zsh and bash can also report the status of individual components of the pipeline in their $pipestatus/$PIPESTATUS array. With zsh or bash, you can use: tail -f file 2> >(grep -v truncated >&2) But beware that the grep command is not waited for, so the error messages if any may end up being displayed after tail exits and the shell has already started running the next command in the script. In zsh, you can address that by writing it: { tail -f file; } 2> >(grep -v truncated >&2) That is discussed in the zsh documentation at info zsh 'Process Substitution': There is an additional problem with >(PROCESS); when this is attached to an external command, the parent shell does not wait for PROCESS to finish and hence an immediately following command cannot rely on the results being complete. The problem and solution are the same as described in the section MULTIOS in note Redirection::. Hence in a simplified version of the example above: paste <(cut -f1 FILE1) <(cut -f3 FILE2) > >(PROCESS) (note that no MULTIOS are involved), PROCESS will be run asynchronously as far as the parent shell is concerned. The workaround is: { paste <(cut -f1 FILE1) <(cut -f3 FILE2) } > >(PROCESS) The extra processes here are spawned from the parent shell which will wait for their completion.
Suppress 'file truncated' messages when using tail
1,389,384,877,000
Suppose this situation wget http://file wget starts to download file. I put it in the background. ^Z bg The command goes into the background. But its output is still on the console also -- if the console is still open. Is it possible to stop the command's output? Wget is only an example; think about a command which writes a lot of output. At the console, I know it is possible to do bg and then close terminal and open another, but what if I have only one terminal avaliable and no pseudo-terminals?
Here's a solution that actually redirects the output of a command while it is running: https://superuser.com/questions/732503/redirect-stdout-stderr-of-a-background-job-from-console-to-a-log-file For a solution that is more usable in an every-day scenario of using a terminal, you could do wget -o log http://file & to run wget in the background and write its output to log instead of your terminal. Of course, you won't see any output at all in this case (even if you ran wget in foreground), but you could do tail -f log to look at the output as it grows.
Is it possible to stop output from a command after bg?
1,389,384,877,000
I'm trying to write a simple script to monitor my network status, without all of ping's output: ping -q -c 1 google.com > /dev/null && echo online || echo offline The problem is that when I'm not connected, I'm still getting an error message in my output: ping: unknown host google.com offline How can I keep this error message out of my output?
When you run:

ping -q -c 1 google.com > /dev/null && echo online || echo offline

you are essentially only redirecting the output of stream 1 (i.e. stdout) to /dev/null. This is fine when you want to redirect the output that is produced by the normal execution of a program. However, in case you also wish to redirect the output caused by all the errors, warnings or failures, you should also redirect the stderr or Standard Error stream to /dev/null. One way of doing this is prepending the number of the stream you wish to redirect to the redirection operator >, like this:

Command 2> /dev/null

Hence, your command would look like:

ping -q -c 1 google.com > /dev/null 2> /dev/null && echo online || echo offline

But notice that we have already redirected one stream to /dev/null. Why not simply piggyback on the same redirection? Bash allows us to do this by specifying the stream to redirect to: 2>&1. Notice the & character after the redirection operator. This tells the shell that what appears next is not a filename, but an identifier for the output stream.

ping -q -c 1 google.com > /dev/null 2>&1 && echo online || echo offline

Be careful with the redirection operators: their order matters a lot. If you were to redirect in the wrong order, you'd end up with unexpected results.

Another way in which you can attain complete silence is by redirecting all output streams to /dev/null using this shortcut: &>/dev/null (or redirect to a log file with &>/path/to/file.log). Hence, write your command as:

ping -q -c 1 google.com &> /dev/null && echo online || echo offline
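You can verify that both streams are silenced with a toy command that writes to each (&> is bash syntax):

```shell
# Nothing appears: stdout and stderr both go to /dev/null.
{ echo "to stdout"; echo "to stderr" >&2; } &> /dev/null

# Compare: only stdout is discarded here, so "to stderr" still shows.
{ echo "to stdout"; echo "to stderr" >&2; } > /dev/null
```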
How to redirect the output of any command?
1,389,384,877,000
I have a script like this one at my .bashrc file at the mysuer home: eval `ssh-agent` ssh-add /path/to/my/key The problem is I have this output when I log with the user mysuer (su - myuser): Agent pid 1234 Identity added: /path/to/my/key (/path/to/my/key) I would like avoid this, silence this output, but load the ssh-agent and ssh-add. How can I perform this?
As usual? { eval `ssh-agent`; ssh-add /path/to/my/key; } &>/dev/null
How can I silence ssh-agent?
1,389,384,877,000
When a process breaks, as I know no output will be return anymore. But always after breaking ping command we have the statistics of the execution, and as I know it's part of the output. amirreza@time:~$ ping 4.2.2.4 PING 4.2.2.4 (4.2.2.4) 56(84) bytes of data. 64 bytes from 4.2.2.4: icmp_seq=1 ttl=51 time=95.8 ms 64 bytes from 4.2.2.4: icmp_seq=2 ttl=51 time=92.3 ms ^C --- 4.2.2.4 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1002ms rtt min/avg/max/mdev = 92.321/94.052/95.783/1.731 ms amirreza@time:~$ How does it work?
Ctrl+C makes the terminal send SIGINT to the foreground process group. A process that receives SIGINT can do anything, it can even ignore the signal. A common reaction to SIGINT is to exit gracefully, i.e. after cleaning up etc. Your ping is simply designed to print statistics upon SIGINT and then to exit. Other tools may or may not exit upon SIGINT at all. E.g. a usual behavior of an interactive shell (while not running a command) is to clear its command line and redraw the prompt. SIGINT is not the only signal designed to terminate commands. See the manual (man 7 signal), there are many signals whose default action is to terminate the process. kill sends SIGTERM by default. SIGTERM is not SIGINT. Both can be ignored. SIGKILL cannot be caught, blocked, or ignored, but it should be your last choice.
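You can reproduce ping's behavior in a few lines of shell: trap the signal, print a summary, then exit (the counter logic is just for illustration):

```shell
#!/bin/sh
# Print a summary when interrupted, like ping's statistics block.
count=0
trap 'echo "--- summary ---"; echo "$count iterations completed"; exit 0' INT
while :; do
  count=$((count + 1))
  sleep 1
done
```

Run it, press Ctrl+C, and the summary prints before the script exits.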
Why is there output of ping after it has been terminated?
1,389,384,877,000
My file consists of the the following; roughly: username:username:username:username:username The above line continues to about 600 characters. I use the awk command in order to use it as an argument in a API/HTTP request sent from the command line. I'm using my script to get a list of user accounts 'following' me, and every 24 hours or so, comparing the original list on my hard disk to the newly outputted username list (and echo'ing out who is no longer following me. I will have to encapsulate my logic into a loop using bash.. testing each username. My current script: user=$(awk -F: '{ print $1 }' FILE) # Grab $User to use as an argument. following=$(exec CURRENT_FOLLOWERS) # Outputs the new file echo "X amount of users are following you on 78B066B87AF16A412556458AC85EFEF66155" SAVE CURRENT FOLLOWERS TO NEW A FILE. if [[ DIFFERENCE IS DETECTED ]] ; then echo -ne "$User NO LONGER FOLLOWING YOU\r" else echo -ne "This user is following you still.\r" fi My question is; How can I output the difference between 2 files?
The utility you're looking for is diff. Take a peek at the manual for details.
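diff shows every change, but for sorted one-username-per-line lists (like those produced by the awk step in the question) comm may be an even closer fit: comm -23 prints only the lines unique to the first file, i.e. the users who unfollowed. A sketch with made-up names:

```shell
# Yesterday's and today's follower lists, one name per line, sorted.
printf 'alice\nbob\ncarol\n' > old_followers.txt
printf 'alice\ncarol\ndave\n' > new_followers.txt

# Lines only in the old list = users no longer following you.
comm -23 old_followers.txt new_followers.txt    # prints: bob
```

Note that comm requires both inputs to be sorted; pipe through sort first if they aren't.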
How can I output the difference between 2 files?
1,389,384,877,000
I figure curl would do the job. I wrote in a script: #!/bin/sh function test { res=`curl -I $1 | grep HTTP/1.1 | awk {'print $2'}` if [ $res -ne 200 ] then echo "Error $res on $1" fi } test mysite.com test google.com The problem here is no matter what I do I can't get it to stop printing the below to stdout: % Total % Received % Xferd Average Speed Time Time Time Current I want a cronjob to run this script and if it writes such a message then every time I run it I'll get an email because something has been printed to stdout in cron, even though the site may be fine. How do I get the status code without getting junk into stdout? This code works except the bonus junk to the stdout preventing me from using it.
-s/--silent Silent or quiet mode. Don't show progress meter or error messages. Makes Curl mute. So your res should look like res=`curl -s -I $1 | grep HTTP/1.1 | awk {'print $2'}` Result is Error 301 on google.com, for example.
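Two asides, offered as alternatives rather than required changes: a reasonably recent curl can print the status code directly with -w '%{http_code}', avoiding the header parsing entirely, and the grep/awk step itself can be sanity-checked offline against a canned status line:

```shell
# Alternative (no parsing needed): curl writes just the numeric status code.
#   res=$(curl -s -o /dev/null -w '%{http_code}' "$1")

# The grep/awk extraction, exercised on a canned header line:
printf 'HTTP/1.1 301 Moved Permanently\n' | grep 'HTTP/1.1' | awk '{print $2}'
# prints: 301
```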
How do I get (only) the http status of a site in a shell script?
1,389,384,877,000
How can we concatenate results from stdout (or stderr) and a file into a final file. For example ls -a | grep text1 concatenate with file2.txt into a final result (not file2.txt), without storing grep text1 to something intermediate such as grep text1 > file1.txt
ls -a | grep text1 | cat file2.txt - The - stands for standard input. Alternatively you may write ls -a | grep text1 | cat - file2.txt to have the output in different order. Yet another possibility using process substitution: cat <(ls -a | grep text1) file2.txt or in different order: cat file2.txt <(ls -a | grep text1)
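A minimal check of the ordering, with throwaway contents:

```shell
printf 'line-from-file\n' > file2.txt

# "-" is read in argument position: stdin first, then the file.
printf 'line-from-stdin\n' | cat - file2.txt
# line-from-stdin
# line-from-file
```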
Concatenate contents from stdout and from file
1,389,384,877,000
I know that GNU Parallel buffers std/stderr because it doesn't want jobs output to be mangled, but if I run my jobs with parallel do_something ::: task_1 task_2 task_3, is there anyway for task_1's output to be displayed immediately, then after task_1 finishes, task_2's up to its current output, etc. If Parallel cannot solve this problem, is there any other similar program that could?
From version 20160422 you can do: parallel -k --lb do_something ::: task_1 task_2 task_3
GNU Parallel: immediately display job stderr/stdout one-at-a-time by jobs order
1,389,384,877,000
I want to capture to a file the output of the ls command ls >> lsOutput.log This one works if executed in the command line. But when put inside a shell script (lsOutput.sh), returns ./lsOutput.sh: 3: ./lsOutput.sh: total: not found lsOutput.sh code #!/bin/sh `ls -lrt` >> lsOutput.log
Just remove the backticks from your script:

#!/bin/sh
ls -lrt >> lsOutput.log

Otherwise, the command inside the backticks is executed first, and then its output is substituted back onto the command line and executed in turn. For example, echo date will output date, while `echo date` will output the current date: it first evaluates to date, which is then executed, calling the program date itself.
LS Command output to file [closed]
1,389,384,877,000
I like to use set -x in scripts to show what's going on, especially if the script is going to run in a CI/CD pipeline and I might need to debug some failure post-hoc. One annoyance with doing this is that if I want to echo some text to the user (e.g., a status message or "I'm starting to do $X") then that message gets output twice: once for the echo command itself being echoed, and then once as the output of that echo command. What's a good way to make this nicer? One solution is this:

set -x
... bunch of normal commands that get echoed
(
  # Temporarily don't echo, so we don't double-echo
  set +x
  echo "Here is my status message"
)
... rest of commands get echoed again

But the two problems with that are:

1. That's a lot of machinery to write every time I want to tell the user something, and it's "non-obvious" enough that it probably requires the comment every time
2. It echoes the set +x too, which is undesirable.

Is there another option that works well? Something like Make's feature of prepending an @ to suppress echoing would be great, but I've not been able to find such a feature in Bash.
This is a horrible kluge, and I feel dirty for suggesting it, but... you could do this with a magic alias.

The key to this trick is that aliases are expanded as part of the parsing phase of command execution, so set -x won't make anything print as they expand (unlike a function). So you can make an alias that prepends the "turn off -x" boilerplate before the echo command, and then it turns out you also need a function to run the "turn -x back on" boilerplate at the end.

You also need to turn on alias expansion in your script. It's normally disabled, so that e.g. if you have something like grep aliased to grep --color, that won't make color codes get randomly injected whenever the script uses grep. So it's safest to run unalias -a first, to remove any potentially troublesome aliases.

Anyway, here's the code:

unalias -a
shopt -s expand_aliases
alias cleanecho='{ set +x; } 2>/dev/null; resetx_after echo'
resetx_after() { "$@"; set -x; }

set -x
cleanecho "Ha, ha, you can't see the command that printed this!"

How it works: a command like cleanecho "something" expands to:

{ set +x; } 2>/dev/null; resetx_after echo "something"

{ set +x; } 2>/dev/null turns off -x mode (with its own trace redirected to /dev/null). Then resetx_after echo "something" runs, executing:

{ echo "something"; set -x; }

...which prints the string and then turns -x tracing back on.

BTW, if you want to be able to use other commands like printf similarly, you could add similar aliases for them:

alias cleanprintf='{ set +x; } 2>/dev/null; resetx_after printf'

...or just make a generic don't-trace-this alias to use as a prefix:

alias notrace='{ set +x; } 2>/dev/null; resetx_after'

notrace printf 'set -x disabled for this command\n'
Temporarily unset bash option -x
1,389,384,877,000
I know we can use the format below to redirect screen output to a file: $ your_program > /tmp/output.txt However, when I used the following command, it said "-bash: /home/user/errors.txt: Permission denied" sudo tail /var/log/apache2/error.log > ~/errors.txt May I know how to make this output redirection work? The ~/errors.txt doesn't exist. Do I need to create this txt file first before I use the redirect command?
The redirection is performed by your own shell before sudo even runs, so it happens with your privileges, not root's; sudo has no effect on it. That said, writing to your own home directory should normally work, so the likely cause here is that the file already exists and belongs to root (perhaps created earlier from a root shell). Either way, the fix is to let a privileged process perform the write, using tee: sudo tail /var/log/apache2/error.log | sudo tee ~/errors.txt The second sudo is only needed if the target file itself requires root to write. For sure, you don't need a preexisting file; the redirection (or tee) creates it.
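The same division of labour can be seen without sudo at all: the process on the right of the pipe, not the shell that set up the pipeline, is what opens the output file (the paths here are hypothetical stand-ins):

```shell
# tee opens the output file itself; when tee is run under sudo, that
# open() happens with root privileges, unlike a plain shell redirection,
# which is performed by your unprivileged shell before sudo ever runs.
printf 'error one\nerror two\n' > /tmp/fake_error.log
tail -n 1 /tmp/fake_error.log | tee /tmp/errors_copy.txt >/dev/null
```

For a root-owned destination you would replace tee with sudo tee (and add -a to append instead of overwrite).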
tail program output to file in Linux
1,389,384,877,000
Sometimes it is really handy to just add set -x to the top of your script to show all commands before they are executed. There is one drawback when creating scripts with decent output this way: I don't know how to add plain text output to the script. If I use echo 'some comment' it gets printed twice: + echo 'some comment' some comment And if I use a # comment, it isn't shown at all. How can I add comments that are printed out like with echo while set -x is in effect?
One hacky way is just to write your comments as arguments to a no-op command. Particularly useful might be the : null utility: set -x : Some interesting notes on the following are ... results in: + : Some interesting notes on the following are... The colon command does nothing, accepts whatever arguments you give it, and always succeeds. You get an extra : at the start of your trace output, but that probably isn't a huge problem for your purpose. If you don't like the : an even nastier trick is to use a fake command: set -x seq 1 1 Some comment &>/dev/null true will output: + seq 1 1 1 + Some comment + true That is, the Some comment line is printed out as trace output when the shell tries to run it, but the resulting error message is sent to /dev/null. This is nasty for a lot of obvious reasons, but it also counts as an error for the purposes of set -e. Note that in either case, your comment is parsed by the shell in the ordinary way, so in particular if you have any special characters they need to be quoted, and because it's trace output the quoting will be displayed.
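To see the : trick end to end, here it is packaged as a tiny runnable script (path arbitrary); the trace lines go to stderr:

```shell
# Write a small demo script and run it; the "+ ..." xtrace lines
# appear on stderr, the echo's output on stdout.
cat > /tmp/colon_demo.sh <<'EOF'
set -x
: starting phase one
echo done
EOF
bash /tmp/colon_demo.sh
```

Running it prints `+ : starting phase one` and `+ echo done` on stderr, and `done` on stdout.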
Adding comments using 'set -x'
1,389,384,877,000
I'm having some trouble with Ubuntu 14.04 initialization: it fails to mount an SSH folder and gives me the option of a manual recovery by pressing M, displaying a command line logged in as root for debugging the problem. My troubles start when I try to read the sshfs help text, which is bigger than the screen, making it impossible to read the cut-off part. I managed to work around this with sshfs -h >> read; nano read but I'm wondering if there is an easier or more elegant/correct way of doing this. PS: I'm not in the Ubuntu terminal emulator, so it's impossible to use the scroll bar, since it doesn't exist.
People usually use a pager like less to read such long output: sshfs -h | less In less, type H to show help and Q to quit. Note that you might occasionally need 2>&1 to also see output sent to stderr. sshfs -h prints its help that way, so you should run it like this: sshfs -h 2>&1 | less Besides using a pager, on the Linux text console you can scroll the screen back/forward without a scroll bar by typing Shift+PgUp or Shift+PgDn.
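You can see why the 2>&1 matters with a stand-in function that, like sshfs -h, prints its help to stderr; without the redirection, the pipe to the pager carries nothing:

```shell
# Hypothetical stand-in for a program whose help text goes to stderr.
fake_help() { echo "usage: fake [options]" >&2; }

# Without 2>&1, nothing reaches the pipe (stderr is discarded here just
# to keep the demo quiet); with it, the help text comes through.
fake_help 2>/dev/null | head -n 1 > /tmp/without_redirect
fake_help 2>&1        | head -n 1 > /tmp/with_redirect
```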
Bash output command too large, can't read it!
1,389,384,877,000
I write a lot of non-interactive scripts where I would like all output to go to a log file and have nothing appear on-screen. To solve this, I've been using: #!/bin/bash exec &> logfile echo "Step One:" /some/command/one echo "Step Two:" /some/command/two I want to make sure this is a sane method. Are there any significant drawbacks or issues I'm going to encounter if I move forward with this methodology? If so, what are they and how best can they be mitigated (including by changing my methodology).
Redirection of command output to a log file Redirecting all command output (including error messages) to a log file is standard practice for non-interactive shell scripts. It’s particularly useful to have a record of command output for scripts that are run by cron or triggered by some other external event, and there are no downsides in such use-cases. Many of my shell scripts include the following lines near the start: exec 1>>"$logfile" exec 2>&1 The order of these redirection commands is important. The first exec command redirects all writes to the stdout (1) stream to append (>>) to the log file. The second command redirects all writes to the stderr (2) stream to the same file descriptor that stdout (1) currently points to. Using only one file descriptor for accessing a file ensures that the writes happen in the desired order. If using Bash, you can combine these commands into a single construct that does the same thing: exec &>>"$logfile" If you want the log file to be cleared of previous entries each time the script is run, use only a single > redirection operator (over-writes the previous contents): exec &>"$logfile" Use of the exec builtin for input/output redirection is specified by the POSIX definition for the Shell Command Language, and the exec builtin is available in any POSIX compatible shell.
However, the problem is that from now on, the shell (Bash in this case) prints its prompt to this file and any text typed as a command is also redirected to this file.  Some shells (such as Bash) also echo characters received by stdin to stderr.  In others, such as dash, for the rest of the shell session, you’re essentially working blind, as nothing at all is sent to the terminal.  This obviously makes it very difficult to continue interacting with the shell. As Orion points out and Scott says, you can store references to the default stdout and stderr file descriptors before trying any such experiments using exec 3>&1 and exec 4>&2 respectively.  When you’ve finished your experiments, you can restore printing to standard error by running exec 2>&4 and restore printing to standard output with exec 1>&3. For interactive use, I’d advise redirecting standard out and standard error streams on a command-by-command basis: >> outfile 2>&1 command with arguments.
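Putting the save-and-restore idea into a short sketch (log path hypothetical):

```shell
exec 3>&1                   # keep a copy of the current stdout on fd 3
exec 1>/tmp/exec_demo.log   # from here on, stdout goes to the log
echo "this line lands in the log"
exec 1>&3 3>&-              # restore stdout and close the spare fd
echo "this line is printed normally again"
```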
exec redirects in bash
1,389,384,877,000
I have a Perl script on a *nix system which, at one point, is processing 50,000+ lines of text. This takes some time. I am trying to find a resource-friendly way to let the user know the program is not hanging while it is processing all of this text. Currently I am printing output in real time as the text is being processed. I am flushing the output buffer then printing the output on one line with \r. This seems to be an unnecessary use of resources because it takes almost twice as long as when I print nothing, but as I have said, when printing nothing it looks like the program is hanging. So my question: Is there a standard or simple way to let the user know the program is indeed running while performing long-running tasks?
[I just realized your script is perl, but the same logic applies, print "\r", etc. You will want to use STDERR or else turn off buffering, $| = 1. See bottom.] One way of implementing a CLI "progress indicator" involves the use of the \r (carriage return) character. This brings the cursor to the beginning of the current line: #!/bin/bash count=0 while ((1)); do echo -ne "\rCount: $count" sleep 1; count=$(($count+1)); done If it doesn't make sense, just try it. You could use that technique to indicate how many lines, or thousands of lines, have been processed so far. Tens or hundreds of lines may be good since it is not too often (more updates == slower runtime) but probably still often enough to show progress is continuing. You can specify a unit or just append zeros. Note the use of -n and -e with echo, that is important. You can also use \b (backspace) to similar effect. In perl: #!/usr/bin/perl use strict; use warnings FATAL => qw(all); $| = 1; # Pipeline stdout (i.e., no buffering). my $count = 1; while ($count) { print "\rCount $count"; sleep 1; $count++; }
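Since the asker measured a near 2x slowdown from printing on every line, the cheap fix is to update the indicator only every N lines. The same idea as a shell sketch (script path and interval are illustrative):

```shell
cat > /tmp/progress_demo.sh <<'EOF'
count=0
while IFS= read -r line; do
  count=$((count + 1))
  # Only touch the terminal every 1000 lines: far cheaper than every line.
  if [ $((count % 1000)) -eq 0 ]; then
    printf '\rProcessed %d lines' "$count" >&2
  fi
done
# Final update so the total is always shown.
printf '\rProcessed %d lines\n' "$count" >&2
EOF
seq 1 2500 | bash /tmp/progress_demo.sh
```

The progress goes to stderr, so it still reaches the terminal even when stdout is redirected.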
Resource friendly way to indicate to user program is not hanging
1,389,384,877,000
How to save output of a command that modifies environment into a variable? I'm using bash shell. Assume that I have: function f () { a=3; b=4 ; echo "`date`: $a $b"; } And now, I can use commands to run f: $ a=0; b=0; f; echo $a; echo $b; echo $c Sat Jun 28 21:27:08 CEST 2014: 3 4 3 4 but I would like to save output of f to variable c, so I tried: a=0; b=0; c=""; c=$(f); echo $a; echo $b; echo $c but unfortunatelly, I've got: 0 0 Sat Jun 28 21:28:03 CEST 2014: 3 4 so I don't have any environment change here. How to save output of command (not only function) to variable and save environmental changes? I know that $(...) opens new subshell and that is the problem, but is it possible to do some workaround?
If you're using Bash 4 or later, you can use coprocesses: function f () { a=3; b=4 ; echo "`date`: $a $b"; } coproc cat f >&${COPROC[1]} exec {COPROC[1]}>&- read c <&${COPROC[0]} echo a $a echo b $b echo c $c will output a 3 b 4 c Sun Jun 29 10:08:15 NZST 2014: 3 4 coproc creates a new process running a given command (here, cat). It saves the PID into COPROC_PID and standard output/input file descriptors into an array COPROC (just like pipe(2), or see here or here). Here we run the function with standard output pointed at our coprocess running cat, and then read from it. Since cat just spits its input back out, we get the output of the function into our variable. exec {COPROC[1]}>&- just closes the file descriptor so that cat doesn't keep waiting forever. Note that read takes only one line at a time. You can use mapfile to get an array of lines, or just use the file descriptor however you want to use it in a different way. exec {COPROC[1]}>&- works in current versions of Bash, but earlier 4-series versions require you to save the file descriptor into a simple variable first: fd=${COPROC[1]}; exec {fd}>&-. If your variable is unset it will close standard output. If you're using a 3-series version of Bash, you can get the same effect with mkfifo, but it's not much better than using an actual file at that point.
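An alternative that is not in the answer above: if a coprocess feels heavyweight, a temporary file does the same job, because the function still runs in the current shell, so its variable assignments stick while its output is read back afterwards:

```shell
f() { a=3; b=4; echo "$(date): $a $b"; }

a=0; b=0
tmp=$(mktemp)
f > "$tmp"       # f runs in the current shell, so a and b really change
c=$(<"$tmp")     # bash shorthand for c=$(cat "$tmp")
rm -f "$tmp"

echo "a=$a b=$b"
echo "c=$c"
```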
Save output of command that modifies environment into a variable
1,389,384,877,000
I use the following to send an email at the end of a script. echo "Script finished on `date`" | /usr/bin/Mail -s "Script complete" "[email protected]". However, I want to echo the same message onto the screen as well. How do I do that in the same statement?
The easiest way is probably to tee the message to stderr as well as stdout: echo "Script finished on `date`" | tee /dev/stderr \ | /usr/bin/Mail -s "Script complete" "[email protected]" tee duplicates its input to multiple destinations, including stdout. By default, both stderr and stdout go to the screen; you're redirecting stdout to Mail, leaving just stderr going to the screen. If you need it in stdout for some reason, you could redirect it back using a subshell (or several other ways): ( echo "Script finished on `date`" | tee /dev/stderr \ | /usr/bin/Mail -s "Script complete" "[email protected]" ) 2>&1
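Stripped of the mail part, the plumbing looks like this; the /dev/stderr copy is what reaches the screen while stdout continues down the pipe (file name hypothetical):

```shell
msg="Script finished on $(date)"
# tee copies the message to /dev/stderr (the screen) while stdout
# carries it on to the next command (a stand-in for Mail here).
echo "$msg" | tee /dev/stderr | cat > /tmp/mail_standin.txt
```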
How to send output to both screen and mail?
1,389,384,877,000
In a bash script, I must download a file from the web. I use the wget command for doing this. I would like to log the output of the wget command, and at "the same time" have the output prompting on terminal. I searched in the man wget without finding the way to achieve that. It seems that if you turn on the log with -o or -a parameter, then the prompt output is automatically 'redirected' to the log file, and nothing is shown on the terminal while executing the script, until it has completed the download. wget -a wget_log --no-check-certificate --auth-no-challenge --http-user=$jen_uname --http-password=$jen_psswd link_to_the_file Is it possible to do both? Output on prompt and writing on log file?
You use the lovely tee command to do this: wget --no-check-certificate --auth-no-challenge --http-user=$jen_uname --http-password=$jen_psswd 2>&1 | tee -a wget_log The 2>&1 means that STDERR goes to the same place as STDOUT, and they're both piped to tee. The -a means append. tee will then send the output both to wget_log and to STDOUT.
wget a file, logging the output and showing the output on prompt
1,389,384,877,000
I have some output from iconv, e.g. $ iconv -l | grep ISO | head -5 CSISO4UNITEDKINGDOM// CSISO10SWEDISH// CSISO11SWEDISHFORNAMES// CSISO14JISC6220RO// CSISO15ITALIAN// durrantm:~ How can I change the //'s to, say --'s ? I tried $ (iconv -l | grep ISO).gsub('\/\/','--') but no luck.
There are a few approaches using either tr, awk or sed TR: iconv -l | grep ISO |head -5 |tr '/' '-' AWK: iconv -l | awk '/ISO/{gsub("//","--"); print $0}' |head -5 SED: iconv -l | grep ISO |head -5 | sed 's/\//-/g' # or, to avoid needing to escape the backslashes: iconv -l | grep ISO |head -5 | sed 's#/#-#g'
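With a fixed sample in place of the live iconv output, you can check the variants agree. Note the tr version translates every /, not only doubled ones; on this data that happens to give the same result:

```shell
sample='CSISO4UNITEDKINGDOM//
CSISO10SWEDISH//'

# tr maps each / to - individually; sed replaces the // pair.
printf '%s\n' "$sample" | tr '/' '-'
printf '%s\n' "$sample" | sed 's#//#--#g'
```

Both pipelines print the same two lines ending in --.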
How to grep, then search and replace on the output?
1,389,384,877,000
I'm trying to run a command, write that to a file, and then I'm using that file for something else. The gist of what I need is: myAPICommand.exe parameters > myFile.txt The problem is that myAPICommand.exe fails a lot. I attempt to fix some of the problems and rerun, but I get hit with "cannot overwrite existing file". I have to run a separate rm command to cleanup the blank myFile.txt and then rerun myAPICommand.exe. It's not the most egregious problem, but it is annoying. How can I avoid writing a blank file when my base command fails?
You must have "noclobber" set, check the following example: $ echo 1 > 1 # create file $ cat 1 1 $ echo 2 > 1 # overwrite file $ cat 1 2 $ set -o noclobber $ echo 3 > 1 # file is now protected from accidental overwrite bash: 1: cannot overwrite existing file $ cat 1 2 $ echo 3 >| 1 # temporary allow overwrite $ cat 1 3 $ echo 4 > 1 bash: 1: cannot overwrite existing file $ cat 1 3 $ set +o noclobber $ echo 4 > 1 $ cat 1 4 "noclobber" is only for overwrite, you can still append though: $ echo 4 > 1 bash: 1: cannot overwrite existing file $ echo 4 >> 1 To check if you have that flag set you can type echo $- and see if you have C flag set (or set -o |grep clobber). Q: How can I avoid writing a blank file when my base command fails? Any requirements? You could just simply store the output in a variable and then check if it is empty. Check the following example (note that the way you check the variable needs fine adjusting to your needs, in the example I didn't quote it or use anything like ${cmd_output+x} which checks if variable is set, to avoid writing a file containing whitespaces only. $ cmd_output=$(echo) $ test $cmd_output && echo yes || echo no no $ cmd_output=$(echo -e '\n\n\n') $ test $cmd_output && echo yes || echo no no $ cmd_output=$(echo -e ' ') $ test $cmd_output && echo yes || echo no no $ cmd_output=$(echo -e 'something') $ test $cmd_output && echo yes || echo no yes $ cmd_output=$(myAPICommand.exe parameters) $ test $cmd_output && echo "$cmd_output" > myFile.txt Example without using a single variable holding the whole output: log() { while read data; do echo "$data" >> myFile.txt; done; } myAPICommand.exe parameters |log
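Another approach, not mentioned in the answer: write to a temporary file first, and only move it into place when the command succeeded and produced output. That avoids both the stale blank file and the noclobber error on re-runs (paths and the helper name are made up):

```shell
rm -f /tmp/good.txt /tmp/bad.txt

run_and_keep() {
  # $1 = destination; remaining args = the command to run
  dest=$1; shift
  tmp=$(mktemp)
  if "$@" > "$tmp" && [ -s "$tmp" ]; then
    mv "$tmp" "$dest"   # success and non-empty: install the output
  else
    rm -f "$tmp"        # failure or empty output: leave nothing behind
    return 1
  fi
}

run_and_keep /tmp/good.txt seq 3          # succeeds, creates the file
run_and_keep /tmp/bad.txt  false || true  # fails, no file is created
```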
How can I output a command to a file, without getting a blank file on error?
1,389,384,877,000
If I follow a file somehow like this: tail -f /var/log/syslog|grep s I see all lines containing an "s" Why does this not give any output, if I grep it again to the same "s"? tail -f /var/log/syslog|grep s|grep s
As Rubo77 mentioned, the issue is solved by adding --line-buffered to the first grep command: tail -f /var/log/syslog|grep --line-buffered s|grep s However, you may then ask, why isn't this needed for a single grep command? The difference between the two is that in the following command: tail -f /var/log/syslog|grep s STDOUT for grep is pointed to a terminal. grep most likely writes to STDOUT via functions contained in the stdio library. Per the documentation (stdio(3)): Output streams that refer to terminal devices are always line buffered by default; Thus, the underlying library calls are flushing the buffer after each line without any action on grep's part. In this command: tail -f /var/log/syslog|grep --line-buffered s|grep s STDOUT is now going to a pipe rather than a terminal device, and the library functions that grep uses to write to STDOUT fully buffer these writes rather than using line buffering. When the --line-buffered flag is used, grep will call fflush, which flushes all of the buffered writes.
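For programs that have no --line-buffered equivalent, coreutils stdbuf can impose line buffering from the outside. A sketch on a finite stream (a live tail -f pipeline behaves the same way, just without ever ending):

```shell
# stdbuf -oL forces the first grep's stdout to be line-buffered even
# though it is writing to a pipe, so each matching line reaches the
# second grep as soon as it is produced instead of when a 4-8 KiB
# buffer fills up.
printf 'son\nmoon\nsun\n' | stdbuf -oL grep s | grep s
```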
double grep on tail -f gives no output
1,389,384,877,000
I've tried to get a file from airodump-ng by redirecting an output stream via: airodump-ng mon0 2>&1 | tee file.txt but this appends to the file rather than rewriting it. So after that I've tried to redirect the output to another output stream via a FIFO pipeline: First terminal: mkfifo fifo1 echo "while [ 1 ]; do cat ~/fifo1 2>&1 | tee file.txt; done" > readfifo.sh chmod +x readfifo.sh xterm -e readfifo.sh Second terminal: airodump-ng mon0 2>&1 > fifo1 And as a result we still have an appending file.txt, but why? How can I get only the current output of the terminal in the file, without appending to it? Is it possible to filter the output of airodump-ng while writing to a file? Best regards, V7
Check man airodump-ng. You want the -w option. airodump-ng -w myOutput --output-format csv mon0 Generates a .csv file of the screendump with the output from airodump-ng one line per station.
How to save an output of airodump-ng to a file?
1,389,384,877,000
I'm trying to hide the "output" of a gnupg command, but it seems that it is always printed. the command is: echo "thisprogramwørks" | gpg -q --status-fd 1 --no-use-agent --sign --local-user D30BDF86 --passphrase-fd 0 --output /dev/null It is a command to verify the password of pgp keys, and by using it like this: a=$(echo "thisprogramwørks" | gpg -q --status-fd 1 --no-use-agent --sign --local-user D30BDF86 --passphrase-fd 0 --output /dev/null) I recover the output: echo $a [GNUPG:] USERID_HINT F02346C1EA445B6A p7zrecover (7zrecover craking pgp test) <a@a> [GNUPG:] NEED_PASSPHRASE F02346C1EA445B6A F02346C1EA445B6A 1 0 [GNUPG:] GOOD_PASSPHRASE [GNUPG:] BEGIN_SIGNING [GNUPG:] SIG_CREATED S 1 8 00 1435612254 8AE04850C3DA5939088BE2C8F02346C1EA445B6A the problem is that when I use the command, the console prints: You need a passphrase to unlock the secret key for user: "test (test) <a@a>" 1024-bit RSA key, ID EA445B6A, created 2015-06-29 I've been trying to use command redirects like &>/dev/null and stuff like that, but passphrase text is always printed. It is possible to hide this text?
The "problem" is, that gpg writes directly to the TTY instead of STDOUT or STDERR. That means it cannot be redirected. You can either use the --batch option as daniel suggested, but as a more general approach you can use the script tool, which fakes a TTY. Any output is then sent to STDOUT, so you can redirect it to /dev/null: script -c 'echo "thisprogramwørks" | gpg -q --status-fd 1 --no-use-agent --sign --local-user D30BDF86 --passphrase-fd 0 --output /dev/null' > /dev/null The output is also written to a file, so you can still get and analyze it. See man script (link)
Silent GnuPG password request with bash commands
1,389,384,877,000
I'm asked to output the current day using the cal command. So far, I discovered that before the current date there is a _ symbol. I decided to use grep here: cal | grep '\b_*', but it outputs the whole week. I've tried several variants, but it didn't work out. Actually, there is also a case when the current day has only one digit, so it seems I have to use tr -d ' ' here. I have no idea how to combine all these commands together.
When the output of the cal command is not a terminal, it applies poor man's underlining to the day number for today, which consists of putting an underscore and a backspace character before each character to underline. You can see that by displaying the characters visually (^H means control-H, which is the backspace character): cal | cat -A cal | cat -vet or by looking at a hex dump: cal | hd cal | od -t x1 So what you need is to detect the underlined characters and output them. With GNU grep, there's an easy way to print all the matches of a regular expression: use the -o option. An underlined character is matched by the extended regular expression _^H. where ^H is a literal backspace character, not the two characters ^ and H, and . is the character to print. Instead of typing the backspace character, you can rely on the fact that this is the only way cal uses underscores in its output. So it's enough to detect the underscores and leave the backspaces as unmatched characters. cal | grep -o '_..' We're close, but the output contains the underscore-backspace sequences, and the digits are on separate lines. You can strip away all non-digit characters (and add back a trailing newline): cal | grep -o '_..' | tr -cd 0-9; echo Alternatively, you can repeat the pattern _.. to match multiple underlined digits. This leaves the underlining in the output; you can use tr or sed to strip it off. cal | grep -E -o '(_..)*' cal | grep -E -o '(_..)*' | tr -d '\b_' cal | grep -E -o '(_..)*' | sed 's/_.//g' You can do this with sed, but it isn't completely straightforward. Sed offers an easy way to print only matching lines (use the -n option to only get lines that are printed explicitly), but no direct way to print multiple occurrences of a match on a line. One way to solve this is to take advantage of the fact that there are at most two underlined characters, and have one s command to transform and output lines containing a single underlined character and another for lines with two.
As before, I won't match the backspaces explicitly. cal | sed -n 's/.*_.\(.\)_.\(.\).*/\1\2/p; s/.*_.\(.\).*/\1/p' An alternative approach with sed, assuming that there is only one underlined segment on a line, is to remove everything before it and everything after it. cal | sed -n 's/^[^_]*_/_/; s/\(_..\)[^_]*$/\1/p' This leaves the underscores; we can remove them with a third replacement. cal | sed -n 's/^[^_]*_/_/; s/\(_..\)[^_]*$/\1/; s/_.//gp'
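Because modern cal implementations differ in when (or whether) they emit the backspace underlining, here is the extraction pipeline exercised on simulated input, underlining the 17th:

```shell
# Simulated cal output: each underlined digit is underscore, backspace,
# digit, i.e. "_\b1_\b7" for the 17th.
printf '     June 2015\n 15 16 _\b1_\b7 18 19\n' |
  grep -o '_..' | tr -cd 0-9
echo
```

This prints 17 regardless of which day it actually is.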
Output current day using cal
1,389,384,877,000
Is it true to say that CTRL+D stops input execution while CTRL+C stops output displaying (as plain data, without execution)?
No, it is not true. However, it is true to say that Ctrl+D signals an End of Transmission (EOT) event which will generally cause a program reading input to close the input file descriptor. Ctrl+D is used for this because its place on the ASCII table corresponds to the analogous End of File control character, even though the actual EOF control character is not actually transmitted in this case. Pressing Ctrl+C will generally (it's configurable with stty) generate an interrupt signal (SIGINT) which will be delivered to the processes that are in the current terminal (see man kill; man 3 tcgetpgrp).
CTRL+D vs CTRL+C
1,389,384,877,000
Let's assume I receive the following output after executing a bash script in CLI (so this text will be displayed in terminal): POST https://mycompany.com/ COOKIE='BLABLABLABLABLA' HOST='ANYIPADDRESS' FINGERPRINT='sha256:BLABLABLABLA' How can I store the content of COOKIE (only the text between ' and ') into a separate file? Furthermore, the mentioned text should be pasted into this external file at a specific position. The already existing file content looks like that: [global] Name = Name of VPN connection [provider_openconnect] Type = OpenConnect Name = Name of VPN connection Host = IP-address Domain = Domain name OpenConnect.Cookie = >>>INSERT CONTENT OF THE COOKIE HERE<<< OpenConnect.ServerCert = sha256:BLABLABLABLA How is that possible?
Tasks like this are specific to the data at hand, but the approach is generic. I am assuming you want to replace the OpenConnect.Cookie = line with OpenConnect.Cookie = BLABLABLABLABLA So you can use sed -i "s/^OpenConnect.Cookie =.*$/$( command_giving_output | grep 'COOKIE=' | sed "s/COOKIE='//; s/'//g; s/^/OpenConnect.Cookie = /")/" external_filename Here I am using command substitution to first build the required replacement string: command_giving_output | grep 'COOKIE=' | sed "s/COOKIE='//; s/'//g; s/^/OpenConnect.Cookie = /" and then substituting the matching line in the file with that string: sed -i "s/^OpenConnect.Cookie =.*$/output from the command substitution above/" external_filename
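Here is the same idea split into two readable steps, exercised on hypothetical stand-in files so you can test it safely:

```shell
# Hypothetical stand-ins for the real CLI output and config file.
cat > /tmp/cli_output <<'EOF'
POST https://mycompany.com/
COOKIE='BLABLABLABLABLA'
HOST='1.2.3.4'
EOF
cat > /tmp/vpn.config <<'EOF'
[provider_openconnect]
Type = OpenConnect
OpenConnect.Cookie = PLACEHOLDER
EOF

# Step 1: pull out just the text between the quotes.
cookie=$(sed -n "s/^COOKIE='\(.*\)'$/\1/p" /tmp/cli_output)
# Step 2: splice it into the config line (| as the s/// delimiter,
# since the cookie itself contains no | characters).
sed -i "s|^OpenConnect.Cookie =.*|OpenConnect.Cookie = $cookie|" /tmp/vpn.config
```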
Bash: How to store a specific line of CLI output into a file?
1,389,384,877,000
Suppose I have an environment where there isn't a shell running, so I can't use redirection, pipes, here-documents or other shell-isms, but I can launch a command (through execvp or some similar way). I want to write an arbitrary string to a named file. Is there a standard command that will do something like: somecommand outputfile 'string' for instance: somecommand /proc/sys/net/ipv4/ip_forward '1' A really dumb example might be: curl -o /proc/sys/net/ipv4/ip_forward http://example.com/1.txt where I set up 1.txt to contain the string I want. Is there a common command that can be abused to do this?
If you know of any other non-empty file on the system, then with POSIX sed: sed -e 's/.*/hello world/' -e 'wtarget' -e q otherfile With GNU sed and just your own non-empty file, you can use: sed -i.bak -e '$ihello world' -e 'd' foo With BSD sed, this would work instead: sed -i.bak -e '$i\ hello world' -e d foo If you're not using a shell then presumably the linebreak isn't an issue. With ex, if the target file exists: ex -c '0,$d' -c 's/^/hello world/' -c 'x' foo This just deletes everything in the file, replaces the first line with "hello world", then writes and quits. You could do the same thing with vi in place of ex. Implementations are not required to support multiple -c options, but they generally do. For many ex implementations the requirement that the file already exist is not enforced. Also with awk: awk -v FN=foo -v STR="hello world" 'BEGIN{printf(STR) > FN }' will write "hello world" to file "foo". If there are existing files containing the bytes you want at known locations, you can assemble a file byte by byte over multiple commands with dd (in this case, alphabet contains the alphabet, but it could be a mix of input files): dd if=alphabet bs=1 skip=7 count=1 of=test dd if=alphabet bs=1 skip=4 count=1 seek=1 of=test dd if=alphabet bs=1 skip=11 count=1 seek=2 of=test dd if=alphabet bs=1 skip=11 count=1 seek=3 of=test dd if=alphabet bs=1 skip=14 count=1 seek=4 of=test cat test hello From there, just regular cp will work, or you might have been able to put it in-place to start with. Less commonly, the mispipe command from moreutils allows constructing a shell-free pipe: mispipe "echo 1" "tee wtarget" is equivalent to echo 1 | tee wtarget, but returning the exit code of echo. This uses the system() function internally, which doesn't strictly require a shell to exist. Finally, perl is a common command and will let you write arbitrary programs to do whatever you want on the command line, as will python or any other common scripting language. 
Similarly, if a shell just isn't "running", but it does exist, sh -c 'echo 1 > target' will work just fine.
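The awk variant is easy to check; note that the answer's printf(STR) treats the string as a printf format, so a literal % in the text would be misinterpreted. An explicit %s format avoids that (target path hypothetical):

```shell
# Write a string to a file from awk alone; %s keeps any % in the
# text from being treated as a format specifier.
awk -v FN=/tmp/awk_target -v STR='hello world' \
    'BEGIN { printf("%s\n", STR) > FN }'
```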
Write string to a file without a shell [closed]
1,501,772,142,000
I am trying to interpret this result of hdparm: janus@behemoth ~ $ sudo hdparm -Tt --direct /dev/nvme0n1 /dev/nvme0n1: Timing O_DIRECT cached reads: 2548 MB in 2.00 seconds = 1273.69 MB/sec Timing O_DIRECT disk reads: 4188 MB in 3.00 seconds = 1395.36 MB/sec I do not understand how the cached reads can be slower than the direct disk reads. If I drop the --direct, I get what I would have expect: the disk reads are slower than the cached ones: janus@behemoth ~ $ sudo hdparm -Tt /dev/nvme0n1 /dev/nvme0n1: Timing cached reads: 22064 MB in 2.00 seconds = 11042.86 MB/sec Timing buffered disk reads: 2330 MB in 3.00 seconds = 776.06 MB/sec (Although it says "buffered disk reads" now). Can somebody explain to me what is going on?
Per hdparm man page: --direct Use the kernel's "O_DIRECT" flag when performing a -t timing test. This bypasses the page cache, causing the reads to go directly from the drive into hdparm's buffers, using so-called "raw" I/O. In many cases, this can produce results that appear much faster than the usual page cache method, giving a better indication of raw device and driver performance. This explains why hdparm -t --direct may be faster than hdparm -t. It also says that --direct only applies to the -t test, not to the -T test which is not supposed to involve the disk (see below). -T Perform timings of cache reads for benchmark and comparison purposes. For meaningful results, this operation should be repeated 2-3 times on an otherwise inactive system (no other active processes) with at least a couple of megabytes of free memory. This displays the speed of reading directly from the Linux buffer cache without disk access. This measurement is essentially an indication of the throughput of the processor, cache, and memory of the system under test. I guess -T works by reading the same cached part of the disk. But your --direct prevents this. So, logically, you should have the same results with -t --direct as with -T --direct.
Why cached reads are slower than disk reads in hdparm --direct?
1,501,772,142,000
Is it possible to restart a systemd service when there is no output generated for a predefined amount of time? I have a script which apparently can hang, but that is not detected by systemd (or Python for that case), and thus it does not enter the failed state. However, it does stop logging output, so I should be able to restart the service after no output is given for a minute. Is this possible with systemd? My current systemd file: [Unit] Description=SOmething After=network.target [Service] WorkingDirectory=/home/user/system/something User=nobody ExecStart=/usr/bin/python2 something.py Restart=on-watchdog RestartSec=10s [Install] WantedBy=multi-user.target
I don't think systemd allows you to do that, at least the systemd.service(5) manual page doesn't seem to mention anything like that. However, what you could do is use systemd's builtin watchdog. You would do that by setting WatchdogSec= and then having your service send WATCHDOG=1 with sd_notify regularly. If you set WatchdogSec=30 then your service must notify systemd at least every 30 seconds. When your service hangs, it won't notify systemd anymore, and systemd will kill your service as a result. With Restart=on-watchdog (which you already have, but it doesn't do anything without WatchdogSec=), systemd will restart your service after it's been killed by the watchdog.
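A sketch of how the unit might look with the watchdog enabled (the intervals are illustrative). Note that with Type=notify the script must also send READY=1 once at startup, and then WATCHDOG=1 at least every WatchdogSec interval, for example via the python systemd bindings or by invoking the systemd-notify command:

```ini
[Unit]
Description=Something
After=network.target

[Service]
WorkingDirectory=/home/user/system/something
User=nobody
# Type=notify makes systemd expect sd_notify() messages from the service.
Type=notify
NotifyAccess=main
ExecStart=/usr/bin/python2 something.py
# Kill (and, via Restart=, relaunch) the service if it fails to send
# WATCHDOG=1 within 30 seconds.
WatchdogSec=30
Restart=on-watchdog
RestartSec=10s

[Install]
WantedBy=multi-user.target
```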
Restart systemd service when output is no longer generated
1,501,772,142,000
Say I only need the first 5 lines of an output for logging purposes. I also need to know if and when the log has been truncated. I am trying to use head to do the job, the seq command below outputs 20 lines that get truncated by head, and I echo a truncating information: > seq -f 'log line %.0f' 20 | head -n 5 && echo '...Output truncated. Only showing first 5 lines...' log line 1 log line 2 log line 3 log line 4 log line 5 ...Output truncated. Only showing first 5 lines... But if the seq command outputs less than 5 lines, using the same above construction, I get a wrong "truncated" status: seq -f ' log line %.0f' 3 | head -n 5 && echo '...Output truncated. Only showing first 5 lines...' log line 1 log line 2 log line 3 ...Output truncated. Only showing first 5 lines... Is there a way for the head command (or another tool) to tell me if it truncated anything so that I only display the "...truncated..." message when needed?
A note of warning: when you do

cmd | head

and the output is truncated, cmd can be killed by a SIGPIPE if it writes more lines after head has exited. If that is not what you want — if cmd should keep running even though its output is discarded — you need to read but discard the remaining lines instead of exiting after 5 lines have been output (for instance with sed '1,5!d' or awk 'NR<=5' instead of head). So, for the two different approaches:

Output truncated; cmd may be killed:

cmd | awk 'NR>5 {print "TRUNCATED"; exit}; {print}'
cmd | sed '6{s/.*/TRUNCATED/;q;}'

Note that the mawk implementation of awk accumulates a buffer-full of input before starting to process it, so cmd may not be killed until it has written a buffer-full (8KiB on my system, AFAICT) of data. That can be worked around by using the -Winteractive option. Some sed implementations also read one line ahead (to be able to know which line is the last when the $ address is used), so with those, cmd may only be killed after it has output its 7th line.

Output truncated; the rest is discarded, so cmd is not killed:

cmd | awk 'NR<=5; NR==6{print "TRUNCATED"}'
cmd | sed '1,6!d;6s/.*/TRUNCATED/'
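Putting the non-killing awk variant together with the original "...Output truncated..." message, a small sketch (the function name and wording are mine, not from the answer):

```shell
# head_mark N: pass through at most N lines of stdin; if the input is
# longer, print a truncation notice once and silently consume the rest,
# so the producer is never hit by SIGPIPE.
head_mark() {
    awk -v n="$1" '
        NR <= n { print }
        NR == n + 1 { print "...Output truncated. Only showing first " n " lines..." }
    '
}

seq -f 'log line %.0f' 20 | head_mark 5   # 5 lines plus the notice
seq -f 'log line %.0f' 3  | head_mark 5   # 3 lines, no notice
```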
Truncate output after X lines and print message if and only if output was truncated
1,501,772,142,000
How do I evaluate or calculate the return value of a command line? For example, I count the number of lines with grep and I want to know whether that count is above X. If so, I want to print the number to a file. Or I want to subtract one grep count from another... How can I manipulate return values that way?
I think you are mixing up two things. The return value typically indicates whether a command was successful (return value 0) or not (anything else). You can get the return value of the last command from the variable $?. grep -c, on the other hand, prints the count to stdout; to capture the count you can use something like

variable=$(grep -c pattern filename)

Afterwards you can compare or calculate with the variable however you want. See How to do integer & float calculations, in bash or other languages/frameworks? for how to do the arithmetic with the output.
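A small sketch tying both ideas together — the printf-generated sample text here stands in for the real files you would grep:

```shell
# Capture grep's stdout (the count), not its exit status.
count_a=$(printf 'ERROR one\nok\nERROR two\n' | grep -c ERROR)   # 2
count_b=$(printf 'ok\nERROR three\n' | grep -c ERROR)            # 1

# Compare the count against a threshold X...
if [ "$count_a" -gt 1 ]; then
    echo "count_a=$count_a exceeds the threshold"
fi

# ...or subtract one count from another with shell arithmetic.
diff=$((count_a - count_b))
echo "difference: $diff"

# The exit status, by contrast, lives in $?. Beware that grep -c
# prints 0 but still *exits* 1 when nothing matches.
printf 'ok\n' | grep -c ERROR > /dev/null
echo "exit status: $?"
```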
Evaluating the return value of a command line
1,501,772,142,000
I am on elementaryOS 0.4.1 Loki, which is based on Ubuntu 16.04.1. I want to use OBS, a screen recorder, to record gameplay along with the sounds that come from the same video game. I also want to use a voice chat application in the background while playing the game, but I do not want any of it recorded by OBS. OBS can't selectively ignore the audio of certain programs, but it can record sound from a specified input device. I want to create a virtual input and use pavucontrol's output list to route my voice chat application to that input so it's not picked up by OBS. At the same time, I want this input to play back to a headphone/line out port. I know this can be done in Windows with software such as Virtual Audio Cables, but I don't know how to do the same thing in Linux. I already attempted to do something with sudo modprobe snd-dummy but it doesn't let me route the dummy to a headphone output. How should I approach this?
Let me repeat: a single program or group of programs A (the game) should output sound both to OBS and the headphones, while another single program or group of programs B (voice chat) should only output sound to the headphones, all on the PulseAudio level. Correct? Don't use snd-dummy; it works on the ALSA level. Instead, create a "null sink" on the PulseAudio level:

pacmd load-module module-null-sink sink_name=game_sink sink_properties=device.description=Game-Sink

Use pavucontrol or, if it can do that, the sound configuration of elementary OS to switch all group A sound outputs to that sink. Each sink in PulseAudio comes with a corresponding "monitor" source (you can see those in the OBS menu you included), so set up OBS to record from "Monitor of Game Sink". That takes care of recording from group A only, but doesn't output it to the headphones. For that, you need a loopback from the mentioned monitor source to the headphone sink:

pacmd load-module module-loopback source="game_sink.monitor" sink="your-headphone-sink"

You can find out the names of all sinks, including the headphone sink, with

pacmd list-sinks | grep name:

Leave out the angle brackets when using the names as arguments.
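Since the exact sink names differ on every machine, here is a sketch that strips the name: <...> decoration for you. It is demonstrated on a canned sample line (the interface name is made up) because pacmd needs a running PulseAudio daemon:

```shell
# Extract bare sink names from `pacmd list-sinks` output,
# dropping the "name: <" prefix and ">" suffix.
sink_names() {
    sed -n 's/^[[:space:]]*name: <\(.*\)>$/\1/p'
}

# Real use would be:  pacmd list-sinks | sink_names
printf '\tname: <alsa_output.pci-0000_00_1b.0.analog-stereo>\n' | sink_names
```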
How to create a virtual audio output and route it in Ubuntu-based distro
1,501,772,142,000
On RHEL/CentOS the package manager dnf can search for strings in the names and summaries of packages. How can I tell it to list only matches in the name (or in name and summary), but not in the summary only?
DNF unfortunately doesn't have an option to search only in the package name; there is an old RFE for this feature, but it has seen no activity since 2015. You can try using dnf list, which shows both installed and available packages and supports glob expressions, so it can be used for searching by name only:

$ sudo dnf list "*anaconda*"
anaconda.x86_64
anaconda-core.x86_64
anaconda-dracut.x86_64
anaconda-gui.x86_64
...
How to tell "dnf search" to list only matches in the package name (or name and summary), but not only in the summary?
1,501,772,142,000
I am writing a shell script to start my network in my virtual machine on boot, since for some reason it does not come up automatically when the machine is restored from a snapshot. Since the eth device starts out down, I have to get the device name with the following script and then bring the device up:

gateway=ifconfig -a | awk '/eth/ {print $1}'
dhclient $gateway

However, I keep getting the following error: line 1: -a: command not found. ifconfig -a works from the command line, however. Is there a way to get ifconfig -a to work in my shell script?
You must use command substitution; otherwise bash treats gateway=ifconfig as a temporary variable assignment and then tries to run -a as a command — which is exactly the -a: command not found error you see:

gateway=$(ifconfig -a | awk '/eth/ {print $1}')
dhclient "$gateway"

(Quoting "$gateway" is a good habit whenever you expand a variable as an argument.)
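To see the fix in isolation, here is the corrected substitution run against canned ifconfig-style text — in the real script you would pipe ifconfig -a itself, and dhclient needs root:

```shell
# Fake two interface lines; only the eth one should be picked up.
sample='eth0      Link encap:Ethernet  HWaddr 00:11:22:33:44:55
lo        Link encap:Local Loopback'

# $(...) captures awk's stdout, instead of bash treating
# "gateway=ifconfig" as an assignment and "-a" as the command.
gateway=$(printf '%s\n' "$sample" | awk '/eth/ {print $1}')
echo "would run: dhclient $gateway"
```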
ifconfig -a in a shell script