I was reading the Linux man page for xargs recently, and it appears that the -i option is deprecated. To quote from the documentation: -i[replace-str], --replace[=replace-str] This option is a synonym for -Ireplace-str if replace-str is specified. If the replace-str argument is missing, the effect is the same as -I{}. This option is deprecated; use -I instead. Just curious, why is it deprecated? What's the reason to use the more verbose -I{} syntax instead?
muru is right; if you check the findutils ChangeLog @line 1645: Major changes in release 4.2.9, 2004-12-05 xargs now supports the POSIX options -E, -I and -L. These are synonyms for the existing options -e, -i and -l, but the latter three are now deprecated. There's an explanation in the man page too, see the -l option: The -l option is deprecated since the POSIX standard specifies -L instead. and also further down: The -l and -i options appear in the 1997 version of the POSIX standard, but do not appear in the 2004 version of the standard. Therefore you should use -L and -I instead, respectively.
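For illustration (not from the original answer), the modern -I form in action; the replace token after -I is arbitrary, {} is just the convention:

```shell
# POSIX form: -I with an explicit replace string
printf 'a\nb\n' | xargs -I{} echo "got {}"
# The token is arbitrary; this is equivalent:
printf 'a\nb\n' | xargs -I@ echo "got @"
```

Both pipelines print "got a" and "got b", one invocation of echo per input line.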
Why is the xargs -i option deprecated?
More concise version With "\t": menu-complete in ~/.inputrc, create these two files, $ touch one.two.txt $ touch one.four.txt start writing the following, $ ls one hit Tab and you'll get $ ls one.four.txt Now move the cursor to just before four, delete four, write only t, and then hit Tab again. You'll get $ ls one.two.txt.txt Is there a way to prevent the duplicated .txt in this workflow? Original question I use the menu-complete bash function to cycle through completions when I press Tab, and I'm happy with it. But the following has too often happened to me. Suppose I'm looking for the file longparthardtoremember.with.QQQQQQQ.extension in a directory which contains the files longparthardtoremember.with.AAAAAAA.nice.long.extension longparthardtoremember.with.BBBBBBB.very.nice.long.extension ... If I Tab-complete $ long the first filename will be inserted. At that point, I'd like to move to the middle of the filename, delete the AAAAAAA part, type B, and then Tab-complete again. If I do so, all the part after BBBBBBB is inserted as well, thus leading to a duplication of it, which I obviously don't want. With vi editing mode, I'm quite quick in dealing with this (I quickly move to the repeated part and delete it), but it is still annoying. By pure chance I've found the skip-completed-text option in bash's man page. Isn't this what I need? I've set it on, but I can't see any difference in the behavior of Tab-completion in the middle of a word. Have I misunderstood the man page?
skip-completed-text works this way (with your example touch one.four.txt): $ ls one.four # ^ cursor is on the f If you press tab with skip-completed-text on it will complete to $ ls one.four.txt If you press tab with skip-completed-text off it will complete to $ ls one.four.txtour So this setting does not help you when you edit the middle of the completion.
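For reference, the bindings discussed here live in ~/.inputrc; a minimal sketch of the relevant lines:

```
# ~/.inputrc
"\t": menu-complete
set skip-completed-text on
```

After editing, restart the shell or press C-x C-r to reread the file.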
Bash how does skip-completed-text work?
I am perhaps picking nits here, but it would be really good to have this question that's been bothering me answered once and for all... Text file for reprex: Line one. Line two. Line three. Line four. To add an additional empty line consistently to this text file would require two sed commands for each line. This could be achieved with any of the following syntaxes: sed -e '/^$/d' -e '$!G' <file>... but NOT (sed -e '/^$/d' '$!G' <file> OR sed '/^$/d' '$!G' <file>) sed -e '/^$/d; $!G' <file> or sed -e '/^$/d ; $!G' <file> sed '/^$/d; $!G' <file> or sed '/^$/d ; $!G' <file> My questions are: Is there any real difference (universality?, compliance?...) between any of the five working syntaxes listed above? Richard Blum's Latest Command Line And Shell Scripting Bible says to use something like sed -e 's/brown/red/; s/dog/cat/' data1.txt before doling out the following advice... The commands must be separated with a semicolon (;), and there shouldn't be any spaces between the end of the first command and the semicolon. ...and then goes on to completely neglect his own advice by not using the -e option at all and also adding spaces between the end of a command and the semicolon (as shown in the second variant of #3 above). So, does the spacing around the semicolon make any real difference at all? Although I couldn't find info on this in the manpage or documentation, my hunch is that the -e option is meant to be used as shown in syntax #1 above, and using both -e and ; on the command line is redundant. Am I correct? EDIT: I should have mentioned this in my original question to make it more specific; but as some people have already pointed out, these nuances would matter when using branch (b) or test (t) commands. But it's interesting to note the other cases when these would make a difference. Thanks!
Let us use the Sed POSIX standard to answer the questions. Does the spacing around the semicolon make any real difference? Editing commands other than {...}, a, b, c, i, r, t, w, :, and # can be followed by a semicolon, optional blank characters, and another editing command. Thus /^$/d ; $!G is not compliant, but /^$/d; $!G is. But I do wonder if there is any modern Sed implementation that would stumble on that. Is there any real difference (universality, compliance...) between any of the three syntaxes listed above? No (except for the one with spaces before the semicolon, as argued above). This is clear in the synopsis: sed [-n] script [file...] sed [-n] -e script [-e script]... [-f script_file]... [file...] Do note, however, that as the previous quote mentioned, some commands cannot be followed by a semicolon, and then sed -e ':a' -e 's/x/y/' -e 't a' is compliant, while sed ':a;s/x/y/;t a' is not, although both work the same at least in GNU Sed. My hunch is that (...) using both -e and ; on the command line is redundant. Am I correct? If you refer to the examples in the question, yes. If there is a single -e option, then just drop it and it is all the same (unless you also use the -f option (see the synopsis)). But in sed -e ':a' -e 's/x/y/;t a' both -e and ; are present but they are not redundant.
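The label/branch caveat can be checked directly; a quick sketch (GNU sed accepts both forms, but only the multi--e one is POSIX-compliant):

```shell
# POSIX-compliant: the label (:) and branch (t) commands each get their own -e
printf 'xx\n' | sed -e ':a' -e 's/x/y/' -e 't a'
# GNU extension: semicolons after : and t happen to work, but are not portable
printf 'xx\n' | sed ':a;s/x/y/;t a'
```

Both loops substitute one x per pass and branch back until no substitution is made, so both print yy.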
Difference between 'sed -e' and delimiting multiple commands with semicolon
By mistake I have created a file named --append. How do I delete it? Simply entering the usual command, rm -f --append, doesn't work.
Try this in order to remove the file: rm -- --append
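A quick way to reproduce and clean up in a scratch directory; a ./ path prefix works as an alternative to --, since the argument then no longer starts with a dash:

```shell
cd "$(mktemp -d)"
touch -- --append       # create the awkwardly named file
rm -- --append          # '--' marks the end of options
touch ./--append        # recreate it
rm ./--append           # a leading ./ works too
```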
How can I delete a file named "--append"? [duplicate]
I'm currently developing a shell script, called up, which shows a usage string on the commandline when called with --help. The output looks like this: $ up --help usage: up [-n levels][--help][--versions][basename]... This looks okay but I'm wondering if I actually need to show the --help and --version options because they are a widely accepted standard and only seem to add noise to the usage string.
This is entirely up to you, but most programs do something like this: program --help Usage: program [<options>] [<arguments> ...] Options: --help show this message, then exit --something after some spaces for alignment, an explanation follows. You should check out getopt, which most programs and scripts use (similar facilities are also available in programming languages). This way people using your script will not get confused. Finally, you should add all your options, even if they seem trivial to you, to be complete. So, I would add both --help and --version in the Options section of the usage.
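A minimal sketch of such a usage message in a shell script; the script name up and its options come from the question, but the parsing and descriptions are illustrative, not the asker's actual code:

```shell
#!/bin/sh
# Illustrative usage message for the 'up' script from the question.
usage() {
    cat <<'EOF'
usage: up [-n levels] [--help] [--version] [basename]...

Options:
  -n levels   go up the given number of directory levels
  --help      show this message, then exit
  --version   print version information, then exit
EOF
}

case ${1-} in
    --help)    usage; exit 0 ;;
    --version) echo "up 1.0"; exit 0 ;;
esac
```

Run with --help to see the full option list, including the trivial-seeming options.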
In the usage string of my custom shell script, shall I also display --help and --version?
Using -r or --recursive causes rsync to recurse into directories. -a or --archive equals -rlptgoD, so -a implies -r. If I have directories source/ and dest/ and I run: rsync source dest then rsync skips source/ and does not copy anything. If I run: rsync -a source dest then -a implies -r and rsync copies source/ and all of its contents to dest/. But if I have a file list.txt that contains the line source, and the full path of my directory source/ is /home/user/source/, and I run: rsync -a --files-from=list.txt /home/user/ dest then rsync only copies source/ to dest/ but does not copy its contents. The same happens if I run the command without the -a option. But if I run the same command with -r: rsync -r --files-from=list.txt /home/user/ dest then rsync copies source/ and all of its contents to dest/. My questions are: Why doesn't -a imply -r when the --files-from=FILE option is used? Is this expected behavior? Given that the command rsync source dest skips source/ and copies nothing because source/ is a directory and neither -a nor -r is used, why does the command rsync --files-from=list.txt /home/user/ dest still copy source/ to dest/? Do the other options implied by -a still work when the --files-from=FILE option is used? Is -r the only option that is left out? Edit: Looks like I should have read the man page more thoroughly. Under the description of the --files-from=FILE option it says: The --archive (-a) option’s behavior does not imply --recursive (-r), so specify it explicitly, if you want it. (Answers my first question.) The --dirs (-d) option is implied, which will create directories specified in the list on the destination rather than noisily skipping them (use --no-dirs or --no-d if you want to turn that off). (Answers my second question.)
-a, --archive This is equivalent to -rlptgoD. It is a quick way of saying you want recursion and want to preserve almost everything (with -H being a notable omission). The only exception to the above equivalence is when --files-from is specified, in which case -r is not implied. When you use --files-from, the recursion is disabled (this is the only option left out). It is assumed that the user knows exactly what specific files to transfer and that they have specified these in the file list that they use with --files-from. If a directory is specified in the file list, its ownership, timestamp etc. will be synchronised, but not its content. You may add the -r flag explicitly though: rsync -av --files-from=file.list -r src/ dst/ This will have the effect that you are looking for.
rsync: Why doesn't --archive imply --recursive when --files-from=FILE is used?
In here grep is used with the option -w. I did man grep and grep --help to try to find out what the aforementioned option does. Neither output says anything about a -w option. What does that option do? Why does it not appear in man or --help? In case something similar happens again, where else can I check for an answer? I am currently using Ubuntu, if that is relevant (is it?)
# grep --help | grep -e -w -w, --word-regexp force PATTERN to match only whole words -H, --with-filename print file name with output lines -L, --files-without-match print only names of FILEs with no selected lines -l, --files-with-matches print only names of FILEs with selected lines # grep PRETTY /etc/os-release PRETTY_NAME="Ubuntu 18.04 LTS" # man grep | grep -e -w -A1 -w, --word-regexp Select only those lines containing matches that form whole words. The test is that the matching substring must either be at the beginning of the line, or preceded by a non-word constituent character. Similarly, it must be either at the end of the line or followed by a non-word constituent character. Word-constituent characters are letters, digits, and the underscore.
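In short, -w restricts a match to whole words; a quick illustration:

```shell
# 'words' and 'keyword' contain 'word', but not as a whole word,
# so only the exact word matches:
printf 'word\nwords\nkeyword\n' | grep -w word
```

This prints just the line word.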
What does grep -w do?
Like many (most), I use git, which by default sends its output (for diffs, logs, etc.) to less, with the options -FRSX. The options are overrideable in .gitconfig by setting the pager to be called with overriding options. E.g.: pager=less -F -+S When I set less to quit after less than one screen of output and not truncate lines (i.e. less -F -+S as in the example above), I get automatically returned to my command prompt after I run (say) a log command. However, if I do have it chop lines (i.e. use only less -F), and any lines get truncated, then when it ends, it doesn't quit immediately, but prints END and waits for me to press Q, which is somewhat annoying. (Note that the problematic behaviour does not happen if no lines are truncated because they are all narrower than my terminal. The problem is not occurring because it is asked to truncate the lines, but that it is actually doing so.) Is there a way to chop lines and still exit from less automatically after less than a screen?
Well... that would be against the idea of paging... wouldn't it? :-) But to answer your question: I'm pretty sure there isn't. This is from the source code of less: /* * The char won't fit in the line; the line * is too long to print in the screen width. * End the line here. */ if (chopline || hshift > 0) <--- you have chop lines (-S) { ... quit_if_one_screen = FALSE; <--- this resets -F } Sorry :-)
Is there a way for "less" to truncate lines and still exit after < 1 screen?
The cp command's info page offers the following on the --preserve= option: links Preserve in the destination files any links between corresponding source files. Note that with -L or -H, this option can convert symbolic links to hard links. followed by an example I don't get [now]; anyhow: Question: How to turn soft- into hardlinks with cp? And is there a way back too [converting hard- into softlinks]? Secondary issue: Where does the "can" in the quote above come into play? I understand the purpose of -L and -H, I'm able to copy fully functional softlinks etc., but so far I haven't managed to turn soft- into hardlinks.
The example in the info page shows you how, though the example is a bit hard to follow: $ mkdir c; : > a; ln -s a b; cp -aH a b c; ls -i1 c 74161745 a 74161745 b Let's break that down into its component commands: mkdir c; : creates the directory c/ : > a; : just a quick way of creating an empty file. It is roughly equivalent to echo "" > a. : is a bash built-in which does nothing, see help :. ln -s a b : create a softlink to a called b. At this point, these are the contents of the current directory: $ ls -l total 4 -rw-r--r-- 1 terdon terdon 0 Oct 9 02:50 a lrwxrwxrwx 1 terdon terdon 1 Oct 9 02:50 b -> a drwxr-xr-x 2 terdon terdon 4096 Oct 9 02:50 c Note that b is a symbolic link (soft link); it does not point to the same inode as a: $ ls -i1 a b 16647344 a 16647362 b cp -aH a b c; : copy files a and b into directory c. This is where the conversion is happening, the options passed to cp are: -a, --archive same as -dR --preserve=all -d same as --no-dereference --preserve=links -H follow command-line symbolic links in SOURCE The -H is necessary because (from info cp): When copying from a symbolic link, `cp' normally follows the link only when not copying recursively. Since -a activates recursive copying (-R), -H is needed to follow symbolic links. -H means that links are followed despite recursion and will result in hard links being made in the target directory. These are the contents of c/ after the last step (the first column is the inode number): $ ls -li c total 0 17044704 -rw-r--r-- 2 terdon terdon 0 Oct 9 02:50 a 17044704 -rw-r--r-- 2 terdon terdon 0 Oct 9 02:50 b Now as to how exactly it works, as far as I can figure out from playing around with it, cp --preserve=links combined with -L or -H will convert symbolic links to hard links if both the link and the target are being copied to the same directory.
In fact, as the OP found out, at least on Debian systems, cp --preserve=links is sufficient to convert symlinks to hard links if the target directory is the same.
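The info-page recipe above can be verified by comparing inode numbers; a sketch assuming GNU cp and GNU stat (as on the Debian systems mentioned):

```shell
cd "$(mktemp -d)"
mkdir c
: > a                      # empty file
ln -s a b                  # b is a symlink to a
cp -aH a b c               # -a preserves links, -H dereferences b on the command line
stat -c '%i %n' c/a c/b    # same inode number: c/b is now a hard link, not a symlink
```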
convert soft- to hardlinks with cp
The way I understand man avconv (version 9.16-6:9.16-0ubuntu0.14.04.1), the following command should convert input.ogg to output.mp3 and carry over metadata: avconv -i input.ogg -map_metadata 0 output.mp3 It does not, however; ogginfo clearly shows the information (artist, album, title, ...) in input.ogg and id3info confirms that output.mp3 has empty (ID3) tags. The same happens when converting ogg to flac, or (presumably) any combination of the formats. Is my understanding of -map_metadata wrong? Is there a way to convert between formats and keep tags (without hardcoding like this)?
Following this answer on Stack Overflow, I tinkered around and found out that the correct parameter depends on the combination of input and output format/codec. These combinations work as intended: OGG → MP3: -map_metadata 0:s:0 FLAC → MP3: -map_metadata 0:g:0 FLAC → OGG: -map_metadata 0 Using -codec libvorbis. In case your FLACs contain covers (as a stream), add -vn to drop that stream (all video streams, really); the result is otherwise a broken file¹. See here for ways to add cover images back in later. Since avconv is officially dead now, I'll note that the same options seem to work with ffmpeg (at least up to 3.4.8). ¹ According to some players, anyway. easyTag would log, "Ogg bitstream contains unknown data", and Android 12 would refuse to play the file, but VLC would see nothing wrong. So YMMV.
Mapping metadata with avconv does not work
How to autologin a specified user with xdm? I know it's possible with other display managers but I wasn't able to figure out how xdm has to be configured to autologin a certain user. Is it possible? Or should I rather remove xdm and simply use an initscript with startx?
I haven't used xdm in a long while but as far as I know autologin is not supported by xdm (and, as per one of the devs, not needed).
How to autologin with XDM?
How can we create an empty file with the Unix name -stuff? That is, the name starts with -. I tried touch -stuff but it didn't work.
In general, most utilities have options that begin with -. Most of those utilities have a feature that allows you to specify an argument that is not an option by supplying the special option --. For those utilities, -- means that no further arguments are options. So in your case, you can use touch -- -stuff. For more information about general conventions that many utilities follow, see Section 12: "Utility Conventions" of the Base Definitions Volume of the Single Unix Specification. Another way to create an empty file is by using the shell's redirection operator like so: > -stuff.
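Both approaches from the answer, shown in a scratch directory; the -stuff2 name is only there to demonstrate the redirection variant separately:

```shell
cd "$(mktemp -d)"
touch -- -stuff     # '--' ends option parsing, so '-stuff' is a filename
: > ./-stuff2       # redirection targets are never parsed as options
ls
```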
How can I create a empty file whose name begins with a dash? [duplicate]
For navigation purposes in less, this tutorial, Less Command in Linux, indicates: g Go to the first line in the file. p Go to the beginning of the file. I tested both (and of course G to go to the bottom), and the result is the same. At first glance, g and G do the opposite of each other and are enough to go to the first line (top) and last line (bottom) respectively - so why is there a p option if it does the same as g?
Heh, this is mischaracterizing what these commands actually do. p is for "percentage". Try typing 20p and you'll jump to 20% of the file length. Nifty! 20g works too, but it goes to the twentieth line. Simply typing g or p just implies 0g or 0p; because the zeroth line and the zeroth byte are both the file's beginning, that works out as the same. You can test this rather easily; I'm assuming you're using zsh: #!/usr/bin/zsh (for i in {1..1000}; echo $i) | less will display 1000 numbered lines, and 33g will jump to line 33, but 33.3p will jump to line 333 :)
less command g vs p option
fgrep --help | fgrep "--help" returns just the whole fgrep --help, how do I return just the lines that have the literal "--help" in them? The quotes don't do anything, nor does \-\-help.
I believe you can use fgrep -- --help to achieve this. The man page mentions fgrep -e --help Quote from http://www.openbsd.org/cgi-bin/man.cgi?query=grep: -e pattern Specify a pattern used during the search of the input: an input line is selected if it matches any of the specified patterns. This option is most useful when multiple -e options are used to specify multiple patterns, or when a pattern begins with a dash (‘-’).
How do you get fgrep to find the literal "--help"?
Both a positional parameter ($1, $2, and so forth) and an option (and/or argument) are written directly after a command, so what is the definition or phrasing that explains how to distinguish them? In other words, how does one formally explain the difference between a positional parameter and an option (and/or argument)?
An option (also commonly called "flag" or "switch") is one type of command line argument. A command line argument is a single word (or quoted string) present on the command line of a utility or shell function. Upon calling a shell script or shell function with a certain number of arguments, each individual argument will be available as a positional parameter inside the script or function. Terminology: An "argument" can be an "option" (like -a, but only if the utility recognises it as an option), an "option-argument" (like foo in -a foo if -a is an option that takes an argument), or an "operand" (a non-option argument that is also not an option-argument, for example foo in -a foo if -a does not take an option-argument). Real example of all of the above (using GNU mv): mv -t targetdir -f file1 file2 Arguments: -t, targetdir, -f, file1, and file2 Options: -t and -f Option-arguments: targetdir Operands: file1 and file2. From the POSIX definitions: [An argument is, in] the shell command language, a parameter passed to a utility as the equivalent of a single string in the argv array created by one of the exec functions. An argument is one of the options, option-arguments, or operands following the command name. [An option is an] argument to a command that is generally used to specify changes in the utility's default behavior. [An option-argument is a] parameter that follows certain options. In some cases an option-argument is included within the same argument string as the option; in most cases it is the next argument. [An operand is an] argument to a command that is generally used as an object supplying information to a utility necessary to complete its processing. Operands generally follow the options in a command line. The positional parameters in a shell script or shell function will be the arguments given on the script's or function's command line, regardless of whether the arguments are options, option-arguments or operands.
The positional parameters may also be set using set -- something "something else" bumblebees This sets $1, $2 and $3 to the three strings and clears any other positional parameters. In this case, the positional parameters no longer have any relation to the arguments passed on the utility's command line. See also: Confusion about changing meaning of arguments and options, is there an official standard definition? What is a "non-option argument"?
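The mv example above can be mirrored in a small getopts sketch (the function and variable names are illustrative): getopts consumes the options and option-arguments, and after shifting them away, the remaining positional parameters are exactly the operands:

```shell
parse() {
    OPTIND=1
    target= force=
    while getopts 't:f' opt; do
        case $opt in
            t) target=$OPTARG ;;   # -t takes an option-argument
            f) force=1 ;;          # -f is a plain option
        esac
    done
    shift $((OPTIND - 1))          # drop options; only operands remain
    echo "target=$target force=$force operands: $*"
}

parse -t targetdir -f file1 file2
```

This prints target=targetdir force=1 operands: file1 file2.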
How to distinguish between a positional parameter and an option?
I have a bash script where I'm trying to assign a heredoc string to a variable using read, and it only works if I use read with the -d '' option, I.e. read -d '' <variable> script block #!/usr/bin/env bash function print_status() { echo echo "$1" echo } read -d '' str <<- EOF Setup nginx site-config NOTE: if an /etc/nginx/sites-available config already exists for this website, this routine will replace existing config with template from this script. EOF print_status "$str" I found this answer on SO which is where I copied the command from, it works, but why? I know the first invocation of read stops when it encounters the first newline character, so if I use some character that doesn't appear in the string the whole heredoc gets read in, e.g. read -d '|' <variable> -- this works read -d'' <variable> -- this doesn't I'm sure it's simple but what's going on with this read -d '' command option?
I guess the question is why read -d '' works though read -d'' doesn't. The problem doesn't have anything to do with read but is a quoting "problem". A "" / '' which is part of a string (word) simply is not recognized at all. Let the shell show you what is sees / executes: start cmd:> set -x start cmd:> echo read -d " " foo + echo read -d ' ' foo start cmd:> echo read -d" " foo + echo read '-d ' foo start cmd:> echo read -d "" foo + echo read -d '' foo start cmd:> echo read -d"" foo + echo read -d foo
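A bash sketch of the working form; the || true is there because read returns non-zero when it reaches end-of-input without finding a NUL delimiter, even though the variable is still filled:

```shell
# -d '' : an empty delimiter argument, which bash treats as NUL,
# so read consumes the entire heredoc in one go
IFS= read -r -d '' str <<'EOF' || true
line one
line two
EOF
printf '%s' "$str"
```

With -d'' instead, the quotes vanish during word splitting, leaving a bare -d, and the next word on the line gets eaten as the delimiter argument.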
How does the -d option to bash read work?
Is it possible to change the mount options of a filesystem after it has been mounted (i.e. without unmounting and mounting it again)?
In order to change the mount to read-only, you can run: $ sudo mount -o ro,remount /mountpoint
Changing the mount options after a filesystem got mounted
I have a few files that were incorrectly encoded during extraction; the file names have now become something similar to -a -b. Now I'm trying to fix this issue with: convmv -f ENCODING -t utf8 --notest * But got: Unknown option: a Unknown option: b So what's the right way to handle this in a script?
Because -a and -b start with - the command thinks they are options. To prevent that stick a -- before the list of filenames like this: convmv -f ENCODING -t utf8 --notest -- * That way everything after -- will be treated as regular arguments without trying to process them as options. This is common in a lot of unix commands.
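The same -- idiom can be tried out with ls standing in for convmv (so the sketch is testable without convmv installed); a ./ glob prefix is another common fix:

```shell
cd "$(mktemp -d)"
touch -- -a -b          # filenames that look like options
ls -- -a -b             # fine: '--' ends option parsing
ls ./-a ./-b            # also fine: a ./ prefix hides the leading dash
```

In a script, convmv -f ENCODING -t utf8 --notest -- ./* combines both defenses.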
Handling filenames that contains a hyphen, within a script
About hiding curl's progress meter, I found many answers across the Stack Exchange sites mentioning -s and -S, or simply -sS, where -s hides the progress meter and -S shows error messages even when -s is used. The suggested combination is therefore -sS. Some posts mention a newer addition to curl, the --no-progress-meter option, such as: How do I get cURL to not show the progress bar? How to suppress cUrl's progress meter when redirecting the output? I read in the man page: --no-progress-meter Option to switch off the progress meter output without muting or otherwise affecting warning and informational messages like --silent does. Note that this is the negated option name documented. You can thus use --progress-meter to enable the progress meter again. Example: curl --no-progress-meter -o store https://example.com See also -v, --verbose and -s, --silent. Added in 7.67.0. and curl ootw: –silent (written by an important curl committer). But sadly it is not clear to me how --no-progress-meter works. At first glance I thought --no-progress-meter was equivalent to -sS, but that is not stated explicitly in either resource, so my assumption seems to be incorrect. I did some experiments: Without error #1 curl https://dlcdn.apache.org/maven/maven-3/3.8.6/binaries/apache-maven-3.8.6-bin.tar.gz -O # Shows % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 8472k 100 8472k 0 0 3500k 0 0:00:02 0:00:02 --:--:-- 3501k #2 curl https://dlcdn.apache.org/maven/maven-3/3.8.6/binaries/apache-maven-3.8.6-bin.tar.gz -O -s # Shows nothing #3 curl https://dlcdn.apache.org/maven/maven-3/3.8.6/binaries/apache-maven-3.8.6-bin.tar.gz -O -sS # Shows nothing #4 curl https://dlcdn.apache.org/maven/maven-3/3.8.6/binaries/apache-maven-3.8.6-bin.tar.gz -O --no-progress-meter # Shows nothing With no error occurring, the difference between -sS and --no-progress-meter is practically invisible.
With error The error occurs because the URL is incorrect: it is just the string https. #1 curl https -O # Shows curl: Remote file name has no length! curl: (23) Failed writing received data to disk/application #2 curl https -O -s # Shows nothing #3 curl https -O -sS # Shows curl: (23) Failed writing received data to disk/application #4 curl https -O --no-progress-meter # Shows curl: Remote file name has no length! curl: (23) Failed writing received data to disk/application Observe that #1 and #4 are the same. Question When should one use --no-progress-meter over -sS? If you can share some real examples to understand the difference, it would be appreciated.
The man page says: -s, --silent Silent or quiet mode. Don't show progress meter or error mes‐ sages. Makes Curl mute. It will still output the data you ask for, potentially even to the terminal/stdout unless you redirect it. Use -S, --show-error in addition to this option to disable progress meter but still show error messages. So essentially, there are four possible combinations, in order of increasing quietness: with no options: display progress meter, warning messages and error messages with --no-progress-meter: display warning messages and error messages, but not the progress meter. This option provides information if something goes wrong, but is silent if there are no problems. with -sS: display error messages only, but not the progress meter nor warning messages. Good if you are writing a script and know that something might cause warning messages that are harmless in that particular situation, but still want to show error messages if something unexpected happens. with -s: be completely silent, no messages at all.
curl: when use "--no-progress-meter" over "-sS"?
Inspired by the recent question Why does the specific sequence of options matter for tar command?, in which the asker learned why tar -cfv test.tar *.jpg doesn't work, I'd like to ask a followup: seriously, why not? When a command has an option -f that requires an argument and an option -v that doesn't, this: cmd -fv foo can be interpreted in 2 different ways: the v is the argument for the -f option and foo is a non-option argument, or foo is the argument for the -f option and the -v option is present. The first interpretation is what POSIX getopt() does, so there are lots of commands that behave that way. I always preferred the second interpretation. Packing all the options together (regardless of whether they take arguments) seems more useful than squishing the foo up against the -f to turn -f foo into -ffoo. But this behavior barely exists anymore. The only command I've used lately that does it is Java's jar (which has a syntax clearly inspired by that Sun version of tar which accepts tar cfv tarfile ...). Xlib has a getopt-like function, XrmParseCommand, which allows options to be specified as either taking "separate" args or "sticky" args. But it deals with long options (-display, -geometry, etc.) so it sees -fv as just another option with no relation to either -f or -v. So it's not an example of my second interpretation. When and why did squished args become dominant? Was it already settled before POSIX, or did the POSIX mandate decide the issue? Did the first version of POSIX even have the same specific requirement as the current version? Is there any archived discussion of the subject from ancient times? Are there any other commands (besides tar and jar) that support or have historically supported the -fv foo = -f foo -v style of option parsing?
First of all, in standard getopt()-style argument processing, arguments don't have to be squished against the option they apply to, they just can be. So if -f takes an argument, both of the following are valid: command -ffoo positional arguments command -f foo positional arguments What you call the "second interpretation" is in fact very, very rare. As far as I can think of right now, tar and mt are the only extant commands that work that way... and, as you mention, jar, but that's only because it emulates tar. These commands process arguments very differently from the standard getopt()-style. The options are not even preceded by -! I can't say for sure why it was rarely used, but I would guess that it's because of the fact that it's harder to tell what options go with what arguments. For example, if b and d take arguments but a, c, and e don't, then you have this: command abcde b-argument d-argument ...which means that while you are composing the command you have to look back at the option letter group, read it again, remember which options you specified require arguments, and write out the arguments in the same order. What about this? command adcbe b-argument d-argument Oops, the d option got the b-argument and vice versa. Worse, if you see: command lmnop foo bar baz ...and you are not familiar with the command, you have no idea which arguments go with which options. foo, bar, and baz might be arguments to n, o, p (and l and m take no arguments) or foo and bar might go with, say m and p while baz is a positional parameter... or many other possible combinations.
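Standard getopts demonstrates the first interpretation directly; a small sketch where the optstring 'f:v' declares that -f takes an argument and -v does not:

```shell
parse() {
    OPTIND=1
    while getopts 'f:v' opt; do
        case $opt in
            f) echo "f=$OPTARG" ;;
            v) echo "v set" ;;
        esac
    done
}

parse -ffoo      # squished:  prints f=foo
parse -f foo     # separate:  prints f=foo
parse -fv        # 'v' is consumed as -f's argument: prints f=v
```

The last call shows why tar -cfv test.tar fails: v becomes the archive name, not a verbosity flag.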
Why did "argument can be squished against option" prevail over "argument is always separate"?
1,393,281,409,000
Suppose some time ago I did cd /path/to/foo/bar and then evince file.pdf. Now if I want to open file.pdf again I have to do both steps again (using history). However, I would like to do it in a single step. I.e., I want evince /path/to/foo/bar/file.pdf to be written to .zsh_history rather than evince file.pdf. How can I achieve this, for example by modifying my .zshrc? evince is only an example; it should work with any command. Are there any drawbacks I should keep in mind with this new behavior? N.B.: Currently I am using z for cd history and fzf for general history.
That's not something that could be done for arbitrary shell code as zsh has no way to know which of the words in the code are actually arguments that a command would treat as a file path let alone as a path relative to the current working directory at the time the code is stored onto the history. For the simplest shell code such as cmd with its literal arguments and for a limited predefined set of commands, you could do something like: commands_with_expanded_paths=(evince okular) zshaddhistory() { local words=( ${(z)1%$'\n'} ) if (( $commands_with_expanded_paths[(Ie)$words[1]] )); then local replaced_words=($words[1]) word for word in $words[2,-1]; do local decoded_word=${(Q)word} if [[ $decoded_word = [^/]* && -e $decoded_word ]]; then word=${(q+)decoded_word:P} fi replaced_words+=($word) done print -rs -- $replaced_words fc -p fi } Where if the first word is the name of a command in a given list, the remaining arguments are replaced with their absolute path if they're found to be relative paths to existing files in the text saved onto the history.
How can I force zsh to write automatically complete path to history?
1,393,281,409,000
I fear I may have to revert to system defaults if I can't get this sorted out. I'm trying to set various system configurations for more robust ext4 for a single-user desktop environment. Trying to assign desired configuration settings where they will take effect properly. I understand that some of these should be included in the file mke2fs.conf so that the filesystems are initially created with those proper settings. But I will address that later, keeping the distro default file for the following. I understand that the EXT4 options I wanted could be set in /etc/fstab. This following entry shows what I would typically want: UUID=00000000-0000-0000-0000-000000000000 /DB001_F2 ext4 defaults,nofail,data=journal,journal_checksum,journal_async_commit,commit=15,errors=remount-ro,journal_ioprio=2,block_validity,nodelalloc,data_err=ignore,nodiscard 0 0 where each DB001_F{p} is a partition on the root disk ( p = [2-8] ). I repeat those options here, in the same sequence as a list, in case that makes it more easy to assimilate: defaults nofail data=journal journal_checksum journal_async_commit commit=15 errors=remount-ro journal_ioprio=2 block_validity nodelalloc data_err=ignore nodiscard Mounting during boot, the below syslog shows all as reporting what I believe to be acknowledged acceptable settings: 64017 Sep 4 21:04:35 OasisMega1 kernel: [ 21.622599] EXT4-fs (sda7): mounted filesystem with journalled data mode. Opts: data=journal,journal_checksum,journal_async_commit,commit=15,errors=remount-ro,journal_ioprio=2,block_validity,nodelalloc,data_err=ignore,nodiscard 64018 Sep 4 21:04:35 OasisMega1 kernel: [ 21.720338] EXT4-fs (sda4): mounted filesystem with journalled data mode. Opts: data=journal,journal_checksum,journal_async_commit,commit=15,errors=remount-ro,journal_ioprio=2,block_validity,nodelalloc,data_err=ignore,nodiscard 64019 Sep 4 21:04:35 OasisMega1 kernel: [ 21.785653] EXT4-fs (sda8): mounted filesystem with journalled data mode. 
Opts: data=journal,journal_checksum,journal_async_commit,commit=15,errors=remount-ro,journal_ioprio=2,block_validity,nodelalloc,data_err=ignore,nodiscard 64021 Sep 4 21:04:35 OasisMega1 kernel: [ 22.890168] EXT4-fs (sda12): mounted filesystem with journalled data mode. Opts: data=journal,journal_checksum,journal_async_commit,commit=15,errors=remount-ro,journal_ioprio=2,block_validity,nodelalloc,data_err=ignore,nodiscard 64022 Sep 4 21:04:35 OasisMega1 kernel: [ 23.214507] EXT4-fs (sda9): mounted filesystem with journalled data mode. Opts: data=journal,journal_checksum,journal_async_commit,commit=15,errors=remount-ro,journal_ioprio=2,block_validity,nodelalloc,data_err=ignore,nodiscard 64023 Sep 4 21:04:35 OasisMega1 kernel: [ 23.308922] EXT4-fs (sda13): mounted filesystem with journalled data mode. Opts: data=journal,journal_checksum,journal_async_commit,commit=15,errors=remount-ro,journal_ioprio=2,block_validity,nodelalloc,data_err=ignore,nodiscard 64024 Sep 4 21:04:35 OasisMega1 kernel: [ 23.513804] EXT4-fs (sda14): mounted filesystem with journalled data mode. 
Opts: data=journal,journal_checksum,journal_async_commit,commit=15,errors=remount-ro,journal_ioprio=2,block_validity,nodelalloc,data_err=ignore,nodiscard But mount shows that some drives are not reporting as expected, even after reboot, and this is inconsistent as seen below: /dev/sda7 on /DB001_F2 type ext4 (rw,relatime,nodelalloc,journal_checksum,journal_async_commit,errors=remount-ro,commit=15,data=journal) /dev/sda8 on /DB001_F3 type ext4 (rw,relatime,nodelalloc,journal_checksum,journal_async_commit,errors=remount-ro,commit=15,data=journal) /dev/sda9 on /DB001_F4 type ext4 (rw,relatime,nodelalloc,journal_checksum,journal_async_commit,errors=remount-ro,commit=15,data=journal) /dev/sda12 on /DB001_F5 type ext4 (rw,relatime,nodelalloc,journal_async_commit,errors=remount-ro,commit=15,data=journal) /dev/sda13 on /DB001_F6 type ext4 (rw,relatime,nodelalloc,journal_async_commit,errors=remount-ro,commit=15,data=journal) /dev/sda14 on /DB001_F7 type ext4 (rw,relatime,nodelalloc,journal_async_commit,errors=remount-ro,commit=15,data=journal) /dev/sda4 on /DB001_F8 type ext4 (rw,relatime,nodelalloc,journal_async_commit,errors=remount-ro,commit=15,data=journal) I read somewhere about a limitation regarding the length of the option string in fstab, so I used tune2fs to pre-set some parameters at a lower level. Those applied via tune2fs are: journal_data,block_validity,nodelalloc which is confirmed when using tune2fs -l: Default mount options: journal_data user_xattr acl block_validity nodelalloc With that in place, I modified the fstab for entries to show as UUID=00000000-0000-0000-0000-000000000000 /DB001_F2 ext4 defaults,nofail,journal_checksum,journal_async_commit,commit=15,errors=remount-ro,journal_ioprio=2,data_err=ignore,nodiscard 0 0 I did a umount for all my DB001_F? 
(/dev/sda*), then I did a mount -av, which reported the following: / : ignored /DB001_F2 : successfully mounted /DB001_F3 : successfully mounted /DB001_F4 : successfully mounted /DB001_F5 : successfully mounted /DB001_F6 : successfully mounted /DB001_F7 : successfully mounted /DB001_F8 : successfully mounted No errors reported for the options string for each of the drives. I tried using journal_checksum_v3, but mount -av failed all with that setting. I used the mount command to see what was reported. I also did a reboot and repeated that mount again for these reduced settings, and mount shows again that the drives are not reporting as expected, and this is still inconsistent as seen here: /dev/sda7 on /DB001_F2 type ext4 (rw,relatime,journal_checksum,journal_async_commit,commit=15) /dev/sda8 on /DB001_F3 type ext4 (rw,relatime,journal_checksum,journal_async_commit,commit=15) /dev/sda9 on /DB001_F4 type ext4 (rw,relatime,journal_checksum,journal_async_commit,commit=15) /dev/sda12 on /DB001_F5 type ext4 (rw,relatime,journal_async_commit,commit=15) /dev/sda13 on /DB001_F6 type ext4 (rw,relatime,journal_async_commit,commit=15) /dev/sda14 on /DB001_F7 type ext4 (rw,relatime,journal_async_commit,commit=15) /dev/sda4 on /DB001_F8 type ext4 (rw,relatime,journal_async_commit,commit=15) Since these are all ext4 type filesystems, and all on the same physical drive, I don't understand the behaviour of the journal_checksum not be uniformly actioned! I also, I find it interesting that there is a dividing line in terms of the 2 classes of behaviour, since the order listed above is the order specified in the fstab (according to /DB001_F?), which presumably is the mounting order ... so what "glitch" is causing the "downgrading" of the remaining mount actions ? My thinking (possibly baseless) is that some properties might be better set at time of creation of the filesystems, and that this would make them more "persistent/effective" than otherwise. 
When I tried to again shift some of the property settings by pre-defining those in mke2fs.conf. mke2fs.ext4 fails AGAIN, I suspect, because the option string is restricted to a limited length (64 characters ?). So ... I have backed away from making any changes to the mke2fs.conf. Ignoring the mke2fs.conf issue for now, and focusing on the fstab and tune2fs functionality, can someone please explain to me what I am doing wrong that is preventing mount from correctly reporting what is the full range of settings currently in effect? At this point, I don't know what I can rely on to provide the actual real state of the ext4 behaviour and am considering simply reverting to distro defaults, which leaves me wanting. Is it possible that all is well and that the system is simply not reporting correctly? I am not sure that I could comfortably accept that viewpoint. It is counter-intuitive. Can someone please assist? Environment UbuntuMATE 20.04 LTS Linux OasisMega1 5.4.0-124-generic #140-Ubuntu SMP Thu Aug 4 02:23:37 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux RAM = 4GB DSK = 2TB (internal, 8 data partitions, 3 1GB swap partitions) [ROOT] DSK = 500GB (internal, 2 data partitions, 1 1GB swap partitions) DSK = 4TB (external USB, 16 data partitions) [BACKUP drive] This is what is being reported by debugfs: Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file dir_nlink extra_isize metadata_csum Not very useful for additional insights into the problem. debugfs shows following supported features: debugfs 1.45.5 (07-Jan-2020) Supported features: (...snip...) journal_checksum_v2 journal_checksum_v3 Noteworthy is that debugfs is showing either journal_checksum_v2 or journal_checksum_v3 available but not the journal_checksum which is referenced in the manual pages. Does that mean that I should be using v2 or v3, instead of journal_checksum?
Given the discussion that has transpired as comments on my original post, I am prepared to conclude that the many changes to the Kernel over the 2+ years since my original install of the UbuntuMATE 20.04 LTS distro are the source of the differences in behaviour observed by the set of 8 ext4 filesystems that were created at different times, notwithstanding the fact that they reside on the same physical device. Consequently, the only way to ensure that all filesystems of a given fstype (i.e. ext4) react identically to mounting options and tune2fs options, and behave/report identically under debugfs, dumpe2fs or mount commands, is to ensure that they are created with the same frozen version of an OS Kernel and the various filesystem utilities that are used to create and tune those filesystems. So, to answer my original question, there is no problem with the filesystems reporting differently, because they are reporting correctly, each for their own historical context leading to their current state. Looking forward to my pending upgrade to UbuntuMATE 22.04 LTS (why I was digging into all this to begin with), to avoid the discrepancies, because the install disk is not the latest for the Kernel or utilities, my defined process must be to: upgrade to the newer OS, reboot, apply all updates, create a backup image of the upgraded+updated OS now residing on the root partition, re-create the root partition with the latest Kernel and utilities (using a duplicate fully-updated OS residing on a secondary internal disk, which is the reason for existence of my 500 GB drive, namely testing, proving and confirming the final desired install before rolling over into "production"), recover the primary fully-updated OS from the backup image to its proper ROOT partition, reboot, then back up all other partitions on the primary disk, recreate those partitions, then restore the data for each of those partitions. 
Only in this manner can all the partitions be created as "equals" with the latest and best offered at the one snapshot in time. Otherwise, the root partition is out of step with all other partitions that are created post-updates following the distro installation. Also, having a script similar to the one I created ensures the required actions will be applied uniformly, avoiding any possible errors that might slip in from the tedium when performing it manually many times. For those who want to be able to manage and review these options in a consistent fashion with a script, here is the script I created for myself: #!/bin/sh #################################################################################### ### ### $Id: tuneFS.sh,v 1.2 2022/09/07 01:43:18 root Exp $ ### ### Script to set consistent (local/site) preferences for filesystem treatment at boot-time or mounting ### #################################################################################### TIMESTAMP=`date '+%Y%m%d-%H%M%S' ` BASE=`basename "$0" ".sh" ` ### ### These variables will document hard-coded 'mount' preferences for filesystems ### BOOT_MAX_INTERVAL="-c 10" ### max number of boots before fsck [10 boots] TIME_MAX_INTERVAL="-i 2w" ### max calendar time between boots before fsck [2 weeks] ERROR_ACTION="-e remount-ro" ### what to do if error encountered #-m reserved-blocks-percentage ### ### This OPTIONS string should be updated manually to document ### the preferred and expected settings to be applied to ext4 filesystems ### OPTIONS="-o journal_data,block_validity,nodelalloc" ASSIGN=0 REPORT=0 VERB=0 SINGLE=0 while [ $# -gt 0 ] do case ${1} in --default ) REPORT=0 ; ASSIGN=0 ; shift ;; --report ) REPORT=1 ; ASSIGN=0 ; shift ;; --force ) REPORT=0 ; ASSIGN=1 ; shift ;; --verbose ) VERB=1 ; shift ;; --single ) SINGLE=1 ; shift ;; * ) echo "\n\t Invalid parameter used on the command line. 
Valid options: [ --default | --report | --force | --single | --verbose ] \n Bye!\n" ; exit 1 ;; esac done workhorse() { case ${PARTITION} in 1 ) DEVICE="/dev/sda3" OPTIONS="" ;; 2 ) DEVICE="/dev/sda7" ;; 3 ) DEVICE="/dev/sda8" ;; 4 ) DEVICE="/dev/sda9" ;; 5 ) DEVICE="/dev/sda12" ;; 6 ) #UUID="0d416936-e091-49a7-9133-b8137d327ce0" #DEVICE="UUID=${UUID}" DEVICE="/dev/sda13" ;; 7 ) DEVICE="/dev/sda14" ;; 8 ) DEVICE="/dev/sda4" ;; esac PARTITION="DB001_F${PARTITION}" PREF="${BASE}.previous.${PARTITION}" reference=`ls -t1 "${PREF}."*".dumpe2fs" 2>/dev/null | grep -v 'ERR.dumpe2fs'| tail -1 ` if [ ! -s "${PREF}.dumpe2fs.REFERENCE" ] then mv -v ${reference} ${PREF}.dumpe2fs.REFERENCE fi reference=`ls -t1 "${PREF}."*".verify" 2>/dev/null | grep -v 'ERR.verify'| tail -1 ` if [ ! -s "${PREF}.verify.REFERENCE" ] then mv -v ${reference} ${PREF}.verify.REFERENCE fi BACKUP="${BASE}.previous.${PARTITION}.${TIMESTAMP}" BACKUP="${BASE}.previous.${PARTITION}.${TIMESTAMP}" rm -f ${PREF}.*.tune2fs rm -f ${PREF}.*.dumpe2fs ### reporting by 'tune2fs -l' is a subset of that from 'dumpe2fs -h' if [ ${REPORT} -eq 1 ] then ### No need to generate report from tune2fs for this mode. ( dumpe2fs -h ${DEVICE} 2>&1 ) | awk '{ if( NR == 1 ){ print $0 } ; if( index($0,"revision") != 0 ){ print $0 } ; if( index($0,"mount options") != 0 ){ print $0 } ; if( index($0,"features") != 0 ){ print $0 } ; if( index($0,"Filesystem flags") != 0 ){ print $0 } ; if( index($0,"directory hash") != 0 ){ print $0 } ; }'>${BACKUP}.dumpe2fs echo "\n dumpe2fs REPORT [$PARTITION]:" cat ${BACKUP}.dumpe2fs else ### Generate report from tune2fs for this mode but only as sanity check. 
tune2fs -l ${DEVICE} 2>&1 >${BACKUP}.tune2fs ( dumpe2fs -h ${DEVICE} 2>&1 ) >${BACKUP}.dumpe2fs if [ ${VERB} -eq 1 ] ; then echo "\n tune2fs REPORT:" cat ${BACKUP}.tune2fs echo "\n dumpe2fs REPORT:" cat ${BACKUP}.dumpe2fs fi if [ ${ASSIGN} -eq 1 ] then tune2fs ${BOOT_MAX_INTERVAL} ${TIME_MAX_INTERVAL} ${ERROR_ACTION} ${OPTIONS} ${DEVICE} rm -f ${PREF}.*.verify ( dumpe2fs -h ${DEVICE} 2>&1 ) >${BACKUP}.verify if [ ${VERB} -eq 1 ] ; then echo "\n Changes:" diff ${BACKUP}.dumpe2fs ${BACKUP}.verify fi else if [ ${VERB} -eq 1 ] ; then echo "\n Differences:" diff ${BACKUP}.tune2fs ${BACKUP}.dumpe2fs fi rm -f ${BACKUP}.verify fi fi } if [ ${SINGLE} -eq 1 ] then for PARTITION in 2 3 4 5 6 7 8 do echo "\n\t Actions only for DB001_F${PARTITION} ? [y|N] => \c" ; read sel if [ -z "${sel}" ] ; then sel="N" ; fi case ${sel} in y* | Y* ) DOIT=1 ; break ;; * ) DOIT=0 ;; esac done if [ ${DOIT} -eq 1 ] then workhorse fi else for PARTITION in 2 3 4 5 6 7 8 do workhorse done fi exit 0 exit 0 exit 0 For those who are interested, there is a modified/expanded script in a follow-on posting. Thank you all for your input and feedback.
OS seems to apply ext4 filesystem options in arbitrary fashion
1,393,281,409,000
I researched the kill, pkill and killall commands, and I understood most of their differences. However, I am confused about their signals: If I run kill -l, I see: 1) SIGHUP 2) SIGINT 3) SIGQUIT 4) SIGILL 5) SIGTRAP 6) SIGABRT 7) SIGBUS 8) SIGFPE 9) SIGKILL 10) SIGUSR1 11) SIGSEGV 12) SIGUSR2 13) SIGPIPE 14) SIGALRM 15) SIGTERM 16) SIGSTKFLT 17) SIGCHLD 18) SIGCONT 19) SIGSTOP 20) SIGTSTP 21) SIGTTIN 22) SIGTTOU 23) SIGURG 24) SIGXCPU 25) SIGXFSZ 26) SIGVTALRM 27) SIGPROF 28) SIGWINCH 29) SIGIO 30) SIGPWR 31) SIGSYS 34) SIGRTMIN 35) SIGRTMIN+1 36) SIGRTMIN+2 37) SIGRTMIN+3 38) SIGRTMIN+4 39) SIGRTMIN+5 40) SIGRTMIN+6 41) SIGRTMIN+7 42) SIGRTMIN+8 43) SIGRTMIN+9 44) SIGRTMIN+10 45) SIGRTMIN+11 46) SIGRTMIN+12 47) SIGRTMIN+13 48) SIGRTMIN+14 49) SIGRTMIN+15 50) SIGRTMAX-14 51) SIGRTMAX-13 52) SIGRTMAX-12 53) SIGRTMAX-11 54) SIGRTMAX-10 55) SIGRTMAX-9 56) SIGRTMAX-8 57) SIGRTMAX-7 58) SIGRTMAX-6 59) SIGRTMAX-5 60) SIGRTMAX-4 61) SIGRTMAX-3 62) SIGRTMAX-2 63) SIGRTMAX-1 64) SIGRTMAX But pkill -l gives: pkill: invalid option -- 'l' Usage: pkill [options] <pattern> Options: -<sig>, --signal <sig> signal to send (either number or name) -e, --echo display what is killed -c, --count count of matching processes -f, --full use full process name to match -g, --pgroup <PGID,...> match listed process group IDs -G, --group <GID,...> match real group IDs -i, --ignore-case match case insensitively -n, --newest select most recently started -o, --oldest select least recently started -P, --parent <PPID,...> match only child processes of the given parent -s, --session <SID,...> match session IDs -t, --terminal <tty,...> match by controlling terminal -u, --euid <ID,...> match by effective IDs -U, --uid <ID,...> match by real IDs -x, --exact match exactly with the command name -F, --pidfile <file> read PIDs from file -L, --logpidfile fail if PID file is not locked --ns <PID> match the processes that belong to the same namespace as <pid> --nslist <ns,...> list which namespaces will be 
considered for the --ns option. Available namespaces: ipc, mnt, net, pid, user, uts -h, --help display this help and exit -V, --version output version information and exit For more details see pgrep(1). Even though there is no list of signals, this command still supports signals; note this line in the previous output: -<sig>, --signal <sig> signal to send (either number or name) And finally, killall -l returns: HUP INT QUIT ILL TRAP ABRT BUS FPE KILL USR1 SEGV USR2 PIPE ALRM TERM STKFLT CHLD CONT STOP TSTP TTIN TTOU URG XCPU XFSZ VTALRM PROF WINCH POLL PWR SYS Question Why are the signal lists for kill, killall and pkill not the same? I assumed pkill and killall should have shown the same output as kill -l - and at first glance, it seems like pkill does not support signals. Environment: I have this situation on Ubuntu Server 18.04, 20.04 and Fedora Workstation 36
Why are the signal lists for kill, killall and pkill not the same? Most likely, because they were implemented differently, with different frames of mind, at different times, by different persons. You should note that all of the commands have some form of a --signal argument that can specify any signal the kernel is capable of sending, regardless of which signals the inline help or manual pages may have written into them by hand. As always, consult a command's documentation (generally available in the manual with man command) for details on its usage, invocation, and options. You can also check §7 of the manual for details; see List of Kill Signals for instance.
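Whatever each tool's help text happens to print, the name-to-number mapping they all rely on is the kernel's, and the shell's kill -l can translate it (shown here with the numeric form, which is the portable one):

```shell
kill -l 15   # number -> name: TERM
kill -l 9    # -> KILL
# All three tools then accept either spelling of the same kernel signal:
#   kill -TERM <pid>     pkill -15 <pattern>     killall -s KILL <name>
```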
Why are the signal lists for kill, killall and pkill not the same?
1,393,281,409,000
I'm currently setting up my first web server without a control panel and so far things are going pretty good! I was just wondering if anyone could direct me to somewhere that explains all of the available command options and what they do? Since I'm mostly following guides to set up specific things on the server, sometimes I run in to a command like sudo mkdir -p /var/www/ve-server{1,2}.com/{html,logs} Which I understand apart from how the -p option is modifying the command.
I think you're looking for the man command. Try doing man mkdir and look for what the -p switch does. You can use vim-style searching here. Use man man for more info on how to use the man command.
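As a concrete illustration of what the manual will tell you: -p makes mkdir create any missing parent directories, and it doesn't complain when a directory already exists. A small sketch in a scratch directory (the paths are stand-ins for the /var/www ones in the question):

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/var/www/ve-server1.com/html"   # creates every missing parent
mkdir -p "$tmp/var/www/ve-server1.com/html"   # already exists: no error with -p
ls "$tmp/var/www/ve-server1.com"              # -> html
```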
List of available command options?
1,453,781,082,000
I'm using bash on Debian. I have to write find -iname "*mp3" -exec cp {} /media/MP3Player/ \; escaping the final semicolon, or else I get an error.
You have to escape the semicolon so that your current shell doesn't see it and use it as a command separator before find gets to see it and use it to terminate the command you're sending to -exec. Also, the manual does state in relevant part: -exec utility [argument ...] ; [...] The expression must be terminated by a semicolon (;). If you invoke find from a shell you may need to quote the semicolon if the shell would otherwise treat it as a control operator. N.B.: quickly searching my find manual shows examples of the \; syntax.
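A runnable sketch of the same pattern, with scratch directories standing in for the real music folder and /media/MP3Player/:

```shell
src=$(mktemp -d); dst=$(mktemp -d)
touch "$src/a.mp3" "$src/b.mp3" "$src/notes.txt"
# the backslash keeps the shell from treating ';' as a command separator,
# so find receives it as the terminator of the -exec command
find "$src" -iname '*mp3' -exec cp {} "$dst"/ \;
ls "$dst"   # -> a.mp3  b.mp3
```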
Why should I escape the semicolon with find?
1,453,781,082,000
For a school project, I have to implement a shell/terminal app. But I am wondering how command line options are parsed, because it appears a bit weird to me. For example, for the command paste (and I believe other utilities behave similarly) [jiewmeng@JM tmp]$ paste --help Usage: paste [OPTION]... [FILE]... Behavior when combining multiple options into one [jiewmeng@JM tmp]$ paste -ds file1 file1 xxx file1 d aaws dafd a Here it appears options are just ignored ... [jiewmeng@JM tmp]$ paste -sd file1 It behaves the same as paste -, waiting for stdin. But when it's a valid combination [jiewmeng@JM tmp]$ paste -sd. file1 file1 xxx.file1.d aaws dafd a It works. Shouldn't the invalid one give an error instead of waiting for stdin? [jiewmeng@JM tmp]$ paste -d paste: option requires an argument -- 'd' Try 'paste --help' for more information. Ok, makes sense, d requires an argument, but ... [jiewmeng@JM tmp]$ paste -d file1 ... waits for stdin. Why?
paste -d takes a mandatory argument. See the man page: -d, --delimiters=LIST reuse characters from LIST instead of TABs This format really means -d LIST or --delimiters=LIST reuse characters from LIST instead of TABs So with paste -d file1, you're setting LIST to file1, and no file name is specified. And as the man page says: With no FILE, or when FILE is -, read standard input. With paste -ds file1, you're setting the delimiter to s. You'd have to supply multiple files to see the effect. Compare $ paste <(printf 'foo\nbar\n') <(printf 'one\ntwo\n') foo<TAB>one bar<TAB>two $ paste -ds <(printf 'foo\nbar\n') <(printf 'one\ntwo\n') foosone barstwo
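A short sketch of the three parses side by side (the file names and contents are made up):

```shell
cd "$(mktemp -d)"
printf 'foo\nbar\n' > f1
printf 'one\ntwo\n' > f2
paste -d. f1 f2   # '.' is -d's argument: foo.one / bar.two
paste -ds f1 f2   # 's' is -d's argument (not the -s flag): foosone / barstwo
paste -sd, f1     # here -s is a flag and ',' is -d's argument: foo,bar
```

The second line is exactly the asker's "options are just ignored" case: the s was never an option at all, it was consumed as the delimiter list.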
Parsing of command line options
1,453,781,082,000
This site presents the xargs command with a -J option that makes it possible to insert the standard input at a desired position among the command's arguments: find . -name '*.ext' -print0 | xargs -J % -0 rsync -aP % user@host:dir/ but in the GNU xargs man page this option is not present. What is the way to do this, for commands accepting it?
I am not sure this is what you were expecting, but in the BSD world (such as macOS) -I and -J differ in how they pass the multiple "lines" to the command. Example: $ ls file1 file2 file3 $ find . -type f -print0 | xargs -I % rm % rm file1 rm file2 rm file3 $ find . -type f -print0 | xargs -J % rm % rm file1 file2 file3 So with -I, xargs will run the command for each element passed to it individually. With -J, xargs will execute the command once, concatenating all the elements and passing them as arguments all together. Some commands such as rm or mkdir can take multiple arguments and act on them the same way as if you passed a single argument and ran them multiple times. But some apps may change behavior depending on how you pass arguments to them. For instance, tar. You may create a tar file and then add files to it, or you may create a tar file by adding all the files to it in one go. $ find . -iname "*.txt" -or -iname "*.pdf" -print0 | xargs -0 -J % tar cjvf documents.tar.bz2 %
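GNU xargs has no -J, but the usual substitute is to wrap the command in sh -c so that the whole batch lands wherever "$@" appears, which can be anywhere among the arguments. A sketch, with echo standing in for the real rsync and user@host:dir/ kept as the placeholder from the question:

```shell
# The items read by xargs become sh's positional parameters,
# so "$@" plays the role of -J's % marker.
printf '%s\0' a.ext b.ext |
  xargs -0 sh -c 'echo rsync -aP "$@" user@host:dir/' sh
# -> rsync -aP a.ext b.ext user@host:dir/
```

Dropping the echo would run the real command; the trailing sh fills $0 for the inner shell.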
Xargs `-J` Option
1,453,781,082,000
I have the following JPEG files : $ ls -l -rw-r--r-- 1 user group 384065 janv. 21 12:10 CamScanner 01-10-2022 14.54.jpg -rw-r--r-- 1 user group 200892 janv. 10 14:55 CamScanner 01-10-2022 14.55.jpg -rw-r--r-- 1 user group 283821 janv. 21 12:10 CamScanner 01-10-2022 14.56.jpg I use $ img2pdf to transform each image into a PDF file. To do that : $ find . -type f -name "*.jpg" -exec img2pdf "{}" --output $(basename {} .jpg).pdf \; Result : $ ls -l *.pdf -rw-r--r-- 1 user group 385060 janv. 21 13:06 CamScanner 01-10-2022 14.54.jpg.pdf -rw-r--r-- 1 user group 201887 janv. 21 13:06 CamScanner 01-10-2022 14.55.jpg.pdf -rw-r--r-- 1 user group 284816 janv. 21 13:06 CamScanner 01-10-2022 14.56.jpg.pdf How can I remove the .jpg part of the PDF filenames ? I.e., I want CamScanner 01-10-2022 14.54.pdf and not CamScanner 01-10-2022 14.54.jpg.pdf. Used alone, $ basename filename .extension prints the filename without the extension, e.g. : $ basename CamScanner\ 01-10-2022\ 14.54.jpg .jpg CamScanner 01-10-2022 14.54 But it seems that syntax doesn't work in my $ find command. Any idea why ? Note : if you replace $ img2pdf by $ echo it's the same, $ basename doesn't get rid of the .jpg part : $ find . -type f -name "*.jpg" -exec echo $(basename {} .jpg).pdf \; ./CamScanner 01-10-2022 14.56.jpg.pdf ./CamScanner 01-10-2022 14.55.jpg.pdf ./CamScanner 01-10-2022 14.54.jpg.pdf
The issue with your find command is that the command substitution around basename is executed by the shell before it even starts running find (as a step in evaluating what the arguments to find should be). Whenever you need to run anything other than a simple utility with optional arguments for a pathname found by find, for example if you need to do any piping, redirections or expansions (as in your question), you will need to employ a shell to do those things: find . -type f -name '*.jpg' \ -exec sh -c 'img2pdf --output "$(basename "$1" .jpg).pdf" "$1"' sh {} \; Or, more efficiently (each call to sh -c would handle a batch of found pathnames), find . -type f -name '*.jpg' -exec sh -c ' for pathname do img2pdf --output "$(basename "$pathname" .jpg).pdf" "$pathname" done' sh {} + Or, with zsh, for pathname in ./**/*.jpg(.DN); do img2pdf --output $pathname:t:r.pdf $pathname done This uses the globbing qualifier .DN to only match regular files (.), to allow matching of hidden names (D), and to remove the pattern if no matches are found (N). It then uses the :t modifier to extract the "tail" (filename component) of $pathname, :r to extract the "root" (no filename suffix) of the resulting base name, and then adds .pdf to the end. Note that all of the above variations would write the output to the current directory, regardless of where the JPEG file was found. If all your JPEG files are in the current directory, there is absolutely no need to use find, and you could use a simple loop over the expansion of the *.jpg globbing pattern: for pathname in ./*.jpg; do img2pdf --output "${pathname%.jpg}.pdf" "$pathname" done The parameter substitution ${pathname%.jpg} removes .jpg from the end of the value of $pathname. You may possibly want to use this substitution in place of basename if you want to write the output to the original directories where the JPEG files were found, in the case that you use find over multiple directories, e.g., something like find . 
-type f -name '*.jpg' -exec sh -c ' for pathname do img2pdf --output "${pathname%.jpg}.pdf" "$pathname" done' sh {} + See also: Understanding the -exec option of `find`
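To see the two suffix-stripping approaches side by side (the pathname is a made-up example in the spirit of the question):

```shell
pathname='./scans/CamScanner 01-10-2022 14.54.jpg'
echo "${pathname%.jpg}.pdf"              # keeps the directory part:
                                         # ./scans/CamScanner 01-10-2022 14.54.pdf
echo "$(basename "$pathname" .jpg).pdf"  # strips the directory part:
                                         # CamScanner 01-10-2022 14.54.pdf
```

That difference is exactly why ${pathname%.jpg} is the better choice when you want each PDF written next to its source JPEG.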
find -exec command options with basename [duplicate]
1,453,781,082,000
I just ran into a weird scenario, and I'm not sure if this is a feature, and if not, what sort of security implications does it represent? Likely nothing for grep, but other directory-crawling utilities, potentially? Here's how to reproduce: touch ./-vR grep hi * Notice that everything not hi is returned, recursively.
That's a known misfeature of GNU getopt (used for option parsing by GNU tools). grep hi -vR is required by POSIX to look for hi in the file called -vR, as options may not be recognised past non-option arguments (like hi here). Most GNU tools, or tools making use of the GNU getopt API in the default mode, don't honour that unless POSIXLY_CORRECT is in the environment. So you need either: POSIXLY_CORRECT=1 grep hi * (force grep to behave in a POSIX compliant way) or grep -- hi * (explicitly mark the end of options with --) or grep hi ./* (make sure all file names start with ./, and so not -). In any case, with: grep -e hi -vR you'd have the problem with GNU and non-GNU grep, as that hi is not a non-option argument, but an argument to the -e option, so you'd need: grep -e hi -- * or (better as it also addresses the problem of a file called -): grep -e hi ./* (POSIXLY_CORRECT wouldn't help).
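A runnable sketch of the last two workarounds (scratch directory and file contents are made up):

```shell
cd "$(mktemp -d)"
printf 'hi\n' > hi.txt
touch ./-vR
grep -- hi *    # -> hi.txt:hi   (-vR is searched as a file, not parsed)
grep hi ./*     # -> ./hi.txt:hi (the ./ prefix gives the same protection)
```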
Bug or Feature? Grep accepts files as flags
1,453,781,082,000
Zsh includes a powerful utility for parsing command line options, zparseopts. Is there an easy way to extract the array of all the command line arguments that don't begin with a hyphen?
Filter the positional parameters $@ with the parameter expansion suffix :#-* to strip elements matching the pattern -* and the parameter expansion flag @ inside double quotes to preserve empty elements. Add the M flag to retain only the elements that match the pattern. non_hyphen_arguments=("${(@)@:#-*}") hyphen_arguments=("${(@M)@:#-*}") However this is not a good way of parsing command line arguments; for example, given myscript hello -a world you will get hello and world in non_hyphen_arguments and -a in hyphen_arguments. The simpler form of argument parsing, with single-letter options, is getopts.
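For comparison, a POSIX getopts sketch (the function name and option letters are hypothetical) shows how the standard parser classifies the same myscript hello -a world input mentioned above:

```shell
demo() {
  OPTIND=1 aflag=0 oval=''        # reset getopts state between calls
  while getopts 'ao:' opt "$@"; do
    case $opt in
      a) aflag=1 ;;
      o) oval=$OPTARG ;;
    esac
  done
  shift $((OPTIND - 1))
  printf 'a=%s o=%s positional: %s\n' "$aflag" "$oval" "$*"
}

demo -a -o val hello world   # a=1 o=val positional: hello world
demo hello -a world          # a=0 o= positional: hello -a world
                             # (getopts stops at the first non-option,
                             #  so the later -a is never misread as a flag)
```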
Is there a simple way to get array of all arguments that do not begin with a hyphen?
1,453,781,082,000
-n noexec ; Read commands and check syntax but do not execute them. Can you give an example of when we will need "noexec" which is one of the Bash options? Can someone give me an example of how to properly use this option?
First, make sure you do not have a file named file in your directory. Create this syntaxErr.bash: echo X > file for i in a b c; echo $i >> file done As you can see, it is missing a do after the for loop. See what happens now: $ bash -n syntaxErr.bash syntaxErr.bash: line 4: syntax error near unexpected token `echo' syntaxErr.bash: line 4: ` echo $i >> file' $ cat file cat: file: No such file or directory $ bash syntaxErr.bash syntaxErr.bash: line 4: syntax error near unexpected token `echo' syntaxErr.bash: line 4: ` echo $i >> file' $ cat file X So, you get the syntax error feedback without actually executing the commands. If you are doing something quite important, you may not want to run the script until all syntax errors have been corrected. Note: this ctafind.bash does not contain a syntax error: echo X > file cta file find . -type z cat was misspelled as cta, and there is no file of type z for find. Bash does not report these mistakes if you run it with the -n flag. $ bash -n ctafind.bash $ bash ctafind.bash ctafind.bash: line 2: cta: command not found find: Unknown argument to -type: z After all, Bash can neither know beforehand whether there is an executable cta, nor what the accepted options of an external command are.
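The same check can be scripted; here is a sketch that generates a file with the missing do and confirms that bash -n rejects it without executing anything (it assumes bash is installed):

```shell
tmp=$(mktemp -d)
cat > "$tmp/bad.bash" <<'EOF'
for i in a b c; echo "$i"
done
EOF
# -n parses only: the syntax error is reported, nothing runs,
# and the exit status is nonzero
bash -n "$tmp/bad.bash" || echo "syntax check failed; nothing was executed"
```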
Bash Shell “noexec” Option Usage Purpose
1,453,781,082,000
Most programs use -v to enable verbose mode. Why does the GNU xargs use -t to enable verbosity?
And the shell uses -x (this is short for "execution trace"). For xargs, the -t option enables "trace mode", i.e. it will show you what it's doing by printing the commands that it is executing. Tracing is just another way of "being verbose". Note that tracing the execution of commands is a particular way of being verbose, as verbosity in other commands (such as GNU mv) doesn't really show the commands being executed but just tells you that it's doing something. You will also notice that the GNU xargs utility has a --verbose option, which is a synonym for -t.
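A quick demonstration: with -t, GNU xargs writes each constructed command to stderr just before running it, so the trace can be captured separately from the normal output:

```shell
tracefile=$(mktemp)
# -n1 builds one 'echo' command per input line; -t traces each to stderr
out=$(printf 'one\ntwo\n' | xargs -t -n1 echo 2>"$tracefile")
trace=$(cat "$tracefile")
rm -f -- "$tracefile"
printf 'output:\n%s\n' "$out"
printf 'trace:\n%s\n' "$trace"
```

The output section shows one and two; the trace section shows the commands echo one and echo two that produced them.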
Why xargs uses -t to enable verbose mode?
1,453,781,082,000
I'm trying to open mplayer to play video without any terminal output using this: mplayer -msglevel all=-1 /path/to/video also: mplayer -really-quiet /path/to/video but it doesn't make it completely silent! and these are printed to output: [flv @ 0x9a5d100]Estimating duration from bitrate, this may be inaccurate [ass] Init [ass] Updating font cache How can I make mplayer completely silent? Thanks
Try this: mplayer file > /dev/null 2>&1
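The order of those two redirections matters: 2>&1 duplicates stderr onto whatever stdout points at *at that moment*, so it must come after > /dev/null. A self-contained sketch of the difference, using a tiny function as a stand-in for a chatty program:

```shell
# A stand-in for a chatty program: one line to stdout, one to stderr
noise() { echo out; echo err >&2; }
# Correct order: stdout is pointed at /dev/null first, then stderr follows it
right=$( { noise > /dev/null 2>&1; } 2>&1 )
# Wrong order: stderr is duplicated onto the *original* stdout first,
# then only stdout is discarded, so "err" still escapes
wrong=$( { noise 2>&1 > /dev/null; } 2>&1 )
printf 'right=[%s] wrong=[%s]\n' "$right" "$wrong"
```

With the correct order nothing escapes; with the wrong order the stderr line still gets through.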
mplayer -msglevel all=-1 doesn't make it completely silent!
1,453,781,082,000
I see this in my dmesg log EXT4-fs (md1): re-mounted. Opts: commit=0 EXT4-fs (md2): re-mounted. Opts: commit=0 EXT4-fs (md3): re-mounted. Opts: commit=0 I think that means that dealloc is disabled? does mdadm not support dealloc?
mdadm supports dealloc. commit=sec is the interval, in seconds, at which the filesystem syncs its data and metadata. Setting this to 0 has the same effect as using the default value of 5. So I don't see the link between mdadm and commit=0 in your question?
what is commit=0 for ext4? does mdadm not support it?
1,453,781,082,000
Before you hit me with the obvious, I know, the backup option makes a backup of a file. But the thing is, the cp command in general backs up a file. One could argue a copy of a file is a backup. So more precisely, my question is this: what does the -b option do that the cp command doesn't do already? The cp(1) man page gives the following description of the --backup option: make a backup of each existing destination file This definition isn't very useful, basically saying "the backup option makes a backup". It gives no indication as to what -b adds to the cp command. I know -b puts some suffix at the end of the name of the new file. But is there anything else it does? Or is that it? Is a -b backup just a cp command that adds something to the end of the filename? Thank you P.S. Do you typically use -b when making backups in your daily work? Or do you just stick to -a?
It makes a backup copy of each destination file that already exists. The ones that would otherwise get overwritten and lost. $ mkdir foo; cd foo $ echo hello > hello.txt $ echo world > world.txt $ cp -b hello.txt world.txt $ ls hello.txt world.txt~ world.txt $ cat world.txt hello $ cat world.txt~ world That world.txt~ being the backup file it created. If you look closely, you'll see that the backup file is actually the original file, just renamed. (i.e. the inode number stays the same, and so do e.g. the permissions of that file.)
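Going a step further with GNU cp, --backup=numbered keeps every displaced version instead of renaming over the single tilde backup on each copy:

```shell
dir=$(mktemp -d)
cd "$dir"
echo v1 > src
echo old > dst
cp --backup=numbered src dst   # the old dst is renamed to dst.~1~
echo v2 > src
cp --backup=numbered src dst   # the v1 copy is renamed to dst.~2~
ls
cat 'dst.~1~'
```

After the two copies, ls shows dst, dst.~1~, dst.~2~ and src, and dst.~1~ still holds the original contents that a plain -b would have overwritten on the second copy.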
What precisely does cp -b (--backup) actually do?
1,453,781,082,000
With the unzip -n /path/to/filename/filename.zip command the archive is uncompressed without overwriting existing files. This approach is useful when the same archive was uncompressed before and some files have since been deleted or renamed - with the -n option they can be restored. What is the same approach for the tar command when extracting? Normally I use the tar -xzf /path/to/filename.tar.gz command
Two options come to mind that should do what you want. From the tar man page: -k, --keep-old-files don't replace existing files when extracting, treat them as errors Alternatively: --skip-old-files don't replace existing files when extracting, silently skip over them
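A small self-contained demonstration of --skip-old-files with GNU tar: the file already on disk survives the extraction untouched.

```shell
dir=$(mktemp -d)
cd "$dir"
echo original > file.txt
tar -czf backup.tar.gz file.txt
echo changed > file.txt
# Extraction silently skips file.txt because it already exists
tar -xzf backup.tar.gz --skip-old-files
cat file.txt
```

The final cat still prints changed; with -k instead, tar would additionally report the skipped file as an error.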
How to extract the tar.gz file but without overwriting existing files?
1,453,781,082,000
I went through the following valuable tutorial: Ps Command in Linux (List Processes) If the ps -ef command is executed then the output has the following header: UID PID PPID C STIME TTY TIME CMD ... ... The same tutorial contains an explanation of the STIME column/header. But in man ps, in the STANDARD FORMAT SPECIFIERS section - and even when searching the man page with the /STIME search term - the STIME term/column/header does not appear. Note: I am assuming the same would happen for other columns/headers, depending on the option(s) applied to the ps command. So ... Question How is one expected to know all the headers with their respective descriptions? Linux Distribution This scenario happens on Ubuntu Server 18.04 and 20.04
If you’re using the procps implementation of ps (which is the most common on Linux distributions you’re likely to be using), the headers are listed in the “STANDARD FORMAT SPECIFIERS” section. STIME is documented since version 3.3.17, released in February 2021: stime STIME see start_time. (alias start_time). The fact that it wasn’t documented until version 3.3.17 illustrates that it does happen that headers aren’t documented, but such cases are bugs. You’ll get procps 3.3.17 or later in Ubuntu 21.10 and later.
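A practical shortcut when a header is missing from the manual: feed candidate specifiers to -o and see which one reproduces the column. Here the stime specifier and its documented alias start_time print the same start-time column:

```shell
# Both of these print the same start-time column for the current shell
ps -o pid,stime -p "$$"
ps -o pid,start_time -p "$$"
# A trailing '=' suppresses the header, which is handy in scripts
mypid=$(ps -o pid= -p "$$" | tr -d ' ')
echo "$mypid"
```

The last command prints just the shell's own PID with no header line.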
ps command: how to know all the headers with their respective descriptions?
1,453,781,082,000
If ps -p 3384 3395 is executed (observe -p is lowercase) then the output is as follows: PID TTY STAT TIME COMMAND 3384 tty6 S+ 0:00 man ls 3395 tty6 S+ 0:00 pager Up to here all is fine and expected. By mistake, ps -P 3384 3395 was executed (observe -P is uppercase) and the output is as follows: PID PSR TTY STAT TIME COMMAND 3384 2 tty6 S+ 0:00 man ls 3395 3 tty6 S+ 0:00 pager Observe that a new header appears in this output - it is PSR Question What does -P mean in the context of the ps command? And yes, I already read both man ps and ps --help all, where the documentation for the -p option/parameter appears as follows, respectively: # Approach 1 p pidlist Select by process ID. Identical to -p and --pid. -p pidlist Select by PID. This selects the processes whose process ID numbers appear in pidlist. Identical to p and --pid. # Approach 2 -p, p, --pid <PID> process id --ppid <PID> parent process id But about -P nothing appears. To be honest, when the ps command was executed with -P, I theoretically expected an error because -P does not exist - that is, because -P is not documented. Extra Questions If -P theoretically does not exist because it is not documented, why was an error not thrown? What does PSR mean?
I don't have any knowledge of why -P is not documented. I wonder if the feature was not fully supported at some point in the past? -P (or -o psr) sets the output to include PSR, which the manual states is: psr PSR processor that process is currently assigned to. There is a comment in the help code for ps that suggests -P is "missing" from the help. Going back to the initial checkin that I can find, several other flags used to be so marked (including -c, -L, and -M). This is the only of the "dash" options I see still listed as missing from the help page.
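The same column is reachable without the undocumented -P switch, through the documented psr format specifier:

```shell
ps -o pid,psr,comm -p "$$"
# Header suppressed with '='; the value is the CPU the process last ran on
cpu=$(ps -o psr= -p "$$" | tr -d ' ')
echo "last scheduled on CPU $cpu"
```

The value is a small non-negative integer, one of the CPU numbers on the machine.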
What does '-P' mean in the context of the 'ps' command?
1,453,781,082,000
Is it possible to create a symbolic link to an executable that executes it with a certain option/argument? I know a workaround would be to create a bash script in one of the PATH directories, but can I achieve it somehow with a symbolic link? EDIT: Thanks for the answers. In my case an alias wouldn't do the job, because I'm looking for a way to start matlab from dmenu, and at least on Arch matlab is initially only invokable from a terminal. Since dmenu does not consider aliases it wouldn't work... I should have made my problem clearer.
No, a symbolic link is a type of file that references the path of another file. Now, if you do: ln -s /bin/cat foo And invoke foo as: $ ./foo -A /proc/self/cmdline ./foo^@-A^@/proc/self/cmdline^@ You'll notice that the first argument that cat/foo received was ./foo and not cat. So, in a way, through that symlink, we've had cat receive a different first argument. That's probably not what you had in mind for your first argument though. Using a shell script wrapper is the typical way to address it. You don't need to use bash for that though. Your system's standard sh will be more than enough for that: #! /bin/sh - exec /path/to/my/executable --extra-option "$@" Other options include defining an alias or function in your ~/.bashrc/~/.zshrc... for it
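To make the wrapper route concrete for a case like the dmenu/matlab one, here is a sketch. /bin/echo stands in for the real program so the example is runnable as-is; in practice you would exec the program's actual path (and any fixed options) from a script placed in a PATH directory such as ~/bin, so launchers like dmenu pick it up:

```shell
wrapper=$(mktemp)
cat > "$wrapper" <<'EOF'
#!/bin/sh -
# Stand-in for: exec /path/to/real/program --some-option "$@"
exec /bin/echo would-run: --some-option "$@"
EOF
chmod +x "$wrapper"
out=$("$wrapper" extra-arg)
rm -f -- "$wrapper"
echo "$out"
```

Any extra arguments given to the wrapper are appended after the fixed option, which is exactly what the symlink alone cannot do.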
Symbolic link with option
1,453,781,082,000
I know the commands to restart/stop/start, but when I try to pass options it doesn't seem to work! CENTOS 6, MySQL 5.14 service mysql restart service httpd restart Then I tried this: /etc/init.d/mysql --general_log /my/log/path.log That doesn't work either (the error message says it doesn't have access) =/ Update: Apparently I can't run mysqld as root because of security issues
To change options permanently and in the sanctioned manner, edit the files in /etc/sysconfig that have the same name as the service. For example, consider httpd. On one system I have, there are several things you can set: # Processing model HTTPD=/usr/sbin/httpd.worker # Additional options OPTIONS= # Set locale HTTPD_LANG=C (The actual file is much more verbose and explanatory than this.) There should be files in /etc/sysconfig for virtually every service.
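For the general query log from the question specifically, the persistent counterpart of passing --general_log on the command line is an entry in the MySQL configuration file. This is a sketch: the option names are real MySQL system variables, but the file location (/etc/my.cnf on CentOS) and the log path are assumptions to adapt.

```ini
# /etc/my.cnf (sketch) -- turn on the general query log at startup
[mysqld]
general_log      = 1
general_log_file = /my/log/path.log
```

After editing, restart the service for the change to take effect; this also avoids running mysqld by hand as root.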
If I'm logged in as root, how do I restart mysql or apache with options?
1,453,781,082,000
I want to make some configuration changes on my CentOS + Apache setup, to let the httpd server send a 200 response when a client makes an OPTIONS request. There is a very old post here (2011): Returning “200 OK” in Apache on HTTP OPTIONS requests. The configuration may not fit current OS and Apache versions. If the configuration is in good shape, curl -X OPTIONS -i http://remote_ip/remote.html should get a 200 return code. Here are my tries: 1. cat .htaccess AuthName "login" AuthType Basic AuthUserFile /var/www/html/passwd require user usernam Options -Indexes <LimitExcept OPTIONS> Require valid-user </LimitExcept> Restart with systemctl restart httpd. Error info for the command curl -X OPTIONS -i http://remote_ip/remote.html: <title>500 Internal Server Error</title> </head><body> <h1>Internal Server Error</h1> <p>The server encountered an internal error or misconfiguration and was unable to complete your request.</p> Delete the above config in .htaccess. 2. cat /etc/httpd/conf/httpd.conf <Directory "/var/www/html"> Options Indexes FollowSymLinks AllowOverride AuthConfig Require all granted Header always set Access-Control-Allow-Origin "*" Header always set Access-Control-Allow-Methods "POST, GET, PUT, DELETE, OPTIONS" Header always set Access-Control-Allow-Credentials "true" Header always set Access-Control-Allow-Headers "Authorization,DNT,User-Agent,Keep-Alive,Content-Type,accept,origin,X-Requested-With" RewriteEngine On RewriteCond %{REQUEST_METHOD} OPTIONS RewriteRule ^(.*)$ blank.html [QSA,L] </Directory> Restart with systemctl restart httpd. Error info for the command curl -X OPTIONS -i http://remote_ip/remote.html: HTTP/1.1 401 Unauthorized Date: Sat, 08 Sep 2018 00:34:36 GMT Server: Apache/2.4.6 (CentOS) Access-Control-Allow-Origin: * Access-Control-Allow-Methods: POST, GET, PUT, DELETE, OPTIONS Access-Control-Allow-Credentials: true Access-Control-Allow-Headers: Authorization,DNT,User-Agent,Keep-Alive,Content-Type,accept,origin,X-Requested-With WWW-Authenticate: Basic realm="login" Content-Length: 381 Content-Type: text/html; charset=iso-8859-1 <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN"> <html><head> <title>401 Unauthorized</title> </head><body> <h1>Unauthorized</h1> <p>This server could not verify that you are authorized to access the document requested. Either you supplied the wrong credentials (e.g., bad password), or your browser doesn't understand how to supply the credentials required.</p> </body></html>
First, there is an issue with your .htaccess file: In lines 6-8, you require that the user is authenticated, but only if it's not an OPTIONS request. This is fine. In line 4 however, you require that the user is authenticated as user usernam, regardless of the request method (GET, POST, OPTIONS, etc...) So if you remove line 4 or move it into the LimitExcept section your config should work. For more information see the mod_authz_core docs Second, the error message for the first solution you posted ("The server encountered an internal error or misconfiguration...") hints at an invalid httpd.conf file. There may be something else misconfigured. Check your configuration and the Apache documentation. As a reference, the config files I used for testing can be found at: https://github.com/mhutter/stackexchange/tree/master/467654
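Putting that first point together, the corrected .htaccess would look like this sketch (the username is the one from the question):

```apache
AuthName "login"
AuthType Basic
AuthUserFile /var/www/html/passwd
Options -Indexes
# Authentication is required for every method *except* OPTIONS
<LimitExcept OPTIONS>
    Require user usernam
</LimitExcept>
```

With the user requirement inside LimitExcept, an OPTIONS request is no longer challenged for credentials.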
How to make httpd response 200 for options request?
1,453,781,082,000
According to several sources, the UNIX utility guidelines specify that operands should always be processed after options: utility_name[OPTIONS][operands...] Some older UNIX utilities are known to not follow these conventions quite so, e.g, find, but newer and well-established utilities do too break the rules without an apparent explanation, e.g, curl <url>. I would like to know if there is a good reason for this and what is the community general consensus on this.
The normal convention is that arguments always follow options. The first non-option (the first string on the command line that does not start with -) terminates the options and begins the arguments. Some tools, notably the build tools (compilers, linkers), have always gone against this convention. Another example that you note is find. Sometimes this is done because the options take effect at the point on the command line where they appear, so you need a way to specify arguments both before and after the option, where the option applies to that argument only if the argument appears after the option. This convention allows you to write a shell script that contains a line like this: rm foobar ${more_things_to_remove} ...and guarantee that you can't accidentally add options to the rm command even if the shell variable more_things_to_remove has a nasty value like "-rf". That convention predates the more recent convention of using the special option -- to terminate option processing. -- is a much better way of marking the end of options explicitly: rm -- foobar ${more_things_to_remove} # and it works even if you don't need to delete something called "foobar": rm -- ${more_things_to_remove} So lately (and by lately, I mean this has already been going on for many, many years) lots more command line parsers appear to have been moving toward breaking the earlier convention and allowing options and arguments to be mixed apparently everywhere (subject always to -- forcing the end of options) even if they don't have any special reason to break the convention like compilers and some other tools did. Personally I never know which utilities still adhere to the convention and which don't, so I always place options before arguments as before, and I am mildly surprised when I see someone else's working code which does it in the opposite order!
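A self-contained illustration of why -- matters, using a file whose name looks like an option bundle:

```shell
dir=$(mktemp -d)
cd "$dir"
touch -- '-rf'      # create a file literally named "-rf"
ls
# Without '--', rm would parse -rf as its recursive/force options
rm -- '-rf'
ls | wc -l
```

The first ls shows the awkwardly named file; after rm -- '-rf' the directory is empty again.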
Why do some utilities parse operands before options?
1,453,781,082,000
TL;DR: How is useradd --no-log-init actually used (in GNU/Linux, specifically Debian)? I read the command's man and info page description of this option as: »user will not be listed in the lastlog and faillog files / output«. I know that the PAM module took over most of the actual login work. I understand the lastlog and the faillog commands, and I'm aware that via the latter e.g. the number of login attempts and such can be set. I also know that bad login attempts are recorded in /var/log/utmp. This strengthens my suspicion that this option is a »leftover« from back before the PAM module took over the job.
If you look at the useradd.c source there is this bit. When the command line switch --no-log-init was not given (so lflg is unset), the faillog_reset and lastlog_reset functions are called: if ((!lflg) && (getpwuid (user_id) == NULL)) { faillog_reset (user_id); lastlog_reset (user_id); When lastlog_reset is called this bit will modify the lastlog file: fd = open (LASTLOG_FILE, O_RDWR); if ( (-1 == fd) || (lseek (fd, offset_uid, SEEK_SET) != offset_uid) || (write (fd, &ll, sizeof (ll)) != (ssize_t) sizeof (ll)) || (fsync (fd) != 0) || (close (fd) != 0)) { The above shows the file lastlog being opened for read & write (O_RDWR), followed by an if statement that makes sure the file was opened successfully, followed by a seek within the file to a location and a write of the new user's info to the file. Afterwards the file is closed. Based on this I would assume that that option controls whether a user's UID is added to the lastlog "database" file and nothing more.
useradd --no-log-init [comprehension question]
1,453,781,082,000
I would like to selectively replace a command-line argument that is being passed to automatically format it for the downstream command being executed. The argument will have spaces and that is the point of contention. I'm presently doing this: set -- $(echo $* | sed -e "s/$_ARG/--description=\"$_ID - $_SUMMARY\"/") The new argument, --description="$_ID - $_SUMMARY" gets split. I run a downstream command: <cmd> "$@" I may have any number of arguments, but a sample use case is: FROM activity --description='handle null' TO: activity --description='$SOME_VARIABLE - handle null' Ultimately, when I run the downstream command even with "$@" it is already split there, so it doesn't work as I intend. It ends up like activity --description=value - handle null --description=value, -, handle, and null then are considered separate arguments.
There are a few issues in your code. One of them is using $* unquoted, which will cause the shell to split the original arguments into words on whatever characters are in $IFS (space, tab, newline, by default) and apply filename globbing on the generated words. Quoting $* as "$*" is also not quite what you want if you ever want to support multiple arguments containing spaces, tabs or newlines as this would be a single string. Switching to using "$@" would not help as echo would just produce a each argument with spaces in-between for sed to read. echo may do special processing of any string containing backslash sequences like \n and \t, depending on the shell and its current settings. In some shells, echo -n may not output -n (there may be other problematic strings too, like -e). Using sed to modify the arguments would possibly work on a single argument if you're happy treating it as text (arguments could potentially be multi-line strings), but in this case you are applying some editing script on all arguments at once, which may misfire. What splits the resulting string though, is the non-quoting of the command substitution used with set. This re-splits the result from sed and applies filename globbing on the result again. You will need to parse the command line options that you intend to modify. In short, loop over the arguments, and modify the ones you want to modify. The following sh script adds the string hello - at the start of the option-argument of each instance of the --description long option. If the long option is immediately followed by a space, as in --description "my thing", then this is rewritten with a =, as if the script had been called with --description="my thing", before this is modified into the final --description="hello - my thing". #!/bin/sh SOME_VARIABLE=hello skip=false for arg do if "$skip"; then skip=false continue fi # Re-write separate option-argument with "=". 
# This consumes an extra argument, so need to skip # next iteration of the loop. case $arg in --description) arg=--description=$2 shift skip=true esac # Add the value "$SOME_VARIABLE - " to the start of the # option-argument of the --description long option. case $arg in --description=*) arg=--description="$SOME_VARIABLE - ${arg#--description=}" esac # Put the (possibly modified) argument back at the end # of the list of arguments and shift off the first item. set -- "$@" "$arg" shift done # Print out the list of arguments as strings within "<...>": printf '<%s>\n' "$@" ${arg#--description=} removes the prefix string --description= from the value of $arg, leaving the original option-argument string. Example run: $ sh ./script -a -b --description="my thing" -c -d --description "your thing" -e <-a> <-b> <--description=hello - my thing> <-c> <-d> <--description=hello - your thing> <-e> The code may be simplified significantly if you always will be expecting to have the long option and its option-argument delimited by a = character: #!/bin/sh SOME_VARIABLE=hello for arg do # Add the value "$SOME_VARIABLE - " to the start of the # option-argument of the --description long option. case $arg in --description=*) arg=--description="$SOME_VARIABLE - ${arg#--description=}" esac # Put the (possibly modified) argument back at the end # of the list of arguments and shift off the first item. set -- "$@" "$arg" shift done printf '<%s>\n' "$@" Test run using same arguments as above (the second instance of --description will not be modified as it does not match the pattern --description=*): $ sh ./script -a -b --description="my thing" -c -d --description "your thing" -e <-a> <-b> <--description=hello - my thing> <-c> <-d> <--description> <your thing> <-e> A bash variant of the shorter second script from above, using shell pattern matching with [[ ... ]] in place of case ... 
esac, and using an array to hold the possibly modified arguments during the course of the loop: #!/bin/bash SOME_VARIABLE=hello args=() for arg do if [[ $arg == --description=* ]]; then arg=--description="$SOME_VARIABLE - ${arg#--description=}" fi args+=( "$arg" ) done set -- "${args[@]}" printf '<%s>\n' "$@"
Replacing command line arguments while preserving spaces
1,453,781,082,000
What is the preferred way to test if a command takes an option? What are the caveats? As a motivating example, at login my shell aliases grep to add several --exclude-dir options but this option is not available on all the machines I access.
You can do a test on dummy data which should succeed if and only if the option is available and working as expected: trap 'if [ -e "$tmp" ]; then rm -rf -- "$tmp"; fi' EXIT tmp="$(mktemp -d)" cd -- "$tmp" mkdir exclude mkdir include echo foo > include/test.txt echo foo > exclude/test.txt [ "$(grep --exclude-dir exclude --recursive foo . | wc -l)" -eq 1 ]
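For the specific grep --exclude-dir alias in the question, a lighter (though less thorough) probe is to have grep parse the option alongside --version; an unknown option makes it exit non-zero before anything runs:

```shell
# Succeeds on greps that know --exclude-dir (e.g. GNU grep), fails on
# those that reject the option
if grep --exclude-dir=.git --version >/dev/null 2>&1; then
  supported=yes
else
  supported=no
fi
echo "exclude-dir supported: $supported"
# A real ~/.bashrc would then guard the alias:
#   [ "$supported" = yes ] && alias grep='grep --exclude-dir=.git'
```

This only verifies that the option is parsed, not that it behaves as expected, which is why the functional test on dummy data above is the more reliable check.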
Test if command accepts a specific option
1,453,781,082,000
The question of why some commands rely on manpages whereas others rely on something like the --help flag for providing command usage reference is not new. There is usually a difference in scope between documentation for a command and a command usage synopsis. The latter is often a subset of the former. But even when most commands and utilities have manpages, for instance, there exist differences in the formatting of their synopsis sections which have very practical implications when trying to extract such information. In other cases one might find clues with the strings utility when a command has seemingly no documentation. I was interested in the commands I have on this QNX platform and discovered the use command1 to display usage information. As explained in usemsg, the framework involves setting a standard usage record in the utility's source; once compiled, this can be accessed with the use command, and you can also wrap the native functionality etc. It is quite convenient, as I could simply do use -d dir >>file on /base and /proc/boot to extract the usage for basically all the commands on the system. So I then briefly looked at the source for GNU coreutils ls and FreeBSD ls to see if they did something like that, and the former puts usage information in a function named usage (for --help, I guess) while the latter doesn't seem to put it anywhere at all(?). Is this sort of solution (use) typical of what you find on commercial Unix to present command usage reference interactively? Does POSIX/SUS recommend or suggest anything about presenting/implementing command usage reference in commands (as opposed to specifying notation for shell utilities)? 1. use command: use Print a usage message (QNX Neutrino) Syntax: use [-aeis] [-d directory] [-f filelist] files Options: -a Extract all usage information from the load module in its source form, suitable for piping into usemsg. -d directory Recursively display information for all files under directory. -e Include only ELF files. -f filelist Read a list of files, one per line, from the specified filelist file, and display information for each. -i Display build properties about a load module. -s Display the version numbers of the source used in the executable. files One or more executable load modules or shell scripts that contain usage messages.
Commercial unices generally present usage information only in man pages. Having the command itself display usage information is not a traditional Unix feature (except for displaying the list of supported options, but without any explanation, on a usage error). POSIX and its relatives don't talk about anything like this. Having a --help option that displays a usage summary (typically a list of options, one per line, with a ~60 characters max description for each option) is a GNU standard. As far as I know, this convention was initiated by the GNU project, as part of the double-dash convention for multi-letter option names. There are other utilities, such as X11 utilities, that use multi-letter option names with a single dash and support -help; I don't know which one came first. The use command is a QNX thing.
Interactive command usage reference: do you generally have that on Unix?
1,453,781,082,000
I am running a program fls (from the Sleuth Kit) with option -v for verbose mode. However it takes too long, and the program is still running since yesterday. I guess it will run faster without verbose mode, but I am not sure how long it takes to finish running and whether it is worth to stop and rerun it without verbose. so I wonder if it is possible to turn off verbose mode in the middle of running and resume the running after that? Thanks!
As lynxlynxlynx points out, unless the program author makes provisions for it, you cannot change the verbosity while the program is running, but you can keep it from printing to a terminal in case that is a bottleneck. To do this, close the terminal after telling the shell not to send a SIGHUP. Most shells will send a SIGHUP to any jobs that are still running when you try to exit. You can tell the shell not to do this. There are various ways to do this; the most straightforward is probably with disown. If you haven't yet, suspend the job with ctrl+z, then make it run again in the background with bg, then run disown. The shell no longer tracks this process as a job, so it will not send a SIGHUP when exiting. If you have already put the program in the background, then if there are any other background jobs that were started after it, you'll need the jobspec of the program you're interested in to use as a parameter to pass to bg and disown.
Is it possible to disable verbose in the middle of running?
1,453,781,082,000
I know how to restrict standard users from running a command by removing execute permissions for that command. But is it possible to restrict standard users from running a command with a specific option/argument? For example, a standard user should be able to run the following command: ls but not: ls -l I think this should be possible, since there are some commands like chsh or passwd which a standard user can run, but he gets permission denied when he runs chsh root or passwd -a -S.
I think the only way would be to write your own wrapper for the command/utility in question and have it decide what is allowed or not allowed based on the (E)UID of the user who started it. The tools you mention that do this, such as chsh or passwd, have this functionality built into their implementation. How to write a wrapper for ls #!/usr/bin/perl use strict; use warnings; my $problematic_uid = 1000; # For example my $is_problematic = $< == $problematic_uid; if ($is_problematic and grep { $_ eq '-l' } @ARGV) { die "Sorry, you are not allowed to use the -l option to ls\n"; } exec '/new/path/to/ls', @ARGV; The list form of exec passes the arguments through verbatim without involving a shell. (Note that this only catches a literal -l; a combined option like -la would still slip through, so robust enforcement would need proper option parsing.) You need to ensure that the path to the original ls isn't in your user's PATH, which is why I wrote /new/path/to/ls. The problem is, this wrapper requires that your user be able to execute the original ls, so the user may still circumvent it by calling the original ls directly.
Restrict standard users to run a command with a specific argument
1,453,781,082,000
I'm sure there used to be an option in less which allowed you to page onto the next file after you reached the end of the current file, so you could just keep *space*ing through a bunch of short files without having to keep :ning to get to the next one. Could someone remind me what it is?
You might try the -e (--quit-at-eof) option. The less manual describes it as causing less to automatically exit the second time it reaches end-of-file; with multiple files this lets you keep pressing Space through each file in turn instead of typing :n.
What is the `less` command line option to page to the next file at the end of the current one?
1,453,781,082,000
I am trying to make a shell script which will ask some questions to the user, and will issue a final command with some or other options depending on what the user chose. Right now, the script looks like this: if [[ $a == "y" ]] ; then command --option 1 argument elif [[ $a == "n" ]] ; then command --option 2 argument else command --option 3 argument fi Considering the command is very long, and contains a lot of options and arguments which remain constant between the different statements, I was wondering if I could in some way write a single line, with variable options being considered only if the corresponding condition is true. This also applies to GNU parallel issuing one or more commands: if [[ $b == "n" ]] ; then find ./ -name '*.extension' | parallel -j $(nproc) command1 --option argument else find ./ -name '*.extension' | parallel -j $(nproc) command1 --option argument\; command2 --option argument fi
Sure, you can store the options to pass in a variable. Your first example could be something like this (also, [[ is a bash feature, not available in POSIX shell): if [[ $a == "y" ]] ; then arg=1 elif [[ $a == "n" ]] ; then arg=2 else arg=3 fi command --option "$arg" argument Your second example: if [[ $b != "n" ]] ; then extra="; command2 --option argument" fi find ./ -name '*.extension' | parallel -j $(nproc) command1 --option argument$extra # if unset, $extra will be empty; you can of course explicitly # set it to '' if this bothers you. These work because of how variable expansion works: it's just substituted into the command line, then (if unquoted) word-split, then passed to the command. So the called command doesn't know about the variables at all; the shell expanded them before calling it. Since you're using bash, you can also use arrays: args=() if [ -n "$OPT_LONG" ]; then args+=(-l) fi if [ -n "$OPT_SORT_TIME" ]; then args+=(-t) fi ls "${args[@]}" The array feature lets you easily build up arbitrarily long argument lists without worrying about word splitting breaking your code.
Issuing commands with options determined by condition
1,453,781,082,000
I'm taking a look at the optparse library for bash option parsing, specifically this bit in the generated code: params="" while [ $# -ne 0 ]; do param="$1" shift case "$param" in --my-long-flag) params="$params -m";; --another-flag) params="$params -a";; "-?"|--help) usage exit 0;; *) if [[ "$param" == --* ]]; then echo -e "Unrecognized long option: $param" usage exit 1 fi params="$params \"$param\"";; ##### THIS LINE esac done eval set -- "$params" ##### AND THIS LINE # then a typical while getopts loop Would there be any real reason to use eval here? The input to eval seems to be properly sanitized. But wouldn't it work the same to use: params=() # ... --my-long-flag) params+=("-m");; --another-flag) params+=("-a");; # ... params+=("$param");; # ... set -- "${params[@]}" That seems cleaner to me. In fact, wouldn't this allow options to be parsed directly out of the params array (without even using set) by using while getopts "ma" option "${params[@]}"; do instead of while getopts "ma" option; do?
You don't need to use a bash array here (but do so if it feels better). Here's how to do it for /bin/sh: #!/bin/sh for arg do shift case "$arg" in --my-long-flag) set -- "$@" -m ;; --another-flag) set -- "$@" -a ;; "-?"|--help) usage exit 0 ;; --*) printf 'Unrecognised long option: %s\n' "$arg" >&2 usage exit 1 ;; *) set -- "$@" "$arg" esac done This is cleaner than the bash array solution (personal opinion) since it doesn't need to introduce another variable. It's also better than the auto-generated code that you show as it retains each command line argument as a separate item in "$@". This is good, because this allows the user to pass arguments containing whitespace characters as long as they are quoted (which the auto-generated code does not do). Style comments: The loop above is supposed to translate long options into short options for a loop over getopts later. As such, it breaks from that task by actually acting on some options, such as -? and --help. IMHO, these should instead be translated to -h (or some suitable short option). It also does the translation of long options past the point where options should not be accepted. Calling the script as ./script.sh --my-long-flag -- -? should not interpret -? as an option due to the -- (meaning "options end here"). Likewise, ./script.sh filename --my-long-flag should not interpret --my-long-flag as an option, as the parsing of options should stop at the first non-option.
Here's a variant that takes the above into account: #!/bin/sh parse=YES for arg do shift if [ "$parse" = YES ]; then case "$arg" in --my-long-flag) set -- "$@" -m ;; --another-flag) set -- "$@" -a ;; --help) set -- "$@" -h ;; --) parse=NO set -- "$@" -- ;; --*) printf 'Unrecognised long option: %s\n' "$arg" >&2 usage exit 1 ;; *) parse=NO set -- "$@" "$arg" esac else set -- "$@" "$arg" fi done What this does not allow is long options with separate option arguments, such as --option hello (the hello would be treated as a non-option and the option parsing would end). Something like --option=hello would be fairly easy to handle with a bit of extra tinkering though.
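To see the translation loop in action, here is a trimmed-down, runnable version (the option names are invented; a real script would follow this with a getopts loop):

```shell
# Simulate a command line, then rewrite long options into short ones.
set -- --my-long-flag "file name"
for arg do
  shift
  case "$arg" in
    --my-long-flag) set -- "$@" -m ;;       # translate long -> short
    *)              set -- "$@" "$arg" ;;   # keep everything else as-is
  esac
done
translated_count=$#   # still 2 arguments
opt=$1                # "-m"
file=$2               # "file name", with its space preserved
```

Note that the argument containing a space survives intact, which is exactly what the eval-based string version fails to guarantee.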
Can a bash array be used in place of eval set -- "$params"?
1,453,781,082,000
On GNU/Linux with fdisk (util-linux 2.20.1), when using, say, fdisk /dev/sda3 there are quite a few options and even an "expert mode" (x). Most of these are explained through the m option. But I can't find any documentation on these, neither in the man nor the info page. As I don't want to fumble around with my file systems at the moment - any idea? Just to make this clear: I'm not talking about the "regular" options, e.g. fdisk -v but the ones where fdisk first had to be started. My guess was that I might get lucky in another, related manpage but I couldn't find anything so far. Did I miss something?
This page says Expert mode can be used to force the drive geometry to match another drive: x: Enter expert mode c: Change the number of cylinders h: Change the number of heads r: Return to normal mode Additionally, fdisk/README.fdisk in the source package tells the following story: Extra commands for experts -------------------------- The eXtra command `x' puts `fdisk' into `expert' mode, in which a slightly different set of commands is available. The Active, Delete, List, New, Type, Verify, and `eXpert' commands are not available in expert mode. The commands Write and Quit are available as in ordinary mode, the Print command is available, but produces output in a slightly different format, and of course the Menu command prints the expert menu. There are several new commands. 1. The Return command brings you back to the main menu. 2. The Extended command prints the list of table entries which point to other tables. Ordinary users do not need this information. The data is shown as it is stored. The same format is used for the expert Print command. 3. The dangerous Begin command allows you to move the start of data in a partition away from its beginning. Other systems create partitions with this format, and it is sometimes useful to be able to reproduce it. 4. The slightly dangerous Cylinders command allows you to change the available number of cylinders. For SCSI disk owners, note that we require not the actual number of physical cylinders, but the number of logical cylinders used by DOS and other operating systems. 5. The extremely dangerous Heads and Sectors commands allow you to change the number of heads and sectors. It should not be necessary to use these commands unless you have a SCSI disk, whose geometry Linux is not always able to determine. SCSI disk owners note that we need not the actual number of heads or of sectors per track, but the number believed to exist by DOS and other operating systems.
*Warning*: If you set either of these numbers to a bad value, you may lose all data on your disk. Always, after giving any of the commands Begin, Cylinder, Heads, or Sectors, you should Return to the main menu and give the Verify command.
fdisk (expert mode) options
1,453,781,082,000
Using the Linux find command -iname option, I want to find and move files that have many different extensions (.pdf, .doc, .xlx, .ppt). I know I can use multiple patterns with grep. But can that also be done with find?
Yes, but not with -iname alone. find itself has an "OR": expr1 -o expr2 Or; expr2 is not evaluated if expr1 is true. So you could do: find /path/to/dir -iname '*.pdf' -o -iname '*.doc' -o -iname '*.xlx' -o -iname '*.ppt' Beware that if you need to perform some action on either of those matching files (like -exec, -print), or add extra filtering that applies to all (like -type f), or in other words if you need to match on either of those and to do/check something else, since in find like in many other languages and has higher precedence than or, you'd need to use parentheses: find /path/to/dir '(' -iname '*.pdf' \ -o -iname '*.doc' \ -o -iname '*.xlx' \ -o -iname '*.ppt' \ ')' -type f -exec ls -ld {} +
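A quick self-contained check of the parenthesised OR (the file names here are made up):

```shell
# Create a scratch directory with a mix of extensions and count matches.
dir=$(mktemp -d)
touch "$dir/a.pdf" "$dir/b.doc" "$dir/c.txt"
matched=$(find "$dir" '(' -iname '*.pdf' -o -iname '*.doc' ')' -type f | wc -l)
```

Only a.pdf and b.doc match; c.txt is excluded, and the -type f test applies to both patterns because of the parentheses.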
Can the Linux find -iname option take more than one pattern
1,453,781,082,000
In the man pages it says: -C list entries by columns However, I really cannot notice any difference between the output of ls or ls -C, could someone explain this to me?
To add what @muru said in the comments; have a look at info coreutils ls `-C' `--format=vertical' List files in columns, sorted vertically. This is the default for `ls' if standard output is a terminal. It is always the default for the `dir' program. GNU `ls' uses variable width columns to display as many files as possible in the fewest lines. I take this to mean -C exists specifically for the case where you redirect or pipe the output and want to preserve columnation. Otherwise ls will switch to ls -1 when it detects that it's not displaying to a terminal.
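You can see this by piping the output, where ls would otherwise fall back to one name per line (the directory contents here are made up; assumes GNU ls with the default 80-column width when not writing to a terminal):

```shell
dir=$(mktemp -d)
touch "$dir/aa" "$dir/bb" "$dir/cc" "$dir/dd"
plainlines=$(ls "$dir" | wc -l)    # piped: one name per line
collines=$(ls -C "$dir" | wc -l)   # -C keeps the column layout
```

With four short names, -C fits them all on a single output line even through a pipe, while plain ls prints four lines.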
What does the option -C achieve in ls output?
1,453,781,082,000
On a GNU/Linux system, I found the following (to me, very confusing) entry about an option for userdel in the German version of its man page: I'm truly sorry, but I can't really provide you with a translation because a) I don't understand what it means (even with German as my mother tongue) and b) I don't understand what this option is supposed to do.
Here's the version from my English manpage: -R, --root CHROOT_DIR Apply changes in the CHROOT_DIR directory and use the configuration files from the CHROOT_DIR directory. In other words, instead of editing /etc/passwd and friends, you're editing CHROOT_DIR/etc/passwd. For example, you might boot a live CD, mount the hard drive as /mnt, and then use -R /mnt to edit its users.
What is "userdel --root" supposed to do?
1,453,781,082,000
Can somebody please explain to me the exact differences between useradd -b and useradd -d in [Debian] Linux? Both seem to work quite similarly to me, but then I spot differences that confuse me.
-b specifies the location of users' home directories. On your average Debian box, this will be /home; you can change the default by editing /etc/default/useradd. useradd will add the new username to this path to get the home directory. This means that if you do useradd -b /somewhere ian the new user's directory will be /somewhere/ian. -d sets the home directory explicitly, irrespective of defaults. So useradd -d /somewhere-else/ian ian then the user's home directory will be set to /somewhere-else/ian. Note that the directory will be set in the password file, but won't actually be created unless -m is also specified (or the CREATE_HOME setting is enabled in the defaults file).
difference between useradd -b and useradd -d
1,453,781,082,000
The following command prints a message over ssh : xmessage Message -display :0 & How does it work? there is no -display option in xmessage's man page.
It's included by (obscure) reference. SEE ALSO X(7), echo(1), cat(1) And buried down a ways in X(7): OPTIONS Most X programs attempt to use the same names for command line options and arguments. All applications written with the X Toolkit Intrinsics automatically accept the following options: -display display This option specifies the name of the X server to use. followed by a number of other X Toolkit Intrinsics (Xt) standard options. More modern toolkits have similar common options, which you can see with the --help-all option.
xmessage over ssh
1,453,781,082,000
I want to write a shell script which will take some arguments with some options and print those arguments. Suppose the name of that script is abc.ksh. Usage of that script is - ./abc.ksh -[a <arg>|b <arg>|c|d] <some_string> Now I write a shell script which will take options and arguments #!/bin/ksh # Default Values vara="," varb=false varbname="" varc=false # Scanning inputs while getopts :a:b:cd option do case $option in a) vara=$OPTARG;; #shift $((OPTIND-1));; b) varb=true varbname=$OPTARG;; #shift $((OPTIND-1));; c) varc=true;; #shift $((OPTIND-1));; d) echo "Usage $0 \-[a|b|c|d] <filename>" exit 0;; \?) echo "Invalid option -$OPTARG. Please run '$0 -h' for help" exit 1;; :) echo "Option -$OPTARG requires an argument. Please run '$0 -d' for help" exit 1;; esac done print "Args: $* \nvara: $vara \noptfile: $varb \nvarbname: $varbname \nvarc: $varc" Examples of Correct Inputs: ./abc.ksh -a "sample text" "some_string" ./abc.ksh "some_string" -a "sample text" ./abc.ksh -asample\ text some_string etc... The some_string input is not caught by my script. How can I catch it?
It is typical for programs to force the "some_string" part to be the last argument so that ./abc.ksh "some_string" -a "sample text" is an error. If you do this, then after parsing the options, $OPTIND holds the index to the last argument (the "some_string" part). If that is not acceptable, then you can check at the beginning (before you enter the while loop) to see if there is a non-prefixed argument. This will let you have "some_string" at the beginning and at the end. If you needed to have it in the middle, you could either not use getopts or you could have two sets of getopts. When the first one errors out, it could be due to the non-prefixed argument; get it and start a new getopts to get the remaining args. Or you can skip getopts altogether and roll your own solution.
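Here is a sketch of the first suggestion (some_string must come last), using $OPTIND to reach it; only the -a option is kept for brevity:

```shell
# Parse options with getopts, then shift past them to get the trailing
# non-option argument. Works in any POSIX shell, including ksh.
parse() {
  OPTIND=1
  vara=
  while getopts ':a:' opt "$@"; do
    case $opt in
      a) vara=$OPTARG ;;
    esac
  done
  shift $((OPTIND - 1))   # drop the parsed options and their arguments
  rest=$1                 # what remains is "some_string"
}
parse -a 'sample text' some_string
```

After the call, vara holds the -a argument and rest holds the trailing string.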
How to catch optioned and non optioned arguments correctly?
1,453,781,082,000
I am trying to use the tcpdump command in a project and I have some difficulties understanding the help page. SYNOPSIS tcpdump [ -AbdDefhgHIJKlLnNoOpPqRStuUvxX ] [ -B buffer_size ] [ -c count ] [ -C file_size ] [ -G rotate_seconds ] [ -F file ] [ -i interface ] [ -j tstamp_type ] [ -k (metadata_arg) ] [ -m module ] [ -M secret ] [ -r file ] [ -s snaplen ] [ -T type ] [ -w file ] [ -W filecount ] [ -E spi@ipaddr algo:secret,... ] [ -y datalinktype ] [ -z postrotate-command ] [ -Z user ] [ -Q packet-metadata-filter ] [ expression ] First, what is this "[ -AbdDefhgHIJKlLnNoOpPqRStuUvxX ]" at the top? What is the meaning of that? I also see a lot of people on the internet doing crazy things with this command, for example tcpdump -nnvvXSs 1514 ... what is that -nnvvXSs, and how can we know this can be used? I see code examples that, as far as I can tell, do not correspond to the man page; I just don't get how to read and understand this help file. Can anybody tell me how to read and understand it?
By convention, the brackets indicate something that is optional. So you can run tcpdump, or tcpdump -c 3 -i eth0, or tcpdump -c 3 -r /path/to/file, etc. Also, unless explicitly indicated, options can be used in any order, so you can run tcpdump -i eth0 -c 3, etc. Most commands allow options to be clustered when they use a single letter. For example, tcpdump -AX is equivalent to tcpdump -A -X. The manual groups all options that don't take arguments to make the presentation shorter: [ -Abd ] would be a shortcut for [ -A ] [ -b ] [ -d ], etc. The synopsis is just a summary. Read the “description” or “options” section to see what each option does and what the word after each option can be replaced with. For example, tcpdump -nnvvXSs 1514 is a shorter equivalent of tcpdump -n -n -v -v -X -S -s 1514, and means: -n: don't do name resolution. Repeating this option has no additional effect. -v: causes tcpdump to print out more stuff. Repeating this option causes it to print even more stuff. -X adds a dump of the content of each packet to the output. -S causes absolute TCP sequence numbers to be printed. -s 1514 causes only the first 1514 bytes of each packet to be captured.
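The same clustering convention holds for most classic utilities, which you can check with something harmless like ls rather than tcpdump (which needs privileges):

```shell
# Compare a clustered option group with the options written out separately.
dir=$(mktemp -d)
touch "$dir/somefile"
clustered=$(ls -la "$dir")    # one clustered option group
separate=$(ls -l -a "$dir")   # same options, written individually
[ "$clustered" = "$separate" ] && same=yes || same=no
```

Both invocations produce identical output because -la is just shorthand for -l -a.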
How to read this tcpdump man page?
1,453,781,082,000
How to know all DHCP options and values (on the client side, Linux Ubuntu / Debian / ArchLinux) provided by the server? I need to pass a non-standard option to the client by the DHCP option code, example: 222-223 Unassigned 224-254 Reserved (Private Use) All DHCP Options here The file /var/lib/dhcp.lease does not contain my options
Just edit the config file /etc/dhcp/dhclient.conf and add an "also request" line: # custom dhcp option (72 = www-server) also request www-server; The value is available in /var/lib/dhcp/dhclient.lease
How to know DHCP options value on debian/ubuntu and other linux
1,453,781,082,000
mke2fs -r offers Set the filesystem revision for the new filesystem. Note that 1.2 kernels only support revision 0 filesystems. The default is to create revision 1 filesystems. Trying to look up what was meant by that, I found loads of screenshots etc. of dumpe2fs containing the line Filesystem revision #: 1 (dynamic) Question: What does that mean? What does the option actually do and what is meant by this output? Where is it documented what a value of zero would mean, what dynamic actually means; and is there a value of two also? I could live with »keep untouched as you won't ever need to change it« – but as there is an option for that and no note about backward compatibility, this makes me wonder…
It seems to really only hinge on what version of the Linux Kernel you're pairing with the filesystem you're attempting to mke2fs and also later use with the resulting ext2,3,4 filesystem. fs_param.s_rev_level = 1; /* Create revision 1 filesystems now */ if (is_before_linux_ver(2, 2)) fs_param.s_rev_level = 0; Here it's defaulting to 1 unless the kernel's version is below version 2.2. The man page from freeBSD has a little more info on this: -O feature[,...] Create filesystem with given features (filesystem options). Currently, the sparse_super and filetype features are turned on by default unless mke2fs is run on a system with a pre-2.2 Linux kernel. Filesystems that may need to mounted on pre-2.2 kernels should be created with -O none (or -r 0 for 1.2 kernels) which will disable these features, even if mke2fs is run on a system which can support them. So I'm imagining that there are some features that must be lacking in the older kernels (1.2, 2.2, etc.) and this switch is here so that if you need to create a filesystem that will later be mounted on one of these older systems, that you'll be able to create it on the systems with the newer kernels. There is also additional info in the release notes for e2fsprogs (the package which comprises mke2fs). excerpts ref#1: [E2fsprogs 1.41.1 (September 1, 2008)] Mke2fs will correctly enforce the prohibition against features (specifically read-only features) in revision 0 filesystems. (Thanks to Benno Schulenberg for noticing this problem.) ref#2: [E2fsprogs 1.20 (May 20, 2001)] E2fsck will now bump the filesystem revision number from zero to one if any of the compatibility bits are set. ref#3: [E2fsprogs 1.15 (July 18, 1999)] Mke2fs now creates revision 1 filesystems by default, and with the sparse superblock feature enabled. The sparse superblock feature is not understood by Linux 2.0 kernels, so they will report errors when mounting the filesystem. This can be worked around by using the mount options "check=none". 
ref#4: [E2fsprogs 1.10 (April 24, 1997)] Mke2fs once again defaults to creating revision #0 filesystems, since people were complaining about breaking compatibility with 1.2 kernels. Warning messages were added to the mke2fs and tune2fs man pages that the sparse superblock option isn't supported by most kernels yet (1.2 and 2.0 both don't support parse superblocks.) ref#5: [E2fsprogs 1.08 (April 10, 1997)] Dumpe2fs now prints more information; its now prints the the filesystem revision number, the filesystem sparse_super feature (if present), the block ranges for each block group, and the offset from the beginning of the block group. ref#6: [E2fsprogs 1.03 (March 27, 1996)] Support (in-development) filesystem format revision which supports (among other things) dynamically sized inodes. These comments would seem to address all your questions!
"mke2fs -r fs-revision-level" - how is this used?
1,453,781,082,000
I have an application that needs a modified LD_PRELOAD. I want to start the application using the originally provided rc script, so I can benefit from an automatically updated rc script on an update of the application. I can't modify the original rc script of course, because any change would be lost on the next update. So, is there maybe some system settings like: If starting application X, use a modified LD_PRELOAD? Or would my best way really be to copy the original rc script, modify it and use the modified rc script?
The best way is probably to create your own rc-script that you will use instead of the "official one". Otherwise, your rc-script probably includes an external "config" file if you check it. The include may look like this: . /etc/default/mydaemon-config So that you can edit /etc/default/mydaemon-config and do something like: export LD_PRELOAD=whateveryouwant But be careful, it may not be what you want, because every process started from the script will have that LD_PRELOAD configuration. Otherwise, the original script may have something like: DAEMON=/usr/bin/mydaemon So you might be able to change it from /etc/default/mydaemon-config with: DAEMON="LD_PRELOAD=whateveryouwant $DAEMON" This depends on your original rc-script, that we don't have, so it's only speculation... Anyway, these are all workarounds, and IMHO, you should rather look for a solution to avoid using LD_PRELOAD in the first place.
Automatically start an application with a modifed LD_PRELOAD?
1,453,781,082,000
So I am writing a script that mixes options with arguments with options that don't. From research I have found that getopts is the best way to do this, and so far it has been simple to figure out and setup. The problem I am having is figuring out how to set this up so that if no options or arguments are supplied, for it to run a separate set of commands. This is what I have: while getopts ":n:h" opt; do case $opt in n) CODEBLOCK >&2 ;; h) echo "script [-h - help] [-n <node> - runs commands on specified node]" >&2 exit 1 ;; \?) echo "Invalid option: -$OPTARG" >&2 exit 1 ;; :) echo "Option -$OPTARG requires an argument." >&2 exit 1 ;; esac done I have tried adding something like this to the top of the code to catch no arguments, but it then runs the same code even when options and arguments are supplied (something is probably wrong in my syntax here): [[ -n "$1" ]] || { CODEBLOCK1 } while getopts ":n:h" opt; do case $opt in n) CODEBLOCK2 >&2 ;; h) echo "script [-h - help] [-n <node> - runs commands on specified node]" >&2 exit 1 ;; \?) echo "Invalid option: -$OPTARG" >&2 exit 1 ;; :) echo "Option -$OPTARG requires an argument." >&2 exit 1 ;; esac done The man page for getopts was sparse and I have found relatively few examples on searches that provide any insight into getopts, let alone all the various features of it.
You can use any of the following to run commands when $1 is empty: [[ ! $1 ]] && { COMMANDS; } [[ $1 ]] || { COMMANDS; } [[ -z $1 ]] && { COMMANDS; } [[ -n $1 ]] || { COMMANDS; } Also, you don't need to quote the expansion in this particular example, as no word splitting is performed. If you're wanting to check if there are arguments, though, you'd be better to use (( $# )). If I've understood your intentions, here is how your code could be written with getopts: #!/bin/bash (( $# )) || printf '%s\n' 'No arguments' while getopts ':n:h' opt; do case "$opt" in n) [[ $OPTARG ]] && printf '%s\n' "Commands were run, option $OPTARG, so let's do what that says." [[ ! $OPTARG ]] && printf '%s\n' "Commands were run, there was no option, so let's run some stuff." ;; h) printf '%s\n' 'Help printed' ;; *) printf '%s\n' "I don't know what that argument is!" ;; esac done
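A POSIX-portable variant of the no-arguments check, since (( $# )) and [[ are bash-only; the branch bodies here are stand-in echo statements:

```shell
# Run one code path when no arguments were supplied, another otherwise.
main() {
  if [ "$#" -eq 0 ]; then
    echo 'No arguments'
    return
  fi
  echo "Got $# argument(s)"
}
without=$(main)
with=$(main -n somenode)
```

The $# count reflects the arguments of the function (or script), so the check must come before any shift.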
How to run a specified codeblock with getopts when no options or arguments are supplied?
1,453,781,082,000
I cannot find documentation on some long, double-dash options of apt, upon which I stumbled with Bash's tab-completion. $ apt install --<TAB><TAB> --allow-change-held-packages --fix-broken --purge --allow-downgrades --fix-missing --reinstall --allow-insecure-repositories --fix-policy --remove --allow-remove-essential --force-yes --show-progress --allow-unauthenticated --ignore-hold --show-upgraded --arch-only --ignore-missing --simulate --assume-no --install-recommends --solver --assume-yes --install-suggests --target-release --auto-remove --no-install-recommends --trivial-only --download --no-install-suggests --upgrade --download-only --only-upgrade --verbose-versions --dry-run --print-uris I have looked in apt --help man apt but they only provide brief information on the main apt arguments, so I moved on to man apt-get man dpkg where I found some of the long options, e.g. --simulate, --dry-run, --download-only. But others seem to be lacking, such as --upgrade, --solver and --fix-policy. Is there some other manual page I am missing or should the missing options be considered undocumented?
--fix-policy is indeed not documented yet, see https://salsa.debian.org/apt-team/apt/-/blob/master/debian/changelog https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=578020: - new "--fix-policy" option to install all packages with unmet important dependencies (useful with --install-recommends to see what not-installed recommends are on the system) Unfortunately I've been unable to find the documentation for --solver (which looks like an internal flag not meant to be used by the end user). From apt-private/private-cmndline.cc: ... addArg(0, "reinstall", "APT::Get::ReInstall", 0); addArg(0, "solver", "APT::Solver", CommandLine::HasArg); addArg(0, "planner", "APT::Planner", CommandLine::HasArg); ... There's no such option as --upgrade - looks like it's been deprecated. You can always peruse apt sources to find out more: https://salsa.debian.org/apt-team/apt/-/tree/master/
Are some apt long options undocumented?
1,453,781,082,000
I spent quite a while researching the problem I encountered, but none of the getopts tutorials say anything about the leading whitespace in OPTARG when using getopts. In bash (on Ubuntu and OSX), executing the commands below: OPTIND=1 && getopts ":n:" opt "-n 1" && echo "OPTARG: '$OPTARG'" and it echoes: OPTARG: ' 1' However, if I execute this: OPTIND=1 && getopts ":n:" opt "-n1" && echo "OPTARG: '$OPTARG'" then I will get what I expect: OPTARG: '1' From what I read online: Normally one or more blanks separate the value from the option letter; however, getopts also handles values that follow the letter immediately [Reference] If the above quote is universally right for getopts, what am I doing wrong to get that leading whitespace in OPTARG?
You should just leave out the double quotes around "-n -1", as that is what preserves the space before the 1: OPTIND=1 && getopts ":n:" opt -n 1 && echo "OPTARG: '$OPTARG'" gives: OPTARG: '1'
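The difference is easy to reproduce: quoted, -n 1 is a single word, so getopts treats everything after the -n in that word (including the space) as the option argument:

```shell
OPTIND=1
getopts ':n:' opt "-n 1"   # one word: OPTARG is the rest of that word
quoted_arg=$OPTARG          # " 1", leading space included

OPTIND=1
getopts ':n:' opt -n 1     # two words: OPTARG is the next argument
separate_arg=$OPTARG        # "1"
```

So the quotes did exactly what quotes do: they kept "-n 1" from being split into two arguments before getopts ever saw it.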
Strange leading whitespace in OPTARG when using getopts
1,546,521,739,000
What difference does it make when using the -print and -depth parameters in the find command, given that they produce the same outcome: /home/pkaramol/Desktop/testdir $ find . . ./testfile3.txt ./testfile1.txt ./testfile4.txt ./testdir1 ./testfile2.txt ./testdir2 /home/pkaramol/Desktop/testdir $ find . -depth ./testfile3.txt ./testfile1.txt ./testfile4.txt ./testdir1 ./testfile2.txt ./testdir2 . /home/pkaramol/Desktop/testdir $ find . -depth -print ./testfile3.txt ./testfile1.txt ./testfile4.txt ./testdir1 ./testfile2.txt ./testdir2 .
-print will ensure that the current pathname is printed to standard output. Some flags turn off the default printing of pathnames (-exec for example). -depth will cause a depth-first traversal of the file hierarchy, so that pathnames in directories without subdirectories are handled first (bottom up rather than top down). In your example, it makes little difference as you are working in a directory without subdirectories, but notice that . is reported after the other pathnames when you use -depth (this is because . is the top-most directory to be searched, so its pathname is handled last with -depth and first without -depth). It is useful to use -depth if one is deleting directories with find as you would get errors from trying to access already deleted directories without it. As Scott points out in comments below, you would definitely need -depth when renaming directories too, or you would potentially not be able to traverse the directory structure at the same time as you're renaming directories in it. The -delete flag turns on -depth by default. Example: Delete all directories beneath the current directory whose names match *deleteme (for example folder-deleteme), and also print the paths to the successfully deleted directories: find . -depth -type d -name '*deleteme' -exec rm -rf {} ';' -print Given the following directory structure, $ tree . `-- folder-deleteme `-- another-deleteme `-- evenmore-deleteme 3 directories, 0 files executing the above find command without -depth would result in $ find . -type d -name '*deleteme' -exec rm -rf {} ';' -print ./folder-deleteme find: ./folder-deleteme: No such file or directory because find deletes the top-most folder-deleteme directory (and prints its path) and then tries to enter it to look for further directories to delete. Also: $ find . . ./folder-deleteme ./folder-deleteme/another-deleteme ./folder-deleteme/another-deleteme/evenmore-deleteme $ find . 
-depth ./folder-deleteme/another-deleteme/evenmore-deleteme ./folder-deleteme/another-deleteme ./folder-deleteme .
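The ordering difference is easy to observe with one nested directory (scratch paths only):

```shell
# Top-down traversal reports the parent first; -depth reports the child first.
dir=$(mktemp -d)
mkdir -p "$dir/outer/inner"
top_first=$(find "$dir/outer" | head -n 1)          # without -depth
deep_first=$(find "$dir/outer" -depth | head -n 1)  # with -depth
```

Without -depth the first line is the outer directory; with -depth it is the innermost one, which is why deletion and renaming need it.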
Parameters of find command
1,546,521,739,000
I need to find every symbolic link on the server. The version is AIX 6.1. man find says -L Follow symbolic links But find -L is not a proper usage. Usage: find [-H | -L] Path-list [Expression-list] I tried to Google this but couldn't find answers.
You need to pass a top directory name. Some versions of find assume the current directory if you omit it, but not AIX's. Also, -L isn't what you want here: it tells find to follow symbolic links, but that's not what you're asking, you're asking to find symbolic links. find / -type l -print will print out all the symbolic links. See man find
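A quick demonstration with scratch files; the path argument and -print are spelled out explicitly, as AIX's find requires:

```shell
dir=$(mktemp -d)
touch "$dir/regular"
ln -s regular "$dir/link"
found=$(find "$dir" -type l -print)   # only the symbolic link is reported
```

The regular file and the directory itself are skipped; only the symlink path is printed.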
How to find every symbolic link on a server?
1,546,521,739,000
I have many files like xyz_123_foo.ext for which I would like to add -bar to the filenames at the end to result in xyz_123_foo-bar.ext. I tried: rename . -bar. xyz_* which resulted in: rename: invalid option -- 'b' followed by the usage text. I then tried variations with '-bar' and "-bar" to no avail. How can I get rename to accept - as part of the replacement string? Or would another command be more efficient or appropriate? My shell is bash and I am using the rename from util-linux on SuSe Linux SLE12.
mmv is nice for tasks like this ex. mmv -n -- '*.ext' '#1-bar.ext' or for any dot extension mmv -n -- '*.*' '#1-bar.#2' Remove the -n once you are happy that it is doing the right thing.
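If mmv is not available either, a plain shell loop with mv does the same job; the -- guard also protects against any name that itself begins with a dash:

```shell
# Rename xyz_123_foo.ext -> xyz_123_foo-bar.ext for every .ext file.
dir=$(mktemp -d)
touch "$dir/xyz_123_foo.ext" "$dir/xyz_456_foo.ext"
for f in "$dir"/*.ext; do
  mv -- "$f" "${f%.ext}-bar.ext"   # strip .ext, append -bar.ext
done
```

The ${f%.ext} parameter expansion removes the shortest matching suffix, so the replacement string never goes anywhere near option parsing.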
'rename' with expression|replacement with a leading '-' (hyphen|minus)
1,546,521,739,000
I try to delete this file on my solaris machine rm "-Insi" rm: illegal option -- I rm: illegal option -- n rm: illegal option -- s I also try this rm "\-Insi" -Insi: No such file or directory rm '\-Insi' -Insi: No such file or directory so what other option do I have?
Try: rm -- -Insi or: rm ./-Insi
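Both forms work because rm never sees a leading dash it could mistake for an option; a quick check (note that creating such a file needs the same trick):

```shell
dir=$(mktemp -d)
cd "$dir"
touch ./-Insi      # the ./ prefix works for creating the file, too
rm ./-Insi         # path form: the argument no longer starts with a dash
touch -- -Insi
rm -- -Insi        # "--" marks the end of options for rm
```

After both round trips the file is gone, so either spelling is safe to use.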
how to delete file that start with "-" [duplicate]
1,546,521,739,000
Why does the POSIX standard reserve the -W option for vendor extensions of the system utilities? I do not understand why the letter ‘W’ is used. ‘V’ (for vendor) could make more sense. Maybe this question should be moved to Retrocomputing SE.
This provision was added between Single Unix v2 (1997) and Single Unix v3 (2001). It wasn't done in a vacuum: it had to take into account both the previous specifications and existing practice. If a letter was already specified for some commands, the existing commands would have to be grandfathered in and wouldn't be able to follow this guideline. If a letter was already used by popular programs not specified by POSIX or by popular implementations of POSIX programs, this would have made it harder to specify those utilities later, and harder for users to remember options with similar meanings but different letters for different commands. Looking at the documented options in SUSv2: grep -h -Po '(?<=^<dt><b>-)[[:alnum:]]' /usr/share/doc/susv2/susv2/xcu/*.html | sort | uniq -c we can see that all the lowercase letters are taken by at least one utility, and most uppercase letters as well. The free letters are -B, -J, -K, -Y and -Z. -V is taken only for two commands: command, where it's a variant of -v (added — I don't know by who originally, possibly one of the Unix specification working groups or ksh — because the original definition of -v wasn't quite satisfactory). dis, where it's an option to print the version of the utility. POSIX could have chosen -V for vendor, but it would have meant that command would not have followed the guidelines. This would have been annoying since command was created for the sake of portability (both for its behavior of avoiding differences between shell builtins and external utilities, and for its function similar to type but without the output formatting variability). In addition, dis was far from the only program out there to use -V for “version” (most of these weren't codified by POSIX because they weren't part of the base system: you don't need a “print version” option for a utility that's part of the base system, you just use the version number of the base system). 
So -V would have had too many exceptions, both inside POSIX and out, to be a good choice. -W was only taken by cc. cc implementations tended to differ quite a lot between vendors (in particular, with respect to which C dialect it expected), which led to it being removed from future versions of the standard (replaced by c89, c99, etc.). Since the next version of the standard no longer had cc, giving -W a new meaning didn't exclude any standard utility. As far as I know, it wasn't a particularly common choice of option letter in non-POSIX utilities, so it was up for grabs. Why -W and not another of the uppercase letters that wasn't used at all? I don't know for sure, it could have been arbitrary, but it didn't come out of the blue. The -W option was codified for cc with an argument that itself had to have a certain structure allowing multiplexing: it had to start with a character specifying what “subdomain” (compilation phase) the option applies to, followed by “subdomain-specific” options. Since POSIX.1-2001 only leaves one letter for implementation-specific options, this letter would have to be multiplexed in order to allow more than one implementation-specific behavior change. So the -W option of cc was an inspiration for how the implementation-specific -W could be used — not necessarily the exact syntax, but the basic principle of taking an argument with a prefix indicating a “sub-option” of some sort.
Why is `-W` reserved for vendor extensions?
1,546,521,739,000
I stumbled upon the following answer on Unix stackexchange, where the -plow option is used with dpkg-reconfigure, but I can't find anything about it in the dpkg or dpkg-reconfigure manpages: http://man7.org/linux/man-pages/man1/dpkg.1.html http://manpages.ubuntu.com/manpages/cosmic/en/man8/dpkg-reconfigure.8.html So what does this option doing exactly? Smells like cargo-cult to me.
From the manpage you linked for dpkg-reconfigure: -pvalue, --priority=value Specify the minimum priority of question that will be displayed. dpkg-reconfigure normally shows low priority questions no matter what your default priority is. See debconf(7) for a list. And from man 7 debconf: Another nice feature of debconf is that the questions it asks you are prioritized. If you don't want to be bothered about every little thing, you can set up debconf to only ask you the most important questions. On the other hand, if you are a control freak, you can make it show you all questions. Each question has a priority. In increasing order of importance: low Very trivial questions that have defaults that will work in the vast majority of cases. medium Normal questions that have reasonable defaults. high Questions that don't have a reasonable default. critical Questions that you really, really need to see (or else). Only questions with a priority equal to or greater than the priority you choose will be shown to you. You can set the priority value by reconfiguring debconf, or temporarily by passing --priority= followed by the value to the dpkg-reconfigure(8) and dpkg- preconfigure(8) commands, or by setting the DEBIAN_PRIORITY environment variable. So, -plow will show all questions, irrespective of whatever default might have been set elsewhere. That might be want you want (that's often what I want, when I run dpkg-reconfigure).
What does "-plow" option do in dpkg-reconfigure
1,546,521,739,000
What does ln -T do? I know the flag does not exist in the BSD version of ln, and it only exists in the GNU version, and I have read the documentation that it will make ln "treat LINK_NAME as a normal file always", but what does that mean and why does the BSD version not have it?
The -T (--no-target-directory) option to GNU ln provides a safety feature that may be useful in scripts. Suppose that you want to create a new name, $newname, for a file $filename, where the new name is maybe provided from external sources. The command

ln -T "$filename" "$newname"

would then fail if $newname was an already existing directory, instead of unexpectedly creating the name $filename inside that directory (which may cause further operations to fail in hilarious ways). It's a shortcut for something like

if [ ! -e "$newname" ]; then
    ln "$filename" "$newname"
else
    printf 'failed to create hard link "%s": File exists\n' "$newname" >&2
    # Further code to handle failure to create link here.
fi

Likewise, the -t (--target-directory) option provides a way of ensuring that the new name for the file is actually created inside an existing directory, and nowhere else.

Also, as pointed out by Stephen Kitt in comments, moving the test on the filetype of the target/"link name" into the utility itself may also decrease the risk of being affected by the race condition whereby the target is changed in between testing for its existence and/or type and actually creating the link.

Why does POSIX or BSD not have -T or -t? Well, GNU tools in general have many extensions added that provide convenience. The -T and -t options to ln are some of these. They don't really let you do something that couldn't be done without them, and they don't add functionality. Some systems, like the BSDs, have not even considered adding them, or have considered but rejected the idea of adding them (I don't really know; I can't recall seeing anyone send in a patch to add it on the openbsd-tech mailing list, for example).
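The safety behaviour is easy to demonstrate with a throwaway directory. A sketch assuming GNU ln (remember that -T is not in BSD ln):

```shell
# Demonstrate the difference: plain ln silently descends into an existing
# directory, while ln -T refuses and fails instead.
workdir=$(mktemp -d)
touch "$workdir/original"
mkdir "$workdir/newname"            # the "link name" already exists as a directory

ln "$workdir/original" "$workdir/newname"      # creates newname/original
plain_result=$(ls "$workdir/newname")          # -> original

if ln -T "$workdir/original" "$workdir/newname" 2>/dev/null; then
    t_result="link created"
else
    t_result="refused: newname is a directory"
fi
echo "plain ln put the link at: newname/$plain_result"
echo "ln -T $t_result"
rm -rf "$workdir"
```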
What does "ln -t" do [duplicate]
1,546,521,739,000
My system: OS: MacOS / Mac OS X (Mojave 10.14.5) OS core: Darwin (18.6.0) Kernel: Darwin Kernel / XNU (18.6.0 / xnu-4903.261.4~2/RELEASE_X86_64) ls: version unknown, but man ls gives a page from the BSD General Commands Manual Shells: Bash: GNU bash, version 5.0.7(1)-release (x86_64-apple-darwin18.5.0) Zsh: zsh 5.7.1 (x86_64-apple-darwin18.2.0) In MacOS, in a terminal CLI using a shell such as bash or zsh, I'd like to use the (BSD) command ls (or perhaps a similarly common and useful tool) to list the contents of a directory other than the current working directory, where all files except those ending with a tilde (~) are shown. Excluding the last stipulation, ls naturally accomplishes this task when the non-current directory is used as an argument to ls: ls arg where arg is an absolute or relative path to the non-current directory (such as /absolute/path/to/directory, ~/path/from/home/to/directory, or path/from/current/dir/to/directory). I know how to list non-backup contents in the current directory, using filename expansion (aka "globbing") and the -d option (to list directories and not their contents), like so: ls -d *[^~] (or ls -d *[!~]). I want the same sort of results, but for a non-current directory. I can almost achieve what I want by using ls -d arg/*[^~], where arg is the same as described above, but the results show the path to each content element (ie, each file and directory in the directory of interest). I want ls to display each element without the path to it, like is done with ls arg. In Linux, using the GNU command ls, I can achieve exactly what I want using the -B option to not list backup files: ls -B arg. Although this is what I want, I'd like to achieve this using tools native to MacOS, preferably the BSD ls. Note: I do not want to use grep (eg, ls arg | grep '.*[^~]$'), because grep changes the formatting and coloring of the output. 
Question recap: On a Mac, how can I list the contents of a non-current directory but not the backup files, preferably using ls?
You could execute ls in a subshell: (cd arg; ls -d *[^~])
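A quick way to check the idea with a throwaway directory; the [!~] form of the bracket expression is used below because it is the portable spelling of "not a tilde" (the [^~] form from the question also works in bash):

```shell
# List a non-current directory without backup files, showing bare names.
# The subshell's cd does not affect the caller's working directory.
workdir=$(mktemp -d)
touch "$workdir/notes.txt" "$workdir/notes.txt~" "$workdir/draft.md~"

listing=$( (cd "$workdir" && ls -d -- *[!~]) )
echo "$listing"          # only notes.txt; the two backup files are skipped
echo "still in: $PWD"    # unchanged
rm -rf "$workdir"
```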
On a Mac, how can I list contents of a non-current directory without showing backup files (ending with ~), preferably with BSD command ls?
1,546,521,739,000
Say I have a custom kernel from my distribution; how can I get a list of all the options the kernel was built with? It's possible to get them by reading the config file from the kernel package in the vendor's repo, but is there any other way? I mean ways to get that information from the kernel itself, maybe from procfs?
In addition to what @Stephen Kitt said, at least on my Debian system you can find the information in: /boot/config-<version> Where version, in my case, is: 3.16.0-4-686-pae So, issuing: less /boot/config-3.16.0-4-686-pae Spits out the kernel configs in a long list!
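A small sketch that checks the usual locations. Note that /proc/config.gz only exists if the kernel was built with CONFIG_IKCONFIG_PROC=y, so the fallback message is normal on many systems:

```shell
# Locate the build configuration of the running kernel, if available.
kver=$(uname -r)
if [ -r "/boot/config-$kver" ]; then
    cfgsrc="/boot/config-$kver"
elif [ -r /proc/config.gz ]; then
    cfgsrc="/proc/config.gz"          # read it with: zcat /proc/config.gz
else
    cfgsrc=""
fi

if [ -n "$cfgsrc" ]; then
    echo "kernel config found at: $cfgsrc"
else
    echo "no build config exposed for kernel $kver"
fi
```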
How to determine the options Linux kernel was build with? [duplicate]
1,546,521,739,000
As sort's man page says: -m, --merge merge already sorted files; do not sort Here are my two simple text files and the result of sort command with -m option: soroush@pop-os:~/Desktop$ cat a_file.txt aa ff hh bb soroush@pop-os:~/Desktop$ cat b_file.txt gg tt ss ii cc soroush@pop-os:~/Desktop$ sort -m a_file.txt b_file.txt aa ff gg hh bb tt ss ii cc I expected to see this output: aa ff hh bb gg tt ss ii cc Could anyone explain this behavior please?
Merging assumes the files are already sorted: "merge already sorted files; do not sort", so it will attempt to merge them into alphabetic order. It is not a simple concatenation. So in your example:

aa < gg : print aa, move to the next line in a_file
ff < gg : print ff, move to the next line in a_file
hh > gg : print gg, move to the next line in b_file
hh < tt : print hh, move to the next line in a_file
bb < tt : print bb, move to the next line in a_file

No a_file left, so print the rest of b_file.
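The behaviour is easy to verify with two small files that really are pre-sorted (a sketch; the file names are invented):

```shell
# Merge two already-sorted files; sort -m interleaves them in order
# without re-sorting, which is cheaper than a full sort.
workdir=$(mktemp -d)
printf 'aa\nff\nhh\n' > "$workdir/a_sorted.txt"
printf 'cc\ngg\n'     > "$workdir/b_sorted.txt"

merged=$(sort -m "$workdir/a_sorted.txt" "$workdir/b_sorted.txt")
printf '%s\n' "$merged"
# aa
# cc
# ff
# gg
# hh
rm -rf "$workdir"
```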
How does -m option work in sort command?
1,546,521,739,000
I'm using this, for example:

./imgSorter.sh -d directory -f format

The script's content is:

#!/bin/bash
while getopts ":d:f:" opt; do
  case $opt in
    d) echo "-d was triggered with $OPTARG" >&2
      ;;
    f) echo "-f was triggered with $OPTARG" >&2
      ;;
    \?) echo "Invalid option: -$OPTARG" >&2
      exit 1
      ;;
    :) echo "Option -$OPTARG requires an argument." >&2
      exit 1
      ;;
  esac
done

Use cases:

$ ./imgSorter.sh -d myDir
-d was triggered with myDir

OK

$ ./imgSorter.sh -d -f myFormat
-d was triggered with -f

NOK: how is it that a string beginning with - is not detected as a flag?
You have told getopts that the -d option should take an argument, and in the command line you use -d -f myformat, which clearly (?) says "-f is the argument I'm giving to the -d option". This is not an error in the code, but in the usage of the script on the command line. Your code needs to verify that the option-arguments are correct and that all options are set in an appropriate way. Possibly something like

while getopts "d:f:" opt; do
  case $opt in
    d) dir=$OPTARG ;;
    f) format=$OPTARG ;;
    *) echo 'error' >&2
       exit 1
  esac
done

# If -d is *required*
if [ ! -d "$dir" ]; then
  echo 'Option -d missing or designates non-directory' >&2
  exit 1
fi

# If -d is *optional*
if [ -n "$dir" ] && [ ! -d "$dir" ]; then
  echo 'Option -d designates non-directory' >&2
  exit 1
fi

If the -d option is optional, and if you want to use a default value for the variable dir in the code above, you would start by setting dir to that default value before the while loop. A command line option cannot both take and not take an argument.
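The failure mode in the question can be reproduced in a few lines. A sketch (inside a function, getopts parses the function's own arguments, so OPTIND must be reset on each call):

```shell
# Show how "-f" is swallowed as the argument of "-d" when -d expects a value.
parse() {
    OPTIND=1
    while getopts 'd:f:' opt; do
        case $opt in
            d) echo "dir=$OPTARG" ;;
            f) echo "format=$OPTARG" ;;
            *) echo "error" ;;
        esac
    done
}

parse -d myDir -f png     # dir=myDir, then format=png
parse -d -f png           # dir=-f  -- "-f" became the value of -d
```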
how to properly parse shell script flags and arguments using getopts
1,546,521,739,000
I'm scripting a sequence of commands I used to enter by hand. The rough outline goes something like this:

$ echo ${FILENAME_ARGS} | some | big | pipeline | sort -options | etc >temp
$ gs -OptionsThatNeverChange `cat temp`

Basically, the only thing that changes between runs are the options passed to sort and the files contained in FILENAME_ARGS. What I want to do is be able to type something like

$ do-thing -some -options *.pdf

and have all of the -whatever go to the options for sort, while all of the *.pdf go into FILENAME_ARGS. To do this I need a way to pick out all of the things that start with a - from $@. Is there some simple way to pull options out of the arguments passed to a shell script?
printf '%s\n' "$@" | grep -e '^-' | sort >options
printf '%s\n' "$@" | grep -v -e '^-' >files

(The original echo ${FILENAME_ARGS} joined every name onto a single line, so grep saw only one line; printf '%s\n' prints one argument per line, and anchoring the pattern with ^ matches only arguments that begin with a hyphen.)
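For the do-thing wrapper in the question, a loop avoids the pipeline entirely. A sketch in plain POSIX sh using string accumulation (the function name is invented; note it breaks on file names containing whitespace, which bash arrays would handle):

```shell
# Split "$@" into sort options (leading "-") and file operands.
split_args() {
    sort_opts=
    file_args=
    for a in "$@"; do
        case $a in
            -*) sort_opts="$sort_opts $a" ;;
            *)  file_args="$file_args $a" ;;
        esac
    done
}

split_args -r -n a.pdf b.pdf
echo "sort options:${sort_opts}"
echo "files:${file_args}"
```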
Passing options to subcommands in bash
1,546,521,739,000
joe (Joe's Own Editor) manual outlines the command syntax like so: joe [global-options] [ [local-options] filename ]... My question is, how do I demarcate global-options from local-options? An example: joe --wordwrap -nobackup file1 file2 file3 Even though I placed --wordwrap (to turn wordwrap off), and -nobackup (to turn backup file creation off), they only apply to the first file. The subsequent files, file2 and file3, still will have word wrap on, and backup files will be created for them if edited and saved. Of course I could do this: joe --wordwrap -nobackup file1 --wordwrap -nobackup file2 --wordwrap -nobackup file3 .. but that is cumbersome, and would imply there wouldn't be global-options at all. I could also edit /etc/joe/joerc and /etc/joe/ftyperc (or copy them to the user's home dir, and make the overriding edits there) to turn word wrap and backups off for all files, but on systems where I'm only visiting (and that might have, say, a shared /home/ubuntu user/homedir, say, rather than individual user accounts/homedirs), I would rather not make permanent changes to the system tools that other users might use, yet it would be handy to be able to enter the editor args on the command line (perhaps even via a keyboard macro) without having to repeat the args for each file. So is there a way to have global-options in joe on the command line for parameters that can be also used as local-options? ("Why don't you use Vi[m] or Emacs instead?" Because I've never found vi[m] intuitive, I have forgotten the Emacs chords which I had mastered in the 90's, and joe does the job nicely, so why not? :-)
Whether an option is global or local is a property of the option, not something you can control. In the documentation, there are two separate lists of options: the first is the list of global options, the second the list of local options. Global options include options like asis, assume_color, etc. and affect the overall behaviour of the editor (e.g. your terminal's support for colour doesn't depend on the file you're editing). Local options include autoindent, encoding etc. and can be set automatically based on a file's extension. I don't see a way of applying a local option to all files on the command-line, apart from using shell expansion.
joe (editor) global vs. local options on the command line?
1,546,521,739,000
I'm designing a terminal-based application, and I want to implement a --silent flag so users can suppress noise they don't want. In this application, errors are most commonly logged when the application cannot perform a necessary task; warnings are logged when something couldn't be performed but the application can still operate. So, that stated, should a --silent flag suppress warnings and errors, or just warnings? What is the general convention on that? In ruby, ruby -w0 turns off warnings for your script (information via ruby --help). But in curl, curl --silent suppresses all output.
As you see with curl / ruby, there is no general convention. It greatly depends on your application and what can go wrong with it. It also depends on how it is used. For some applications it makes sense to have --quiet and --really-quiet flags, for some it's just overkill. Also a --really-quiet flag is usually not required technically, as you can throw away all messages with 2>/dev/null.

As general guidelines I suggest the following:

Have a meaningful return code.
If your application can distinguish different error classes (like user error, external error), have different return codes and document them.
If your application can produce lots of warnings, have a flag to filter only warnings.
If your application has different loglevels (like INFO, NOTICE, WARNING, ERROR), have a flag to filter them. (Like: -q, -qq, -qqq.)
If your application is used mostly interactively, suppress warnings but not errors. Especially if the application does not stop after the error.
If your application is used mostly in an automatic setting, suppress warnings and errors, because nobody is looking at them anyway. But only if the application stops after that error and produces a meaningful return code.
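The loglevel-filter guideline can be sketched as a pair of helpers gated on a verbosity variable (the names and levels here are illustrative, not a standard):

```shell
# 0 = silent, 1 = errors only, 2 = errors and warnings.
VERBOSITY=2

log_warning() { if [ "$VERBOSITY" -ge 2 ]; then echo "warning: $*" >&2; fi; }
log_error()   { if [ "$VERBOSITY" -ge 1 ]; then echo "error: $*" >&2; fi; }

log_warning "disk almost full"    # shown at the default level
VERBOSITY=1                       # e.g. after parsing a -q flag
log_warning "disk almost full"    # now suppressed
log_error "disk full"             # still shown
```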
Is a "--silent" flag supposed to suppress warnings and errors, or just warnings?
1,546,521,739,000
I am effectively making a recycling bin via some scripts I made. The first script is pretty much an alternative to the rm command (instead of actually deleting a file, it moves it to a deleted folder). I've managed to allow the script to move multiple files to the deleted folder:

sh moveToBin file1 file2 file3

(similar to: rm file1 file2 file3)

The start of my first script is:

#!/bin/bash
for param in "$@"
do
.. .. (main part of my code)

One by one, each parameter (file) is moved to the deleted folder. I am now trying to incorporate adding switch -parameters, but I'm not quite sure how to incorporate that. The above works for sh moveToBin file1 file2 file3, but how do I incorporate the possibility that the first argument (only the first) COULD be a switch: -i (ask to delete), -v (confirm deletion), -iv (ask to delete then confirm deletion). Hence the switch only applies to $1. I tried out something called getopts but I'm not familiar with its use. Once a switch is used, this applies to $2 onwards, i.e. sh moveToBin -i file1 file2 asks to delete file1, and after I decide, it then asks to delete file2. I thought of something like this, but I doubt it will work. Any help?

counter=1
for param in "$@"
do
  while [[ if $param = "-*" && counter -eq1]]; do
    getopts "iv" arg;
    case "$arg" in
      i) read -p "want to delete $param ?" ans
         if [ ans =~ ^[Y][y] ]
         then
           #run main code for that param value
         fi;;
      v) #run main code for that param value
         echo "file @param deleted";;
    esac
    counter=$((counter+1))
    continue
  done
  #run main code for that param value
done

The while loop condition means that it is the first parameter and that this parameter starts with a hyphen.
The getopts builtin parses options. You run it only once for all the options, then you process the operands (non-option arguments) that are left. getopts allows the caller to indifferently write e.g. moveToBin -iv file1 or moveToBin -i -v file1, and you can write moveToBin -- -file to handle file names that begin with a dash (anything after -- is interpreted as an operand). getopts keeps track of how many arguments it's already processed through the OPTIND variable. When it's finished its job, OPTIND is the index of the first operand; since arguments are numbered from 1, this means that the first OPTIND-1 arguments were options. As long as you're parsing options, you don't know yet the list of files to process. So remember the option by setting a variable, and query the variable later.

#!/bin/bash
confirm=
verbose=
while getopts 'iv' OPTLET; do
  case $OPTLET in
    i) confirm=y;;
    v) verbose=y;;
    \?) exit 3;; # Invalid option; getopts already printed an error message
  esac
done
shift $((OPTIND-1))

for file in "$@"; do
  if [[ -n $confirm ]]; then
    read -p "want to delete $file ?" ans
    if [[ $ans != [Yy]* ]]; then
      continue # the user said no, so skip this file
    fi
  fi
  … # do that moving stuff
  if [[ -n $verbose ]]; then
    echo "File $file deleted"
  fi
done

Note that getopts follows the traditional option parsing model, where anything after the first operand is a non-option. In other words, in moveToBin -i foo -v bar, there is the -i option and then three files foo, -v and bar. If you want to allow the GNU option parsing model where options can be mixed with operands, getopts isn't particularly helpful. Nor can bash's getopts builtin parse GNU long options (--verbose would be parsed like -v -e -r -b -o -s -e plus an error about - being unsupported).
Function with many arguments but only one switch
1,546,521,739,000
Is there a command which will just list all the options for a given command on one or two lines, rather than something as long and wordy as man or info?
There is no universal answer here, as the output from a given command is the responsibility of the author of that particular program, not anything in the Linux operating system or any of the shells. In general, I try to use --help and hope for the best, but in cases of filter-type programs, you might not get anything at all. Assuming the man pages are installed, they are one of the best sources for command-line information, other than the source code itself. Not all commands contain data for the info command either.
What is a command that will only show me the command syntax and options?
1,546,521,739,000
I read on the man page:

defaults
    Use default options: rw, suid, dev, exec, auto, nouser, async, and relatime.

Does the set of options applied depend on the filesystem being mounted, or not?
In the man page, defaults is listed under Filesystem Independent Mount Options, which means it doesn't depend on the filesystem type.
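For illustration, these two /etc/fstab lines are therefore equivalent on any filesystem type, since defaults simply expands to the option list quoted in the question (the device and mount point are placeholders):

```
UUID=0000-0000  /data  ext4  defaults                                     0  2
UUID=0000-0000  /data  ext4  rw,suid,dev,exec,auto,nouser,async,relatime  0  2
```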
mount defaults and various filesystems
1,546,521,739,000
The question is about semantics. curl -I, which means curl --head, puzzled me: I don't know what word the letter I is supposed to stand for. Is it just an arbitrary letter rather than an abbreviation of a meaningful word? Likewise curl -b, which means curl --cookie, raises the same question. Can someone make it clear how the inventors designed such options in the first place? Do the examples above mean the options don't have to be semantic?
At first glance it seemed to me that this wasn't answerable except to say, no, options that seem meaningless can always be found. You can make your own program that deliberately has totally meaninglessly named options. (Of course, even then, you might say they are meaningful in their meaninglessness, if they were deliberately chosen to be meaningless.) Upon further consideration, however, I realized that the ambiguity in what it means for an option to be semantic is actually an important and useful part of how command-line options are named, and curl -I is a particularly illustrative case of this. As muru says, options don't have to be semantic. But curl's -I option is semantic. From curl(1): -i/--include         (HTTP) Include the HTTP-header in the output. The HTTP-header includes things like server-name, date of the document, HTTP-version and more... -I/--head        (HTTP/FTP/FILE) Fetch the HTTP-header only! HTTP-servers feature the command HEAD which this uses to get nothing but the header of a document. When used on a FTP or FILE file, curl displays the file size and last modification time only. -i is the short form of --include, causing the HTTP header to be included. Although -I is the short form of --head, it is semantically the stronger form of -i, in that while -i gives you the HTTP header, -I gives you only the HTTP header. This offers insight into your larger question: there are many different criteria with which one might judge if an option name is semantic. When an option is semantic it might be intentional or unintentional. If you're only interested in whether or not there exists some way to remember the option as though it is semantic, then yes, you can always make up a reason the name relates to its meaning. Some options are semantic in more than one way. You can make GNU grep show lines adjacent to matching lines with -A for after, -B for before, and -C for context which gives you both. 
Thus the short forms of the three options -A/--after-context, -B/--before-context, and -C/--context, which are very close to one another in meaning, are also close to one another in the alphabet. Is that semantic? To pursue this further and get a rigorous answer, you can search applicable standards like POSIX.1-2008. It seems extremely implausible that it prohibits meaninglessly named options, but I suppose you'd have to carefully read the whole thing to be sure. A cursory search does not reveal any requirement that options mean anything. In particular, these official guidelines--required only for commands whose documentation declares compliance with them--recommend various restrictions on how options may be named and the effect of passing them, but they don't mention anything that could be interpreted as a requirement or recommendation that option names make sense. Even if you did find something, many Unix-like systems don't aim for full POSIX compliance... But that whole line of thinking--consulting official sources to determine if (vendors have to pretend that) every option's name means something--is sort of silly. The real useful thing to know about options is that their names can relate to each other in multiple ways. They can be named after words, after other options, or for alphabetical proximity to other options. Sometimes they're just a letter (or numeral) that happened to be available. Thinking about these ways can help you to remember options, to find options when searching manpages, and to make good decisions about what option names your own scripts or programs should take. As a final note, it's good to keep in mind that it's not just short-form options that can be named in ways that don't allow you to infer their meaning. For example, the long-form options --regex and --regexp to mlocate are named semantically in the sense that they both have to do with regular expressions. 
But there is nothing in the way they are named to tell you that --regexp means the next argument is a BRE while --regex means all pattern arguments are EREs.
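The grep context options mentioned above are quick to try out (a sketch; the file name and contents are invented):

```shell
# -A prints lines After the match, -B lines Before, -C both (Context).
workdir=$(mktemp -d)
printf 'one\ntwo\nMATCH\nfour\nfive\n' > "$workdir/ctx.txt"

after=$(grep -A1 MATCH "$workdir/ctx.txt")    # MATCH + four
before=$(grep -B1 MATCH "$workdir/ctx.txt")   # two + MATCH
both=$(grep -C1 MATCH "$workdir/ctx.txt")     # two + MATCH + four

printf '%s\n---\n' "$after" "$before" "$both"
rm -rf "$workdir"
```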
is each CLI command option an semantic abbreviation [closed]
1,546,521,739,000
I'm trying to figure out what the following mount option for (v)FAT exactly does (in Linux):

allow_utime=### -- This option controls the permission check of mtime/atime.
  20 - If current process is in group of file's group ID, you can change timestamp.
   2 - Other users can change timestamp.
The default is set from `dmask' option. (If the directory is writable, utime(2) is also allowed. I.e. ~dmask & 022) Normally utime(2) checks current process is owner of the file, or it has CAP_FOWNER capability. But FAT filesystem doesn't have uid/gid on disk, so normal check is too unflexible. With this option you can relax it. [source]

Question: What does this (above) mean? Trying to look it up I ended up at the C code, which doesn't help me a lot, so neither this nor man 2 utime (as mentioned) helps me much at the moment. I'd love to use the source… From utime: "The utime() system call changes the access and modification times of the inode specified by filename to the actime and modtime fields of times respectively." I read this as: enables changing timestamps. Super Extra Kudos for whoever can give an actual example of how to use this mount option (allow_utime).
On a filesystem that supports normal Unix file attributes, each file has a user who is designated as owner. Only the owner of a file may change its timestamps with utime. Other users aren't allowed to change timestamps, even if they have write permission.

FAT filesystems don't record anything like an owner. The FAT filesystem driver pretends that a particular user is the owner of every file: either the user doing the mounting or the user given by the uid parameter. Using the normal rules, only that user is allowed to change timestamps. Files also have an owning group, determined by the gid parameter. FAT filesystems don't record Unix file permissions either, so the driver makes them up. It assigns permissions based on the umask, fmask and dmask parameters, so all directories and all regular files have the same permissions.

When users other than the owner have write access to the filesystem, it would make sense that they'd be allowed not only to modify regular files and directories, but also file metadata. The main metadata of interest on a FAT filesystem is the timestamps on files. Normally, only the owning user can modify timestamps. By passing the allow_utime mount option, you can allow other users to change timestamps as well. For example, to allow the group foo to modify anything in the filesystem, and allow others to read but not write, you would pass the parameters gid=foo,umask=002,allow_utime=20 (this is actually the default value for allow_utime based on the umask).
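To make the closing example concrete, here is a hypothetical full invocation. It needs root and a real FAT device; the device path, user, and group names are placeholders:

```
# Owner is alice; members of group "users" can write (umask=002) and,
# because allow_utime=20, can also change file timestamps.
mount -t vfat -o uid=alice,gid=users,umask=002,allow_utime=20 /dev/sdb1 /mnt/usb
```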
FAT Mountoption allow_utime explained
1,546,521,739,000
TL;DR: What does mount -e <device> do? Overview Over ssh, mount commands are being sent by software that I need to maintain. Twice, the mount commands use an -e option. I don't know what -e is for and can't find a good enough answer. Details In my case, the procedure that is doing this is named "Verify Backup OS" and the command being sent over ssh is the following: mount -e /dev/hd0; umount -f /dev/hd0t177.1; mount -e /dev/hd0; mount -tqnx6 /dev/hd0t177.1 /BackupPartition After doing that, the software then appears to do a search in /BackupPartition so I think that's where the "verify" part of the procedure name comes into play. Man output If I do man mount, the text for the command and for -e is: Usage: mount [-abwruv] -t type [-o options] [special] mntpoint mount [-abwruv] -T type [-o options] special [mntpoint] mount [-abwruv] -e [-t|T type] [-o options] special [mntpoint] mount … -e Enumerate the device given by special Product Documentation On the QNX page for mount, it has -e Enumerate the children of the special device. By the description it would appear that the mount -e somehow enumerates the device in some capacity. I was expecting that to mean "prints out some data to stdout", but that doesn't make much sense in this case since the output is not visible to the software user and is not piped or redirected anywhere, and I tried the command manually on the command line where it produced no output and no sign that anything happened. What does mount -e <device> do?
Did you also read the examples in the QNX mount documentation?

mount -e
    This will re-read the disk partition table for /dev/hd0, and create, update or delete /dev/hd0tXX block-special files for each partition.

With your ssh command that makes sense: re-enumerating /dev/hd0 ensures the /dev/hd0t177.1 block-special file exists and is up to date before it is mounted. Beyond that, I can't say much more.
What does "mount -e <device>" do?
1,546,521,739,000
Sometimes I see usage information like some_utility [arg [arg [...]]] or some_utility [arg[, arg[...]]] that indicates that you can pass more than one of the same argument. I've also seen it like some_utility [args] or some_utility [arg][, arg][...] Is there a standard way to do this?
Nope, not really, but most of the time it's rather consistent. Most manuals would list optional arguments with square brackets ([foo]), mandatory ones with angle brackets or no brackets at all (<bar>, baz, the latter often underlined). Also, in almost all cases if the number of arguments is variable, you'll have some kind of list (as you mentioned, [quux ...] or [quux[, quux[...]]] ...), where similarly [quuz] most of the time denotes exactly one argument. In rare cases, [quuz] could also mean »one or more arguments«, but most manuals conform to the ellipsis listing form. It's a quasi-standard, but you can't rely on it completely. In my experience, though, it's okay 95% of the time.

The point is, there are hundreds of software vendors of every couleur out there: not only »big players«, but also software written by single individuals that proved to be useful. All of them (hopefully) write their manpages in some individual way… most of them have been around long enough to follow common conventions, but there are also ones who don't care, or who have idiosyncratic taste in style, etc. ;)
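Put together, a synopsis following these conventions might look like this (the tool and option names are invented for illustration):

```
frobnicate [-v] [-o outfile] <input> [input ...]
```

Here [-v] is an optional flag, [-o outfile] an optional option taking a value, <input> exactly one mandatory operand, and [input ...] any number of further operands.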
Correct way to document variable length options in man pages
1,546,521,739,000
I'm looking for a way to process shell script arguments that is cleaner and more "self documenting" than getopt/getopts. It would need to provide:

Full support of long options with or without a value after '=' or ' ' (space).
Proper handling of hyphenated option names (i.e. --ignore-case)
Proper handling of quoted option values (i.e. --text "A text string")

I would like to eliminate the overhead of the big loop with embedded case statement that getopt/getopts requires and reduce option processing to something like...

option=argumentparse "$@"
[[ option == "" ]] && helpShow
[[ option =~ -h|--help ]] && helpShow
[[ option =~ -v|--version ]] && versionShow
[[ option =~ -G|--GUI ]] && GUI=$TRUE
[[ option =~ --title ]] && TITLE=${option["--title"]}

Here, an argumentparse() function resolves the various syntax possibilities into a consistent format, perhaps an associative array. There must be something coded out there somewhere. Any ideas? (updated and retitled)
Since this question has been viewed so much (for me at least) but no answers were submitted, passing on the solution adopted... NOTE Some functions, like the multi-interface output functions ifHelpShow() and uiShow() are used but not included here as their calls contain relevant information but their implementations do not. ############################################################################### # FUNCTIONS (bash 4.1.0) ############################################################################### function isOption () { # isOption "$@" # Return true (0) if argument has 1 or more leading hyphens. # Example: # isOption "$@" && ... # Note: # Cannot use ifHelpShow() here since cannot distinguish 'isOption --help' # from 'isOption "$@"' where first argument in "$@" is '--help' # Revised: # 20140117 docsalvage # # support both short and long options [[ "${1:0:1}" == "-" ]] && return 0 return 1 } function optionArg () { ifHelpShow "$1" 'optionArg --option "$@" Echo argument to option if any. Within "$@", option and argument may be separated by space or "=". Quoted strings are preserved. If no argument, nothing echoed. Return true (0) if option is in argument list, whether an option-argument supplied or not. Return false (1) if option not in argument list. See also option(). Examples: FILE=$(optionArg --file "$1") if $(optionArg -f "$@"); then ... optionArg --file "$@" && ... 
Revised: 20140117 docsalvage' && return # # --option to find (without '=argument' if any) local FINDOPT="$1"; shift local OPTION="" local ARG= local o= local re="^$FINDOPT=" # # echo "option start: FINDOPT=$FINDOPT, o=$o, OPTION=$OPTION, ARG=$ARG, @=$@" >&2 # # let "$@" split commandline, respecting quoted strings for o in "$@" do # echo "FINDOPT=$FINDOPT, o=$o, OPTION=$OPTION, ARG=$ARG" >&2 # echo " o=$o" >&2 # echo "re=$re" >&2 # # detect --option and handle --option=argument [[ $o =~ $re ]] && { OPTION=$FINDOPT; ARG="${o/$FINDOPT=/}"; break; } # # $OPTION will be non-null if --option was detected in last pass through loop [[ ! $OPTION ]] && [[ "$o" != $FINDOPT ]] && { continue; } # is a positional arg (no previous --option) [[ ! $OPTION ]] && [[ "$o" == $FINDOPT ]] && { OPTION="$o"; continue; } # is the arg to last --option [[ $OPTION ]] && isOption "$o" && { break; } # no more arguments [[ $OPTION ]] && ! isOption "$o" && { ARG="$o"; break; } # only allow 1 argument done # # echo "option final: FINDOPT=$FINDOPT, o=$o, OPTION=$OPTION, ARG=$ARG, @=$@" >&2 # # use '-n' to remove any blank lines echo -n "$ARG" [[ "$OPTION" == "$FINDOPT" ]] && return 0 return 1 } ############################################################################### # MAIN (bash 4.1.0) (excerpt of relevant lines) ############################################################################### # options [[ "$@" == "" ]] && { zimdialog --help ; exit 0; } [[ "$1" == "--help" ]] && { zimdialog --help ; exit 0; } [[ "$1" == "--version" ]] && { uiShow "version $VERSION\n"; exit 0; } # options with arguments TITLE="$(optionArg --title "$@")" TIP="$( optionArg --tip "$@")" FILE="$( optionArg --file "$@")"
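For readers who only need the core `--option=value` trick from the code above, here is a much smaller self-contained sketch (the `parse_opts` name and the option names are illustrative, not a standard command):

```shell
# Minimal sketch: collect --name=value options into shell variables.
# parse_opts is a made-up helper name for this example.
parse_opts() {
    for arg in "$@"; do
        case $arg in
            --title=*) TITLE=${arg#--title=} ;;   # strip the option prefix
            --file=*)  FILE=${arg#--file=}   ;;
            -G|--GUI)  GUI=true ;;
            *)         : ;;                        # ignore everything else
        esac
    done
}

parse_opts --title="My Title" -G --file=/tmp/x
printf '%s|%s|%s\n' "$TITLE" "$FILE" "$GUI"   # -> My Title|/tmp/x|true
```

Because the shell has already split the command line into words, quoted option values like `--title="My Title"` arrive as a single argument and survive intact.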
Simpler processing of shell script options
1,546,521,739,000
I have a file with the below contents: sh-4.2$ cat file1 example of multiple pattern this is an example of multipole sorry multiple pattern matching using grep so the example is the file itself -example -multiple -bye tata ! While searching for "-example" in the above file, the grep command is not giving the desired output. I know if the pattern contains '-' then -e option should be used: In the first example I used -example directly without any quotes: sh-4.2$ grep -example file1 example of multiple example of multipole so the example is the -example -example with single quotes: sh-4.2$ grep '-example' file1 example of multiple example of multipole so the example is the -example -example with double quotes and escape characters sh-4.2$ grep "\-example" file1 -example sh-4.2$
Well, you know that the search pattern starts with a '-', and you know that when it does, grep will try to parse it as options unless you tell it otherwise. Since you're not using the -e flag, grep is interpreting your "pattern" as an option (and its parameter) instead. You can see that with: $ grep "-foo" file1 grep: oo: No such file or directory Here grep took -f with the parameter "oo" and tried to read patterns from a file named oo. By extension, your code grep "-example" file1 is telling grep that you want the -e option with the parameter "xample" - which is why every line containing "xample" matched. This is the same problem we run into when we try something like rm -my-silly-file-name - it won't work, and we need to use something like rm ./-my-silly-file-name instead. Another workaround would be rm -- -my-silly-file-name. We can use that idiom here: $ grep -- "-example" < file1 -example The "--" tells grep that everything after it is an operand, not an option. Alternatively, you can simply escape the "-" with a "\", as you've seen: grep "\-example" file1 The escaped pattern no longer begins with a '-', so grep doesn't treat it as an option, and \- matches a literal '-' in the regular expression. Note that the quoting style makes no difference here: single quotes, double quotes, or no quotes all hand grep the identical string -example; it's the backslash (or the "--"), not the quotes, that changes the outcome.
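To see the difference concretely (the file path here is just for the demo):

```shell
printf '%s\n' 'plain example' '-example' > /tmp/grep-demo.txt

# Misbehaves: grep parses this as -e with the pattern "xample",
# so both lines match.
grep '-example' /tmp/grep-demo.txt

# Safe: -- ends option parsing, so the pattern may start with a dash.
grep -- '-example' /tmp/grep-demo.txt    # -> -example

# Equally safe: give the pattern explicitly as the argument of -e.
grep -e '-example' /tmp/grep-demo.txt    # -> -example

rm -f /tmp/grep-demo.txt
```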
Why doesn't grep return what I expect when I use single quotes
1,546,521,739,000
I'm curious if anyone can help me with the best way to protect potentially destructive command line options for a linux command line application. To give a very hypothetical scenario: imagine a command line program that sets the maximum thermal setting for a processor before emergency power off. Let's further pretend that there are two main options, one of which is --max-temperature (in Celsius), which can be set to any integer between 30 & 50. There is also an override flag --melt which would disable the processor from shutting down via software regardless of how hot the processor got, until the system electrically/mechanically failed. Certainly an option like --melt is dangerous, and could cause physical destruction in the worst case. But again, let's pretend that this type of functionality is a requirement (albeit a strange one). The application has to run as root, but if there was a desire to help ensure the --melt option wasn't accidentally triggered by confused or inexperienced users, how would you do that? Certainly a very common anti-pattern (IMO) is to hide the option, so that --help or the man page doesn't reveal its existence, but that is security through obscurity and could have the unintended consequence of a user triggering it, but not being able to find out what it means. Another possibility is to change the flag to a command line argument that requires the user to pass --melt OVERRIDE, or some other token as a signifier that they REALLY mean to do this. Are there other mechanisms to accomplish the same goal?
I'm assuming you're looking at this from the POV of the utility programmer. This is broad enough that there isn't (and can't be) a single right answer, but some things come to mind. I think most utilities just have a single "force" flag (-f), that overrides most safety checks. On the other hand, e.g. dpkg has a more fine-grained --force-things switch, where things can be a number of different keywords. And apt-get makes you write a complete sentence to verify in some cases, like removing "essential" packages. See below. (I think it's not just a command line option here, since essential packages are e.g. those that are required to install packages, so undoing a mistaken action may be very hard. Besides, the whole operation may not be known up front, before apt has had a chance to calculate the package dependencies.) Then, I think cdrecord used to make the user wait a couple of seconds before actually starting the work, so that you had a chance to verify the settings were sane while the numbers were running down. Here's what you get if you try to apt-get remove bash: WARNING: The following essential packages will be removed. This should NOT be done unless you know exactly what you are doing! bash 0 upgraded, 0 newly installed, 2 to remove and 2 not upgraded. After this operation, 2,870 kB disk space will be freed. You are about to do something potentially harmful. To continue type in the phrase 'Yes, do as I say!' ?] ^C Which one to choose is up to you as the program author - you'll have to base the decision on the danger level of the action, and on your own level of paranoia. (Be it based on caring about your users, or on the fear of getting blamed for the mess.) Something that has the potential to cause the processor to literally (halt and) catch fire probably goes in the high end of the "danger" axis and probably warrants something like the "type 'Yes, do what I say'" treatment. 
That said, one thing to realise is that many of the actual kernel-level interfaces are not protected by any means. Instead, there are files under /sys that can change things just by being opened and written to, no questions asked apart from the file access permissions. (i.e. you need to be root.) This goes for hard drive contents too (as we should know), and, in one case two years back, to the configuration variables of the motherboard firmware. It seems it was possible to "brick" computers with a misplaced rm -rf. No, really. See lwn.net article and the systemd issue tracker. So, whatever protections you would implement, you would only protect the actions done using that particular tool.
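A hedged sketch of the apt-get-style "retype the whole phrase" pattern in shell (the phrase and the function name are made up for this example; a real tool would tailor both to its own danger level):

```shell
# Require the user to retype an exact phrase before a dangerous action.
# confirm_dangerous is a hypothetical helper, not a standard command.
confirm_dangerous() {
    printf 'You are about to do something potentially harmful.\n'
    printf "To continue type in the phrase 'Yes, do as I say!'\n?] "
    IFS= read -r reply
    [ "$reply" = 'Yes, do as I say!' ]
}

# Simulated sessions (input piped in so the demo is non-interactive):
printf 'Yes, do as I say!\n' | confirm_dangerous && echo "proceeding"
printf 'y\n' | confirm_dangerous || echo "aborted"
```

A plain `y` (or anything else) is rejected; only the full phrase lets the action through, which is exactly the point: it is hard to type by accident.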
How to protect potentially destructive command line options?
1,546,521,739,000
Is there a list of all the if switches for use in bash scripting? Sometimes I see someone using it and I wonder what the switch they're using actually does. Example is the -z in this one. I know how to use it, but I don't know where it was derived from. if [ -z "$BASH_VERSION" ]; then echo -e "Error: this script requires the BASH shell!" exit 1 fi Any references, guides, posts, answers would be appreciated. Thank you!
Technically, those are not "if switches" as you state them, but bash conditional expressions used by [[ compound command and the test and [ builtin commands. The list is here.
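A few of those conditional expressions in action (file paths are just for the demo):

```shell
s=""
[ -z "$s" ] && echo "empty"          # -z: true if the string length is zero

s="hello"
[ -n "$s" ] && echo "non-empty"      # -n: true if the length is non-zero

touch /tmp/cond-demo
[ -f /tmp/cond-demo ] && echo "regular file"   # -f: exists and is a regular file
[ -d /tmp ] && echo "directory"                # -d: exists and is a directory
rm -f /tmp/cond-demo
```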
List of 'if' switches anywhere? [closed]
1,546,521,739,000
I am reading the book "Linux Command Line", and there's a -u update option for the mv and cp commands: -u, --update When moving files from one directory to another, only move files that either don't exist, or are newer than the existing corresponding files in the destination directory. The option is not included in the BSD mv command. What's the alternative to --update?
You can use rsync instead of mv combining these two options: -u, --update skip files that are newer on the receiver --remove-source-files sender removes synchronized files (non-dir)
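If rsync isn't available either, the "move only if missing or newer" logic can be sketched in plain shell. Note that `move_if_newer` is a made-up helper for this sketch, and the `-nt` file test, while supported by bash, dash and most other shells, is not strictly required by POSIX:

```shell
# move_if_newer SRC DESTDIR - mimic "mv -u SRC DESTDIR" for a single file
move_if_newer() {
    src=$1 destdir=$2
    dest="$destdir/$(basename -- "$src")"
    # Move only when the destination is missing or older than the source.
    if [ ! -e "$dest" ] || [ "$src" -nt "$dest" ]; then
        mv -- "$src" "$dest"
    fi
}

mkdir -p /tmp/mvu-src /tmp/mvu-dst
echo new > /tmp/mvu-src/a
move_if_newer /tmp/mvu-src/a /tmp/mvu-dst   # destination missing: file moved
ls /tmp/mvu-dst                             # -> a
rm -rf /tmp/mvu-src /tmp/mvu-dst
```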
The alternative to Option '--update' in BSD command 'mv'
1,546,521,739,000
I'm writing a bash script that has optional flags but also an input. I can't get the input as $1 because when flags are present the input is shifted. So for example if I run script.sh test then $1 will be equal to test. But if I run script.sh -b test then $1 will be equal to -b. while getopts 'bh' flag; do case "${flag}" in b) boxes='true' ;; h) echo "options:" echo "-h, --help show brief help" echo '-b add black boxes for monjaro' ;; *) error "Unexpected option ${flag}" ;; esac done echo $1; The number of flags I have is not fixed; I know I will add more in the future. How can I consistently get the first non-flag value?
You typically use getopts as: while getopts...; do # process options ... done shift "$((OPTIND - 1))" printf 'First non-option argument: "%s"\n' "$1" The shift above discards all option arguments (including the trailing -- if any) processed by getopts.
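Putting the whole pattern together in a runnable sketch, wrapped in a function so it is easy to exercise (the function name `parse` is just for illustration):

```shell
parse() {
    boxes=false
    OPTIND=1                    # reset between calls in the same shell
    while getopts 'bh' flag; do
        case $flag in
            b) boxes=true ;;
            h) echo "usage: script [-b] [-h] input" ;;
            *) return 1 ;;
        esac
    done
    shift "$((OPTIND - 1))"     # discard all processed options
    first=$1                    # the first non-option argument
}

parse -b test
echo "$first $boxes"    # -> test true

parse test
echo "$first $boxes"    # -> test false
```

Either way the script is invoked, `$1` after the `shift` is the input, no matter how many flags precede it.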
Bash get input while flag present?
1,455,096,620,000
I heard that I should never use --nodeps option when I do a rpm -e command. Why does this option exist then?
It exists for broadly the same reasons rm will allow you to delete the filesystem root, or dd will allow you to overwrite the physical hard drive: Linux and unix have a long history of giving you all the ammo you need when you really insist on shooting yourself in the foot. Less flippantly, when something has gone badly wrong during a package install, whether due to a badly built package or an outage at the worst possible moment, it's possible to wind up with your package manager's dependency database in gridlock -- IE, it can't resolve the problem because attempting any of the solutions would violate the dependencies of the other packages involved. In that case, you can use --nodeps, or for dpkg, the --force-* options to manually and forcibly remove the offending package, and then immediately issue what commands are necessary to fix the now broken dependencies. That's something you should only do if you're really sure of what you're doing, however; as a rule of thumb, if you aren't sure what use --nodeps is, don't use it. You're essentially taking all the safeties off, and gods help you if you screw something up while doing it.
In which case can I use the option '--nodeps' of rpm command?
1,455,096,620,000
I learned that -i option is interactive mode and -f option is force model in rm command. When I tried both options rm -if test.txt it did not ask me and just deleted it which means -f option overrode -i option. Of course, I would not use options -i and -f at the same time in real life. But I wonder if there is a priority if two contradictory options are used at the same time. I tried this in Ubuntu 22.04.
In addition to @ckhan's answer: the implementations of rm I worked with always considered the last given argument as final. That means: rm -fi # will work interactively rm -if # will work non-interactively rm -ffifi # will work interactively etc.. For instance, the AIX manpage (AIX 7.2) states: If both the -f and -i flags are specified, the last one specified takes affect.
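You can observe the last-one-wins behaviour of GNU rm non-interactively (the file paths are just for the demo):

```shell
touch /tmp/demo-a /tmp/demo-b

rm -if /tmp/demo-a                        # -f last: removed, no prompt
echo n | rm -fi /tmp/demo-b 2>/dev/null   # -i last: prompts; "n" keeps it

ls /tmp/demo-a 2>/dev/null   # gone
ls /tmp/demo-b               # still there
rm -f /tmp/demo-b
```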
rm command contradictory options -i and -f
1,455,096,620,000
About the tar command. If the following command is executed: tar -czf numbers.tar.gz numbers the numbers.tar.gz file is created - from numbers - in the current directory. But for script purposes - while testing - if either of these is executed: tar -czf ~/numbers.tar.gz /home/username/numbers tar -czf /home/username/numbers.tar.gz /home/username/numbers both commands work as expected, but the following message always appears: "tar: Removing leading `/' from member names" It happens because the source path starts with the / character. I know it is not an error, but being curious: Question: How to avoid showing the "tar: Removing leading `/' from member names" message in the terminal? I am assuming it is either an info or warning message, and for script purposes I need to define both the path of the tar.gz to create and the source directory to compress - and I don't want to see that message in the terminal. Is it possible? With what option?
Having absolute paths as archive members is a bad idea. That's why GNU tar actually removes the initial / by default (from archive member names and from hard link targets if any). If you're happy for tar do that stripping but want to remove the warning, you can do the stripping by yourself: tar -C / -czf ~/numbers.tar.gz home/username/numbers Or: tar -C /home/username -czf ~/numbers.tar.gz numbers For the archive members to be numbers/file instead of home/username/numbers/file. You can tell tar not to do the stripping with --absolute-names / -P in which case you'll get /home/username/numbers/file as archive members. Upon extractions, most tar implementations will also strip that leading / by default as a safety measure. If you extract it from within the /tmp/test directory, the files will be extracted as /tmp/test/home/username/numbers/file whether they're stored in the archive as home/username/numbers/file or /home/username/numbers/file, unless you pass the -P / --absolute-names option again (though doing a cd / or pass a -C / would make more sense if you do want the paths to be interpreted as relative to the root, same as absolute paths).
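A quick self-contained check (the paths under /tmp are just for the demo) that -C makes the member names relative, so the warning never appears:

```shell
mkdir -p /tmp/tar-demo/numbers
echo 1 > /tmp/tar-demo/numbers/one

# Relative member names: no "Removing leading `/'" warning is printed.
tar -C /tmp/tar-demo -czf /tmp/numbers.tar.gz numbers

tar -tzf /tmp/numbers.tar.gz   # -> numbers/  numbers/one

rm -rf /tmp/tar-demo /tmp/numbers.tar.gz
```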
tar create: How to avoid to show the "tar: Removing leading `/' from member names" message in the terminal?
1,455,096,620,000
It is well known that it's a bad idea to do something of the kind <command> $FILENAME, since you can have a file whose name is for example -<option> and then instead of executing <command> with the file -<option> as an argument, <command> will be executed with the option -<option>. Is there then a general safe way to accomplish this? One hypothesis would be to add -- before the filename, but I'm not sure if that is 100% safe, and there could be a command that doesn't have this option.
First of all, always enclose your variable between double quotes (there are exceptions to this rule, but you will easily recognize them when the moment comes). The risk that your filename contains space characters is equally high (and probably higher) than a filename that begins with a minus sign. Your first command should thus be: <command> "$FILENAME" Next, the -<option> case. You sum up the issue quite well: using -- is OK but you cannot blindly add -- to your command as all commands do not support this syntax. A perfectly safe way to protect against variables that contain a filename starting with a minus sign (or any other sensitive character) is to change the filename in order to prepend a path to it, either a relative path (./file) or an absolute path (/foo/bar/file). That way, the first char is harmless since it is either . or /. This code will add a relative path to $filename unless it is already an absolute path: [[ "$filename" != /* ]] && filename="./$filename" Personally, in my shell scripts, I prefer to change file arguments to their canonical representation (full path): filename="$(readlink -f -- "$filename")" or: filename="$(realpath -m -- "$filename")" (if you wonder which among realpath and readlink you should use, see this excellent answer)
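The prefixing trick in action, with a file actually named -n (the directory is illustrative; the `case` statement used here is just the POSIX spelling of the `[[ "$filename" != /* ]]` check above):

```shell
mkdir -p /tmp/prefix-demo && cd /tmp/prefix-demo
echo hello > ./-n          # a file whose name starts with a dash

filename='-n'
case $filename in
    /*) ;;                          # already an absolute path
    *)  filename="./$filename" ;;   # prepend a harmless relative prefix
esac

cat "$filename"            # -> hello   (cat ./-n is unambiguous)
cd / && rm -rf /tmp/prefix-demo
```

Without the prefix, `cat "$filename"` would have been `cat -n`, i.e. cat with the number-lines option and no file at all.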
Passing arguments to a command safely
1,455,096,620,000
In the example bellow: function zp () { zparseopts -E -walk:=o_walk echo "walk: $o_walk" } I get the following output: $ zp --walk "Walking" walk : --walk Walking $ zp --walk zp:zparseopts:2: missing argument for option: -walk walk : Here the argument of the option is mandatory so I am getting this error. How can I make the option mandatory so that I must pass --walk to zp else it will throw an error?
I don't know exactly about zparseopts, but I think getopt doesn't have that and I only see references to mandatory arguments in the manual for zparseopts. You can always just check manually if the resulting option is set: function zp () { if ! zparseopts -E -walk:=o_walk; then return 1 fi if [ $#o_walk = 0 ]; then echo "required option --walk missing" >&2 return 1 fi echo "walk: $o_walk" } Here, zparseopts fails if the option is given without an argument, and the second if explicitly tests if the o_walk array has any items. Using an associative array to collect the arguments is also an option, and to me it feels cleaner: function zp () { if ! zparseopts -E -A opts -walk: ; then return 1 fi if ! [ ${opts[--walk]+x} ]; then echo "required option --walk missing" >&2 return 1 fi echo "walk: $opts[--walk]" }
How do I make an option (not argument of the option) mandatory in zparseopts?
1,455,096,620,000
I'm learning about the cut command. In the man page of cut, they show the -n option like: -n (ignored) But I didn't understand the usage of the n option or when we would use it. Can anyone explain with an example?
Your man cut describes -n option as "ignored", simply because it is not implemented, in the cut implementation from coreutils. However, the -n option is implemented on some others cut implementations, at least in the *BSD \ POSIX.2 implementation(s). Thus cut from coreutils implements a stub option to it, for portability sake, for not breaking compatibility with scripts. However, as it is not implemented, it won't have any effect using it. From man cut in FreeBSD 12.0: -n Do not split multi-byte characters. Characters will only be output if at least one byte is selected, and, after a prefix of zero or more unselected bytes, the rest of the bytes that form the character are selected. From the POSIX standard cut page, link pointed out by @Kusalananda: -n Do not split characters. When specified with the -b option, each element in list of the form low-high (<hyphen>-separated numbers) shall be modified as follows: If the byte selected by low is not the first byte of a character, low shall be decremented to select the first byte of the character originally selected by low. If the byte selected by high is not the last byte of a character, high shall be decremented to select the last byte of the character prior to the character originally selected by high, or zero if there is no prior character. If the resulting range element has high equal to zero or low greater than high, the list element shall be dropped from list for that input line without causing an error. Each element in list of the form low- shall be treated as above with high set to the number of bytes in the current line, not including the terminating <newline>. Each element in list of the form -high shall be treated as above with low set to 1. Each element in list of the form num (a single number) shall be treated as above with low set to num and high set to num.
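With GNU coreutils cut you can confirm that -n is accepted alongside -b but changes nothing:

```shell
printf 'abcdef\n' | cut  -b 1-3   # -> abc
printf 'abcdef\n' | cut -nb 1-3   # -> abc  (-n accepted, silently ignored)
```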
What is the use of n option in cut command?
1,455,096,620,000
Is there an option in find that allows me to suppress the error messages that I get from it trying to access directories for which I don't have access? I know I can just discard stderr, but it seems like such an obvious need that I'm not convinced that an option that does this does not exist, despite me not finding one in the documentation.
To avoid getting permission errors from find, you would have to avoid provoking these errors. You do that by avoiding entering directories that are not accessible. Find and display the pathnames of directories that are not readable by the current user, but don't descend into them, GNU find style: find / -type d ! -readable -prune The -prune action removes the pathname currently under investigation from the search path of find. With standard find, you would have to combine -perm and -user and -group in a complicated way to test the permissions on each directory depending on the ownerships of the directory. I think I've tried to do that a couple of times, but it's difficult. To only care about the "others" permission bits: find / -type d ! -user "$(id -u)" ! -group "$(id -g)" ! -perm -005 -prune This would find any directory not owned by the current user, not belonging to the current user's group, and whose permission bits does not allow "others" to read (list) or execute (enter) it, and then prune these from the search path. The full thing, testing all the permission bits, may possibly look something like find / -type d \( \( -user "$(id -u)" ! -perm -500 \) -o \ \( ! -user "$(id -u)" -group "$(id -g)" ! -perm -050 \) -o \ \( ! -user "$(id -u)" ! -group "$(id -g)" ! -perm -005 \) \) -prune The difference between this and the -readable of GNU find is that -readable also considers ACLs etc. To discard permission errors from find, redirect its standard error stream to /dev/null.
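The -prune mechanics are easiest to see with a simpler name-based example (the permission-based predicates above plug into the same position; the directory names here are made up):

```shell
mkdir -p /tmp/find-demo/keep /tmp/find-demo/skipme
touch /tmp/find-demo/keep/a /tmp/find-demo/skipme/b

# skipme is pruned: find never descends into it, so b is not listed.
find /tmp/find-demo -type d -name skipme -prune -o -type f -print
# -> /tmp/find-demo/keep/a

rm -rf /tmp/find-demo
```

Replace `-name skipme` with `! -readable` (GNU find) and the same `-prune -o ... -print` structure skips the directories that would otherwise produce permission errors.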
Discard "access denied" stderr natively in find
1,455,096,620,000
[user@mymachine folder]$ echo `date --date=tomorrow +%Y%m%d` 20160802 [user@mymachine folder]$ echo `date -d=tomorrow +%Y%m%d` date: invalid date `=tomorrow' I'm using Centos 5 if that makes any difference.
Short (Unix-style) options are usually separated from their argument by a space, but the space is not strictly required in some cases. For instance, echo `date -dtomorrow +%Y%m%d` and echo `date -d tomorrow +%Y%m%d` would both work just fine. However, in the case of echo `date -d=tomorrow +%Y%m%d` the whole string =tomorrow is taken as the argument to -d, and =tomorrow is not a valid date string. The = separator is only used with long (GNU-style) options, as in --date=tomorrow.
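The three spellings side by side (GNU date assumed):

```shell
date -d tomorrow +%Y%m%d       # space-separated argument: works
date -dtomorrow +%Y%m%d        # attached argument: also works
date --date=tomorrow +%Y%m%d   # long option: "=" is the separator here

# date -d=tomorrow +%Y%m%d     # error: "=tomorrow" is not a valid date
```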
Why is one of these date commands valid and the other not?