I have this bash code combined with getopts. If I understood getopts correctly, OPTIND contains the index of the next command-line option, and all the command-line options provided to the shell script are presented in the variables $1, $2, $3 etc. Correct me if I am wrong, but it is basically the same concept as local variables in functions. So, given that, why do the options [-a somevalue] or [-b somevalue] not give me any results? What am I doing wrong?

    OPT_A=A
    OPT_B=B
    while getopts :a:b FLAG; do
        case $FLAG in
            a) OPT_A=$OPTARG ;;
            b) OPT_B=$OPTARG ;;
        esac
    done
    shift $((OPTIND-1))
    while [ $# -ne 0 ]; do
        if [[ -z $OPT_A ]]; then
            if [ `echo $1 | grep -o '\.' | wc -l` -ne 3 ]; then
                echo "Parameter '$1' does not look like an IP Address (does not contain 3 dots)."
                exit 1
            elif [ `echo $1 | tr '.' ' ' | wc -w` -ne 4 ]; then
                echo "Parameter '$1' does not look like an IP Address (does not contain 4 octets)."
                exit 1
            else
                for OCTET in `echo $1 | tr '.' ' '`; do
                    if ! [[ $OCTET =~ ^[0-9]+$ ]]; then
                        echo "Parameter '$1' does not look like an IP Address (octet '$OCTET' is not numeric)."
                        exit 1
                    elif [[ $OCTET -lt 0 || $OCTET -gt 255 ]]; then
                        echo "Parameter '$1' does not look like an IP Address (octet '$OCTET' is not in range 0-255)."
                        exit 1
                    fi
                done
            fi
        fi
        if [[ -z $OPT_B ]]; then
            if [[ "$2" =~ ^[0-9]+$ ]] && [ "$2" -ge 1 -a "$2" -le 10000 ]; then
                echo "chosen variable: $2"
                exit 1
            else
                echo "variable $2 is not in range '1 - 10000'"
                exit 1
            fi
        fi
    done
    exit 0
It's because all of your logic depends on one of $OPT_[AB] being null. But even if you don't pass a -[ab] $OPTARG parameter, you're still setting them at the top of the script with OPT_[AB]=[AB]. So your logic chains never get past the root...

    if [[ -z $OPT_A ]]; then...

...statement.

Well... not all of your logic depends on that. You're also doing:

    shift $((OPTIND-1))
    while [ $# -ne 0 ]...

So if you passed script -a arg then getopts would set $OPT_A to arg and $OPTIND would come to 3. So you would shift 2 (all of your positionals), then immediately fail the test in while. So your case would set $OPT_A and the next thing that happens is exit 0. So I guess you'd never even get to check for an empty $OPT_A anyway.

And even that would be ok, since most of your logic is designed to test for failure - but your script only exits. You probably do set the $OPT_A var, but you don't do anything with it. You can't use that value after the script exits - not without some preset IPC, but there's none of that here. The script is called in a subshell and the values it sets are lost when it returns to the parent shell.

What's more, the optstring :a:b doesn't allow for an $OPTARG to -b. A leading colon in the optstring signifies quiet operation - it doesn't write to stderr if there's an issue with the options or their arguments. But a colon trailing an option char is what signifies that the option should expect an argument. Like:

    while getopts :a:b: FLAG

...that would indicate two options that expect arguments. It can be tricky though, because if you indicate that an option is supposed to take an argument and getopts finds it without one, it flags that as an error:

    sh -c 'getopts :a:b: opt -a; echo "$opt $OPTARG"'

...which prints...

    : a

In that case the option winds up in $OPTARG and the : winds up in $opt. It is clearer if we're less :quiet about it:

    sh -c 'getopts a:b: opt -a; echo "$opt $OPTARG"'
    No arg for -a option

So, you need to check for : colon and for ? - which is another type of error and which is conventionally rerouted to print some short --help sort of thing.

Personally, I would make sure $OPT_[AB] were both empty to start with, do some logic on setting them correctly, and, when through with the test loop, make my last test one for a null value. If they haven't any value at all, then it must be for some reason I haven't handled, and it is an error regardless. Here's a start at how I would go about working that test loop...

    param_err(){
        set '' "$@"
        : "${1:?Parameter '$OPTARG' does not look like an IP Address $2}"
    }
    test_oct(){
        oIFS=$IFS; unset "${IFS+o}oIFS"; IFS=.
        for oct in $1; do
            [ $oct -lt 256 ] || param_err "('$oct' too large)"
        done; unset IFS oct
        : "${oIFS+${IFS=$oIFS}}"
    }
    a_param()
        case ${1##*.*.*.*.*} in
        (a) OPT_A=A;;                    #some default - no arg provided
        (.*|*..*|*.) param_err '(empty octet)';;
        (*[!.0-9]*) param_err '(octet is not positive integer)';;
        (*.*.*.*) test_oct "$1"; OPT_A=$1;;
        (*?*) param_err '(too few octets)';;
        (*) param_err ${1:+"(too many octets)"} '(null param)';;
        esac

    unset OPT_A OPT_B
    while getopts :a:b:c:d: opt
    do  case ${opt#:}$OPTARG in
        a*) a_param "$OPTARG";;
        b*) b_param "$OPTARG";;          #tests as needed similar to a_param()
        ?*) help;;                       #param_err should call this too, but it just exits
        esac
    done
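To make the ':' and '?' checks described above concrete, here is a minimal, self-contained sketch (option names and the parse_opts name are illustrative, not the asker's script):

```shell
#!/bin/sh
# Silent-mode getopts: the leading colon in the optstring suppresses
# getopts' own diagnostics, so we report ':' (missing option-argument)
# and '?' (unknown option) ourselves.
parse_opts() {
    OPT_A= OPT_B= OPTIND=1
    while getopts :a:b: FLAG "$@"; do
        case $FLAG in
            a) OPT_A=$OPTARG ;;
            b) OPT_B=$OPTARG ;;
            :) echo "option -$OPTARG requires an argument" >&2; return 1 ;;
            \?) echo "unknown option -$OPTARG" >&2; return 1 ;;
        esac
    done
    return 0
}

parse_opts -a foo -b bar && echo "OPT_A=$OPT_A OPT_B=$OPT_B"
```

Because both variables start out empty, a final `[ -n "$OPT_A" ]` test after the loop can then catch the "no value at all" case the answer recommends checking for.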
How do I use inserted values using getopts
I'm looking for a way to process shell script arguments that is cleaner and more "self-documenting" than getopt/getopts. It would need to provide:

- Full support of long options with or without a value after '=' or ' ' (space).
- Proper handling of hyphenated option names (i.e. --ignore-case)
- Proper handling of quoted option values (i.e. --text "A text string")

I would like to eliminate the overhead of the big loop with embedded case statement that getopt/getopts requires and reduce option processing to something like:

    option=argumentparse "$@"
    [[ option == "" ]]           && helpShow
    [[ option =~ -h|--help ]]    && helpShow
    [[ option =~ -v|--version ]] && versionShow
    [[ option =~ -G|--GUI ]]     && GUI=$TRUE
    [[ option =~ --title ]]      && TITLE=${option["--title"]}

Here, an argumentparse() function resolves the various syntax possibilities into a consistent format, perhaps an associative array. There must be something coded out there somewhere. Any ideas? (updated and retitled)
Since this question has been viewed so much (for me at least) but no answers were submitted, passing on the solution adopted...

NOTE: Some functions, like the multi-interface output functions ifHelpShow() and uiShow(), are used but not included here, as their calls contain relevant information but their implementations do not.

    ###############################################################################
    # FUNCTIONS (bash 4.1.0)
    ###############################################################################

    function isOption () {
        # isOption "$@"
        # Return true (0) if argument has 1 or more leading hyphens.
        # Example:
        #   isOption "$@" && ...
        # Note:
        #   Cannot use ifHelpShow() here since cannot distinguish 'isOption --help'
        #   from 'isOption "$@"' where first argument in "$@" is '--help'
        # Revised:
        #   20140117 docsalvage
        #
        # support both short and long options
        [[ "${1:0:1}" == "-" ]] && return 0
        return 1
    }

    function optionArg () {
        ifHelpShow "$1" 'optionArg --option "$@"
            Echo argument to option if any. Within "$@", option and argument may be
            separated by space or "=". Quoted strings are preserved. If no argument,
            nothing echoed. Return true (0) if option is in argument list, whether an
            option-argument supplied or not. Return false (1) if option not in
            argument list. See also option().
            Examples:
              FILE=$(optionArg --file "$1")
              if $(optionArg -f "$@"); then ...
              optionArg --file "$@" && ...
            Revised:
              20140117 docsalvage' && return
        #
        # --option to find (without '=argument' if any)
        local FINDOPT="$1"; shift
        local OPTION=""
        local ARG=
        local o=
        local re="^$FINDOPT="
        #
        # echo "option start: FINDOPT=$FINDOPT, o=$o, OPTION=$OPTION, ARG=$ARG, @=$@" >&2
        #
        # let "$@" split commandline, respecting quoted strings
        for o in "$@"
        do
            # echo "FINDOPT=$FINDOPT, o=$o, OPTION=$OPTION, ARG=$ARG" >&2
            # echo " o=$o" >&2
            # echo "re=$re" >&2
            #
            # detect --option and handle --option=argument
            [[ $o =~ $re ]] && { OPTION=$FINDOPT; ARG="${o/$FINDOPT=/}"; break; }
            #
            # $OPTION will be non-null if --option was detected in last pass through loop
            [[ ! $OPTION ]] && [[ "$o" != $FINDOPT ]] && { continue; }              # is a positional arg (no previous --option)
            [[ ! $OPTION ]] && [[ "$o" == $FINDOPT ]] && { OPTION="$o"; continue; } # is the arg to last --option
            [[ $OPTION ]] &&   isOption "$o"         && { break; }                  # no more arguments
            [[ $OPTION ]] && ! isOption "$o"         && { ARG="$o"; break; }        # only allow 1 argument
        done
        #
        # echo "option final: FINDOPT=$FINDOPT, o=$o, OPTION=$OPTION, ARG=$ARG, @=$@" >&2
        #
        # use '-n' to remove any blank lines
        echo -n "$ARG"
        [[ "$OPTION" == "$FINDOPT" ]] && return 0
        return 1
    }

    ###############################################################################
    # MAIN (bash 4.1.0) (excerpt of relevant lines)
    ###############################################################################

    # options
    [[ "$@" == "" ]]          && { zimdialog --help ; exit 0; }
    [[ "$1" == "--help" ]]    && { zimdialog --help ; exit 0; }
    [[ "$1" == "--version" ]] && { uiShow "version $VERSION\n"; exit 0; }

    # options with arguments
    TITLE="$(optionArg --title "$@")"
    TIP="$(  optionArg --tip   "$@")"
    FILE="$( optionArg --file  "$@")"
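The "consistent format, perhaps an associative array" idea from the question can also be sketched directly. The following is a minimal illustration (bash 4+; the argumentparse name, greedy value rules, and test arguments are all made up for this sketch, not part of the posted solution):

```shell
#!/bin/bash
# Collect --opt, --opt=value and --opt value pairs into the associative
# array OPTS; anything that is not an option lands in POSITIONAL.
declare -A OPTS
POSITIONAL=()

argumentparse() {
    while (( $# )); do
        case $1 in
            --*=*) OPTS[${1%%=*}]=${1#*=} ;;       # --opt=value
            --*|-?)                                 # --opt [value] or -o [value]
                if [[ $# -gt 1 && ${2:0:1} != - ]]; then
                    OPTS[$1]=$2; shift              # next word is the value
                else
                    OPTS[$1]=''                     # bare flag
                fi ;;
            *) POSITIONAL+=( "$1" ) ;;              # positional argument
        esac
        shift
    done
}

argumentparse --title="A text string" -G --file file.txt
echo "title=${OPTS[--title]}, GUI given: ${OPTS[-G]+yes}"
```

Note the ambiguity this glosses over: without an option specification, a parser cannot know whether `-G file.txt` is a flag followed by a positional or an option with a value, which is exactly why getopt/getopts require the optstring.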
Simpler processing of shell script options
I am using the customary way of using getopts through a variable named arg. I can capture the option names as follows. Is it possible to detect the moment getopts reaches "--" so that I can issue a message?

    while getopts "$shortopts" arg; do
      echo "--> arg: $arg"
      case $arg in
        ("V") printf '%s\n' "Version"
              return
              ;;
        ("u") printf '%s\n' "Usage"
              return
              ;;
        ("h") printf '%s\n' "Help"
              return
              ;;
      esac
    done
"Is it possible to detect the moment getopts reaches "--" so that I can issue a message?"

You shouldn't need to. getopts implements the standard option processing, which means that it stops looking for options when it either sees an argument that's not an option, or if it sees the argument --, which explicitly terminates the list of options. (That first point is different from the GNU custom which looks for options on the whole command line.) There's no need for the program to care about meeting --.

That said, since getopts doesn't trash the list of positional parameters, you could peek in there to see if the last argument was --.

    #!/bin/bash
    while getopts a:bc opt; do
        echo "option $opt arg $OPTARG"
    done
    last=
    if [ "$OPTIND" -ge 2 ]; then
        shift "$((OPTIND - 2))"
        last=$1
        shift 1
    else
        shift "$((OPTIND - 1))"
    fi
    if [ "$last" = "--" ]; then
        echo "options were terminated by a double-dash (or last arg was an option-argument '--')"
    fi
    echo "remaining args: $*"

That would give e.g.

    $ bash opts.sh -a blah -- -b foo
    option a arg blah
    options were terminated by a double-dash (or last arg was an option-argument '--')
    remaining args: -b foo

but since it only looks at the last argument, it could be either the -- separator, or -- as an option-argument to some option. E.g. this is the false positive, the -- is not the separator here:

    $ bash opts.sh -a -- foo
    option a arg --
    options were terminated by a double-dash (or last arg was an option-argument '--')
    remaining args: foo

Of course you could also implement your own option processing, but it's a bit annoying to do since the shell makes it awkward to process substrings. (You need to recognize -abc as three different options, or as one, or as two, depending on if -a or -b take an option-argument.) In any case, unless you're doing something far more complex than usually needed, there shouldn't be any reason to look at --.

Even if you do something more complex, you might consider doing it outside getopts (and with another separator), similarly to how the expression in find is given, or how GNU parallel takes lists of arguments separated by :::, :::: etc.
Detecting getopts `--` (double dash) to issue a message
I am switching away from BSD to completely Linux. Script in Ubuntu 16.04:

    #!/bin/sh
    while (( "$#" )); do
      case "$1" in
        -i | --ignore-case)
          [ $# -ne 2 ] && echo "2 arguments i needed" && exit 1
          case_option=-i
          ;;
        -*)
          echo "Error: Unknown option: $1" >&2
          exit 1
          ;;
        *) # No more options
          break
          ;;
      esac
      shift
    done

    # -o, if not, then ...
    find $HOME ! -readable -prune -o \
      -type f -name "*.tex" -exec grep -l $case_option "$1" {} + | vim -R -

Mistake is in looping. sh ./script masi returns the same output as the expected output. Run sh ./script -i masi. Output: blank file. Expected output: list of results. STDOUT is:

    ./script: 2: ./script: 2: not found
    Vim: Reading from stdin....

Possible mistakes:

- while (( "$#" )) ...
- I cannot use the options at all for some reason.

Moving to getopts - motivation by terdon's answer. Tutorial and answer here:

    case_option=""
    while getopts "i:" opt; do
      case $opt in
        i | ignore_case)
          [[ $# -ne 2 ] && echo "2 arguments i needed" && exit 1
          case_option=-i
          ;;
        -*)
          echo "Error: Unknown option: $1" >&2
          exit 1
          ;;
        *) # No more options
          break
          ;;
      esac
    done

    find $HOME ! -readable -prune -o \
      -type f -name "*.tex" -exec grep -l $case_option "$1" {} + | vim -R -

where calling by ./script masi or ./script -i masi. How can you do cases in a while loop?
Here are two examples of options processing, first with the shell built-in getopts and then with getopt from util-linux.

getopts doesn't support --long options, only short options. getopt supports both.

If you want to use getopt, use ONLY the version from the util-linux package. DO NOT use any other version of getopt; they are all broken and unsafe to use, util-linux's getopt is the only one that works. Fortunately, on Linux, the util-linux version is the only version you're likely to have unless you go out of your way to install a broken version.

getopts is more portable (works in most or all Bourne-shell descendants) and does more for you automatically (e.g. less setup is required and you don't need to run shift or shift 2 for every option, depending on whether the option takes an argument or not) but is less capable (it doesn't support long options).

Anyway, in addition to processing the -i (--ignore-case) option, I've added a -h (--help) option and an example of an option that requires an argument, -x (--example). It does nothing useful, it's just there to show you how to do it.

With getopts, the code is:

    #! /bin/bash

    usage() {
      # a function that prints an optional error message and some help.
      # and then exits with exit code 1
      [ -n "$*" ] && printf "%s\n" "$*" > /dev/stderr

      cat <<__EOF__
    Usage: $0 [-h] [ -i ] [ -x example_data ]

    -i    Ignore case
    -x    The example option, requires an argument.
    -h    This help message.

    Detailed help message here
    __EOF__
      exit 1
    }

    case_option=''
    case_example=''

    while getopts "hix:" opt; do
      case "$opt" in
        h) usage ;;
        i) case_option='-i' ;;
        x) case_example="$OPTARG" ;;
        *) usage ;;
      esac
    done
    shift $((OPTIND-1))

    find "$HOME" ! -readable -prune -o -type f -name "*.tex" \
      -exec grep -l ${case_option:+"$case_option"} "$1" {} + | vim -R -

With getopt from util-linux:

    #! /bin/bash

    usage() {
      # a function that prints an optional error message and some help.
      # and then exits with exit code 1
      [ -n "$*" ] && printf "%s\n" "$*" > /dev/stderr

      cat <<__EOF__
    Usage: $0 [ -h ] [ -i ] [ -x example_data ]
           $0 [ --help ] [ --ignore-case ] [ --example example_data ]

    -i, --ignore-case    Ignore case
    -x, --example        The example option, requires an argument.
    -h, --help           This help message

    Detailed help message here
    __EOF__
      exit 1
    }

    case_option=''
    case_example=''

    # getopt is only safe if GETOPT_COMPATIBLE is not set.
    unset GETOPT_COMPATIBLE

    # POSIXLY_CORRECT disables getopt parameter shuffling, so nuke it.
    # parameter shuffling moves all non-option args to the end, after
    # all the option args.  e.g. args like "-x -y file1 file2 file3 -o optval"
    # become "-x -y -o optval -- file1 file2 file3"
    unset POSIXLY_CORRECT

    OPTS_SHORT='hix:'
    OPTS_LONG='help,ignore-case,example:'

    # check options and shuffle them
    TEMP=$(getopt -o "$OPTS_SHORT" --long "$OPTS_LONG" -n "$0" -- "$@")
    if [ $? != 0 ] ; then usage ; fi

    # assign the re-ordered options & args to this shell instance
    eval set -- "$TEMP"

    while true ; do
      case "$1" in
        -i|--ign*) case_option='-i' ; shift ;;
        -x|--exa*) case_example="$2" ; shift 2 ;;
        -h|--hel*) usage ;;
        --) shift ; break ;;
        *) usage ;;
      esac
    done

    find "$HOME" ! -readable -prune -o -type f -name "*.tex" \
      -exec grep -l ${case_option:+"$case_option"} "$1" {} + | vim -R -
Why this while-case does not work in Ubuntu?
I am writing a simple bash script. My script installs a PPA. The problem is I can't add two arguments. I want to write something simple like this:

    ./ppa.sh -i ppa:chris-lea/node.js nodejs

I tried this, but it doesn't read the second argument 'nodejs'...

    #! /bin/sh
    # Install/add PPA or Program

    while getopts ":i:e:" option; do
      case $option in
        i)
          echo received -i with $OPTARG
          ang='sudo apt-add-repository'
          ;;
        e)
          echo received -e with $OPTARG
          ang='other line'
          ;;
        :)
          echo "option -$OPTARG needs an argument"
          exit
          ;;
        *)
          echo "invalid option -$OPTARG"
          exit
          ;;
      esac
      # done
      if [ "`echo $OPTARG | cut -d ':' -f1`" == "ppa" ]; then
        echo 'is a ppa'
        $ang $OPTARG
        sleep 2s && sudo apt-get update; clear
        sudo apt-get -y install $OPTARG2
      fi
    done
You should put the two arguments in quotes or double quotes:

    % ./ppa.sh -i 'ppa:chris-lea/node.js nodejs'
    received -i with ppa:chris-lea/node.js nodejs
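An alternative sketch of the same idea, without quoting tricks: keep the PPA as the -i option-argument and take the package name as a positional parameter after `shift $((OPTIND - 1))`. The ppa_install name is illustrative, and the echo lines stand in for the real apt-add-repository/apt-get calls:

```shell
#!/bin/sh
# Parse "-i <repo> <package>": the option-argument goes to $repo,
# the first remaining positional parameter is the package name.
ppa_install() {
    repo= package= OPTIND=1
    while getopts ":i:" option "$@"; do
        case $option in
            i) repo=$OPTARG ;;
            :) echo "option -$OPTARG needs an argument" >&2; return 1 ;;
            *) echo "invalid option -$OPTARG" >&2; return 1 ;;
        esac
    done
    shift $((OPTIND - 1))
    package=$1
    echo "would add repository: $repo"
    echo "would install package: $package"
}

ppa_install -i ppa:chris-lea/node.js nodejs
```

This matches the invocation the asker wanted (`./ppa.sh -i ppa:chris-lea/node.js nodejs`) without requiring the caller to quote both words as one argument.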
How to input two arguments with getopts? [duplicate]
I'm trying to figure out a way to deny the usage of more than one getopts arg in a certain situation. Say we have something like this:

    while getopts "a:b:c:def" variable; do
      case $variable in
        a)a=$OPTARG
        b)b=$OPTARG
        c)c=$OPTARG
        d)MODE=SMTH
        e)MODE=SMTH2
        f)MODE=SMTH3
      esac
    done

What I'm trying to do is to deny the use of more than one MODE arg (def) and display a message to tell that to the user. Something in the line of:

    ./script.sh -a ajhsd -b kjdhas -c daskjdha -d -e -f

performs a check for use of more than one MODE arg (def), and if more than one is used, displays an error message. I tried with a simple input-check if statement but failed miserably. It always passes through all three and obtains the last passed argument parameter without even going inside the multiple-args check. Kind of weird. It should've been easy to do this. Should have...
You need to initialize MODE to some value other than SMTH, SMTH2 and SMTH3. Then, check if MODE is at the initial value. If not, throw an error message and then exit. You have to exit after the error, otherwise the script will keep running. The modified version of your script below should get you started.

    MODE=0
    EMSG="More than one of -d, -e, -f has been specified"

    while getopts "a:b:c:def" variable; do
      case $variable in
        a) a=$OPTARG ;;
        b) b=$OPTARG ;;
        c) c=$OPTARG ;;
        d) if [ $MODE = 0 ] ; then MODE=SMTH  ; else echo $EMSG ; exit 1 ; fi ;;
        e) if [ $MODE = 0 ] ; then MODE=SMTH2 ; else echo $EMSG ; exit 1 ; fi ;;
        f) if [ $MODE = 0 ] ; then MODE=SMTH3 ; else echo $EMSG ; exit 1 ; fi ;;
      esac
    done
deny use of multiple getopts arguments
I have read the getopts man page and am still not sure about this use case. getopts is not detecting any available options the second time a function is called in the same script. As you can see from my debug echo outputs, all of the positional params $@ are present for both function calls. In the second create_db function call, the getopts while loop is never entered, causing my variables TYPE and ENVIRON to not be set. Any thoughts?

FUNCTION DEFINITION (create_db)

    function create_db {
      local TYPE SIZE ENVIRON
      TYPE=''
      SIZE=''
      ENVIRON=''

      print_usage() {
        echo -e $"\nUsage: create_db -t {mysql|redis|rabbitmq|sftp|elasticsearch} -e <environment_name> -s <size_in_GB>"
        echo "Required args: -t, -e"
        echo "Optional args: -s"
      }

      echo "@: $@"
      echo "0: $0"
      echo "1: $1"
      echo "2: $2"
      echo "3: $3"
      echo "4: $4"
      echo "5: $5"
      echo "6: $6"

      # parse flags
      while getopts 't:s:e:h' flag; do
        echo "flag: $flag"
        echo "opt: ${OPTARG}"
        case "${flag}" in
          t) TYPE="${OPTARG}" ;;
          s) SIZE="${OPTARG}" ;;
          e) ENVIRON="${OPTARG}" ;;
          h) print_usage
             exit 0 ;;
          *) print_usage >&2
             exit 1 ;;
        esac
      done
      shift "$(( OPTIND - 1 ))"

      echo "TYPE: ${TYPE}"
      echo "ENVIRON: ${ENVIRON}"

      ... DO WORK ...
    }

CALLED SCRIPT (environment-setup-from-scratch.sh)

    #!/bin/bash

    # import functions from utils file
    . "${0%/*}/environment-setup-utils.sh"

    ENVIRONMENT="${1}"

    create_db -t "elasticsearch" -e "${ENVIRONMENT}"
    create_db -t "mysql" -e "${ENVIRONMENT}"
    create_db -t "redis" -e "${ENVIRONMENT}"

TERMINAL OUTPUT

    $ ./environment-setup-from-scratch.sh sandbox
    @: -t elasticsearch -e sandbox
    0: ./environment-setup-from-scratch.sh
    1: -t
    2: elasticsearch
    3: -e
    4: sandbox
    5:
    6:
    flag: t
    opt: elasticsearch
    flag: e
    opt: sandbox
    TYPE: elasticsearch
    ENVIRON: sandbox
    @: -t mysql -e sandbox
    0: ./environment-setup-from-scratch.sh
    1: -t
    2: mysql
    3: -e
    4: sandbox
    5:
    6:
    TYPE:
    ENVIRON:
Each time you call getopts, it uses $OPTIND:

    If the application sets OPTIND to the value 1, a new set of parameters
    can be used: either the current positional parameters or new arg values.
    Any other attempt to invoke getopts multiple times in a single shell
    execution environment with parameters (positional parameters or arg
    operands) that are not the same in all invocations, or with an OPTIND
    value modified to be a value other than 1, produces unspecified results.

(my emphasis). You need to reset OPTIND before you call getopts each time, perhaps here:

    # ...
    # parse flags
    OPTIND=1
    while getopts 't:s:e:h' flag; do
    # ...
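A minimal reproduction of the fix, reduced to a single option (the parse name and type_ variable are illustrative): resetting OPTIND=1 at the top of the function lets getopts parse a fresh argument list on every call.

```shell
#!/bin/sh
# Without the OPTIND=1 line, the second call would start where the first
# left off and parse nothing, just as in the question's output.
parse() {
    OPTIND=1
    type_=''
    while getopts 't:' flag "$@"; do
        case $flag in
            t) type_=$OPTARG ;;
        esac
    done
    echo "TYPE: $type_"
}

parse -t elasticsearch
parse -t mysql
```

In bash you can also write `local OPTIND` inside the function, which scopes the reset to the function instead of clobbering the caller's OPTIND.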
getopts in function that is called more than once in a script, getopts doesn't detect any opts after 1st function call [duplicate]
When using getopts with a case clause, is a *) pattern subclause as the last pattern subclause equivalent to the union of \?) and :) pattern subclauses as the last two pattern subclauses? Specifically,

    while getopts "<optionString>" opt; do
      case $opt in
        a) a="$OPTARG" ;;
        b) b="$OPTARG" ;;
        ...
        ;;
        \?) printf "illegal option: -%s\n" "$OPTARG" >&2
            exit 1
            ;;
        :) printf "missing argument for -%s\n" "$OPTARG" >&2
           exit 1
           ;;
      esac
    done

and

    while getopts "<optionString>" opt; do
      case $opt in
        a) a="$OPTARG" ;;
        b) b="$OPTARG" ;;
        ...
        ;;
        *) printf "illegal option: -%s, or missing argument for -%s\n" "$OPTARG" "$OPTARG" >&2
           exit 1
           ;;
      esac
    done

Thanks.
You only really need to check for : and ? with the getopts in bash if you use silent error reporting (when the first character of the optstring is a colon). When getopts is not used in that way, it will produce its own diagnostic messages for invalid options and for missing option arguments (and these are usually quite adequate). In fact, it will never place : in the variable unless it's silenced; ? is used for errors in either mode, but only in silent mode does $OPTARG then hold the offending option character.

Using * in a case statement would be a way to catch both of these, but if getopts is silenced, you would not know which error was triggered and would just be able to say something on the lines of "an error occurred while parsing the command line options" to the user.
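The distinction can be demonstrated directly. In silent mode (leading colon in the optstring), the variable receives ':' for a missing option-argument and '?' for an unknown option, so the two errors can be reported separately (the probe name and optstring here are illustrative):

```shell
#!/bin/bash
# Silent-mode getopts demo: -a requires an argument, -b is a bare flag.
probe() {
    OPTIND=1
    while getopts ':a:b' opt "$@"; do
        case $opt in
            :)  echo "missing argument for -$OPTARG" ;;
            \?) echo "illegal option: -$OPTARG" ;;
            *)  echo "got -$opt" ;;
        esac
    done
}

probe -b   # prints: got -b
probe -a   # prints: missing argument for -a
probe -z   # prints: illegal option: -z
```

With a *) catch-all instead, the last two calls would fall into the same branch and the message could not distinguish them.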
When using `getopts` with `case`: `*)` as the last pattern subclause, or `\?)` and `:)` as the last two pattern subclauses?
I'm learning Bash, and I've written a basic function:

    wsgc () {
      # Wipe the global variable value for `getopts`.
      OPTIND=1

      echo "git add -A"

      while getopts m:p option; do
        case "${option}" in
          m)
            COMMIT_MESSAGE=$OPTARG
            if [ "$COMMIT_MESSAGE" ]; then
              echo "git commit -m \"$COMMIT_MESSAGE.\""
            else
              echo "A commit message is required."
              exit
            fi
            ;;
          p)
            echo "git push"
            exit
            ;;
          \?)
            echo "Invalid parameter."
            exit
            ;;
        esac
      done
    }

However, I'm struggling with a couple of things.

The if in m) isn't working, in that if I omit the argument, Bash intercedes and kicks me out of the session:

    git add -A
    -bash: option requires an argument -- m
    Invalid parameter.
    logout
    Saving session...
    ...copying shared history...
    ...saving history...truncating history files...
    ...completed.
    [Process completed]

After running wsgc -m "Yo!" -p, I get kicked out of the session:

    git add -A
    git commit -m "Yo."
    git push
    logout
    Saving session...
    ...copying shared history...
    ...saving history...truncating history files...
    ...completed.
    [Process completed]

Any advice would be much appreciated.
"the if in m) isn't working, in that if I omit the argument, Bash intercedes and kicks me out of the session"

You specified getopts m:p option. The : after the m means that the option needs an argument. If you don't provide it, that's an error, and getopts itself reports it before your if is ever reached.

"After running wsgc -m "Yo!" -p, I get kicked out of the session."

What do you mean by "you get kicked out of the session"? Does the shell vanish? Then that is because you sourced the script instead of executing it: the function runs in your interactive shell, so its exit terminates that shell.

That being said, I would highly recommend using getopt instead of getopts.
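One way to address both symptoms, sketched below (staying with getopts rather than the recommended getopt): use `return` instead of `exit` so the function cannot kill the shell it is defined in, and silence getopts (leading colon) so the missing -m argument lands in our own ':' branch instead of triggering bash's diagnostic.

```shell
#!/bin/bash
# Revised wsgc: return, not exit; silent-mode optstring ":m:p".
wsgc () {
    local OPTIND=1 COMMIT_MESSAGE
    echo "git add -A"
    while getopts :m:p option; do
        case $option in
            m) COMMIT_MESSAGE=$OPTARG
               echo "git commit -m \"$COMMIT_MESSAGE.\"" ;;
            p) echo "git push" ;;
            :) echo "A commit message is required."; return 1 ;;
            \?) echo "Invalid parameter."; return 1 ;;
        esac
    done
    return 0
}

wsgc -m "Yo!" -p
```

Note the revised version no longer needs the `if [ "$COMMIT_MESSAGE" ]` test at all: getopts guarantees the m) branch only runs when an argument was actually supplied.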
Bash: Help honing a custom function
I'm looking for a way to do option parsing in a Bash script (allowing for both short and long arguments as getopt does) that stops parsing at the first unrecognized argument, places a -- before that first unrecognized argument, and then copies the remaining arguments to the output string. For example, here's the behavior that I want:

    % OPTIONS=("-u" "name1" "--username=name2" "-x" "a" -u "b" "c")
    % getopt -o u: -l username: -n programname -- "${OPTIONS[@]}"
     -u 'name1' --username 'name2' -- -x 'a' -u 'b' 'c'
    %

The utility getopt does not work this way, and instead emits the following:

    programname: invalid option -- 'x'
     -u 'name1' --username 'name2' -u 'b' -- 'a' 'c'

Note that I do not want arguments that follow an unrecognized option to be reordered as if they were recognized, as is demonstrated above with the second -u option. I'm hoping that someone will have a solution that will give the results I demonstrate in the first code block above. Any ideas?

Environment: bash 4.2.46(1), CentOS 7.2 @ 3.10.0-327.36.1.el7.x86_64.

Requirements: CentOS 7.2 Minimal with no additional software to be installed, with all code written in a Bash script. The options passed to the options parser are not required to include -- in them (that is, the termination of parsing should be automatic).
The solution to this problem requires something other than getopt, because getopt rearranges options that it finds that do match the option specification and terminates on unrecognized options. The Bash built-in getopts comes to the rescue, but needs to be able to handle long options.

In a post by Arvid Requate and TomRoche, there is a "trick" that allows getopts to handle long options: use - as an option specification and use a leading colon in the options specification to silence error reporting. Their solution, however, would require that code be duplicated for handling short and long options.

Here is a solution that I've tested that meets all of my requirements and also does not duplicate handling for short and long options. For clarity and completeness, I've changed username to set and added toggle to demonstrate options not taking values.

parse.sh

    #!/bin/bash

    Options=("$@")

    while getopts ":s:-:" OptChar "${Options[@]}"; do
      case "$OptChar" in
        -)
          case "$OPTARG" in
            set|set=*)
              if [[ $OPTARG =~ ^set= ]] ; then
                Value="${Options[$OPTIND-2]#*=}"
              else
                Value="${Options[$OPTIND-1]}"
                ((OPTIND++))
              fi
              echo "Parsed: --$OPTARG, value: '$Value'"
              ;;
            toggle) echo "Parsed: --$OPTARG";;
            *) ((OPTIND--)); break;;
          esac
          ;;
        # Redirect short arguments to long arguments
        s) ((OPTIND-=2)); Options[OPTIND-1]="--set";;
        t) ((OPTIND--)); Options[OPTIND-1]="--toggle";;
        *) ((OPTIND--)); break;;
      esac
    done

    Options=( "${Options[@]:$OPTIND-1}" )
    echo "REMAINING Options: ${Options[@]}"

and here is my test code:

    % ./parse.sh --set=a -s b --set= --set c --unknown -s d --set e
    Parsed: --set=a, value: 'a'
    Parsed: --set, value: 'b'
    Parsed: --set=, value: ''
    Parsed: --set, value: 'c'
    REMAINING Options: --unknown -s d --set e
    %
    % ./parse.sh -s a -x -s b
    Parsed: --set, value: 'a'
    REMAINING Options: -x -s b
    %
How can options be parsed in a Bash script, leaving unrecognized options after the "--"?
I would be glad if someone clarified the need to use shift in this simple parser code:

    while getopts ":hp:" option
    do
      case "${option}" in
        p) some_parameter=${OPTARG} ;;
        h) print_usage_and_exit 0 ;;
        *) print_usage_and_exit 1 ;;
      esac
    done
    shift $(( OPTIND - 1 ))

For instance, it is unclear to me:

- Why is there seemingly no need for a shift inside the loop? Does getopts move the arguments along itself, or how does it work?
- Why is there a need for a shift after getopts? I don't get why getopts would not do that by itself at the end.
You don’t need to shift (and shouldn’t) inside the loop because getopts tracks which positional parameter it’s processing by updating the OPTIND variable. You don’t need to shift after the loop: you can use OPTIND to determine which positional parameters to handle yourself. Using shift however is the simplest way of dealing with arguments which have been processed by getopts, assuming you don’t need to post-process them yourself. Having getopts not shift itself has a couple of benefits: you can revisit arguments yourself if necessary, and you can reproduce the original command line. The latter is useful for example in error messages, or if you need to run another command with the same arguments (I know I’ve used that in the past).
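A small demo makes the mechanics visible: getopts leaves "$@" untouched and only advances OPTIND, and the final shift is what discards the already-parsed options (the demo function name is illustrative):

```shell
#!/bin/bash
# getopts consumes nothing; it just moves OPTIND forward.
demo() {
    local OPTIND=1 some_parameter=
    while getopts ":hp:" option; do
        case $option in
            p) some_parameter=$OPTARG ;;
        esac
    done
    echo "OPTIND after loop: $OPTIND"
    echo "original args still intact: $*"
    shift $(( OPTIND - 1 ))
    echo "after shift: $*"
}

demo -p value file1 file2
```

Here -p and its argument occupy positions 1 and 2, so the loop leaves OPTIND at 3 and `shift $((OPTIND - 1))` drops exactly those two words, leaving file1 and file2.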
shift in getopts loop - clarification needed
I'd like to have named parameters for my functions. I seem to only be able to use getopts for the main function called from the command line. If I have multiple functions within one file, is there any way I can get the same sort of functionality (named parameters) when calling other functions? E.g. the following does not seem to work:

    $ cat getops_example.sh
    function usage {
      echo 'here'
    }

    function my_test {
      while getopts ":s:p:" o; do
        case "${o}" in
          s)
            s=${OPTARG}
            ((s == 45 || s == 90)) || usage
            ;;
          p)
            p=${OPTARG}
            ;;
          *)
            usage
            ;;
        esac
      done
    }

    my_test 11 20

    echo "s was $s"
    echo "p was $p"
    $
    $ ./getops_example.sh -s 10 -p 20
    s was
    p was
Your program does not work because you pass parameters to your program, but inside the program you call your function my_test without the option flags -s and -p. Depending on what you actually want, use either

    my_test -s 11 -p 20

or pass the script's own arguments through and call your function as

    my_test "$@"
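Beyond passing the flags, a function that is called several times should also keep its own OPTIND, otherwise the second call resumes where the first stopped. A minimal sketch of per-function named parameters (variable names illustrative):

```shell
#!/bin/bash
# `local OPTIND=1` makes each call parse its own arguments from the start,
# independent of the script's arguments and of earlier calls.
my_test() {
    local OPTIND=1 s= p=
    while getopts ":s:p:" o; do
        case $o in
            s) s=$OPTARG ;;
            p) p=$OPTARG ;;
        esac
    done
    echo "s=$s p=$p"
}

my_test -s 45 -p 20
my_test -p 7     # second call starts parsing afresh
```

Note that if you want the values visible after the function returns, assign them to variables not declared local, as done with s and p in the original script.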
How to have getopts functionality when just calling another function within the file
I have to check if a particular argument, let's say 'java8', is present in my shell arguments to the script, and if it is present, remove it. I also want it to be stored in some other variable, but want it removed from my $*. One option I tried was checking if the arg is present using getopts, but I don't know how to remove the argument. Please note: the position of the arg to check, 'java8', is not known; it can be at any place.

    while getopts "f:" flag
    do
      case $flag in
        f) echo $OPTARG ;;
      esac
    done
    echo $*

So basically, when I invoke

    ./test.sh arg1 arg2 -f java8 arg3

after execution of the getopts block the remaining args will be arg1, arg2, arg3.
Maybe something like this? I'm assuming you want to remove -f java8...

    #!/bin/bash

    while (( $# )); do
      if [[ $1 = "-f" ]] && [[ $2 = "java8" ]]; then
        shift 2
        continue
      fi
      args+=( "$1" )
      shift
    done

    echo "${args[*]}"

Example usage:

    $ ./argtest.sh one two three
    one two three
    $ ./argtest.sh one two -f java8 three four
    one two three four
    $ ./argtest.sh -f java8 -f foo -f bar
    -f foo -f bar
identify if present and remove specific argument from shell args
I am writing a shell script and am new to getopts for parameter parsing. What I have are 1 optional and 2 mandatory parameters, and what I want to do is ensure that only one mandatory parameter is passed. Right now a basic validation exists, but it's practically useless long term. I looked at other examples using if conditions, but that fails as well. Using flags inside the cases and calling functions by checking individual statuses also printed all. What I want is to ensure just 1 mandatory argument is used and an error thrown when more than 1 is passed. Right now, all the options are working in the order specified. Here is the code:

    #!/bin/bash
    USAGE="Usage : $0 [-r N] (-a|-b)"

    # Prompts when there are no arguments passed
    if [ "$#" -le 0 ]; then
      echo $USAGE
      exit 2
    fi

    # the option parsing
    while getopts ':n:ab' option
    do
      case $option in
        r) numlim=$OPTARG ;;
        a) task1 ;;
        b) task2 ;;
        *) echo "Unknown Param"
           echo $USAGE ;;
      esac
    done

What I want is a hint as to how I can go about designing the code in the specified way.
Set a variable deciding which task to run in the getopts loop, then manually check that only one task is chosen. You could do that in various ways, e.g.: #!/bin/sh task= set_task() { if [ -n "$task" ]; then echo "only one of -a and -b may be used" >&2 exit 1 fi task=$1 } while getopts ':r:ab' option; do case $option in r) numlim=$OPTARG;; a) set_task a;; b) set_task b;; *) echo "unknown option" >&2 exit 1;; esac done if [ "$task" = a ]; then echo do task a... elif [ "$task" = b ]; then echo do task b... else echo "invalid or unspecified task" >&2 exit 1 fi
Ensuring only 1 mandatory parameter is passed to script
1,507,439,499,000
I'm trying to get it to call a function. Here is my code #!/bin/bash while getopts ":a:b:" opt; do case $opt in a) my_function "%e" ;; b) my_function "%s" ;; /?) echo "Invalid option: -$OPTARG" ;; esac done my_function() { option=$1 //do something here } When I call: ./myscript.sh -a sshd This would display ./myscript.sh: line 5: my_function: command not found What should I do to fix it?
For a shell script to be able to call a function, that function has to have been defined before calling it. This is not the case in your code. To fix it, move the function to above the command line parsing loop. Also, I would make the last case test be *) to catch any unhandled option (/? would never match a single option character). And the getopts utility would already output an error message, so you don't need to repeat that ($OPTARG may additionally not be what you use here, but $opt).
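Applying that advice, a minimal corrected sketch looks like this. The body of `my_function` is a stand-in (the original's is elided), and the parsing is wrapped in a `parse` function purely so it can be exercised with different arguments:

```shell
#!/bin/bash
# Define the function first, then run the option parsing that calls it.
my_function() {
  option=$1
  echo "format: $option"    # stand-in for "do something here"
}

parse() {
  OPTIND=1                  # reset so parse can be called repeatedly
  while getopts ":a:b:" opt; do
    case $opt in
      a) my_function "%e" ;;
      b) my_function "%s" ;;
      *) echo "Invalid option: -$OPTARG" >&2 ;;
    esac
  done
}

parse -a sshd
```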
Trying to get getopts to call a function [duplicate]
1,507,439,499,000
I want to escape the first string SOMETEXT in the getopts args. But I'm only able to do it in the first example. How can I make it work on the second example? while getopts p: opt do case $opt in p) result=$OPTARG;; esac done echo "The result is $result " Example 1: run_test.ksh -p3 SOMETEXT The result is 3 Example 2: run_test.ksh SOMETEXT -p3 ./run_test.ksh: line 10: result: parameter not set
This is a consequence of using getopts. Parameters and their arguments must come before any other text. If you know that the first word is SOMETEXT you could strip it from the argument list that getopts processes: if [[ 'SOMETEXT' == "$1" ]] then echo "Found SOMETEXT at the beginning of the line" shift fi while getopts p: opt do case $opt in p) result=$OPTARG;; esac done echo "The result is $result "
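Wrapped in a function for testing, the same strip-then-parse idea looks like this (the `parse_opts` function and `result` handling are my own framing of the answer's snippet):

```shell
#!/bin/bash
# parse_opts: drop a literal leading SOMETEXT, then let getopts run
# over whatever is left.
parse_opts() {
  OPTIND=1
  result=
  if [ "${1-}" = "SOMETEXT" ]; then
    shift                  # remove the known non-option word
  fi
  while getopts p: opt; do
    case $opt in
      p) result=$OPTARG ;;
    esac
  done
}

parse_opts SOMETEXT -p3
echo "The result is $result"
```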
parsing getopts
1,507,439,499,000
I want to add a line of code that tells the user that not enough arguments were given (maybe an error message somewhere, but I am not sure where?) blastfile= comparefile= referencegenome= referenceCDS= help=''' USAGE: sh lincRNA_pipeline.sh -c </path/to/cuffcompare_output file> -g </path/to/reference genome file> -r </path/to/reference CDS file> -b </path/to/RNA file> ''' while getopts ":b:c:g:hr:" opt; do case $opt in b) blastfile=$OPTARG ;; c) comparefile=$OPTARG ;; h) printf "$help" exit 1 ;; g) referencegenome=$OPTARG ;; r) referenceCDS=$OPTARG ;; \?) echo "Invalid option: -$OPTARG" >&2 exit 1 ;; :) echo "Option -$OPTARG requires an argument." >&2 exit 1 ;; esac done
One approach would be to count the options as getopts parses them. Then, you can exit if less than a given number were passed: #!/usr/bin/env bash blastfile= comparefile= referencegenome= referenceCDS= help=''' USAGE: sh lincRNA_pipeline.sh -c </path/to/cuffcompare_output file> -g </path/to/reference genome file> -r </path/to/reference CDS file> -b </path/to/RNA file> ''' while getopts ":b:c:g:hr:" opt; do ## Count the opts let optnum++ case $opt in b) blastfile=$OPTARG echo "$blastfile" ;; c) comparefile=$OPTARG ;; h) printf "$help" exit 1 ;; g) referencegenome=$OPTARG ;; r) referenceCDS=$OPTARG ;; \?) echo "Invalid option: -$OPTARG" >&2 exit 1 ;; :) echo "Option -$OPTARG requires an argument." >&2 exit 1 ;; esac done [[ $optnum -lt 3 ]] && { echo "At least 3 parameters must be given" >&2; exit 1; }
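Counting options breaks down if the same flag is repeated. An alternative, shown here as my own sketch rather than anything from the answer, is to verify after the loop that each required variable actually received a value, using bash's `${!name}` indirection:

```shell
#!/bin/bash
# require_set NAME...: report every named variable that is still
# empty or unset; return non-zero if any are missing.
require_set() {
  local name missing=0
  for name in "$@"; do
    if [ -z "${!name}" ]; then
      echo "Missing required option for: $name" >&2
      missing=1
    fi
  done
  return "$missing"
}

blastfile=RNA.fa comparefile= referencegenome=ref.fa
if ! require_set blastfile comparefile referencegenome; then
  echo "usage: script -b ... -c ... -g ..." >&2
fi
```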
How can I detect that not enough options were passed with getopts
1,507,439,499,000
I have the function in port.sh as a standalone script and I'm wondering if it is possible to put this function in the same script where getopts is, pass the value of OPT_B into the function, and get its output? OPT_B=B while getopts :b: FLAG; do case $FLAG in b) #set option "b" OPT_B=$OPTARG ;; esac done shift $((OPTIND-1)) !!-->> port $1 <<--!! -> OPT_B=$(port $1) ?? function port() { if [ "$1" = 'B' ]; then set $1=8000 echo "declared value: $1" elif [[ "$1" =~ ^[0-9]+$ ]] && [ "$1" -ge 1 -a "$1" -le 10000 ]; then echo "chosen value: $1" else echo "chosen value $1 is not in '1 - 10000'" fi return 0; }
Don't use function port() - it doesn't actually make any sense. When declaring a bash or ksh function with the function command you don't use the () but the shell accepts it as a forgivable syntax oops and acts like you didn't use function at all. So don't. port() case ${1:--} in (B) OPT_B=8000;; (*[!0-9]*) ! printf 'chosen value %s not in %s\n' \ "${1:-''}" "'1 - 10000'" ;; (*) [ "$(( $1>0 && $1<10001 ))" -ne 0 ] && echo "chosen value '$1'" || port "'$1'" ;; esac That is a POSIXly correct way to write your function (except that the above returns correctly in the event of an error). If the above were in a shell script other than $0 and I wanted to call that function anyway, I would probably do: eval "$(sed '/^port()/,$!d;/esac/q' /path/to/script_containing_port.sh)" port B #or whatever ...if I could be sure that the first occurrence of ^port() in that script definitely signified the beginning of the function I wanted to declare. Else, if the function were in a script all its own, I would do: . /path/to/port.fn.sh; port B Last, you probably shouldn't name your script files something.sh unless they really are sh scripts - which is to say, if you write a bash script name it something.bash. It doesn't make sense otherwise.
getopts passing value of declared parameter to function
1,507,439,499,000
I am trying to make a script that has two switches, -h and -d, with -d having a mandatory number argument. After it there will be an undetermined number of file paths. So far, I have this, but the code seems to not recognize the invalid switch -r (can be any name) and also does not work when I do not input any switches: while getopts ":hd:" opt; do case $opt in h) echo $usage exit 0 ;; d) shift 2 if [ "$OPTARG" -eq "$OPTARG" ] ; then # if the next argument is a number depth=$OPTARG fi ;; \?) shift 1 ;; :) shift 1 ;; esac done echo $1 when I type ./pripravne1.sh -d /home/OS/test_pz/test2 I get ./pripravne1.sh: [: /home/OS/test_pz/test2: integer expression expected when I type ./pripravne1.sh -r /home/OS/test_pz/test2 I get only an empty string.
[ "$OPTARG" -eq "$OPTARG" ] ... is not the right way to check if $OPTARG is numeric -- it may print a nasty inscrutable error to the user if that's not the case, or it may just return true in all cases (in ksh), or also return true for an empty $OPTARG (in zsh). Also, an option taking an argument may be given as either -d12 or -d 12, so a blind shift 2 won't cut it. And doing a shift inside the loop may badly interact with getopts, which is itself using the live argument list. Taking that into account, this is what I propose: die(){ echo >&2 "$@"; exit 1; } usage(){ echo >&2 "usage: $0 [-h] [-d num] files..."; exit 0; } depth=0 while getopts :hd: opt; do case $opt in h) usage ;; d) case $OPTARG in ''|*[!-0-9]*|-|*?-*) die "invalid number $OPTARG" ;; *) depth=$OPTARG ;; esac ;; :) die "argument needed to -$OPTARG" ;; *) die "invalid switch -$OPTARG" ;; esac done shift "$((OPTIND - 1))" echo depth="$depth" echo files="$@"
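The validation pattern from that case statement can be pulled out into a standalone helper to see what it accepts and rejects (the `is_number` name is mine; the pattern itself is the answer's):

```shell
#!/bin/sh
# is_number: succeed only for strings the answer's pattern treats as
# valid (optionally negative) integers.
is_number() {
  case $1 in
    ''|*[!-0-9]*|-|*?-*) return 1 ;;  # empty, non-digit char, lone or misplaced dash
    *) return 0 ;;
  esac
}

for v in 42 -7 3.14 2-3 '' -; do
  if is_number "$v"; then echo "valid:   $v"; else echo "invalid: $v"; fi
done
```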
GETOPTS parse empty and nonempty args
1,507,439,499,000
Bash manual says getopts optstring name [args] When the end of options is encountered, getopts exits with a return value greater than zero. OPTIND is set to the index of the first non-option argument and name is set to ?. In an example from the Bash Hackers Wiki getopts tutorial: while getopts ":a" opt; do case $opt in a) echo "-a was triggered!" >&2 ;; \?) echo "Invalid option: -$OPTARG" >&2 ;; esac done When the end of options is encountered, getopts exits with a return value greater than zero, so the while loop will stop. Then inside the while loop, is the part inside \?) never reached? If yes, why is it there? Thanks.
It’s there to process invalid options. In the example, if you run script -a, the -a option is expected and results in “-a was triggered!”. If you run script -b, -b isn’t valid and will be handled by the \? case, resulting in “Invalid option: -b”.
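A tiny self-contained demo makes both branches visible. The loop is wrapped in a function (my addition) so it can be re-run with different argument lists, and each branch records what it saw:

```shell
#!/bin/bash
# parse: record which branch each option hits, in $seen.
parse() {
  OPTIND=1
  seen=""
  while getopts ":a" opt; do
    case $opt in
      a)  seen="$seen a" ;;
      \?) seen="$seen bad:$OPTARG" ;;   # invalid options land here
    esac
  done
}

parse -a -b
echo "seen:$seen"
```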
What happens to getopts when the end of options is encountered
1,507,439,499,000
A stackoverflow post has a template for handling command line arguments. Does the test [ $# == 0 ] mean that a bash script shouldn't be run without any argument? As a template, I think that scripts generally do not necessarily require any argument. In the case statement, how different are the two cases *) and "?") ? They seem the same. # --- Options processing ------------------------------------------- if [ $# == 0 ] ; then echo $USAGE exit 1; fi while getopts ":i:vh" optname do case "$optname" in "v") echo "Version $VERSION" exit 0; ;; "i") echo "-i argument: $OPTARG" ;; "h") echo $USAGE exit 0; ;; "?") echo "Unknown option $OPTARG" exit 0; ;; ":") echo "No argument value for option $OPTARG" exit 0; ;; *) echo "Unknown error while processing options" exit 0; ;; esac done shift $(($OPTIND - 1)) param1=$1 param2=$2
This script requires at least one arg; if not, it displays usage info. It should do echo $USAGE >&2 as this is an error. Other scripts may work with zero arguments, so you will have to modify. Just as some don't take the -i argument. "?" vs *: yes, they are different. "?" tells case to look for a literal ?. This is what getopts returns when it finds an option that it does not expect (invalid option). * tells case: do this if you find no other match. This should not happen, but it may. It probably indicates a bug in getopts, or more likely in your program (see defensive programming).
Questions about understanding a template of using bash's getopts
1,507,439,499,000
Bash manual says getopts optstring name [args] When the end of options is encountered, getopts exits with a return value greater than zero. OPTIND is set to the index of the first non-option argument and name is set to ?. Does it mean that getopts only read in options and option arguments, but not arguments which are neither options nor option arguments? getopts can't work with the case where in the command line, some options are specified after some arguments which are neither options nor option arguments? In other words, does getopts require that arguments which are neither options nor option arguments be specified after all the options and option arguments? Thanks.
Yes, getopts is the tool to parse options in the POSIX way (even in bash, the GNU shell): In: cmd -abc -dxx -e yy arg -f -g (with an optspec of :abcd:e:fg) -f and -g are regular arguments. getopts stops at that arg. Generally, you do: while getopts... case...esac done shift "$((OPTIND - 1))" echo Remaining arguments: [ "$#" -eq 0 ] || printf ' - %s\n' "$@" If you want to process the options the GNU way, where options are considered after non-option arguments (except when there's a -- or when POSIXLY_CORRECT is in the environment), you can use the util-linux or busybox implementation of getopt instead (with a different API). That one also supports long options. That won't be portable outside of Linux though. You do something like: parsed_opts=$(getopt -o abcd:e:fg -l long -- "$@") || usage eval "set -- $parsed_opts" for o do case $o in (-[abcfg]) echo "no-arg option: $o"; shift;; (--long) echo "long option"; shift;; (-[de]) printf '%s\n' "option $o with arg $2"; shift 2;; (--) shift; break;; (*) echo "never reached";; esac done echo Remaining args: [ "$#" -eq 0 ] || printf ' - %s\n' "$@" Note that there will be some re-ordering in that options and their arguments will be removed from the "remaining args": $ busybox getopt -o abcd:e:fg -l long -- -a foo bar -e x baz --l -a -e 'x' --long -- 'foo' 'bar' 'baz'
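A quick way to see this stopping behaviour is to record what getopts consumed versus what is left after the shift. This sketch uses an arbitrary optspec of my own choosing:

```shell
#!/bin/bash
# parse: collect recognised option letters in $opts; after the loop,
# whatever getopts did not consume ends up in $rest.
parse() {
  OPTIND=1
  opts=""
  while getopts :abc:d: opt; do
    opts="$opts$opt"
  done
  shift "$((OPTIND - 1))"
  rest="$*"
}

parse -ab -c xx arg1 -d yy
echo "options consumed: $opts"
echo "left over:        $rest"
```

Note that `-d yy` was never parsed: the non-option `arg1` ended option processing, so both stay in the leftovers.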
Does getopts read in command line arguments in some order?
1,507,439,499,000
My small POSIX shell scripts do not usually take any arguments, so this is kind of new to me... The minimal snippet would probably look like this: # default for hotkey variable on top of script is set hotkey=Print ... while getopts ':hk:' option; do case "$option" in k) # override default hotkey variable with supplied arg. hotkey=$OPTARG shift 2 ;; h) # self-explanatory I assume, prints usage, and exits script with code 0 print_usage_and_exit 0 ;; *) # inspects unspecified arguments, prints usage, and exits script with code 1 dump_args "$@" print_usage_and_exit 1 ;; esac done ... What remains unclear to me is whether, in this particular case, there is any use for the well-known command: shift $(( OPTIND - 1 )) Thanks for any direction
shift $(( OPTIND - 1 )) removes the arguments that were parsed by getopts. For example, if you do ./yourscript.sh -k x y z, that makes $1 be y instead of -k. If you aren't using $@, $1, etc. after your getopts loop, then you don't need that line for anything.
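To see concretely what the shift buys you, compare $1 with and without it. This sketch mirrors the answer's `-k x y z` example, wrapped in a function of my own so both states can be captured:

```shell
#!/bin/bash
# parse: after the getopts loop, capture $1 before and after the
# customary shift.
parse() {
  OPTIND=1
  hotkey=Print
  while getopts ':hk:' opt; do
    case $opt in
      k) hotkey=$OPTARG ;;
    esac
  done
  before=$1                     # still the option word here
  shift "$((OPTIND - 1))"
  after=${1-}                   # now the first operand
}

parse -k x y z
echo "hotkey=$hotkey before=$before after=$after"
```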
Manipulating arguments with OPTIND, OPTARG, getopts, and shift correctly
1,507,439,499,000
I have a script with this usage: myscript [options] positional-args... -- [fwd-params] Because [options] can have long, or short variants, I like using getopt. But I'm having troubles. I use getopt like this: args=$(getopt -o a:,b --long alpha:,gamma -- "$@") eval set -- "$args" while : ; do case "$1" in -a | --alpha) a="$2" ; shift 2 ;; -b ) b=true ; shift ;; --gamma ) g=true ; shift ;; -- ) shift ; break ;; esac done positionals=() while [ $# -gt 0 ] ; do case "$1" in * ) positionals+=("$1"); shift ;; -- ) shift ; break ;; esac done # What-ever is left in "$@" needs to be forwarded to another program backend "$@" This works great if I don't have any [fwd-params]: $ getopt -o a:,b -- -a 1 -b pos1 pos2 -a '1' -b -- 'pos1' 'pos2' ^-- getopt adds this to help me find the end-of-options/start-of-positionals But it falls apart if the user defined any [fwd-params]. Here's my desired output: $ getopt -o a:,b -- -a 1 -b pos1 pos2 -- fwd1 -a '1' -b -- 'pos1' 'pos2' '--' 'fwd1' ^ \- I'll use this to delimit the positional arguments from the forwarding ones. And here's what I actually get. The user's intentional -- has been filtered out. $ getopt -o a:,b -- -a 1 -b pos1 pos2 -- fwd1 -a '1' -b -- 'pos1' 'pos2' 'fwd1' What's the best way to delimit my positional-arguments from the forwarding ones?
Well, if the user passes the arguments -a 1 -b pos1 pos2 -- fwd1, getopt takes the -- as the marker making all following arguments non-options. It's not a positional argument itself here. If you want the -- to appear as-is, your user would have to explicly add the marker, and another -- one after to separate the two sets of positionals, e.g.: $ getopt -o a:,b -- -a 1 -b -- pos1 pos2 -- fwd1 -a '1' -b -- 'pos1' 'pos2' '--' 'fwd1' or, you could prefix the set of option characters with a + to ask for the POSIX behaviour, where any non-option marks the end of options. That way, the -- in your example would no longer be the marker, but a positional in itself: $ getopt -o +a:,b -- -a 1 -b pos1 pos2 -- fwd1 -a '1' -b -- 'pos1' 'pos2' '--' 'fwd1' But note that if you don't have positional arguments in the first set, the user will still need to manually add a total of two --s: $ getopt -o +a:,b -- -a 1 -b -- -- fwd1 -a '1' -b -- '--' 'fwd1' I would likely do something similar to what GNU Parallel does, and use some other fixed string to separate the two types of positionals. E.g. have the script look for a :: instead, leaving -- for getopt. So the user would enter -a 1 -b pos1 pos2 :: fwd1 (with or without a --): $ getopt -o +a:,b -- -a 1 -b pos1 pos2 :: fwd1 -a '1' -b -- 'pos1' 'pos2' '::' 'fwd1' or with no positionals in the first set: $ getopt -o +a:,b -- -a 1 -b :: fwd1 -a '1' -b -- '::' 'fwd1'
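The "::" separator idea can be implemented in a few lines of plain bash once getopt has stripped the options. The function and array names here are assumptions, as is the separator string itself:

```shell
#!/bin/bash
# split_at_sep: divide "$@" into $positionals and $fwd at the
# first "::".
split_at_sep() {
  positionals=() fwd=()
  local found=0 a
  for a in "$@"; do
    if [ "$found" -eq 0 ] && [ "$a" = "::" ]; then
      found=1
      continue
    fi
    if [ "$found" -eq 0 ]; then
      positionals+=("$a")
    else
      fwd+=("$a")
    fi
  done
}

split_at_sep pos1 pos2 :: fwd1 fwd2
echo "positionals: ${positionals[*]}"
echo "forwarded:   ${fwd[*]}"
```

The second array can then be handed to the backend with `backend "${fwd[@]}"`.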
getopt with several `--`
1,507,439,499,000
How can I use getopt or getopts with subcommands and long options, not with short options? I know how to implement short and long options with getopts. Solutions that I've found so far are using getopts in subcommand switch-case but with short options, for example: https://stackoverflow.com/questions/402377/using-getopts-to-process-long-and-short-command-line-options Using getopts to parse options after a non-option argument How can I implement for example following subcommands and their long options?: $> ./myscript.sh help show show --all set set --restart reset reset --restart help
Parse them manually in a function, and call the function with the "$@". After the call, any residual non-options are in $1, $2, etc. Note, in this example, I'm using associative arrays to hold the switches. If you don't have bash 4 or later, you can use global variables instead. The subcommand will be args[0] and its options in the rest. usage() { echo "${0##*/} [options...] args ... " } version() { echo "0.1" } parse_opt() { while [[ -n "$1" ]]; do case "$1" in --) break ;; ## Your options here: -m) opts[m]=1 ;; -c|--center) opts[c]="$2" ; shift ;; -x) opts[x]=1 ;; ## Common / typical options -V) version; exit 0 ;; --version) version; exit 0 ;; -?|--help) usage ; exit 0 ;; -*) echo >&2 "$0: Error in usage." usage exit 1 ;; *) break ;; esac shift done args=("$@") } declare args declare -A opts # assoc array parse_opt "$@" case "${args[0]}" in sub1) "${args[@]}" ;; *) echo >&2 "Unknown sub-command" exit 1 esac
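For the specific subcommands in the question, a stripped-down manual parser can be even simpler. This is a sketch where echoes stand in for the real actions:

```shell
#!/bin/bash
# dispatch: the first word selects the subcommand; the remaining
# words are matched against that subcommand's long options.
dispatch() {
  local cmd=${1-help}
  if [ $# -gt 0 ]; then shift; fi
  local all=0 restart=0 a
  for a in "$@"; do
    case $a in
      --all)     all=1 ;;
      --restart) restart=1 ;;
      *) echo "unknown option: $a" >&2; return 1 ;;
    esac
  done
  echo "cmd=$cmd all=$all restart=$restart"
}

dispatch show --all
dispatch set --restart
dispatch help
```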
Bash script with subcommand and long options only
1,507,439,499,000
I have a bash script which processes an input file with optional arguments. The script looks like this #!/bin/bash while getopts a:b:i: option do case "${option}" in a) arg1=${OPTARG};; b) arg2=${OPTARG};; i) file=${OPTARG};; esac done [ -z "$file" ] && { echo "No input file specified" ; exit; } carry out some stuff The script runs fine, but I need to specify the input file like so sh script.sh -a arg1 -b arg2 -i filename I would prefer to be able to call the script without the -i option, like so sh script.sh -a arg1 -b arg2 filename while still having the error message when no input file is specified. Is there a way to do this?
#!/bin/sh - # Beware variables can be inherited from the environment. So # it's important to start with a clean slate if you're going to # dereference variables while not being guaranteed that they'll # be assigned to: unset -v file arg1 arg2 # no need to initialise OPTIND here as it's the first and only # use of getopts in this script and sh should already guarantee it's # initialised. while getopts a:b:i: option do case "${option}" in (a) arg1=${OPTARG};; (b) arg2=${OPTARG};; (i) file=${OPTARG};; (*) exit 1;; esac done shift "$((OPTIND - 1))" # now "$@" contains the rest of the arguments if [ -z "${file+set}" ]; then if [ "$#" -eq 0 ]; then echo >&2 "No input file specified" exit 1 else file=$1 # first non-option argument shift fi fi if [ "$#" -gt 0 ]; then echo There are more arguments: printf ' - "%s"\n' "$@" fi I changed the bash to sh as there's nothing bash-specific in that code.
Processing optional arguments with getopts in bash
1,507,439,499,000
I have a script (let's call it scriptC) that uses getopt to parse short and long options and works fine. This script is being called like this: scriptA runs scriptB which calls scriptC with the proper parameters. Question: Is it possible to pass the same parameters as arguments to scriptA and then have those passed on eventually to scriptC? The scripts are called like: scriptB "$@" and in scriptB it eventually does scriptC --param1 --param2
If scriptA calls scriptB like scriptB "$@" then the command line arguments that were used for invoking scriptA will be passed to scriptB provided that these have not been altered before the call. Likewise for the call from scriptB to scriptC. As long as scriptA and scriptB does not try to interpret, change or otherwise mutate the contents of $@ (or the individual positional parameters $1, $2, $3 etc.), the command line arguments will be passed on to scriptC for it to parse with getopt. Example using functions instead of scripts (it works the same way): #!/bin/sh scriptC () { printf 'Arg: %s\n' "$@" } scriptB () { scriptC "$@" } scriptA () { scriptB "$@" } scriptA -param1 -param2 This will produce the output Arg: -param1 Arg: -param2 Doing the call as scriptA "hello world" --param1 /etc/passwd --param2 will produce Arg: hello world Arg: --param1 Arg: /etc/passwd Arg: --param2 That is, the parameters will be passed on to scriptC without modification. It is then left to scriptC to interpret the parameters using getopt, getopts or by some other means.
Pass params for getopt from a script that does not use getopt
1,507,439,499,000
I have the following snippet: #!/bin/bash OPTIND=1 while getopts ":m:t" params; do case "${params}" in m) bar=$OPTARG ;; t) foo=$OPTARG ;; \?) echo "Invalid option: -$OPTARG" >&2 print_usage exit 2 ;; :) echo "Option -$OPTARG requires an argument." >&2 print_usage exit 2 ;; esac done shift "$(( OPTIND-1 ))" echo "${foo}" && echo "${bar}" How can I output the piped stdout through this script? For example: echo "this is the test" | bash getoptscript.sh -m - and it should provide this is the test as the output.
By piping the output to xargs, which converts its input into arguments: echo "this is the test" | xargs bash getoptscript.sh -m - Which will result in: bash getoptscript.sh -m - this is the test
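To see what xargs is doing here, substitute a command that just prints its arguments; `printf` below is a stand-in for the script:

```shell
# Each whitespace-separated word from stdin becomes a separate
# command-line argument to the command xargs runs.
echo "this is the test" | xargs printf '<%s>'
echo
```

Note that xargs re-splits its input on whitespace (and interprets quotes), so arguments containing spaces need GNU xargs' `-d` option or a different approach.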
How to output piped stdout in bash script from getopts?
1,507,439,499,000
#!/bin/bash while getopts ":r" opt; do case $opt in r) [ -f "$1" ] && input="$1" || input="-" read $userinp cat $input | tr -d "$userinp" ;; esac done That is my code. Essentially I'm trying to either parse a file or a string and have the user choose a character to delete from the text or string. The call would be something like: /stripchars -r 'd' test > out This would remove all instances of d from the test file and place the new string or text in out. At the moment I'm just getting empty outputs.
The character (or set, or range) to delete is given by the -r flags's argument, so there's no need to read it. The filename (if any) is left in the positional argument after command line processing is done. Don't process the file when you're not yet done with processing the command line flags. The option string to getopts is backwards. Solution: #!/bin/bash # Process command line. # Store r-flag's argument in ch, # Exit on invalid flags. while getopts 'r:' opt; do case "$opt" in r) ch="$OPTARG" ;; *) echo 'Error' >&2 exit 1 ;; esac done # Make sure we got r-flag. if [[ -z "$ch" ]]; then echo 'Missing -r flag' >&2 exit 1 fi # Shift positional parameters so that first non-flag argument # is left in $1. shift "$(( OPTIND - 1 ))" if [[ -f "$1" ]] || [[ -z "$1" ]]; then # $1 is a (regular) file, or unset. # Use file for input, or stdin if unset. cat "${1:--}" | tr -d "$ch" else # $1 is set, but not a filename, pass it as string to tr. tr -d "$ch" <<<"$1" fi This would be used as $ ./script -r 'a-z' file (deletes all lowercase characters in file) $ ./script -r 'a-z' "Hello World!" (deletes all lowercase characters in the given string, unless it happens to be a filename) $ ./script -r 'a-z' (deletes all lowercase character in the standard input stream)
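Both branches end in the same `tr -d` call; a quick check of the string case, with a helper name of my own:

```shell
#!/bin/bash
# strip_chars SET STRING: delete every character of SET from STRING.
strip_chars() {
  tr -d "$1" <<<"$2"
}

strip_chars 'a-z' 'Hello World!'   # uppercase, space and punctuation survive
strip_chars 'd'   'deleted'
```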
In a bash script, how may I use "tr -d" to delete a user entered char?
1,493,192,554,000
As I understand for text-based interaction with the Linux kernel, a program called init starts getty (or agetty) which connects to one of the TTY devices under /dev and prompts for a username. After this, a program called login is run which prompts for the user's password and if correct, then launches the user's preferred shell (e.g. bash or csh). At this point, bash interacts with the kernel via the TTY device. How does this login process work for X11? Does X11 interact with the kernel over a TTY?
The shell uses a TTY device (if it’s connected to one) to obtain user input and to produce output, and not much else. The fact that a shell is connected to a TTY is determined by getty (and preserved by login); most of the time the shell doesn’t care whether it’s connected to a TTY or not. Its interaction with the kernel happens via system calls. An X11 server doesn’t know about logins (just like a shell). The login process in X11 works in two ways: either the user logs in on the terminal, and then starts X (typically using startx); or an X server is started with a “display manager” which prompts the user for a login and password (or whatever authentication information is required). The way X11 servers obtain input and produce output is very different compared to a shell. On the input side, X knows about devices that shells don’t, starting with mice; it typically manages those directly with its own drivers. Even for keyboards, X has its own drivers which complement the kernel’s handling (so as I understand it, on Linux for example X uses the TTY driver to read raw input from the keyboard, but then interprets that using its own driver). On the output side, X drives display devices directly, with or without the kernel’s help, and without going through a TTY device. X11 servers on many systems do use TTY devices though, to synchronise with the kernel: on systems which support virtual terminals, X needs to “reserve” the VT it’s running on, and handle VT switching. There are a few other subtleties along the way; thus on Linux, X tweaks the TTY to disable GPM (a program which allows text-mode use of mice). X can also share a VT... On some workstations in the past, there wasn’t much explicit synchronisation with the kernel; if you didn’t run xconsole, you could end up with kernel messages displayed in “text mode” over the top of your X11 display.
How does X11 interact with the kernel / perform login
1,493,192,554,000
By default when I login to my Arch linux box in a tty, there is a timeout after I type my username but before I type my password. So it goes like this Login: mylogin <enter> Password: (+ 60 seconds) Login: As you can see, if I don't type the password it recycles the prompt -- I want it to wait indefinitely for my password instead of recycling the login prompt. Is this possible? It seems like the --timeout option to agetty would be what I want. However, I tried adding this flag in the getty files in /usr/lib/systemd/system/ (the option is not used by default), and rebooting -- it seemed to have no effect.
agetty calls login after reading in the user name, so any timeout when reading the password is done by login. To change this, edit /etc/login.defs and change the LOGIN_TIMEOUT value. # # Max time in seconds for login # LOGIN_TIMEOUT 60
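A scripted version of that edit, applied here to a throwaway sample rather than the real /etc/login.defs. Whether a value of 0 disables the timeout entirely depends on your shadow/login version - check login.defs(5) on your system before relying on it:

```shell
# Work on a disposable copy; never edit /etc/login.defs untested.
printf 'LOGIN_TIMEOUT\t60\n' > login.defs.sample
sed -i 's/^LOGIN_TIMEOUT[[:space:]].*/LOGIN_TIMEOUT 0/' login.defs.sample
cat login.defs.sample
```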
change tty login timeout - ArchLinux
1,493,192,554,000
When I looked in the manual for agetty all I saw was alternative getty
There was a program named getty in 1st Edition Unix. The BSDs usually have a program named getty that is a (fairly) direct descendant of this. It (nowadays) reads /etc/ttys for the database of configured terminal devices and /etc/gettytab for the database of terminal line types (a line type being passed as an argument to the getty program). The Linux world has a collection of clones and reimplementations, as did minix before it. agetty was written by Wietse Venema, as an "alternative" to AT&T System 5 and SunOS getty, and ported to Linux by Peter Orbaek (who also provided simpleinit alongside it). It is suitable for use with serial devices, with either modems or directly connected terminals, as well as with virtual terminal devices. Paul Sutcliffe, Jr's getty and uugetty is hard to find nowadays, but was an alternative to agetty. (The getty-ps package containing them both can still be found in SlackWare.) Fred van Kempen wrote an "improved" getty and init for minix in 1990. Gert Doering's mgetty is another getty that is suitable for use with actual serial devices, and was designed to support "smart" modems such as fax-modems and voice-modems, not just "dumb" terminal-only modems. Florian La Roche's mingetty was designed not to support serial devices, and generic getty functionality on any kind of terminal device. Rather, it is specific to virtual terminal devices and cuts out all of the traditional getty hooplah that is associated with modems and serial devices. Felix von Leitner's fgetty was derived from mingetty, adjusted to use a C library with a smaller footprint than the GNU C library, and tweaked to include things like the checkpasswd mechanism. Nikola Vladov's ngetty was a rearchitecture of the whole getty mechanism. Instead of init (directly or indirectly) knowing about the TTYs database and spawning multiple instances of getty, each to respond on one terminal, init spawns one ngetty process that monitors all of the terminals.
What is the difference between getty and agetty?
1,493,192,554,000
I have a line in my inittab like the following: # Put a getty on the serial port ttyS0::respawn:/sbin/getty -L ttyS0 115200 vt100 # GENERIC_SERIAL If I try to perform a similar operation from an ssh session command line (this time towards a usb-serial adapter I have): /sbin/getty -L ttyUSB0 115200 vt100 I receive the following response: getty: setsid: Operation not permitted Is it possible to launch the getty process from my ssh session and have a serial terminal be presented on the usb-serial adapter? Why does this have to occur in inittab?
I solved that problem running : su root -c "getty /dev/ttyXX" I am running busybox 1.23.1 on an ARM platform.
getty start from command line?
1,493,192,554,000
I have connected a USB-to-serial cable from OS X to a Banana Pi board running Arch Linux ARM, distributed by Lemaker. The connection itself works well - I see all the boot messages on startup, I can drop to U-Boot and issue commands etc.; I assume that the connection itself is working as expected. However, as soon as the boot sequence finishes and I should be prompted for my credentials, the screen goes blank (clearing previous entries) and no login prompt appears. Googling around revealed that I should: Enable getty on the serial console: systemctl enable [email protected] Ensure that the kernel boot argument console=ttyS0,115200 is the last console parameter Doing that, I still do not get the login prompt. Checking the logs reveals that systemd for some reason cannot start dev-ttyS0.device: Nov 25 20:20:27 pi-server systemd[1]: Timed out waiting for device dev-ttyS0.device. Nov 25 20:20:27 pi-server systemd[1]: Dependency failed for Serial Getty on ttyS0. journalctl -u dev-ttyS0.device does not reveal any additional information - only that it timed out. systemctl start dev-ttyS0.device also times out. What am I missing? Why can't systemd start the device? And more importantly, why is the login prompt missing? Running Linux pi-server 3.4.90 #2 SMP PREEMPT Tue Aug 5 14:11:40 CST 2014 armv7l GNU/Linux Thank you for your assistance and guidance!
After reading more on the internets I found out that a newer version of systemd requires a kernel with configuration option CONFIG_FHANDLE=y - however, this option is not present on the kernel version included in the official banana-pi ArchLinux image (3.4.90). I recompiled the kernel with the option included and now the login prompt appears as expected -> everything is great. For those interested in compiling the newer kernel (3.4.103+ at the time of this writing) I followed the instructions provided here on a virtual Ubuntu Server 14.04. Did not encounter any problems. I only followed to a point where I had kernel compiled - I did not create a new SD image. Update The official Banana Pi Arch Linux image now contains the new kernel version 3.4.103 so there is no need to recompile.
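To check whether your current kernel already has the option before recompiling, one of these config sources usually exists. This is a sketch; the paths vary by distribution:

```shell
# Look for CONFIG_FHANDLE in the usual kernel-config locations.
check_fhandle() {
  zgrep '^CONFIG_FHANDLE' /proc/config.gz 2>/dev/null \
    || grep '^CONFIG_FHANDLE' "/boot/config-$(uname -r)" 2>/dev/null \
    || echo "CONFIG_FHANDLE not found (or no config available)"
}
check_fhandle
```

A `CONFIG_FHANDLE=y` line means the running kernel already satisfies systemd's requirement; a commented "is not set" line or no match means it does not.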
No login prompt on serial console
1,493,192,554,000
Is it possible to use agetty from the command line? I tried the command sudo agetty -s 34800 tty8 linux but it returns after a few seconds and tty8 is not open. Is it the expected behaviour? Also, trying to start it in the background with sudo agetty -s 34800 tty8 linux &> /dev/null & returns immediately. Why?
I tried your line, and I get the following in /var/log/secure (Fedora 19): getty[12336]: bad speed: 34800 Try this instead: agetty -s 38400 -t 600 tty8 linux
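The valid speeds are the fixed termios baud-rate table, which is why 34800 is rejected while 38400 works. A small sketch of that membership check (the list below is the common subset, not agetty's exact table):

```sh
# Report (via exit status) whether a value is one of the common
# termios baud rates; nonstandard values like 34800 fail.
is_std_baud() {
    case "$1" in
        110|300|600|1200|2400|4800|9600|19200|38400|57600|115200|230400)
            return 0 ;;
        *)
            return 1 ;;
    esac
}
```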
How to use agetty from the command line
1,493,192,554,000
I am trying to set up getty to log in over serial (mainly as an experiment). With almost any configuration, the same thing happens. If my default shell is bash, I get this message after I log in: -bash: cannot set terminal process group (15297): Inappropriate ioctl for device -bash: no job control in this shell and then to prove that it doesn't work, I can't use ctrl+C to stop programs: $ sleep 30 ^C and it doesn't seem to send the signal. These are the configurations I have tried: I have tried both of these commands # copied from raspberry pi: sudo /sbin/agetty --keep-baud 115200,38400,9600 ttyUSB0 vt220 # something else I read somewhere sudo getty -L ttyUSB0 9600 vt100 # (I know I'm mixing and matching a lot of differences but the result is the same) I have tried both screen and picocom as a client. I have tried using a Raspberry Pi as a server, and two different Ubuntu laptops. I have tried two FTDIs, two RS-485 USB adapters, and a built-in RS232 on the getty side with a USB RS232 on the client side. I have also tried changing my default shell to sh and dash. I don't get the message, but ctrl+C still doesn't work as expected. The funny thing is - when Raspberry Pis automatically configure /dev/ttyAMA0, and it uses exactly the getty command that I have put, job control works! And the terminal settings are almost identical.
(except for -iutf8 actually) here are the terminal settings with the FTDI connection, and picocom running: $ stty -a -F /dev/ttyUSB0 speed 9600 baud; rows 24; columns 80; line = 0; intr = ^C; quit = ^\; erase = ^?; kill = ^U; eof = ^D; eol = <undef>; eol2 = <undef>; swtch = <undef>; start = ^Q; stop = ^S; susp = ^Z; rprnt = ^R; werase = ^W; lnext = <undef>; discard = <undef>; min = 1; time = 0; -parenb -parodd -cmspar cs8 hupcl -cstopb cread clocal -crtscts -ignbrk -brkint -ignpar -parmrk -inpck -istrip -inlcr -igncr -icrnl ixon ixoff -iuclc -ixany -imaxbel -iutf8 opost -olcuc -ocrnl onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0 isig -icanon -iexten -echo echoe echok -echonl -noflsh -xcase -tostop -echoprt echoctl echoke -flusho -extproc $ stty -a -F /dev/ttyUSB1 speed 9600 baud; rows 0; columns 0; line = 0; intr = ^C; quit = ^\; erase = ^?; kill = ^U; eof = ^D; eol = <undef>; eol2 = <undef>; swtch = <undef>; start = ^Q; stop = ^S; susp = ^Z; rprnt = ^R; werase = ^W; lnext = ^V; discard = ^O; min = 1; time = 0; -parenb -parodd -cmspar cs8 hupcl -cstopb cread clocal -crtscts -ignbrk -brkint -ignpar -parmrk -inpck -istrip -inlcr -igncr -icrnl -ixon -ixoff -iuclc -ixany -imaxbel -iutf8 -opost -olcuc -ocrnl onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0 -isig -icanon -iexten -echo echoe echok -echonl -noflsh -xcase -tostop -echoprt echoctl echoke -flusho -extproc What am I doing wrong? And why does it work with the built in configuration for the built in serial port on the raspberry pi?
It's not the commands but the environment in which they run that is the difference. Normally getty is spawned directly from the system service manager (init) – both with systemd where it is a .service, and in the SysV world where it has an inittab entry (and not an init.d script!). This has several differences from being spawned from within another terminal: Primarily, a process started from a terminal inherits it as its "controlling terminal", which is the most important parameter for shell job control. You can see this in ps aux or ps -ef – service processes have no ctty at first, so when getty opens the indicated terminal, that becomes its controlling terminal for job control once the shell is run. But a getty that was started from your xterm will continue to have that xterm pty as its controlling tty despite its input/output now being routed to the serial port – and while getty itself doesn't mind, the shell will also inherit the wrong controlling tty, and that'll make job control impossible. $ ps -C agetty PID TTY TIME CMD 1136 tty1 00:00:00 agetty 14022 pts/22 00:00:00 agetty ^-- should be ttyS0! The controlling terminal defines /dev/tty; it defines which processes job-control signals are sent to; it defines which processes are killed (SIGHUP'd) once the terminal closes. If your shell has inherited a controlling tty that's different from its stdin/out tty, all kinds of weird things may happen. There are ways that a process can detach from its previous terminal, such as calling setsid() – and traditionally /etc/init.d scripts did this to 'daemonize' – but getty does not use them automatically because it doesn't expect to be run this way, so it wasn't programmed in. (There is a setsid tool that could be used to force this to happen, but you shouldn't use it here either; you should do things the right way from start.) You should ideally just systemctl start serial-getty@ttyUSB0 and let getty run in a clean environment. 
If custom options are needed, it's better to customize that service using systemctl edit [--full] instead of running getty directly. (If systemd is not used, then edit /etc/inittab; it usually has an example included. Run telinit q to reload the inittab.) There are many other, relatively minor differences between processes started from a shell vs through a service manager – stdin/stdout/stderr will start off as being your terminal at first (getty will close and reopen them, but not all services do); environment variables will be inherited from your 'sudo'; the cgroup will be inherited, which might affect your resource limits; from cgroups, the systemd-logind session will be inherited (and the serial login will not be permitted to start its own); your SELinux security context (or AppArmor profile, or SMACK label) will be inherited if that's in use; etc.
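As a sketch, a drop-in created with systemctl edit [email protected] that overrides the getty command might look like this (the baud rate and terminal type are illustrative):

```ini
# /etc/systemd/system/[email protected]/override.conf
[Service]
ExecStart=
ExecStart=-/sbin/agetty 115200 %I vt220
```

The empty ExecStart= line clears the template's original command before the new one is set; without it systemd would complain about multiple ExecStart entries.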
job control doesn't work when I try to set up getty over serial
1,493,192,554,000
I've got a FreeBSD (9.2) box that I'm trying to strip down as lightweight as possible. It's running on a VM server, so other than ttyv0, we don't ever use the console. I'd like (if possible and reasonable) to not start the extra getty processes that run ttyv1 through ttyv7. How do I accomplish that in a FreeBSD supported manner?
FreeBSD doesn't have an /etc/inittab; its terminals are configured in the /etc/ttys file. Comment out the unneeded ttyv entries (or change their "on" field to "off") and run init q as root so init rereads the file. Take a look at the ttys manpage here.
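For reference, a disabled virtual-terminal entry in /etc/ttys looks roughly like this (the getty class and terminal type vary by release):

```
ttyv1	"/usr/libexec/getty Pc"	xterm	off	secure
ttyv2	"/usr/libexec/getty Pc"	xterm	off	secure
```

After editing, run init q as root and init will stop respawning gettys on those terminals.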
How do I limit the number of getty processes started?
1,493,192,554,000
I have a few dark areas when trying to understands TTYs. On my system, I have /dev/tty[1-63]. Is udev creating these character devices? And how can I access them (like tty2 can be accessed with Ctrl+Alt+F2)? How can I access /dev/tty40 for example? As I understand, when I access /dev/tty1, agetty is called, which then calls login. What really is the role of agetty outside of calling login?
These are virtual consoles, known in Linux as virtual terminals (VT). There is a single hardware console (a single screen and a single keyboard), but Linux pretends that there are multiple ones (as many as 63). At a given point in time, a single VT is active; keyboard input is routed to that console and the screen shows what that console displays. You can use the command chvt to switch between VT (you need to have direct access to the current virtual console, which you won't have if logged remotely or running under X). You can also use keybindings set with the keymap loaded by loadkeys or by the X server. By default, outside X, Alt+Fn switches to console number n and Alt+Shift+Fn switches to console number n+12; Alt+Left and Alt+Right switch to the previous/next console. A console needs to be allocated in order to switch to it. You can use openvt to allocate a console (this requires root) and deallocvt to deallocate one. The program getty is not directly related to virtual consoles, in particular it has nothing to do with VT allocation. The role of getty is to prepare the console (set up serial port parameters, possibly blank the screen, display a welcome message, etc.) and call login, then wait for the login session to terminate and repeat. In a nutshell, the role of getty is to call login in a loop. You don't have to run getty to use a console. For example, you can start any program on a console with openvt. You can start an X server on a new console with startx.
Access higher TTYs and the role of getty
1,493,192,554,000
From the man page: agetty opens a tty port, prompts for a login name and invokes the /bin/login command. It is normally invoked by init(8). But if you run login without any argument, it asks a username. So why not let login do the job of asking the username, instead of doing it inside agetty (also, if your login fails, login will ask you your username again)? It just seems redundant to me. I thought agetty's only job would be to call login repeatedly (because login exits after a certain number of tries).
By reading in the username itself, agetty can automatically adapt the tty settings — parity bits, character size, and newline processing — to what the terminal actually sends. If you disable that (the --skip-login option), it needs to assume (possibly wrong) default settings.
Why does agetty ask for the username itself?
1,493,192,554,000
I want to remove the default newline inserted before the content of /etc/issue on login prompt in tty. I'm using agetty and systemd. I tried to add the --nonewline option to my [email protected]: ExecStart=/sbin/agetty --nonewline --noclear %I $TERM That results in: # systemctl status -l [email protected][email protected] - Getty on tty5 Loaded: loaded (/usr/lib/systemd/system/getty@.service; disabled) Active: failed (Result: start-limit) since sam. 2014-05-17 23:50:13 CEST; 56s ago Docs: man:agetty(8) man:systemd-getty-generator(8) http://0pointer.de/blog/projects/serial-console.html Process: 14538 ExecStart=/sbin/agetty --nonewline --noclear %I $TERM (code=exited, status=1/FAILURE) Main PID: 14538 (code=exited, status=1/FAILURE) systemd[1]: [email protected] has no holdoff time, scheduling restart. systemd[1]: Stopping Getty on tty5... systemd[1]: Starting Getty on tty5... systemd[1]: [email protected] start request repeated too quickly, refusing to start. systemd[1]: Failed to start Getty on tty5. systemd[1]: Unit [email protected] entered failed state. And I get: # journalctl --no-pager -b -u [email protected] -- Logs begin at sam. 2013-10-12 00:20:12 CEST, end at sam. 2014-05-17 23:52:49 CEST. -- systemd[1]: Starting Getty on tty5... systemd[1]: Started Getty on tty5. agetty[14497]: Usage: agetty[14497]: agetty [options] <line> [<baud_rate>,...] [<termtype>] agetty[14497]: agetty [options] <baud_rate>,...
<line> [<termtype>] agetty[14497]: Options: agetty[14497]: -8, --8bits assume 8-bit tty agetty[14497]: -a, --autologin <user> login the specified user automatically agetty[14497]: -c, --noreset do not reset control mode agetty[14497]: -E, --remote use -r <hostname> for login(1) agetty[14497]: -f, --issue-file <file> display issue file agetty[14497]: -h, --flow-control enable hardware flow control agetty[14497]: -H, --host <hostname> specify login host agetty[14497]: -i, --noissue do not display issue file agetty[14497]: -I, --init-string <string> set init string agetty[14497]: -l, --login-program <file> specify login program agetty[14497]: -L, --local-line[=<mode>] control the local line flag agetty[14497]: -m, --extract-baud extract baud rate during connect agetty[14497]: -n, --skip-login do not prompt for login agetty[14497]: -o, --login-options <opts> options that are passed to login agetty[14497]: -p, --login-pause wait for any key before the login agetty[14497]: -r, --chroot <dir> change root to the directory agetty[14497]: -R, --hangup do virtually hangup on the tty agetty[14497]: -s, --keep-baud try to keep baud rate after break agetty[14497]: -t, --timeout <number> login process timeout agetty[14497]: -U, --detect-case detect uppercase terminal agetty[14497]: -w, --wait-cr wait carriage-return agetty[14497]: --noclear do not clear the screen before prompt agetty[14497]: --nohints do not print hints agetty[14497]: --nonewline do not print a newline before issue agetty[14497]: --nohostname no hostname at all will be shown agetty[14497]: --long-hostname show full qualified hostname agetty[14497]: --erase-chars <string> additional backspace chars agetty[14497]: --kill-chars <string> additional kill chars agetty[14497]: --help display this help and exit agetty[14497]: --version output version information and exit agetty[14497]: For more details see agetty(8). systemd[1]: [email protected] has no holdoff time, scheduling restart.
systemd[1]: Stopping Getty on tty5... Why doesn't agetty recognize the option? Is there another way to do this?
You have hit a bug! There's a F_NONL directive that never gets called in the agetty binary as can be seen in the sources: ... #define F_NONL (1<<17) /* No newline before issue */ ... /* Parse command-line arguments. */ static void parse_args(int argc, char **argv, struct options *op) { int c; enum { VERSION_OPTION = CHAR_MAX + 1, NOHINTS_OPTION, NOHOSTNAME_OPTION, LONGHOSTNAME_OPTION, HELP_OPTION, ERASE_CHARS_OPTION, KILL_CHARS_OPTION, }; const struct option longopts[] = { { "8bits", no_argument, 0, '8' }, { "autologin", required_argument, 0, 'a' }, ... { "skip-login", no_argument, 0, 'n' }, { "nonewline", no_argument, 0, 'N' }, while ((c = getopt_long(argc, argv, "8a:cC:d:Ef:hH:iI:Jl:L::mnNo:pP:r:Rst:Uw", longopts, NULL)) != -1) { switch (c) { case '8': op->flags |= F_EIGHTBITS; break; case 'a': op->autolog = optarg; break; case 'c': op->flags |= F_KEEPCFLAGS; break; case 'C': op->chdir = optarg; break; case 'd': op->delay = atoi(optarg); break; case 'E': op->flags |= F_REMOTE; break; case 'f': op->flags |= F_CUSTISSUE; op->issue = optarg; break; case 'h': op->flags |= F_RTSCTS; break; case 'H': fakehost = optarg; break; case 'i': op->flags &= ~F_ISSUE; break; case 'I': init_special_char(optarg, op); op->flags |= F_INITSTRING; break; case 'J': op->flags |= F_NOCLEAR; break; case 'l': op->login = optarg; break; case 'L': /* -L and -L=always have the same meaning */ op->clocal = CLOCAL_MODE_ALWAYS; if (optarg) { if (strcmp(optarg, "=always") == 0) op->clocal = CLOCAL_MODE_ALWAYS; else if (strcmp(optarg, "=never") == 0) op->clocal = CLOCAL_MODE_NEVER; else if (strcmp(optarg, "=auto") == 0) op->clocal = CLOCAL_MODE_AUTO; else log_err(_("invalid argument of --local-line")); } break; case 'm': op->flags |= F_PARSE; break; case 'n': op->flags |= F_NOPROMPT; break; case 'o': op->logopt = optarg; break; case 'p': op->flags |= F_LOGINPAUSE; break; case 'P': op->nice = atoi(optarg); break; case 'r': op->chroot = optarg; break; case 'R': op->flags |= F_HANGUP; break; case 
's': op->flags |= F_KEEPSPEED; break; case 't': if ((op->timeout = atoi(optarg)) <= 0) log_err(_("bad timeout value: %s"), optarg); break; case 'U': op->flags |= F_LCUC; break; case 'w': op->flags |= F_WAITCRLF; break; case NOHINTS_OPTION: op->flags |= F_NOHINTS; break; case NOHOSTNAME_OPTION: op->flags |= F_NOHOSTNAME; break; case LONGHOSTNAME_OPTION: op->flags |= F_LONGHNAME; break; case ERASE_CHARS_OPTION: op->erasechars = optarg; break; case KILL_CHARS_OPTION: op->killchars = optarg; break; case VERSION_OPTION: printf(_("%s from %s\n"), program_invocation_short_name, PACKAGE_STRING); exit(EXIT_SUCCESS); case HELP_OPTION: usage(stdout); default: usage(stderr); } } In the while loop there should be a block like below, which is missing. case 'N': op->flags |= F_NONL; break; I think it is a trivial patch to add. You can check the full source code on GitHub or kernel.org.
Remove the newline before `/etc/issue` in tty
1,493,192,554,000
I have machines in pairs. They're connected to each other by a null modem serial cable. These machines sometimes go down, and the only way to diagnose them is through that cable, using the other node of the pair. These devices have getty configured to run on the serial device /dev/ttyAMA0. This is by default, and I'd like to keep as close to the default config as possible. Here's the problem: I can't seem to get getty to relinquish control of the device, so I can use something like minicom to log into the other device. Unfortunately, simply killing getty doesn't work, as something seems to immediately restart it. How can I get getty to stop?
(By the way, I've never seen the spelling "GeTTY". I don't think it's correct.) The short answer is that you can disable getty by commenting it out in /etc/inittab and running init q to reread configuration. Unless you're using systemd or Upstart but since you didn't say so I'll assume you aren't. The longer answer is that your setup has an intrinsic problem and is flawed. With getty running on both serial ports, the two getty processes run the risk of starting to endlessly chat with each other. That is, one will send a prompt, which the other one interprets as a username, which causes it to produce its own prompt, which gets interpreted as a username on the original end, and so on forever. The correct way to handle this is to use two serial ports, one in each direction. The console serial port on system 1 is connected to the extra serial port on system 2, and the console serial port on system 2 is connected to the extra serial port on system 1. Since the "extra" serial ports on both systems never run getty (only the console serial ports do), there is never a getty to disable, and the port can be directly used by screen or cu etc... For the "extra" serial ports, you can use USB serial port adaptors if the systems do not provide enough built-in serial ports. Because those ports are only accessed after the system is fully booted (unlike the console serial port) it's OK for them to be on a USB bus which will not be initialized until partway through the boot sequence.
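For the SysV case, the line to comment out in /etc/inittab typically looks like the following (the id field and getty arguments are examples); after saving, run init q so init stops respawning it:

```
# T0:23:respawn:/sbin/getty -L ttyAMA0 115200 vt100
```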
How to free a serial port owned by Getty
1,493,192,554,000
I have a service that execs a command when a user connects to it through a socket, and redirects everything it receives to the executed program. It works ok with shells like bash, giving the user a remote shell. Instead of forking bash or sh, I'd like to run something that asks for user and password, like /bin/login. Is that the correct command to run? Isn't there anything that a non-root service could use to do the same? I think getty calls /bin/login, but can I just run it as a user? I guess I could install telnetd and redirect to telnet localhost but I'd rather not run a telnet server.
How to ask for a password?

print $prompt
read $response

If you want to know how to authenticate your users, ideally you'd write your program as being pam-aware, following one of the pam developer guides all over the Interwebs. One example is http://www.linux-pam.org/Linux-PAM-html/Linux-PAM_ADG.html. You may also have a helper program on the system which you can use for this purpose, such as unix_chkpwd, which you could use after spawning a child process and switching to the target user. But the pam application developer interface is pretty easy, so doing the auth yourself is probably well within reason.
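If you only need the prompting part (not the authentication), that's plain terminal I/O; a minimal sketch with no real credential checking — the stty calls simply hide the password when stdin is a real terminal, and are harmless no-ops otherwise:

```sh
# Prompt for a username and password; echo is disabled for the password
# when stdin is a tty. The username is printed back for the caller.
ask_credentials() {
    printf 'login: '
    IFS= read -r user
    printf 'Password: '
    stty -echo 2>/dev/null || true   # fails harmlessly when stdin is a pipe
    IFS= read -r pass
    stty echo 2>/dev/null || true
    printf '\n'
    printf '%s\n' "$user"
}
```

Real authentication should still go through PAM as described in the answer.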
present users a login prompt? /bin/login? getty?
1,493,192,554,000
What is this command really doing (step by step)? openvt -c 40 /bin/agetty tty40 linux I tried this command instead : openvt -c 41 /bin/agetty tty40 linux and agetty was started on tty40 (not tty41). Why is that? It seems the -c 41 option is not necessary. Removing it yields the same result.
openvt -c 40 /bin/agetty tty40 linux runs openvt, directing it to use VT 40; so it opens that VT, and runs agetty on it. But specifying tty40 as an argument to agetty tells the latter to use VT 40 (regardless of where it was started), so it opens VT 40 itself and runs there. Thus, openvt -c 41 /bin/agetty tty40 linux opens VT 41, but then agetty opens VT 40 itself. You should just use one program to open the VT. You can either run agetty directly on whichever VT you want, or tell it to run wherever it’s been started: agetty tty40 linux openvt -c 40 agetty - linux If you remove the -c option, openvt will pick the first available VT.
What is this openvt command doing?
1,493,192,554,000
I'm trying to colorize the console and I'm having success with the following in root's .bash_profile: echo -en "\e]P7000000" echo -en "\e]P0F0F0F0" clear The problem is that this is obviously only going to be kicked off the first time the root user logs in. Is there a way to get mingetty to automatically set the proper console colors? Proposed solutions should work with RHEL6 and RHEL7 (i.e systemd) since that's what the majority of my systems are. Note that this is about colorizing the regular console and not a terminal emulator or SSH (former isn't relevant and I'm alright with the latter being considered a user config issue).
You can put literal escape characters into /etc/issue as suggested in a comment (Red Hat does this, sometimes). In a quick test, that works, but only colors the text. The background is uncolored. In vi, the text might look like ^[]P7000000^[]P0F0F0F0\S Kernel \r on an \m If you clear the screen, then the colors fill the window, e.g., ^[]P7000000^[]P0F0F0F0^[[2J\S Kernel \r on an \m where ^[ is the ASCII escape character, inserted in vi using Ctrl+V followed by the Escape key. Modifying /etc/issue is relatively safe as long as you can ssh into the machine to repair it when you make a mistake. mingetty prints that file before the login; ssh doesn't go there. However, you might be tempted to also modify /etc/motd in the same way (after all, that is printed too). But that introduces a problem. In your script, once you substitute \e with a literal ASCII escape character echo -en "\e]P7000000" echo -en "\e]P0F0F0F0" you'd get escape]P7000000 escape]P0F0F0F0 The standard for escape sequences (ECMA-48) says that escape] begins an operating system command and that will end with a string terminator. There is none in Linux console's implementation. You can get interesting (baffling) terminal lockups from connecting with ssh when attempting to print /etc/motd with those improperly-terminated escape sequences using xterm. There is a workaround (for xterm, at least) in the brokenLinuxOSC resource. Further reading: mingetty - minimal getty for consoles issue - prelogin message and identification file motd - message of the day console_codes - Linux console escape and control sequences ECMA-48: Control Functions for Coded Character Sets
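One way to avoid typing literal escape bytes in an editor is to generate the file with printf; a sketch using the palette values from the question (write the output to /etc/issue as root):

```sh
# Emit the issue text with real ESC bytes: two palette redefinitions,
# a clear-screen, then the usual \S/\r/\m getty escapes (left literal).
make_issue() {
    printf '\033]P7000000\033]P0F0F0F0\033[2J\\S Kernel \\r on an \\m\n'
}

# e.g. (as root):  make_issue > /etc/issue
```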
Is it possible to send color code escape sequences before login?
1,493,192,554,000
I have been experimenting with an RS-232 null modem cable and am curious to know how one would allow FreeBSD to use a serial port as a terminal, like in the days of the PDP-11 where all users had dumb terminals connected to the computer via serial connections. I wish to do the same with a headless FreeBSD machine with a serial cable running to my main PC which is using PuTTY to communicate over the serial port. Before you ask why I don't use SSH for the same purpose, I prefer this type of connection because if the network were to go down I would still be able to log into the server and see what exactly is happening, whereas if the same situation occurred with SSH I would be mostly out of luck, if that makes any sense at all. I have seen other similar questions with answers pointing to screen and minicom but these seem to be for fulfilling the role of PuTTY on the BSD side, which is not what I want here. What I want is a serial port configured at a specific baud rate with getty running on it, etc. as if it were an actual terminal. To answer the question of what version of init I am running, I am using FreeBSD 10.3, and I haven't changed anything at the system level so it's running the default BSD-style init that uses rc scripts.
Take a look at the /etc/ttys file. It's kind of like inittab on Linux. There's one line for each... terminal line. The "ttyuX" entries are for serial ports (different drivers have different device names; consult the man pages, e.g. man uart for physical serial ports). What you need to do to enable them is to change the "off" (or "onifconsole") to "on", and notify init by running "init q" as root. Remember that unlike protocols such as SSH or TELNET, serial ports don't have a protocol to negotiate terminal type and size. So, at minimum, run resizewin(1) (http://man.freebsd.org/resizewin) from your shell initialization script. Otherwise the default terminal size (as visible in "stty -a") will be zero, and this will result in things like shell line editing, less(1) or vi(1) output being badly messed up.
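Concretely, enabling a login on the first serial port means flipping its /etc/ttys entry to on (the device name and getty class here are examples; see man uart for your hardware's device name), then running init q as root:

```
ttyu0	"/usr/libexec/getty std.9600"	vt100	on	secure
```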
How do I use a serial terminal with a FreeBSD server?
1,493,192,554,000
Qingy is a getty replacement. I'd like to use it for a tty terminal on Linux Mint 15 (in hopes of getting tmux to get proper 256 colors in tty which fails with fbterm) which means replacing getty. I'm not sure how to do so, as it says I need to edit /etc/inittab, which doesn't exist in current versions of Ubuntu.
/etc/init/tty1.conf (and others) has a line that says: exec /sbin/getty -8 38400 tty1 Just change the binary to qingy. In some versions, these files may be under /etc/event.d. You can find them with a lookup such as sudo locate tty1.conf
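So the edited stanza would look something like this (the qingy path is a guess — check where your package installed the binary):

```
# /etc/init/tty1.conf (upstart)
exec /sbin/qingy tty1
```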
How can I replace a tty using getty with qingy on ubuntu 12.04 or later?
1,493,192,554,000
I'm running a virtual Linux machine (Debian 12) on QEMU with the -device virtconsole argument. That argument adds /dev/hvcX device nodes to the VM. QEMU can connect that device to a unix socket on the host. If I pass the "console=hvc0" parameter to the VM's kernel I get a console on the host socket and can launch a tty on it. However, it works only if I configure another console kernel parameter alongside it, e.g. console=ttyAMA0 console=hvc0. The problem is that the VM doesn't boot with a single console=hvc0 kernel parameter. Am I missing something? The whole QEMU command: qemu-system-aarch64 \ -M virt,accel=hvf,highmem=off \ -cpu host \ -smp 2 \ -m 2048 \ -display none \ -daemonize \ -monitor unix:/tmp/qemu-monitor-socket,server=on,wait=off \ -device virtio-serial-pci \ -chardev socket,path=/tmp/qemu-guest-tty,server=on,wait=off,id=guest-tty \ -device virtconsole,chardev=guest-tty \ -device virtio-net-pci,netdev=mynet0,mac=52:54:00:08:06:8b \ -netdev user,id=mynet0,hostfwd=tcp::22221-:22 \ -device virtio-blk-pci,drive=hda \ -drive file=~/qemu/debian/1-debian-12-genericcloud-arm64.qcow2,format=qcow2,discard=unmap,id=hda,if=none \ -cdrom ~/qemu/cloud-init/cloud-init.iso \ -kernel ~/qemu/debian/vmlinuz-6.1.0-9-cloud-arm64 \ -initrd ~/qemu/debian/initrd.img-6.1.0-9-cloud-arm64 \ -append 'root=/dev/vda1 ds=nocloud;h=debian1 console=hvc0' UPD: The problem exists only in Debian. I've tested Ubuntu, Fedora and openSUSE - they boot normally with hvc0 configured as the only console. All distributions I've tested were the latest "cloud" ARM64 images. I've tried debian-12-genericcloud and debian-11-genericcloud images with the same result.
Debian(12) builds its kernels with CONFIG_VIRTIO_CONSOLE set to 'm' as opposed to 'y'. This means that your initrd needs to contain the virtio_console module in order for hvc0 to be available early enough in the booting process. You can verify if your initrd has the required module by running this command: $ lsinitramfs `readlink -f /boot/initrd.img` | grep virtio_console usr/lib/modules/6.1.0-11-arm64/kernel/drivers/char/virtio_console.ko If you don't see the module, edit your /etc/initramfs-tools/modules file and add a line that says "virtio_console". Then run update-initramfs -k all -u as root. Your initrd should now contain the virtio_console module, and when you reboot, systemd will automatically start [email protected].
Debian VM doesn’t boot on QEMU with "console=hvc0" kernel parameter
1,493,192,554,000
Normally systemd will spawn a getty on the virtual terminals just before it starts graphical mode. I have always thought that is the wrong time to spawn a getty: The time when you need the getty is when the booting fails, and it needs a helping hand to get back. How do I change the order, so getty is spawned as soon as root can login?
Check out man systemd-debug-generator. It is talking about boot options, but says you can also enable the feature permanently, as for any service: If the systemd.debug-shell option is specified, the debug shell service "debug-shell.service" is pulled into the boot transaction. It will spawn a debug shell on tty9 during early system startup. Note that the shell may also be turned on persistently by enabling it with systemctl(1)'s enable command.
systemd: spawn gettys ASAP
1,493,192,554,000
On a Debian Jessie system with systemd, how can I configure the terminals so that a message like Press enter to activate this console is displayed and the login prompt does not appear before hitting enter? With inittab this could be done by configuring askfirst, but how to do it with systemd? If possible I'd prefer to adjust appropriate config files rather than messing with existing systemd unit files directly - just like there is logind.conf but unfortunately that config file won't help in this case AFAIK.
With /etc/inittab this could be done by configuring askfirst … Actually, it could not. That's a BusyBox init mechanism that doesn't exist in the Linux System 5 init clone, one of several ways in which their /etc/inittab configuration files are not the same things. The way to do similar things on a systemd Linux operating system depends from what one is actually doing. One doesn't necessarily employ it solely for interactive terminal log-on, although you clearly are here. One common use of askfirst is simply for not having the getty+login system running for unused virtual terminals. systemd doesn't need a non-default setting for this. With systemd, the logind service as packaged already arranges to only start autovt@N.service services on demand, when virtual terminals are switched to the foreground. Terminal login isn't run on virtual terminals that haven't been switched to (and that are not the first or the "reserved" virtual terminals). The slightly different semantics, of not starting the getty+login system until one has switched to the virtual terminal and pressed enter, are slightly harder to achieve, as they involve either switching on a getty option or interposing a program that prints out a message and waits for a line of input before chaining to getty. Only a few getty programs have such options, such as Peter Orbaek's agetty which has --wait-cr. Most (like Felix von Leitner's fgetty and Florian La Roche's mingetty) have not. The remainder (such as Gert Doering's mgetty) are ones that expect modems and all of their accompaniments — which of course virtual terminals do not have and which make adapting them to virtual terminal use somewhat tricky. The chain-loading equivalent to --wait-cr on a virtual terminal, a simple program that prints a message, then reads a line from the terminal (in canonical mode), and then chain loads, is a fairly simple program. 
Employing such options, employing different getty programs, or interposing utility chain-loading programs "before" getty, all involve either writing one or more unit file override files under /etc/systemd/system with systemctl edit (changing the ExecStart setting) or simply pointing [email protected] at a local unit file of one's own devising instead of at [email protected]. Further reading https://unix.stackexchange.com/a/194218/5132 Jonathan de Boyne Pollard (2015). login-prompt. nosh Guide. JdeBP's Softwares. https://askubuntu.com/a/659268/43344 https://unix.stackexchange.com/a/233855/5132 Werner Fink and Karel Zak. agetty. Ubuntu 15.04 manual pages.
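The interposed chain-loading program described here can be a few lines of shell; a sketch (the function form is just for illustration — a real wrapper invoked from a getty service's ExecStart would end with exec of the actual getty, e.g. exec /sbin/agetty --noclear "$@"):

```sh
# Print the classic prompt, wait for one line on stdin (the console),
# then run whatever command was passed in.
askfirst() {
    printf 'Press enter to activate this console\n'
    IFS= read -r _ignored
    "$@"    # a real wrapper script would use: exec "$@"
}
```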
"askfirst" getty with systemd ("press enter to activate this console")
1,493,192,554,000
Is it possible to start the X server on a virtual console that is running getty already? I like the responsiveness of getty - scrolling through the man pages or scrolling in vi is much quicker than xterm (gnome terminal). But I also like being able to alt+tab between web browser and xterm. It would be great if I could alt+tab between getty and my chromium web browser. I'm running Debian wheezy with gnome. P.S. I know I could switch between X on tty7 and getty on tty6 say, but if I do it this way then I cannot use alt+tab.
No. Once you start X, the VT stops being handled as a "text device" and becomes a "graphical" one. In the olden days the distinction was clear: either the VT was relying on BIOS (at least to some extent), knew just a few text modes and was blazingly fast, or it was switched to a graphical mode, had more colours and/or larger resolution and was slower. These days the difference is not that clear (at least on Linux, I can't tell about other UNIX variants), since the textual VT actually uses graphical mode, with the "translation" being done in the kernel. Nevertheless, you either let a text-based application open the VT or you leave it to X (or any other graphical front end, like for example an implementation of the Wayland protocol). As for the speed issues, choose your terminal right. Gnome terminal is likely to be slower than xterm, which itself is way slower than for example urxvt, unless you coerce it to use the same dirty tricks urxvt does, by setting the appropriate X resource: XTerm*fastScroll: true You very likely want to read Can a terminal emulator be as fast as TTY 1-6?, set up your terminal properly and use the graphical mode. As a side note, some time ago (around 2007), I had a problem with the nVidia framebuffer kernel driver, which was really slow on large resolutions like 1600x1200 - reading man pages in XTerm was much faster.
run x and getty on the same virtual console?
1,493,192,554,000
Init typically starts multiple instances of "getty", which wait for console logins and spawn the user's shell process. Upon shutdown, init controls the sequence and processes for shutdown. The init process is never shut down. It is a user process and not a kernel system process, although it does run as root. If the init process is a user process and not a kernel process, how can I modify its behavior or see its logs remotely?
To clarify, you seem to be running systemd on Ubuntu rather than the (current) default of upstart. systemd, by default, sets up only one getty, tty1. Other gettys are set up "on the fly". There is a default setting of a maximum of 6 ttys. If you want to increase the number of gettys available to autostart, then increase the value of NAutoVTs in /etc/systemd/logind.conf. If you want to prestart gettys, continue to do what you are doing (i.e. enabling a getty service and starting it) for each getty you want. Not sure why you want to preactivate though. More details available here: https://wiki.archlinux.org/index.php/Systemd_FAQ#How_do_I_change_the_default_number_of_gettys.3F
Getty instances in init process
1,493,192,554,000
I found that, in /etc/inittab, this modification (-a username) for the user u disables the login/password check for all ttys: 1:2345:respawn:/sbin/getty -a u 38400 tty1 2:23:respawn:/sbin/getty -a u 38400 tty2 3:23:respawn:/sbin/getty -a u 38400 tty3 4:23:respawn:/sbin/getty -a u 38400 tty4 5:23:respawn:/sbin/getty -a u 38400 tty5 6:23:respawn:/sbin/getty -a u 38400 tty6 That would be great for me, not having to type the password all the time! The question is: apart from the case when the computer gets stolen and the thief could use the system (which I would prefer, come to think of it), what security implications does this configuration have? Possibly relevant: the second column (runlevels).
I use autologin, not just disabling the password ;-) If your disk is not encrypted, a thief could just boot from external media and steal your data. So autologin isn't a problem for thieves, but for people near you (who can access your computer when you're not there). Just don't let people around you know that they could log in as root without a password... EDIT In this case, you run autologin on a local tty; remote logins normally use a pts (pseudo-tty), so they don't interfere with each other
Security drawbacks of disabling tty password check
1,493,192,554,000
On my embedded system, I use Linux kernel 4.19.102 and systemd 240. Everything is generated using buildroot 2019.02.9. I use the serial port of my device to output the console. bootargs = "console=ttyS0,115200"; With the previous version I used (buildroot 2018.05, kernel 4.16.y and systemd 237), everything was fine on the console side. I had the following file : /etc/systemd/system/getty.target.wants/serial-getty@ttyS0.service which was launching /sbin/getty -L ttyS0 115200 vt100 Now, the console prints the usual starting messages and then prints the login prompt twice : Welcome to MyDevice MyDevice login: Welcome to MyDevice MyDevice login: And when I try to log in with a long password beginning with 'r', I get something like this : Welcome to MyDevice MyDevice login: Welcome to MyDevice MyDevice login: root Password: r Login incorrect MyDevice login: Fortunately, I can still log in with SSH. I have seen that the "getty" service is started twice in this version : # ps | grep getty 988 root /sbin/getty -L ttyS0 115200 vt100 1002 root /sbin/getty -L console 115200 vt100 1117 root grep getty The /etc file is now : /etc/systemd/system/getty.target.wants/console-getty.service which was launching /sbin/getty -L console 115200 vt100 But the /sbin/getty -L ttyS0 115200 vt100 is still started. When I kill the 'console' service (to be in the same state as the previous version), I can log in and the console is finally fine. How can I configure buildroot or systemd to prevent the console service from starting?
The problem is that BR2_TARGET_GENERIC_GETTY_PORT was set to 'console' in buildroot 2018.05. It needs to be changed to 'ttyS0' in buildroot 2019.02.9.
How to prevent console-getty.service to start?
1,493,192,554,000
A framebuffer is a device file which allows for a simplified interface to the screen. For example, running the code below on a Raspberry Pi with an HDMI display connected: cat /dev/urandom > /dev/fb1 There are commands (fbi, fim) which allow for injecting full images into the framebuffer. There are multiple resources on the internet (ref1, ref2, ref3) trying, more or less successfully, to explain how to write a systemd service which will result in an image on the screen. A common thread in those resources is the mention of a tty together with the framebuffer (i.e. both fbi and fim have options to pass them a tty). My assumption was that a tty is a separate concept from a framebuffer. The tty uses the framebuffer to output content to a user, but the framebuffer isn't in any way tied to a tty. Is there a hidden relationship between a tty and a framebuffer which could explain why commands to print images to a framebuffer seem to depend on a tty?
The “hidden relationship” is related to the fact that Linux supports multiple virtual terminals, which means that the framebuffer can be used by a number of different terminals. Programs which manipulate the framebuffer directly need to be aware of which terminal currently owns the framebuffer: When such a program starts, it needs to store the current terminal configuration, then tell the kernel that it wants to control the display directly (it switches to “graphics mode” using the KDSETMODE ioctl) and set the framebuffer up as necessary (e.g. in fbi, configure panning). It also needs to tell the kernel that it wants to be told about virtual terminal switches (when the user presses Ctrl+Alt+Fn). If the user switches terminals, the kernel will then tell the running program about it; the program needs to restore the terminal settings and relinquish control over the terminal (VT_RELDISP) before the switch can actually proceed. If the user switches back to the terminal running the framebuffer-based program, the kernel again tells the program about it, and the program sets up the terminal and framebuffer as necessary and restores its display. This is described in detail in How VT switching works.
Relationship between framebuffer and a tty
1,493,192,554,000
I have a clean Debian Stretch installation. It used to be the case that after booting I would end up on tty1 with a login prompt, and after logging in, X is started. I wanted to automate the logging in (because I'm the only user and my disk is encrypted already) so I followed the exact instructions given here: In /etc/systemd/logind.conf, changed #NAutoVTs=6 to NAutoVTs=1 Used systemctl edit getty@tty1 and added (where username is my username): [Service] ExecStart= ExecStart=-/sbin/agetty --autologin username --noclear %I 38400 linux Enabled the service: systemctl enable getty@tty1.service After rebooting, the login prompt was gone from tty1 and nothing else happened. It still showed the boot log. On tty2-5, only a cursor appeared, no login prompt as before. Luckily, tty6 was still available to recover the system. So I did: Disable the service: systemctl disable getty@tty1.service Undid the change to /etc/systemd/logind.conf Now, I can use all ttys except tty1 to log in as normally, but somehow tty1 remains damaged. How can I repair this as well?
You should enable getty@tty1.service again: systemctl enable getty@tty1.service
tty1 missing login prompt
1,493,192,554,000
My Ubuntu 20.04 system has a serial port over which I would like to provide console access. I can confirm that I can communicate over the serial port with sudo picocom -b 115200 /dev/ttyS5 I start the Getty instance with sudo systemctl start serial-getty@ttyS5 which starts the command /sbin/agetty -o '-p -- \u' --keep-baud 115200,38400,9600 ttyS5 vt220 However, no login prompt appears on the remote system.
I used strace to monitor agetty's system calls, and saw that it was writing to and reading from the serial device, even though nothing appeared on the remote side. Whenever I typed on the remote side, agetty saw only the byte 0xFF, which suggested a bad baud rate. I added a udev rule to set the baud rate on the serial device: ... RUN+="/bin/stty -F /dev/%k 115200" Since the serial-getty@.service unit passes the --keep-baud option, agetty will use the previously-configured baud rate.
No login prompt from Getty over serial console
1,493,192,554,000
I have a Stretch system on which I would like to replace agetty with ngetty (for various reasons: I have no use for serial lines, and I like the way ngetty can be configured, for example). I know how to do that in runit or sysvinit, but I can't find where the info is with systemd. I can find nothing which seems related in /etc (the inittab file is simply not used for the related lines) but there seem to be related files in /lib/systemd/system/. I must admit I do not feel comfortable hacking things in this folder, so what would be the cleanest way to do that in Debian? Thanks.
Seems like you may be on a virtual environment where getty is useless. You may switch to mingetty (the default at Amazon AWS now), which uses minimal resources while still letting you look at the "Console Logs" (via the Amazon VM GUI ..eeeek). To switch from agetty to mgetty or mingetty (you just need one): # apt install mgetty # apt install mingetty To tell Debian to start using your new getty, update your /sbin/getty symbolic link to (pick one): # cd /sbin # rm getty # ln -s mgetty getty # ln -s mingetty getty BONUS: If in a cloud based environment, you really don't care about multiple consoles, you may even reduce the number of consoles to just 1 (for viewing console logs on the Amazon CLI). To do this: Edit /etc/default/console-setup and replace: ACTIVE_CONSOLES=/dev/tty[1-6] with... ACTIVE_CONSOLES=/dev/tty[1-1] Cheers...
how to change the getty binary in Debian Stretch?
1,493,192,554,000
The Unix-Haters Handbook says: /etc/getty, which asks for your username, and /bin/login, which asks for your password, are no different from any other program. They are just programs. They happen to be programs that ask you for highly confidential and sensitive information to verify that you are who you claim to be, but you have no way of verifying them. Is this true of a modern Linux system?
By default there is no particular integrity protection for system program and library files beyond file permissions (and possibly read-only mounting); there is no such thing as specially protected system files. In this sense, the answer is yes, this is true. You could follow a host-based IDS approach using programs like tripwire or aide, where you create suitable checksums for each important file, store them in a safe place, and compare them regularly against the actual re-calculated checksums to notice any changes. Clearly, the database of checksums needs to be updated upon the installation of every single update or patch. Most package managers maintain such a list of checksums and allow to check the integrity of the installed files. This approach, if followed in meaningful ways, is a bit involved and therefore rarely seen. A different approach is to harden the system against integrity violations by using add-ons for role-based access control (RBAC) and mandatory access control (MAC) like SELinux or Grsecurity, where, if applied correctly, even root could not modify system files unless it enters a designated role. The key here is to design a policy that inhibits undesired actions while not breaking legitimate application activity. This is far from trivial and therefore rarely seen, either. However, before any of this is considered in depth, one needs to define the attack model and to specify the scenario: 20 years ago Unix machines were true multi-user systems where potentially not trustworthy users had access to the system. These days are gone; today you have servers with functional users such as "webserver" or "database", or you have desktop systems on personal computers with one user.
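The checksum-baseline idea behind tools like tripwire and aide can be sketched with plain coreutils. This is a minimal illustration only (the file names are throwaway temp files, not real system binaries); real tools add signed databases, many hash algorithms, and policy handling:

```shell
# Record a known-good checksum of a file, then detect later modification.
f=$(mktemp)
db=$(mktemp)
printf 'trusted program bytes\n' > "$f"
sha256sum "$f" > "$db"                 # the "safe place" baseline
sha256sum -c --quiet "$db" && echo "intact"
printf 'tampered bytes\n' > "$f"       # simulate an attacker editing the file
sha256sum -c --quiet "$db" || echo "integrity violation detected"
rm -f "$f" "$db"
```

Note that a real deployment must keep the baseline offline or read-only: an attacker who can replace getty can usually also rewrite a checksum database stored on the same disk.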
How can one verify the authenticity of `getty` running on linux?
1,463,417,513,000
Problem I want to see the dependencies for one or more targets of a makefile. So I am looking for a program that can parse makefiles and then represent the dependencies in some tree-like format (indentation, ascii-art, ...) or as a graph (dot, ...). Similar There are programs that do this for other situations: pactree or debtree can display the dependencies for software packages in a tree-like ascii format or as a dot graph, gcc -M source_file.c displays the dependencies of a C source file as a make rule, pstree displays an ascii representation of the process tree. Progress Searching the web I found little help. That led me to try make --always-make --silent --dry-run some_target | \ grep --extended-regexp 'Considering target file|Trying rule prerequisite' but it looks like I have to hack some more parsing code in perl or python in order to represent this as a nice tree/graph. And I do not yet know if I will really get the full and correct graph this way. Requirements It would be nice to limit the graph in some ways (no builtin rules, only a given target, only some depth) but for the most part I am just looking for a tool that will give me the dependencies in some "reasonable", human-viewable format (like the programs under "Similar" do). Questions Are there any programs that can do this? Will I get the full and correct information from make -dnq ...? Is there a better way to get this info? Do scripts/attempts for parsing this info already exist?
Try makefile2graph; from the same author there is a similar tool, MakeGraphDependencies, written in Java instead of C. make -Bnd | make2graph | dot -Tsvg -o out.svg Then use some vector graphics editor to highlight the connections you need.
How to display dependencies given in a makefile as a tree?
1,463,417,513,000
I was using a Makefile from the book "Advanced Linux Programming (2001)" [code]. It was strange for me to see that GNU make compiles the code correctly, without even specifying a compiler in the Makefile. It's like baking without any recipe! This is a minimal version of the code: test.c int main(){} Makefile all: test and make really works! This is the command it executes: cc test.c -o test I couldn't find anything useful in the documentation. How is this possible? P.S. One additional note: Even the language is not specified; because test.c is available, GNU make uses cc. If there exists test.cpp or test.cc (when there is no test.c), it uses g++ (and not c++).
Make does this using its built-in rules. These tell it in particular how to compile C code and how to link single-object programs. You actually don't even need a Makefile: make test would work without one. To see the hidden rules that make all of this possible, use the -p option with no Makefile: make -p -f /dev/null The -r option disables these built-in rules. As pointed out by alephzero, Make has had built-in rules for a very long time (if not always); Stuart Feldman's first version in Unix V7 defines them in files.c, and his 1979 paper mentions them. They're also part of the POSIX specification. (This doesn't mean that all implementations of Make support them — the old Borland Make for DOS doesn't, at least up to version 3.0.)
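From memory of GNU make's defaults (verify against make -p on your own system), the built-in rule that produces exactly the observed cc test.c -o test is the pattern rule that builds an executable straight from a single .c file, roughly:

```make
# Built-in variables (GNU make defaults, abridged):
CC = cc
LINK.c = $(CC) $(CFLAGS) $(CPPFLAGS) $(LDFLAGS) $(TARGET_ARCH)

# Built-in pattern rule: an executable from one .c file of the same name.
%: %.c
	$(LINK.c) $^ $(LOADLIBES) $(LDLIBS) -o $@
```

With all the flag variables empty, $(LINK.c) $^ ... -o $@ expands to cc test.c -o test, which is what the question observed. The equivalent rule for C++ uses $(CXX) (default g++), which explains the test.cpp/test.cc behaviour.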
How does this Makefile makes C program without even specifying a compiler?
1,463,417,513,000
I am trying to instruct GNU Make 3.81 to not stop if a command fails (so I prefix the command with -) but I also want to check the exit status on the next command and print a more informative message. However my Makefile below fails: $ cat Makefile all: -/bin/false ([ $$? -eq 0 ] && echo "success!") || echo "failure!" $ $ make /bin/false make: [all] Error 1 (ignored) ([ $? -eq 0 ] && echo "success!") || echo "failure!" success! Why does the Makefile above echo "success!" instead of "failure!" ? update: Following and expanding on the accepted answer, below is how it should be written: failure: @-/bin/false && ([ $$? -eq 0 ] && echo "success!") || echo "failure!" success: @-/bin/true && ([ $$? -eq 0 ] && echo "success!") || echo "failure!"
Each update command in a Makefile rule is executed in a separate shell. So $? does not contain the exit status of the previous failed command, it contains whatever the default value is for $? in a new shell. That's why your [ $? -eq 0 ] test always succeeds.
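This is easy to demonstrate outside make with a minimal shell sketch, using sh -c to stand in for the fresh shell make spawns per recipe line:

```shell
# A brand-new shell starts with $? = 0, regardless of earlier failures:
sh -c 'echo "fresh shell: \$? = $?"'         # prints 0, the default
# Put the failing command and the test in the SAME shell and it works:
sh -c 'false; echo "same shell: \$? = $?"'   # prints 1, the status of false
```

This is why joining the commands with && (or ;) on one recipe line, as in the updated Makefile in the question, behaves as intended: both commands then run in the same shell.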
Don't stop make'ing if a command fails, but check exit status
1,463,417,513,000
I am trying to compile a program written in Fortran using make (I have a Makefile and, while in the directory containing the Makefile, I type the command $ make target, where "target" is a system-specific target specification present in my Makefile). As I experiment with various revisions of my target specification, I often get a variety of error messages when attempting to call make. To give a few examples: make[1]: Entering directory /bin/sh: line 0: test: too many arguments ./dpp angfrc.f > angfrc.tmp.f /bin/sh: ./dpp: Permission denied make[1]: *** [angfrc.o] Error 126 make[1]: Leaving directory make: *** [cmu60] Error 2 and make[1]: Entering directory /bin/sh: line 0: test: too many arguments ./dpp -DSTRESS -DMPI -P -D'pointer=integer'-I/opt/mpich_intel/include angfrc.f > angfrc.tmp.f /bin/sh: ./dpp: Permission denied make[1]: *** [angfrc.o] Error 126 make[1]: Leaving directory make: *** [mpich-c2] Error 2 and make[1]: Entering directory /bin/sh: line 0: test: too many arguments ./dpp -DSTRESS -DMPI -P -D'pointer=integer' -I/opt/mpich_intel/include angfrc.f > angfrc.tmp.f /bin/sh: ./dpp: Permission denied make[1]: *** [angfrc.o] Error 126 make[1]: Leaving directory make: *** [mpi-intel] Error 2 Do you know how I can find a list of what error codes such as "Error 126" and "Error 2" mean? I found this thread on another website, but I am not sure what the reply means. Does it mean that there is no system-independent meaning of the make error codes? Can you please help me? Thank you.
The error codes aren't from make: make is reporting the return status of the command that failed. You need to look at the documentation of each command to know what each status value means. Most commands don't bother with distinctions other than 0 = success, anything else = failure. In each of your examples, ./dpp cannot be executed. When this happens, the shell that tried to invoke it exits with status code 126 (this is standard behavior). The instance of make that was running that shell detects a failed command (the shell) and exits, showing you Error 126. That instance of make is itself a command executed by a parent instance of make, and the make utility returns 2 on error, so the parent make reports Error 2. The failure of your build is likely to stem from test: too many arguments. This could be a syntax error in the makefile, or it could be due to relying on bash-specific features when you have a /bin/sh that isn't bash. Try running make SHELL=/bin/bash target or make SHELL=/bin/ksh target; if that doesn't work, you need to fix your makefile.
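The 126 here is a shell convention, not a make one: 126 means the command was found but could not be executed, while 127 means it was not found at all. A quick sketch, reproducing the "./dpp: Permission denied" case with a throwaway temp file:

```shell
script=$(mktemp)
printf '#!/bin/sh\necho hello\n' > "$script"
chmod a-x "$script"                 # same situation as the non-executable ./dpp
sh -c "$script" 2>/dev/null || echo "exit status: $?"        # prints 126
sh -c no-such-command-xyz 2>/dev/null || echo "exit status: $?"  # prints 127
rm -f "$script"
```

In the question's case the fix is therefore not about make at all: chmod +x dpp makes the preprocessor executable again.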
Where can I find a list of 'make' error codes?
1,463,417,513,000
I am playing around with makefiles and I came across %.o and %.c. From what I understood, they specify all .c or .o files. But why does this work: %.o: %.c $(CC) -c $^ -o $@ while this doesn't: SOURCE := $(wildcard *.c) $(SOURCE:.c=.o): SOURCE $(CC) -c $^ -o $@ Both expressions specify all the files, so what does the %.o: symbol in a makefile do?
Both expressions specify all the files. Nope, the first rule tells make how to obtain an .o file given the corresponding .c file. Note the singular: a single file. The second rule (claims to) tells make how to obtain a bunch of .o files, given another bunch of corresponding .c files. Note the plural: all .c files resulting from the *.c globbing. On a side note, %.o: %.c is a GNU extension. On another side note, you won't be learning how to use make on StackOverflow. You should consider reading a book instead.
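If the intent really was "apply this recipe to every .c file in the directory", GNU make's static pattern rule combines an explicit target list with the per-file % matching. A sketch, assuming GNU make:

```make
SOURCE := $(wildcard *.c)
OBJECTS := $(SOURCE:.c=.o)

# For each name in OBJECTS, match it against %.o and require the
# corresponding %.c -- one .o built from one .c, not all at once.
$(OBJECTS): %.o: %.c
	$(CC) -c $< -o $@

all: $(OBJECTS)
```

Note the recipe uses $< (the single matched prerequisite) rather than $^, which in a multi-prerequisite rule would expand to all prerequisites.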
What does % symbol in Makefile mean
1,463,417,513,000
I have a Makefile with a variable that needs to have a default value when the variable is unset, or set but null. How can I achieve this? I need this as I invoke make inside a shell script, and the value required by the makefile can be passed from the shell as $1; to pass this to the makefile I have to set it inside the bash script. Ideas: (Not elegant) Inside the bash script the variable could be checked: if it is set but null, it can be unset. Snippets Note: the following won't work if the variables are not defined in the terminal, as they are set in the bash script. Makefile dSourceP?=$(shell pwd) Source?=$(notdir $(wildcard $(dSourceP)/*.md)) Bash Script make all dSourceP="${1}" Source="${2}" Terminal bash ./MyScript.sh bash ./MyScript.sh /home/nikhil/MyDocs bash ./MyScript.sh /home/nikhil/MyDocs index.md
Since you’re using GNU make, you could use the ?= operator: FOO ?= bar but that doesn’t deal with pre-existing null (or rather, empty) values. The following deals with absent and empty values: ifndef FOO override FOO = bar endif test: echo "$(FOO)" .PHONY: test (Make sure line 6 starts with a real tab.) You’d call this using make FOO=blah to set a value. make or make FOO= will end up setting FOO to bar; you need override to override variables set on the command-line.
Makefile: Default Value of Variable that is set but has null value
1,463,417,513,000
I was writing a Makefile (on Ubuntu 20.04, if it's relevant) and noticed some interesting behavior with echo. Take this simple Makefile: test.txt: @echo -e 'hello\nworld' @echo -e 'hello\nworld' > test.txt When I run make, I would expect to see the same thing on stdout as in test.txt, but in fact I do not. I get this on stdout: hello world but this in test.txt: -e hello world Meanwhile, if I remove -e from both lines in the Makefile, I get this on stdout: hello\nworld and this in test.txt: hello world This had me wondering if echo detects the redirection and behaves differently, but it doesn't when I just run it manually in the shell with /bin/echo -e 'hello\nworld' > test.txt (which yields hello and world on separate lines, as I would normally expect). I even went so far as to confirm that the Makefile is using /bin/echo instead of a shell builtin by adding an @echo --version line. What is going on here?
UNIX compliant implementations of echo are required to output -e<space>hello<newline>world<newline> there. Those that don't are not compliant. Many aren't which means it's almost impossible to use echo portably, printf should be used instead. bash's echo, in some (most) builds of it, is only compliant when you enable both the posix and xpg_echo options. That might be the echo behaviour you were expecting. Same for the echo standalone utility that comes with GNU coreutils which is only compliant if it's invoked with $POSIXLY_CORRECT set in its environment (and is of a recent enough version). make normally runs sh to interpret the command lines on each action line. However, the GNU implementation of make, as an optimisation, can run commands directly if the code is simple enough and it thinks it doesn't need to invoke a shell to interpret it. That explains why echo --version gives you /bin/echo, but echo ... > file needs a shell to perform the redirection. You can use strace -fe execve make to see what make executes (or the truss/tusc... equivalent on your system if not Linux). Here, it seems that while your /bin/echo is not compliant, your sh has a echo builtin that is compliant. Here, use printf if you want to expand echo-style escape sequences: printf '%b\n' 'hello\nworld' In its format argument, printf understands C-style escape sequences (there's a difference with the echo-style ones for the \0xxx (echo) vs \xxx (C) octal sequences) printf 'hello\nworld\n' Here, you could also do: printf '%s\n' hello world Which is the common and portable way to output several arguments on separate lines. Another approach would be to add: SHELL = bash To your Makefile for make to invoke bash (assuming it's installed and found in $PATH) instead of sh to interpret the command lines. Or invoke make as make <target> SHELL=bash. 
That won't necessarily make it more portable as, while there are more and more systems where bash is installed by default these days, there are some (like on Solaris) where bash is built so that its echo builtin behaves the standard way by default. Setting SHELL to anything other than /bin/sh also has the side effect of disabling GNU make's optimisation mentioned above, so with make <target> SHELL=sh or make <target> SHELL=/bin//sh, you'd get consistent behaviours between the two invocations, while still not having to add a dependency on bash.
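The printf forms recommended above behave identically in any POSIX shell, which is the whole point; a quick sketch of all three:

```shell
# %b expands backslash escapes found in the DATA argument:
printf '%b\n' 'hello\nworld'
# Escapes written in the FORMAT string are always expanded:
printf 'hello\nworld\n'
# Most robust of all: one argument per output line, no escape parsing at all:
printf '%s\n' hello world
```

All three print hello and world on separate lines, with no -e flag and no dependence on which echo implementation the recipe happens to get.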
Why is echo -e behaving weird in a Makefile?
1,463,417,513,000
Currently I am working with Makefiles that have definitions like MYLIB=/.../mylib-1.2.34 The problem is that these are different for different developers, and it is a pain having to re-edit the file after every checkout. So I tried exporting a specific environment variable, and then doing MYLIBX:=$(MYLIB_ENV) MYLIBX?=MYLIB Trouble is that if MYLIB_ENV is not defined, it still creates an empty MYLIBX, so the ?= does not work. Is there a clean way to do this very basic thing? I am working with a "rich" set of make files developed over many years that do all sorts of things like make and call each other, so changing things deeply is not an option. SOLUTION Double shuffle, with MYLIB already defined: MYLIB_ENV?=$(MYLIB) MYLIB:=$(MYLIB_ENV)
The problem with MYLIB:=$(MYLIB_ENV) MYLIB?=/.../mylib-1.2.34 is that MYLIB is always defined in the first line, so the second never applies. The typical approach in this situation is just MYLIB?=/.../mylib-1.2.34 That way individual developers can specify their own value from the shell, either on the make command line make MYLIB=... or in their environment before running make export MYLIB=... make (so they can set it once, e.g. in their shell startup scripts, and forget about it). If you just run make without specifying a value for MYLIB, the default /.../mylib-1.2.34 is used. Another option is to determine where the Makefile is stored, but that doesn't work in all cases (in particular if the path to the Makefile contains spaces).
gnuMake, How to have an environment variable override
1,463,417,513,000
Closely related to How to display dependencies given in a makefile as a tree? But the answers given there are not satisfactory (i.e. they do not work). Is there a tool to visualize the Directed Acyclic Graphs (DAGs) coded up in standard Makefiles? E.g., a shell script for post-processing through Unix pipes would be an acceptable solution as well (maybe there is a pandoc filter to convert Makefiles to graphviz or LaTeX). I don't strictly need a tool that directly typesets this graphical visualisation; just a common file-format translation of the makefile to a graphviz file or something similar would suffice.
I believe makefile2graph does exactly what the original post author wanted. For the full installation and usage example: Installation (make sure graphviz is installed, e.g. with sudo apt install graphviz on Debian systems) cd /my/install/dir git clone https://github.com/lindenb/makefile2graph cd makefile2graph make Generate PNG (no need to use dedicated variable GDIR if you add makefile2graph's path to your PATH variable) cd /path/to/my/makefile GDIR=/my/install/dir/makefile2graph make -Bnd | ${GDIR}/make2graph | dot -Tpng -o my_graph.png
Visualizing dependencies coded up in makefiles as a graph
1,463,417,513,000
My Makefile: all: ...(other rules) clean clean: rm $(find . -type f -executable) When I delete the clean rule from the Makefile above, everything works as expected. With it, running make (and make clean) results in: rm rm: missing operand Try 'rm --help' for more information. make: *** [Makefile:46: clean] Error 1 What causes the problem here and how can I solve it?
There are several issues. Passing a $ sign in a Makefile to the shell You want to run the command rm $(find . -type f -executable) to let the shell do the command substitution. To do this you need to write clean: rm $$(find . -type f -executable) with the dollar doubled as Make itself uses $. Handling the case where there is nothing to "clean" If the output of find is empty, then after command substitution rm $(find . -type f -executable) becomes rm and the typical rm command complains that you haven't told it what to remove. One way to address this is to use xargs to process the output of find. It takes the output of find and if there is any it splits it into blocks and runs rm. clean: find . -type f -executable | xargs rm Handling arbitrary characters in filenames Unix filenames are made up of components separated by / characters. The components themselves can be any sequence of characters except / and NUL. In particular a component can include newline characters, spaces, tabs, *. For the command substitution case (rm $(find . -type f -executable)) the shell will process the output of find. So white-space characters will cause word splitting, * characters will cause filename "globbing" to take place etc. For reasonable implementations of xargs this is avoided. If filenames begin with - then rm might consider them to be options to the command. The simple way to avoid this is to add -- to the command to indicate "end of options". The major remaining issue is newlines. xargs splits the input on newlines, so if you have a file called abc\ndef then find will output abc def and xargs will invoke rm with two filenames, abc and def. To work around this, tell both find and xargs to use the one character (NUL) that can't appear in filenames as the delimiter rather than newline. clean: find . -type f -executable -print0 | xargs -0 rm -- Best solution, if your find supports it clean: find . -type f -executable -delete Here you have find directly removing the files, if any.
It is more efficient as it doesn't need to start additional processes. There are no shells or xargs processes to need to escape characters. No special characters to tell make to process. Handles the "no files to delete" case correctly.
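The find | xargs variant above can be exercised safely in a throwaway directory (a sketch assuming GNU find for -executable/-print0, and GNU xargs for -r):

```shell
dir=$(mktemp -d)
touch "$dir/keep.txt"
touch "$dir/odd name"             # a space: the classic word-splitting trap
chmod +x "$dir/odd name"
# NUL-delimited pipeline: spaces, newlines and leading dashes are all safe;
# GNU xargs's -r additionally skips running rm when the input is empty.
find "$dir" -type f -executable -print0 | xargs -0 -r rm --
ls "$dir"                         # only keep.txt is left
rm -r "$dir"
```

Contrast this with the naive command substitution, which would have split "odd name" into two words and asked rm to delete the nonexistent files odd and name.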
Makefile command substitution
1,463,417,513,000
When I have a make task where a specific target needs to be made before another while in parallel mode, this is simple when using SunPro Make (dmake). The following makefile: install: dir dir/file dir: mkdir dir dir/file: cp source dir/file could be made parallel-make safe by changing the first line to: install: dir .WAIT dir/file or by using: .NO_PARALLEL: install The rest of the makefile will still be handled in parallel mode, and even a list of targets to the left or to the right of .WAIT will be handled in parallel mode. See: http://schilytools.sourceforge.net/man/man1/make.1s.html and http://schilytools.sourceforge.net/man/man1/dmake.1.html but GNU make does not seem to have a similar option. Is there really no way to do this with GNU make? To be more specific, the solution needs to be written in a way that allows the makefile to be processed with other make implementations. The special target .WAIT could e.g. be in a make-program-dependent macro that is called e.g. $(WAIT) Note that there is an apparent solution that does not help: In theory one could make dir a dependency of dir/file in order to enforce creating dir before trying to copy to dir/file, but if you later copy another file into dir, this makes dir newer than dir/file. As a result, calling make more than once would copy to dir/file again and again, without the expected result from make to do the copy only in case the source has become newer than dir/file. This raises the alternate question whether there may be a topology of dependencies that forces make to create dir before copying to dir/file without making dir a dependency of dir/file.
Here is my own answer, derived from the idea presented by Filipe Brandenburger and from generic methods used in the Schily Makefile system. The makefile system makes sure that the following make macros are set up this way:

WAIT=		# empty with GNU make
WAIT= .WAIT	# .WAIT special target with SunPro Make
MAKEPROG= <name of the make program>	# one of: smake, gmake, sunpro
_UNIQ= .XxZzy-

Now the makefile that makes use of the macro definitions above:

_NORULE= $(_UNIQ)$(MAKEPROG)
__NORULE= $(_NORULE:$(_UNIQ)gmake=)
NORULE= $(__NORULE:$(_UNIQ)%=%)

install: dir $(WAIT) dir/file

dir/file: source
	cp source dir/file

dir:
	mkdir -p dir

$(NORULE)dir/file: | dir

$(NORULE) expands to nothing with gmake and to sunpro with SunPro Make. In case of gmake, the whole makefile expands to:

install: dir dir/file

dir/file: source
	cp source dir/file

dir:
	mkdir -p dir

dir/file: | dir

In case of SunPro Make, the whole makefile expands to:

install: dir .WAIT dir/file

dir/file: source
	cp source dir/file

dir:
	mkdir -p dir

sunprodir/file: | dir

The last line is seen as a junk rule with no relevance.
How can I partially serialize with GNU make
Assume doc.pdf is the target. The following rule triggers a regeneration of doc.pdf whenever doc.refer is updated, but is also happy when doc.refer does not exist at all:

doc.pdf: doc.mom $(wildcard doc.refer)
	pdfmom -e -k < $< > $@

However the following pattern rule does not accomplish the same (the PDF is generated correctly, but a rebuild is not triggered when changing doc.refer):

%.pdf: %.mom Makefile $(wildcard %.refer)
	pdfmom -e -k < $< > $@

I suspect that the wildcard command is executed before the % character is expanded. How can I work around this?
The GNU Make function wildcard takes a shell globbing pattern and expands it to the files matching that pattern. The pattern %.refer does not contain any shell globbing patterns. You probably want something like:

%.pdf: %.mom %.refer
	pdfmom -e -k < $< > $@

%.pdf: %.mom
	pdfmom -e -k < $< > $@

The first target will be invoked for making PDF files when there's a .mom and a .refer file available for the base name of the document. The second target will be invoked when there isn't a .refer file available. The order of these targets is important.
Using wildcard in GNU Make pattern rule
I have a set of directories, some of which contain makefiles, and some of the makefiles have clean targets. In the parent directory, I have a simple script:

#!/bin/bash
for f in *; do
    if [[ -d $f && -f $f/makefile ]]; then
        echo "Making clean in $f..."
        make -f $f/makefile clean
    fi
done

This does a weird thing when it hits a directory with a makefile without a (defined) "clean" target. For example, given two directories, one and two, containing:

one/makefile

clean:
	-rm *.x

two/makefile

clean:

In the second case "clean" is present without directives, so if you ran "make clean" in two you'd get:

make: Nothing to be done for `clean'.

Versus if there were no "clean":

make: *** No rule to make target `clean'. Stop.

However, for the problem I'm about to describe, the result is the same whether the target is present but undefined or just not present. Running clean.sh from the parent directory:

Making clean in one...
rm *.x
rm: cannot remove `*.x': No such file or directory
make: [clean] Error 1 (ignored)

So one did not need cleaning. No big deal, this is as expected. But then:

Making clean in two...
cat clean.sh >clean
chmod a+x clean

Why the cat clean.sh >clean etc.? Note I did create a minimal example exactly as shown above -- there are no other files or directories around (just clean.sh, directories one & two, and the very minimal makefiles). But after clean.sh runs, make has copied clean.sh > clean and made it executable. If I then run clean.sh again:

Making clean in one...
make: `clean' is up to date.
Making clean in two...
make: `clean' is up to date.
Press any key to continue...

Something even weirder, because now it is not using the specified makefiles at all -- it's using some "up to date" mystery target. I've noticed a perhaps related phenomenon: if I remove one of the test clauses in clean.sh like this:

# if [[ -d $f && -f $f/makefile ]]; then
if [[ -d $f ]]; then

And create a directory three with no makefile, part of the output includes:

Making clean in three...
make: three/makefile: No such file or directory make: *** No rule to make target `three/makefile'. Stop. That there is no such file or directory I understand, but why does make then go on to look for a target with that name? The man page seems pretty straightforward: -f file, --file=file, --makefile=FILE Use file as a makefile.
That behaviour is not a bug. It is a feature. A feature and a possible user error, to be precise. The feature in question is one of the implicit rules of Make -- in your case, the implicit rule to "build" *.sh files. The user error, your error, is not changing the working directory before invoking the makefile in the subdirectories.

TL;DR: to fix this you can do one or more of the following. Fix the shell script to change the working directory:

#!/bin/bash
for f in *; do
    if [[ -d $f && -f $f/makefile ]]; then
        echo "Making clean in $f..."
        (cd $f; make clean)
    fi
done

Make the empty rules explicit:

clean: ;

Make the clean targets phony:

.PHONY: clean

Detailed explanation: Make has a bunch of implicit rules. This allows one to invoke make on simple projects without even writing a makefile. Try this for a demonstration:

1. Create an empty directory and change into the directory.
2. Create a file named clean.sh.
3. Run make clean.

Output:

$ make clean
cat clean.sh >clean
chmod a+x clean

BAM! That is the power of implicit rules of make. See the make manual about implicit rules for more information. I will try to answer the remaining open questions. Why does it not invoke the implicit rule for the first makefile? Because you overrode the implicit rule with your explicit clean rule. Why does the clean rule in the second makefile not override the implicit rule? Because it had no recipe. Rules with no recipe do not override the implicit rules; instead they just append prerequisites. See the make manual about multiple rules for more information. See also the make manual about rules with explicit empty recipes. Why is it an error to not change the working directory before invoking a makefile in a subdirectory? Because make does not change the working directory: make will work in the inherited working directory. Well, technically this is not necessarily an error, but most of the time it is. Do you want the makefiles in the subdirectories to work in the subdirectories?
Or do you want them to work in the parent directory? Why does make ignore the explicit clean rule from the first makefile in the second invocation of clean.sh? Because now the target file clean already exists. Since the rule clean has no prerequisites, there is no need to rebuild the target. See the make manual about phony targets, which describes exactly this problem. Why does make search for the target three/makefile in the third invocation? Because make always tries to remake the makefiles before doing anything else. This is especially true if the makefile is explicitly requested using -f but does not exist. See the make manual about remaking makefiles for more information.
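Putting the last two suggested fixes together, a corrected two/makefile might look like this (a sketch combining the .PHONY declaration and the explicit empty recipe from the answer above):

```make
# Declare clean phony so a file named "clean" can never satisfy it,
# and give it an explicit (empty) recipe so the implicit %.sh rule
# cannot attach one.
.PHONY: clean
clean: ;
```

With this makefile in place, make no longer builds a clean file out of clean.sh, and repeated runs of the cleanup script behave the same each time.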
Strange behavior automating with "make -f"
I know GNU Make is by far the most commonly used, but I'm looking for a way to verify that GNU Make is the actual make program that is being used. Is there a special variable I can print from within the Makefile like: @echo "$(MAKE_VERSION)" What if I have both GNU Make and another variant installed? which make /usr/bin/make
Using: $(MAKE) --version works here. My output is: make --version GNU Make 3.82 Built for i686-pc-linux-gnu Copyright (C) 2010 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law.
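The question's hunch about a special variable also works: GNU make defines MAKE_VERSION (and a .FEATURES list), which other make implementations typically don't. A hedged sketch (the target name is made up; note that non-GNU makes expanding an unknown macro will usually just produce an empty string):

```make
# GNU make sets MAKE_VERSION automatically; printing it from a
# recipe is a quick check from within the Makefile itself.
show-make:
	@echo "MAKE_VERSION is: $(MAKE_VERSION)"
```

Under GNU make this prints the version number (e.g. 3.82 in the output above); under another make the line would come out empty.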
How to tell whether GNU make is being used in a makefile?
I'm trying to understand why some Makefiles have prerequisites with %.txt and others have *.txt. I've created this folder layout:

$ tree .
.
├── hi1.txt
├── hi2.txt
├── hi3.txt
└── Makefile

First, I tried this:

foo.txt: *.txt
	echo $^

And it does what I expect:

$ make
echo hi1.txt hi2.txt hi3.txt
hi1.txt hi2.txt hi3.txt

But then I've seen some Makefiles use %.txt as a wildcard. So I tried that next:

foo.txt: %.txt
	echo $^

However, this results in an error:

$ make
make: *** No rule to make target '%.txt', needed by 'foo.txt'. Stop.

Can someone explain why this is happening? GNU Make 4.3
In make, the percent sign is used for pattern matching, and it requires one in the target as well as (at least) one in the prerequisites:

%.o: %.c
	$(CC) $(CFLAGS) -c -o $@ $<

With this makefile, we specify that in order to build something whose file name ends with .o, you need to have a file that has the same prefix but ends with .c rather than .o. In order to be able to construct those rules, you obviously need to be able to refer to the target as well as the prerequisites; this is where the $@ and $< variables come in. $@ means 'the target of this rule', and $< means 'this rule's first listed prerequisite'. If you need to construct a command that uses all prerequisites (e.g., to link an executable), then you can use the variable $^:

%: %.o lib.o
	$(CC) $(LDFLAGS) -o $@ $^

If you combine the above two example makefile snippets in one makefile, and you have a file 'lib.c' with some common code that you want to use in a number of C programs in that same directory, then you can add any random .c file, say foo.c, and compile it into a program foo that also links in the code in lib.c, without requiring any changes to your makefile. Note that it is also possible to have patterns with the same target, as long as the prerequisites are different; e.g., the following will work:

%.o: %.c
	$(CC).....

%.o: %.cpp
	$(CXX).....

that is, it will work as expected as long as you don't have any C++ source files in this directory that happen to have the same name (sans extension) as a C source file. If that does happen to be the case, the C++ version will be ignored and the C source will be compiled instead (because the C rule is listed first in the makefile). For more info, see the relevant section in the GNU make manual.
What's the difference between percent vs asterisk (star) makefile prerequisite
I just solved a problem with my Makefile(s). Make trips over every <<< with the error message

/bin/sh: 1: Syntax error: redirection unexpected

And I would like to know why. (I am using Bash as SHELL.) In my current projects I tried a lot of recipes along the lines of:

target:
	read FOO <<< "XXX"; \
	read BAR <<< "YYY"; \
	read BAZ <<< "ZZZ"; \
	someprog -a param0 -b $$FOO -c param1 -d $${BAR} -e param2 -f $${BAZ} >$@

Trying this will result in an error for every <<< as described at the beginning. My workaround is

target.dep:
	echo "XXX YYY ZZZ" >$@

target: %: %.dep
	read FOO BAR BAZ < $<;\
	someprog -a param0 -b $$FOO -c param1 -d $${BAR} -e param2 -f $${BAZ} >$@

which means I put my stuff into temporary files which I then read with <, and that works just fine. When I copy-paste my make output to a normal bash prompt, every command works just as expected, even with the <<<. I am fairly certain that my problem is that using the <<< operator, i.e. here strings, breaks something. Why is that, and is there a way to make here strings work in Makefiles? P.S.: Yes, sometimes I feel autotools would be the better choice over make.
/bin/sh: 1: Syntax error: redirection unexpected means you’re not using bash as your shell, in spite of your expectations to the contrary. bash as sh recognises here strings fine (so your Makefile would work on Fedora), but for example dash as sh doesn’t. Unless told otherwise, Make uses /bin/sh as its shell; it ignores your default user shell. Setting SHELL=/bin/bash in your Makefile should fix things for you; at least, it does for me on a system showing the same symptoms as yours. P.S.: Yes, sometimes I feel autotools would be the better choice over make. Autotools and Make don’t address the same problems; they’re complementary, and using Autotools would still mean using Make...
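The difference is easy to see outside of make. A sketch (it assumes bash is installed; running the same command through dash, where available, reproduces the "redirection unexpected" error):

```shell
# bash understands the <<< here-string operator.
bash -c 'read greeting <<< "hello"; echo "$greeting"'   # prints: hello
```

Inside the Makefile, adding SHELL := /bin/bash near the top is the fix the answer describes, since make otherwise hands recipes to /bin/sh.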
Why don't here strings in Makefiles using Bash work?
I have written a rule where a directory should be removed if it exists:

.PHONY: distclean
distclean:
	-rmdir release

make distclean prints:

rmdir release
rmdir: failed to remove ‘release’: No such file or directory
test.mak:3: recipe for target 'distclean' failed
make: [distclean] Error 1 (ignored)

Shouldn't the - sign make GNU Make ignore the error? I am using GNU Make 4.0.
Make is ignoring the error: make: [distclean] Error 1 (ignored) It still prints the error messages, but if you add another rule in the distclean target it should be processed in spite of the rmdir failure. In more detail: rmdir release This is make printing the command it's about to run. rmdir: failed to remove ‘release’: No such file or directory This is rmdir printing an error message because release doesn't exist. To remove that, you'd add 2> /dev/null to the command (or >& /dev/null to silence rmdir completely). test.mak:3: recipe for target 'distclean' failed rmdir exits with a non-zero exit code, so make prints an error message. To remove that, you'd add || true to the command (so that it exits with a zero exit code in all cases). make: [distclean] Error 1 (ignored) Finally, since the command was prefixed with -, the error is ignored and make continues.
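Combining the three mechanisms from the answer -- the - prefix, the stderr redirect, and || true -- gives a fully quiet version of the rule. A sketch of one way to write it:

```make
.PHONY: distclean
distclean:
	-rmdir release 2> /dev/null || true
```

With || true the command always exits 0, which makes the - prefix technically redundant here; keep whichever combination matches how quiet you want the output to be.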
GNU Make does not ignore failed command
How do I properly configure pkgconf and libffi to allow the python3 build process to correctly use my libffi version at every step of the build process, in order to import the _ctypes module correctly? Which piece am I missing here? Some background I am trying to build Python3 from source to build a GUI with PyQt5, and one of the requirements is a functional libffi-dev library. I don't have sudo permission on this SLES11 machine or write access to the typical library directory, nor are those likely to be granted. I reached out to the team who manages the machines, but they were unwilling to update the whole system set of libraries for my project. I have to resort to building ~30 dependencies from source, and I have little experience in this, but I'm familiar with the configure-->make-->install process. I'm stuck on one final piece of the process (failure to import the _ctypes module), which can be traced back to a missing symbol from the FFI library. *** WARNING: renaming "_ctypes" since importing it failed: build/lib.linux-x86_64-3.9/_ctypes.cpython-39-x86_64-linux-gnu.so: undefined symbol: ffi_prep_cif The libffi library built and installed without any issues, and I can see the files in the local library path, so I have reached the conclusion that there is either a mismatch between the libffi version and another dependency, or pkgconf is unable to locate the library. Based on my observations of the behavior of pkgconf when isolated and instructed to validate the libffi.so file, it is most likely the latter. But, I am virtually a complete novice with this, I've been at this whole build for about a week now, and I'm here typing this question, so I'm clearly open to hearing some other ideas! 
Some useful debug:

pkgconf --version
1.7.3
https://distfiles.dereferenced.org/pkgconf/pkgconf-1.7.3.tar.gz

libffi 3.3
ftp://sourceware.org/pub/libffi/libffi-3.3.tar.gz

Python 3.9.1
https://www.python.org/ftp/python/3.9.1/Python-3.9.1.tgz

I provided the options to specify a local library directory while making the pkgconf source:

./configure --prefix=$HOME/LIBRARIES --with-system-libdir=$HOME/LIBRARIES/lib:$HOME/LIBRARIES/lib64:/usr/lib:/lib --with-system-includedir=$HOME/LIBRARIES/include:/usr/include

My PKG_CONFIG, PKG_CONFIG_PATH, LD_LIBRARY_PATH, LDFLAGS, and PATH are updated to reflect where the libffi pc files and pkgconf files are located:

$ echo $PKG_CONFIG
$HOME/LIBRARIES/bin/pkgconf
$ echo $PKG_CONFIG_PATH
$HOME/LIBRARIES/lib/pkgconfig:$HOME/LIBRARIES/lib64/pkgconfig
$ echo $LD_LIBRARY_PATH
$HOME/LIBRARIES/lib:$HOME/LIBRARIES/lib64
$ echo $LDFLAGS
-L$HOME/LIBRARIES/lib64/ -L$HOME/LIBRARIES/lib
$ echo $PATH
$HOME//LIBRARIES/bin:/usr/local/bin:/usr/bin
$ ls $HOME/LIBRARIES/lib64/libff*
libffi.a  libffi.la  libffi.so  libffi.so.7  libffi.so.7.1.0

AND YET pkgconf --validate validation of the library appears to fail, and the Python3 make script notes the undefined symbol. I'm more concerned about the make script; I'm not sure whether pkgconf is actually supposed to error out here. Update: the library is valid according to pkgconf, so this rules out that suspicion. Thank you, telcoM.

pkgconf --validate libffi
$HOME/LIBRARIES/lib/pkgconfig/libffi.pc:9

Adding the configure command for Python3 for clarity:

./configure --prefix=$HOME/LIBRARIES --enable-shared --with-system-ffi=$HOME/LIBRARIES/lib
I had exactly the same problem. build/lib.linux-x86_64-3.9/_ctypes.cpython-39-x86_64-linux-gnu.so would be generated by make, but wasn't linked with libffi (as I found out with ldd). When, subsequently, make runs setup.py, I'd get exactly the same error: *** WARNING: renaming "_ctypes" since importing it failed: build/lib.linux-x86_64-3.9/_ctypes.cpython-39-x86_64-linux-gnu.so: undefined symbol: ffi_prep_cif Following modules built successfully but were removed because they could not be imported: _ctypes But in my case, exporting C_INCLUDE_PATH wasn't the problem. The problem was that _ctypes.cpython-39-x86_64-linux-gnu.so was compiled without -lffi. I had to hack setup.py by adding the line ext.libraries.append('ffi') at the end of the definition of the function def detect_ctypes(self): For the record, I ran the configure script with CPPFLAGS="-I/my/path/include" LDFLAGS="-Wl,-rpath=/my/path/lib64 -Wl,-rpath=/my/path/lib" ./configure --prefix=/my/path --build=x86_64-redhat-linux --enable-shared --enable-optimizations I'm not sure if explicit CPPFLAGS and LDFLAGS would be necessary in all cases.
How do I build PKGCONF and LIBFFI and subsequently Python3.9 with ctypes support without sudo and write access to /usr/local?
I'm implementing a simple build system that's actually just a wrapper around Make. Since this build system already emits its own error messages, I don't want Make to produce error messages like make: *** [/cool/makefile:116: /fun/target.o] Error 1 on failure. I'm already using the -s flag to suppress most of Make's output. And I don't want Make to ignore errors; I still want it to stop and exit with a status. I can't just kill all error output with make 2> /dev/null because I still want to see messages printed to stderr by the tasks Make is running. Is there a way to do this without manually parsing and sanitizing Make's output? I'm using GNU Make 4.2.1, and I don't mind GNU Make-specific solutions.
Since your system is a wrapper around make, I presume that it generates the makefile. Tweak your generator to add 2>&3 to all the shell commands in the makefile, and make your program redirect file descriptor 3 to standard error (file descriptor 2) and file descriptor 2 to /dev/null. This way the make program itself will print to its standard error, which goes to /dev/null, and build commands will print to their standard error, which goes to the wrapper's standard error. If you're using a handwritten makefile, you can transform it to add those redirections, assuming the makefile doesn't go too wild with syntax (e.g. no fancy GNU make macros that generate commands). For every line that starts with a tab and optionally @ or -, and where the previous line does not end with a backslash, add exec 2>&3; after the tab and the optional @ or -. Instead of changing the makefile, you can invoke it with the argument SHELL=/path/to/shell_wrapper, where shell_wrapper executes its argument with standard error redirected to a different descriptor, something like this:

#!/bin/sh
eval "$2" 2>&3
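The descriptor shuffle is easy to try in isolation. In this sketch, fd 3 is pointed at the real stderr before fd 2 is silenced, so only writers that use fd 3 (the build commands, in the answer's scheme) remain visible:

```shell
# "noise" plays the role of make's own chatter (fd 2, discarded);
# "kept" plays the role of a build command's stderr (fd 3, shown).
# Redirections apply left to right: 3>&2 duplicates the original
# stderr onto fd 3 before 2>/dev/null silences fd 2.
( { echo "noise" >&2; echo "kept" >&3; } 3>&2 2>/dev/null )
```

Only "kept" reaches the terminal (on stderr); "noise" is discarded.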
Make - How to suppress make error messages without suppressing other output
Say I have a variable with a path release/linux/x86, and want the relative path from a different directory (i.e. ../../.. for current working directory), how would I get that in a shell command (or possibly GNU Make)? Soft link support not required. This question has been heavily modified based on the accepted answer for improved terminology.
The purpose isn't entirely clear, but this will do exactly what was asked, using GNU realpath:

$ realpath -m --relative-to=release/linux/x86 .
../../..
$ realpath -m --relative-to=release///./linux/./x86// .
../../..
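The same invocation generalizes to any pair of paths, and with -m neither path even has to exist. A small sketch (the directory names are made up):

```shell
# Relative path from a (possibly nonexistent) source directory to a
# (possibly nonexistent) target path under the same root.
realpath -m --relative-to=/tmp/proj/release/linux/x86 /tmp/proj/src/main.c
# -> ../../../src/main.c
```

Note that --relative-to canonicalizes both paths first, so symlinks along either path are resolved before the relative path is computed.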
How to get the relative path between two directories?
How do I set more than one target specific variable? If I try: x: Y := foo Z := bar I end up with Y = "foo Z := bar". There must be some syntax which will allow for multiple variables...
In GNU make you specify the target multiple times to accommodate the required number of variable assignments, like so:

x: Y := foo
x: Z := bar

x:
	@echo Y=$(Y) -- Z=$(Z)
multiple target specific gnu make variables?
When I enable make V=s to read the full log of make, I always see make[number] in the log, e.g.:

datle@debian:~/workspace/cpx/trunk$ make
rm -rf openwrt/tmp
cp config/defaut.config openwrt/.config
cd openwrt && make
make[1]: Entering directory `/home/datle/workspace/cpx/trunk/openwrt'
make[1]: Leaving directory `/home/datle/workspace/cpx/trunk/openwrt'
make[1]: Entering directory `/home/datle/workspace/cpx/trunk/openwrt'
make[2]: Entering directory `/home/datle/workspace/cpx/trunk/openwrt'
Collecting package info: done
Collecting target info: done
Checking 'working-make'... ok.
Checking 'case-sensitive-fs'... ok.
Checking 'getopt'... ok.
Checking 'fileutils'... ok.
Checking 'working-gcc'... ok.
Checking 'working-g++'... ok.
Checking 'ncurses'... ok.
Checking 'zlib'... ok.
Checking 'gawk'... ok.
Checking 'unzip'... ok.
Checking 'bzip2'... ok.
Checking 'patch'... ok.
Checking 'perl'... ok.
Checking 'python'... ok.
Checking 'wget'... ok.
Checking 'git'... ok.
Checking 'gnutar'... ok.
Checking 'svn'... ok.
Checking 'gnu-find'... ok.
Checking 'getopt-extended'... ok.
Checking 'non-root'... ok.
make[3]: Entering directory `/home/datle/workspace/cpx/trunk/openwrt'
Checking 'openssl'... ok.
make[3]: Leaving directory `/home/datle/workspace/cpx/trunk/openwrt'
make[2]: Leaving directory `/home/datle/workspace/cpx/trunk/openwrt'
WARNING: your configuration is out of sync. Please run make menuconfig, oldconfig or defconfig!
make[2] world
make[3] target/compile
make[4] -C target/linux compile
make[3] package/cleanup
make[3] package/compile
make[4] -C package/toolchain compile
make[4] -C package/wireless-tools compile

I read the make manual but I didn't find any detail about this.
Those numbers represent the makelevel, which lets us know how a sub-make relates to the top-level make. This is the recursive use of make; see more details here. Digging into the make source code, you can see something clearer. In main.c:

/* Value of the MAKELEVEL variable at startup (or 0).  */
unsigned int makelevel;

and then:

/* Figure out the level of recursion.  */
{
  struct variable *v = lookup_variable (STRING_SIZE_TUPLE (MAKELEVEL_NAME));
  if (v && v->value[0] != '\0' && v->value[0] != '-')
    makelevel = (unsigned int) atoi (v->value);
  else
    makelevel = 0;
}

In output.c:

/* Use entire sentences to give the translators a fighting chance.  */
if (makelevel == 0)
  if (starting_directory == 0)
    if (entering)
      fmt = _("%s: Entering an unknown directory\n");
    else
      fmt = _("%s: Leaving an unknown directory\n");
  else
    if (entering)
      fmt = _("%s: Entering directory '%s'\n");
    else
      fmt = _("%s: Leaving directory '%s'\n");
else
  ...

And format the output before printing:

if (makelevel == 0)
  if (starting_directory == 0)
    sprintf (p, fmt, program);
  else
    sprintf (p, fmt, program, starting_directory);
else if (starting_directory == 0)
  sprintf (p, fmt, program, makelevel);
else
  sprintf (p, fmt, program, makelevel, starting_directory);

_outputs (NULL, 0, buf);

Note: the excerpts above are from the make source code.
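The level is exposed to makefiles as the MAKELEVEL variable: the top-level make runs at level 0, and each $(MAKE) invocation it spawns runs one level deeper. A minimal sketch to observe this (the target names are made up):

```make
# "make outer" prints MAKELEVEL=0, then the sub-make it launches
# prints MAKELEVEL=1 -- matching the [1] in make's own messages.
outer:
	@echo "MAKELEVEL=$(MAKELEVEL)"
	@$(MAKE) --no-print-directory inner

inner:
	@echo "MAKELEVEL=$(MAKELEVEL)"
```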
What does make[number] mean in make V=s?
Goal: I write my slides in a markdown file and compile it to reveal afterwards, uploading it to my webserver and doing other things. I wanted to organise the steps after the markdown is written in a makefile:

PROJNAME = `pwd | grep -oP '(\w|-)+' | tail -n 2 | head -n 1 | tr '[:upper:]' '[:lower:]'`

presentation: slides.pandoc
	pandoc --self-contained --data-dir=$(HOME)/.pandoc --template=slides/revealjs_niels_tmpl.html -V revealjs-url:$(HOME)/.pandoc/revealjs -V theme:solarized slides.pandoc -f markdown -t revealjs -o $(PROJNAME).html

onlinepresent: $(PROJNAME).html
	cp $(PROJNAME).html $(HOME)/Share/index.html

Explanation: PROJNAME looks for the project folder name and converts it to lowercase. In the example folder where I am using it, and which generates the message shown in the title, this results in ws-anno-ii. The presentation rule compiles the slides as html by using pandoc; the PROJNAME macro is used to define the name of the output file. onlinepresent, where make is stopping, should copy a file which holds the project name (ws-anno-ii.html), if this file exists, to a mounted external filesystem (Share). If it does not exist, of course the presentation rule should apply first. But when I run the make command nothing happens, and the process stops with

Makefile:6: *** multiple target patterns. Stop.

referring to the line

onlinepresent: $(PROJNAME).html

Can anyone explain to me why this is happening?
The value of the make variable PROJNAME is

`pwd | grep -oP '(\w|-)+' | tail -n 2 | head -n 1 | tr '[:upper:]' '[:lower:]'`

The backquote character is not special in make. If you use the variable in a shell command, the shell sees the backquotes and parses them as a command substitution. But if you use the variable in a place where it's interpreted by make, the backquotes don't do anything special. The line

onlinepresent: $(PROJNAME).html

becomes, after variable expansion,

onlinepresent: `pwd | grep -oP '(\w|-)+' | tail -n 2 | head -n 1 | tr '[:upper:]' '[:lower:]'`.html

which make parses as onlinepresent, colon, `pwd, \, grep, -oP, '(\w|-)+', |, tail, -n, 2, |, head, -n, 1, |, tr, '[, colon, upper, colon, ]', '[, colon, lower, colon, ]'`.html. There are multiple words to the left of the rightmost colon, hence "multiple target patterns". If you want to use the output of a shell command in a place where make will read it, you need to invoke the shell function. This is a GNU make feature; it won't work in other make implementations.

PROJNAME = $(shell pwd | grep -oP '(\w|-)+' | tail -n 2 | head -n 1 | tr '[:upper:]' '[:lower:]')

This sets the PROJNAME variable to the next-to-last component of the working directory, transformed to lowercase. Note that using the current directory is fragile: it means your makefile won't work if invoked from a different directory than the one containing the target. It would be more robust to compute PROJNAME from the path to the target. If it wasn't for the lowercase part, you could do it entirely (if cumbersomely) with make functions (I assume that the intent of your splitting code is really to extract pathname components):

$(notdir $(patsubst %/,%,$(dir $(patsubst %/,%,$(dir $(abspath $@))))))

but GNU make doesn't have a case conversion facility. If you're going to invoke a shell anyway, you can make it simpler.
PROJNAME = $(shell set -x; echo '$(abspath $@)' | awk -F/ '{$$0=tolower($$0); print $$(NF-2)}')

onlinepresent: $(PROJNAME).html
	cp $< $$HOME/Share/index.html

Note the use of $$ in the makefile, which becomes $ in the shell commands. This works because the PROJNAME variable is calculated on each use, not at the time of definition (variable definitions in make are expanded on each use if they use =, and when the assignment is read if they use :=).
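The awk part of that pipeline is easy to test on its own. A sketch with a made-up path (which field index you print -- NF-1, NF-2, ... -- depends on which path component holds the project name):

```shell
path=/home/user/Slides/WS-Anno-II/onlinepresent   # hypothetical path
# Lowercase the whole line, then print the second-to-last
# /-separated field: the directory containing the target.
printf '%s\n' "$path" | awk -F/ '{$0=tolower($0); print $(NF-1)}'
# -> ws-anno-ii
```

Assigning to $0 makes awk re-split the record with the current field separator, which is why the lowered line can still be addressed by field number.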
Why does make stop with "Makefile:6: *** multiple target patterns. Stop."?
What is the meaning of sed command sed -i 's,-m64,,g' Makefile? Does it simply remove -m64 argument from Makefile? Is it the same with sed -i 's/-m64//g' Makefile, just use / delimiter in place of commas?
Yes, it's the same as with / delimiter. Sometimes you may use different delimiters not to confuse sed. In this case, you replace all -m64 instances with empty string, not remove as such. See this resource on using delimiters in sed.
What is "sed -i 's,-m64,,g'" doing to this Makefile?
I'm using a Makefile to compile my Clean code. My Clean files have file names of the format *.icl and are compiled to binaries with the same name but without .icl. This is done with the rule: $(EXE): % : %.icl | copy $(CLM) $(CLM_LIBS) $(CLM_INC) $(CLM_OPTS) $@ -o $@ I would now like to add a rule which allows me to run a binary. Currently, I'm often doing make some_module && ./some_module I would like to have a make target which depends on the rule above and runs the module. However, the name of the module is already a target itself for compilation alone and I'd like to keep it that way. What I would like is a target with two words that I can call with make run some_module which then depends on the rule above and runs ./some_module afterwards. Is it possible to create targets with multiple words? I tried to make a rule (now still without dependency) with the following: run $(EXE): ./$@ Running make run some_module results in many recipes being 'overridden' and 'ignored' and finally ./run not existing. Makefile:24: warning: overriding recipe for target 'tut7_2_2' Makefile:21: warning: ignoring old recipe for target 'tut7_2_2' Makefile:24: warning: overriding recipe for target 'support_check' Makefile:21: warning: ignoring old recipe for target 'support_check' [...] Makefile:24: warning: overriding recipe for target 'drawingframe' Makefile:21: warning: ignoring old recipe for target 'drawingframe' ./run /bin/bash: ./run: No such file or directory Makefile:24: recipe for target 'run' failed make: *** [run] Error 127
You can inspect MAKECMDGOALS to detect the presence of one goal while building another goal. Either make the %: %.icl rule detect the presence of the run goal, or make the run goal inspect what executables are mentioned as targets. If you pass more than one executable as a target, the first method causes each to be run right after it's built, while the second causes all the runs to happen at the end. The downside of this approach is that it doesn't scale well with other features. For example, if you define a target with multiple executables as dependencies, the method with a run target won't work.

EXE = foo bar

experimental released: foo bar

run: $(filter $(EXE), $(MAKECMDGOALS))
	set -e; for x in $^; do ./$$x; done

Here make released run won't run anything. What I normally do for this case is to define a "run" target for each executable. This keeps each target expressed as a single word, which is a major advantage: it's simple and doesn't break other features.

$(EXE:=.run): %.run: %
	./$(@:.run=)

all.run: $(EXE:=.run)

Then I run make {foo,bar}.run if I want to build and test foo and bar, or make all.run to build and run them all.
Make target with two words
If I have a makefile I have to use that is using recursive make, is there an easy option to disable that? http://aegis.sourceforge.net/auug97.pdf
No. If you invoke make even once in Makefile, it would be called a recursive make. There's no easy option in GNU Make to prevent it. Once you read the paper mentioned in your post, you could understand it's determined by how you write Makefiles whether the make is recursive or non-recursive. Linux kernel build system would be one of the most famous applications of traditional recursive make in large scale. Android build system is a good example of non-recursive make, which is explicitly addressing problems of recursive make. Both build systems are exploiting GNU Make specific features intensively.
Is there a way to disable recursive make?
1,463,417,513,000
I've been reading the documentation but it's still unclear to me how the order is processed. In the example: myrule: | myrule_step1 myrule_step2 @echo "$(@)" myrule_step1: @echo "$(@)" myrule_step2: @echo "$(@)" what will print first? myrule_step1 or myrule_step2?
The order is unspecified and can run in either order. This isn't just a theoretical concern. It can happen during parallel builds. Assuming the same Makefile as in the question, I ran: watch -n 0.1 make -j8 It only took a few seconds to print: myrule_step2 myrule_step1 myrule See also this StackOverflow answer by Jörg W Mittag: No, the order is not defined. That is the whole point in using declarative dependency-oriented programming: that the computer can pick the optimal evaluation order, or in fact, evaluate them even at the same time. However, as mosvy points out, this is only true for GNU Make. POSIX make (which can be emulated in GNU Make by adding the special .POSIX target to your makefile) specifies a left-to-right ordering when handling prerequisites.
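A small self-contained sketch (GNU Make assumed) illustrating what is guaranteed despite the unspecified ordering: both order-only prerequisites must finish before myrule's own recipe starts, even under -j:

```shell
cd "$(mktemp -d)"
cat > Makefile <<'EOF'
myrule: | myrule_step1 myrule_step2
	@echo myrule
myrule_step1:
	@echo myrule_step1
myrule_step2:
	@echo myrule_step2
EOF

# The two steps may print in either order, but "myrule" is always last.
make -j8
```

So code that depends on step1 finishing before step2 needs an explicit dependency between the two steps, not just their shared position in the prerequisite list.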
What is the order order-only prerequisites are processed in a GNU Make file?
1,463,417,513,000
According to GNU Make Manual

A rule with multiple targets is equivalent to writing many rules, each with one target, and all identical aside from that. The same recipe applies to all the targets, but its effect may vary because you can substitute the actual target name into the recipe using ‘$@’. The rule contributes the same prerequisites to all the targets also.

First Makefile:

%.in %.out:
	echo BLANK > $@

Corresponding bash session:

$ ls
Makefile
$ make a.in a.out
echo BLANK > a.in
make: Nothing to be done for 'a.out'.
$ ls
Makefile  a.in
$ make a.out
echo BLANK > a.out
$ ls
Makefile  a.in  a.out
$ make b.in c.out
echo BLANK > b.in
echo BLANK > c.out
$ make d.in d.out
echo BLANK > d.in
make: Nothing to be done for 'd.out'.
$ make e.out e.in
echo BLANK > e.out
make: Nothing to be done for 'e.in'.
$ ls
Makefile  a.in  a.out  b.in  c.out  d.in  e.out

Second Makefile:

%.in:
	echo BLANK > $@

%.out:
	echo BLANK > $@

Corresponding bash session:

$ ls
Makefile
$ make a.in a.out
echo BLANK > a.in
echo BLANK > a.out
$ ls
Makefile  a.in  a.out
$ # nice

So, the question: Why doesn't the first Makefile create targets like <name>.in <same name>.out simultaneously? Why isn't it interpreted similar to the second Makefile?
Your rules tell make that a single invocation of the recipe will create both the .in and .out targets.

https://www.gnu.org/software/make/manual/html_node/Pattern-Intro.html explains this. It says (in the penultimate paragraph): "Pattern rules may have more than one target; however, every target must contain a % character. Multiple target patterns in pattern rules are always treated as grouped targets (see Multiple Targets in a Rule) regardless of whether they use the : or &: separator."

If you then follow the link to Multiple Targets it explains that grouped targets (normally using the &: separator when you have explicit rules) tell make that a single invocation of the recipe will create all of them, not just one at a time. So your pattern rule is the equivalent of this:

a.in a.out &:
	echo BLANK > $@

... and not the equivalent of this as you intended:

a.in a.out :
	echo BLANK > $@

As far as I know, there's no way to make a pattern rule which works like the latter and creates just one at a time. You just have to have separate rules for %.in and %.out.
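A sketch of the practical consequence: since a multi-target pattern rule is grouped, its recipe is expected to produce every matched target in one invocation. Here the recipe touches both files for the stem, so one run of the rule satisfies both goals (the file names are made up):

```shell
cd "$(mktemp -d)"
cat > Makefile <<'EOF'
# Grouped pattern rule: one invocation must create both files for a stem.
%.in %.out:
	touch $*.in $*.out
EOF

make a.in a.out
ls    # both a.in and a.out now exist after a single recipe run
```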
Why does make behave strangely when rule has multiple targets with the % character?
1,463,417,513,000
I'm trying to follow a guide to compile a program for Debian in FreeBSD. I have the following makefile: obj-m += kernelinfo.o all: make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules clean: make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean I'm confused as to how I would compile this on FreeBSD since I do not have a /lib/modules folder on the machine. I have installed all of the default headers on FreeBSD in the /usr/src/ directory but I can't find a modules folder. I'm guessing the Makefile needs to be translated for FreeBSD, though I am very new to Linux and so I have no idea. Any help is much appreciated.
This looks like it may be from a Linux kernel module. You will probably not be able to compile or use the code associated with the Linux kernel module on FreeBSD, as it's written specifically for Linux, and the Linux kernel is totally different from the FreeBSD kernel. In short, it's not the Makefile that needs translating, but the kernel module source code that needs porting over to FreeBSD. This is not a trivial undertaking and requires knowledge of both the Linux and FreeBSD kernels. See also Conceptual difference between Linux and (Free)BSD Kernel
Convert Debian Makefile for FreeBSD
1,463,417,513,000
Why doesn't this simple recipe work?

.PHONY: test
test:
	foo := $(shell ls | grep makefile) ;\
	echo $(foo)

results in

$> make test
makefile:65: warning: undefined variable 'foo'
foo := makefile ;\
echo
/bin/sh: 1: foo: not found

So, as far as I understand, the variable foo is well set to value makefile but it cannot be used afterwards? However, it is a single line command, executed in the same shell? However, this works

	@$(eval export foo := $(shell ls | grep makefile)) \
	echo $(foo)

So I guess that the variable in the first example is not accessible because the assignment is not evaluated yet at the time we try the echo? And if I dig a little further, how to make this work

.PHONY: test
test:
	@$(eval export files = $(shell ls))
	for f in $(files) ; do \
		t = $(ls | grep $$f) ; \
		echo $$t;\
	done
I looked at your loop... quoted here:

.PHONY: test
test:
	@$(eval export files = $(shell ls))
	for f in $(files) ; do \
		t = $(ls | grep $$f) ; \
		echo $$t;\
	done

So... $(eval ... ) runs a command in make. $(shell ls) runs the command ls in the shell, and substitutes its output. The command run by the $(eval ... ) is thus something like export files = file file2 makefile source.c. This command makes a make variable called files and exports it to child makes. Thus, the export probably isn't needed. The entire $(eval ... ) could probably be replaced with

files = $(wildcard *)

And it could probably use := and be placed outside of a rule.

The for loop, four lines, is run in the shell. The first thing that is done, the make variables and functions are substituted. The one that is weird is $(ls | grep $$f). Since ls is not a make function, this will try to expand a variable, which isn't defined. This is an empty string. If this was meant to be the shell's $(...) operator, you need to double the $. $$ is expanded to $. $(files) is expanded based on the eval. This becomes (using my previous example):

for f in file file2 makefile source.c ; do t = ; echo $t ; done

At first glance, this might echo four blank lines, but no. The command t = actually runs the program t and passes the equal sign as an argument. t probably doesn't exist. Thus, we get four errors that t isn't a valid program, each followed by a blank line (unless t is elsewhere defined).

Something closer to what you wanted might be:

files := $(wildcard *)

.PHONY: test
test:
	for f in $(files) ; do \
		t=$$(ls | grep $$f) ; \
		echo $$t ; \
	done

This will output:

file file2
file2
makefile
source.c

Note that the first line listed two files, as they both include "file" in the name. If that isn't what you want, you might consider:

files := $(wildcard *)

.PHONY: test
test:
	for f in $(files) ; do \
		echo $$f ; \
	done

or even (may be GNU make specific):

files := $(wildcard *)

.PHONY: test
test:
	$(foreach f, $(files), echo $f ; )
Variable not found in makefile recipe
1,463,417,513,000
Suppose I am in an empty directory. If I now create a Makefile containing nothing but

all: randomFilename

and an empty file called randomFilename.sh, then GNU Make will perform cat randomFilename.sh >randomFilename; chmod a+x randomFilename when make is called.

$ echo 'all: randomFilename' > Makefile
$ make
make: *** No rule to make target 'randomFilename', needed by 'all'.  Stop.
$ touch randomFilename.sh
$ make
cat randomFilename.sh >randomFilename
chmod a+x randomFilename
$ make -v | head -n2
GNU Make 4.0
Built for x86_64-pc-linux-gnu

Worse yet, Make overrides a file called randomFilename, if it already exists.

$ echo "My content" > randomFilename
$ echo "My content, all gone" > randomFilename.sh
$ make -B
cat randomFilename.sh >randomFilename
chmod a+x randomFilename
$ cat randomFilename
My content, all gone

I would like to find the reason why Make does this and a way to prevent the behavior.
This behaviour is the result of the built-in suffix rules of Make (in this case for legacy versions of the Source Code Control System [1]). The built-in suffix rules can be disabled by specifying an empty .SUFFIXES pseudo-target [2]:

$ echo '.SUFFIXES:' > Makefile
$ echo 'all: randomFilename' >> Makefile
$ make
make: *** No rule to make target 'randomFilename', needed by 'all'.  Stop.
$ touch randomFilename.sh
$ make
make: *** No rule to make target 'randomFilename', needed by 'all'.  Stop.
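A related sketch: the same effect can be had at invocation time, without touching the Makefile, by disabling all built-in rules with make's -r (--no-builtin-rules) flag:

```shell
cd "$(mktemp -d)"
printf 'all: randomFilename\n' > Makefile
touch randomFilename.sh

# With built-in rules, make would build randomFilename from the .sh file;
# with -r, no implicit rule applies and make reports a missing rule instead.
make -r
```

The .SUFFIXES: approach is preferable when the protection should travel with the Makefile itself rather than depend on how make is invoked.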
How to prevent Make from randomly overriding files?
1,463,417,513,000
Let's say that I have a Makefile that has two “main” targets: foo.o and clean. The former one has a recipe to create the foo.o file. The latter one removes all the temporary files. To remove the need of specifying the dependencies of foo.o manually, I have target foo.d that is valid makefile specifying the dependencies in format foo.o foo.d : dep1 dep2 depn. This dependency file is included to the makefile. The makefile looks like this:

foo.o: foo.c
	cc -c -o $@ $<

foo.d: foo.c
	sh deps.sh $< > $@

include foo.d

.PHONY: clean
clean:
	rm …

When I want to make foo.o, everything works correctly: foo.d gets (re)made, it is included and foo.o gets made. The problem is that when I want to make the clean target, foo.d gets included, or even made. How can I prevent make including the foo.d when the clean target is being made? (Or, how to include it only when foo.o is made?) The solution can use features of GNU Make.
The solution is quite simple, but results in somewhat unreadable Makefile code.

First, we must know that the include directive tries to include the file, and if it does not exist, fails. There is also -include (or sinclude) that simply does not include the file if it does not exist. But that is not the thing we want, because it still tries to remake the included makefile, if possible. We can avoid that in two ways: either by changing the include directive parameter in such a way that Makefile thinks it is not able to make that included file (e.g. relative vs absolute path etc.), or by omitting the parameter when the file does not exist. That can be done in multiple ways:

-include $(wildcard foo.d*)

but that has one problem: it matches also other files. So we can write this:

-include $(filter foo.d,$(wildcard foo.d*))

or even this:

-include $(filter foo.d,$(wildcard *))

And we made another problem: foo.d does not get made. This is resolved either by adding it as another prerequisite of foo.o:

foo.o: foo.c foo.d

or adding it as a command:

foo.o: foo.c
	$(MAKE) foo.d
	cc …

or directly, without invoking make:

foo.o: foo.c
	sh script.sh …
	cc …
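Another common sketch is to guard the include with MAKECMDGOALS, so the dependency file is neither included nor remade when clean is the only requested goal. The foo.d recipe below is a stand-in for the real deps.sh step:

```shell
cd "$(mktemp -d)"
cat > Makefile <<'EOF'
foo.d:
	@echo 'foo.o: foo.c' > foo.d
	@echo remade foo.d

# Skip (and never remake) foo.d when the only goal is "clean".
ifneq ($(MAKECMDGOALS),clean)
include foo.d
endif

.PHONY: clean
clean:
	@echo cleaned
EOF

make clean    # prints only "cleaned"; foo.d is not created
```

Note the guard compares the whole goal list, so a mixed invocation like make clean foo.o would still include (and remake) foo.d, which in that case is what you want anyway.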
Remake included makefile only when needed
1,463,417,513,000
I'm calling make from a bash script. Part of the rest of the script only needs to be executed if make actually did something (i.e. it doesn't say nothing to be done for...). How would I check on that in bash? The return code is the same as when something does happen (and doesn't fail), so I can't use that. Is there a better way than comparing make's output and see if it contains nothing to be done for? I'm using GNU Make 4.0.
If your workflow can accomodate it, you can send make on a trial run before you run it to update files: -q, --question ‘‘Question mode’’. Do not run any commands, or print anything; just return an exit status that is zero if the specified targets are already up to date, nonzero otherwise. (I am somewhat surprised this is not part of --recon's behaviour.)
Check in bash if make has done something
1,463,417,513,000
Is it possible to display variables outside rules using GNU Make? Consider the following Makefile: x = foo bar baz ifdef x @echo $(x) endif This results in Makefile:4: *** commands commence before first target. Stop. However, if I add a rule, it works: x = foo bar baz ifdef x t: @echo $(x) endif Is it really necessary to add rules for outputting variables for debugging, etc.? Bonus: Why does the removal of ifdef result in Makefile:3: *** missing separator. Stop.? x = foo bar baz @echo $(x)
GNU make has a feature for doing exactly that and it is called $(info ...). You could place the following line outside of a rule and GNU make will execute it:

$(info variable x = $(x))

And if you find yourself doing this sort of a task repeatedly, you can abstract it away in a macro and call it wherever needed:

make -f - <<\eof
dumpvar = $(info variable `$1' value is >>>$($1)<<<)

ssl_enable = NO
$(call dumpvar,ssl_enable)

.PHONY: all
all:;@:
eof

It will display the following on stdout:

variable `ssl_enable' value is >>>NO<<<
Is it possible to display variables outside rules using GNU Make?
1,463,417,513,000
I'm writing a Makefile recipe that needs to execute IF AND ONLY IF a certain file exists... Here's what I have:

clean:
	$(if $(shell test -s ${MFN_LSTF}), \
		$(foreach mfn, $(shell cat ${MFN_LSTF}), \
			$(MAKE) -f mfd/${mfn} clean;), )

.PHONY: clean

${MFN_LSTF} holds a filename that contains a one column list of makefile names that are assumed to be at the same local directory as this makefile recipe. The problem that I've encountered is that the foreach statement executes always. I want it to execute ONLY IF the filename ${MFN_LSTF} exists. I've tried this also:

clean:
	[ test -s ${MFN_LSTF} ] && for mfn in $(shell cat ${MFN_LSTF}); do \
		$(MAKE) -f mfd/${mfn} clean \
	done

.PHONY: clean
This might be a possible and simpler solution, emulating the shell: $(eval mfn_lstf := $(shell cat ${MFN_LSTF})) $(foreach mfn, ${mfn_lstf}, $(MAKE) -f mfd/${mfn} clean;) And, the following works, without emulating the shell: if [ -s "$${MFN_LSTF}" ]; then \ while IFS= read -r mfn; do \ $(MAKE) -f "mfd/$${mfn}" clean; \ done < "$${MFN_LSTF}"; \ fi
How can I execute recipe iff a file exists?
1,463,417,513,000
I am trying to run make for an open-source project on my Debian virtual machine but I do not understand why the commands based on pkg-config are not being recognized. One of the commands is as follows:

tempgui-qrps.so: tempgui-qrps.cc refpersys.hh tempgui-qrps.hh tempgui-qrps.moc.hh | $(RPS_CORE_OBJECTS)
	$(RPS_BUILD_CXX) $(RPS_BUILD_COMPILER_FLAGS) \
	   -shared -o $@ -fPIC -Wall -Wextra -O -g \
	   $(shell pkg-config --cflags Qt5Core Qt5Gui Qt5Widgets $(RPS_PKG_NAMES)) \
	   $(shell pkg-config --libs Qt5Core Qt5Gui Qt5Widgets $(RPS_PKG_NAMES)) \
	   -std=gnu++17 \
	   $<

When I run make on the command line, the output corresponding to the above command looks like this:

g++ -std=gnu++17 \
   -shared -o tempgui-qrps.so -fPIC -Wall -Wextra -O -g \
   \
   \
   -std=gnu++17 \
   tempgui-qrps.cc

When I run the make command, I also see these warnings:

Package readline was not found in the pkg-config search path.
Perhaps you should add the directory containing `readline.pc'
to the PKG_CONFIG_PATH environment variable
No package 'readline' found
Package zlib was not found in the pkg-config search path.
Perhaps you should add the directory containing `zlib.pc'
to the PKG_CONFIG_PATH environment variable
No package 'zlib' found

Are both these problems (absence of packages and the pkg-config commands not being processed) related? Some of the details of pkg-config installed on my system are as follows:

xxxxx@xxxx:~$ pkg-config --version
0.29
xxxx@xxxx:~$ whereis pkg-config
pkg-config: /usr/bin/pkg-config /usr/lib/pkg-config.multiarch /usr/share/man/man1/pkg-config.1.gz
Having pkg-config isn’t sufficient: you also need the .pc files corresponding to the packages named in each pkg-config command.

For pkg-config --cflags Qt5Core Qt5Gui Qt5Widgets $(RPS_PKG_NAMES), you need to install qtbase5-dev, and whatever is necessary for the packages in $(RPS_PKG_NAMES). You can install and use apt-file to find packages containing specific files. For readline and zlib, you need libreadline-dev and zlib1g-dev.

In addition, you’ll need to create readline.pc if you’re using Debian 10; place it in /usr/local/lib/pkgconfig, with the following contents (for amd64):

prefix=/usr
exec_prefix=${prefix}
libdir=/usr/lib/x86_64-linux-gnu
includedir=${prefix}/include

Name: Readline
Description: Gnu Readline library for command line editing
URL: http://tiswww.cwru.edu/php/chet/readline/rltop.html
Version: 7.0
Requires.private: tinfo

You can run the various pkg-config commands from the shell to check that they are working, and get information about each individual error.
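A quick shell sketch for diagnosing one package at a time, using zlib (one of the packages from the warnings) as the example:

```shell
# Check a single package before blaming the Makefile.
if pkg-config --exists zlib; then
    pkg-config --cflags --libs zlib
else
    echo "zlib.pc not found; install zlib1g-dev or extend PKG_CONFIG_PATH" >&2
fi
```

Once every package listed in the recipe passes this check, the $(shell pkg-config ...) calls in the Makefile stop expanding to empty strings.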
Why are the pkg-config commands in the makefile not being recognized when I run the script?
1,463,417,513,000
I have an info.properties file where I have this MY_NAME property and I can use this property on my Makefile. I already tried but I can't use that property directly on myScript.sh file. So I'm trying to pass that property as argument to myScript.sh. And I'm doing like this:

On Makefile:

my_stage:
	chmod 777 myScript.sh && ./myScript.sh $(MY_NAME)

On myScript.sh I have this:

#!/bin/bash -e
source .build/utils.sh
MY_NAME=$1
echo "MY_NAME=${MY_NAME}"

But I'm always getting this error:

chmod 777 myScript.sh && ./myScript.sh My-name-Mariana
.build/utils.bash: line 596: My-name-Mariana: command not found
make: *** [test] Error 127

How can I solve this?

UPDATE: I know .build/utils.bash is trying to execute my parameter, I can see that on the error. But I can't change that file because I don't have it because it is not part of my code project.
Since your script can source .build/utils.bash, you have already proved that you in fact can read it. For example, try less .build/utils.bash in the directory that contains the Makefile. To fix the actual problem without modifying .build/utils.bash, you might try assigning the contents of $1 into your MY_NAME variable and then using the shift command to remove it from $1 - before you source .build/utils.sh. But then utils.bash receives $1 as an empty string, which may cause it to fail or to do something different - without reading utils.bash or some documentation about it, it will be impossible to know what it expects.
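A sketch of that reordering. The echoing utils.sh below is a stand-in, since the real .build/utils.sh is not shown; the point is that after the shift it no longer sees the argument in $1:

```shell
#!/bin/bash -e
# Capture the argument first, then drop it from the positional
# parameters before sourcing, so utils.sh never sees it as $1.
MY_NAME=$1
shift

source .build/utils.sh

echo "MY_NAME=${MY_NAME}"
```

As the answer warns, utils.sh now receives an empty $1, which may or may not be acceptable to it.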
"Command not found" passing argument from Makefile to shell script
1,463,417,513,000
I am aware of LatexMk, but can't install that on the machine where I want to run pdflatex, so I need to write a Makefile of which %.pdf files are targets that depend on %.tex and the *.tex files that %.tex is inputting. For this I wrote the following: %.pdf : %.tex $(shell perl -lne 'print "$$1\n" if /\\input{([\w-]+\.tex)}/' %.tex) Now, I tested the regular expression and it seems to work fine, but the %.tex at the end isn't passed correctly, running make output.pdf gives me: Can't open %.tex: No such file or directory. How can I pass %.tex to the $(shell) command? I'm using GNU make.
Probably what you're trying to do is better solved by creating a file listing dependencies, which you can then include from your Makefile. This is a common pattern in C and C++ makefiles.

SOURCES=foo.tex bar.tex

all: $(SOURCES:.tex=.pdf)

%.dep: %.tex
	perl -lne 'print "$*.pdf: $$1\n" if /\\input{([\w-]+\.tex)}/' <$< >$@

include $(SOURCES:.tex=.dep)

Recommended reading: Generating Prerequisites Automatically in the Make manual.
Percentage symbol in $(shell) in GNU Makefile dependency
1,463,417,513,000
I have recorded that it took 50 minutes for an initial compilation of the OpenWrt firmware image, assuming all the necessary packages have been installed via sudo apt-get install. My BuildRoot Root Dir is openwrt. Subsequently, I found that if I rename the directory above the openwrt folder, with a minor change in a file say wifi.lua the next make (in openwrt folder) takes 21 minutes to compile successfully. However, if I don't rename the directory above the openwrt folder, with a similar minor change in the same file, the next make V=99 takes only 3 minutes to compile successfully. When I now rename the directory above and do the same as above again, the make takes 21 minutes to compile successfully. With make V=99, I can see that there were many more compilation steps taken compared to the case where I did not rename the top directory. I can see that the Makefile compilation is much faster if I do not rename the top directory. This brings me to the related question: In Linux, will renaming or moving a directory change the times of the files in subdirectories? I know that the Makefile does not build a target again if the modification time of the target is more recent than all its dependencies. I was also reading about some issues with the GNU Makefile: http://www.conifersystems.com/whitepapers/gnu-make/ Does the OpenWrt Makefile, supposed to be much more advanced than the original Linux Makefile, address some or all of these issues? (To get the Makefile to compile faster, I also have the openwrt/dl as a symbolic link to a folder in my home directory, so that the user-space package tarballs don't need to be downloaded again.)
No, it doesn't change the timestamps of contained files and directories, only on the directory itself. However, if the Makefile contains targets or dependencies that use absolute paths or even just $(src_dir), it will remake them, because each is now a different/new target. See the GNU make documentation for conventions and advice on "standard" targets and variables.

However, Makefiles don't compile, and there is no such thing as the original Linux Makefile. Creating/maintaining an environment like BuildRoot is very complex, and the maintainers probably focus on getting it to build correctly before efficiently. If a simple patch, like adding a symlink, helps to speed up the process, maybe you should send it as a suggestion for improvement upstream.
How to make OpenWrt Makefile compile faster?
1,483,481,088,000
Trying a Makefile rule like the following did not work (GNU Make 4.0):

foo: $@.o other.o
bar: bar.o other.o

The file foo.c was compiled (to foo.o), but the link command was cc -o .o. In contrast, bar was compiled and linked correctly as cc bar.o other.o -o bar. Who can explain the difference (or the problem)?
This is addressed in the section on Automatic variables in the GNU Make manual:

It’s very important that you recognize the limited scope in which automatic variable values are available: they only have values within the recipe. In particular, you cannot use them anywhere within the target list of a rule; they have no value there and will expand to the empty string. Also, they cannot be accessed directly within the prerequisite list of a rule. A common mistake is attempting to use $@ within the prerequisites list; this will not work.

The rest of the paragraph gives one possible solution, albeit a GNU Make-specific one: secondary expansion. Writing your Makefile as

.SECONDEXPANSION:
foo: $$@.o other.o
bar: bar.o other.o

allows $$@ to be given the appropriate value, and then

$ make foo
cc -c -o foo.o foo.c
cc -c -o other.o other.c
cc foo.o other.o -o foo

does what you’re hoping it to do. (In my experience, there are usually better ways of addressing a problem than resorting to secondary expansion, but that can only be determined by understanding the overall goal of what you’re trying to do.)
Using `$@.o` in Makefile dependency won't work
1,483,481,088,000
I want to add another option to the CFLAGS make variable, depending on the result of a shell command that I want to execute outside of a recipe, in my "configuration" section of the makefile. This is what I have come up with:

GCC_VERSION := $(shell gcc -dumpversion); \
	if [[ ${GCC_VERSION} > 5.0 ]] ; then \
		CFLAGS += -D _POSIX_C_SOURCE=199309L; \
	fi

At first I execute the command with the shell make function as you see above. If I execute the above it doesn't add this define flag. I intentionally do this on linux with GCC Version 5.4.0. I believe this is wrong because then I have to create a new shell to execute the conditional statement. In that new shell, though, the GCC_VERSION variable will not exist. I could be wrong though. If I do it like this (all in one shell):

$(shell GCC_VERSION=$(gcc -dumpversion); \
	if [[ ${GCC_VERSION} > 5.0 ]] ; then \
		CFLAGS += -D _POSIX_C_SOURCE=199309L; \
	fi)

I get the error:

*** recipe commences before first target.  Stop.

Yeah, very confusing. If someone could help I would appreciate it. Thanks.
There are many solutions, including this one. In your Makefile use

VERSION5 := $(shell \
	GCC_VERSION=$$(gcc -dumpversion); \
	[[ $$GCC_VERSION > 5.0 ]]; \
	echo $$? )

ifeq (${VERSION5}, 0)
CFLAGS += -D _POSIX_C_SOURCE=199309L
endif

Note in particular that you need to use $$ for every $ in your shell script. This shell echoes 0 if the string comparison with 5.0 is true, else 1, and this is saved in the make variable VERSION5. Then the ifeq test in the Makefile compares the variable with 0 and, if it matches, edits the CFLAGS variable.
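One caveat worth sketching: [[ ... > ... ]] is a string comparison (and a bashism, while $(shell ...) normally runs /bin/sh), so for example GCC 10 would sort before "5.0". A numeric, POSIX-sh variant comparing just the major version, assuming gcc is on the PATH:

```shell
cd "$(mktemp -d)"
cat > Makefile <<'EOF'
# Compare the major version numerically instead of lexically.
GCC_MAJOR := $(shell gcc -dumpversion | cut -d. -f1)

ifeq ($(shell [ $(GCC_MAJOR) -ge 5 ] && echo yes),yes)
CFLAGS += -D _POSIX_C_SOURCE=199309L
endif

show:
	@echo $(CFLAGS)
EOF

make show
```

On any gcc with major version 5 or newer, make show prints the added define; on older compilers CFLAGS is left alone.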
GNU make - How to concatenate to variable depending on shell command result (GCC version)?
1,483,481,088,000
I have a makefile and want to make sure that all the rules are executed sequentially, that is, that no parallel execution is performed. I believe I have three ways of achieving this:

With the .NOTPARALLEL target,
By calling make using make -j 1,
By setting the flag directly in the makefile, e.g., MAKEFLAGS := -j 1

Is there a "best practice" between those three, and which one is the more robust? For instance, is it overkill to do

MAKEFLAGS := --jobs=1

.NOTPARALLEL:
all: foo bar

.NOTPARALLEL:
foo: bar

.NOTPARALLEL:
bar:
	@echo "test"

?
Yes, this is overkill. As far as the three options go: You should never set MAKEFLAGS, for two reasons: it will cause issues with any flags passed on the command-line, and MAKEFLAGS doesn’t work in a way that can be robustly modified externally. To see both of these problems in action, add an @echo $(MAKEFLAGS) rule to your bar recipe, and run make -n bar. make -j 1 has the effect you’re after, but it is most appropriate when you want to temporarily run everything serially. This is useful if you want to limit the resources used, or if you’re debugging a parallel execution issue. It is also the default, at least in GNU Make: Make only runs one task at a time by default. .NOTPARALLEL: only needs to be specified once, also has the effect you’re after, and is appropriate when the serialisation requirement is a property of the Makefile, i.e. all executions should be serial without possible external influence. If your build is going to be used at lot however, it’s best in my opinion to spend the time needed to figure out why parallel execution of rules causes problems, and add the appropriate dependencies. GNU Make supports order-only prerequisites which can be used to enforce ordering without enforcing a “newness” dependency; these are often helpful in such circumstances. See this answer to the question you found on SO.
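A sketch of the order-only-prerequisite approach mentioned at the end, which serializes only the two steps that need it rather than the whole build (the target names are made up):

```shell
cd "$(mktemp -d)"
cat > Makefile <<'EOF'
# "build" never starts before "setup" has finished, even under -j,
# but a newer "setup" does not by itself force "build" to rerun.
build: | setup
	@echo building
setup:
	@echo setting up
EOF

make -j4 build
```

This keeps the rest of the Makefile free to run in parallel, which is usually preferable to a blanket .NOTPARALLEL:.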
Serialize all rules in GNU make: best practise?