1,328,908,576,000
This site says, "Shell functions are faster [than aliases]. Aliases are looked up after functions and thus resolving is slower. While aliases are easier to understand, shell functions are preferred over aliases for almost every purpose." Given that (true or not), how do shell functions compare to standalone shell scripts? Does one have particular advantages over the other, or is one better suited for certain types of tasks?
The main difference between aliases and functions is that aliases don't take arguments¹, but functions do. When you write something like alias l='ls --color', l foo is expanded to ls --color foo; you can't grab foo into the alias expansion and do something different with it the way you can with a function. See also How to pass parameter to alias?.

Aliases are looked up before functions: if you have both a function and an alias called foo, foo invokes the alias. (While the alias foo is being expanded, it's temporarily blocked, which makes things like alias ls='ls --color' work. Also, you can bypass an alias at any time by running \foo.) I wouldn't expect to see a measurable performance difference, though.

Functions and standalone scripts have mostly similar capabilities; here are a few differences I can think of:

- A function runs inside the shell environment; a script runs in a separate process. Therefore a function can change the shell environment: define environment variables, change the current directory, etc. A standalone script can't do that.
- A function must be written in the language of the shell you want to use it in. A script can be written in any language.
- Functions are loaded when they are defined; scripts are loaded each time they are invoked. This has several consequences:
  - If you modify a script, you get the new version the next time you invoke it. If you change a function's definition, you have to reload the definition.
  - Functions are faster on heavily loaded systems. If you have a lot of functions that you may not use, they'll take up memory.
  - Ksh and zsh, but I think not bash, have a form of function autoloading.

Something that's intermediate between a function and a standalone script is a script snippet that you read with the source or . builtin. Like a function, it can modify the shell's environment and must be written in the shell's language. Like a script, it is loaded each time it's invoked and no sooner.

¹ Yeah, I know, this doesn't apply to tcsh.
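To make the first difference concrete, here is a small sketch (the function name goto_tmp is made up): a function's cd persists in the calling shell, whereas the same body in a standalone script would only change the directory of the script's own short-lived process.

```shell
# A function runs in the current shell, so its 'cd' persists
# after the function returns.
goto_tmp() {
    cd /tmp || return 1
}

goto_tmp
pwd    # the shell is now in /tmp

# The same body run as a standalone script would cd inside the
# child process only; the parent shell's directory is unchanged.
```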
Aliases vs functions vs scripts [duplicate]
1,328,908,576,000
I have a command foo; how can I tell whether it's a binary, a function, or an alias?
If you're on Bash (or another Bourne-like shell), you can use type. type command will tell you whether command is a shell built-in, an alias (and if so, what it is aliased to), a function (and if so, it will list the function body), or stored in a file (and if so, the path to the file). Note that you can have nested cases, such as an alias to a function. In that case, to find the actual type, you need to unalias first:

unalias command; type command

For more information on a "binary" file, you can do

file "$(type -P command)" 2>/dev/null

This will return nothing if command is an alias, function or shell built-in, but returns more information if it's a script or a compiled binary.

References: Why not use "which"? What to use then?
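For instance, a quick sketch (the names ll and greet are made up; note that a non-interactive bash script needs expand_aliases before aliases take effect at all):

```shell
shopt -s expand_aliases     # scripts don't expand aliases by default

alias ll='ls -l'            # an alias
greet() { echo hello; }     # a function

type ll       # reports that ll is aliased to `ls -l'
type greet    # reports that greet is a function and prints its body
type -P ls    # prints the path of the ls binary, e.g. /bin/ls
```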
How to test if command is alias, function or binary?
In the Advanced Bash-Scripting Guide, in example 27-4, 7th line from the bottom, I read this: "A function runs as a sub-process." I did a test in Bash, and it seems that the above statement is wrong. Searches on this site, the Bash man page, and my search engine haven't shed any light. Do you have the answer, and would you like to explain?
The Advanced Bash-Scripting Guide is not always reliable, and its example scripts contain outdated practices such as using the effectively deprecated backticks for command substitution, i.e., `command` rather than $(command). In this particular case, it's blatantly incorrect. The section on Shell Functions in the (canonical) Bash manual definitively states: "Shell functions are executed in the current shell context; no new process is created to interpret them."
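A quick way to convince yourself of this is a sketch like the following (the names are made up): set a plain, non-exported variable inside a function. If the function ran as a sub-process, the assignment could not survive back in the caller.

```shell
#!/bin/bash
setvar() {
    answer=42    # a plain, non-exported shell variable
}

setvar
echo "$answer"   # prints 42: the assignment survived, so no sub-process was involved
```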
Do functions run as subprocesses in Bash?
Is it possible to call a function which is declared below in bash? Example:

if [ "$input" = "yes" ]; then
    YES_FUNCTION
elif [ "$input" = "no" ]; then
    NO_FUNCTION
else
    exit 0
fi

YES_FUNCTION()
{
    .....
    .....
}

NO_FUNCTION()
{
    .....
    .....
}
Like others have said, you can't do that. But if you want to arrange the code into one file so that the main program is at the top of the file, and other functions are defined below, you can do it by having a separate main function. E.g.

#!/bin/sh

main() {
    if [ "$1" = yes ]; then
        do_task_this
    else
        do_task_that
    fi
}

do_task_this() {
    ...
}

do_task_that() {
    ...
}

main "$@"; exit

When we call main at the end of the file, all functions are already defined. Explicitly passing "$@" to main is required to make the command line arguments of the script visible in the function. The explicit exit on the same line as the call to main is not mandatory, but can be used to prevent a running script from getting messed up if the script file is modified. Without it, the shell would try to continue reading commands from the script file after main returns. (See How to read the whole shell script before executing it?)
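A minimal runnable version of the pattern, with a placeholder task name standing in for the real work:

```shell
#!/bin/sh
# main is defined first but only *called* on the last line, by which
# point do_task_this has been read too, so the forward reference works.
main() {
    do_task_this
}

do_task_this() {
    echo "task done"
}

main "$@"    # prints: task done
```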
call function declared below
I have a function which converts epoch time to date. Here is the definition:

date1(){
    date -d @$1
}

I'd like to be able to write:

$ date1 xxxyyy

where xxxyyy is the parameter I pass into my function, so I can get the corresponding date. I understand I have to add it to either .bash_profile, .profile, or .bashrc and then source it:

$ source file

But I'm not sure which file to put it in. Currently, I have it in .profile, but to run it, I have to do source .profile every time. Ideally, it should be available when the computer starts up, like an environment variable.
From man bash:

    When bash is invoked as an interactive login shell, or as a non-interactive shell with the --login option, it first reads and executes commands from the file /etc/profile, if that file exists. After reading that file, it looks for ~/.bash_profile, ~/.bash_login, and ~/.profile, in that order, and reads and executes commands from the first one that exists and is readable.

In other words, you can put it in any one of ~/.bash_profile, ~/.bash_login or ~/.profile, or any files sourced by either of those. Typically ~/.profile will source ~/.bashrc, which is the "individual per-interactive-shell startup file". To enable it, either start a new shell, run exec $SHELL, or run source ~/.bashrc.
How to add a function to .bash_profile/.profile/bashrc in shell?
Sometimes I need to divide one number by another. It would be great if I could just define a bash function for this. So far, I am forced to use expressions like

echo 'scale=25;65320/670' | bc

but it would be great if I could define a .bashrc function that looked like

divide () {
    bc -d $1 / $2
}
I have a handy bash function called calc:

calc () {
    bc -l <<< "$@"
}

Example usage:

$ calc 65320/670
97.49253731343283582089

$ calc 65320*670
43764400

You can change this to suit yourself. For example:

divide() {
    bc -l <<< "$1/$2"
}

Note: <<< is a herestring, which is fed into the stdin of bc. You don't need to invoke echo.
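If bc happens to be unavailable, awk can play the same role; a sketch (the function name calcawk is made up, and since the expression is interpolated into the awk program, keep the input trusted):

```shell
# calc-style helper using awk's floating-point arithmetic instead of bc
calcawk () {
    awk "BEGIN { print $* }"
}

calcawk '65320/670'   # 97.4925 (awk's default 6-significant-digit output)
calcawk '2*3 + 1'     # 7
```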
Doing simple math on the command line using bash functions: $1 divided by $2 (using bc perhaps)
I want to write the following bash function in a way that it can accept its input from either an argument or a pipe:

b64decode() {
    echo "$1" | base64 --decode; echo
}

Desired usage:

$ b64decode "QWxhZGRpbjpvcGVuIHNlc2FtZQo="
$ b64decode < file.txt
$ b64decode <<< "QWxhZGRpbjpvcGVuIHNlc2FtZQo="
$ echo "QWxhZGRpbjpvcGVuIHNlc2FtZQo=" | b64decode
See Stéphane Chazelas's answer for a better solution. You can use /dev/stdin to read from standard input:

b64decode() {
    if (( $# == 0 )); then
        base64 --decode < /dev/stdin
        echo
    else
        base64 --decode <<< "$1"
        echo
    fi
}

- $# == 0 checks if the number of command line arguments is zero
- base64 --decode <<< "$1" — one can also use a herestring instead of using echo and piping to base64
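With that definition in place, both call styles behave the same; a sketch using a short test string (assuming GNU coreutils base64, where aGVsbG8= decodes to "hello"):

```shell
b64decode() {
    if (( $# == 0 )); then
        base64 --decode < /dev/stdin
        echo
    else
        base64 --decode <<< "$1"
        echo
    fi
}

b64decode "aGVsbG8="                  # argument form: prints hello
printf '%s' "aGVsbG8=" | b64decode    # pipe form: prints hello
```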
Bash function that accepts input from parameter or pipe
Suppose you have an alias go, but want it to do different things in different directories? In one directory it should run cmd1, but in another directory it should run cmd2. By the way, I already have aliases for switching to the above directories, so is it possible to append the go alias assignment to the foo alias?

alias "foo=cd /path/to/foo"

Working in bash(?) on OSX.
It is not completely clear what you are asking, but an alias just expands to what is in the alias. If you have two aliases, you can append the different commands, even aliases:

alias "foo=cd /path/to/foo; go"
alias "foo2=cd /path/to/foo2; go"

In any other situation, you could specify a function in your .bashrc:

function go () {
    if [ "$PWD" == "/path/to/foo" ]; then
        cmd1
    elif [ "$PWD" == "/path/to/foo2" ]; then
        cmd2
    fi
}

In case you have more choices, you would be better off using a case structure.
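With more directories, a case statement keeps the function readable; a sketch (the paths and commands are placeholders):

```shell
go () {
    case "$PWD" in
        /path/to/foo)  cmd1 ;;   # placeholder command for this directory
        /path/to/foo2) cmd2 ;;   # placeholder command for that directory
        *)             echo "no action defined for $PWD" ;;
    esac
}
```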
How to set an alias on a per-directory basis?
I use Ubuntu 16.04 with the native Bash on it. I'm not sure if executing

#!/bin/bash
myFunc() {
    export myVar="myVal"
}
myFunc

is in any sense equivalent to just executing export myVar="myVal". Of course, a global variable should usually be declared outside of a function (a matter of convention, I assume, even if technically possible), but I do wonder about the more exotic cases where one writes some very general function and wants a variable inside it to still be available to everything, anywhere. Would exporting a variable inside a function be identical to exporting it globally, directly in the CLI, making it available to everything in the shell (all subshells, and functions inside them)?
Your script creates an environment variable, myVar, in the environment of the script. The script, as it is currently presented, is functionally exactly equivalent to

#!/bin/bash
export myVar="myVal"

The fact that the export happens in the function body is not relevant to the scope of the environment variable (in this case). It will start to exist as soon as the function is called.

The variable will be available in the script's environment and in the environment of any other process started from the script after the function call. The variable will not exist in the parent process' environment (the interactive shell that you run the script from), unless the script is sourced (with . or source), in which case the whole script will be executing in the interactive shell's environment (which is the purpose of "sourcing" a shell file).

Without the function call itself:

myFunc() {
    export myVar="myVal"
}

sourcing this file would place myFunc in the environment of the calling shell. Calling the function would then create the environment variable.

See also the question What scopes can shell variables have?
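A short script shows the timing described above; a sketch:

```shell
#!/bin/bash
myFunc() {
    export myVar="myVal"
}

echo "before: ${myVar-<unset>}"    # <unset>: the variable does not exist yet
myFunc
echo "after:  $myVar"              # myVal: now in the script's environment
bash -c 'echo "child:  $myVar"'    # myVal: child processes inherit it
```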
Exporting a variable from inside a function equals to global export of that variable?
When I use df or mount, I'm most of all interested in physical disk partitions. Nowadays the output of those commands is overwhelmed by temporary and virtual filesystems, cgroups and other things I am not interested in on a regular basis. My physical partitions in the output always start with '/', so I tried making aliases for df and mount:

alias df1="df | egrep '^/'"
alias mount1="mount | egrep '^/'"

That works OK for mount1 (although it shows the '/' in red), but for df1 I would sometimes like to add the -h option to df and cannot do df1 -h. I would prefer not to have an alias for every option combination I might want to use. Do I really have to look into defining functions in bash (I would prefer not to)? Is there a better solution for df1?
You can solve the df1 argument issue by using the following alias:

alias df1='df --type btrfs --type ext4 --type ext3 --type ext2 --type vfat --type iso9660'

Make sure to add any other type (xfs, fuseblk (for modern NTFS support, as @Pandya pointed out), etc.) you're interested in. With that you can do df1 -h and get the expected result. The filtering works the other way around too; exclude the FS types you don't want to see:

alias df1='df -x tmpfs -x efivarfs'

mount does have a -t option, but you cannot specify it multiple times (only the last is taken), so there I would use:

alias mount1="mount | /bin/grep -E '^/'"

I am using grep -E, as egrep is deprecated, and using /bin/grep makes sure you're not using --colour=auto from an alias for grep/egrep.
show only physical disks when using df and mount
A few times when I read about programming I have come across the "callback" concept. Funnily, I never found an explanation I could call "didactic" or "clear" for the term "callback function" (almost every explanation I read seemed sufficiently different from the others that I felt confused). Does the "callback" concept of programming exist in Bash? If so, please answer with a small, simple, Bash example.
In typical imperative programming, you write sequences of instructions and they are executed one after the other, with explicit control flow. For example:

if [ -f file1 ]; then   # If file1 exists ...
    cp file1 file2      # ... create file2 as a copy of file1
fi

etc.

As can be seen from the example, in imperative programming you follow the execution flow quite easily, always working your way up from any given line of code to determine its execution context, knowing that any instructions you give will be executed as a result of their location in the flow (or their call sites' locations, if you're writing functions).

How callbacks change the flow

When you use callbacks, instead of placing the use of a set of instructions "geographically", you describe when it should be called. Typical examples in other programming environments are cases such as "download this resource, and when the download is complete, call this callback". Bash doesn't have a generic callback construct of this kind, but it does have callbacks, for error-handling and a few other situations; for example (one has to first understand command substitution and Bash exit modes to understand that example):

#!/bin/bash
scripttmp=$(mktemp -d)    # Create a temporary directory (these will usually be created under /tmp or /var/tmp/)

cleanup() {               # Declare a cleanup function
    rm -rf "${scripttmp}" # ... which deletes the temporary directory we just created
}

trap cleanup EXIT         # Ask Bash to call cleanup on exit

If you want to try this out yourself, save the above in a file, say cleanUpOnExit.sh, make it executable and run it:

chmod 755 cleanUpOnExit.sh
./cleanUpOnExit.sh

My code here never explicitly calls the cleanup function; it tells Bash when to call it, using trap cleanup EXIT, i.e. "dear Bash, please run the cleanup command when you exit" (and cleanup happens to be a function I defined earlier, but it could be anything Bash understands).
Bash supports this for all non-fatal signals, exits, command failures, and general debugging (you can specify a callback which is run before every command). The callback here is the cleanup function, which is "called back" by Bash just before the shell exits.

You can use Bash's ability to evaluate shell parameters as commands to build a callback-oriented framework; that's somewhat beyond the scope of this answer, and would perhaps cause more confusion by suggesting that passing functions around always involves callbacks. See Bash: pass a function as parameter for some examples of the underlying functionality. The idea here, as with event-handling callbacks, is that functions can take data as parameters, but also other functions — this allows callers to provide behaviour as well as data. A simple example of this approach could look like

#!/bin/bash
doonall() {
    command="$1"
    shift
    for arg; do
        "${command}" "${arg}"
    done
}

backup() {
    mkdir -p ~/backup
    cp "$1" ~/backup
}

doonall backup "$@"

(I know this is a bit useless since cp can deal with multiple files; it's only for illustration.)

Here we create a function, doonall, which takes another command, given as a parameter, and applies it to the rest of its parameters; then we use that to call the backup function on all the parameters given to the script. The result is a script which copies all its arguments, one by one, to a backup directory.

This kind of approach allows functions to be written with single responsibilities: doonall's responsibility is to run something on all its arguments, one at a time; backup's responsibility is to make a copy of its (sole) argument in a backup directory. Both doonall and backup can be used in other contexts, which allows more code re-use, better tests etc.

In this case the callback is the backup function, which we tell doonall to "call back" on each of its other arguments — we provide doonall with behaviour (its first argument) as well as data (the remaining arguments).
(Note that in the kind of use-case demonstrated in the second example, I wouldn’t use the term “callback” myself, but that’s perhaps a habit resulting from the languages I use. I think of this as passing functions or lambdas around, rather than registering callbacks in an event-oriented system.)
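The "run before every command" hook mentioned above is the DEBUG trap; a small sketch of it (the counter variable is made up):

```shell
#!/bin/bash
count=0
trap 'count=$((count+1))' DEBUG   # callback fired before each simple command

true
true
echo "commands traced so far: $count"   # fired before: true, true, echo
```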
Is the "callback" concept of programming existent in Bash?
I have set up several functions in my .bashrc file. I would like to just display the actual code of a function, and not execute it, to quickly refer to something. Is there any way to see the function definition?
The declare builtin's -f option does that:

bash-4.2$ declare -f apropos1
apropos1 ()
{
    apropos "$@" | grep ' (1.*) '
}

I use type for that purpose; it is shorter to type ;)

bash-4.2$ type apropos1
apropos1 is a function
apropos1 ()
{
    apropos "$@" | grep ' (1.*) '
}
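Relatedly, declare -F (capital F) lists just the names of all currently defined functions, which is handy for discovering what your .bashrc has set up; a sketch:

```shell
apropos1 () {
    apropos "$@" | grep ' (1.*) '
}

declare -F            # one "declare -f name" line per defined function
declare -f apropos1   # the full body, as in the session above
```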
Display the function body in Bash
Elsewhere I have seen a cd function as below:

cd() {
    builtin cd "$@"
}

Why is it recommended to use "$@" instead of "$1"? I created a test directory "r st" and called the script containing this function, and it worked either way:

$ . cdtest.sh "r st"

but

$ . cdtest.sh r st

failed whether I used "$@" or "$1".
Because, according to bash(1), cd takes arguments:

cd [-L|[-P [-e]] [-@]] [dir]
    Change the current directory to dir. If dir is not supplied, ...

so the directory actually may not be in $1, as that could instead be an option such as -L or another flag. How bad is this?

$ cd -L /var/tmp
$ pwd
/var/tmp
$ cd() { builtin cd "$1"; }
$ cd -L /var/tmp
$ pwd
/home/jhqdoe
$

Things could go very awry if you end up not where you expect using cd "$1"…
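You can see the "$@" version handle an option plus a directory correctly; a sketch (/tmp is just a convenient existing directory):

```shell
cd() { builtin cd "$@"; }   # forwards options and the directory alike

cd -L /tmp    # -L and /tmp both reach the builtin
pwd           # the shell really moved to /tmp

unset -f cd   # remove the wrapper again
```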
Why do I need to use cd "$@" instead of cd "$1" when writing a wrapper for cd?
Suppose I have in main.sh:

$NAME="a string"
if [ -f $HOME/install.sh ]; then
    . $HOME/install.sh $NAME
fi

and in install.sh:

echo $1

This is supposed to echo "a string", but it echoes nothing. Why?
Michael Mrozek covers most of the issues, and his fixes will work since you are using Bash. You may be interested in the fact that the ability to source a script with arguments is a bashism. In sh or dash, your main.sh will not echo anything because the arguments to the sourced script are ignored, and $1 will refer to the argument to main.sh. When you source the script in sh, it is as if you just copied and pasted the text of the sourced script into the file from which it was sourced. Consider the following (note: I've made the correction Michael recommended):

$ bash ./test.sh
A String
$ sh ./test.sh

$ sh ./test.sh "HELLO WORLD"
HELLO WORLD
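The bash behaviour is easy to reproduce in isolation; a sketch (the temp-file name is generated, and the sh/dash comment reflects the caveat above):

```shell
tmpscript=$(mktemp)
printf 'echo "$1"\n' > "$tmpscript"

# In bash, arguments after the sourced file name become $1, $2, ...
out=$(. "$tmpscript" "a string")
echo "$out"    # a string  (in plain sh/dash the argument would be ignored)

rm -f "$tmpscript"
```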
Passing variables to a bash script when sourcing it
From the bash manual:

    The rules concerning the definition and use of aliases are somewhat confusing. Bash always reads at least one complete line of input before executing any of the commands on that line. Aliases are expanded when a command is read, not when it is executed. Therefore, an alias definition appearing on the same line as another command does not take effect until the next line of input is read. The commands following the alias definition on that line are not affected by the new alias. This behavior is also an issue when functions are executed. Aliases are expanded when a function definition is read, not when the function is executed, because a function definition is itself a compound command. As a consequence, aliases defined in a function are not available until after that function is executed. To be safe, always put alias definitions on a separate line, and do not use alias in compound commands.

The two sentences "Aliases are expanded when a function definition is read, not when the function is executed" and "aliases defined in a function are not available until after that function is executed" seem to contradict each other. Can you explain what they mean, respectively?
"Aliases are expanded when a function definition is read, not when the function is executed …"

$ echo "The quick brown fox jumps over the lazy dog." > myfile

$ alias myalias=cat

$ myfunc() {
>     myalias myfile
> }

$ myfunc
The quick brown fox jumps over the lazy dog.

$ alias myalias="ls -l"

$ myalias myfile
-rw-r--r-- 1 myusername mygroup 45 Dec 13 07:07 myfile

$ myfunc
The quick brown fox jumps over the lazy dog.

Even though myfunc was defined to call myalias, and I've redefined myalias, myfunc still executes the original definition of myalias, because the alias was expanded when the function was defined. In fact, the shell no longer remembers that myfunc calls myalias; it knows only that myfunc calls cat:

$ type myfunc
myfunc is a function
myfunc ()
{
    cat myfile
}

"… aliases defined in a function are not available until after that function is executed."

$ echo "The quick brown fox jumps over the lazy dog." > myfile

$ myfunc() {
>     alias myalias=cat
> }

$ myalias myfile
-bash: myalias: command not found

$ myfunc

$ myalias myfile
The quick brown fox jumps over the lazy dog.

The myalias alias isn't available until the myfunc function has been executed. (I believe it would be rather odd if defining the function that defines the alias were enough to cause the alias to be defined.)
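One more wrinkle worth knowing when trying the first behaviour outside an interactive session: non-interactive bash does not expand aliases at all unless expand_aliases is enabled, so a script needs something like the sketch below (names made up) before the sessions above can be reproduced in a file:

```shell
#!/bin/bash
shopt -s expand_aliases       # scripts need this for aliases to expand at all

alias greetalias='echo hello' # defined on its own line, before any use
myfunc() {
    greetalias                # expanded NOW, while the definition is read
}
myfunc                        # prints: hello
```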
Alias and functions
I want to make an alias for a multiline command, to call it faster than copying, pasting, and executing it from a text file each time. An example of such a command is this execute-a-remote-updater command:

(
    cd "${program_to_update_dir}"
    wget https://raw.githubusercontent.com/USER/PROJECT/BRANCH/update.sh
    source update.sh
    rm update.sh
)

A couple of years ago I learned that aliases for multiline commands are impossible due to Bash design limitations, but perhaps the program has changed since then and now it is possible. I think that if it is still impossible, perhaps I could make a function with that code, export it to all shell sessions (although I rarely work with any subsessions, and I don't even recall when I last did that), and then call it somehow. Should it be enough, and secure, to just wrap it in an exported and named function in, say, .bashrc or .bash_profile and then call it whenever I need to?
It's not impossible at all:

alias thing='(
    cd "${program_to_update_dir}"
    wget "https://raw.githubusercontent.com/USER/PROJECT/BRANCH/update.sh"
    source update.sh
    rm update.sh
)'

or,

alias thing='( cd "${program_to_update_dir}"; wget "https://raw.githubusercontent.com/USER/PROJECT/BRANCH/update.sh"; source update.sh; rm update.sh )'

The thing with aliases is that quoting may be tricky to get right, as they are text strings, and that they are better suited for really short things, like

alias ls='ls -F'

In almost every other instance, you want a shell function instead:

thing () (
    cd "${program_to_update_dir}"
    wget 'https://raw.githubusercontent.com/USER/PROJECT/BRANCH/update.sh'
    source update.sh
    rm update.sh
)

Or, corrected to not use the update.sh name in the current directory (it may be taken by an unrelated file) and to only run wget if the cd succeeded, and slightly streamlined for the bash shell:

thing () (
    cd "$program_to_update_dir" &&
    source <( wget --quiet -O - 'https://raw.githubusercontent.com/USER/PROJECT/BRANCH/update.sh' )
)

(Note that you may want to run the update.sh script using bash instead of sourcing it, or pipe it to bash -s, or use whatever interpreter its #!-line uses. The fact that you're using source on it is confusing, as you're also running it in a subshell. The effect on the environment of running the script with source, which is usually why one wants to source a script, is lost when the subshell terminates.)

Shell functions may be defined in the same initialization file that you define aliases in, and they are used in the same way as aliases, but are more versatile (they can take arguments etc.). The bash manual contains the statement:

    For almost every purpose, aliases are superseded by shell functions.
How to make a multiline alias in Bash?
In bash scripts I try to keep my variables local to functions wherever I can, and then pass what I need out of functions, like below:

#!/bin/bash
function FUNCTION() {
    local LOCAL="value"
    echo "$LOCAL"    # return this variable
}
GLOBAL=$(FUNCTION)
echo "$GLOBAL"

But is it possible to do this while including the function's own echoes, so that if the function has its own messages to output, I don't have to catch them in a variable?

#!/bin/bash
function FUNCTION() {
    local LOCAL="value"
    echo "$LOCAL"                       # return this variable
    echo "This function is done now"    # do not return this variable
}
GLOBAL=$(FUNCTION)
echo "$GLOBAL"    # should only echo 'value'
Anything that's printed by the function can be captured if you capture the right output stream. So the easiest way to print something and save some other output is to redirect the superfluous output to standard error:

function FUNCTION() {
    local LOCAL="value"
    echo "$LOCAL"
    echo "This function is done now" >&2
}

Another possibility is to log to a file rather than printing log messages directly, for example using something like this:

log() {
    printf '%s\n' "$@" >> my.log
}

That said, Bash functions cannot return variables. The only actual "return" value is the exit code. For this reason (and many others), if you want reliable logging, return values, exception handling and more, you'll want to use a different language like Python, Ruby or Java.
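Calling it shows the split between the two streams; a sketch:

```shell
function FUNCTION() {
    local LOCAL="value"
    echo "$LOCAL"                          # stdout: captured by $(...)
    echo "This function is done now" >&2   # stderr: passes through to the terminal
}

GLOBAL=$(FUNCTION)   # the stderr line is printed here, not captured
echo "$GLOBAL"       # value
```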
Bash Scripting echo locally in a function
I'm trying to create a function in a bash script that executes a command which is supplied to the function via its parameters. Meaning something like this:

special_execute() {
    # Some code

    # Here's the point where the command gets executed
    $@

    # More code
}

special_execute echo "abc"

I already tried $@, "$@", $*, "$*" — how could I do that?
I think it's just a quoting issue when you're passing the arguments into the function. Try calling it like so:

$ special_execute "echo 'abc'"
'abc'

If you don't want the single quotes around abc, then change the quoting like this:

$ special_execute "echo abc"
abc

Debugging

You can wrap the internals of the function so that it echoes out with more verbosity:

$ function special_execute() { set -x; "$@"; set +x; }

Then when you run commands through the function special_execute, you can see what's going on.

ps example:

$ special_execute ps -eaf
+ ps -eaf
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 Aug21 ?        00:00:01 /sbin/init
root         2     0  0 Aug21 ?        00:00:00 [kthreadd]
...

perl example:

$ special_execute perl -MTime::HiRes=sleep -le 'for(1..10) { print; sleep 0.05; }'
+ perl -MTime::HiRes=sleep -le 'for(1..10) { print; sleep 0.05; }'
1
2
3
4
5
6
7
8
9
10
+ set +x

Parsing argument $1

You could do something like this to parse any arguments passed in as $1:

$ function special_execute() { [ "$1" -eq "-123" ] && echo "flagY" || echo "flagN"; shift; set -x; "$@"; set +x; }

Example with debugging enabled:

$ special_execute -123 perl -MTime::HiRes=sleep -le 'for(1..5) { print; sleep 0.05; }'
flagY
+ perl -MTime::HiRes=sleep -le 'for(1..5) { print; sleep 0.05; }'
1
2
3
4
5
+ set +x

with debugging off - -123:

$ special_execute -123 perl -MTime::HiRes=sleep -le 'for(1..5) { print; sleep 0.05; }'
flagY
1
2
3
4
5

with debugging off - -456:

$ special_execute -456 perl -MTime::HiRes=sleep -le 'for(1..5) { print; sleep 0.05; }'
flagN
1
2
3
4
5
Execute command supplied by function parameters
I have a function for quickly making a new SVN branch, which looks like so:

function svcp() {
    svn copy "repoaddress/branch/$1.0.x" "repoaddress/branch/dev/$2" -m "dev branch for $2";
}

which I use to quickly make a new branch without having to look up and copy/paste the addresses and some other stuff. However, for the message (-m option), I'd like it so that if I provide a third parameter, then that is used as the message; otherwise the 'default' message of "dev branch for $2" is used. Can someone explain how this is done?
function svcp() {
    msg=${3:-dev branch for $2}
    svn copy "repoaddress/branch/$1.0.x" "repoaddress/branch/dev/$2" -m "$msg";
}

The variable msg is set to $3 if $3 is non-empty; otherwise it is set to the default value of dev branch for $2. $msg is then used as the argument for -m.
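The ${3:-...} idiom is easy to try in isolation; a sketch with echo standing in for the svn call (the function name svcp_msg is made up):

```shell
svcp_msg() {
    msg=${3:-dev branch for $2}   # $3 if given and non-empty, else the default
    echo "$msg"
}

svcp_msg 1.2 mybranch                    # dev branch for mybranch
svcp_msg 1.2 mybranch "custom message"   # custom message
```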
Optional parameters in bash function
I have a bash script as below, which installs zookeeper, but only if it is not installed already:

## zookeeper
installZook(){
    ZOOK_VERSION="3.4.5"
    ZOOK_TOOL="zookeeper-${ZOOK_VERSION}"
    ZOOK_DOWNLOAD_URL="http://www.us.apache.org/dist/zookeeper/${ZOOK_TOOL}/${ZOOK_TOOL}.tar.gz"

    if [ -e $DEFAULT_INSTALLATION_DEST/${ZOOK_TOOL} ]; then
        echo "${ZOOK_TOOL} already installed";
        exit 1; # <<<< here
    elif [ ! -e $DEFAULT_SOURCE_ROOT/${ZOOK_TOOL}.tar.gz ]; then
        wgetIt $ZOOK_DOWNLOAD_URL
    else
        echo "[info] : $DEFAULT_SOURCE_ROOT/$ZOOK_TOOL already exists"
    fi

    sudo mkdir -p /var/lib/zookeeper
    sudo mkdir -p /var/log/zookeeper

    tarIt "$DEFAULT_SOURCE_ROOT/$ZOOK_TOOL.tar.gz"
    sudo chmod 777 -R $DEFAULT_INSTALLATION_DEST/$ZOOK_TOOL
    cp $DEFAULT_INSTALLATION_DEST/$ZOOK_TOOL/conf/zoo_sample.cfg $DEFAULT_INSTALLATION_DEST/$ZOOK_TOOL/conf/zoo.cfg

    cat >> ~/.bash_profile <<'EOF'
###############################
########### ZOOK ##############
###############################
ZOOK_HOME=/usr/local/zookeper-3.4.5
export ZOOK_HOME
export PATH=$PATH:$ZOOK_HOME/bin
EOF
}

At the line marked <<<< here, if zookeeper is already installed, what I want is to exit the script below it. But using exit exits the terminal itself.
TL;DR: Use return instead of exit, AND run your script with source your-script.sh (aka . your-script.sh).

Full details

If you launch a script containing an exit statement, you have to launch it as a child of your current shell. If you launch it inside the current shell started along with your terminal session (using . ./<scriptname>), any exit will close the main shell, the one started along with your terminal session.

If you launch your script like bash ./<scriptname> (or with any other shell instead of bash), then exit will stop your child shell and not the one used by your terminal. If your script has executable permissions, executing it directly without giving the name of the shell will execute it in a child shell too.

Using return instead of exit will allow you to still launch your script using . ./<scriptname> without closing the current shell. But you can use return only to exit from a function or from a sourced script (a script run using the . ./<scriptname> syntax).
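A sketch of the return behaviour (the snippet file is generated on the fly): sourcing runs the snippet in the current shell, and return stops the snippet without killing that shell.

```shell
tmpsnippet=$(mktemp)
printf 'echo before\nreturn 3\necho after\n' > "$tmpsnippet"

# 'return 3' ends the sourced snippet with status 3; the "after"
# line never runs, and the current shell stays alive.
. "$tmpsnippet" && rc=0 || rc=$?
echo "snippet returned $rc"   # snippet returned 3

rm -f "$tmpsnippet"
```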
Exit the bash function, not the terminal
I'd like to write a function that I can call from a script with many different variables. For some reason I'm having a lot of trouble doing this. The examples I've read always just use a global variable, but that wouldn't make my code much more readable, as far as I can see. Intended usage example:

#!/bin/bash
# myscript.sh
var1=$1
var2=$2
var3=$3
var4=$4

add(){
    result=$para1 + $para2
}

add $var1 $var2
add $var3 $var4
# end of the script

./myscript.sh 1 2 3 4

I tried using $1 and such in the function, but then it just takes the global ones the whole script was called with. Basically, what I'm looking for is something like $1, $2 and so on, but in the local context of a function — like, you know, functions work in any proper language.
To call a function with arguments: function_name "$arg1" "$arg2" The function refers to passed arguments by their position (not by name), that is $1, $2, and so forth. $0 is the name of the script itself. Example: #!/bin/bash add() { result=$(($1 + $2)) echo "Result is: $result" } add 1 2 Output ./script.sh Result is: 3
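If the goal (as in the question) is to get a value back out of the function rather than just print it, a common sketch is to echo the result and capture it with command substitution; `add` here mirrors the question's function with the arithmetic fixed:

```shell
add() {
  local result=$(($1 + $2))   # $1/$2 are the function's own arguments
  echo "$result"
}

sum=$(add 3 4)
echo "sum is $sum"            # prints: sum is 7
```

`local` keeps `result` from leaking into the caller's scope, which addresses the "global variable" complaint in the question.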
How to pass parameters to function in a bash script?
1,328,908,576,000
The problem is that when watch is executed it runs sh and I get this error: sh: 1: func1: not found here is the code: #!/bin/bash func1(){ echo $1 } export -f func1 watch func1
The default shell for watch is /bin/sh. Shells will not inherit exported variables or functions from other types of shell. If your system does not symlink /bin/sh to /bin/bash (or your current shell) then you can instruct watch to exec your shell by using -x or --exec: watch -x bash -c "my_func" or watch --exec bash -c "my_func" This is different to watch bash -c "test_watch" (excluding the --exec) as it doesn't use /bin/sh to spawn the child process (in this case, bash), maintaining bash all the way down. As caveated in other answers, this can get messy if exporting functions that are highly coupled to the current environment (i.e. requiring other variables or functions). An example: test_watch() { echo 'Working!'; } export -f test_watch watch --exec bash -c "test_watch" gives the familiar Every 2.0s: bash -c test_watch Thu Mar 31 11:15:56 2016 Working!
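The mechanism that --exec relies on can be seen without watch at all: an exported function is re-imported by a child bash, so `bash -c` finds it (a rough sketch; it assumes bash is the shell running it, since `export -f` is bash-specific):

```shell
my_func() { echo 'Working!'; }
export -f my_func          # bash-only: puts the function in the environment
bash -c 'my_func'          # child bash re-imports it and prints: Working!
```

A child /bin/sh (when it is dash or similar) would not import the function, which is exactly why plain `watch my_func` fails.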
How to force watch to run under bash
1,328,908,576,000
Using extended Unicode characters is (no-doubt) useful for many users. Simpler shells (ash (busybox), dash) and ksh do fail with: tést() { echo 34; } tést But bash, mksh, lksh, and zsh seem to allow it. I am aware that POSIX valid function names use this definition of Names. That means this regex: [a-zA-Z_][a-zA-Z0-9_]* However, in the first link it is also said: An implementation may allow other characters in a function name as an extension. The questions are: Is this accepted and documented? Where? For which shells (if any)? Related questions: Its possible use special characters in a shell function name? I am not interested in using meta-characters (>) in function names. Upstart and bash function names containing “-” I do not believe that an operator (subtraction "-") should be part of a name.
Since the POSIX documentation allows it as an extension, there's nothing preventing an implementation from supporting that behavior. A simple check (run in zsh): $ for shell in /bin/*sh 'busybox sh'; do printf '[%s]\n' $shell $=shell -c 'á() { :; }' done [/bin/ash] /bin/ash: 1: Syntax error: Bad function name [/bin/bash] [/bin/dash] /bin/dash: 1: Syntax error: Bad function name [/bin/ksh] [/bin/lksh] [/bin/mksh] [/bin/pdksh] [/bin/posh] /bin/posh: á: invalid function name [/bin/yash] [/bin/zsh] [busybox sh] sh: syntax error: bad function name shows that bash, zsh, yash, ksh93 (which ksh is linked to on my system), pdksh and its derivatives allow multi-byte characters in function names. yash is designed to support multibyte characters from the beginning, so there's no surprise it worked. The other documentation you can refer to is ksh93's: A blank is a tab or a space. An identifier is a sequence of letters, digits, or underscores starting with a letter or underscore. Identifiers are used as components of variable names. A vname is a sequence of one or more identifiers separated by a . and optionally preceded by a .. Vnames are used as function and variable names. A word is a sequence of characters from the character set defined by the current locale, excluding non-quoted metacharacters. So setting the C locale: $ export LC_ALL=C $ á() { echo 1; } ksh: á: invalid function name makes it fail.
Shell valid function name characters
1,328,908,576,000
Sometimes I define a function that shadows an executable and tweaks its arguments or output. So the function has the same name as the executable, and I need a way to run the executable from the function without calling the function recursively. For example, to automatically run the output of fossil diff through colordiff and less -R I use: function fossil () { local EX=$(which fossil) if [ -z "$EX" ] ; then echo "Unable to find 'fossil' executable." >&2 return 1 fi if [ -t 1 ] && [ "$1" == "diff" ] ; then "$EX" "$@" | colordiff | less -R return fi "$EX" "$@" } If I were sure about the location of the executable, I could simply type /usr/bin/fossil. Bash recognizes from the / that the command is an executable, not a function. But since I don't know the exact location, I have to resort to calling which and checking the result. Is there a simpler way?
Use the command shell builtin: bash-4.2$ function date() { echo 'at the end of days...'; } bash-4.2$ date at the end of days... bash-4.2$ command date Mon Jan 21 16:24:33 EET 2013 bash-4.2$ help command command: command [-pVv] command [arg ...] Execute a simple command or display information about commands. Runs COMMAND with ARGS suppressing shell function lookup, or display information about the specified COMMANDs. Can be used to invoke commands on disk when a function with the same name exists.
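Applied to the question, the whole which dance collapses to `command fossil "$@"` inside the wrapper. A runnable stand-in of the same pattern (shadowing `uname` rather than fossil, so the sketch works anywhere):

```shell
uname() {
  printf 'wrapped: '
  command uname            # runs the real executable, not this function
}

uname                      # prints e.g.: wrapped: Linux
```

The same `command uname` call inside or outside the function always reaches the real binary, with no recursion and no need to hard-code a path.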
Running an executable in PATH with the same name as an existing function
1,328,908,576,000
Is there a way to test whether a shell function exists that will work both for bash and zsh?
If you want to check that there's a currently defined (or at least potentially marked for autoloading) function by the name foo regardless of whether a builtin/executable/keyword/alias may also be available by that name, you could do: if typeset -f foo > /dev/null; then echo there is a foo function fi Though note that if there's a keyword or alias called foo as well, it would take precedence over the function (when not quoted). The above should work in ksh (where it comes from), zsh and bash.
Test for function's existence that can work on both bash and zsh?
1,328,908,576,000
I have defined a bash function in my ~/.bashrc file. This allows me to use it in shell terminals. However, it does not seem to exist when I call it from within a script. How can I define a bash function to be used by scripts as well?
~/.bash_profile and ~/.bashrc are not read by scripts, and functions are not exported by default. To do so, you can use export -f like so: $ cat > script << 'EOF' #!/bin/bash foo EOF $ chmod a+x script $ ./script ./script: line 2: foo: command not found $ foo() { echo "works" ; } $ export -f foo $ ./script works export -f foo could also be called in ~/.bash_profile to make this function available to scripts after login. Be warned that export -f is not portable. A better solution would be to source the file containing the function using . file. This is much more portable, and doesn't rely on your environment being set up in a particular way.
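A minimal sketch of the sourcing approach (the /tmp file paths here are throwaway examples, not a recommended layout):

```shell
# A function "library"...
cat > /tmp/myfuncs.sh <<'EOF'
greet() { echo "hello $1"; }
EOF

# ...and a script that pulls it in with `.` before using it.
cat > /tmp/myscript <<'EOF'
#!/bin/bash
. /tmp/myfuncs.sh
greet world
EOF
chmod +x /tmp/myscript

/tmp/myscript              # prints: hello world
```

Unlike export -f, this works in any POSIX shell and makes the script's dependency on the function file explicit.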
How to define a Bash function that can be used by different scripts
1,328,908,576,000
I’d like to implement a function in Bash which increases (and returns) a count with every call. Unfortunately this seems non-trivial since I’m invoking the function inside a subshell and it consequently cannot modify its parent shell’s variables. Here’s my attempt: PS_COUNT=0 ps_count_inc() { let PS_COUNT=PS_COUNT+1 echo $PS_COUNT } ps_count_reset() { let PS_COUNT=0 } This would be used as follows (and hence my need to invoke the functions from a subshell): PS1='$(ps_count_reset)> ' PS2='$(ps_count_inc) ' That way, I’d have a numbered multi-line prompt: > echo 'this 1 is 2 a 3 test' Cute. But due to the above mentioned limitation doesn’t work. A non-working solution would be to write the count to a file instead of a variable. However, this would create a conflict between multiple, simultaneously running sessions. I could append the process ID of the shell to the file name, of course. But I’m hoping there’s a better solution which won’t clutter my system with lots of files.
To get the same output you note in your question, all that is needed is this: PS1='${PS2c##*[$((PS2c=0))-9]}- > ' PS2='$((PS2c=PS2c+1)) > ' You need not contort. Those two lines will do it all in any shell that pretends to anything close to POSIX compatibility. - > cat <<HD 1 > line 1 2 > line $((PS2c-1)) 3 > HD line 1 line 2 - > echo $PS2c 0 But I liked this. And I wanted to demonstrate the fundamentals of what makes this work a little better. So I edited this a little. I stuck it in /tmp for now but I think I'm going to keep it for myself, too. It's here: cat /tmp/prompt PROMPT SCRIPT: ps1() { IFS=/ set -- ${PWD%"${last=${PWD##/*/}}"} printf "${1+%c/}" "$@" printf "$last > " } PS1='$(ps1)${PS2c##*[$((PS2c=0))-9]}' PS2='$((PS2c=PS2c+1)) > ' Note: having recently learned of yash, I built it yesterday. For whatever reason it doesn't print the first byte of every argument with the %c string - though the docs were specific about wide-char extensions for that format and so it maybe related - but it does just fine with %.1s That's the whole thing. There are two main things going on up there. And this is what it looks like: /u/s/m/man3 > cat <<HERE 1 > line 1 2 > line 2 3 > line $((PS2c-1)) 4 > HERE line 1 line 2 line 3 /u/s/m/man3 > PARSING $PWD Every time $PS1 is evaluated it parses and prints $PWD to add to the prompt. But I don't like the whole $PWD crowding my screen, so I want just the first letter of every breadcrumb in the current path down to the current directory, which I'd like to see in full. Like this: /h/mikeserv > cd /etc /etc > cd /usr/share/man/man3 /u/s/m/man3 > cd / / > cd ~ /h/mikeserv > There are a few steps here: IFS=/ we're going to have to split the current $PWD and the most reliable way to do that is with $IFS split on /. 
No need to bother with it at all afterward - all splitting from here on out will be defined by the shell's positional parameter $@ array in the next command like: set -- ${PWD%"${last=${PWD##/*/}}"} So this one's a little tricky, but the main thing is that we're splitting $PWD on / symbols. I also use parameter expansion to assign to $last everything after any value occurring between the left-most and right-most / slash. In this way I know that if I'm just at / and have only one / then $last will still equal the whole $PWD and $1 will be empty. This matters. I also strip $last from the tail end of $PWD before assigning it to $@. printf "${1+%c/}" "$@" So here - as long as ${1+is set} we printf the first %character of each our shell's arguments - which we've just set to each directory in our current $PWD - less the top directory - split on /. So we're essentially just printing the first character of every directory in $PWD but the top one. It's important though to realize this only happens if $1 gets set at all, which will not happen at root / or at one removed from / such as in /etc. printf "$last > " $last is the variable I just assigned to our top directory. So now this is our top directory. It prints whether or not the last statement did. And it takes a neat little > for good measure. BUT WHAT ABOUT THE INCREMENT? And then there's the matter of the $PS2 conditional. I showed earlier how this can be done which you can still find below - this is fundamentally an issue of scope. But there's a little more to it unless you want to start doing a bunch of printf \backspaces and then trying to balance out their character count... ugh. So I do this: PS1='$(ps1)${PS2c##*[$((PS2c=0))-9]}' Again, ${parameter##expansion} saves the day. It's a little strange here though - we actually set the variable while we strip it of itself. We use its new value - set mid-strip - as the glob from which we strip. You see? 
We ##*strip all from the head of our increment variable to the last character which can be anything from [$((PS2c=0))-9]. We're guaranteed in this way not to output the value, and yet we still assign it. It's pretty cool - I've never done that before. But POSIX also guarantees us that this is the most portable way this can be done. And it's thanks to POSIX-specified ${parameter} $((expansion)) that keeps these definitions in the current shell without requiring that we set them in a separate subshell, regardless of where we evaluate them. And this is why it works in dash and sh just as well as it does in bash and zsh. We use no shell/terminal dependent escapes and we let the variables test themselves. That's what makes portable code quick. The rest is fairly simple - just increment our counter for every time $PS2 is evaluated until $PS1 once again resets it. Like this: PS2='$((PS2c=PS2c+1)) > ' So now I can: DASH DEMO ENV=/tmp/prompt dash -i /h/mikeserv > cd /etc /etc > cd /usr/share/man/man3 /u/s/m/man3 > cat <<HERE 1 > line 1 2 > line 2 3 > line $((PS2c-1)) 4 > HERE line 1 line 2 line 3 /u/s/m/man3 > printf '\t%s\n' "$PS1" "$PS2" "$PS2c" $(ps1)${PS2c##*[$((PS2c=0))-9]} $((PS2c=PS2c+1)) > 0 /u/s/m/man3 > cd ~ /h/mikeserv > SH DEMO It works the same in bash or sh: ENV=/tmp/prompt sh -i /h/mikeserv > cat <<HEREDOC 1 > $( echo $PS2c ) 2 > $( echo $PS1 ) 3 > $( echo $PS2 ) 4 > HEREDOC 4 $(ps1)${PS2c##*[$((PS2c=0))-9]} $((PS2c=PS2c+1)) > /h/mikeserv > echo $PS2c ; cd / 0 / > cd /usr/share /u/share > cd ~ /h/mikeserv > exit As I said above, the primary problem is that you need to consider where you do your computation. You don't get the state in the parent shell - so you don't compute there. You get the state in the subshell - so that's where you compute. But you do the definition in the parent shell. 
ENV=/dev/fd/3 sh -i 3<<\PROMPT ps1() { printf '$((PS2c=0)) > ' ; } ps2() { printf '$((PS2c=PS2c+1)) > ' ; } PS1=$(ps1) PS2=$(ps2) PROMPT 0 > cat <<MULTI_LINE 1 > $(echo this will be line 1) 2 > $(echo and this line 2) 3 > $(echo here is line 3) 4 > MULTI_LINE this will be line 1 and this line 2 here is line 3 0 >
Stateful bash function
1,328,908,576,000
Data 1 \begin{document} 3 Code #!/bin/bash function getStart { local START="$(awk '/begin\{document\}/{ print NR; exit }' data.tex)" echo $START } START2=$(getStart) echo $START2 which returns 2 but I want 3. Following this answer about how to add numbers in a bash script, I unsuccessfully changed the end to: START2=$((getStart+1)) How can you increment a local variable in a Bash script?
I'm getting 2 from your code. Nevertheless, you can use the same technique for any variable or number: local start=1 (( start++ )) or (( ++start )) or (( start += 1 )) or (( start = start + 1 )) or just local start=1 echo $(( start + 1 )) etc.
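As for the question's failed START2=$((getStart+1)): inside $((...)) a bare word is treated as a variable name, not a function call, so it expanded an unset variable. Run the function first, then do the arithmetic on its output (a sketch; `getStart` here is a stand-in for the real awk lookup):

```shell
getStart() { echo 2; }          # stand-in for the awk line-number lookup

START2=$(( $(getStart) + 1 ))   # capture the output, then add one
echo "$START2"                  # prints: 3
```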
How to increment local variable in Bash?
1,328,908,576,000
So I am editing bashrc constantly, and I have a terminal open with a working function definition, although bashrc has been updated with a wrong function definition. (Because the definitions do not change until I source the updated bashrc) How can I look up the working function definition in this case? For example, if I type: alias alias_name I can see what the definition is for that alias. Is there a similar command for functions? If not, is there a command to output the entire bashrc that the current terminal is using?
typeset -f function displays the indicated function's current definition. It works in ksh (where it originated), bash and zsh. (n.b. in zsh, type -f, which, functions and whence -f also show the function definition.)
View shell function's current definition
1,328,908,576,000
I encountered this error when updating bash for the CVE-2014-6271 security issue: # yum update bash Running transaction (shutdown inhibited) Updating : bash-4.2.47-4.fc20.x86_64 /bin/sh: error importing function definition for `some-function'
[edited after 1st comment from: @chepner - thanks!] /bin/bash allows hyphens in function names, /bin/sh (Bourne shell) does not. Here, the offending "some-function" had been exported by bash, and bash called yum which called /bin/sh which reported the error above. fix: rename shell functions to not have hyphens man bash says that bash identifiers may consist: "only of alphanumeric characters and underscores" The /bin/sh error is much more explicit: some-function () { :; } sh: `some-function': not a valid identifier
/bin/sh: error importing function definition for `some-function'
1,328,908,576,000
Is there any way I can redefine a bash function in terms of its old definition? For example I would like to add the following block of code to the preamble of the function command_not_found_handle (), # Check if $1 is instead a bash variable and print value if it is local VAL=$(eval echo \"\$$1\") if [ -n "$VAL" ] && [ $# -eq 1 ]; then echo "$1=$VAL" return $? fi It is currently defined in /etc/profile.d/PackageKit.sh and sourced by bash start-up scripts. That way I can query the value of an environment variable at the command prompt by simply typing the variable name (provided that no command by that name exists). e.g. user@hostname ~:$ LANG LANG=en_AU.utf8 I know I could just copy and paste the current definition and add my own changes in ~/.bashrc, but I am looking for a more elegant way that involves code reuse. Better ways of achieving my goal or code improvements/extensions are also appreciated.
You can print out the current definition of the function, and then include it in a function definition inside an eval clause. current_definition=$(declare -f command_not_found_handle) current_definition=${current_definition#*\{} current_definition=${current_definition%\}} prefix_to_add=$(cat <<'EOF' # insert code here (no special quoting required) EOF ) suffix_to_add=$(cat <<'EOF' # insert code here (no special quoting required) EOF ) eval "command_not_found_handle () { $prefix_to_add $current_definition $suffix_to_add }" Another approach, which I find clearer, is to define the original function under a new name, and call that from your definition. This only works if you don't need to act on the local variables of the original definition. eval "original_$(declare -f command_not_found_handle)" command_not_found_handle () { … original_command_not_found_handle … }
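A toy run of the second approach (the function names here are invented for the demo, not taken from PackageKit.sh):

```shell
greet() { echo "hello $1"; }          # pretend this is the stock definition

eval "original_$(declare -f greet)"   # original_greet now holds the old body

greet() {                             # redefine in terms of the old one
  echo "(before)"
  original_greet "$@"
  echo "(after)"
}

greet world                           # prints the three lines in order
```

This works because `declare -f greet` prints a complete, re-evaluatable definition starting with the function's name, so prefixing `original_` renames it.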
How do I redefine a bash function in terms of old definition?
1,328,908,576,000
Here is my code: function update_profile { echo "1. Update Name" echo "2. Update Age" echo "3. Update Gender" echo "Enter option: " read option case $option in 1) update_name ;; 2) update_age ;; 3) update_gender ;; esac function update_name { echo "Enter new name: " read name } } Just want to make sure it's possible to do it this way. I know I could throw all the code into the case, but it would be messy, so I was thinking of creating a standalone function, within a function, to be called when needed to perform its commands.
Yes, it's possible. It is even possible to nest a function within another function, although this is not very useful. f1 () { f2 () # nested { echo "Function \"f2\", inside \"f1\"." } } f2 # Gives an error message. # Even a preceding "declare -f f2" wouldn't help. echo f1 # Does nothing, since calling "f1" does not automatically call "f2". f2 # Now, it's all right to call "f2", #+ since its definition has been made visible by calling "f1". # Thanks, S.C. Source: The Linux Documentation Project
Is it possible to add a function within a function?
1,328,908,576,000
I have a Bash script that's getting quite long. It would be nice if I could list all the functions in it. Even better would be listing the name of the function and any documentation about its usage, eg parameters.
In general, it's impossible to list all functions without executing the script, because a function could be declared by something like eval $(/some/program). But if the functions are declared “normally”, you can search for things that look like function definitions. grep -E '^[[:space:]]*([[:alnum:]_]+[[:space:]]*\(\)|function[[:space:]]+[[:alnum:]_]+)' myscript There's no function typing or documentation facility in shell scripts, so any documentation would have to come from comments. Here's a crude Perl snippet that extracts commonly-formatted function definitions as well as immediately preceding or succeeding comments. perl -0777 -ne ' while (/^((?:[ \t]*\#.*\n)*) # preceding comments [ \t]*(?:(\w+)[ \t]*\(\)| # foo () function[ \t]+(\w+).*) # function foo ((?:\n[ \t]+\#.*)*) # following comments /mgx) { $name = "$2$3"; $comments = "$1$4"; $comments =~ s/^[ \t]*#+/#/mg; chomp($comments); print "$name\n$comments\n"; }' myscript A more precise way to print function names, if you can run the script without causing any side effects, or if you can isolate all function definitions in a subscript, is to run the script and then make bash print out all the function definitions. Unlike the text search method above, this includes weirdly-formatted function definitions and excludes false positives (e.g. in here documents), but this cannot find comments. bash -c '. myscript; typeset -f'
How do you list all functions and aliases in a specific script?
1,328,908,576,000
I'm trying to create a bash alias, where the alias itself has a space in it. The idea is that the alias (i.e. con) stands for sudo openvpn --config /path/to/my/openvpn/configs/. Which results in a readable command when the con alias is used. i.e: `con uk.conf` == `sudo openvpn --config /path/to/my/openvpn/configs/uk.conf` I understand that I can't declare the alias like this: con ="sudo openvpn --config /path/to/my/openvpn/configs/". Would bash functions work in this scenario? I've never heard of that, but came across the idea while researching a solution for this minor issue.
Yes, you will need to use a function. An alias would only work if you wanted to append a parameter: any arguments given to an alias are passed to the aliased program as separate words, not concatenated onto what is there. To illustrate: $ alias foo='echo bar' $ foo bar $ foo baz bar baz As you can see, what was echoed was bar baz and not barbaz. Since you want to concatenate the value you pass onto the existing path, you'll need something like: function con(){ sudo openvpn --config /path/to/my/openvpn/configs/"$@"; } Add the line above to your ~/.bashrc and you're ready to go.
Bash alias with a space as a part of the command
1,512,259,936,000
I defined the function f in Bash based on the example here (under "An option with an argument"): f () { while getopts ":a:" opt; do case $opt in a) echo "-a was triggered, Parameter: $OPTARG" >&2 ;; \?) echo "Invalid option: -$OPTARG" >&2 return 1 ;; :) echo "Option -$OPTARG requires an argument." >&2 return 1 ;; esac done } Whereas they use a script, I directly define the function in the shell. When I first launch Bash and define the function, everything works: f -a 123 prints -a was triggered, Parameter: 123. But when I run the exact same line a second time, nothing is printed. What's causing this behavior? It happens in Bash 3.2 and 4.3, but it works fine in Zsh 5.1. This is surprising because the example was supposed to be for Bash, not for Zsh.
bash's getopts uses the shell variable OPTIND to keep track of the last option argument processed. OPTIND is not reset automatically each time you call getopts in the same shell session; it is only initialized when the shell is invoked. So from the second time you called getopts with the same arguments in the same session, OPTIND was unchanged; getopts thought it had already done the job and did nothing. You can reset OPTIND manually to make it work: $ OPTIND=1 $ f -a 123 -a was triggered, Parameter: 123 or just put the function into a script and call the script multiple times. zsh's getopts is slightly different: OPTIND is normally reset to 1 upon exit from a shell function.
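A variant that avoids the manual reset entirely (not in the original answer, but a common idiom): declare OPTIND local to the function, so every call starts parsing from the first argument:

```shell
f() {
  local OPTIND=1 opt          # fresh parse state on every call
  while getopts ":a:" opt; do
    case $opt in
      a) echo "-a was triggered, Parameter: $OPTARG" ;;
    esac
  done
}

f -a 123                      # works
f -a 456                      # still works on the second call
```

Since `local` restores OPTIND when the function returns, this also keeps the function from clobbering any getopts parsing the caller may be doing.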
Bash function with `getopts` only works the first time it's run
1,512,259,936,000
In bash, I can write: caller 0 and receive the caller context's: Line number Function Script name This is extremely useful for debugging. Given: yelp () { caller 0; } I can then write yelp to see what code lines are being reached. I can implement caller 0 in bash as: echo "${BASH_LINENO[0]} ${FUNCNAME[1]} ${BASH_SOURCE[1]" How can I get the same output as caller 0 in zsh?
I don't think there's a builtin command equivalent, but some combination of these four variables from the zsh/Parameter module can be used: funcfiletrace This array contains the absolute line numbers and corresponding file names for the point where the current function, sourced file, or (if EVAL_LINENO is set) eval command was called. The array is of the same length as funcsourcetrace and functrace, but differs from funcsourcetrace in that the line and file are the point of call, not the point of definition, and differs from functrace in that all values are absolute line numbers in files, rather than relative to the start of a function, if any. funcsourcetrace This array contains the file names and line numbers of the points where the functions, sourced files, and (if EVAL_LINENO is set) eval commands currently being executed were defined. The line number is the line where the ‘function name’ or ‘name ()’ started. In the case of an autoloaded function the line number is reported as zero. The format of each element is filename:lineno. For functions autoloaded from a file in native zsh format, where only the body of the function occurs in the file, or for files that have been executed by the source or ‘.’ builtins, the trace information is shown as filename:0, since the entire file is the definition. The source file name is resolved to an absolute path when the function is loaded or the path to it otherwise resolved. Most users will be interested in the information in the funcfiletrace array instead. funcstack This array contains the names of the functions, sourced files, and (if EVAL_LINENO is set) eval commands. currently being executed. The first element is the name of the function using the parameter. 
The standard shell array zsh_eval_context can be used to determine the type of shell construct being executed at each depth: note, however, that is in the opposite order, with the most recent item last, and it is more detailed, for example including an entry for toplevel, the main shell code being executed either interactively or from a script, which is not present in $funcstack. functrace This array contains the names and line numbers of the callers corresponding to the functions currently being executed. The format of each element is name:lineno. Callers are also shown for sourced files; the caller is the point where the source or ‘.’ command was executed. Comparing: foo.bash: #! /bin/bash yelp() { caller 0 } foo () { yelp } foo foo.zsh: #! /bin/zsh yelp() { print -l -- $funcfiletrace - $funcsourcetrace - $funcstack - $functrace } foo () { yelp } foo The results: $ bash foo.bash 7 foo foo.bash $ zsh foo.zsh foo.zsh:7 foo.zsh:10 - foo.zsh:2 foo.zsh:6 - yelp foo - foo:1 foo.zsh:10 So, the corresponding values are in ${funcfiletrace[1]} and ${funcstack[-1]}. Modifying yelp to: yelp() { print -- $funcfiletrace[1] $funcstack[-1] } The output is: foo.zsh:7 foo which is quite close to bash's 7 foo foo.bash
function's calling context in zsh: equivalent of bash `caller`
1,512,259,936,000
I must be missing something incredibly simple about how to do this, but I have a simple script: extract () { if [ -f $1 ] ; then case $1 in *.tar.bz2) tar xvjf $1 ;; *.tar.gz) tar xvzf $1 ;; *.tar.xz) tar xvJf $1 ;; *.bz2) bunzip2 $1 ;; *.rar) unrar x $1 ;; *.gz) gunzip $1 ;; *.tar) tar xvf $1 ;; *.tbz2) tar xvjf $1 ;; *.tgz) tar xvzf $1 ;; *.zip) unzip $1 ;; *.Z) uncompress $1 ;; *.7z) 7z x $1 ;; *.xz) unxz $1 ;; *.exe) cabextract $1 ;; *) echo "\'$1': unrecognized file compression" ;; esac else echo "\'$1' is not a valid file" fi } The script works, but I can't seem to get it to be executable by default when I log in. I have consulted this very thorough answer: How to define and load your own shell function in zsh, and I have managed to get my $fpath to show the appropriate directory where I have stored the function. In my .zshrc profile, I have added fpath=( ~/.local/bin "${fpath[@]}" ) to the bottom, which is the path where my functions live. When I input echo $fpath, I get: /home/me/.local/bin /home/me/.oh-my-zsh/plugins/git /home/me/.oh-my-zsh/functions /home/me/.oh-my-zsh/completions ... However, it does not work unless I explicitly type autoload -Uz extract each time when I log in. Is there a way I can get the whole directory to autoload when I log in?
You're mixing up scripts and functions. Making a script A script is a standalone program. It may happen to be written in zsh, but you can invoke it from anywhere, not just from a zsh command line. If you happen to run a script written in zsh from a zsh command line or another zsh script, that's a coincidence that doesn't affect the script's behavior. The script runs in its own process, it doesn't influence its parent (e.g. it can't change variables or the current directory). Your code accomplishes a standalone task which can be invoked from anywhere and doesn't need to access the state of the shell that runs it, so it should be a script, not a function. A script must be an executable file: chmod +x /path/to/script. It must start with a shebang line to let the kernel know what program to use to interpret the script. In your case, add this line to the top of the file: #!/usr/bin/env zsh Put the file in a directory that is listed in the $PATH variable. Many systems set up either ~/bin or ~/.local/bin in a user's default PATH, so you can use these. If you want to add another directory, see http://unix.stackexchange.com/questions/26047/how-to-correctly-add-a-path-to-path When you type a command name that isn't an alias, a function or a builtin, the shell looks for an executable file of that name in $PATH and executes it. Thus you don't need to declare the script to the shell, you just drop it in the right place. Making a function A function is code that runs inside an existing shell instance. It has full access to all the shell's state: variables, current directory, functions, command history, etc. You can only invoke a function in a compatible shell. Your code can work as a function, but you don't gain anything by making it a function, and you lose the ability to invoke it from somewhere else (e.g. from a file manager). In zsh, you can make a function available for interactive sessions by including its definition in ~/.zshrc. 
Alternatively, to avoid cluttering .zshrc with a very large number of functions, you can use the autoloading mechanism. Autoloading works in two steps: Declare the function name with autoload -U myfunction. When myfunction is invoked for the first time, zsh looks for a file called myfunction in the directories listed in $fpath, and uses the first such file it finds as the definition of myfunction. All functions need to be defined before use. That's why it isn't enough to put the file in $fpath. Declaring the function with autoload actually creates a stub definition that says “load this function from $fpath and try again”: % autoload -U myfunction % which myfunction myfunction () { # undefined builtin autoload -XU } Zsh does have a mechanism to generate those stubs by exploring $fpath. It's embedded in the completion subsystem. Put #autoload as the first line of the file. In your .zshrc, make sure that you fully set fpath before calling the completion system initialization function compinit. Note that the file containing a function definition must contain the function body, not the definition of the function, because what zsh executes when the function is called is the content of the file. So if you wanted to put your code in a function, you would put it in a file called extract that is in one of the directories on $fpath, containing #autoload if [ -f $1 ]; then … If you want to have initialization code that runs when the function is loaded, or to define auxiliary functions, you can use this idiom (used in the zsh distribution). Put the function definition in the file, plus all the auxiliary definitions and any other initialization code. At the end, call the function, passing the arguments. Then myfunction would contain: #autoload my_auxiliary_function () { … } myfunction () { … } myfunction "$@" P.S. 7z x works on most archive types.
How to make custom zsh script executable automatically?
1,512,259,936,000
Hello I have this in my ~/.bash_profile export GOPATH="$HOME/go_projects" export GOBIN="$GOPATH/bin" program(){ $GOBIN/program $1 } so I'm able to do program "-p hello_world -tSu". Is there any way to run the program and custom flags without using the quotation marks? if I do just program -p hello_world -tSu it'll only use the -p flag and everything after the space will be ignored.
Within your program shell function, use "$@" to refer to the list of all command line arguments given to the function. With the quotes, each command line argument given to program would additionally be individually quoted (you generally want this). program () { "$GOBIN"/program "$@" } You would then call program like so: program -p hello_world -tSu or, if you want to pass hello world instead of hello_world, program -p 'hello world' -tSu Using $1 refers to only the first command line argument (and $2 would refer to the second, etc.), as you have noticed. The value of $1 would additionally be split on white-spaces and each generated string would undergo filename globbing, since the expansion is unquoted. This would make it impossible to correctly pass an argument that contains spaces or filename globbing patterns to the function.
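The difference shows up clearly in a small sketch (the wrapper names here are made up for illustration): one function forwards only `$1`, the other forwards `"$@"`.

```shell
# Hypothetical wrappers: first_only mimics the buggy $1-only pattern,
# forward_all passes every argument through with quoting intact.
first_only() {
    printf '%s\n' "$1"
}
forward_all() {
    printf '<%s>' "$@"
    printf '\n'
}

first_only -p hello_world -tSu     # only -p survives
forward_all -p 'hello world' -tSu  # all three arguments, spaces preserved
```

Calling `forward_all -p 'hello world' -tSu` produces `<-p><hello world><-tSu>`: three distinct arguments, including the one containing a space.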
How to pass all arguments of a function along to another command?
1,512,259,936,000
Problem: I have multiple bash functions and aliases. I can't remember all of them off the top of my head, so I usually end up opening my .bash_functions and .bash_aliases files to find what I need. Question(s): How can I list functions/aliases available from the bash prompt? Is it possible for me to document my bash functions/aliases using comments (kinda like PHPDoc)? I'd just like a simple/nice way to output what's available without having to open the files. It would be cool to run a command and have it spit out a dynamic list of my functions/aliases (usage examples would be a plus). :)
To list active aliases, run: alias To see names of all active functions, run: declare -F To see the names and definitions of all active functions, run: declare -f More The information on aliases is also available in a script-friendly format with: declare -p BASH_ALIASES man bash provides more info on the alias builtin: alias [-p] [name[=value] ...] Alias with no arguments or with the -p option prints the list of aliases in the form alias name=value on standard output. When arguments are supplied, an alias is defined for each name whose value is given. A trailing space in value causes the next word to be checked for alias substitution when the alias is expanded. For each name in the argument list for which no value is supplied, the name and value of the alias is printed. Alias returns true unless a name is given for which no alias has been defined. Regarding functions, man bash explains that declare can provide still more information if the extdebug option is set: Function names and definitions may be listed with the -f option to the declare or typeset builtin commands. The -F option to declare or typeset will list the function names only (and optionally the source file and line number, if the extdebug shell option is enabled). Links http://ss64.com/bash/alias.html http://linfo.org/alias.html
How to document my custom bash functions and aliases?
1,512,259,936,000
I have the following function: bar() { echo $1:$2; } I am calling this function from another function, foo. foo itself is called as follows: foo "This is" a test I want to get the following output: This is:a That is, the arguments that bar receives should be the same tokens that I pass into foo. How does foo need to be implemented in order to achieve this? I’ve tried the following two implementations, but neither works: foo() { bar $*; } – output: this:is foo() { bar "$*"; } – output: this is a test: My question is effectively how I can preserve the quoting of arguments. Is this possible at all?
Use "$@": $ bar() { echo "$1:$2"; } $ foo() { bar "$@"; } $ foo "This is" a test This is:a "$@" and "$*" have special meanings: "$@" expands to multiple words without performing expansions for the words (like "$1" "$2" ...). "$*" joins positional parameters with the first character in IFS (or space if IFS is unset or nothing if IFS is empty).
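A runnable sketch of the two expansions side by side (function names are illustrative); brackets make the word boundaries visible:

```shell
show_at()   { printf '[%s]' "$@"; echo; }   # one bracket pair per argument
show_star() { printf '[%s]' "$*"; echo; }   # everything joined into one word

show_at   "This is" a test   # [This is][a][test]
show_star "This is" a test   # [This is a test]

# "$*" joins on the first character of IFS, which can be useful on purpose:
join_csv() { local IFS=,; printf '%s\n' "$*"; }
join_csv a b c               # a,b,c
```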
Pass arguments to function exactly as-is
1,512,259,936,000
How can I write a function in zsh that invokes an existing command with the same name as the function itself? For example, I've tried this to illustrate my question: function ls { ls -l $1 $2 $3 } When I execute it with ls * I get the following: ls:1: maximum nested function level reached I assume this is because the function is being called recursively. How I can avoid that? This is a crude example, and in this case an alias would do the job, but I have a more complex example where an alias isn't suitable and so I would need to write a function.
What is happening is that you are recursively calling your ls function. In order to use the binary, you can use ZSH's command builtin. function ls { command ls -l "$@" }
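The same `command` builtin exists in bash and other POSIX shells, so the pattern is portable. A minimal sketch wrapping a builtin (outputs captured into variables so they can be compared):

```shell
# Wrap echo; inside the wrapper, `command echo` reaches the real
# builtin instead of recursing back into the function.
echo() {
    command echo "wrapped:" "$@"
}
out_wrapped=$(echo hello)    # "wrapped: hello"
unset -f echo                # drop the wrapper again
out_plain=$(echo hello)      # "hello"
```

Without `command`, the first call to `echo` inside the function body would invoke the function itself and recurse until the nesting limit is hit, which is exactly the error in the question.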
How can I create a function in zsh that calls an existing command with the same name?
1,512,259,936,000
The following variables are used to get the positional parameters: $1, $2, $3, etc. $@ $# But they are used for both positional parameters of the script and the positional parameters of a function. When I use these variables inside a function, they give me the positional parameters of the function. Is there a way to get the positional parameters of the script from inside a function?
No, not directly, since the function parameters mask them. But in Bash or ksh, you could just assign the script's arguments to a separate array, and use that. #!/bin/bash ARGV=("$@") foo() { echo "number of args: ${#ARGV[@]}" echo "second arg: ${ARGV[1]}" } foo x y z Note that the numbering for the array starts at zero, so $1 goes to ${ARGV[0]} etc.
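A self-contained sketch of the same idea; `set --` stands in for the script's real arguments so the snippet can run anywhere:

```shell
#!/bin/bash
set -- x y z        # pretend the script was invoked with: x y z
ARGV=("$@")         # snapshot the script's arguments before any function masks them

show_args() {
    # "$#" here is the FUNCTION's argument count; ARGV still holds the script's
    echo "function args: $#, script args: ${#ARGV[@]}, second: ${ARGV[1]}"
}
show_args a b       # function args: 2, script args: 3, second: y
```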
Is there a way to get the positional parameters of the script from inside a function in bash?
1,512,259,936,000
I was discussing with my friend on how the commands are parsed in the shell, and he told me that bash searches the command in following order List of aliases List of shell keywords List of user defined functions List of shell built in functions List of directories specified in the PATH variable , from left to right. I know aliases can be found by issuing the alias command. PATH variable contents can be found using echo $PATH command. Can you please tell me which commands do I need to use ? To list all shell keywords To list all user defined functions To list of shell built in functions
In Bash: man bash | grep -10 RESERVED lists reserved words: ! case coproc do done elif else esac fi for function if in select then until while { } time [[ ]] declare -F and typeset -F show function names without their contents. enable lists builtin shell commands (I don't think these are functions as such). So does man builtins.
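Bash's compgen builtin (part of the programmable completion machinery) can list each category directly, which is handier than grepping the manpage:

```shell
demo() { :; }          # define one function so the function list is non-empty
compgen -k             # reserved words (if, while, case, ...)
compgen -b             # builtin commands (cd, echo, ...)
compgen -A function    # currently defined function names, here: demo
```

In an interactive shell, `compgen -a` similarly lists alias names (it usually prints nothing in scripts, where aliases are off by default).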
What are commands to find shell keywords, built in functions and user defined functions?
1,512,259,936,000
I have declared functions and variables in bash/ksh and I need to forward them into sudo su - {user} << EOF: #!/bin/bash log_f() { echo "LOG line: $@" } extVAR="yourName" sudo su - <user> << EOF intVAR=$(date) log_f ${intVAR} ${extVAR} EOF
sudo su -, which is a complicated way of writing sudo -i, constructs a pristine environment. That's the point of a login shell. Even a plain sudo removes most variables from the environment. Furthermore sudo is an external command; there's no way to elevate privileges in the shell script itself, only to run an external program (sudo) with extra privileges, and that means any shell variables (i.e. non-exported variables) and functions defined in the parent shell won't be available in the child shell. You can pass environment variables through by not invoking a login shell (sudo bash instead of sudo su - or sudo -i) and configuring sudo to let these variables through (with Defaults !env_reset or Defaults env_keep=… in the sudoers file). This won't help you for functions (although bash has a function export facility, sudo blocks it). The normal way to get your functions in the child shell would be to define them there. Take care of quoting: if you use <<EOF for the here document, the content of the here document is first expanded by the parent shell, and the result of that expansion becomes the script that the child shell sees. That is, if you write sudo -u "$target_user" -i <<EOF echo "$(whoami)" EOF this displays the name of the original user, not the target user. To avoid this first phase of expansion, quote the here document marker after the << operator: sudo -u "$target_user" -i <<'EOF' echo "$(whoami)" EOF So if you don't need to pass data from the parent shell to the child shell, you can use a quoted here document: #!/bin/bash sudo -u "$target_user" -i <<'EOF' log_f() { echo "LOG line: $@" } intVAR=$(date) log_f "${intVAR}" EOF While you can make use of an unquoted here document marker to pass data from the parent shell to the child shell, this only works if the data doesn't contain any special character. That's because in a script like sudo -u "$target_user" -i <<EOF echo "$(whoami)" EOF the output of whoami becomes a bit of shell code, not a string. 
For example, if the whoami command returned "; rm -rf /; "true then the child shell would execute the command echo ""; rm -rf /; "true". If you need to pass data from the parent shell, a simple way is to pass it as arguments. Invoke the child shell explicitly and pass it positional parameters: #!/bin/bash extVAR="yourName" sudo -u "$target_user" -i sh _ "$extVAR" <<'EOF' log_f() { echo "LOG line: $@" } intVAR=$(date) log_f "${intVAR}" "${1}" EOF If you have multiple variables to pass, it will be more readable to pass them by name. Call env explicitly to set environment variables for the child shell. #!/bin/bash extVAR="yourName" sudo -u "$target_user" -i env extVAR="$extVAR" sh <<'EOF' log_f() { echo "LOG line: $@" } intVAR=$(date) log_f "${intVAR}" "${1}" EOF Note that if you expected /etc/profile and the target user's ~/.profile to be read, you'll have to read them explicitly, or call bash --login instead of sh.
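The two expansion behaviours can be demonstrated without sudo by feeding here documents to a plain child bash; sudo only adds privilege elevation on top of the same mechanism:

```shell
name=parent            # set in the parent shell, deliberately NOT exported

# Unquoted delimiter: the PARENT expands $name before the child ever runs
out_unquoted=$(bash <<EOF
echo "child sees: $name"
EOF
)

# Quoted delimiter: the text reaches the child verbatim, and the child
# has no $name of its own because the variable wasn't exported
out_quoted=$(bash <<'EOF'
echo "child sees: ${name:-nothing}"
EOF
)
echo "$out_unquoted"   # child sees: parent
echo "$out_quoted"     # child sees: nothing
```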
Forward function and variables into sudo su - <user> <<EOF
1,512,259,936,000
Solaris / sh I have a few functions defined in a file which gets loaded via . ./some_file.sh When I start a subshell with sh All my function definitions are lost but when I do env I do see the source, is there an easy way to get them functional in my subshell?
Functions are naturally propagated to subshells: greet () { echo "hello, $1" } ( echo "this is a subshell"; greet bob ) But they are not and cannot be propagated to independent shell processes that you start by invoking the shell under its name. Bash has an extension to pass functions through the environment, but there's no such thing in other shells. While you can emulate the feature, it requires running code in the nested shell anyway. You might as well source your function definitions in the nested shell.
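A sketch contrasting the two cases: a parenthesised subshell (a fork of the same shell, functions intact) versus a fresh shell process started by name (the function is gone, since it was never exported):

```shell
greet() { echo "hello, $1"; }

# Subshell: the function definition is inherited
out_subshell=$( ( greet bob ) )

# Independent sh process: greet is unknown there
if sh -c 'greet bob' 2>/dev/null; then
    out_fresh="found"
else
    out_fresh="not found"
fi
echo "$out_subshell / greet: $out_fresh"   # hello, bob / greet: not found
```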
How to get functions propagated to subshell?
1,512,259,936,000
I often generate and register a lot of bash functions that automate many of the task I usually do in my development projects. That generation depends on the meta-data of the project I am working on. I want to annotate the functions with the info of the project they were generated, this way: func1() { # This function was generated for project: PROJECT1 echo "do my automation" } Ideally, I would be able to see the comment when I inspect the definition: $ type func1 func1 is a function func1 () { # This function was generated for project: PROJECT1 echo "do my automation" } But somehow bash seems to ignore the comments at the moment of loading the function, not when executing it. So the comments are lost and I get this result: func1 is a function func1 () { echo "do my automation" } Is there any way to assign metadata to functions, and check them afterwards? It is possible to retrieve it when inspecting the definition with type?
function func_name() { : ' Invocation: func_name $1 $2 ... $n Function: Display the values of the supplied arguments, in double quotes. Exit status: func_name always returns with exit status 0. ' : local i echo "func_name: $# arguments" for ((i = 1; i <= $#; ++i)); do echo "func_name [$i] \"$1\"" shift done return 0 }
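The trick above works because `:` is a no-op whose quoted argument stays in the stored function body, so the metadata survives in the output of `type`/`declare -f` and can be grepped back out. A small sketch (the project name is illustrative):

```shell
func1() {
    : 'project: PROJECT1'     # docstring kept alive by the : no-op
    echo "do my automation"
}

# Recover the metadata from the stored definition:
declare -f func1 | grep -o 'project: [A-Z0-9]*'   # project: PROJECT1
```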
assign and inspect bash function metadata
1,512,259,936,000
I have a function in my .bashrc file. I know what it does, it steps up X many directories with cd Here it is: up() { local d="" limit=$1 for ((i=1 ; i <= limit ; i++)) do d=$d/.. done d=$(echo $d | sed 's/^\///') if [ -z "$d" ]; then d=.. fi cd $d } But can you explain these three things from it for me? d=$d/.. sed 's/^\///' d=.. Why not just do like this: up() { limit=$1 for ((i=1 ; i <= limit ; i++)) do cd .. done } Usage: <<<>>>~$ up 3 <<<>>>/$
d=$d/.. adds /.. to the current contents of the d variable. d starts off empty, then the first iteration makes it /.., the second /../.. etc. sed 's/^\///' drops the first /, so /../.. becomes ../.. (this can be done using a parameter expansion, d=${d#/}). d=.. only makes sense in the context of its condition: if [ -z "$d" ]; then d=.. fi This ensures that, if d is empty at this point, you go to the parent directory. (up with no argument is equivalent to cd ...) This approach is better than iterative cd .. because it preserves cd - — the ability to return to the previous directory (from the user’s perspective) in one step. The function can be simplified: up() { local d=.. for ((i = 1; i < ${1:-1}; i++)); do d=$d/..; done cd $d } This assumes we want to move up at least one level, and adds n - 1 levels, so we don’t need to remove the leading / or check for an empty $d. Using Athena jot (the athena-jot package in Debian): up() { cd $(jot -b .. -s / "${1:-1}"); } (based on a variant suggested by glenn jackman).
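The path-building part can be tested without changing directory; `${d#/}` is the parameter expansion that replaces the `sed` call (a hypothetical helper name, for illustration):

```shell
up_path() {
    local d='' i
    for ((i = 1; i <= ${1:-1}; i++)); do
        d=$d/..
    done
    printf '%s\n' "${d#/}"   # strip the leading slash, as sed 's/^\///' did
}
up_path 3   # ../../..
up_path     # ..  (defaults to one level, like the d=.. fallback)
```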
Can you explain these three things in this bash code for me?
1,512,259,936,000
I'm trying to execute the code below but when I try to use my function in the if statement I get the -bash: [: too many arguments error. Why is it happening? Thank you in advance! notContainsElement () { local e match="$1" shift for e; do [[ "$e" == "$match" ]] && return 1; done return 0 } list=( "pears" "apples" "bananas" "oranges" ) blacklist=( "oranges" "apples" ) docheck=1 for fruit in "${list[@]}" do if [ notContainsElement "$fruit" "${blacklist[@]}" -a $docheck = 1 ] then echo $fruit fi done
When using if [ ... ] you are actually using the [ utility (which is the same as test but requires that the last argument is ]). [ does not know how to run your function; it expects strings. Fortunately, you don't need to use [ at all here (for the function at least): if [ "$docheck" -eq 1 ] && notContainsElement "$fruit" "${blacklist[@]}"; then ... fi Note that I'm also checking the integer first, so that we may avoid calling the function at all if $docheck is not 1. This works because if takes an arbitrary command and decides what to do from the exit status of that command. Here we use a [ ... ] test together with a call to your function, with && in-between, creating a compound command. The compound command's exit status would be true if both the [ ... ] test and the function returned zero as their exit statuses, signalling success. As a style note, I would not have the function test whether the array does not contain the element but whether it does contain the element, and then if [ "$docheck" -eq 1 ] && ! contains "$fruit" "${blacklist[@]}"; then ... Having a function test a negative will mess up logic in cases where you do want to test whether the array contains the element (if ! notContainsElement ...).
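A runnable sketch of the suggested positive-logic version, with the `[ ... ]` test and the function call combined by `&&`:

```shell
contains() {
    local e match=$1
    shift
    for e; do
        [ "$e" = "$match" ] && return 0
    done
    return 1
}

blacklist=( oranges apples )
docheck=1
for fruit in pears apples bananas oranges; do
    # `if` only looks at the exit status of this compound command
    if [ "$docheck" -eq 1 ] && ! contains "$fruit" "${blacklist[@]}"; then
        echo "$fruit"
    fi
done
# prints: pears and bananas, one per line
```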
Shell: Using function with parameters in if
1,512,259,936,000
Let's say that I have a command git branch (always with a couple of words) for example. What I want is to keep track of when this command is executed with arguments. For example, if I execute the command git branch develop without errors, I want to save develop on a file. I tried to overwrite git command on my .bash_profile, something like this: git () { if [ $# -eq 3 ] then git $@ echo $2 > /path/tacked_parameters.txt else git $@ fi } But seems that does not work well. Is there any way to do this?
You've got a few problems here: your git function is calling itself recursively instead of the original git command. you're using $@ unquoted, which doesn't make any sense whatsoever. you're leaving other variables unquoted, asking the shell to split+glob them. you're using echo for arbitrary data. you're losing the exit status of the original git command. you're overwriting your log file upon each invocation. you're putting function definitions in your ~/.bash_profile which is meant to customize your login session, not your shell, and is normally not read by non-login bash invocations. You'd want something like: git() { if [ "$#" -eq 3 ] then local ret command git "$@"; ret=$? printf '%s\n' "$2" >> /path/tacked_parameters.txt return "$ret" else command git "$@" fi } That is: quote your variables, use command to run the git command, save the exit status of git in a local variable and return it on exit, use >> instead of > for redirection to the log file. use printf instead of echo. and put that in your ~/.bashrc instead (making sure your ~/.bash_profile is sourcing ~/.bashrc as login bash shells don't read ~/.bashrc by default (a bash bug/misfeature)). Unless you want to export that git function (with export -f git) in case you also want bash scripts that call git to call that function.
Track certain parameters on some command
1,512,259,936,000
I am using zsh and I have defined few utility shell function in some shell scripts, few of them called from ~/.zshrc, so let's assume that we don't know the location of these functions. One function is: function k.pstree.n { if [ "$1" != "" ] then pstree -p | grep -C3 "$1" else printf " Please specify the name of the process you want to show!\n" fi } How can I print the code of that shell function? I can think of a search & grep like: find $(pwd) -name "*sh*" -type f -printf "\"%p\"\n" | xargs grep -C5 "k.pstree.n" but this assumes that I roughly know the location which is not true here.
There is a built-in command, functions, in zsh for this purpose: functions k.pstree.n For example, in the case of my preexec function: $ functions preexec preexec () { local cmd=${1:-} cmd=${cmd//\\/\\\\} [[ "$TERM" =~ screen* ]] && cmd="S $cmd" inf=$(print -Pn "%n@%m: %3~") print -n "\e]2;$cmd $inf\a" cmd_start=$SECONDS } Or use typeset -fp function_name which has the benefit of also working in ksh, bash and yash. In zsh, the function definition is also available in the $functions special associative array (the key is the function name, the value the function body).
How do you print the code of a shell function in terminal?
1,512,259,936,000
I would like to slightly extend a zsh completion function. I would like to avoid putting the complete function body into my homedir with only one line changed. Instead I would like to intercept it's call and then call the original function myself. In quasi code: <make sure _the_original_function is loaded> _backup_of_original_function=_the_original_function _the_original_function() { _arguments "-superduper[that one option I always need]" _backup_of_originial_function $@ } In my concrete case, I have an cmake property in practically all my projects such that I would like to modify the cmake completion. Not having that option in the completion would annoy me much more than having it in projects where the option does not belong. So instead of copying _cmake somewhere I would just redefine _cmake_define_common_property_names() in my .zshrc: _cmake_define_common_property_names() { local properties; properties=( 'MY_GREAT_OPTION:be awesome' ) _describe -t 'my-property-names' 'my property name' properties $@ **call original _cmake_define_common_property_names here } So what's missing is loading _cmake_define_common_property_names and assigning a new function name to it. From here I tried autoload -Uz +X _cmake_define_common_property_names but this fails (the function is not defined within its own file, but rather in _cmake. NB: I don't assign a new function name to avoid having to modify the place from where the function gets called in its original version. What works partially is autoload -Uz +X _cmake BUT this only ensures that _cmake is loaded (which I verify by calling functions _cmake). It does not load the helper function _cmake_define_common_property_names. So my two questions are how do I load a function from within an $fpath file Once I have a function loaded. How do I copy it in a script / Assign a new function name?
How to patch a function The code of a function is stored in the associative array functions. That's the source code with normalized whitespace and no comments (zsh has done lexical analysis and pretty-prints the tokens). You can change the code of a function by modifying the entry of the functions array. For example, to add extra code at the beginning: functions[_cmake_define_common_property_names]=" … # your extra code here $functions[_cmake_define_common_property_names]" If the changes you want to make only involve running code before and after the original code (or more generally around, e.g. to put it inside a conditional), you can copy the function to a new name and redefine the function to call that new name. Since zsh 5.8, you can do this easily with functions -c. functions -c _cmake_define_common_property_names _cmake_define_common_property_names_orig _cmake_define_common_property_names () { … _cmake_define_common_property_names_orig "$@" } Alternatively, in any version of zsh, you can also use the functions array to copy a function to another name. functions[_cmake_define_common_property_names_orig]=$functions[_cmake_define_common_property_names] _cmake_define_common_property_names () { … _cmake_define_common_property_names_orig "$@" } Loading all the functions The only sure-fire way to load all the functions from a file made for autoload is to execute the function, if you can arrange to execute it with no side effects. For a completion function, just run the function with errors redirected to the bit bucket. It'll do nothing but complain that it isn't being executed in a completion context. _cmake 2>/dev/null # Now _cmake_xxx is defined The reason autoload -Uz +X _cmake doesn't work is that the definitions of the auxiliary functions are in the _cmake function itself. 
% echo $functions[_cmake] builtin autoload -XU % autoload -Uz +X _cmake % echo $functions[_cmake] … (( $+functions[_cmake_define_property_names] )) || _cmake_define_property_names () { … } … local cmake_command_actions cmake_command_actions=('-E[CMake command mode]:*:command') _cmake_command () { _arguments -C -s - command "$cmake_command_actions[@]" } local cmake_suggest_build cmake_suggest_build=('--build[build]') if [ $CURRENT -eq 2 ] then _arguments -C -s - help "$cmake_help_actions[@]" - command "$cmake_command_actions[@]" - build_opts "$cmake_build_options[@]" - build_cmds "$cmake_suggest_build[@]" && return 0 elif [[ $words[2] = --help* ]] then _cmake_help elif [[ $words[2] != -E ]] then _cmake_on_build else _cmake_command fi If you really don't want to execute the toplevel function, you have several choices: Patch the definition of _cmake_define_common_property_names inside the definition of _cmake. Extract the definition of _cmake_define_common_property_names from the definition of _cmake and use that to define _cmake_define_common_property_names_orig or _cmake_define_common_property_names. Patch the definition of _cmake to remove the parts other than the function definitions, then execute it. It isn't really workable with _cmake, but some other completion functions are better structured. For example _cvs consists purely of conditional function definitions (e.g. (( $+functions[_cvs_command] )) || _cvs_command () { … }), a definition of the titular function, and a call of the titular function as the very last thing, thus you can define all the functions but not execute anything by removing the last line. autoload -Uz +X _cvs functions[_cvs]=${functions[_cvs]%$'\n'*} _cvs # Patch auxiliary functions here
overwrite and reuse existing function in zsh
1,512,259,936,000
(This is on MacOS with zsh 5.7.1) Here is how I load custom functions in zsh: # Custom functions fpath=($HOME/.zfunc $fpath) autoload -Uz mackupbackup autoload -Uz tac autoload -Uz airplane autoload -Uz wakeMyDesktop Each function is its own file in the ~/.zfunc directory. Note this directory is symlinked into a different directory thanks to mackup. I wrote a new function to copy the current commit hash to the clipboard. I created a file in my $fpath called ghash, wrote the function, added a new autoload line in my .zshrc and executed source ~/.zshrc. Here's the function # copy the commit hash of the given git reference, or HEAD if none is given ref=$1 if [[ $ref ]]; then git rev-parse $1 | pbcopy else git rev-parse HEAD | pbcopy fi After sourcing .zshrc, the function became available and it worked, but I wanted to add a line to print a confirmation that it worked: echo "Copied $(pbpaste) to clipboard" So I added that line, saved the file, then I sourced .zshrc again. I ran the function again, but its behaviour didn't change! I thought I'd done something wrong, so I kept making changes to the function and sourcing .zshrc to no effect. All in all, I re-sourced .zshrc 22 times, by which point that operation took 37 seconds to complete... Then I realised maybe it wasn't reloading the function, so I ran zsh to start a fresh instance (which took about 1 second), and the function started working as expected! Anyone know why source picked up my new function, but didn't update it when the function changed? Bonus question: why'd it take longer to run source ~/.zshrc each time I ran it?
Sourcing an rc file rarely if ever works in practice, because people rarely write them to be idempotent. A case in point is your own, where you are prepending the same directory to the fpath path every time, which of course means that searching that path takes a little longer each time. No doubt this isn't the only place where you are doing that sort of thing, moreover. You also do not understand autoloading correctly. As the doco states, autoloading of a function without a body happens the first time that the function is executed. Obviously, if the function is already loaded, and thus has a body, it does not get loaded again. You need to unfunction the function before autoloading it again. The sample .zshrc in the Z shell source contains an freload() function that does this very thing for all of the functions named as its arguments. It also does typeset -U path cdpath fpath manpath, notice.
zsh: `source` command doesn't reload functions
1,512,259,936,000
I have been slowly migrating from Bash to Zsh and have got to the point where everything I have moved across is working well, with one exception. I have a couple of functions in my .bashrc that I use dozens of times a day and two of them do not work under Zsh. The three functions comprise a basic note taking facility. They are currently in .config/zsh/functions: function n() { local arg files=(); for arg; do files+=( ~/".notes/$arg" ); done ${EDITOR:-vi} "${files[@]}" } function nls() { tree -CR --noreport $HOME/.notes | awk '{ if (NF==1) print $1; else if (NF==2) print $2; else if (NF==3) printf " %s\n", $3 }' } # TAB completion for notes function _notes() { local files=($HOME/.notes/**/"$2"*) [[ -e ${files[0]} ]] && COMPREPLY=( "${files[@]##~/.notes/}" ) } complete -o default -F _notes n Which I source from .zshrc like so: autoload bashcompinit bashcompinit # source zshrc functions file source "$HOME/.config/zsh/functions" nls works as expected, but neither n nor Tab completion work. I read man zshcompsys where it says: The function bashcompinit provides compatibility with bash's programmable completion system. When run it will define the functions, compgen and complete which correspond to the bash builtins with the same names. It will then be possible to use completion specifications and functions written for bash. However, when I try Tab completion, nothing happens and when I enter n notename, Vim opens my /home in file browser mode - not quite the expected behaviour. All of the other functions defined work well. How do I migrate these functions to work under Zsh?
local is a builtin, not a keyword, so local files=(…) isn't parsed as an array assignment but as a string assignment. Write the assignment separately from the declaration. (Already found by llua, but note that you need to initialize files to the empty array or declare the variable with typeset -a, otherwise the array starts with a spurious empty element.) Zsh arrays are numbered from 1, not from 0 like in bash and ksh, so ${files[0]} must be written $files[1]. Alternatively, tell zsh to behave in a way that's more compatible with ksh and bash: put emulate -L ksh at the beginning of the function. Unless you go the emulate route, your _notes function will print zsh: no matches found: foo* if there is no completion for foo, because by default non-matching globs trigger an error. Add the glob qualifier N to get an empty array if there is no match, and test whether the array is empty. There is another error in your _notes function which affects notes in subdirectories: you must strip away the prefix up to the completion, so that if e.g. ~/notes/foo/bar exists and you type n b<TAB>, COMPREPLY is set to contain b, not foo/b. 
If you want to keep a file that's readable by both bash and zsh: type emulate >/dev/null 2>/dev/null || alias emulate=true function n() { emulate -L ksh local arg; typeset -a files for arg; do files+=( ~/".notes/$arg" ); done ${EDITOR:-vi} "${files[@]}" } function nls() { tree -CR --noreport $HOME/.notes | awk '{ if (NF==1) print $1; else if (NF==2) print $2; else if (NF==3) printf " %s\n", $3 }' } # TAB completion for notes function _notes() { emulate -L ksh local x files files=($HOME/.notes/**/"$2"*) [[ -e ${files[0]} ]] || return 1 COMPREPLY=() for x in "${files[@]}"; do COMPREPLY+=("$2${x#$HOME/.notes*/$2}") done } complete -o default -F _notes n If you want to port your code to zsh: function n() { local files files=(${@/#/~/.notes/}) ${EDITOR:-vi} $files } function nls() { tree -CR --noreport $HOME/.notes | awk '{ if (NF==1) print $1; else if (NF==2) print $2; else if (NF==3) printf " %s\n", $3 }' } # TAB completion for notes function _notes() { setopt local_options bare_glob_qual local files files=(~/.notes/**/$2*(N)) ((#files)) && COMPREPLY=($2${^files##~/.notes*/$2}) } complete -o default -F _notes n
Bash function not working in Zsh
1,512,259,936,000
What is the difference between autoload -U and plain autoload? For instance, here it is recommended to run: autoload -U run-help autoload run-help-git autoload run-help-svn autoload run-help-svk unalias run-help alias help=run-help Why is -U only in the first line?
Yes, you do see the recommendation for -U often, usually paired with -z. It's not documented in the run-help for autoload, but there is a section titled "AUTOLOADING FUNCTIONS" in the manpage for zshmisc. There it states: The usual alias expansion during reading will be suppressed if the autoload builtin or its equivalent is given the option -U. This is recommended for the use of functions supplied with the zsh distribution. Note that for functions precompiled with the zcompile builtin command the flag -U must be provided when the .zwc file is created, as the corresponding information is compiled into the latter. I read that as "disregard aliases". The -z seems to be to avoid Ksh-isms. I just memorize -Uz and usually add them to any autoload. Maybe a worthwhile alias: alias al='autoload -Uz'. See also: https://stackoverflow.com/questions/12570749/zsh-completion-difference
What is the difference between `autoload` and `autoload -U` in Zsh?
1,512,259,936,000
Stuck with GNU awk 3.1.6 and think I've worked around its array bugs but still have what looks like a scope problem in a 600-line awk program. Need to verify understanding of array scope in awk to find my bug. Given this illustrative awk code... function foo(ga) { ga[1] = "global result" } garray[1] = "global" foo(garray) print garray[1] will print... global result Since arrays are always passed to functions by reference, then all arrays are always global. There is no way to create a local array. Is this correct? Have been unable to find docs that explicitly say that. Since I'm debugging, and 3.1.6 itself has known bugs in this area, am trying to determine where awk's bugs leave off and mine begin. Supplemental: Why does ga[] work inside the function then? First of all, passing the array to the function with foo(ga) is actually unnecessary. Just access it as garray[] inside the function. There's no measurable performance penalty in doing it however, and it helps in debugging and error reporting. In using foo(ga), ga[] is a synonym for the global array garray[]. Instead of being a local copy of garray[], it is simply a pointer to garray[], rather like a symbolic link is a pointer to a file and thus the same file (or array) can be accessed under more than one name. Supplemental: Clarification of Glenn Jackman's answer While arrays created outside a function are global to the function and may be passed to it or just referenced within it, arrays created inside a function do indeed remain local to the function and not visible outside it. Modifying Mr. Jackman's example illustrates this... 
awk ' function bar(x,y) { split("hello world", y) print "x[1] inside: " x[1] print "y[1] inside: " y[1] } BEGIN { x[1]="goodbye" print "x[1] before: " x[1] print "y[1] before: " y[1] bar(x) print "x[1] after: " x[1] print "y[1] after: " y[1] } ' x[1] before: goodbye y[1] before: x[1] inside: goodbye y[1] inside: hello x[1] after: goodbye y[1] after: Note that we are only passing the x[] array (actually, just a pointer to it) to bar(). The y[] array doesn't even exist until we get inside the function. However, if we declare y[] by including it in the bar() argument list without assigning anything to it outside the function, it becomes visible after calling bar(x,y)... awk ' function bar(x,y) { split("hello world", y) print "x[1] inside: " x[1] print "y[1] inside: " y[1] } BEGIN { x[1]="goodbye" print "x[1] before: " x[1] print "y[1] before: " y[1] bar(x,y) print "x[1] after: " x[1] print "y[1] after: " y[1] } ' x[1] before: goodbye y[1] before: x[1] inside: goodbye y[1] inside: hello x[1] after: goodbye y[1] after: hello Finally, if we create the y[] array outside the function and pass it with bar(x,y), the split() assignment inside the function replaces that array's elements... awk ' function bar(x,y) { split("hello world", y) print "x[1] inside: " x[1] print "y[1] inside: " y[1] } BEGIN { x[1]="goodbye" y[1]="howdy" print "x[1] before: " x[1] print "y[1] before: " y[1] bar(x,y) print "x[1] after: " x[1] print "y[1] after: " y[1] } ' x[1] before: goodbye y[1] before: howdy x[1] inside: goodbye y[1] inside: hello x[1] after: goodbye y[1] after: hello
Function parameters are local to the function. awk ' function foo(x,y) {y=x*x; print "y in function: "y} BEGIN {foo(2); print "y out of function: " y} ' y in function: 4 y out of function: If you pass fewer values to a function than there are parameters, the extra parameters are just empty. You might sometimes see functions defined like function foo(a, b, c,    d, e, f) {... where the parameters after the extra whitespace are local variables and are not intended to take a value at invocation. No reason why this can't work for local arrays: awk ' function bar(x) { split("hello world", x) print "in: " x[1] } BEGIN { x[1]="world" bar() print "out: " x[1]} ' in: hello out: world
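The extra-parameter convention above also gives you working local arrays in practice. A minimal sketch (the function and variable names are made up for illustration); here i and arr are extra parameters, so each call starts with them empty:

```shell
# 'n' is a real argument; 'i' and 'arr' are extra parameters, hence local.
awk 'function squares(n,    i, arr) {
         for (i = 1; i <= n; i++) arr[i] = i * i
         return arr[n]
     }
     BEGIN { print squares(4); print squares(3) }'
# prints 16, then 9: arr from the first call does not leak into the second
```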
Gawk: Passing arrays to functions
1,512,259,936,000
WARNING - this question is about Bash as it behaved before the Shellshock vulnerability, whose fix changed this behaviour. I have seen something like this in my bash ENV: module=() { eval `/usr/bin/modulecmd bash $*` } How does this construct work? What is it called? I'm not asking about modulecmd, I am asking about the entire construct.
It's really a function named module. It appears in environment variables when you export a function. $ test() { echo test; } $ export -f test $ env | sed -n '/test/{N;p}' test=() { echo test } From bash documentation - export: export export [-fn] [-p] [name[=value]] Mark each name to be passed to child processes in the environment. If the -f option is supplied, the names refer to shell functions; otherwise the names refer to shell variables. The -n option means to no longer mark each name for export. If no names are supplied, or if the -p option is given, a list of exported names is displayed. The -p option displays output in a form that may be reused as input. If a variable name is followed by =value, the value of the variable is set to value. The return status is zero unless an invalid option is supplied, one of the names is not a valid shell variable name, or -f is supplied with a name that is not a shell function.
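A quick sanity check (a generic sketch, not specific to modulecmd): an exported function is rebuilt from the environment by a child bash:

```shell
greet() { echo "hello from a function"; }
export -f greet
# The child bash finds the definition in its environment and recreates greet:
bash -c greet    # prints: hello from a function
```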
How does VARIABLE=() { function definition } work in bash
1,512,259,936,000
I have a number of functions defined in my .bashrc, intended to be used interactively in a terminal. I generally precede them with a comment describing their intended usage: # Usage: foo [bar] # Foo's a bar into a baz foo() { ... } This is fine if browsing the source code, but it's nice to run type in the terminal to get a quick reminder of what the function does. However this (understandably) doesn't include comments: $ type foo foo is a function foo () { ... } Which got me thinking "wouldn't it be nice if this sort of comment persisted so that type could display it?" And in the spirit of Python's docstrings I came up with this: foo() { : Usage: foo [bar] : "Foo's a bar into a baz" ... } $ type foo foo is a function foo () { : Usage: foo [bar]; : "Foo's a bar into a baz"; ... } Now the usage is included right in the type output! Of course as you can see quoting becomes an issue which could be error-prone, but it's a nicer user experience when it works. So my question is, is this a terrible idea? Are there better alternatives (like a man/info for functions) for providing users of Bash functions with additional context? Ideally I'd still like the usage instructions to be located near the function definition so that people viewing the source code also get the benefit, but if there's a "proper" way to do this I'm open to alternatives. Edit: these are all fairly simple helper-style functions and I'm just looking to get a little extra context interactively. Certainly for more complex scripts that parse flags I'd add a --help option, but for these it would be somewhat burdensome to add help flags to everything. Perhaps that's just a cost I should accept, but this : hack seems to work reasonably well without making the source much harder to read or edit.
I don't think that there is just one good way to do this. Many functions, scripts, and other executables provide a help message if the user provides -h or --help as an option: $ foo() { [[ "$1" =~ (-h|--help) ]] && { cat <<EOF Usage: foo [bar] Foo's a bar into a baz EOF return; } : ...other stuff... } For example: $ foo -h Usage: foo [bar] Foo's a bar into a baz $ foo --help Usage: foo [bar] Foo's a bar into a baz
Displaying usage comments in functions intended to be used interactively
1,512,259,936,000
I wanted to search for the string '.vars()' in all my Python files, and somehow I redefined 'grep' as follows: % grep .vars() *.py % which grep grep () { *.py } I have tried using unset grep and export grep=/bin/grep to correct this, without success. Can somebody explain what I've accidentally done? NOTE: in Bash, it fails with: "syntax error near unexpected token `('".
This is a function definition. More precisely, this is the definition of two functions with the same code. It looks unusual because zsh has several extensions to the basic syntax of function definitions name () { instruction; … }: Zsh allows multiple names. name1 name2 () { instruction; … } defines both the functions name1 and name2 with the same body. I don't know of any other shell that supports this. Zsh allows . in function names (as does for example bash, but not dash or ksh93). Portable function names can only use ASCII letters, digits and underscore. Zsh allows any command as the body of a function. Some shells (in particular bash) require a compound command (which is all that POSIX requires). name () { instruction; … } is the most common form, where the body is a group. name () ( instruction; ) is also portable and runs the body of the function in a subshell. Other compound commands are technically valid, for example name () if condition; then instruction1; else instruction2; fi is a POSIX-compliant function definition, but it's extremely unusual in practice. name () echo hello is a perfectly valid function definition according to zsh (and ksh and dash), but not according to POSIX or bash. A space before () is optional in all shells. To undo the effect of grep .vars() *.py, unset the two functions. unset -f grep .vars or unfunction grep .vars
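The same cleanup works in bash with unset -f (where only one name per accidental function is involved); a small sketch:

```shell
# Accidentally shadow an external command with a function...
grep() { echo "oops, shadowed"; }
type grep                    # reports that grep is now a function
# ...then remove the function; the external grep is found again:
unset -f grep
echo needle | grep needle    # runs the real grep and prints: needle
```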
How to undefine a zsh command created accidentally?
1,512,259,936,000
I am trying to pass a "var name" to a function, have the function transform the value the variable with such "var name" contains and then be able to reference the transformed object by its original "var name". For example, let's say I have a function that converts a delimited list into an array and I have a delimited list named 'animal_list'. I want to convert that list to an array by passing the list name into the function and then reference, the now array, as 'animal_list'. Code Example: function delim_to_array() { local list=$1 local delim=$2 local oifs=$IFS; IFS="$delim"; temp_array=($list); IFS=$oifs; # Now I have the list converted to an array but it's # named temp_array. I want to reference it by its # original name. } # ---------------------------------------------------- animal_list="anaconda, bison, cougar, dingo" delim_to_array ${animal_list} "," # After this point I want to be able to deal with animal_name as an array. for animal in "${animal_list[@]}"; do echo "NAME: $animal" done # And reuse this in several places to converted lists to arrays people_list="alvin|baron|caleb|doug" delim_to_array ${people_list} "|" # Now I want to treat animal_name as an array for person in "${people_list[@]}"; do echo "NAME: $person" done
Description Understanding this will take some effort. Be patient. The solution will work correctly in bash. Some "bashisms" are needed. First: We need to use the "indirect" access to a variable, ${!variable}. If $variable contains the string animal_name, the "Parameter Expansion" ${!variable} will expand to the contents of $animal_name. Let's see that idea in action; I have retained the names and values you used where possible to make it easier for you to understand: #!/bin/bash function delim_to_array() { local VarName=$1 local IFS="$2"; printf "inside IFS=<%s>\n" "$IFS" echo "inside var $VarName" echo "inside list = ${!VarName}" echo a\=\(${!VarName}\) eval a\=\(${!VarName}\) printf "in <%s> " "${a[@]}"; echo eval $VarName\=\(${!VarName}\) } animal_list="anaconda, bison, cougar, dingo" delim_to_array "animal_list" "," printf "out <%s> " "${animal_list[@]}"; echo printf "outside IFS=<%s>\n" "$IFS" # Now we can use animal_name as an array for animal in "${animal_list[@]}"; do echo "NAME: $animal" done If that complete script is executed (let's assume it's named so-setvar.sh), you should see: $ ./so-setvar.sh inside IFS=<,> inside var animal_list inside list = anaconda, bison, cougar, dingo a=(anaconda bison cougar dingo) in <anaconda> in <bison> in <cougar> in <dingo> out <anaconda> out <bison> out <cougar> out <dingo> outside IFS=< > NAME: anaconda NAME: bison NAME: cougar NAME: dingo Understand that "inside" means "inside the function", and "outside" the opposite. The value inside $VarName is the name of the var: animal_list, as a string. The value of ${!VarName} is shown to be the list: anaconda, bison, cougar, dingo Now, to show how the solution is constructed, there is a line with echo: echo a\=\(${!VarName}\) which shows what the following line with eval executes: a=(anaconda bison cougar dingo) Once that is evaluated, the variable a is an array with the animal list. In this instance, the var a is used to show exactly how the eval affects it.
And then, the values of each element of a are printed as <in> val. And the same is executed in the outside part of the function as <out> val That is shown in these two lines: in <anaconda> in <bison> in <cougar> in <dingo> out <anaconda> out <bison> out <cougar> out <dingo> Note that the real change was executed in the last eval of the function. That's it, done. The var now has an array of values. In fact, the core of the function is one line: eval $VarName\=\(${!VarName}\) Also, the value of IFS is set as local to the function, which makes it return to the value it had before executing the function without any additional work. Thanks to Peter Cordes for the comment on the original idea. That ends the explanation, hope it's clear. Real Function If we remove all the unneeded lines, leaving only the core eval and the local IFS, we reduce the function to its minimal expression: delim_to_array() { local IFS="${2:-$' :|'}" eval $1\=\(${!1}\); } Setting the value of IFS as a local variable allows us to also set a "default" value for the function. Whenever the value needed for IFS is not sent to the function as the second argument, the local IFS takes the "default" value. I felt that the default should be the space ( ) (which is always a useful splitting value), the colon (:), and the vertical line (|). Any of those three will split the values. Of course, the default could be set to any other values that fit your needs. Edit to use read: To reduce the risk of unquoted values in eval, we can use: delim_to_array() { local IFS="${2:-$' :|'}" # eval $1\=\(${!1}\); read -ra "$1" <<<"${!1}" } test="fail-test"; a="fail-test" animal_list='bison, a space, {1..3},~/,${a},$a,$((2+2)),$(echo "fail"),./*,*,*' delim_to_array "animal_list" "," printf "<%s>" "${animal_list[@]}"; echo $ so-setvar.sh <bison>< a space>< {1..3}><~/><${a}><$a><$((2+2))><$(echo "fail")><./*><*><*> Most of the values set above for the var animal_list do fail with eval.
But pass the read without problems. Note: It is perfectly safe to try the eval option in this code as the values of the vars have been set to plain text values just before calling the function. Even if really executed, they are just text. Not even a problem with ill-named files, as pathname expansion is the last expansion, there will be no variable expansion re-executed over the pathname expansion. Again, with the code as is, this is in no way a validation for general use of eval. Example To really understand what, and how this function works, I re-wrote the code you posted using this function: #!/bin/bash delim_to_array() { local IFS="${2:-$' :|'}" # printf "inside IFS=<%s>\n" "$IFS" # eval $1\=\(${!1}\); read -ra "$1" <<<"${!1}"; } animal_list="anaconda, bison, cougar, dingo" delim_to_array "animal_list" "," printf "NAME: %s\t " "${animal_list[@]}"; echo people_list="alvin|baron|caleb|doug" delim_to_array "people_list" printf "NAME: %s\t " "${people_list[@]}"; echo $ ./so-setvar.sh NAME: anaconda NAME: bison NAME: cougar NAME: dingo NAME: alvin NAME: baron NAME: caleb NAME: doug As you can see, the IFS is set only inside the function, it is not changed permanently, and therefore it does not need to be re-set to its old value. Additionally, the second call "people_list" to the function takes advantage of the default value of IFS, there is no need to set a second argument. « Here be Dragons » ¯\_(ツ)_/¯ Warnings 01: As the (eval) function was constructed, there is one place in which the var is exposed unquoted to the shell parsing. That allows us to get the "word splitting" done using the IFS value. But that also expose the values of the vars (unless some quoting prevent that) to: "brace expansion", "tilde expansion", "parameter, variable and arithmetic expansion", "command substitution", and "pathname expansion", In that order. And process substitution <() >() in systems that support it. 
An example of each (except the last) is contained in this simple echo (be careful): a=failed; echo {1..3} ~/ ${a} $a $((2+2)) $(ls) ./* That is, any string that starts with {~$`<> or could match a file name, or contains ?*[], is a potential problem. If you are sure that the variables do not contain such problematic values, then you are safe. If there is the potential to have such values, the ways to answer your question are more complex and need more (even longer) descriptions and explanations. Using read is an alternative. Warnings 02: Yes, read comes with its own share of «dragons». Always use the -r option; it is very hard for me to think of a condition where it is not needed. The read command reads only one line. Multi-line input needs special care, even with the -d option, or the whole input will be assigned to one variable. If the IFS value contains a space, leading and trailing spaces will be removed. Well, the complete description should include some detail about the tab, but I'll skip it. Do not pipe | data to read. If you do, read will be in a sub-shell. Variables set in a sub-shell do not persist upon returning to the parent shell. Well, there are some workarounds, but, again, I'll skip the detail. I didn't mean to include the warnings and problems of read, but by popular request, I had to include them, sorry.
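On bash 4.3 or newer there is a third option worth knowing: a nameref (local -n) avoids both eval and the ${!var} indirection. A sketch under that version assumption (beware: it breaks if the caller's variable is itself named ref):

```shell
delim_to_array() {
    local -n ref=$1            # 'ref' becomes another name for the caller's variable
    local IFS=${2:-$' :|'}     # same default delimiters as above
    local -a tmp
    read -ra tmp <<<"$ref"     # safe word splitting, no eval involved
    ref=("${tmp[@]}")          # the array assignment goes through the nameref
}

animal_list="anaconda, bison, cougar, dingo"
delim_to_array animal_list ","
printf '<%s>' "${animal_list[@]}"; echo    # <anaconda>< bison>< cougar>< dingo>
```

As in the read version above, the leading spaces survive, because a comma in IFS is not whitespace.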
How to use call-by-reference on an argument in a bash function
1,512,259,936,000
I just decided to try zsh (through oh-my-zsh), and am now playing with precmd to emulate a two-line prompt that has right prompts in more than just the last line. So I clone the default theme, and inspired by this post (that I'm using to learn a lot too), i do somehting like this (I'll add colors later): function precmd { local cwd="${(%):-[%~]}" local who_where="${(%):-%n@%m}" local git_info=${(%)$(git_prompt_info)} local right_prompt=" $git_info [$who_where]" local left_prompt="${(r:(($COLUMNS - ${#${right_prompt}})):: :)cwd}" echo "$left_prompt$right_prompt" } And it works. But I can't help but wonder: is zsh defining all those variables every time precmd is called? I've been googling for closures, scope and namespacing in relation to zsh, looking to attach the local vars as data to precmd, so it doesn't need to redefine the variables every time, but I have found nothing. Is there some way to do what I'm trying, or should I just drop it? As a side note, and only if it is related, what does "to have a function loaded" mean?
Zsh doesn't have anything like closures or packages or namespaces. Zsh lacks a bunch of things required to have true closures: Functions aren't first class. You can't pass functions around as arguments to other functions, and functions can't return other functions. (You can pass the name of a function to call, but that's not the same as passing the function itself). You can't have nested functions. All functions in zsh are global. You must prefix your function names to avoid conflicts. Note especially that functions will shadow external programs with the same name. If you have a function called ls, it will be called instead of the program ls. This can be useful, except if you do it by accident. Variables are dynamically scoped, not statically scoped like in most modern languages. Even if you could have nested functions, inner functions wouldn't close over the local variables of outer functions in the way you would normally expect. You couldn't use them to make modules the way people do in, say, JavaScript. Zsh does have anonymous functions, but without any of these other things they're not useful for much. So basically, the best you can do is to prefix all your functions and global variables. I'll also point out that you should define your precmd like this: % autoload -Uz add-zsh-hook % add-zsh-hook precmd my_precmd_function add-zsh-hook lets you hook your function into precmd without it overwriting any other functions that might also want to hook precmd. What it means to have a function loaded is a separate question. Zsh has an autoloading feature that only loads functions from disk when they're actually called. When you do autoload -Uz foobar, that makes the function named foobar available to call. When you actually call foobar, that loads the definition from disk.
Is there something like closures for zsh?
1,512,259,936,000
Is it possible to treat a block of commands as an anonymous function? function wrap_this { run_something # Decide to run block or maybe not. run_something else } wrap_this { do_something do_somthing else } # Do something else wrap_this { do_something_else_else do_something_else_else_else } (I realize you create a function or file for each block, but I find this option clearer and easier to read in certain situations.) while does it with do/done and function does it with { multiple lines }. I realize BASH does not have anonymous functions, but is it possible to pass multiple commands to another function, like you can do when defining a function or while?
This is the shortest solution that I could think of: Given these functions: # List processing map() { while IFS='' read -r x; do "$@" "$x"; done; } filter() { while IFS='' read -r x; do "$@" "$x" >&2 && echo "$x"; done; } foldr() { local f="$1"; local result="$2"; shift 2; while IFS='' read -r x; do result="$( "$f" "$@" "$x" "$result" )"; done; echo "$result"; } foldl() { local f="$1"; local result="$2"; shift 2; while IFS='' read -r x; do result="$( "$f" "$@" "$result" "$x" )"; done; echo "$result"; } # Helpers re() { [[ "$2" =~ $1 ]]; } Examples: # Example helpers toLower() { tr '[:upper:]' '[:lower:]'; } showStructure() { [[ "$1" == "--curly" ]] && echo "{$2; $3}" || echo "($1, $2)"; } # All lib* directories, ignoring case, using regex ls /usr | map toLower | filter re 'lib.*' # All block devices. (Using test, for lack of a full bash [[ … ]].) cd /dev; ls | filter test -b # Show difference between foldr and foldl $ ls / | foldr showStructure '()' (var/, (usr/, (tmp/, (sys/, (sbin/, (run/, (root/, (proc/, (opt/, (mnt/, (media/, (lost+found/, (lib64/, (lib32/, (lib@, (home/, (etc/, (dev/, (daten/, (boot/, (bin/, ()))))))))))))))))))))) $ ls / | foldr showStructure '{}' --curly {var/; {usr/; {tmp/; {sys/; {sbin/; {run/; {root/; {proc/; {opt/; {mnt/; {media/; {lost+found/; {lib64/; {lib32/; {lib@; {home/; {etc/; {dev/; {daten/; {boot/; {bin/; {}}}}}}}}}}}}}}}}}}}}}} (These examples are of course just usage examples, and in reality, this style would only make sense for more complicated use cases.) Generally, the following style can always be used: f() { something "$@" ; }; someList | map f g() { something "$1" "$2" …; }; someCommand | filter g ⋮ ⋮ ⋮ ⋮ It’s not quite lambda, but it is very very close. Only a few excess characters. 
But none of the following convenience abbreviations works, as far as I can tell: λ() { [[ $@ ]]; } # fails on spaces λ() { [[ "${@:-1}" ${@:1:-1} ]]; } # syntax error alias λ=test # somehow ignored Unfortunately, bash is not very well suited for this style, even though some of its features have a very functional style.
Passing a code block as an anon. function
1,512,259,936,000
I wrote a function in Bash to view man pages in Vim: viman () { man "$@" | vim -R +":set ft=man" - ; } This works fine; the only problem occurs if I pass it a man page which doesn't exist. It prints that the man page doesn't exist but still opens vim with an empty buffer. So, I changed the function to check the error code (which is 16 here) and exit if the page doesn't exist. The modified function looks somewhat like this: viman () { man "$@" | [[ $? == 16 ]] && exit 1 | vim -R +":set ft=man" - ; } But now it doesn't do anything. I just want to quit the program if the man page doesn't exist, and otherwise open the man page with vim.
Try this: capture the man output, and if successful launch vim viman () { text=$(man "$@") && echo "$text" | vim -R +":set ft=man" - ; }
Viewing man pages in Vim
1,512,259,936,000
I'm writing a zsh shell function (as opposed to a script) where I would really like the extended_glob option to be enabled. But since the function runs in the caller's context, I don't want to clobber their settings. What I'd like to do is conditionally enabled extended_glob as long as I need it and then restore it to the user's option. Is there any way to check whether an option is enabled in zsh?
You can use the local_options option to automatically restore options when the function exits. This would only be appropriate if your function does not make any other option changes that you intend to persist after the function has finished. Thus, you could write your function like this: do_something() { setopt local_options extended_glob ⋮ } If you have some other option that you want to persist after the function has returned, you can use the options associative array (from the zsh/parameter module) to easily check and manipulate individual options: do_something() { local eg=$options[extended_glob] setopt extended_glob ⋮ options[extended_glob]=$eg } If this module is not available in your installation, then you can use the -o test: do_something() { local eg=no [[ -o extended_glob ]] && eg= setopt extended_glob ⋮ setopt ${eg}extended_glob }
Restoring an option at the end of a function in zsh
1,512,259,936,000
In bash, sometimes I would like to reuse a function in several scripts. Is it bad to repeat the definition of the function in all the scripts? If so, what is some good practice? Is the following way a good idea? wrap the definition of a function myfunction in its own script define_myfunction.sh, in any script where the function is called, source define_myfunction.sh , and then call the function myfunction arg1 arg2 .... Thanks.
In bash, that is a good way of doing it, yes. Sourcing the "library" script using source or . would be an adequate solution to your issue of wanting to share a function definition between two or more scripts. Each script that needed access to the function(s) defined in the "library" script(s) would source the needed file(s), probably at the top of the script. This would allow you to collect related functions in a single "library" script and source it to get access to them. Not entirely unrelated, but the bash shell also has the ability to automatically source a file upon executing a non-interactive shell (i.e. a script). This may be used to set up a specific environment for the script: BASH_ENV="$HOME/stuff/script.env" ./myscript ... where script.env may do anything from defining functions and setting shell or environment variables, to sourcing other files etc. Some shells, like ksh93, have an environment variable that points to a directory that may contain function definition scripts like these. In ksh93, this variable is called FPATH. A script, $FPATH/foo, would contain a function called foo, which would automatically be found when the user types foo on the command line (or in a script). The bash shell does not, to my knowledge, have this specific functionality. Other shells have other forms of "auto-load" functionality.
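An end-to-end sketch of the sourcing approach (the file and function names are invented for illustration):

```shell
# --- mylib.sh: the "library"; only definitions, no side effects ---
greet() { printf 'Hello, %s\n' "$1"; }

# --- main script: pull the library in, then call its functions ---
. ./mylib.sh       # '.' is the portable spelling of 'source'
greet world        # prints: Hello, world
```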
How shall I reuse a function in multiple scripts?
1,512,259,936,000
The Shellshock bug in bash works by way of environment variables. Honestly, I was surprised that there is such a feature as "passing on of function definitions via environment variables". Hence this question, while maybe not perfectly formulated: what is an example of a case in which it is necessary or useful to have this feature? Bonus: do other shells (zsh, dash, etc.) also have this feature?
When a script invokes another script, variables of the parent script can be exported, and then they'll be visible in the child script. Exporting functions is an obvious generalization: export the function from the parent, make it visible in the child. The environment is the only convenient way a process can pass arbitrary data to its children. The data has to be marshalled into strings that don't contain null bytes, which isn't a difficulty for shell functions. There are other potential methods, such as shared memory blocks or temporary files passed via file descriptors, but these could cause problems with intermediate programs that don't know what to do with them or would close them. Programs expect to run in an environment that contains variables that they don't know or care about, so they won't go overwriting or erasing them. The choice of using the function name as the name of the environment variable is a strange one. For one thing, it means that an exported variable clashes with an exported function of the same name. Exported functions are an old feature. Functions were added in the Bourne shell in SVR2, and exported functions in the Version 8 shell released the same year (1984). In that shell, variables and functions used the same namespace. I don't know how function export worked. The Heirloom shell is based on a Bourne variant which has functions but doesn't export them. ATT ksh supposedly supports exporting functions, but looking at the source or playing with it, I can't see that it does, as of ksh93u. env -i /usr/bin/ksh -c 'f=variable; f () { echo function; }; typeset -fx f; /usr/bin/env; ksh -c f' _=*25182*/usr/bin/env PWD=/home/gilles SHLVL=1 A__z="*SHLVL ksh: f: not found Ksh's public domain clones (pdksh, mksh), dash and zsh don't support exporting functions.
Why does bash even parse/run stuff put in the environment variable?
1,512,259,936,000
I want to write a function that checks if a given variable, say, var, starts with any of the words in a given list of strings. This list won't change. To instantiate, let's pretend that I want to check if var starts with aa, abc or 3@3. Moreover, I want to check if var contains the character >. Let's say this function is called check_func. My intended usage looks something like if check_func "$var"; then do stuff fi For example, it should "do stuff" for aardvark, abcdef, [email protected] and 12>5. I've seen this SO question where a user provides part of the work: beginswith() { case $2 in "$1"*) true;; *) false;; esac; } My idea is that I would iterate over the list mentioned above and use this function. My difficulty lies in not understanding exactly how exiting (or whatever replaces returning) should be done to make this work.
check_prefixes () { value=$1 for prefix in aa abc 3@3; do case $value in "$prefix"*) return 0 esac done return 1 } check_contains_gt () { value=$1 case $value in *">"*) return 0 esac return 1 } var='aa>' if check_prefixes "$var" && check_contains_gt "$var"; then printf '"%s" contains ">" and starts with one of the prefixes\n' "$var" fi I divided the tests up into two functions. Both use case ... esac and returns success (zero) as soon as this can be determined. If nothing matches, failure (1) is returned. To make the list of prefixes more of a dynamic list, one could possibly write the first function as check_prefixes () { value=$1 shift for prefix do case $value in "$prefix"*) return 0 esac done return 1 } (the value to inspect is the first argument, which we save in value and then shift off the list of arguments to the function; we then iterate over the remaining arguments) and then call it as check_prefixes "$var" aa abc 3@3 The second function could be changed in a similar manner, into check_contains () { value=$1 shift case $value in *"$1"*) return 0 esac return 1 } (to check for some arbitrary substring), or check_contains_oneof () { value=$1 shift for substring do case $value in *"$substring"*) return 0 esac done return 1 } (to check for any of a number of substrings)
Write a function that checks if a string starts with or contains something
1,512,259,936,000
I'm trying to create a function and believe I found a good working example but I don't understand all the logic behind it. More specifically, on the "while" line, could someone explain what test is and does? What is $# (isn't # a comment char?) and where does the -gt 0 parameter come from? I couldn't find it in the while man page. Here is the example: function my_function() { while test $# -gt 0 do echo "$1" shift done } Thank you.
While # on its own is definitely a comment, $# contains the number of parameters passed to your function. test is a program which lets you perform various tests, for example, whether one number is greater than another (if your operator is -gt; there are many other operators, see man test). It will return success if the test is successful (in this case, if the number of parameters IS greater than 0). The shift command throws away the first parameter. It also decreases $#. The code as a whole can be seen as: do something with a parameter (in this case, showing it on the screen), then discard it; repeat until there are no parameters left. If you want to see all the parameters that are left, useful for debugging, check the contents of $@.
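A minimal sketch of the same loop, showing $# shrinking as shift consumes the parameters:

```shell
show_args () {
  while test "$#" -gt 0
  do
    echo "args left: $# - next: $1"
    shift
  done
}

show_args one two three
```

Each iteration prints the remaining count and the current first parameter, so the output walks through one, two, three as $# goes 3, 2, 1.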
What does "while test $# -gt 0" do?
1,512,259,936,000
To make it short, doing something like: -bash$ function tt { echo $0; } -bash$ tt $0 will return -bash, but how to get the function name called, i.e. tt in this example instead?
In bash, use FUNCNAME array: tt() { printf '%s\n' "$FUNCNAME" } With some ksh implementations: tt() { printf '%s\n' "$0"; } In ksh93: tt() { printf '%s\n' "${.sh.fun}"; } From ksh93d and above, you can also use $0 inside function to get the function name, but you must define function using function name { ...; } form. In zsh, you can use funcstack array: tt() { print -rl -- $funcstack[1]; } or $0 inside function. In fish: function tt printf '%s\n' "$_" end
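A small sketch (run through bash explicitly, since FUNCNAME is bash-specific) showing that $FUNCNAME is the current function and ${FUNCNAME[1]} is its caller:

```shell
out=$(bash -c '
inner() { printf "%s called by %s\n" "$FUNCNAME" "${FUNCNAME[1]}"; }
outer() { inner; }
outer
')
echo "$out"
```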
How to determine callee function name in a script
1,512,259,936,000
I have this function, rpargs () { local i args=() for i in "$@" do test -e "$i" && args+="$(realpath --canonicalize-existing -- "$i")" || args+="$i" done } And I want to return args. The only ways I can think of are either to printf '%s\0' and then split it via expansion flags (0@), or to use a global like the code above.
zsh's return builtin can only return a 32-bit signed integer like the _exit() system call. While that's better than most other Bourne-like shells, that still can't return arbitrary strings or list of strings like the rc/es shells. The return status is more about returning a success/failure indication. Here, alternatively, you can have the function take the name of the array to fill in as argument, like: myfunc() { local arrayname=$1; shift # ... eval $arrayname'=("$elements[@]")' # the returned $? will be 0 here for success unless that eval command # fails. } myfunc myarray other args Your printf '%s\0' approach wouldn't work for array elements that contain NULs. Instead you could use the qq parameter expansion flag to quote elements on output, and the z (to parse quotes) and Q (to remove quoting) on input like: myfunc() { # ... print -r -- ${(qq)elements} } myarray=("${(@Q)${(z)$(myfunc)}}") But in addition to being less legible, it's also less efficient as it means forking a process and transferring the output of myfunc through a pipe in addition to the quoting/unquoting.
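The answer targets zsh, but the first technique — passing the array name and filling it with eval — works the same way in bash. A sketch with made-up names, written to a temporary script so it runs under bash even from plain sh:

```shell
script=$(mktemp)
cat > "$script" <<'EOF'
myfunc() {
  arrayname=$1; shift
  elements=("item one" "item two")   # whatever the function computed
  eval $arrayname'=("${elements[@]}")'
}
myfunc myarray other args
printf '%s\n' "${myarray[@]}"
EOF
out=$(bash "$script")
echo "$out"
rm -f "$script"
```

The single quotes around the parenthesized part keep "${elements[@]}" from expanding before eval runs, so elements containing spaces survive intact.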
What's the idiomatic way of returning an array in a zsh function?
1,512,259,936,000
Consider this script: function alfa(bravo, charlie) { if (charlie) return "charlie good" else { return "charlie bad" } } BEGIN { print alfa(1, 1) print alfa(1, 0) print alfa(1, "") print alfa(1) } Result: charlie good charlie bad charlie bad charlie bad Does Awk have a way to tell when an argument has not been provided?
Yes, you can do this: function alfa(bravo, charlie) { if (charlie) { return "charlie good" } if (charlie == 0 && charlie == "") { return "charlie not provided" } if (!charlie && charlie != 0) { return "charlie null" } if (!charlie && charlie != "") { return "charlie 0" } } Result: charlie good charlie 0 charlie null charlie not provided
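The ordering works because an awk scalar that was never assigned compares equal to both 0 and "" at the same time, while an assigned 0 or assigned "" each matches only one of those. A condensed check (the function name is made up):

```shell
out=$(awk '
function kind(x) {
  if (x == 0 && x == "") return "absent"   # only a never-assigned scalar matches both
  return "present"
}
BEGIN { print kind(); print kind(0); print kind("") }')
echo "$out"
```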
Detect optional function argument (scalar)
1,512,259,936,000
I'm dabbling in traps in Bash again. I've just noticed the RETURN trap doesn't fire up for functions. $ trap 'echo ok' RETURN $ f () { echo ko; } $ f ko $ . x ok $ cat x $ As you can see it goes off as expected for sourcing the empty file x. Bash's man has it so: If a sigspec is RETURN, the command arg is executed each time a shell function or a script executed with the . or source builtins finishes executing. What am I missing then? I have GNU bash, version 4.4.12(1)-release (x86_64-pc-linux-gnu).
As I understand this, there's an exception to the doc snippet in my question. The snippet was: If a sigspec is RETURN, the command arg is executed each time a shell function or a script executed with the . or source builtins finishes executing. The exception is described here: All other aspects of the shell execution environment are identical between a function and its caller with these exceptions: the DEBUG and RETURN traps (see the description of the trap builtin under SHELL BUILTIN COMMANDS below) are not inherited unless the function has been given the trace attribute (see the description of the declare builtin below) or the -o functrace shell option has been enabled with the set builtin (in which case all functions inherit the DEBUG and RETURN traps), and the ERR trap is not inherited unless the -o errtrace shell option has been enabled. As for functrace, it can be turned on with the typeset's -t: -t Give each name the trace attribute. Traced functions inherit the DEBUG and RETURN traps from the calling shell. The trace attribute has no special meaning for variables. Also set -o functrace does the trick. Here's an illustration. $ trap 'echo ko' RETURN $ f () { echo ok; } $ cat y f $ . y ok ko $ set -o functrace $ . y ok ko ko As for declare, it's the -t option again: -t Give each name the trace attribute. Traced functions inherit the DEBUG and RETURN traps from the calling shell. The trace attribute has no special meaning for variables. Also extdebug enables function tracing, as in ikkachu's answer.
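The same inheritance rule can be seen in a non-sourced script: only the second call fires the trap, because functrace was enabled in between (wrapped in bash -c since functrace and the RETURN trap are bash features):

```shell
out=$(bash -c '
f() { echo inside; }
trap "echo returned" RETURN
f                  # trap not inherited by the function: prints only "inside"
set -o functrace
f                  # now the RETURN trap fires when f returns
')
echo "$out"
```

Note the trap does not fire when the bash -c command string itself finishes — that only happens for . or source.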
RETURN trap in Bash not executing for function
1,512,259,936,000
I know that set -e is my friend in order to exit on error. But what to do if the script is sourced, e.g. a function is executed from console? I don't want to get the console closed on error, I just want to stop the script and display the error-message. Do I need to check the $? of each command by hand to make that possible ? Here an example-script myScript.sh to show the problem: #!/bin/sh set -e copySomeStuff() { source="$1" dest="$2" cp -rt "$source" "$dest" return 0 } installStuff() { dest="$1" copySomeStuff dir1 "$dest" copySomeStuff dir2 "$dest" copySomeStuff nonExistingDirectory "$dest" } The script is used like that: $ source myScript.sh $ installStuff This will just close down the console. The error displayed by cp is lost.
I would recommend having one script that you run as a sub-shell, possibly sourcing a file to read in function definitions. Let that script set the errexit shell option for itself. When you use source from the command line, "the script" is effectively your interactive shell. Exiting means terminating the shell session. There are possibly ways around this, but the best option, if you wanted to set errexit for a session, would be to simply have: #!/bin/bash set -o errexit source file_with_functions do_things_using_functions Additional benefit: Will not pollute the interactive session with functions.
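A runnable sketch of that pattern, with a failing step standing in for the cp from the question (the file names are temporary and made up):

```shell
lib=$(mktemp); main=$(mktemp)
cat > "$lib" <<'EOF'
step_ok()   { echo "step ok"; }
step_fail() { false; }          # stand-in for a failing cp
EOF
cat > "$main" <<EOF
set -o errexit
source "$lib"
step_ok
step_fail
echo "never reached"
EOF
status=0
out=$(bash "$main") || status=$?
echo "output: $out (exit $status)"
rm -f "$lib" "$main"
```

The sub-shell dies at step_fail with a nonzero status, while the session that launched it keeps running.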
Return on error in shellscript instead of exit on error
1,512,259,936,000
How can one measure individual calls to bash functions from inside the bash file. I have a program that I call using the command eclipse -b col_solve.pl -e "myPred" This call outputs some information, the last of which is SUCCESS or FAIL. I am writing a script that is called on a bunch of files in a directory, and for each of these files, outputs The name The status (SUCCESS or FAIL) and the (user) time it took to execute . This is the code that I know works: I use this to get the status (retrieving the last word in the output): stat= get_stat ( ){ stat=$(awk '{print $NF}' <<< $1); } I use this to call the program : run_eclipse_on ( ){ get_stat "$(eclipse -b col_solve.pl -e "run(\"$1\")" )"; } The problematic code is the following: for i in `ls $1` ; #where $1 is the directory containing the files do tps=$(/usr/bin/time -f %U \ #to get just the user time [run_eclipse_on $1/$i] ); # HERE it is! echo $i::$stat::::$tps; # gives, for ex: file_name::SUCCESS::::0.20 done The culprit line is the one where the function is called. I tried surrounding it with `, {, [, $(, ' and ". Nothing worked... Is it even possible...?
Use the time keyword instead of the external command. Using the keyword allows you to run time on any shell command, including function calls, not just on running a program. You can control the output format to some extent through the TIMEFORMAT variable. TIMEFORMAT=%2U; time run_eclipse_on …; echo "$i::$stat" The time output gets printed on its own line, though. Bash allows a trick: you can change TIMEFORMAT during the command, so you can stuff more things in there. time { run_eclipse_on …; TIMEFORMAT="${i//%/%%}::${stat//%/%%}::%2U"; } The output from time is printed to standard error. If you need it on standard output, just redirect with 2>&1. That will also redirect whatever the command printed on stderr, however. To preserve stderr, you can do some file descriptor shuffling. { time { { run_eclipse_on …; TIMEFORMAT=$stat::%2U; } 2>&3; } 2>&1; } 3>&2
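A minimal capture of the formatted report (run through bash, since the time keyword and TIMEFORMAT are bash features):

```shell
out=$(bash -c '
TIMEFORMAT="user=%2U"
{ time sleep 0; } 2>&1   # the keyword prints to stderr, hence the group redirection
')
echo "$out"
```

The braces matter: time sleep 0 2>&1 would only redirect sleep's stderr, not the timing report.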
Using time on bash functions (not commands)
1,512,259,936,000
Here's a simplified version of my script. My question is, How do I return the exit code from apt-get in this case? #!/bin/bash install_auto() { apt-get -h > /dev/null 2>&1 if [ $? -eq 0 ] ; then return $(sudo apt-get install --assume-yes $@) fi return 1 } echo "installing $@" install_auto "$@" echo $? echo "finished" exit 0 The output is: ./install_test.sh: line 5: return: Reading: numeric argument required Update: I came up with something that works: return $(sudo apt-get install --assume-yes "$@" >/dev/null 2>&1; echo $?) Is that a good approach?
Bash's return can only return numerical arguments. In any case, by default, it will return the exit status of the last command run. So, all you really need is: #!/usr/bin/env bash install_auto() { apt-get -h > /dev/null 2>&1 if [ $? -eq 0 ] ; then sudo apt-get install --assume-yes $@ fi } You don't need to explicitly set a value to be returned since by default a function will return $?. However, that will not work if the first apt command failed and you did not go into the if branch. To make it more robust, use this: #!/usr/bin/env bash install_auto() { apt-get -h > /dev/null 2>&1 ret=$? if [ $ret -eq 0 ] ; then ## If this is executed, the else is ignored and $? will be ## returned. Here, $? will be the exit status of this command sudo apt-get install --assume-yes $@ else ## Else, return the exit value of the first apt-get return $ret fi } The general rule is that in order to have a function return the exit status of a particular job and not necessarily the last one it ran, you will need to save the exit status to a variable and return the variable: function foo() { run_a_command arg1 arg2 argN ## Save the command's exit status into a variable return_value=$? [the rest of the function goes here] ## return the variable return $return_value } EDIT: Actually, as @gniourf_gniourf pointed out in the comments, you could greatly simplify the whole thing using &&: install_auto() { apt-get -h > /dev/null 2>&1 && sudo apt-get install --assume-yes $@ } The return value of this function will be one of: If apt-get -h failed, it will return its exit code If apt-get -h succeeded, it will return the exit code of sudo apt-get install.
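The save-then-return pattern in isolation (run_step and its body are made up stand-ins):

```shell
run_step() {
  false              # stand-in for the command whose status matters
  ret=$?             # capture immediately, before anything else overwrites $?
  echo "cleaning up" # extra work; without the capture this would reset $? to 0
  return "$ret"
}

run_step && echo "ok" || echo "failed with $?"
```

Here the function still reports the failure (status 1) even though its last command, the echo, succeeded.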
How to return the exit code? Error: return: Reading: numeric argument required
1,512,259,936,000
I'm trying to source a file whose name is passed from stdin. My plan is to create a function like this: mySource() { # get stdin and pass it as an argument to `source` source $(cat) } to be called like this: $ echo "file1.sh" | mySource wherein file1.sh is: FILE=success export FILE Assuming $FILE is initialized to hello world, when I run $ echo "file1.sh" | mySource, I expect $ echo $FILE to print success; however, instead it prints hello world. Is there some way to source a file from a function?
You can change your mySource function to: mySource() { source "$1" } Then calling it with: $ mySource file.sh $ printf '%s\n' "$FILE" success You can also make mySource handles multiple files: mySource() { for f do source "$f" done }
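A worked example of the fixed function, using a temporary file in place of file1.sh:

```shell
mySource() { . "$1"; }    # "." is the portable spelling of source

conf=$(mktemp)
echo 'FILE=success' > "$conf"
FILE="hello world"
mySource "$conf"
echo "$FILE"
rm -f "$conf"
```

Because the sourcing happens in the current shell (no pipe, no subshell), the assignment in the file is visible afterwards.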
Calling `source` from bash function
1,512,259,936,000
Suppose I have bash_functions.sh: function test(){ } function test2(){ } And in my ~/.bashrc I do: source ~/bash_functions.sh Is it possible to, when sourcing it, avoid sourcing a specific function? I mean, source everything in bash_functions.sh, except for test?
In a function definition foo () { … }, if foo is an alias, it is expanded. This can sometimes be a problem, but here it helps. Alias foo to some other name before sourcing the file, and you'll be defining a different function. In bash, alias expansion is off by default in non-interactive shells, so you need to turn it on with shopt -s expand_aliases. If sourced.sh contains foo () { echo "foo from sourced.sh" } then you use it this way foo () { echo "old foo" } shopt -s expand_aliases # necessary in bash; just skip in ash/ksh/zsh alias foo=do_not_define_foo . sourced.sh unalias foo; unset -f do_not_define_foo foo then you get old foo. Note that the sourced file must use the foo () { … } function definition syntax, not function foo { … }, because the function keyword would block alias expansion.
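An end-to-end sketch using temporary files for sourced.sh and the caller; foo keeps its old definition while bar is picked up from the sourced file:

```shell
lib=$(mktemp); main=$(mktemp)
cat > "$lib" <<'EOF'
foo () { echo "foo from sourced.sh"; }
bar () { echo "bar from sourced.sh"; }
EOF
cat > "$main" <<'EOF'
foo () { echo "old foo"; }
shopt -s expand_aliases
alias foo=do_not_define_foo
. "$1"
unalias foo; unset -f do_not_define_foo
foo; bar
EOF
out=$(bash "$main" "$lib")
echo "$out"
rm -f "$lib" "$main"
```

While sourcing, the foo () in the library expands to do_not_define_foo (), which is then discarded; bar is defined normally.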
Is it possible to source a file in bash, but skipping specific functions?
1,512,259,936,000
I'm writing a set of shell functions that I want to have working in both Bash and KornShell93, but with Bash I'm running into a "circular name reference" warning. This is the essence of the problem: function set_it { typeset -n var="$1" var="hello:$var" } function call_it { typeset -n var="$1" set_it var } something="boff" call_it something echo "$something" Running it: $ ksh script.sh hello:boff $ bash script.sh script.sh: line 4: warning: var: circular name reference hello: KornShell93 does exactly what I want, but Bash fails, and also warns about the same thing on line 2 if the something variable in the script is named var instead. I'd like to have the var variable be local to each function, which is why I use typeset, but Bash doesn't seem to like "dereferencing" a nameref to a variable with the same name as the nameref itself. I can't use local -n or declare -n since it would break in ksh which lacks these, and even if I did, it doesn't solve the issue. The only solution I've found is to use unique variable names in each function, which seems rather silly since they are local. The Bash manual says the following about typeset: typeset [...] -n Give each name the nameref attribute, making it a name reference to another variable. That other variable is defined by the value of name. All references and assignments to name, except for changing the -n attribute itself, are performed on the variable referenced by name's value. [...] When used in a function, declare and typeset make each name local, as with the local command, unless the -g option is supplied. If a variable name is followed by =value, the value of the variable is set to value. It is obvious that there is something I don't understand about Bash's name references and function-local variables. So, the question is: In this case, am I missing something about Bash's handling of name reference variables, or is this a bug/mis-feature in Bash? 
Update: I'm currently working with GNU bash, version 4.3.39(1)-release (x86_64-apple-darwin15) as well as with GNU bash, version 4.3.46(1)-release (x86_64-unknown-openbsd6.0). The Bash shipped with macOS is too old to know about name references at all. Update: Even shorter: function bug { typeset -n var="$1" printf "%s\n" "$var" } var="hello" bug var Results in bash: warning: var: circular name reference. The var in the function should have different scope from the var in the global scope. This imposes an unnecessary restriction on the caller. The restriction being "you're not allowed to name your variables whatever you want, because there may be a name clash with a (local) nameref in this function".
Chet Ramey (Bash maintainer) says There was extensive discussion about namerefs on bug-bash earlier this year. I have a reasonable suggestion about how to change this behavior, and I will be looking at it after bash-4.4 is released. In the meanwhile, I'm resorting to slightly obfuscating the names of my local nameref variables, so that they don't clash within the library nor (hopefully) with global shell variable names. In bash 5.0, this is ever so slightly remedied (but not really fixed). The following is the observed behaviour: $ foo () { typeset -n var="$1"; echo "$var"; } $ var=hello $ foo var bash: typeset: warning: var: circular name reference bash: warning: var: circular name reference bash: warning: var: circular name reference hello This shows that it works, but that there are also a few warnings. The relevant NEWS entry says i. A nameref name resolution loop in a function now resolves to a variable by that name in the global scope.
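The obfuscated-local-name workaround looks like this; the _setit_ref name is arbitrary, and typeset -n needs bash 4.3 or later (hence the explicit bash -c):

```shell
out=$(bash -c '
set_it() {
  typeset -n _setit_ref=$1     # unlikely to collide with a caller variable name
  _setit_ref="hello:${_setit_ref}"
}
something=boff
set_it something
echo "$something"
')
echo "$out"
```

No circular-reference warning is emitted because the nameref's own name can no longer equal the name of the variable it points at.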
Circular name references in bash shell function, but not in ksh
1,512,259,936,000
In zsh I am using the following function to delete a local and a remote branch with one command: gpDo () { git branch -d "$1" && git push --delete origin "$1" } Currently, auto-completion for the Git branch does not work. I have to manually type the whole branch name. How can I get tab completion working for such as function?
I assume you're using the “new” completion system enabled by compinit. If you're using oh-my-zsh, you are. You need to tell zsh to use git branch names for gpDo. Git already comes with a way to complete branch names. As of zsh 5.0.7 this is the function __git_branch_names but this isn't a stable interface so it could change in other versions. To use this function, put this line in your .zshrc: compdef __git_branch_names gpDo With this declaration, completion after gpDo will only work after you've completed something on a git command line at least once. This is due to a quirk of function autoloading in zsh. Alternatively, run _git 2>/dev/null in your .zshrc; this causes an error because the completion function is called in an invalid context, but the error is harmless, and the side effect of loading _git and associated functions including __git_branch_names remains. Alternatively, define your own function for git branch completion. Quick-and-dirty way: _JJD_git_branch_names () { compadd "${(@)${(f)$(git branch -a)}#??}" }
zsh: Tab completion for function with Git commands
1,512,259,936,000
I'm customizing my zsh PROMPT and calling a function that may or may not echo a string based on the state of an environment variable: function my_info { [[ -n "$ENV_VAR" ]] && echo "Some useful information\n" } local my_info='$(my_info)' PROMPT="${my_info}My awesome prompt $>" I would like the info to end on a trailing newline, so that if it is set, it appears on its own line: Some useful information My awesome prompt $> However, if it's not set, I want the prompt to be on a single line, avoiding an empty line caused by an unconditional newline in my prompt: PROMPT="${my_info} # <= Don't want that :) My awesome prompt $>" Currently I work around the $(command substitution) removing my newline by suffixing it with a non-printing character, so the newline isn't trailing anymore: [[ -n "$ENV_VAR" ]] && echo "Some useful information\n\r" This is obviously a hack. Is there a clean way to return a string that ends on a newline? Edit: I understand what causes the loss of the trailing newline and why that happens, but in this question I would specifically like to know how to prevent that behaviour (and I don't think this workaround applies in my case, since I'm looking for a "conditional" newline). Edit: I stand corrected: the referenced workaround might actually be a rather nice solution (since prefixing strings in comparisons is a common and somewhat similar pattern), except I can't get it to work properly: echo "Some useful information\n"x [...] PROMPT="${my_info%x}My awesome prompt $>" does not strip the trailing x for me. Edit: Adjusting the proposed workaround for the weirdness that is prompt expansion, this worked for me: function my_info { [[ -n "$ENV_VAR" ]] && echo "Some useful information\n"x } local my_info='${$(my_info)%x}' PROMPT="$my_info My awesome prompt $>" You be the judge if this is a better solution than the original one. It's a tad more explicit, I think, but it also feels a bit less readable.
Final newlines are removed from command substitutions. Even zsh doesn't provide an option to avoid this. So if you want to preserve final newlines, you need to arrange for them not to be final newlines. The easiest way to do this is to print an extra character (other than a newline) after the data that you want to obtain exactly, and remove that final extra character from the result of the command substitution. You can optionally put a newline after that extra character, it'll be removed anyway. In zsh, you can combine the command substitution with the string manipulation to remove the extra character. my_info='${$(my_info; echo .)%.}' PROMPT="${my_info}My awesome prompt $>" In your scenario, take care that my_info is not the output of the command, it's a shell snippet to get the output, which will be evaluated when the prompt is expanded. PROMPT=${my_info%x}… didn't work because that tries to remove a final x from the value of the my_info variable, but it ends with ). In other shells, this needs to be done in two steps: output=$(my_info; echo .) output=${output%.} In bash, you wouldn't be able to call my_info directly from PS1; instead you'd need to call it from PROMPT_COMMAND. PROMPT_COMMAND='my_info=$(my_info; echo .)' PS1='${my_info%.}…'
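The sentinel idiom in plain sh, outside of prompt expansion, for clarity:

```shell
my_info() {
  echo 'Some useful information'
  echo .                 # sentinel: the newline before it is no longer final
}
output=$(my_info)
output=${output%.}       # strip the sentinel; the newline it protected survives
printf '%s---' "$output" # the --- lands on its own line, proving the newline
```

The command substitution strips the newline after the dot, but the newline before the dot is interior and therefore kept.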
Elegant way to prevent command substitution from removing trailing newline
1,512,259,936,000
I have 3 functions, like function WatchDog { sleep 1 #something } function TempControl { sleep 480 #somthing } function GPUcontrol { sleep 480 #somethimg } And i am runing it like WatchDog | TempControl | GPUcontrol This script is in rc.local file. So, logically it should run at automatically. The thing is that first function is doing fine. But second and third is not starting. But if I am starting it like sudo bash /etc/rc.local that is working fine. What is the problem? The same thing if i am adding it to init.d directory.
A pipe sends the output of one command to the next. You are looking for the & (ampersand). This forks processes and runs them in the background. So if you ran: WatchDog & TempControl & GPUcontrol It should run all three simultaneously (the first two in the background; append a final & if GPUcontrol should run in the background as well). Also when you run sudo bash /etc/rc.local I believe that is running them in series not in parallel (it waits for each command to finish before starting the next). That would be sort of like this: WatchDog ; TempControl ; GPUcontrol Command Separators ; semi-colon - command1 ; command2 This will execute command2 after command1 is finished, regardless of whether or not it was successful & ampersand - command1 & command2 This will execute command1 in a subshell and execute command2 at the same time. || OR logical operator - command1 || command2 This will execute command1 and then execute command2 ONLY if command1 failed && AND logical operator - command1 && command2 This will execute command1 and then execute command2 ONLY if command1 succeeded.
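A self-contained sketch with dummy functions; wait blocks until every backgrounded job has finished:

```shell
task() { sleep 1; echo "done: $1"; }

task a & task b & task c &
wait                  # all three tasks ran concurrently; block until they finish
echo "all done"
```

The three "done:" lines can appear in any order, which is exactly what concurrent execution looks like.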
Parallel running of functions
1,512,259,936,000
I am aware that aliases can be bypassed by quoting the command itself. However, it seems that if builtin commands are "shadowed" by functions with the same names, there is no way to execute the underlying builtin command except...by using a builtin command. If you can get to it. To quote the bash man page (at LESS='+/^COMMAND EXECUTION' man bash): COMMAND EXECUTION After a command has been split into words, if it results in a simple command and an optional list of arguments, the following actions are taken. If the command name contains no slashes, the shell attempts to locate it. If there exists a shell function by that name, that function is invoked as described above in FUNCTIONS. If the name does not match a function, the shell searches for it in the list of shell builtins. If a match is found, that builtin is invoked. So, is it possible to recover from the following, without starting a new shell? unset() { printf 'Haha, nice try!\n%s\n' "$*";} builtin() { printf 'Haha, nice try!\n%s\n' "$*";} command() { printf 'Haha, nice try!\n%s\n' "$*";} I didn't even add readonly -f unset builtin command. If it is possible to recover from the above, consider this a bonus question: can you still recover if all three functions are marked readonly? I came up with this question in Bash, but I'm interested in its applicability to other shells as well.
When bash is in posix mode, some builtins are considered special, which is compliant with POSIX standard. One special thing about those special builtins, they are found before function in command lookup process. Taking this advantage, you can try: $ unset builtin Haha, nice try! builtin $ set -o posix $ unset builtin $ builtin command -v echo echo though it does not work if set is overridden by a function named set: $ set() { printf 'Haha, nice try!\n%s\n' "$*";} $ set -o posix Haha, nice try! In this case, you just have to set POSIXLY_CORRECT to make bash enter posix mode, then you have all special builtins: $ POSIXLY_CORRECT=1
How to bypass bash functions called `command`, `builtin` and `unset`?
1,497,561,233,000
Notice that we return from a loop, which is redirected. I don't know, if I should worry about the write buffer of "file". function f { i=1 while : do echo aaaaaaaaaaaaabbbbbbbbbbbbbbbbb ((i++)) if [ $i -gt 3 ] then return # return while redirected fi done >> file # writing to file } f NOTE: I am aware that this function could be easily rewritten, so that is would not return from within the redirected loop. However this is just a simplified example for the purpose of this question. So please, do not try to improve this code. I am not interested in workarounds, either. My sole question is, if there is something I should be particularly aware of. Like the file descriptor is not closed properly. Or sometimes I can expect only half of the buffer (i.e. "aaaaaa") to be written into the file. I would like to know, if this is a very bad idea, and why? Or maybe it would work without unforeseen race conditions or similar? (but again, I don't want answers like "it is bad because you should use this-and-this pattern instead")
While each command may have its own write buffer, there is no write buffer that is shared between commands even built-in in bash (or even two invocations of a same command, built-in or not). Even ksh93 which has been known to do some I/O optimisation (for instance, it will read-ahead and share some data on input (causing some bugs)), doesn't do it. So, it will be safe in that regard. After a command finishes, like your echo aaaaaaaaaaaaabbbbbbbbbbbbbbbbb, and provided it doesn't fork unattended process running in background, you can be guaranteed that all I/O has been done. A few notes though. In a function like: f() { { echo x return echo y } > file echo something else } In the Bourne shell (and the Bourne shell only), that return would break output of the inner command group, but not return from the function as in that ancient shell, that command group would run in a subshell because of the redirection (so you'd see the something else). That no longer happens in modern shells like bash, but you'd have the same problem in bash if you wrote: f() { ( echo x return echo y ) > file echo something else } or f() { { echo x return echo y } | tr xy ab echo something else } Beware there are cases where some commands are not waited for. In: f() { { echo >(sleep 1; echo x) return } > file } f; cat file You may find that x doesn't show up because cat is run before echo x. Some shells (though not bash) have similar potential issues with pipeline components other that the rightmost one.
bash, return from redirected loop, is it safe?
1,497,561,233,000
I can grep the output of jobs, and I can grep the output of a function. But why can't I grep the output of jobs when it's in a function? $ # yes, i can grep jobs $ jobs [1]+ Running vim [2]+ Stopped matlab $ jobs | grep vim [1]+ Running vim $ # yes, of course i can grep a function $ type mockjobs mockjobs is a function mockjobs () { echo '[1]+ Running vim banjo' } $ mockjobs | grep vim [1]+ Running vim banjo $ # now put those two together and surely I can grep??? $ type realjobs realjobs is a function realjobs () { jobs } $ realjobs | grep vim $ # Nope, WTF? $ bash --version GNU bash, version 4.1.2(1)-release (x86_64-redhat-linux-gnu) $ # funny though, redirection works just fine: $ tmpfile=$(mktemp); realjobs > $tmpfile; grep vim $tmpfile; rm $tmpfile [1]+ Running vim I'm not seeing a bug in the bash list, but maybe I missed it? There's reference to an issue in Bash 2.02 when jobs is part of a pipeline, but nothing recent and in a function that I can find. What am I missing here?
Eric Blake answered on the bash-bugs mailing list: jobs is an interesting builtin - the set of jobs in a parent shell is DIFFERENT than the set of jobs in a subshell. Bash normally creates a subshell in order to do a pipeline, and since there are no jobs in that subshell, the hidden execution of jobs has nothing to report. Bash has code to special-case jobs | when it can obviously tell that you are running the jobs builtin as the sole command of the left side of a pipe, to instead report about the jobs of the parent shell, but that special-case code cannot kick in if you hide the execution of jobs, whether by hiding it inside a function as you did, or by other means such as: eval jobs | grep vim
Cannot grep jobs list when jobs called in a function
1,497,561,233,000
How can I check if a command is a built-in command for ksh? In tcsh you can use where; in zsh and bash you can use type -a; and in some modern versions of ksh you can use whence -av. What I want to do is write an isbuiltin function that works in any version of ksh (including ksh88 and any other "old" versions of ksh) that behaves like this: Accept multiple arguments and check if each is built-in Return 0 (success) if all of the given commands are built-in At the first non-built-in command, stop checking, return 1 (failure), and print a message to stderr. I already have working functions like this for zsh and bash using the aforementioned commands. Here is what I have for ksh: isbuiltin() { if [[ "$#" -eq 0 ]]; then echo "Usage: isbuiltin cmd" >&2 return 1 fi for cmd in "$@" do if [[ $cmd = "builtin" ]]; then #Handle the case of `builtin builtin` echo "$cmd is not a built-in" >&2 return 1 fi if ! whence -a "$cmd" 2> /dev/null | grep 'builtin' > /dev/null ; then echo "$cmd is not a built-in" >&2 return 1 fi done } This function works for ksh93. However, it appears that ksh88's version of whence doesn't support the -a option, which is the option to make it display all occurrences. Without the ability to display all occurrences, I can only use whence -v, which does tell me whether a command is built-in but only if there isn't also an alias or function of the same name. Question: Is there something else I can use in place of whence -av in ksh88? Solution Using the accepted answer (opening a subshell), here is my updated solution. Place the following in .kshrc: isbuiltin() { if [[ "$#" -eq 0 ]]; then printf "Usage: isbuiltin cmd\n" >&2 return 1 fi for cmd in "$@" do if ( #Open a subshell so that aliases and functions can be safely removed, # allowing `whence -v` to see the built-in command if there is one. unalias "$cmd"; if [[ "$cmd" != '.' ]] && typeset -f | egrep "^(function *$cmd|$cmd\(\))" > /dev/null 2>&1 then #Remove the function iff it exists. 
#Since `unset` is a special built-in, the subshell dies if it fails unset -f "$cmd"; fi PATH='/no'; #NOTE: we can't use `whence -a` because it's not supported in older versions of ksh whence -v "$cmd" 2>&1 ) 2> /dev/null | grep -v 'not found' | grep 'builtin' > /dev/null 2>&1 then #No-op. Needed to support some old versions of ksh : else printf "$cmd is not a built-in\n" >&2 return 1 fi done return 0 } I have tested this with ksh88 in Solaris, AIX, and HP-UX. It works in all the cases I tested. I have also tested this with the modern versions of ksh in FreeBSD, Ubuntu, Fedora, and Debian.
If your concern is about aliases, just do: [[ $(unalias -- "$cmd"; type -- "$cmd") = *builtin ]] ($(...) creates a subshell environment, so unalias is only in effect there). If you're also concerned about functions, also run command unset -f -- "$cmd" before type.
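A minimal sketch combining both suggestions (subshell, unalias, unset -f), assuming a shell whose `type` output contains the word "builtin" for built-ins, as bash and ksh93 do; the function name `isbuiltin` matches the question's helper:

```shell
# Succeeds only if every argument names a built-in command.
# The command substitution runs in a subshell, so removing aliases
# and functions there never touches the caller's environment.
isbuiltin() {
    for cmd do
        case $(unalias "$cmd" 2>/dev/null
               unset -f "$cmd" 2>/dev/null
               type "$cmd" 2>/dev/null) in
            *builtin*) ;;
            *) printf '%s is not a built-in\n' "$cmd" >&2
               return 1 ;;
        esac
    done
    return 0
}
```

For example, `isbuiltin cd echo` succeeds, while `isbuiltin ls` fails because `type ls` reports a path rather than a built-in.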
Checking if a command is a built-in in ksh
1,497,561,233,000
I have a BASH script that calls a function, which calls other functions: #!/bin/bash function foo { function bar { # do something } bar } foo How can I return from bar directly to the main function? The case is that bar handles user input and if it receives a negative answer, it must return to the main function, otherwise it has to return to foo. Returning to foo is not a problem with a simple return statement. For the other, I tried this (which actually works): #!/bin/bash function foo { function bar { if [negative] # abstract statement then return 1 else return 0 fi } ! bar && return } foo But since I have functions like foo spread across the whole project (bar is defined in a header file), is there a way that only requires modification to bar? The project is ~2k lines long and consists of several files, it would be much easier that way if there's a solution.
There is no nice way (that I am aware of) to do that, but if you are willing to pay the price... Instead of putting the code in functions you can put it in files you source. If the functions need arguments then you have to prepare them with set: set -- arg1 arg2 arg3 source ... Three files: testscript.sh #! /bin/bash startdir="$PWD" echo mainscript while true; do echo "begin: helper loop" source "${startdir}/func_1" echo "end: helper loop" break done echo mainscript func_1 echo "begin: func_1" source "${startdir}/func_2" echo "end: func_1" func_2 echo "begin: func_2" echo break from func_2 break 100 echo "end: func_2" Result: > ./testscript.sh mainscript begin: helper loop begin: func_1 begin: func_2 break from func_2 mainscript
BASH return to main function
1,497,561,233,000
I've got 'color cat' working nicely, thanks to others (see How can i colorize cat output including unknown filetypes in b&w?). In my .bashrc: cdc() { for fn in "$@"; do source-highlight --out-format=esc -o STDOUT -i $fn 2>/dev/null || /bin/cat $fn; done } alias cat='cdc' # To be next to the cdc definition above. I'd like to be able to use this technique for other functions like head, tail and less. How could I do that for all four functions? Any way to generalize the answer? I have an option for gd doing git diff using gd() { git diff -r --color=always "$@" }
Something like this should do what you want: for cmd in cat head tail; do cmdLoc=$(type $cmd | awk '{print $3}') eval " $cmd() { for fn in \"\$@\"; do source-highlight --failsafe --out-format=esc -o STDOUT -i \"\$fn\" | $cmdLoc - done } " done You can condense it like this: for cmd in cat head tail; do cmdLoc=$(type $cmd |& awk '{print $3}') eval "$cmd() { for fn in \"\$@\"; do source-highlight --failsafe --out-format=esc -o STDOUT -i \"\$fn\" | $cmdLoc - ; done }" done Example With the above in a shell script, called tst_ccmds.bash. #!/bin/bash for cmd in cat head tail; do cmdLoc=$(type $cmd |& awk '{print $3}') eval "$cmd() { for fn in \"\$@\"; do source-highlight --failsafe --out-format=esc -o STDOUT -i \"\$fn\" | $cmdLoc - ; done }" done type cat type head type tail When I run this, I get the functions set as you'd asked for: $ ./tst_ccmds.bash cat () { for fn in "$@"; do source-highlight --failsafe --out-format=esc -o STDOUT -i "$fn" 2> /dev/null | /bin/cat - ; done } head is a function head () { for fn in "$@"; do source-highlight --failsafe --out-format=esc -o STDOUT -i "$fn" 2> /dev/null | /usr/bin/head - ; done } tail is a function tail () { for fn in "$@"; do source-highlight --failsafe --out-format=esc -o STDOUT -i "$fn" 2> /dev/null | /usr/bin/tail -; done } In action When I use these functions in my shell (source ./tst_ccmds.bash) they work as follows: cat head tail plain text What's the trick? The biggest trick, and I would call it more of a hack, is the use of a dash (-) as an argument to cat, head, and tail through a pipe which forces them to output the content that came from source-highlight through STDIN of the pipe. This bit: ...STDOUT -i "$fn" | /usr/bin/head - .... The other trick is using the --failsafe option of source-highlight: --failsafe if no language definition is found for the input, it is simply copied to the output This means that if a language definition is not found, it acts like cat, simply copying its input to the standard output. 
Note about aliases This function will fail if any of head, tail, or cat are aliases because the result of the type call will not point to the executable. If you need to use this function with an alias (for example, if you want to use less which requires the -R flag to colorize) you will have to delete the alias and add the aliased command separately: less(){ for fn in "$@"; do source-highlight --failsafe --out-format=esc -o STDOUT -i "$fn" | /usr/bin/less -R || /usr/bin/less -R "$fn"; done }
How can I colorize head, tail and less, same as I've done with cat?
1,497,561,233,000
I'm trying to figure out how to make a function that can take an array as a parameter and sort it. I think it is done with positional variables, but I'm not sure.
Sort the easy way with sort, tr: arr=($(for i in {0..9}; do echo $((RANDOM%100)); done)) echo ${arr[*]}| tr " " "\n" | sort -n | tr "\n" " " Into a new array: arr2=($(echo ${arr[*]}| tr " " "\n" | sort -n)) Without help from tr/sort, for example bubblesort: #!/bin/bash sort () { for ((i=0; i <= $((${#arr[@]} - 2)); ++i)) do for ((j=((i + 1)); j <= ((${#arr[@]} - 1)); ++j)) do if [[ ${arr[i]} -gt ${arr[j]} ]] then # echo $i $j ${arr[i]} ${arr[j]} tmp=${arr[i]} arr[i]=${arr[j]} arr[j]=$tmp fi done done } # arr=(6 5 68 43 82 60 45 19 78 95) arr=($(for i in {0..9}; do echo $((RANDOM%100)); done)) echo ${arr[@]} sort ${arr[@]} echo ${arr[@]} For 20 numbers, bubblesort might be sufficient.
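Note that the bubblesort above works on the global arr and ignores its arguments. To actually pass the array to the function, as the question asks, hand its elements over as positional parameters and capture the printed result; `sort_nums` is an illustrative name for this sketch:

```shell
#!/bin/bash
# Receive the array elements as "$@", sort a local copy so the
# caller's array stays untouched, then print one element per line.
sort_nums() {
    local arr=("$@") tmp i j
    for ((i = 0; i < ${#arr[@]} - 1; i++)); do
        for ((j = i + 1; j < ${#arr[@]}; j++)); do
            if ((arr[i] > arr[j])); then
                tmp=${arr[i]}; arr[i]=${arr[j]}; arr[j]=$tmp
            fi
        done
    done
    printf '%s\n' "${arr[@]}"
}

nums=(6 5 68 43 82)
sorted=($(sort_nums "${nums[@]}"))
echo "${sorted[*]}"    # → 5 6 43 68 82
```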
How to create a function that can sort an array in bash?
1,497,561,233,000
I'm trying to define a bash function dynamically using following code: delegate_function() { echo "output from delegate"; } eval "parent_function() { echo $(delegate_function); }" The intent is to have parent function dynamically dispatch to the delegate when executed. However due to the way eval works my function is being defined as follows: kshitiz:/tmp$ type parent_function parent_function is a function parent_function () { echo output from delegate } How can I instead have the definition as: parent_function () { echo $(delegate_function); } Is there a way to escape some part of the string from being evaluated by eval?
Escaping $ should be enough to make this work: eval "parent_function() { echo \$(delegate_function); }"
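For example, with the question's two functions:

```shell
delegate_function() { echo "output from delegate"; }

# The escaped \$ survives the eval, so the body keeps the literal
# $(delegate_function) and the delegate is called at run time:
eval "parent_function() { echo \$(delegate_function); }"

parent_function                  # → output from delegate

# Because dispatch now happens at run time, redefining the delegate
# changes what parent_function prints:
delegate_function() { echo "new output"; }
parent_function                  # → new output
```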
Defining bash function dynamically using eval
1,497,561,233,000
I create a script, paste data into it, save it, execute it, and then delete it: vi ~/ms.sh && chmod +x ~/ms.sh && nohup ~/ms.sh && rm ~/ms.sh #!/bin/bash commands... function myFunc { commands... } myFunc () How could I properly run only myFunc in the background, or alternatively in another process, if that's even possible?
You can use a shell function pretty much anywhere you can use a program. Just remember that shell functions don't exist outside the scope in which they were created. #!/bin/bash # f() { sleep 1 echo "f: Hello from f() with args($*)" >&2 sleep 1 echo "f: Goodbye from f()" >&2 } echo "Running f() in the foreground" >&2 f one echo "Running f() in the background" >&2 f two & echo "Just waiting" >&2 wait echo "All done" exit 0
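If the function must run in a genuinely separate process (so the parent script can exit and even delete itself, as in the question), bash can export it into the environment of a child shell; a sketch, with `myFunc` standing in for the question's function:

```shell
#!/bin/bash
myFunc() {
    echo "myFunc got: $1"
}
export -f myFunc             # bash-only: ship the function via the environment

# A brand-new bash process now sees the exported function:
bash -c 'myFunc hello' &     # or: nohup bash -c 'myFunc hello' & to detach
wait
```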
How to run a function in background?
1,497,561,233,000
I'm trying make a function that simplifies grepping a log I have to work with on a regular basis. I'd like to use extended regexp with it pipe and redirect the output, etc. But I'm having trouble doing this using the standard grep pattern file syntax in the function. The way I have it set up now is horrible to look at, but gets the job done: alias mygrep='cat /path/to/logs/my.log | grep' This allows me to run the following without trouble mygrep -i -E "WARN|java" |less Seems like the correct implementation should be something like the following. function mygrep () { args=$* grep "$args" /path/to/logs/my.log } However when I use this with the search and pipe parameters above, grep returns an invalid option error. What am I missing?
The following code has $args quoted: grep "$args" /path/to/logs/my.log This asks to pass the entire value of that variable as a single parameter to grep. Thus, if you call mygrep with parameters -i and -E, grep will in fact receive a single parameter -i -E, which is indeed invalid. The following should do it: function mygrep () { grep "$@" /path/to/logs/my.log } Writing "$@" does the right thing: it's similar to $*, except that it properly expands each parameter to a separate word.
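The difference is easy to see with a helper that prints each argument it receives on its own line (the names `show`, `args_star`, and `args_at` are just illustrative):

```shell
show() { printf '<%s>\n' "$@"; }

args_star() { show "$*"; }    # joins all parameters into ONE word
args_at()   { show "$@"; }    # keeps each parameter a separate word

args_star -i -E    # → <-i -E>        (one word — what broke the grep)
args_at   -i -E    # → <-i> and <-E>  (two words, on separate lines)
```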
Function to simplify grep with an often used log
1,497,561,233,000
When I execute the following script #!/usr/bin/env bash main() { shopt -s expand_aliases alias Hi='echo "Hi from alias"' Hi # Should Execute the alias \Hi # Should Execute the function "Hi" } function Hi() { echo "Hi from function" } main "$@" Very first time it executes the function and then always executes as alias: $ . Sample.sh Hi from function Hi from function Hi from function $ . Sample.sh Hi from alias Hi from function Hi from function Why is it so? This does not happen in the following case #!/usr/bin/env bash function Hi() { echo "Hi from function" } shopt -s expand_aliases alias Hi='echo "Hi from alias"' Hi # Should Execute the alias \Hi # Should Execute the function "Hi" Very first time it executes the function and then always executes as alias: $ . Sample.sh Hi from alias Hi from function Hi from function $ . Sample.sh Hi from alias Hi from function Hi from function
Alias expansion in a function is done when the function is read, not when the function is executed. The alias definition in the function is executed when the function is executed. See Alias and functions and https://www.gnu.org/software/bash/manual/html_node/Aliases.html This means the alias will be defined when function main is executed, but when the function was read for the first time the alias was not yet defined. So the first time, function main will execute function Hi three times. When you source the script for the second time, the alias is already defined from the previous run and can be expanded when the function definition is read. When you now call the function, it runs with the alias expanded. The different behavior occurs only when the script is sourced with . Sample.sh, i.e. when it is run in the same shell several times. When you run it in a separate shell as ./Sample.sh it will always show the behavior of the first run.
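The read-time expansion is easy to demonstrate in a standalone script (the alias name `greet` is just illustrative; shopt -s expand_aliases is needed because non-interactive bash does not expand aliases by default):

```shell
#!/bin/bash
shopt -s expand_aliases          # aliases are off in non-interactive bash
alias greet='echo hello from alias'

f() { greet; }                   # 'greet' is expanded NOW, while the body
                                 # of f is being read and parsed

unalias greet                    # removing the alias afterwards changes nothing
f                                # → hello from alias
```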
Unexpected behavior of Bash script: First executes function, afterwards executes alias
1,497,561,233,000
I'm looking for something similar to Bash's built-in command that will only run something if it is a function. So currently I have an insecure way of doing: # Go through arguments in order for i in $*; do if [ -z `which $i` ]; then # Run function $i && echo -n ' ' fi done This if statement doesn't work properly. Anyway, even if I could check if it's a command and not a function, I can't explicitly run a function, which is bad because if anyone has any programs in $PATH with the same name as my function, they will be run. If I nullify PATH or set it to anything else, anyone could still use $i to run a program explicitly, so that's also not a solution. Any way I can "secure" my shell script?
With bash, you can do something like this: for f do if declare -F -- "$f" >/dev/null 2>&1; then : "$f" is a function, do something with it fi done declare -F -- "$f" >/dev/null 2>&1 will return success code if $f is a bash function, output nothing. You might also want to disable some special builtin commands when bash run in POSIX mode by adding builtin enable -n -- "$f".
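For example, wrapping the check in a small predicate (the names `greet` and `isfunc` are just illustrative):

```shell
#!/bin/bash
greet() { echo hi; }

isfunc() { declare -F -- "$1" >/dev/null 2>&1; }

# Only report (or run) names that are really functions, so a program
# in $PATH with the same name can never be executed by mistake:
for name in greet ls; do
    if isfunc "$name"; then
        echo "$name is a function"
    else
        echo "$name is not a function"
    fi
done
# → greet is a function
# → ls is not a function
```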
Execute only if it is a bash function
1,497,561,233,000
I'm new to bash functions but was just starting to write some bits and pieces to speed up my work flow. I like to test this as I go along so I've found myself editing and sourcing my ~/.profile a lot and find ~/. a bit awkward to type... So the first thing I thought I'd do was the following: sourceProfile(){ source ~/.profile } editProfile(){ vim ~/.profile && sourceProfile } When running editProfile I'm getting an issue on the sourceProfile call. Initially I was getting the error: -bash: ~./profile: No such file or directory Note the lack of typo in my function! However it works if I use an alias instead. alias sourceProfile='source ~/.profile' However after adding that alias and then commenting it out and uncommenting the function I start getting a syntax error instead: -bash: /home/jonathanramsden/.profile: line 45: syntax error near unexpected token `(' -bash: /home/jonathanramsden/.profile: line 45: `sourceProfile(){' the preceding line is: alias sservice='sudo service' I'm pretty sure all I did was comment/uncomment! And based on my googling it seems like that's the syntax for defining functions.
Aliases are like some form of macro expansion, similar to the pre-processing done in C with #define except that in shells, there's no clear and obvious delimitation between the pre-processing stage and the interpretation stage (also, aliases are not expanded in all contexts and there can be several rounds of alias expansion like with nested aliases). When you do: alias sourceProfile='source ~/.profile' sourceProfile() { something } The alias expansion turns it into: source ~/.profile() { something } which is a syntax error. And: alias sourceProfile='source ~/.profile' editProfile(){ vim ~/.profile && sourceProfile } Turns it into: editProfile(){ vim ~/.profile && source ~/.profile } So, if you later redefine sourceProfile as a function, editProfile will not call it, because the definition of editProfile has the expanded value of the original alias. Also, for functions (or any compound command), aliases are only expanded at function definition time (while they're read and parsed), not at run time. So this: editProfile(){ vim ~/.profile && sourceProfile } alias sourceProfile='source ~/.profile' editProfile won't work because the sourceProfile alias was not defined at the time the body of the editProfile function was parsed, and there won't be any alias expansion at the time of running the editProfile function. So, avoid mixing aliases and functions. And be wary of the implications of using aliases as they're not really commands but some form of macro expansion.
Bash functions, something strange going on!
1,497,561,233,000
In my .bash_aliases I have defined a function that I use from the command line like this: search -n .cs -n .cshtml -n .html SomeTextIWantToSearchFor /c/code/website/ /c/stuff/something/whatever/ The function builds a grep command that pipes the result to another grep command (unfortunately convoluted because I am stuck on an old version): search() { local file_names opt OPTARG OPTIND pattern file_names=() while getopts ":n:" opt; do case $opt in n) file_names+=( "$OPTARG" ) ;; esac done shift $((OPTIND-1)) pattern="$1" shift if (( ${#file_names[@]} > 0 )); then file_names="${file_names[@]}" file_names=${file_names// /':\|'}: grep -I -r "$pattern" "$@" | grep "$file_names" else grep -I -r "$pattern" "$@" fi } I have defined another function that calls this function: search-some-set-of-files() { local file_names directories file_names=( "-n page1.cshtml" "-n page2.cshtml" "-n page3.cshtml" ) directories=( "/c/code/website/" "/c/stuff/something/whatever/" ) search "${file_names[@]}" "$@" "${directories[@]}" } From the command line, I call this function like this: search-some-set-of-files SomeTextIWantToSearchFor For some reason, the results include every single file in the target directories. i.e., the results are not filtered by grep according to the file names I specified. If I change the last line of the search-some-set-of-files function to echo the command, I get this: $ search-some-set-of-files SomeTextIWantToSearchFor search -n .cs -n .cshtml -n .html SomeTextIWantToSearchFor /c/code/website/ /c/stuff/something/whatever/ Which is exactly what I want. If I copy that command (or type it verbatim) into the command line, the results are as they should be. 
If I enable debugging mode (set -x), I can see that each argument is being quoted separately by the shell: $ search-some-set-of-files SomeTextIWantToSearchFor + search-some-set-of-files SomeTextIWantToSearchFor + local file_names directories + file_names=("-n page1.cshtml" "-n page2.cshtml" "-n page3.cshtml") + directories=("/c/code/website/" "/c/stuff/something/whatever/") + search '-n page1.cshtml' '-n page2.cshtml' '-n page3.cshtml' SomeTextIWantToSearchFor /c/code/website/ /c/stuff/something/whatever/ + return + etc... So I think the problem lies in how the arguments are being passed to the search function. How do I fix this?
Your problem is the second grep: ... | grep "$file_names" When you call your function, the space between the -n and the file name (-n page1.cshtml) is included in the $file_names array. Then, the substitution: file_names=${file_names// /':\|'}: Will add an extra :\| at the beginning of the string because of the leading space. So, your second grep command is actually: ... | grep ":\|page1.cshtml:\|page2.cshtml:\|page3.cshtml:" As a result, grep matches all lines since all result lines will include filename: and that is matched by :. So, an easy solution would be to remove the spaces: file_names=( "-npage1.cshtml" "-npage2.cshtml" "-npage3.cshtml" ) Everything should then work as expected.
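The stray leading alternation is easy to reproduce in isolation; this sketch just replays the function's substitution on a value with the leading space that getopts leaves in OPTARG:

```shell
# OPTARG for the single word "-n page1.cshtml" is " page1.cshtml",
# so the joined string starts with a space:
file_names=" page1.cshtml page2.cshtml"
file_names=${file_names// /':\|'}:
echo "$file_names"     # → :\|page1.cshtml:\|page2.cshtml:
# The empty first alternative before the ':' matches the "filename:"
# prefix grep prints on every line, so every line passes the filter.
```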
Function that calls another function with list of arguments doesn't work
1,497,561,233,000
I want to know which return values we can use so that they will not be mistaken for something else, e.g. SIGINT's exit status. ex.: $sleep 10 $#hit ctrl+c $echo $? 130 so I know I must not use anything like return 130 or exit 130 since this would be misleading: $function FUNC(){ return 130; };FUNC;echo $? 130
You can use any number from 0 to 255, but stay clear of the values with a conventional reserved meaning: 0 (success), 1 (general error), 2 (misuse of shell builtins), 126 (command found but not executable), 127 (command not found), 128+N (terminated by signal N — which is exactly where 130 = 128+SIGINT comes from), and 255 (exit status out of range). That leaves roughly 3–125 free for your own error codes.
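For example, a sketch that invents its own small error codes (the function `check_arg` and its codes 2 and 3 are purely illustrative):

```shell
# Small codes (roughly 3-125) are safe to invent for your own errors;
# 126, 127, and 128+N already carry a meaning to the shell.
check_arg() {
    case $1 in
        '')       return 2 ;;   # our code: missing argument
        *[!0-9]*) return 3 ;;   # our code: not a number
        *)        return 0 ;;
    esac
}

check_arg 42;  echo $?    # → 0
check_arg;     echo $?    # → 2
check_arg foo; echo $?    # → 3
```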
What return/exit values can I use in bash functions/scripts?
1,497,561,233,000
How can I get this script file's functions to load without having to source it every time? I created a file foo with script functions I'd like to run. It's in /usr/bin, which is in the PATH. File foo: #!/bin/bash echo "Running foo" function x { echo "x" } However, when I type a function name like x in the terminal: x: command not found When I type foo: Running foo shows up (so the file is in PATH and is executable) After typing source foo, I can type x to run function x A pretty basic question, I know. I'm just trying to abstract my scripts so they're more manageable (compared to dumping everything in .profile or .bashrc).
mikeserv's answer is good for the details of what goes on "behind the scenes", but I feel another answer here is warranted as it doesn't contain a simple usable answer to the exact title question: How can I get this script file's functions to load without having to source it every time? The answer is: Source it from your .bashrc or your .bash_profile so it is available in each shell you run. For example, I have the following in my .bash_profile: if [ -d ~/.bash_functions ]; then for file in ~/.bash_functions/*.sh; do . "$file" done fi
How can I get this script file's functions to load without having to source it every time? "command not found" (Bash/scripting basics)
1,497,561,233,000
I have written a simple function in shell that returns 0 or 1 based on some condition. Let me call that function foo: foo(){ ... ... } Now I am trying to call foo in an if condition as follows: if ( foo $1 ) ... .. It works fine. But when I use the following approach, I get an error: if [ foo $1 ] ... ... Why does it throw the error "unary operator expected"?
When you use: if ( foo $1 ) You are simply executing foo $1 in a subshell and if is acting on its exit status. When you use: if [ foo $1 ] You are attempting to use the shell test and foo is not a valid test operator. You can find the valid test operators here. It's not necessarily relevant for your issue but you should also always quote variables, especially inside the shell test brackets. The shell test will succeed simply with the presence of something. So even when using a valid test operator you could get unwanted results: $ unset var $ [ -n $var ] && echo yes yes $ [ -n "$var" ] && echo yes $ [ -n "" ] && echo yes $ [ -n ] && echo yes yes $ [ foo ] && echo yes yes $ [ foo bar ] && echo yes -bash: [: foo: unary operator expected The presence of a single string inside the shell test will evaluate to true where the presence of two or more strings expects that one of them be a valid test operator.
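A short contrast of the two forms, with an illustrative function (`is_even`) that communicates through its exit status:

```shell
is_even() { [ $(( $1 % 2 )) -eq 0 ]; }

# Right: `if` acts directly on the function's exit status
if is_even 4; then echo "4 is even"; fi        # → 4 is even

# Also works, at the cost of an extra subshell:
if ( is_even 4 ); then echo "still even"; fi   # → still even

# Wrong: [ ... ] is the test command; it compares its string
# operands and never calls the function:
#   if [ is_even 4 ]; then ...   # → "unary operator expected"
```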
Calling function in Shell script
1,497,561,233,000
If you call the command help declare, you will see the following information: -t NAME : to make NAMEs have the `trace' attribute Is there any example that demonstrates the use of this option? I thought that this plays the same role as the command set -o functrace except that it only applies to the functions given as arguments instead of all functions. The motivation of this question is that I want a function foo to inherit a trap. So I tried declare -t foo but it did not work. I can certainly use set -o functrace to make all functions inherit a trap, but there are circumstances when I want only one or two functions to inherit a trap. Here is the script: function foo { var=1 var=2 var=3 } declare -t foo var=0 trap 'echo var is $var' DEBUG foo trap - DEBUG # turn off the DEBUG trap Here is the output of the script: var is 0 var is 3 I was expecting to get something like: var is 0 var is 1 var is 2 var is 3
declare -t foo sets the trace attribute on the variable foo (which has no special effect anyway). You need to use -f to set it on the function: declare -ft foo With your script modified to use -f, I get the following output (explanation in comments): var is 0 # foo called var is 0 # before the first command in function is run var is 0 # var=1 var is 1 # var=2 var is 2 # var=3 var is 3 # trap ...
What is the use of declare with option -t
1,497,561,233,000
Here is simplified code that prints the name of a directory if it contains a file with the same name as the directory and a .md extension. FIND(){ find . -type d -exec sh -c ' for d do [ -f "${d}/${d##*/}.md" ] && printf "%s\n" "$d" done' find-sh {} + } FIND To generalize I want to send the search term ${d}/${d##*/}.md as an argument to the FIND function, but unfortunately this does not output anything: FIND(){ local SearchTerm="${1}" find . -type d -exec sh -c ' for d do [ -f "${SearchTerm}" ] && printf "%s\n" "$d" done' find-sh {} + } FIND '${d}/${d##*/}.md' I am sure there is some issue with the quotation of the SearchTerm. Any hints? I tried: FIND '\${d}/\${d##*/}.md' but it produces no output
The in-line script that you call is single-quoted (as it should be). This means that the sh -c shell will get a script where "${SearchTerm}" is unexpanded. Since that shell does not have a SearchTerm variable, its value will be empty. Since you tagged your question with bash, you can pass the name of an exported function: # Our find function. # Takes the name of a test function that will be called # with the pathname of a directory. myfind () { local thetest="$1" # Run find, passing the name of the function into the in-line script. find . -type d -exec bash -c ' testfunc=${1:-:}; shift for dirpath do "$testfunc" "$dirpath" && printf "%s\n" "$dirpath" done' bash "$thetest" {} + } # Our test function. test_md_file () { [ -f "$1/${1##*/}.md" ] } export -f test_md_file # Run the thing. myfind test_md_file The testfunc=${1:-:} in the code will assign $1 to testfunc if it's available and not empty, otherwise, it will use : as the test (a no-op utility that returns true).
Find exec sh: Shell variable not getting passed to subshell
1,497,561,233,000
I want to execute a Bash function at a scheduled time. I think the right tool for this is the at command. However, it doesn't seem to be working. function stupid () { date } export -f stupid cat << EOM | at now + 1 minute stupid EOM After waiting the minute, I check my mail and this is the error I see: sh: line 41: syntax error near unexpected token `=\(\)\ {\ \ date" "}' sh: line 41: `"}; export BASH_FUNC_stupid()' I don't understand what's going wrong here, especially since I know the function works. $ stupid Fri May 29 21:05:38 UTC 2015 Looking at the error, I think the incorrect shell is being used to execute the function (sh as opposed to bash), but if I check $SHELL I see it points to /bin/bash, and the man page for at says: $ echo $SHELL /bin/bash $ man at ... SHELL The value of the SHELL environment variable at the time of at invocation will determine which shell is used to execute the at job commands. So Bash should be the shell running my function. What going on here? Is there a way to get my Bash function to run with at?
Bash functions are exported via the environment. The at command makes the environment, the umask and the current directory of the calling process available to the script by generating shell code that reproduces the environment. The script executed by your at job is something like this: #!/bin/bash umask 022 cd /home/nick PATH=/usr/local/bin:/usr/bin:/bin; export PATH HOME=/home/nick; export HOME … stupid Under older versions of bash, functions were exported as a variable with the name of the function and a value starting with () and consisting of code to define the function, e.g. stupid="() { date }"; export stupid This made many scenarios vulnerable to a security hole, the Shellshock bug (found by Stéphane Chazelas), which allowed anyone able to inject the content of an environment variable under any name to execute arbitrary code in a bash script. Versions of bash with a Shellshock fix use a different scheme: they store the function definition in a variable whose name contains characters that are not found in environment variables and that shells do not parse as assignments. BASH_FUNC_stupid%%="() { date }"; export stupid Due to the %, this is not valid sh syntax, not even in bash, so the at job fails, whether it even attempts to use the function or not. The Debian version of at, which is used in many Linux distributions, was changed in version 3.16 to export only variables that have valid names in shell scripts. Thus newer versions of at don't pass post-Shellshock bash exported functions through, whereas older ones error out. Even with pre-Shellshock versions of bash, the exported function only works in bash scripts launched from the at job, not in the at job itself. In the at job itself, even if it's executed by bash, stupid is just another environment variable; bash only imports functions when it starts.
To export functions to an at job regardless of the bash or at version, put them in a file and source that file from your job, or include them directly in your job. To print out all defined functions in a format that can be read back, use declare -f. { declare -f; cat << EOM; } | at now + 1 minute stupid EOM
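The declare -f mechanism can be checked without waiting on at: feed the dumped definitions to a fresh bash process, which stands in here for the shell that would run the generated job script:

```shell
#!/bin/bash
stupid() { date; }

# declare -f prints every defined function in re-readable form;
# a new bash re-creates the function from that text and runs it:
defs=$(declare -f)
bash -c "$defs; stupid"      # child shell prints the date via stupid()
```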
Can you execute a Bash function with the `at` command?