date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,497,561,233,000 |
I found a Bash script snippet earlier with which to echo a string to stderr:
echoerr() { echo "$@" 1>&2; }
echoerr hello world
This remained in my clipboard, and when I wanted to edit a file (with VIM) I accidentally pasted this snippet again instead of the file name:
vim echoerr() { echo "$@" 1>&2; }
echoerr hello world
It seems to have re-assigned echoerr to vim:
$ where vim
vim () {
echo "$@" 1>&2;
}
/usr/bin/vim
Also, attempting to open a file with VIM now just echos the file name back:
vim path/to/some-file
Prints:
path/to/some-file
What happened? (I'm running zsh inside tmux)
|
Because zsh allows you to define a function with multiple names. From man zshmisc:
function word ... [ () ] [ term ] { list }
word ... () [ term ] { list }
word ... () [ term ] command
where term is one or more newline or ;. Define a function which
is referenced by any one of word. Normally, only one word is
provided; multiple words are usually only useful for setting
traps. The body of the function is the list between the { and
}. See the section `Functions'.
| Command re-assigned |
1,497,561,233,000 |
I love using the following pattern for searching in files:
grep --color=auto -iRnHr --include={*.js,*.html,} --exclude-dir={release,dev,} "span" .
I'd like, however, to have this one wrapped into a bash command like this:
findinfiles {*.js,*.html,} {release,dev,} "span" // syntax is just a guessing
I cannot solve the problem of passing these kinds of braces into a bash function and using them with $1, $2 and so on. When I use the following:
function findinfiles() {
echo $1, $2, $3
}
Then:
me@pc ~> findinfiles {*.js,*.html,} {release,dev,} "span"
*.js, *.html, release
Of course, passing arguments to grep won't work this way. It seems that arguments are indexed but not properly grouped.
Can anyone teach me how to deal with these kinds of arguments?
|
When you run findinfiles {*.js,*.html,} {release,dev,} "span", here is what happens:
Bash parses quotes and splits the command into words: findinfiles, {*.js,*.html,}, {release,dev,}, span.
Bash expands the braces, resulting in the list of words findinfiles, *.js, *.html, release, dev, span.
Bash expands the wildcard patterns *.js and *.html to the list of matching file names. If no file name matches either pattern, it is left alone.
Bash looks up the name findinfiles, finds out that it's a function, and executes the function with the supplied parameters.
You can prevent the braces and wildcards from being expanded at the function call, but then the braces will appear literally in the arguments, which isn't particularly useful.
I suggest changing the format of your function call. Instead of using braces, use only commas, and manually split the arguments at commas inside the function.
findinfiles () {
local patterns="$1" excludes="$2" pattern="$3"
shift 3
typeset -a cmd dirs
if [[ $# -eq 0 ]]; then dirs=(.); else dirs=("$@"); fi
cmd=(grep --color=auto -iRnHr)
local IFS=,
for x in $patterns; do
cmd+=("--include=$x")
done
for x in $excludes; do
cmd+=("--exclude-dir=$x")
done
"${cmd[@]}" "${dirs[@]}"
}
Explanations:
Store the first three parameters in local variables. Any extra parameters are directories to search in.
Set dirs to the list of extra parameters. If there are none, use . (current directory) instead.
Set IFS to a comma. When a script contains an unquoted variable expansion like $patterns and $excludes above, the shell performs the following:
Replace the variable by its value.
Split the value into separate words wherever it contains a character that is present in IFS. By default, IFS contains whitespace characters (space, tab and newline).
Treat each word as a wildcard pattern and expand it to the list of matching files. If there is no matching file, leave the pattern alone.
(To avoid these expansion steps, use double quotes around the variable substitution, as in patterns="$1" and so on in the script above.)
The function builds the command line to execute incrementally in the array variable cmd. At the end, the command is executed.
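The comma-splitting step described above can be seen in isolation. A minimal sketch (split_demo is an illustrative name; set -f prevents the *.js pattern from being expanded as a glob during the demonstration):

```shell
#!/bin/bash
# split an unquoted expansion at commas, without glob expansion
split_demo() {
  local IFS=,   # IFS change is local to the function
  set -f        # disable filename generation so *.js stays literal
  local x
  for x in $1; do
    printf '<%s>\n' "$x"
  done
  set +f
}
split_demo '*.js,*.html,'   # a trailing comma produces no empty field
```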
Alternatively, you could build on the following settings:
shopt -s extglob globstar
fif_excludes='release dev'
alias fif='grep --color=auto -inH $fif_excludes'
Run commands like
fif span **/*.@(js|html)
Explanations:
shopt -s extglob activates the @(js|html) form of wildcard pattern, which matches either js or html. (This option activates other pattern forms, see the manual for details.)
shopt -s globstar activates the pattern **/ which matches subdirectories at any depth (i.e. it performs a recursive traversal).
To change the exclude list (which I expect doesn't happen often), modify the fif_excludes variable.
| Bash: passing braces as arguments to bash function |
1,497,561,233,000 |
In a bash script I have a log() function that is used in several places, as well as a logs() function that diverts lots of lines to log(). When I run parts of the script with set -x, obviously all commands within logs() and log() are traced too.
I would like to define logs() and log() such that at least their content, at best even their call is suppressed from set -x output.
|
A quick and dirty hack that should work in all shells is to (temporarily) make your log an external script instead of a function.
In bash you could also use a trap '...' DEBUG and shopt -s extdebug combination, which is much more versatile than set -x. Example:
debug() {
local f=${FUNCNAME[1]} d=${#FUNCNAME[@]} c=$BASH_COMMAND
if [ "$NOTRACE" ]; then
case $debug_skip in ''|$d) debug_skip=;; *) return;; esac
eval "case \$c in $NOTRACE) debug_skip=\$d; return; esac"
fi
# before the 1st command in a function XXX
case $c in $f|"$f "*) return;; esac
printf >&2 "%*s(%s) %s\n" $((d * 2 - 4)) "" "$f" "$c"
}
(Of course, you can dispense with the strange format + indenting and make it completely set-x-like; you can also redirect it to another file instead of mixed with the stderr from commands.)
Then:
NOTRACE='"log "*'
shopt -s extdebug
trap debug DEBUG
log(){ log1 "$@"; }; log1(){ log2 "$@"; }
log2(){ log3 "$@"; }; log3(){ echo "$@"; }
foo(){ foo1 "$@"; }; foo1(){ foo2 "$@"; }
foo2(){ foo3 "$@"; }; foo3(){ echo "$@"; }
bar(){ case $# in 0) ;; *) echo "$1"; shift; bar "$@";; esac; }
foo 1 2 3
log 7 8 9
bar 1 2 3 4
will result in:
(main) foo 1 2 3
(foo) foo1 "$@"
(foo1) foo2 "$@"
(foo2) foo3 "$@"
(foo3) echo "$@"
1 2 3
7 8 9
(main) bar 1 2 3 4
(bar) case $# in
(bar) echo "$1"
1
(bar) shift
(bar) case $# in
(bar) echo "$1"
2
(bar) shift
(bar) case $# in
(bar) echo "$1"
3
(bar) shift
(bar) case $# in
(bar) echo "$1"
4
(bar) shift
(bar) case $# in
| Define bash function which does not show up in xtrace (set -x) |
1,497,561,233,000 |
I want to write my Bash functions each in a separate file, for easier version control, and source the whole lot of them in my .bashrc.
Is there a more robust way than e.g.:
. ~/.bash_functions/*.sh
|
It's simply a matter of surrounding it all with appropriate error checks:
fn_dir=~/.bash_functions
if [ -d "$fn_dir" ]; then
for file in "$fn_dir"/*; do
[ -r "$file" ] && . "$file"
done
fi
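Exercised end to end with a throwaway directory standing in for ~/.bash_functions (the paths and the greet function are illustrative):

```shell
#!/bin/sh
fn_dir=$(mktemp -d)                       # stand-in for ~/.bash_functions
printf '%s\n' 'greet() { printf "hello %s\n" "$1"; }' > "$fn_dir/greet.sh"

if [ -d "$fn_dir" ]; then
  for file in "$fn_dir"/*; do
    [ -r "$file" ] && . "$file"
  done
fi

greet world        # the sourced function is now available
rm -rf "$fn_dir"
```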
| How can I source several files into my .bashrc? |
1,497,561,233,000 |
See the code below:
a()(alias x=echo\ hi;type x;alias;x);a
I have an alias inside a function. I do not want to change the external environment (that is why I am using () instead of {}). Even though the output says the alias was successfully set, it does not work. Check the output:
x is aliased to `echo hi'
...
alias x='echo hi'
x: command not found
I heard that shopt -s expand_aliases would solve this, but not only did it have no effect, I also cannot rely on bash because I am working with dd-wrt's busybox ash.
Does anyone know about this issue?
|
If you're not averse to using eval:
$ busybox ash -c 'a()(alias x=echo\ hi;type x;alias;eval x);a'
x is an alias for echo hi
x='echo hi'
hi
The likely reason this works: aliases are expanded when a line is parsed, and the whole function body is parsed (with no alias x yet defined) before any of it runs; eval forces x to be re-parsed at execution time, after the alias exists.
| Why alias inside function does not work? [duplicate] |
1,497,561,233,000 |
I received a great function for highlighting files in Apple's finder using the command-line. It's basically a wrapper for osascript.
I got it from Mac OS X: How to change the color label of files from the Terminal and it looks like this,
# Set Finder label color
label(){
if [ $# -lt 2 ]; then
echo "USAGE: label [0-7] file1 [file2] ..."
echo "Sets the Finder label (color) for files"
echo "Default colors:"
echo " 0 No color"
echo " 1 Orange"
echo " 2 Red"
echo " 3 Yellow"
echo " 4 Blue"
echo " 5 Purple"
echo " 6 Green"
echo " 7 Gray"
else
osascript - "$@" << EOF
on run argv
set labelIndex to (item 1 of argv as number)
repeat with i from 2 to (count of argv)
tell application "Finder"
set theFile to POSIX file (item i of argv) as alias
set label index of theFile to labelIndex
end tell
end repeat
end run
EOF
fi
}
I put it in via vim .bash_profile, ran source .bash_profile and was able to run the function with label 2 /Users/brett/Desktop/test.txt. Perfect.
But what if I'm updating all our old PHP mysql_query( statements to PDO and I want to visually highlight the files I need to edit?
I would normally run,
find /Users/brett/Development/repos/my-repo/ -name "*.php" -print0 | xargs -0 grep -Iil 'mysql_query(' | xargs -I '{}' -n 1 label 2 {}
But it returns,
xargs: label: No such file or directory
I read that I should try running export -f label, but that doesn't seem to help either.
Does anyone know how I can pipe paths/files from grep through xargs to a .bash_profile function?
|
To call label with xargs you could try something like this:
export -f label
find /Users/brett/Development/repos/my-repo/ -name "*.php" -print0 |
xargs -0 grep -Iil 'mysql_query(' |
xargs -I {} -n 1 bash -c 'label 2 {}'
Note how label 2 {} in the latter xargs call has been changed to bash -c 'label 2 {}'. As xargs cannot call the function directly, we export the label function to child bash processes of the parent shell, then spawn a child bash and run the function there.
Notes:
~/.bash_profile is typically not sourced by non-login shells, so export -f label is needed to export the label function to the shell invoked by xargs.
The -c option tells bash to read commands to be executed from the option argument string.
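One caveat with substituting {} into the script text: a file name containing a single quote or other shell syntax gets re-parsed by the child bash. A slightly more robust variant (sketched here with a stand-in label function, not the real osascript wrapper) passes the name as a positional parameter instead:

```shell
#!/bin/bash
label() { echo "label $1 -> $2"; }   # stand-in for the real label function
export -f label

d=$(mktemp -d); touch "$d/it's.php"
printf '%s\0' "$d"/*.php |
  xargs -0 -I {} bash -c 'label 2 "$1"' bash {}   # name arrives as $1, unparsed
rm -rf "$d"
```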
| Can you pipe to a .bash_profile function? |
1,497,561,233,000 |
Bash can print a function defintion:
$ bash -c 'y(){ echo z; }; export -f y; export -f'
y ()
{
echo z
}
declare -fx y
However this fails under POSIX Bash, /bin/sh and /bin/dash:
$ bash --posix -c 'y(){ echo z; }; export -f y; export -f'
export -f y
Can a function definition be printed under a POSIX shell?
|
You cannot do it portably. The POSIX spec does not specify any way to dump a function definition, nor how functions are implemented.
In bash, you don't have to export the function to the environment, you can use:
declare -f funcname
(Works in zsh too.)
This works even if you run bash in POSIX mode:
$ bash --posix -c 'y(){ echo z; }; declare -f y'
y ()
{
echo z
}
In ksh:
typeset -f funcname
(Works in bash, zsh, mksh, pdksh, lksh)
In yash:
typeset -fp funcname
This won't work if yash enters POSIXly-correct mode:
$ yash -o posixly-correct -c 'y() { echo z; }; typeset -fp y'
yash: no such command `typeset'
With zsh:
print -rl -- $functions[funcname]
whence -f funcname
type -f funcname
which funcname
Note that whence -f, which, and type -f will all report an alias first if one exists with the same name. You can use -a to make zsh report all definitions.
POSIXly, you'd have to record your function definition yourself which you could do with:
myfunction_code='myfunction() { echo Hello World; }'
eval "$myfunction_code"
or a helper function
defn() {
code=$(cat)
eval "${1}_code=\$code; $1() { $code; }"
}
defn myfunction << 'EOF'
echo Hello World
EOF
printf '%s\n' "$myfunction_code"
| POSIX print function definition |
1,497,561,233,000 |
I was trying to write a program which creates a directory from a complete path provided at the prompt; if the directory already exists, it should return an error (for "directory already exists") and ask again for the name in a recursive function.
Here it is what I tried: let us say a file, test1 in it:
#!/bin/bash
echo "enter the directory name"
read ab
check(){
if (( echo `mkdir $ab` 2>/dev/null )); then
echo "directory created "
echo `ls -ld $ab`
exit
else
echo "try again "
echo "enter new value for directory:"
read ab
check
fi
}
check
The problem here is if the directory exists then the program works fine but if it does not exist then it creates it but then goes to the else part of the program.
|
The echo always succeeds. Do without it and the subshell:
#!/bin/bash
echo "enter the directory name"
read ab
check(){
if mkdir "$ab" 2>/dev/null; then
echo "directory created "
ls -ld "$ab"
exit
else
echo "try again "
echo "enter new value for directory: "
read ab
check
fi
}
check
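The reason this shape works: if tests a command's exit status directly, and mkdir exits non-zero when the directory already exists. A minimal demonstration with a throwaway path (the mktemp path is illustrative):

```shell
#!/bin/sh
d=$(mktemp -d)/sub    # a path that does not exist yet
if mkdir "$d" 2>/dev/null; then echo "created"; else echo "already exists"; fi
if mkdir "$d" 2>/dev/null; then echo "created"; else echo "already exists"; fi
```

The first line prints created; the second prints already exists.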
| Executing a command within `if` statement and on success perform further steps |
1,497,561,233,000 |
This issue is related to Using bash shell function inside AWK
I have this code
#!/bin/bash
function emotion() {
#here is function code end with return value...
echo $1
}
export -f emotion
#I've put all animals in array
animalList=($(awk '{print $1}' animal.csv))
#loop array and grep all the lines form the file
for j in ${animalList[@]}
do
: #here I'am running a bash script calling emotion function
grep $j animal.csv | awk '{for(i=2;i<=NF;i++){system("bash -c '\''emotion "$i"'\''")}}'
done
and I have this file:
cat smile happy laugh
dog angry sad
mouse happy
wolf sad cry
fox sleep quiet
The output should like this:
smile
happy
laugh
angry
sad
happy
sad
cry
sleep
quiet
The issue is that it tells me bash: emotion: command not found
According to akarilimano's comment here,
this is not working on my Ubuntu 16.04. This is strange, because it used to work on Ubuntu 14.04.
So how do I do it in newer versions?
|
That's probably not the best way to approach the problem.
From awk, all you can do is build a command line that system() passes to sh. So, you need the arguments to be formatted in the sh syntax.
So you'd need:
emotion() {
echo "$1"
}
export -f emotion
awk -v q="'" '
function sh_quote(s) {
gsub(q, q "\\" q q, s)
return q s q
}
{
for (i = 2; i <= NF; i++)
status = system("bash -c '\''emotion \"$@\"'\'' bash " sh_quote($1))
}'
Here quoting awk's $1 so it can be safely embedded in the sh command line that ends up running bash with the content of $1 as last argument, which then passes it to emotion.
That assumes your sh and your awk don't strip the special environment variables that bash uses to export functions (like pdksh and derivatives (such as mksh) do for instance, or dash since 0.5.8 which explains your 14.04 vs 16.04 issue), and that your distribution has not disabled exported functions in bash.
If it does, you could do it like for ksh/zsh, and pass the definition of the function some other way, like:
CODE=$(typeset -f emotion) awk -v q="'" '
function sh_quote(s) {
gsub(q, q "\\" q q, s)
return q s q
}
{
for (i = 2; i <= NF; i++)
status = system("bash -c '\''eval \"$CODE\"; emotion \"$@\"'\'' bash " \
sh_quote($1))
}'
In both cases, that means running one sh and one bash for it. Maybe you can pass the $i to bash some other way than via a system() that executes two instances of a shell each time. Like:
awk '{for (i=2; i<=NF; i++) printf "%s\0", $i}' |
while IFS= read -rd '' i; do
emotion "$i"
done
Or do the word splitting in bash directly:
unset IFS
while read -ra fields; do
for i in "${fields[@]:1}"; do
emotion "$i"
done
done
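The last approach, made self-contained with the sample data inlined via a here-document (emotion here just echoes its argument, as in the question):

```shell
#!/bin/bash
emotion() { echo "$1"; }

unset IFS
while read -ra fields; do
  for i in "${fields[@]:1}"; do   # skip field 1, the animal name
    emotion "$i"
  done
done <<'EOF'
cat smile happy laugh
dog angry sad
mouse happy
EOF
```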
| How can I call a bash function in bash script inside awk? |
1,497,561,233,000 |
Over the years I've collected sort of a library of bash functions the shell and scripts refer to.
To decrease the import boilerplate, I'm exploring options how to reasonably include the library in scripts.
My solution has two parts - first importing configuration (env vars), followed by sourcing the library of functions.
~/bash_envs: (the configuration)
export SOME_VAR=VALUE
export SHELL_LIB=/path/to/library.sh
# convenience funtion, so scripts who source env_vars file (ie this file) can
# simply call it, instead of including the same block in each file themselves.
function _load_common() {
# import common library/functions:
source $SHELL_LIB
}
export -f _load_common
# marker var used to detect whether env vars (ie this file) have been loaded:
export __ENV_VARS_LOADED_MARKER_VAR=loaded
Now following code is ran from scripts:
if [[ "$__ENV_VARS_LOADED_MARKER_VAR" != loaded ]]; then # in our case $__ENV_VARS_LOADED_MARKER_VAR=loaded, ie this block is not executed
USER_ENVS=/home/laur/bash_envs
if [[ -r "$USER_ENVS" ]]; then
source "$USER_ENVS"
else
echo -e "\n ERROR: env vars file [$USER_ENVS] not found! Abort."
exit 1
fi
fi
_load_common
This produces a _load_common: command not found error. Why is that?
Note that __ENV_VARS_LOADED_MARKER_VAR=loaded is nicely exported and visible, which is why there's no reason to source $USER_ENVS; yet the _load_common() function is not found, although it was exported from the same place as __ENV_VARS_LOADED_MARKER_VAR.
|
The problem
Observe:
$ bash -c 'foobar () { :; }; export -f foobar; dash -c env' |grep foobar
$ bash -c 'foobar () { :; }; export -f foobar; bash -c env' |grep foobar
BASH_FUNC_foobar%%=() { :
$ bash -c 'foobar () { :; }; export -f foobar; ksh93 -c env' |grep foobar
BASH_FUNC_foobar%%=() { :
$ bash -c 'foobar () { :; }; export -f foobar; mksh -c env' |grep foobar
$ bash -c 'foobar () { :; }; export -f foobar; zsh -c env' |grep foobar
BASH_FUNC_foobar%%=() { :
$ bash -c 'foobar () { :; }; export -f foobar; busybox sh -c env' |grep foobar
BASH_FUNC_foobar%%=() { :
Environment variables are a feature of the Unix operating system. Support for them goes all the way down to the kernel: when a program calls another program (with the execve system call), one of the parameters of the call is the new program's environment.
The built-in command export in sh-style shells (dash, bash, ksh, …) causes a shell variable to be used as an environment variable which is transmitted to processes that the shell calls. Conversely, when a shell is called, all environment variables become shell variables in that shell instance.
Exported functions are a bash feature. Bash “exports” a function by creating an environment variable whose name is derived from the name of the function and whose value is the body of the function (plus a header and a trailer). You can see above how the name of the environment variable is constructed: BASH_FUNC_ then the name of the function then %%.
This name is not a valid name for a shell variable. Recall that shells import environment variables as shell variables when they start. Different shells have different behaviors when the name of an environment variable is not a valid shell variable. Some pass the variable through to their subprocesses (above: bash, ksh93, zsh, BusyBox), while others only pass the exported shell variables to their subprocesses (above: dash, mksh), which effectively removes the environment variables whose name is not a valid shell variable (non-empty sequence of ASCII letters, digits and _).
Originally, bash used an environment variable with the same name as the function, which would mostly have avoided this problem. (Only mostly: function names can contain characters that are not allowed in shell variable names, such as -.) But this had other downsides, such as not allowing to export a shell variable and a function with the same name (whichever one was exported last would overwrite the other in the environment). Critically, bash changed when it was discovered that the original implementation caused a major security hole. (See also What does env x='() { :;}; command' bash do and why is it insecure?, When was the shellshock (CVE-2014-6271/7169) bug introduced, and what is the patch that fully fixes it?, How was the Shellshock Bash vulnerability found?) A downside of this change is that exported functions no longer go through some programs, including dash and mksh.
Your system probably has dash as /bin/sh. It's a very popular choice. /bin/sh is used a lot, so the chances are high that there was a call to sh somewhere in the call path from the original instance of bash that ran export -f _load_common to the instance of bash that tried to use the function. __ENV_VARS_LOADED_MARKER_VAR passed through because it has a valid variable name, but BASH_FUNC__load_common%% didn't pass through.
The solution
Don't use exported functions. They have little use in the first place, and for you they are completely useless. The only advantage of exporting functions is to call bash without requiring that instance of bash to read the definition of the function from somewhere, for example to define a function in a script and to pass it to a bash instance invoked from find -exec or xargs or parallel. But in your case, you already have code to read the function definition. So just read the function definition unconditionally. Remove export -f _load_common, remove __ENV_VARS_LOADED_MARKER_VAR, and just call source "$USER_ENVS".
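A sketch of the simplified flow, with a temporary file standing in for ~/bash_envs (the file contents are illustrative):

```shell
#!/bin/sh
USER_ENVS=$(mktemp)              # stand-in for /home/laur/bash_envs
cat > "$USER_ENVS" <<'EOF'
_load_common() { echo "library loaded"; }
EOF

. "$USER_ENVS"    # read the definitions unconditionally; no marker, no export -f
_load_common
rm -f "$USER_ENVS"
```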
| bash: exported function not visible, but variables are |
1,497,561,233,000 |
Possible Duplicate:
Executing user defined function in a find -exec call
Suppose I have the following bash code:
#!/bin/bash
function print_echo (){
echo "This is print_echo Function" $1;
}
find ./ -iname "*" -exec print_echo {} \;
For each -exec command i get the following error:
find: `print_echo': No such file or directory
NOTE: I tested this earlier for a critical program and solved my problem with another algorithm, but the question remains: why doesn't the find command accept a bash built-in or function name as the argument of -exec?
|
find does not accept your function as a command because its -exec predicate literally calls the C library's exec function to start a program. Your function is available only to the bash interpreter itself. Even if you define it in your .bashrc file, it will be 'visible' only to bash.
So, if you really need to execute some custom sequence of commands with find's -exec, put it into a separate script file or use another workaround.
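One such workaround, sketched against a temporary directory: define the function inside an inline bash -c child, which find can start as a real program:

```shell
#!/bin/bash
d=$(mktemp -d); touch "$d/a" "$d/b"
find "$d" -type f -exec bash -c '
  print_echo() { echo "This is print_echo Function" "$1"; }
  for f in "$@"; do print_echo "$f"; done
' bash {} + | sort
rm -rf "$d"
```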
| Find functions, commands, and builtins [duplicate] |
1,497,561,233,000 |
In my .bash_profile file, I'd like to setup a single command alias that is two commands that execute one after another. The first command takes an argument from the command line and the second is actually script (located in ~/bin with execute permissions).
My profile file has this:
alias pd='function pd2() { pushd "$@"; set_title_tab; }'
I attempt to execute it in the shell like so:
$ pd ~/Documents/
and I get an error:
-bash: syntax error near unexpected token `~/Documents/'
set_title_tab is a shell script written by William Scott
Is there a better way to accomplish this?
|
Aliases do not support input parameters, and there's no need to wrap functions in aliases. Simply use a function:
pd() {
pushd "$@"
set_title_tab
}
pd ~/Documents
| Combine two commands in .bash_profile |
1,497,561,233,000 |
My .bashrc was getting a little long, so I decided to break it up into smaller files according to topic and then call these files from within .bashrc as so
#my long .bashrc file
bash .topic1rc
bash .topic2rc
but, within one of these sub-scripts, I had created a bash function. Unfortunately it is no longer available to me as it was before I broke everything into chunks. How do I break my .bashrc file into chunks, but retain access to the variables and functions that I create?
|
If I understand your question correctly, you need to source or . your files. For example, within your .bashrc and taking care of order (only you know):
source .topic1rc
source .topic2rc
source can be shortened to . on the command line for ease of use - it is exactly the same command. The effect of source is to effectively inline the script that is called.
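The difference can be demonstrated with a throwaway file (the paths and the topic_fn name are illustrative):

```shell
#!/bin/sh
tmp=$(mktemp)
echo 'topic_fn() { echo "I exist"; }' > "$tmp"

bash "$tmp"                 # runs in a child process: definitions die with it
type topic_fn >/dev/null 2>&1 || echo "not defined after: bash file"

. "$tmp"                    # sourced: runs in the current shell
topic_fn
rm -f "$tmp"
```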
| How to make functions created in a bash script persist like those in .bashrc? |
1,497,561,233,000 |
Forgive me; I'm pretty new to bash files and the like. Here is a copy of my .bashrc:
alias k='kate 2>/dev/null 1>&2 & disown'
function kk {kate 2>/dev/null 1>&2 & disown}
The alias in the first line works fine, but the second line throws:
bash: /home/mozershmozer/.bashrc: line 3: syntax error near unexpected token `{kate'
bash: /home/mozershmozer/.bashrc: line 3: `function kk {kate 2>/dev/null 1>&2 & disown}'
I'm running Linux Mint 17.3 and those are the only two lines in my .bashrc file. Pretty much everything else on my machine is default vanilla. Ultimately I want to play around with the function to get it to do some specific things, but I hit the syntax wall immediately. The exact function I have listed here is just a sort of experimental dummy to allow me to learn the syntax more clearly.
|
In bash and other POSIX shells, { and } aren't exactly special symbols so much as they are special words in this context. When creating a compound command like in your function definition, it is important that they remain words, i.e. surrounded by whitespace.
The final command in a single-line function definition like this must be terminated by a semicolon. Otherwise the closing brace } is treated as an argument to the command.
As an aside, if you want your function to be portable to other POSIX shells, it is better to use a different function syntax:
kk () { kate 2>/dev/null 1>&2 & disown; }
The use of function is specific to bash, while the form given here works with bash as well as others like sh, Korn and Almquist shells.
disown is also bash specific.
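Applying both rules to a stand-in command (echo instead of kate, so the line is self-contained):

```shell
#!/bin/bash
# a space after { and a semicolon before } keep the braces as separate words
kk() { echo "hello from kk"; }
kk
```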
| Syntax Error near Unexpected Token in a bash function definition [closed] |
1,497,561,233,000 |
on bash (v4.3.11) terminal type this:
function FUNCtst() { declare -A astr; astr=([a]="1k" [b]="2k" ); declare -p astr; };FUNCtst;declare -p astr
(same thing below, just to be easier to read here)
function FUNCtst() {
declare -A astr;
astr=([a]="1k" [b]="2k" );
declare -p astr;
};
FUNCtst;
declare -p astr
will output this (outside of the function the array loses its value, why?)
declare -A astr='([a]="1k" [b]="2k" )'
bash: declare: astr: not found
I was expecting it output this:
declare -A astr='([a]="1k" [b]="2k" )'
declare -A astr='([a]="1k" [b]="2k" )'
How do I make it work?
|
From the man page:
When used in a function, declare makes each name local, as with the local command, unless the -g option is used.
Example:
FUNCtst() {
declare -gA astr
astr=([a]="1k" [b]="2k" )
declare -p astr
}
FUNCtst
declare -p astr
prints
declare -A astr=([a]="1k" [b]="2k" )
declare -A astr=([a]="1k" [b]="2k" )
| Bash array declared in a function is not available outside the function |
1,497,561,233,000 |
In a bash script I define a function that is called from find. The problem is that the scope of variables does not extend to the function. How do I access variables from the function? Here is an example:
variable="Filename:"
myfunction() {
echo $variable $1
}
export -f myfunction
find . -type f -exec bash -c 'myfunction "{}"' \;
This will output filenames but without the string "Filename:".
Is there perhaps a better way to invoke a function from find such that it is called for every file that find finds, and variables are still defined?
|
You could declare variable as an environment variable, i.e.,
export variable="Filename:"
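Putting it together (a sketch against a throwaway directory):

```shell
#!/bin/bash
export variable="Filename:"
myfunction() { echo "$variable" "$1"; }
export -f myfunction

d=$(mktemp -d); touch "$d/file1"
find "$d" -type f -exec bash -c 'myfunction "$1"' bash {} \;   # Filename: .../file1
rm -rf "$d"
```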
| Scope of variables when calling function from find |
1,497,561,233,000 |
I want to write a init script that should basically run
nvm use v0.11.12 && forever start /srv/index.js
as the user webconfig. nvm is a shell function that is declared in ~webconfig/.nvm/nvm.sh, which is included via source ~/.nvm/nvm.sh in webconfig's .bashrc.
I tried the following:
sudo -H -i -u webconfig nvm
echo "nvm" | sudo -H -i -u webconfig
but they fail with
-bash: nvm: command not found
-bash: line 1: nvm: command not found
When I run sudo -H -i -u webconfig and enter nvm manually in that shell, it works. What am I doing wrong?
|
The problem here, as is so often the case, is about the different types of shell:
When you open a terminal emulator (gnome-terminal for example), you are executing what is known as an interactive, non-login shell.
When you log into your machine from the command line, or run a command such as su username, or sudo -u username, you are running an interactive login shell.
So, depending on what type of shell you have started, a different set of startup files are read. From man bash:
When bash is invoked as an interactive login shell, or as a non-inter‐
active shell with the --login option, it first reads and executes com‐
mands from the file /etc/profile, if that file exists. After reading
that file, it looks for ~/.bash_profile, ~/.bash_login, and ~/.profile,
in that order, and reads and executes commands from the first one that
exists and is readable. The --noprofile option may be used when the
shell is started to inhibit this behavior.
In other words, ~/.bashrc is ignored by login shells. Since you are using the -i option to sudo, the startup files for the user's login shell are being read (from man sudo):
-i, --login
Run the shell specified by the target user's password data‐
base entry as a login shell. This means that login-specific
resource files such as .profile or .login will be read by the
shell.
So, what you can do is
Define the function in the user's ~/.profile or ~/.bash_profile instead. Bear in mind that ~/.profile is ignored if ~/.bash_profile exists. Also keep in mind that ~/.bash_profile is bash-specific so I would use .profile instead, just make sure that ~/.bash_profile does not exist.
Source ~/.nvm/nvm.sh from ~/.profile.
| Run nvm (bash function) via sudo |
1,497,561,233,000 |
I recently wrote the following bash function:
makeaudiobook () {
count=1
almbumartist=$2
for f in $1; do
preprocess $f > $f-preprocessed
text2wave $f-preprocessed -o $f.wav
lame -b 16 --tt $f --ta $albumartist --tl $albumartist --tn $count $f.wav $f.mp3
rm -rf $f.wav $f-preprocessed
count=$(($count+1))
done
}
It does not work as expected. It does not iterate over all of the files that match the glob expression I specify in $1. As a one-liner, this would work. And in fact, I made this into a function because I was tired of writing it out as a one-liner. But as a bash function, it doesn't work. So what do I need to do to make it work?
I want to be able to call it on things like the output of split, like,
me@localhost$~ makeaudiobook x?? haskell
...where x?? matches xaa, xab, xac, etc. Like I said, as a one-liner this is fine. So what do I need to do to pass this glob expression to $1?
|
x?? is expanded at the time of the function call, so your function is actually called with xaa xab xac ….
The simplest way would be to change the ordering of your parameters:
makeaudiobook () {
count=1
albumartist=$1
shift
for f in "$@"; do
preprocess "$f" > "$f"-preprocessed
text2wave "$f"-preprocessed -o "$f".wav
lame -b 16 --tt "$f" --ta "$albumartist" --tl "$albumartist" --tn "$count" "$f".wav "$f".mp3
rm -rf "$f".wav "$f"-preprocessed
count=$(($count+1))
done
}
(Note how I put " around every variable - this is to prevent bad things from happening if you have whitespace in your file names, which often happens with audio files.)
and call it with makeaudiobook haskell x??, i. e. with rotated arguments.
| for loop in bash function |
1,497,561,233,000 |
How do I prefix -p to every argument passed to my function?
Modifying the arguments themselves and creating a new array are both fine.
|
This should work nicely even for complicated arguments with whitespace and worse:
#!/bin/bash
new_args=()
for arg
do
new_args+=( '-p' )
new_args+=( "$arg" )
done
for arg in "${new_args[@]}"
do
echo "$arg"
done
Test:
$ ~/test.sh foo $'bar\n\tbaz bay'
-p
foo
-p
bar
baz bay
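Since the question asks to do this inside a function, the same loop can be wrapped up (prefix_p is an illustrative name); the rewritten list could then be passed on to another command via "${new_args[@]}":

```shell
#!/bin/bash
prefix_p() {
  local new_args=() arg
  for arg in "$@"; do
    new_args+=( -p "$arg" )
  done
  printf '%s\n' "${new_args[@]}"   # stand-in for: somecommand "${new_args[@]}"
}
prefix_p foo 'bar baz'
```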
| Prefix every argument with -p in BASH |
1,554,126,051,000 |
Is there a way to lookup (just print the completion definition to stdout) currently loaded zsh completion functions.
I understand that they are stored somewhere on my fpath and I could do something like ag $fpath completionname to find the definition in the files.
Is there a cleaner way, a zsh function or something to do this?
|
The name of the completion function for the command foo is $_comps[foo].
To see the code of a function myfunc, run echo -E $functions[myfunc], or just echo $functions[myfunc] if you have the bsd_echo option on, or print -rl $functions[myfunc]. So to see the code of the completion function for the command foo, run echo -E $functions[$_comps[foo]]. Alternatively, run which $_comps[foo] if the function name has no alias.
This shows the code without comments (and with normalized whitespace: it's a human-readable dump of the bytecode that zsh stores internally). If you want to see the file the function is originally defined in, run whence -v $_comps[foo] or echo $functions_source[$_comps[foo]]. The functions_source array is only available if the module zsh/parameter is loaded (you can do this with zmodload zsh/parameter) and only since zsh 5.4.
If you haven't used the function yet, you'll see something like builtin autoload -XU instead of the code, and no path to the source. To see the path to the source, run autoload -r $_comps[foo] first to make zsh resolve the path to the source, then you can display it with which or whence or $functions_source.
zmodload zsh/parameter
(($+_comps[foo])) &&
autoload -r $_comps[foo] &&
(($+functions_source[$_comps[foo]])) &&
cat $functions_source[$_comps[foo]]
| How to look up zsh completion definitions |
1,554,126,051,000 |
Normally when I cat a file like this
it's hard to read without colorizing.
I've managed to get cat to use source-highlight like this:
cdc() {
for fn in "$@"; do
source-highlight --out-format=esc -o STDOUT -i $fn 2>/dev/null || /bin/cat $fn
done; }; alias cat='cdc'
which now produces the following for a recognized file extension - .sh in this case:
However without the .sh, e.g. if the file is just called .bash_functions the colorizing doesn't happen - because the file extension is not known.
Is there any way I can get color-highlight to color dot files (files that begin with a dot) as sh colors ?
btw this builds on top of How can i colorize cat output including unknown filetypes in b&w?
man source-highlight shows the following but I'm not clear what to do:
...
--outlang-def=filename
output language definition file
--outlang-map=filename
output language map file (default=`outlang.map')
--data-dir=path
directory where language definition files and language maps are searched for. If not specified
these files are searched for in the current directory and in the data dir installation directory
--output-dir=path
output directory
--lang-def=filename
language definition file
--lang-map=filename
language map file (default=`lang.map')
--show-lang-elements=filename
prints the language elements that are defined
in the language definition file
--infer-lang
force to infer source script language (overriding given language specification)
|
Define your cdc function as
cdc() {
for fn do
if [[ "${fn##*/}" == .* ]]
then
source-highlight --src-lang=sh --out-format=esc -i "$fn"
else
source-highlight --out-format=esc -i "$fn"
fi 2> /dev/null || /bin/cat "$fn"
done
}
for fn do is short for for fn in "$@"; do.
${fn##*/} looks at the value of $fn and removes
everything from the beginning up through (and including) the last /.
I.e., if $fn is a full pathname, this will be just the filename part.
[[ (the_above) == .* ]] checks
whether the filename matches the .* glob/wildcard pattern;
i.e., whether the filename begins with a ..
Note that this usage of == works only inside [[ … ]];
it does not work inside [ … ].
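As a quick illustration of these two expansions (the file name here is made up):

```shell
fn='/home/user/.bash_functions'
base="${fn##*/}"            # remove everything through the last slash
echo "$base"                # -> .bash_functions
if [[ "$base" == .* ]]; then
    echo "dot file"         # printed, since the name starts with a dot
fi
```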
So, if $fn is a “dot file”,
run source-highlight with the --src-lang=sh option.
You should always put shell variable references in double quotes
unless you have a good reason not to,
and you’re sure you know what you’re doing.
Unix/Linux filenames can contain spaces.
If you had a file named foo bar, and you said /bin/cat "foo bar",
cat would display the contents of the file foo bar.
But, if you said cdc "foo bar"
(with the current version of your cdc function),
you would run source-highlight with -i foo bar,
which would look for a file called foo and generally make a mess of things.
And so it would fail, and your function would try /bin/cat foo bar,
which would likewise fail.
Using "$fn" makes this work for filenames that contain spaces.
The cp program requires you to specify, on the argument list,
the name of the file or directory you want it to write to.
This is one of the few exceptions to the rule that most programs
write to standard output by default (unless you specify otherwise).
You don’t need to say -o STDOUT, and I wonder
why the author(s) of the program even made it possible for you to specify that.
And, yes, I realize that you just copied all of that
from the answer to your other question.
Obviously, if $fn is not a dot file,
just run source-highlight the normal way, and let it check for an extension.
Note that the 2> /dev/null and the || /bin/cat "$fn" can be done
for the if … then … else … fi block in its entirety;
they don’t have to be repeated for each branch.
Hmm. My version of source-highlight (3.1.7)
has a --src-lang=LANGUAGE option
(-s LANGUAGE, as used by yaegashi, for short).
I just noticed that it isn’t in the source-highlight man page excerpt
you included in your question. So, obviously,
if your version of source-highlight doesn’t support that option,
my answer won’t work for you. (And, of course, neither will yaegashi’s.)
If that’s the case, you should see if you can install version 3.1.7
(or compatible) of source-highlight.
| How can I make source-highlight colorize .dotfiles by default? |
1,554,126,051,000 |
I like to encapsulate commands within shell-functions using the same name. But to avoid the shell-function calling itself recursively, I specify the complete path of the command as the following example:
less()
{
test 0 -lt $# && echo -ne "\e]2;$@\a\e]1;$@\a" # Window title
/usr/bin/less -i -rS --LONG-PROMPT --shift 5 "$@"
}
But for some commands, I do not want to specify the path because it may change. I would prefer to use $PATH.
For instance, my following attempt failed to call mvn command using backslash: \mvn
mvn() # colorizes maven output
{
\mvn "$@" 2>&1 | #here: the shell-function recursively calls itself indefinitely
sed -u '
s/^\[ALL\].*/\o033[1;37m&\o033[0m/
s/^\[FATAL\].*/\o033[1;31m&\o033[0m/
s/^\[ERROR\].*/\o033[1;31m&\o033[0m/
s/^\[WARNING\].*/\o033[1;33m&\o033[0m/
s/^\[INFO\].*/\o033[1;37m&\o033[0m/
s/^\[DEBUG\].*/\o033[1;36m&\o033[0m/
s/^\[TRACE\].*/\o033[1;32m&\o033[0m/'
}
What is the best way to bypass this issue?
Please do not suggest to use a different shell-function name.
(I usually use bash but I am interested about other shell solutions.)
|
Prepend your actual commands (not functions) with command shell builtin, it has exactly the purpose you're looking for. Therefore your shell-function should look that:
mvn()
{
command mvn "$@" 2>&1 |
sed -u '
s/^\[ALL\].*/\o033[1;37m&\o033[0m/
s/^\[FATAL\].*/\o033[1;31m&\o033[0m/
s/^\[ERROR\].*/\o033[1;31m&\o033[0m/
s/^\[WARNING\].*/\o033[1;33m&\o033[0m/
s/^\[INFO\].*/\o033[1;37m&\o033[0m/
s/^\[DEBUG\].*/\o033[1;36m&\o033[0m/
s/^\[TRACE\].*/\o033[1;32m&\o033[0m/'
}
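To see the effect of command in isolation, here is a minimal sketch using ls in place of mvn (so it runs without maven installed); without command, the wrapper would call itself forever:

```shell
ls() {
    echo "wrapper ran"
    command ls / > /dev/null   # reaches the real binary, not this function
}
ls
```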
| Best way to call command within a shell function having the same name [duplicate] |
1,554,126,051,000 |
I have a bash script that contains many common functions definitions for our Linux system.
Is it possible to source it and use functions from another shell flavor (csh and ksh) ?
|
It should be easy enough to create a wrapper script around the functions; for Bash:
#!/bin/bash
doSomething()
{
...
}
doSomethingElse()
{
...
}
FUNCTION_NAME=$1
shift
${FUNCTION_NAME} "$@"
exit $?
Call:
/path/to/functionWrapper.sh doSomething [<param1>] [...]
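Here is a self-contained sketch of the same dispatch pattern (the file path and function names are made up). From csh you could then wrap each entry point, e.g. alias doSomething '/path/to/functionWrapper.sh doSomething':

```shell
# create a throwaway wrapper script
cat > /tmp/functionWrapper.sh <<'EOF'
#!/bin/bash
greet() { echo "hello $1"; }
FUNCTION_NAME=$1
shift
"${FUNCTION_NAME}" "$@"
exit $?
EOF
chmod +x /tmp/functionWrapper.sh
/tmp/functionWrapper.sh greet world   # -> hello world
```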
| Is it possible to use functions declared in a shell flavor in another shell type |
1,554,126,051,000 |
When I create
alias wd='ps -ef | grep java | awk {'print $2 " " $9'} | egrep "(A|B|C|D)"'
or
function wd () {
ps -ef | grep java | awk '{print $2}' ...
}
in my .bashrc file, I get errors. Interestingly, if I source my .bashrc file with the function, it 'compiles', but when executing, gives me:
context is
>>> \ <<< {\print
missing }
Can someone help me with this, and also answer when its better to put something in a function versus in an alias?
|
Why the alias doesn't work
alias wd='ps -ef | grep java | awk {'print $2 " " $9'} | egrep "(A|B|C|D)"'
The alias command receives three arguments. The first is the string wd=ps -ef | grep java | awk {print (the single quotes prevent the characters between them from having a special meaning). The second argument consists of a single space character. (In .bashrc, the positional parameters $2 and $9 are empty, so $2 expands to a list of 0 words.) The third argument is } | egrep "(A|B|C|D)" (again the single quotes protect the special characters).
The alias definition is parsed like any other shell command when it is encountered. Then the string defined for the alias is parsed when the alias is expanded. Here are some possible ways to define this alias. First possibility: since the whole alias definition is within single quotes, only use double quotes in the commands, which means you must protect the " and $ meant for awk with backslashes.
alias wd='ps -ef | grep java | awk "{print \$2 \" \" \$9}" | egrep "(A|B|C|D)"'
Second possibility: every character stands for itself within single quotes, except that a single quote ends the literal string. '\'' is an idiom for “single quote inside a single-quoted string”: end the single-quoted string, put a literal single quote, and immediately start a new single-quoted string. Since there's no intervening space, it's still the same word.
alias wd='ps -ef | grep java | awk '\''{print $2 " " $9}'\'' | egrep "(A|B|C|D)"'
You can simplify this a bit:
alias wd='ps -ef | grep java | awk '\''{print $2, $9}'\'' | egrep "(A|B|C|D)"'
Tip: use set -x to see how the shell is expanding your commands.
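The '\'' idiom is easy to test on its own:

```shell
# end the quoted string, add a literal apostrophe, reopen quoting -- one word
printf '%s\n' 'it'\''s'   # prints: it's
```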
Why the function doesn't work
I don't know. The part you show looks ok. If you still don't understand why your function isn't working after my explanations, copy-paste your code.
Alias or function?
Use an alias only for very simple things, typically to give a shorter name to a frequently-used command or provide default options. Examples:
alias grep='grep --color'
alias cp='cp -i'
alias j=jobs
For anything more complicated, use functions.
What you should have written
Instead of parsing the ps output, make it generate output that suits you.
wd () {
ps -C java -o pid=,cmd= | egrep "(A|B|C|D)"
}
| alias or bash function does not work |
1,554,126,051,000 |
I'm not entirely sure why I'm getting the error in my .bash_profile
syntax error near unexpected token `('
when I use the keyword grom() for my function. I wanted to create a bash function that will just automatically rebase my master branch with origin
# git status
alias gs='git status'
# git stash list
alias gsl='git stash list'
grom() {
branch_name="$(git symbolic-ref --short -q HEAD)"
git fetch && git rebase origin/master && git checkout master && git rebase origin/master && git checkout $branch_name
}
# git stash apply. must append stash@{num}
alias gsa="git stash apply"
When I change the name of the function, it compiles fine. I couldn't find grom as a keyword so I'm not sure what the issue is. If I rename the function to anything else like git-rom or even something like groms, it compiles fine. Is there some special keywords that do not work? This is on Mac OS X.
|
If you're using bash, you may have better luck declaring it as:
function grom() { … }
(Note: function will not work in strict POSIX shells like dash!)
@aug suggested (via edits to this answer) that this is due to a conflicting alias (or, less plausibly, a builtin that somehow got defined).
The reserved word function either alters the loading order to preempt the collision (aliases expand during function definition) or else avoids the issue by disabling bash's posix mode (which may allow overriding a builtin).
From the bash(1) man page:
Aliases are expanded when a function definition is read, not when the function is executed, because a function definition is itself a compound command. As a consequence, aliases defined in a function are not available until after that function is executed.
If you have a conflicting alias, you can try unalias grom before sourcing .bash_profile (it isn't necessary to add to that file unless you're still defining that conflicting alias) to clear your past experiments. Alternatively, launch a new terminal for a clean start.
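The interaction can be demonstrated in a throwaway script (all names made up). Note that at call time an alias still shadows a function of the same name, unless you quote part of the word:

```shell
shopt -s expand_aliases                  # scripts don't expand aliases by default
alias grom='echo from-alias'
function grom { echo from-function; }    # the keyword keeps the name unexpanded
grom                                     # alias expansion wins: from-alias
\grom                                    # quoting suppresses the alias: from-function
```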
| grom() keyword in bash throws unexpected '(' token |
1,554,126,051,000 |
I would like to write a function in bash, then export that function and execute it over ssh. Is that possible, and if yes, how?
I tried
#!/bin/bash
function myfunc() {
echo $1
}
export -f myfunc
but this doesn't seem to work.
|
In the example that you mention in your comment it is parallel that transfers the function to the remote environment (and it works only with bash). So you have to use parallel to try it. After defining and exporting (as in the question), you should:
function myfunc() {
echo $1
}
export -f myfunc
parallel --env myfunc -S server 'myfunc abc' ::: bar
There is a part in the tutorial about that.
The bash function forwarding feature with --env has been available starting with parallel version 20130722.
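If parallel is not available, a common workaround (not part of this answer) is to serialize the function with declare -f and prepend it to the remote command. The sketch below simulates the remote shell with a fresh bash -c; with real ssh it would be ssh server "$(declare -f myfunc); myfunc abc":

```shell
myfunc() { echo "got: $1"; }
# declare -f prints the function's definition as reusable shell source,
# so the child shell can redefine it before calling it
bash -c "$(declare -f myfunc); myfunc abc"   # -> got: abc
```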
| How to export a function in bash over ssh? |
1,554,126,051,000 |
How can I display a list of shell variables in Bash, not including functions, which often clutter the output, because of the many related to completion?
I have examined declare, and it has an option for limiting the output to functions (declare -f), but not for limiting the output to "plain" shell variables?
|
The command compgen -v will display a list of names of shell variables in the current bash shell session. Also, declare -p, which lists the attributes and values of all variables in a form that is (almost always) suitable for shell input, does not list functions.
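A quick check in a script:

```shell
myvar=42
compgen -v | grep -qx myvar && echo "myvar is listed"
declare -p myvar    # -> declare -- myvar="42"
```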
| How to display only shell variables (not functions) [duplicate] |
1,554,126,051,000 |
How do I use the positional parameters (which are given from the command line) inside of a function declaration?
When inside the function definition, $1 and $2 are the only the positional parameters of the function itself, not the global positional parameters!
|
It's not exactly clear to me what you are asking, but the following example might clear things up:
$ cat script
#!/usr/bin/env bash
echo "Global 1st: ${1}"
echo "Global 2nd: ${2}"
f(){
echo "In f() 1st: ${1}"
echo "In f() 2nd: ${2}"
}
f "${1}" "${2}"
$ ./script foo bar
Global 1st: foo
Global 2nd: bar
In f() 1st: foo
In f() 2nd: bar
| Use of positional parameters inside function defintion |
1,554,126,051,000 |
How does Bash initialize local variables? Will the following commands always do the same thing (when used inside a function)?
local foo
local foo=
local foo=""
|
local foo="" and local foo= are exactly equivalent. In both cases, the right-hand side of the equal sign is an empty string.
local foo and local foo= are different: local foo leaves foo unset while local foo= sets foo to an empty string. More precisely, local foo creates a local variable, and that variable is initially unset. A subsequent assignment foo=… sets the local value, and that assignment can be combined with the local statement. Witness:
bash-4.3$ demo () {
local unset empty=
echo "unset=\"${unset-(not set)}\" empty=\"${empty-(not set)}\""
}
bash-4.3$ demo
unset="(not set)" empty=""
This is the same behavior as ksh (except that in ksh you need to use the keyword typeset instead of local). On the other hand, in zsh, local foo sets foo to an empty string.
| Bash local variable initialization |
1,554,126,051,000 |
Is there a way to return a specific value in an echoing function?
return allows me to return an exit status for the function. I need to return a more sophisticated data structure such as an array, or a string. Usually I would need to echo the value I want to be returned. But what if I need to echo informative messages in my function, and I only need to get the last echo which holds the result I need?
I have this chunk of code I would like to make a function out of, but I would like to keep the informative echos, as they are useful for guiding the user's input.
modules=(module1 module2 module3)
is_valid=-1
while [ $is_valid -lt 1 ]
do
echo "Please chose and order the available modules you need:"
echo -e $(list_array_choices modules[@])
echo -n "> "
read usr_input
choices=("$usr_input")
is_valid=$(is_list_in_range choices[@] ${#modules[@]})
[ "$is_valid" -eq -1 ] && echo -e "Error: your input is invalid.\n"
done
I would like to do something like
function get_usr_choices() {
modules=${!1}
is_valid=-1
while [ $is_valid -lt 1 ]
do
echo "Please chose and order the available modules you need:"
echo -e $(list_array_choices modules[@])
echo -n "> "
read usr_input
choices=("$usr_input")
is_valid=$(is_list_in_range choices[@] ${#modules[@]})
[ "$is_valid" -eq -1 ] && echo -e "Error: your input is invalid.\n"
done
echo ${choices[@]} # This is the result I need.
}
choices=$(get_usr_choices modules[@])
Alas the echos completely mess up the output as I get a string containing all the echo, including the informative ones. Is there a way to do what I want a clean way?
|
You could output all the other content directly to the screen, assuming you never want to do anything with it other than display it.
Something similar to this could be done:
#!/bin/bash
function get_usr_choices() {
#put everything you only want sending to screen in this block
{
echo these
echo will
echo go
echo to
echo screen
}> /dev/tty
#Everything after the block is sent to stdout which will be picked up by the assignment below
echo result
}
choices=$(get_usr_choices)
echo "<choices is $choices>"
Running this returns
these
will
go
to
screen
<choices is result>
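An alternative to /dev/tty (a common pattern, not part of this answer) is to send the informative messages to standard error, which $(...) does not capture:

```shell
get_usr_choices() {
    echo "Please choose your modules:" >&2   # informative text -> stderr (terminal)
    echo "module1 module3"                   # the actual result -> stdout (captured)
}
choices=$(get_usr_choices)
echo "choices are: $choices"   # -> choices are: module1 module3
```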
| Get specific result from function |
1,554,126,051,000 |
This is probably a very easy to answer question, but I could not find any questions already asking this due to different wording when writing titles.
Running help alias on my bash prompt returns only this:
alias: alias [-p] [name[=value] ... ]
Then a very short text that has nothing to do with what I'm asking.
I also tried:
help function
But that didn't give me much information either.
For example:
alias mancat="man command | cat"
So that I could run mancat grep which would be equivalent to man grep | cat.
I know that these are called variables, but they are undefined and I want to be able to change them at any time, like when running my example command.
|
You need to use function, not alias, so that
mancat () { man "$1" | cat ; }
mancat grep
will do what you want.
Similarly
mygrep () { "$1" "$3" "$2" | "$1" -v "$4" | "$5" -n1; }
mygrep grep pattern1 file pattern2 head
mygrep grep pattern1 file pattern2 tail
will grep for pattern1 in the file and then select only lines which don't match pattern2 (grep -v) and at the end select only first (or last) line.
| How do I define alias with variables which can be changed at runtime? |
1,554,126,051,000 |
I have this function to get MAC address from IP:
ip2arp() {
local ip="$1"
ping -c1 -w1 "$ip" >/dev/null
arp -n "$ip" | awk '$1==ip {print $3}' ip="$ip"
}
What is the right way for using it later? Save to /usr/bin as sh and make it executable, or save it in a home directory and make an alias in bash? Is there a right and wrong way?
|
If it's only for your personal use then you could add it to your shell's initialization file as a function, e.g. ~/.bashrc.
For a summary of the different initialization files in Bash you can consult the Bash Guide for Beginners:
Bash Guide for Beginners - Section 3.1: Shell Initialization Files
Also see the Bash Reference Manual:
Bash Reference Manual - Section 6.2: Bash Startup Files
A typical pattern would be to put your function definition in your ~/.bashrc file and source that file from your ~/.bash_profile.
But it's probably worth noting that which profile file to use can depend on your OS, your terminal application, and your own preferences. See for example the following posts on AskDifferent:
Why doesn't .bashrc run automatically?
Why doesn't Mac OS X source ~/.bashrc?
Also see this post on StackOverflow:
What's the difference between .bashrc, .bash_profile, and .environment?
Alternatively, you can create a personal directory for your own scripts (e.g. I use ~/local/bin) and then add that directory to your PATH in your profile file (i.e. export PATH="${HOME}/local/bin:${PATH}").
If you want to make it available to other users then you might put it in /usr/local/bin as a script (rather than /usr/bin).
For further discussion regarding where to put executable files, see the following posts:
/usr/bin vs /usr/local/bin on Linux
Where should a local executable be placed?
| How to save a function in bash for later use? |
1,554,126,051,000 |
I need two ways to terminate a part of my bash script.
Either a counter reaches a predefined number, or the user manually forces the script to continue with whatever the value the counter currently has.
Specifically - I'm listing USB drives. If there is 15 of them, the function that counts them exits and the script can continue.
My code looks a bit like this:
scannew(){
NEW=0
OLD=$NEW
while [ true ]; do
# count the new drives
lsblk -r > drives.new
diff drives.old drives.new | grep disk | cut -d' ' -f 2 | sort > drives.all
NEW=$(wc -l drives.all | cut -d' ' -f1)
echo -en " Detected drives: $NEW \r"
sleep 0.01
if [ "$NEW" -eq "15" ]; then # exit if we reach the limit
break
fi
done
}
# SOME CODE...
lsblk -r > drives.old
scannew & # start live device counter in the background
SCAN_PID=$! # remember it's PID
wait $SCAN_PID 2>/dev/null # wait until it dies
echo "It's on!"
# REST OF THE CODE...
I tried various stuff with the read command, but the result is that the script will always wait for read to exit (after pressing ENTER), and I can't make the "15 limit" condition override that.
For example I tried using read -t instead of sleep in the scannew() function:
scannew(){
NEW=0
OLD=$NEW
while [ true ]; do
# count the new drives
lsblk -r > drives.new
diff drives.old drives.new | grep disk | cut -d' ' -f 2 | sort > drives.all
NEW=$(wc -l drives.all | cut -d' ' -f1)
echo -en " Detected drives: $NEW \r"
read -t 0.01 -n 1 && break # read instead of sleep
if [ "$NEW" -eq "15" ]; then
break
fi
done
}
However - it seems that the function subprocess doesn't have access to stdin, and using read -t 0.01 -n 1 < /dev/stdin && break instead didn't work either.
How can I make this work?
|
Let me start by saying that you could just inline all the stuff you have in scannew, since you're waiting anyway, unless you intend to scan again at some other point in your script. It's really the call to wc that you're concerned might take too long, and if it does, you can just terminate it. This is a simple way to set that up using trap, which allows you to capture signals sent to a process and set your own handler for them:
#! /usr/bin/env bash
# print a line just before we run our subshell, so we know when that happens
printf "Lets do something foolish...\n"
# trap SIGINT since it will be sent to the entire process group and we only
# want the subshell killed
trap "" SIGINT
# run something that takes ages to complete
BAD_IDEA=$( trap "exit 1" SIGINT; ls -laR / )
# remove the trap because we might want to actually terminate the script
# after this point
trap - SIGINT
# if the script gets here, we know only `ls` got killed
printf "Got here! Only 'ls' got killed.\n"
exit 0
However, if you want to retain the way you do things, with scannew being a function run as a background job, it takes a bit more work.
Since you want user input, the proper way to do it is to use read, but we still need the script to go on if scannew completes and not just wait for user input forever. read makes this a bit tricky, because bash waits for the current command to complete before allowing traps to work on signals. The only solution to this that I know of, without refactoring the entire script, is to put read in a while true loop and give it a timeout of 1 second, using read -t 1. This way, it'll always take at least a second for the process to finish, but that may be acceptable in a circumstance like yours where you essentially want to run a polling daemon that lists usb devices.
#! /usr/bin/env bash
function slow_background_work {
# condition can be anything of course
# for testing purposes, we're just checking if the variable has anything in it
while [[ -z $BAD_IDEA ]]
do
BAD_IDEA=$( ls -laR / 2>&1 | wc )
done
# `$$` normally gives us our own PID
# but in a subshell, it is inherited and thus
# gives the parent's PID
printf "\nI'm done!\n"
kill -s SIGUSR1 -- $$
return 0
}
# trap SIGUSR1, which we're expecting from the background job
# once it's done with the work we gave it
trap "break" SIGUSR1
slow_background_work &
while true
do
# rewinding the line with printf instead of the prompt string because
# read doesn't understand backslash escapes in the prompt string
printf "\r"
# must check return value instead of the variable
# because a return value of 0 always means there was
# input of _some_ sort, including <enter> and <space>
# otherwise, it's really tricky to test the empty variable
# since read apparently defines it even if it doesn't get input
read -st1 -n1 -p "prompt: " useless_variable && {
printf "Keypress! Quick, kill the background job w/ fire!\n"
# make sure we don't die as we kill our only child
trap "" SIGINT
kill -s SIGINT -- "$!"
trap - SIGINT
break
}
done
trap - SIGUSR1
printf "Welcome to the start of the rest of your script.\n"
exit 0
Of course, if what you actually want is a daemon that watches for changes in the number of usb devices or something, you should look into systemd which might provide something more elegant.
| Wait for a process to finish OR for the user to press a key |
1,554,126,051,000 |
I have lots of functions in my bashrc, but for newly created ones I often forget the name of the function.
So for example, when I have defined this function in my .bashrc:
function gitignore-unstaged
{
### Description:
# creates a gitignore file with every file currently not staged/commited.
# (except the gitingore file itself)
### Args: -
git ls-files --others | grep --invert-match '.gitignore' > ./.gitignore
}
And I would like to have another function which prints out the definition of the function like:
$ grepfunctions "gitignore"
function gitignore-unstaged
{
### Description:
# creates a gitignore file with every file currently not staged/commited.
# (except the gitingore file itself)
### Args: -
git ls-files --others | grep --invert-match '.gitignore' > ./.gitignore
}
But instead of matching for "gitignore" I want it to match every string between function and }, so $ grepfunctions "###" and $ grepfunctions "creates" should output the exact same thing. That's also the reason why declare -f and such don't solve the problem.
What I have tried
I can't use grep
I know that sed -n -e '/gitignore-unstaged/,/^}/p' ~/.bashrc prints out what I want, but sed -n -e '/creates/,/^}/p' ~/.bashrc does not. Instead, I receive:
# creates a gitignore file with every file currently not staged/commited.
# (except the gitingore file itself)
### Args: -
git ls-files --others | grep --invert-match '.gitignore' > ./.gitignore
}
The function name and the first { are cut out, which is not what I want.
How can I print out the complete function declaration of any function which has a specific string inside it? Of course, other tools than sed are also allowed.
|
Note that with zsh, you can do:
printf '%s() {\n%s\n}\n\n' ${(kv)functions[(R)*gitignore*]}
To retrieve the information from the currently defined functions (that doesn't include the comments obviously).
Now, if you want to extract the information from the source file, then you can't do reliably unless you implement a full shell parser.
If you can make some assumption on how your functions are declared, like for instance if you always use that ksh-style function definition, with function and } at the start of the line, you could do:
perl -l -0777 -ne 'for (/^function .*?^\}$/gms) {
print if /gitignore/}' ~/.bashrc
or to only look in the function body:
perl -l -0777 -ne 'for (/^function .*?^\}$/gms) {
print if /\{.*gitignore/s}' ~/.bashrc
| How can I print out the complete function declaration of any function which has a specific string inside it? |
1,554,126,051,000 |
I have not yet found a solution to this. Does anyone have a hint?
I sometimes write bash functions in my shell scripts, and I like my scripts to be verbose, not just for debugging. So sometimes I would like to display the "name" of a called bash function as a "variable" in my scripts.
What I sometimes did was set a regular variable containing the function name, like this:
test ()
{
funcName=test
echo "function running..."
echo "\$0 is : $0"
echo "function name is : $funcName"
}
but that is kinda stupid.
Is there something better?
|
Sometimes it's enough to read man bash:
FUNCNAME
An array variable containing the names of all shell
functions currently in the execution call stack. The
element with index 0 is the name of any
currently-executing shell function. The bottom-most
element (the one with the highest index) is "main".
This variable exists only when a shell function is
executing. Assignments to FUNCNAME have no effect and
return an error status. If FUNCNAME is unset, it loses
its special properties, even if it is subsequently
reset.
Example usage:
#!/usr/bin/env bash
func()
{
echo I am inside "$FUNCNAME"
}
foo()
{
echo I am inside "$FUNCNAME"
}
func
foo
| how to get or reflect the name of the bash function which is called? [duplicate] |
1,554,126,051,000 |
I'm performing several commands on large files from within a bash script. In order to monitor the progress I use the pv command.
An example command could look like this
cat $IN_FILE | pv -w 20 -s $(du -sb $IN_FILE | awk '{print $1}') | grep ...
The script contains multiple commands of similar structure, so rather than having to use the same prefix on all of those commands over and over again I defined a function for the progress monitoring part as shorthand. (Note: the fSize and fSizeSum functions wrap the du command part from the above example, where fSizeSum returns the sum of the sizes of two files and fSize the size of one file):
function prog()
{
local __size
if [[ $2 ]]
then
__size=$(fSizeSum $1 $2)
else
__size=$(fSize $1)
fi
echo "cat $1 | pv -w 20 -s ${__size}"
}
Actually that approach already shortens the commands down a bit. I can execute the above example now like this:
eval "$(prog $IN_FILE) | grep ..."
What is still bothering me, is that I now have to quote each line and use eval. I'd rather like to use the function "prog" like it was a regular shell command i.e.:
prog $IN_FILE | grep ...
All my attempts to modify my function to work as a prefix in a chain of piped commands have failed so far.
Is there any way I can make a self-defined bash function act like it was a regular bash command and put it at the front of a pipe chain? If there is no way to do so using functions, is there any other method to achieve this goal, like defining a local alias within the script?
|
In your function definition, I would suggest replacing:
echo "cat $1 | pv -w 20 -s ${__size}"
with just:
cat $1 | pv -w 20 -s ${__size}
This way, the function itself will execute this bit of code, without requiring a call to eval in the caller.
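With that change the function composes directly in a pipeline. A minimal sketch with cat standing in for the pv pipeline (so it runs even where pv is not installed):

```shell
prog() {
    # whatever the function writes to stdout feeds the next pipe stage
    cat "$1"
}
printf 'alpha\nbeta\n' > /tmp/pipe_demo.txt
prog /tmp/pipe_demo.txt | grep beta   # -> beta
```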
| How to use a bash function like a regular command in a pipe chain? |
1,554,126,051,000 |
I want to be able to name a terminal tab so I can keep track of which one is which. I found this function (here) and put it in my .bashrc:
function set-title() {
if [[ -z "$ORIG" ]]; then
ORIG=$PS1
fi
TITLE="\[\e]2;$*\a\]"
PS1=${ORIG}${TITLE}
}
and now when I call set-title my new tab name the tab name is changed as expected to "my new tab name". The problem is that I want to open a new tab and name it using set-title. The way I have tried to do this, is like this:
gnome-terminal --geometry=261x25-0+0 --tab -e "bash -c 'set-title tab1; sleep 10'" --tab -e "bash -c 'set-title tab2; sleep 10'"
However, now I get the following error message:
bash: set-title: command not found
And I think this is to do with the new gnome tab not knowing about the .bashrc function yet.
How can I get this to work?
|
Instant of using function set-title you can create command with this functionality, so remove function set-title() that you add from ~/.bashrc and create a file /usr/local/bin/set-title:
#!/bin/bash
echo -ne "\033]0;$1\007"
Add chmod: chmod +x /usr/local/bin/set-title.
And after you re-open terminal you can use this command by: set-title TEST (If you have /usr/local/bin/ in your $PATH).
And then you can use it when creating new tab by this way:
gnome-terminal --geometry=261x25-0+0 \
--tab -e "bash -c 'set-title TAB1; sleep 10'" \
--tab -e "bash -c 'set-title TAB2; sleep 10'"
If you somehow don't have /usr/local/bin/ in your $PATH, you can try with absolute path to the set-title command:
--tab -e "bash -c '/usr/local/bin/set-title TAB1; sleep 10'"
| Call a .bashrc function from a bash shell script |
1,554,126,051,000 |
I have written a function which acts in a similar way to tee but also pre-pends a datestamp. everything works fine except when i want to output to a file which is only root writable (in my case a logfile within /var/log). I've simplified the following code snippet to just include the bits which are not working:
#!/bin/bash
#script ~/test_logger.sh
logfile=/var/log/test.log
logger()
{
while read data
do
echo $data >> $logfile
done
return 0
}
sudo ls ~ | logger
it works fine if i run the whole script like so sudo ~/test_logger.sh but i can't always do this since i want to use the logger function in files like ~/.bash_logout which are run automatically. i've tried putting sudo in front of the echo in the while loop but this does not work. any ideas?
|
It's generally bad practice to put sudo in a script. A better choice would be to call the script with sudo from ~/.bash_logout or wherever else you want to use it, if you must, or better still just make /var/log/test.log world-writable.
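If you do need the elevation inside the function, a common alternative sketch is to elevate only the write itself with `sudo tee -a` (note this will prompt for a password unless sudo is configured to run without one):

```shell
logfile=/var/log/test.log

logger()
{
    while read -r data
    do
        # only the append runs as root; the rest of the script stays unprivileged
        echo "$data" | sudo tee -a "$logfile" > /dev/null
    done
    return 0
}
```

Usage is unchanged: `sudo ls ~ | logger`.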
| How can I use sudo within a function? |
1,554,126,051,000 |
On small systems where there is no locate installed, How would an alias look like that gets the same result as locate?
I can imagine find can produce the same output so an alias could look like
alias locate="find / -name"
But that doesn't seem to work the same as locate:
locate test
will only find files whose name is exactly test, while the real locate finds all files whose names contain it.
Workaround
use:
locate *test*
|
To reduce the huge speed impact of find, you could simulate something like locate's database:
alias locate="if [ ! -e /tmp/locate.db -a ! -e /tmp/locate.lockdb ]
then touch /tmp/locate.lockdb
trap \"rm /tmp/locate.lockdb; rm /tmp/locate.db; exit\" SIGHUP SIGINT SIGTERM
find /|tee /tmp/locate.db
chmod 666 /tmp/locate.db
rm /tmp/locate.lockdb
elif [ -e /tmp/locate.lockdb ]
then find /
else cat /tmp/locate.db
fi|grep "
Of course there is an issue with permissions! It would be better to write some setuid commands for tee and cat to write the database in super user mode and at a better location than /tmp.
A cheap alternative on most single or few user systems would be to write a per-user locate.db somewhere near $HOME.
Another nice alias is able to update before finding. Finally, I think this alias works better than the original locate ;)
alias relocate="if [ ! -e /tmp/locate.lockdb ]
then rm /tmp/locate.db
fi
locate "
Edit: I actually thought that relocate should just be used like the locate alias above. If you use relocate without an argument you get an error. The idea is to use relocate "no file to search for" if you don't want to search but just update the database.
Ok, the find must be setuid'ed too. But then you can throw away your locate package.
The grep argument should be passed through sed to quote the . dots.
NOTE FOR THE NOOBS: When I'm talking about setuid here DON'T SET THE SETUID FLAG ON TOOLS LIKE tee,cat or find. This would be a security breach of your system! What I mean is to write secure alternatives for these simple commands that work in setuid mode and that work in a restricted way, just for the purpose to provide fitted tools for this alias.
| locate alias with find |
1,554,126,051,000 |
Taking as reference the following code, for simplicity:
#!/bin/bash
number=7
function doSomething() {
number=8
}
doSomething
echo "$number"
It prints 8.
But with:
#!/bin/bash
number=7
function doSomething() {
number=8
}
$(doSomething)
echo "$number"
It prints 7.
I have the following questions:
What are the technical names for each one? I mean functioncall and $(functioncall).
How does each approach work? It seems the former considers (affects) the variables outside of the function itself, while the latter does not.
When is it mandatory to use one approach over the other (mostly about performance issues, of course, if there are any)? If there are other reasons, they are welcome.
|
You are experiencing the subtleties of the Command Substitution.
The call
doSomething
is a simple function call. It executes the function, pretty much as if you had copy-and-pasted the commands of the function to the place where you call it. Hence, it overwrites the variable number with the new value 8.
The call
$(doSomething)
on the other hand is a command substitution. It is meant to execute the function and return whatever the function printed to stdout. It is usually not used "standalone", but in variable assignments, e.g.,
os_type=$(uname)
That will execute the command uname, which on a Linux system would print Linux to the console, and store its result to the shell variable os_type. Therefore, it doesn't make sense to use a command substitution with a command or function that doesn't output anything, like your doSomething. Indeed, since the substitution $(doSomething) is basically a placeholder for the output of doSomething, the only reason why you don't get a script error there is that your function doesn't output anything. Had you stated, e.g.,
$(uname)
instead of
$(doSomething)
your shell would have tried to execute the command Linux and generated a
Linux: No such file or directory
error(1).
The key point to understand the effect you observe is that in a command substitution, the command is executed in a subshell, i.e. any changes to variables made are not backpropagated to the shell where you execute the main script. Therefore, while internally it runs the commands of doSomething and sets a variable number to 8, it does so in its own shell process that has nothing to do with the shell process that runs your script (save for the fact that its stdout is being retrieved), and therefore cannot modify the variable number you used in the main script.
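That isolation is easy to observe directly (a minimal sketch of the two calls from the question):

```shell
number=7
doSomething() { number=8; }

doSomething        # plain function call: runs in the current shell
echo "$number"     # prints 8

number=7
$(doSomething)     # command substitution: the function runs in a subshell
echo "$number"     # prints 7 -- the assignment never left the subshell
```

The substitution expands to nothing here only because doSomething produces no output; with any output, the shell would try to execute it as a command.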
For further reading, you may want to look at
What is command substitution in a shell?
Is $() a subshell?
here on this site, or
Greg's Wiki on Command Substitution
The Bash Manual on Command Substitution
for more overview.
(1) On the other hand, this means you can use a command substitution to execute a command whose name you do not know at the time of writing the script, but that you can find out by executing another command you do know.
| Script function call: function vs $(function) |
1,554,126,051,000 |
I am presently writing a Bash function to convert all the man pages listed by equery files <PACKAGE> | grep /usr/share/man/man (if you are unfamiliar, equery is a tool used on Gentoo-based systems that is from the app-portage/gentoolkit package) into HTML files. One bit of information I need in order to do this is how I can remove everything but the man page's name, without its file extension, from its full path. I realize this phrasing may be confusing, so I will explain what I mean with an example. Running equery files sys-apps/portage | grep /usr/share/man/man gives the output:
/usr/share/man/man1
/usr/share/man/man1/dispatch-conf.1.bz2
/usr/share/man/man1/ebuild.1.bz2
/usr/share/man/man1/egencache.1.bz2
/usr/share/man/man1/emaint.1.bz2
/usr/share/man/man1/emerge.1.bz2
/usr/share/man/man1/emirrordist.1.bz2
/usr/share/man/man1/env-update.1.bz2
/usr/share/man/man1/etc-update.1.bz2
/usr/share/man/man1/fixpackages.1.bz2
/usr/share/man/man1/quickpkg.1.bz2
/usr/share/man/man1/repoman.1.bz2
/usr/share/man/man5
/usr/share/man/man5/color.map.5.bz2
/usr/share/man/man5/ebuild.5.bz2
/usr/share/man/man5/make.conf.5.bz2
/usr/share/man/man5/portage.5.bz2
/usr/share/man/man5/xpak.5.bz2
Out of this output, say I take the final line /usr/share/man/man5/xpak.5.bz2 (which, in the wording I used previously, is this man page's full path) for the purpose of my example. Then what I would want to extract from it, within a Bash script, is xpak.5 (which is its file name, without its extension). How would I do this? I am presently using this Bash function:
function manhtmlp {
for i in `equery files "$1" | grep /usr/share/man/man`
do
bzcat $i | mandoc -Thtml > $HOME/GitHub/fusion809.github.io/man/${i}.html
sudo chmod 777 -R $HOME/GitHub/fusion809.github.io/man/${i}.html
done
}
on the fifth and sixth lines I use the notation ${i} to indicate where I would like to be able to prune the man page's full path for its file name (without its extension). The user-supplied input (denoted by $1 in this function) denotes the name of a package, including its category (e.g., it would equal sys-apps/portage for the Portage package manager).
EDIT: Why this question is distinct from Stripping directory paths to get file names.
This previous question's answers while similar to what I would like do not tell one how to strip file extensions away from file names, only the rest of their path. So in the example of /usr/share/man/man5/xpak.5.bz2 the answer to the aforementioned question would provide one a way to get xpak.5.bz2 out of this full file path, but not xpak.5.
|
Using parameter expansion:
$ line="/usr/share/man/man5/xpak.5.bz2"
$ printf "%s\n" "${line##*/}"
xpak.5.bz2
$ file="${line##*/}"
$ printf "%s\n" "${file%.*}"
xpak.5
In Zsh, you can do nested parameter expansions:
$ printf "%s\n" "${${line##*/}%.*}"
xpak.5
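Wrapped into a small helper (a sketch; the function name man_basename is just an illustration, not part of the question's code), this drops straight into the loop:

```shell
# strip directory and extension: /usr/share/man/man5/xpak.5.bz2 -> xpak.5
man_basename() {
    local file="${1##*/}"    # remove everything up to the last /
    printf '%s\n' "${file%.*}"   # remove the last .suffix
}

man_basename /usr/share/man/man5/xpak.5.bz2   # prints: xpak.5
```

Inside manhtmlp you could then redirect to `.../man/$(man_basename "$i").html` instead of `.../man/${i}.html`.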
| How do I remove all but the file name (with no extension) from a full file path? [duplicate] |
1,554,126,051,000 |
I'm writing a function that will make a REST API calls which could be either GET, PUT, DELETE, POST, etc.
I would like to feed this method to the function as a parameter and add it to the options array for that single function call. Is this possible?
Currently I am solving this by creating a separate local array but would prefer to only use the single options array.
#!/bin/bash
options=(
--user me:some-token
-H "Accept: application/json"
)
some_func () {
local urn=$1
shift
local func_opts=("${options[@]}" "$@")
printf '%s\n' "${func_opts[@]}"
}
# This should return all options including -X GET
some_func /test -X GET
# This should return only the original options
printf '%s\n' "${options[@]}"
I could also use a temporary array to store the contents of options, add the new options, and then reset it before the function ends, but I don't think that is a particularly clean method either.
|
One option would be to explicitly use a subshell for the function, then override its local copy of the array, knowing that once the subshell exits, the original variable is unchanged:
# a subshell in () instead of the common {}, in order to munge a local copy of "options"
some_func () (
local urn=$1
shift
options+=("$@")
printf '%s\n' "${options[@]}"
)
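Running it (a sketch reusing the arrays from the question) shows the caller's array is untouched once the subshell exits:

```shell
options=(
    --user me:some-token
    -H "Accept: application/json"
)

some_func () (
    local urn=$1
    shift
    options+=("$@")            # modifies only the subshell's copy
    printf '%s\n' "${options[@]}"
)

some_func /test -X GET                 # prints the 4 original items plus -X and GET
echo "still ${#options[@]} elements"   # the original array still has 4 elements
```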
| Add to array only within scope of function |
1,554,126,051,000 |
I want to define a function, and call that function every n seconds. As an example:
function h
echo hello
end
Calling h works:
david@f5 ~> h
hello
But when using watch, it doesn't...
watch -n 60 "h"
...and I get:
Every 60.0s: h f5: Wed Oct 10 21:04:15 2018
sh: 1: h: not found
How can I run watch in fish, with the function I've just defined?
|
Another way would be to save the function, then ask watch to invoke fish:
bash$ fish
fish$ function h
echo hello
end
fish$ funcsave h
fish-or-bash$ watch -n 60 "fish -c h"
funcsave saves the named function definition into a file under ~/.config/fish/functions/, so ~/.config/fish/functions/h.fish in the above case.
| Define function in fish, use it with watch |
1,554,126,051,000 |
I have a function defined in my .bashrc that I would like to bypass:
function func() {
// func
}
export -f func
When I run env -i func I can access the func command without the function in the way, but if I try "func" or \func then I don't have any luck.
I read on another post that \ should work to bypass bash functions, is this true? If so, is there any reason that I am not able to use it in this case?
|
The official way to prevent function definitions from being used by the shell is to call:
command func
See: http://pubs.opengroup.org/onlinepubs/9699919799/utilities/command.html
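As for `\func`: a leading backslash only bypasses aliases, not functions, which is why it had no effect here. A quick demonstration of `command` bypassing a function (here we deliberately shadow a builtin):

```shell
echo() { printf 'intercepted\n'; }

echo hello           # runs the function, prints: intercepted
command echo hello   # bypasses it, prints: hello

unset -f echo        # remove the override again
```

Note that `command` finds builtins and external commands, so for your case `command func` only helps if a real func executable exists in $PATH.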
| Does \ work for escaping functions? |
1,554,126,051,000 |
I have this code that does work:
get_parameter ()
{
echo "$query" | sed -n 's/^.*name=\([^&]*\).*$/\1/p' | sed "s/%20/ /g"
}
But I want to replace the "name" with the parameter that I pass to get_parameter
get_parameter ()
{
echo "$query" | sed -n 's/^.*$1=\([^&]*\).*$/\1/p' | sed "s/%20/ /g"
}
NAME=$( get_parameter name )
This however, doesn't work. Where am I wrong?
|
Quoting: In short, variables are not replaced with their values inside 'single-quoted' strings (aka. "variable substitution"). You need to use any one of "double quotes", $'dollar quotes', or
<<EOF
here documents
EOF
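Applied to the function from the question, switching the sed script to double quotes lets $1 expand while the backslashes before ( and ) survive (the sample query below is just for illustration):

```shell
get_parameter ()
{
    echo "$query" | sed -n "s/^.*$1=\([^&]*\).*\$/\1/p" | sed "s/%20/ /g"
}

query='name=John%20Doe&age=42'   # hypothetical sample data
get_parameter name   # prints: John Doe
get_parameter age    # prints: 42
```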
| How to pass a string parameter on bash function? |
1,554,126,051,000 |
I was trying to create a function that loops over inputs and executes a command - regardless of how they are delimited.
function loop {
# Args
# 1: Command
# 2: Inputs
for input in "$2" ; do
$1 $input
done
}
declare -a arr=("1" "2" "3")
$ loop echo "$arr[@]"
1
$ loop echo 1 2 3
1
$ loop echo $arr
1
However as per this answer, for .. in .. works for arrays:
for item in "${arr[@]}" ; do
echo "$item"
done
It also works for space separated values:
for item in 1 2 3 ; do
echo "$item"
done
In a nutshell, how do I get the effect of "${arr[@]}" and 1 2 3 while passing it an argument.
Also would it be possible to extend this notion of looping to any kind of delimited items for example \n separated contents like a file? In Python we have a concept of iterators, is there something similar in bash?
|
You have not called the array properly. $arr will only expand to the first element in the array and $arr[@] will expand to the first element with the literal string [@] appended to it.
To call all elements of an array use: "${arr[@]}"
The other issue you have is that $2 only contains the second positional parameter, where you are trying to iterate through the 3rd, 4th, 5th, etc. They will all be stored in $@.
To accomplish your goal you could do something like:
function loop {
local command=$1
shift
for i in "$@"; do
"$command" "$i"
done
}
This will set command to the first positional parameter and then shift so that $@ can be used to loop through the remaining ones. Then you just need to call the array properly:
$ declare -a arr=("element1" "element2" "element3")
$ loop echo "${arr[@]}"
element1
element2
element3
$ loop printf 'hello ' 'world\n'
hello world
$ loop touch file1 file2 file3
$ ls
file1 file2 file3
If you want this function to be able to accept various delimiters you could do something like:
function loop {
local command=$1
local delim=$2
shift 2
set -- $(tr "$delim" ' ' <<<"$@")
for i in "$@"; do
"$command" "$i"
done
}
This means you have to specify what delimiter is being used via the second parameter though, like:
$ loop echo '|' 'one|two|three'
one
two
three
$ loop echo '\n' "$(printf '%s\n' 'one' 'two' 'three')"
one
two
three
However, this has some bugs: if you specify a custom delimiter, the input will still also be split on whitespace.
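For newline-delimited input such as a file, the usual bash idiom, and the closest thing to an iterator, is a while read loop; unlike word splitting it preserves whitespace inside each item:

```shell
printf '%s\n' 'a b' 'c d' > "/tmp/lines_$$"

while IFS= read -r line; do
    echo "got: $line"        # each whole line, spaces intact
done < "/tmp/lines_$$"

rm -f "/tmp/lines_$$"
```

This prints `got: a b` and `got: c d`, whereas a for loop over an unquoted expansion would split each line into separate words.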
| Generic function to loop over inputs and execute a command in bash? |
1,554,126,051,000 |
Are there any relevant standards that dictate what an implementation of sh must do with an empty function?
The following snippet defines a function with zero statements
a() {
}
The subshell version appears to be treated identically
a() (
)
ash and zsh accept either construction as a function that does nothing and has an exit status of zero.
ksh (ksh93) and bash both reject this function as a syntax error
$ a() {
> }
ksh: syntax error: `}' unexpected
and with bash
bash-4.4$ a() {
> }
bash: syntax error near unexpected token `}'
|
No, a function may not have an empty body between braces in a conforming application.
POSIX defines a function definition command as:
fname ( ) compound-command [io-redirect ...]
where all those words are placeholders for things defined elsewhere in the specification. compound-command is the body of the function.
A compound command is defined as one of several items, including loops, conditionals, and case statements, but most relevantly here as a grouping command, which is defined in two cases:
( compound-list )
Execute compound-list in a subshell environment; see Shell Execution Environment. Variable assignments and built-in commands that affect the environment shall not remain in effect after the list finishes. (... passage about arithmetic expansion elided ...)
{ compound-list ; }
Execute compound-list in the current process environment. The semicolon shown here is an example of a control operator delimiting the } reserved word. Other delimiters are possible, as shown in Shell Grammar; a <newline> is frequently used.
An empty {\n} body would be valid if compound-list could be empty.
The Shell Grammar in turn defines the parsing rules of the shell command language, including compound_list:
compound_list : linebreak term
| linebreak term separator
This means a compound list is either linebreak followed by a term, or linebreak followed by a term and a separator. separator is ; or &. linebreak is a possibly-empty sequence of newlines. So this can be empty if term can be empty.
term is:
term : term separator and_or
| and_or
and and_or:
and_or : pipeline
| and_or AND_IF linebreak pipeline
| and_or OR_IF linebreak pipeline
The last two lines cover && and ||. pipeline is a non-empty sequence of commands separated by | characters. command is a simple command, a compound command, or a function definition. So term, and command, can be empty if either simple or compound commands can be empty.
A simple command always includes one of cmd_name, cmd_word, or cmd_prefix. cmd_prefix is either a redirect or an assignment, optionally attached to another prefix. The other two both break down to WORD, a token in the grammar that is a non-empty sequence of word characters. So a simple command is never empty.
We've looked at compound commands already, but let's circle back from the perspective of the grammar this time. A compound command is one of a brace group, subshell, for, while, or until loop, and if, or a case. All of these contain a fixed word (like "for") or a ( or { at minimum. So a compound command is never empty.
Thus a command is never empty, so pipeline is never empty, nor and_or, term, or compound_list. That means that
{
}
is not permitted, and so the function definition
a() {
}
is not valid either.
All of the above applies for a conforming, portable shell script. zsh and ash are free to extend their implementations to handle otherwise-invalid scripts however they want, and the implementation they've chosen seems sensible and convenient. Bash, ksh, dash, and others have taken the more minimalistic route of implementing what was required. All of these are conformant choices.
A portable script will always need to provide a non-empty function body, but a script targeting (say) zsh alone does not.
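The portable workaround is the no-op utility `:` (colon), which satisfies the grammar and always succeeds:

```shell
a() { :; }   # ':' does nothing and exits 0, so the body is non-empty

a
echo "$?"    # prints: 0
```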
| Can a function in sh have zero statements? |
1,554,126,051,000 |
*nix commands (and functions?) have a number with them, like fsck(8), killall(1), etc.
What does the number mean?
|
The character explicitly specifies the section that the manual page is part of. On most Unices, the section definitions are as follows:
1. General/user commands
2. System calls
3. Library functions
4. Special files and drivers
5. File formats
6. Games and screensavers
7. Miscellanea and conventions
8. System administration commands, privileged commands, and daemons
9. Kernel routines
SysV has a similar, but not identical structure:
1. General commands
1M. System administration commands and daemons
2. System calls
3. C library functions
4. File formats and conventions
5. Miscellanea
6. Games and screensavers
7. Special files and drivers
On some systems, the following sections also exist:
0 - C library headers
L - Math library functions
N - TCL functions/keywords
X - X-Windows documentation
P - POSIX specifications
| What does the "(8)" in fsck(8) mean? [duplicate] |
1,554,126,051,000 |
I try to understand the seek(2) function from Unix version 6.
This example:
seek(0,0,2)
So the first argument is the file descriptor. And 0 would be the standard input.
The second argument is the offset, which is 0.
And the third argument tells us according to man page "the pointer is set to the size of the file plus offset."
But why would you do this? Why would you point after the file?
The line is from the source code.
|
The seek(0, 0, 2) will skip over all data that is buffered up for file descriptor 0. So after this command, the next read from that filedescriptor will not read anything that was buffered.
I think if you examine the code and understand what the actual purpose is, you'll understand that even though file descriptor 0 is normally stdin, this program is really only useful if it is part of a script that is read through that file descriptor.
For example, take a look at the following script:
goto
echo "hello"
The goto without any argument is going to trigger the seek.
Without the seek(0, 0, 2) after the goto command exits, the script would still run the echo "hello" command because the caller of the goto command is simply going to read the next command from the script.
| seek function in unix |
1,554,126,051,000 |
I'm trying to set the fish history pager to be bat -l fish for syntax highlighting.
(i.e. set the PAGER environment variable to bat -l fish just for the history command).
I tried:
# 1:
alias history "PAGER='bat -l fish' history"
# results in "The function “history” calls itself immediately, which would result in an infinite loop"
# 2:
alias history "PAGER='bat -l fish' \history"
# results in the same.
# 3:
alias _old_history history
alias history "PAGER='bat -l fish' _old_history"
# results in (line 1): The call stack limit has been exceeded
I'm aware that abbr works in this case, but this changes my history command, and this is not what I want.
|
Two things are happening here:
fish's alias actually creates a function.
fish ships with a default history function already.
So when you write
alias history "PAGER='bat -l fish' history"
what you actually have is the recursive function
function history
PAGER='bat -l fish' history $argv
end
Some solutions:
use a different name for your alias
alias hist 'PAGER="bat -l fish" history'
Don't alias _old_history hist, but copy it instead
functions --copy history _old_history
alias history 'PAGER="bat -l fish" _old_history'
If you don't care to keep fish's function, invoke the builtin history command in your own function
function history
builtin history $argv | bat -l fish
end
why doesn't the builtin history support pager?
I assume the fish designers didn't think that was a core part of the history functionality. I assume they put the user-facing stuff in a function that users can override.
Here's the relevant snippet from the default history function:
case search # search the interactive command history
test -z "$search_mode"
and set search_mode --contains
if isatty stdout
set -l pager (__fish_anypager)
and isatty stdout
or set pager cat
# If the user hasn't preconfigured less with the $LESS environment variable,
# we do so to have it behave like cat if output fits on one screen.
if not set -qx LESS
set -x LESS --quit-if-one-screen
# Also set --no-init for less < v530, see #8157.
if type -q less; and test (less --version | string match -r 'less (\d+)')[2] -lt 530 2>/dev/null
set -x LESS $LESS --no-init
end
end
not set -qx LV # ask the pager lv not to strip colors
and set -x LV -c
builtin history search $search_mode $show_time $max_count $_flag_case_sensitive $_flag_reverse $_flag_null -- $argv | $pager
else
builtin history search $search_mode $show_time $max_count $_flag_case_sensitive $_flag_reverse $_flag_null -- $argv
end
| how to alias the `history` function in fish shell |
1,554,126,051,000 |
#!/bin/sh
execute_cmd()
{
$($@)
}
execute_cmd export MY_VAR=my_val
echo ${MY_VAR}
Since $() executes in a sub-shell, $MY_VAR isn't set properly in the shell the script is running.
My question, how can I pass the export command to a function and execute it in the current shell the script is running in?
|
You should check Gilles' answer for all the details. In short, you can use eval instead of $():
execute_cmd()
{
eval "$@"
}
From bash manual:
eval [arguments]
The arguments are concatenated together into a single command, which is then read and executed, and its exit status returned as the exit status of eval. If there are no arguments or only empty arguments, the return status is zero.
Other shells usually have eval built-in with similar semantics.
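With that change, the original script behaves as intended (a minimal sketch):

```shell
execute_cmd()
{
    eval "$@"    # runs in the current shell, not a subshell
}

execute_cmd export MY_VAR=my_val
echo "${MY_VAR}"    # prints: my_val
```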
| passing export command to function in shell script |
1,554,126,051,000 |
A typical way for a shell function to "return" its result is to assign it to some global variable.
Is there any convention/best-practice on the name of this variable?
|
REPLY is commonly used for that. It's used by read and select in bash, ksh and zsh at least.
In the zsh documentation:
REPLY
This parameter is reserved by convention to pass string values
between shell scripts and shell builtins in situations where a
function call or redirection are impossible or undesirable. The
read builtin and the select complex command may set REPLY, and
filename generation both sets and examines its value when
evaluating certain expressions. Some modules also employ REPLY for
similar purposes.
reply
As REPLY, but for array values rather than strings.
Beware of potential implications when changing the type of that variable though.
Another approach would be to pass the name of the variable as argument:
get_date() {
local date
date=$(date)
eval "$1=\$date"
}
That doesn't work if calling get_date with a variable called date. ksh93 addresses that by introducing namerefs which are a way to reference variables in the scope of the caller. bash recently added nameref support, but they reference variables in the same scope (by name only), so are not useful for that.
function get_date {
typeset -n var=$1
var=$(date)
}
In ksh93, that works even if called as get_date var. There are ways to make it work in bash, mksh and yash by exploiting a bug of unset, but I wouldn't go there as the bug may be fixed in the future.
Another way would be to not use local variables in your functions that set arbitrary variables other than the $1, $2... position parameters. That can make for pretty unreadable code though. See for instance that implementation of a getopts_long shell function.
| Are there any conventions for the name of global variable that holds function's result? |
1,554,126,051,000 |
I have created lots of directories, and I would like to make my life lazy and auto-cd into the directory that I just made, via an option -g, so that mkdir -g foo gives this result:
The terminal would be like this:
User:/$ mkdir -g foo
User:/foo/$
I have looked at this page but with no success.
Is there a one-liner that allows me to create a directory and move into it at the same time?
I have tried this command in my .bash_profile, but it just overrides the command entirely (and I realized later that it is an infinite loop):
mkdir() {
if "$1" -eq "-g"
mkdir "$2"
cd "$2"
else
## do the regular code
fi
}
The other thing is that I don't want to make more aliases because I will eventually forget all of the aliases that I have.
So no, I don't want this:
mkcd() {
if "$1" -eq "-g"
mkdir "$2"
cd "$2"
else
## do the regular code
fi
}
I know that I most likely don't have the option thingy right, please correct me on that, but any help would be awesome!
|
Use the command builtin to call an external command, bypassing any function of the same name. Also:
What follows if is an ordinary command. To perform a string comparison, invoke the test built-in, or the [ builtin with the same syntax plus a final ], or the [[ … ]] construct which has a more relaxed syntax. See using single or double bracket - bash for more information.
The -eq operator is for comparing integers. Use = to compare strings.
I added some limited support for passing multiple arguments (including options such as -p). Below I call cd only if mkdir succeeded, and I call it on the last argument passed to the function.
-g will only be recognized as the very first argument. Parsing options to locate and remove it from whatever position it's in is possible, but harder.
mkdir () {
local do_cd=
if [ "$1" = "-g" ]; then
do_cd=1
shift
fi
command mkdir "$@" &&
if [ -n "$do_cd" ]; then
eval "cd \"\${$#}\""
fi
}
I don't recommend defining your custom options. They won't be shown in --help or man pages. They aren't more memorable than defining a command with a different name: either you don't remember that you have them, and there's no advantage compared to a command with a different name, or you don't remember that they're custom, and then you can find this out immediately with a custom name by asking the shell whether it's a built-in command or function (type mkcd), which is not the case with a custom option. There are use cases for overriding a standard command, but this is not one.
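For comparison, the conventional separately-named helper is much simpler (a common idiom, shown here as a sketch rather than something from your dotfiles):

```shell
mkcd () {
    # -p creates parents and tolerates an existing directory;
    # -- guards against names starting with a dash
    mkdir -p -- "$1" && cd -P -- "$1"
}
```

Then `mkcd foo` creates foo (if needed) and changes into it, and the real mkdir stays untouched.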
| how to add a custom option to the mkdir command |
1,554,126,051,000 |
If in zsh, I type set and see precmd_functions=(_precmd_function_dostuff _precmd_function_domore).
Where are _precmd_function_dostuff and _precmd_function_domore defined (i.e. are they defined in a file? which file?)?
I can type functions to see the definitions of _precmd_function_dostuff and _precmd_function_domore, but this doesn't tell me where they are defined.
|
In zsh 5.3 or above,
type _precmd_function_domore
should return something like
_precmd_function_domore is a shell function from /usr/local/etc/zshrc.d/80-PetaLinux
With zsh 5.4 or above, you can also use:
echo $functions_source[_precmd_function_domore]
When you run zsh with the xtrace option (like with zsh -x), it writes debugging information on stderr that shows every command it runs (not function definitions though). You can modify the $PS4 variable (the prompt variable used for the xtrace output, see info zsh PS4) so it gives you more information like for each command that it runs, from which file and on each line the command was read from.
PS4='+%x:%I> ' zsh -x 2> >(grep precmd_func)
Would run a new zsh interactive shell instance, with stderr filtered by grep to show the lines that contain precmd_func.
Or with zsh, you can invoke that _precmd_function_domore function under xtrace and with %x:%I in $PS4 to see where the function definition was read from:
$ grep -n precmd ~/.zshrc
192:precmd_foo() echo foo
$ (PS4='+%x:%I> '; set -x; precmd_foo)
+zsh:2> precmd_foo
+/home/stephane/.zshrc:194> echo foo
foo
(note the off-by-two line number here though).
| In zsh, where are precmd_functions defined? |
1,554,126,051,000 |
The following works great on the command-line:
$ ffmpeg -i input.m4a -metadata 'title=Spaces and $pecial char'\''s' output.m4a
How do I parameterize this command and use it in a script/function? I would like to add multiple metadata tags like this:
$ set-tags.sh -metadata 'tag1=a b c' -metadata 'tag2=1 2 3'
update:
I simplified my question a little too much. I actually want to call a script that calls a script with with the parameterized command in it.
This is my exact use case:
This function converts files to audio-book format (defined in .profile):
# snippet of .profile
convert_to_m4b () {
FILE="$1"
BASENAME=${FILE%.*}; shift
ffmpeg -i "$FILE" -vn -ac 1 -ar 22050 -b:a 32k "$@" tmp.m4a &&
mv tmp.m4a "$BASENAME.m4b"
}; export -f convert_to_m4b
Function convert_to_m4b is called from download-and-convert.sh:
#!/bin/sh
MP3_URL=$1; shift
FILENAME=$1; shift
if [ ! -f "${FILENAME}.mp3" ]; then
curl --location --output "${FILENAME}.mp3" "$MP3_URL"
fi
convert_to_m4b "${FILENAME}.mp3" "$@"
Download-and-convert.sh is called from process-all.sh:
#!/bin/sh
download-and-convert.sh http://1.mp3 'file 1' -metadata 'title=title 1' -metadata 'album=album 1'
download-and-convert.sh http://2.mp3 'file 2' -metadata 'title=title 2' -metadata 'album=album 2'
...
...
download-and-convert.sh http://3.mp3 'file N' -metadata 'title=title N' -metadata 'album=album N'
I get this error from ffmpeg:
[NULL @ 00000000028fafa0] Unable to find a suitable output format for ''@''
'@': Invalid argument
"$@" works if I inline convert_to_m4b in download-and-convert.sh instead of calling the function.
The following does not work because the quotes are lost, causing arguments with spaces to be incorrectly split up:
#!/bin/sh
ffmpeg -i input.m4a $@ output.m4a
I have tried various methods of quoting the $@, but this ends up quoting '-metadata' as well, so the command line argument is not properly recognized.
I guess I would like to only surround each argument with quotes if that argument was quoted to begin with. This seems difficult to do because bash strips the quotes before passing arguments to the script/function.
Or is there a better method of relaying the -metadata arguments? (like environment variables or files)
|
"$@" does exactly what you want provided that you use it consistently. Here's a little experiment for you:
script1.sh:
#! /bin/sh
./script2.sh "$@"
script2.sh:
#! /bin/sh
./script3.sh "$@"
script3.sh:
#! /bin/sh
printf '|%s|\n' "$@"
With this the arguments stay unmolested all the way down:
$ ./script1.sh -i input.m4a -metadata "title=Spaces and \$pecial char's" output.m4a
|-i|
|input.m4a|
|-metadata|
|title=Spaces and $pecial char's|
|output.m4a|
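The underlying mechanics can be seen without ffmpeg at all; a minimal sketch (the argument values are illustrative):

```shell
#!/bin/sh
# Quoted "$@" preserves each original argument as one word; unquoted $@
# re-splits on whitespace, which is what mangles 'title=...' metadata.
set -- -metadata 'title=a b c'
printf '|%s|\n' "$@"   # two arguments, intact
printf '|%s|\n' $@     # four arguments after word splitting
```

This is why every intermediate script must use the quoted form consistently: one unquoted `$@` anywhere in the chain re-splits the arguments.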
| Passing arguments with spaces and quotes to a script (without quoting everything) |
1,554,126,051,000 |
I know I can write bash scripts like:
foo() {
echo $1;
}
but can I define a function that writes:
foo(string) {
echo $string;
}
I just can't find my way out of this.
|
The only available form is the first one; see the manual for details.
To use named parameters, the traditional technique is to assign them at the start of your function:
foo() {
string=$1
# ...
echo "${string}"
}
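A sketch combining this with local, so the name doesn't leak into the caller's scope (bash; the function and variable names are illustrative):

```shell
#!/bin/bash
# Simulate a named parameter by copying $1 into a local variable.
greet() {
  local name=$1        # 'local' keeps 'name' out of the caller's scope
  echo "Hello, ${name}"
}
greet world            # prints: Hello, world
```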
| Bash function with arguments |
1,554,126,051,000 |
I am having trouble with what should be a simple bash script.
I have a bash script that works perfectly:
function convert_to () {
x_max=2038
y_max=1146
x_marg=100
y_marg=30
x_grid=150
y_grid=150
if (("$x_pos" > "($x_marg+($x_grid/2))")); then
x_pos=$((x_pos-x_marg))
x_mod=$((x_pos%x_grid))
x_pos=$((x_pos/x_grid))
x_pos=$((x_pos*x_grid))
fi
}
However, I want to change the script where I pass 4 values to the function as arguments:
function convert_to () {
pos="$1"
marg="$2"
grid="$3"
max="$4"
# I verify that the inputs have arrived with this display
zenity --info --title "Info" --text "inputs: pos: $pos marg: $marg grid: $grid max: $max"
if (("$pos" > "($marg+($grid/2))")); then
pos=$((pos-marg))
mod=$((pos%grid))
pos=$((pos/grid))
pos=$((pos*grid))
fi
}
Where I would then call the function as follows:
x_pos="$(convert_coordinates $x_pos, $x_marg, $x_grid, $x_max)"
Y_pos="$(convert_coordinates $y_pos, $y_marg, $y_grid, $y_max)"
However, the new script always fails with syntax errors: operand expected (error token is ",").
I've also tried many variations:
pos=$[[ $pos - $marg ]] ...... which results in syntax error: operand expected (error token is "[ 142, - 100, ]")
pos=[[ $pos - $marg ]] .......... fails with command not found
pos=$[[ "$pos" - "$marg" ]] ..... fails with command not found
pos=$(("$pos"-"$marg")) ......... syntax error: operand expected (error token is ""142,"-"100,"")
The only difference between the working script and non-working one is the fact that I am passing the arguments in the second script ... So, I tried setting the argument values to constant values within the function (defeating my purpose of passing arguments and making the script worthless) ... But now the calculations within the function work without error.
So I'm at a loss for what I am doing incorrectly ... I want to be able to pass arguments to the function and then do mathematical calculations using the passed values.
|
In bash, the argument separators are spaces, so :
instead of :
x_pos="$(convert_coordinates $x_pos, $x_marg, $x_grid, $x_max)"
do
x_pos="$(convert_coordinates $x_pos $x_marg $x_grid $x_max)"
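The commas were being passed as part of the arguments, which is what broke the arithmetic; a quick way to confirm (values are illustrative):

```shell
#!/bin/sh
# With commas, the function receives "142," and "100," as its arguments --
# arithmetic on these fails with: operand expected (error token is ",")
show() { printf '|%s|\n' "$@"; }
show 142, 100,   # the commas travel along with the values
show 142 100     # clean numeric arguments
```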
| functions arguments |
1,554,126,051,000 |
How can I use the integer value returned by a function in shell script that takes some arguments as input?
I am using the following code:
fun()
{
echo "hello ${1}"
return 1
}
a= fun 2
echo $a
I am not sure how I should call this function. I tried the methods below, but none of them seems to work:
a= fun 2
a=`fun 2`
a=${fun 2}
|
The exit code is contained in $?:
fun 2
a=$?
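Note the distinction between the function's stdout (captured with `$(...)`) and its exit status (read from `$?`); a sketch using the function body from the question:

```shell
#!/bin/sh
fun() {
  echo "hello $1"   # goes to stdout; capture with $(...)
  return 1          # becomes the exit status, read from $?
}
out=$(fun 2)        # out is "hello 2"
status=$?           # status is 1
printf '%s\n%s\n' "$out" "$status"
```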
| How to call a shell function |
1,347,105,879,000 |
I just discovered this useful bit of code on this useful-looking website.
#!/bin/sh
exec tclsh "$0" ${1+"$@"}
proc main {} {
set lines [lrange [split [read stdin] \n] 0 end-1]
set count [llength $lines]
for {set idx_1 0} {$idx_1 < $count} {incr idx_1} {
set idx_2 [expr {int($count * rand())}]
set temp [lindex $lines $idx_1]
lset lines $idx_1 [lindex $lines $idx_2]
lset lines $idx_2 $temp
}
puts [join $lines \n]
}
main
Unfortunately, I don't like scripts. If I can, I make bash functions instead. (I've even gone so far as to completely obfuscate Python scripts in order to fit them into my ~/.bashrc unobtrusively, viz:)
dna-ify () {
python -c "exec'eJxdkUFrhDAQhe/5FdNsIQpu9rIspSBtLz330N4EiTq6AY0hiaW7v75j1F1aLwlv3vdmMu4eDpN3h0qbA5pvsJdwHg3bQT022nT51+f7/omx1o0DlGU7hclhWYIe7OgCWKdNINXUQRO1qsp1VjmPGfiLz6BSHk/HDNAsmZ6xWHaQ36zyzXXTgCZ8xEqSrhapmqZUay0Rre7RqAFFBoZUn4sXkbL5RlkrEY+Z8ZSi27mFlxv4zIA+H2jujpDRokn+GFLpUDVEYk+sGcP8BukDDS61VyFckvRfyN1wQ/3aaBsprumMvaUqu97JeJFxMZiIa68res61uhmW1cnqdFw9K0spMRMSvvww2NdQcPzBuhA8gy0iA14I2WBkC7HEFSK9S3NPEgoOj68EerQ55yn7BbL1snM='.decode('base64').decode('zlib')" $@
}
So, is there any (and, as the above sample should indicate, I do mean any) way I can do that with this bit of code?
|
Tell tclsh to read the script from a different file descriptor, and use a here document to pass the script.
shuffle () {
tclsh /dev/fd/3 "$@" 3<<'EOF'
proc main {} {
…
}
main
EOF
}
| Is there any way I can fit this into my ~/.bashrc as a function? |
1,347,105,879,000 |
I have the following bash function:
lscf() {
while getopts f:d: opt ; do
case $opt in
f) file="$OPTARG" ;;
d) days="$OPTARG" ;;
esac
done
echo file is $file
echo days is $days
}
Running this with arguments does not output any values. Only after running the function without arguments, and then again with arguments does it output the correct values:
-bash-4.1$ lscf -d 10 -f file.txt
file is
days is
-bash-4.1$ lscf
file is
days is
-bash-4.1$ lscf -d 10 -f file.txt
file is file.txt
days is 10
Am I missing something?
|
Though I can't reproduce the initial run of the function that you have in your question, you should reset OPTIND to 1 in your function to be able to process the function's command line in repeated invocations of it.
From the bash manual:
OPTIND is initialized to
1 each time the shell or a shell script is invoked. When an
option requires an argument, getopts places that argument into
the variable OPTARG. The shell does not reset OPTIND
automatically; it must be manually reset between multiple calls
to getopts within the same shell invocation if a new set of
parameters is to be used.
From the POSIX standard:
If the application sets OPTIND to the value 1, a new set of parameters can be used: either the current positional parameters or new arg values. Any other attempt to invoke getopts multiple times in a single shell execution environment with parameters (positional parameters or arg operands) that are not the same in all invocations, or with an OPTIND value modified to be a value other than 1, produces unspecified results.
The "shell invocation" that the bash manual mentions is the same as the "single execution environment" that the POSIX text mentions, and both refer to your shell script or interactive shell. Within the script or interactive shell, multiple calls to your lscf will invoke getopts in the same environment, and OPTIND will need to be reset to 1 before each such invocation.
Therefore:
lscf() {
OPTIND=1
while getopts f:d: opt ; do
case $opt in
f) file="$OPTARG" ;;
d) days="$OPTARG" ;;
esac
done
echo file is $file
echo days is $days
}
If the variables file and days should not be set in the calling shell's environment, they should be local variables. Also, quote variable expansions and use printf to output variable data:
lscf() {
local file
local days
OPTIND=1
while getopts f:d: opt ; do
case $opt in
f) file="$OPTARG" ;;
d) days="$OPTARG" ;;
esac
done
printf 'file is %s\n' "$file"
printf 'days is %s\n' "$days"
}
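In bash, declaring OPTIND local achieves the same per-call reset while also keeping it out of the caller's environment; a sketch:

```shell
#!/bin/bash
lscf() {
  local OPTIND=1 file= days=   # local OPTIND resets parsing on every call
  while getopts f:d: opt; do
    case $opt in
      f) file=$OPTARG ;;
      d) days=$OPTARG ;;
    esac
  done
  printf 'file is %s, days is %s\n' "$file" "$days"
}
lscf -d 10 -f file.txt    # file is file.txt, days is 10
lscf -d 20 -f other.txt   # file is other.txt, days is 20
```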
| bash function arguments strange behaviour |
1,347,105,879,000 |
Doing some code refactoring, and I realized I don't know if this matters at all:
The function definition is going to be sourced from another file (a sort of library). The function uses certain variables within the function body. Those variables will be set by the time the function is actually called in the script, but it will be much cleaner if I can source the function library at the start of the script; however the variables aren't defined at this point.
Is there any disadvantage to doing this? As far as I know bash won't actually do anything with a function definition until it is called...right? In which case the unset variables used within the function won't matter, since those variables will contain the correct values later, before the function is actually called.
Or will it mess things up to include so-far-unset variables in a function definition?
|
fn(){ printf %s\\n "${v-not set}"; }
v=value; fn; unset v; fn
value
not set
A shell function is a literal string stored in the shell's memory. At define time it is parsed, but is not evaluated for expansions (other than shell aliases) or redirections. These are only evaluated at call time.
In fact, and somewhat related, in this way it is possible to get a function to define its own input with a new temporary file at each invocation.
fn(){ ls -l /dev/fd/0; cat; } <<INFILE
$@
INFILE
fn some args; fn other args
#in dash
lr-x------ 1 mikeserv mikeserv 64 Nov 16 12:50 /dev/fd/0 -> pipe:[4076148]
some args
lr-x------ 1 mikeserv mikeserv 64 Nov 16 12:50 /dev/fd/0 -> pipe:[4077081]
other args
#bash
lr-x------ 1 mikeserv mikeserv 64 Nov 16 12:51 /dev/fd/0 -> /tmp/sh-thd-1036060995 (deleted)
some args
lr-x------ 1 mikeserv mikeserv 64 Nov 16 12:51 /dev/fd/0 -> /tmp/sh-thd-531856742 (deleted)
other args
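The one exception mentioned above — aliases — is expanded at parse time, not call time; a sketch (bash, where expand_aliases must be enabled in non-interactive shells):

```shell
#!/bin/bash
# The alias is expanded when the function definition is parsed, so the
# body below permanently contains 'echo hello'.
shopt -s expand_aliases
alias greet='echo hello'
fn() { greet; }
unalias greet
fn              # still prints "hello": the expansion was baked in
```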
| Is there any danger to using an unset variable in a bash function definition? |
1,347,105,879,000 |
I want to define the function cpfromserver in bash so that when I run
$ cpfromserver xxx yyy zzz
the result is the same as if I had typed
$ scp [email protected]:"/some/location/xxx/xxx.txt /some/location/xxx/xxx.pdf /some/location/yyy/yyy.txt /some/location/yyy/yyy.pdf /some/location/zzz/zzz.txt /some/location/zzz/zzz.pdf" /somewhere/else/
where it works for any number of arguments.
(That is, the function should copy filename.txt and filename.pdf from the directory /some/location/filename/ on the remote.server to the local directory /somewhere/else/ for every filename I specify as an argument to the function. And do it all in a single ssh connection.)
Currently, I have written a function that works for a single argument, and I just loop over it, but this establishes separate ssh connections for each argument, which is undesirable.
My difficulty is that I only know how to use function arguments individually by their position ($1, $2, etc.) — not how to manipulate the whole list.
[Note that I am writing this function as a convenience tool for my own use only, and so I would prioritize my own ease of understanding over handling pathological cases like filenames with quotation marks or linebreaks in them and whatnot. I know that the filenames I will be using this with are well-behaved.]
|
Try this way:
cpfromserver () {
files=''
for x in "$@"
do
files="$files /some/location/$x/$x.txt /some/location/$x/$x.pdf"
done
scp [email protected]:"$files" /somewhere/else/
}
Important caveat from comments: "It's worth noting for posterity that this solution definitely won't work for complicated filenames. If a filename contains a space, or a newline, or quotes, this approach will definitely fail."
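A variant that builds the list as a bash array, which at least keeps each path intact locally (the remote side still word-splits the joined string, per the caveat; the host and paths here are placeholders):

```shell
#!/bin/bash
cpfromserver() {
  local x
  local files=()
  for x in "$@"; do
    files+=("/some/location/$x/$x.txt" "/some/location/$x/$x.pdf")
  done
  # Real use would be: scp user@remote.server:"${files[*]}" /somewhere/else/
  printf '%s\n' "${files[@]}"
}
cpfromserver xxx yyy   # prints the four remote paths, one per line
```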
| Write bash function which operates on list of filenames |
1,347,105,879,000 |
I'm using function like this.
$ find-grep () { find . -type f -name "$1" -print0 | xargs -0 grep "$2" ; }
After I type:
$ find-grep *.c foo
I want to get expanded last command string. In this case:
find . -type f -name "*.c" -print0 | xargs -0 grep "foo"
Is there way to do it easily?
|
A saner version of @slm's:
find-grep() {
cmd=(find . -type f -name "$1" -exec grep "$2" {})
printf '%q ' "${cmd[@]}"
printf '+\n'
"${cmd[@]}" +
}
(no need for pipes or xargs here) Or:
find-grep () (
set -x
find . -type f -name "$1" -exec grep "$2" {} +
)
(note the () instead of {} to start a subshell to limit the scope of set -x. Note that it does not cause more processes to be forked it's just that the fork for the process that will execute find is done earlier).
Remember you need to quote wildcard characters so they are not expanded by the shell:
find-grep '*.c' pattern
If instead, you want it to be pushed to the history, so that you see the expanded command when you press the Up key, you could write it:
find-grep() {
cmd=$(printf '%q ' find . -type f -name "$1" -exec grep "$2" {})+
history -s "$cmd"
eval "$cmd"
}
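The %q trick on its own, separated from find (bash; the sample command is arbitrary):

```shell
#!/bin/bash
# printf %q re-quotes each word so the printed command line can be pasted
# (or eval'ed) back into the shell without the wildcard being expanded.
cmd=(find . -type f -name '*.c')
printf '%q ' "${cmd[@]}"   # prints a safely re-quoted command line
printf '\n'
```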
| How to show last command with expanding function in bash |
1,347,105,879,000 |
I want to write a convenience function which loads all the matlab scripts it's passed on the command line. The syntax would look like
fmatlab myscript1.m myscript2.m ... mystriptN.m
I can easily do some preset number of scripts. For instance, if I give it just a single script:
function fmatlab () {
$MYMATLABPATH/matlab -r "edit "$1"" &
}
Or, for two scripts:
function fmatlab () {
$MYMATLABPATH/matlab -r "edit "$1" "$2"" &
}
and so on. The first opens matlab, then the single script it's passed in the editor and the second opens matlab then both of the two scripts it's passed. Both these cases are checked.
But I can't do a variable number of scripts. For instance, the seemingly obvious extension
function fmatlab () {
$MYMATLABPATH/matlab -r "edit "$@"" &
}
opens only the first script and none of the others. I checked to make sure the command in the first quote set was the same for the two script case:
function fmatlab_echo () {
echo "edit "$1" "$2""
echo "edit "$@""
}
and I get
$ fmatlab_echo script1.m script2.m
edit script1.m script2.m
edit script1.m script2.m
I am probably missing something basic about either $@, & (go to the next line of the terminal), or matlab -r or any combination of the three.
|
Looks like you want:
fmatlab () {
$MYMATLABPATH/matlab -r "edit $*" &
}
When $* is used inside double quotes, it joins all the parameters using a space (by default).
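The difference in one screenful (the argument names are arbitrary):

```shell
#!/bin/sh
# "$*" joins the parameters into ONE word using the first character of IFS
# (a space by default); "$@" keeps them as separate words, so printf reuses
# its format string for the extra word.
show() {
  printf 'star: <%s>\n' "edit $*"   # one word:  "edit a.m b.m"
  printf 'at: <%s>\n'   "edit $@"   # two words: "edit a.m" and "b.m"
}
show a.m b.m
```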
| Writing a bash function to autoload matlab scripts |
1,347,105,879,000 |
> cat b.txt
function first
{
sleep 1
echo $(echo $$)
}
function second
{
openssl enc -aes-256-cbc -k "$(first)"
}
echo nyi | second | second | second
>
> time sh -x b.txt
+ echo nyi
+ second
+ second
+ second
++ first
++ sleep 1
++ first
++ sleep 1
++ first
++ sleep 1
+++ echo 32383
+++ echo 32383
++ echo 32383
++ echo 32383
+ openssl enc -aes-256-cbc -k 32383
+++ echo 32383
+ openssl enc -aes-256-cbc -k 32383
++ echo 32383
+ openssl enc -aes-256-cbc -k 32383
ɚ��2;��<�Vp��H�����F�q�AHO��Sܽd��d4��X��#}
real 0m1.026s
user 0m0.016s
sys 0m0.025s
>
Question: why doesn't this script run for at least 3 seconds?
There is a "sleep 1" in the first function and it should be called 3 times in the second function.
According to the "real 0m1.026s" it seems that the sleep is only executed once. Or if it is parallel (??) then how can I make it linear?
|
The parts of a pipeline are started (close to) simultaneously.
All three invocations of second will start at the same time. The three subshells that this gives rise to will invoke first to expand "$(first)" and the three sleep 1 calls will happen concurrently (you can see in the trace output that they do happen).
It's only the I/O that serializes a pipeline, i.e. one process in the pipeline waiting for input from the previous, or waiting to have its output read by the next.
To have the bits of the pipeline start, run and exit in sequence:
echo nyi | second >out1
second <out1 >out2
second <out2
That is, run them separately and store the intermediate results in files.
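The overlap is easy to time (sleep durations are arbitrary stand-ins for the question's function):

```shell
#!/bin/sh
# All three pipeline stages start together, so the three 1-second sleeps
# overlap and the whole pipeline takes about 1 second, not 3.
t0=$(date +%s)
echo nyi | { sleep 1; cat; } | { sleep 1; cat; } | { sleep 1; cat; } >/dev/null
t1=$(date +%s)
echo "elapsed: about $((t1 - t0))s"
```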
| Function in function will not be called multiple times if requested? |
1,347,105,879,000 |
What is the dash equivalent to bash's:
compgen -A function
which lists the names of the declared functions.
|
As far as I can tell there is no equivalent. dash has a very small number of built-in commands and none of them list the declared functions.
| dash: List declared functions |
1,347,105,879,000 |
Somehow I'm not able to execute the following mapping:
function! s:MySurroundingFunctionIWantToKeep()
let s:Foobar={'foo': 'bar'}
map \42 :echo <sid>Foobar.foo<cr>
endfunction
call s:MySurroundingFunctionIWantToKeep()
I thought it works the same way as it does with a script-local function:
function! s:MySurroundingFunctionIWantToKeep()
function! s:Foobar()
echo 'bar'
endfunction
map \42 :call <sid>Foobar()<cr>
endfunction
call s:MySurroundingFunctionIWantToKeep()
Also freeing s:Foobar from s:MySurroundingFunctionIWantToKeep() doesn't help, like:
let s:Foobar={'foo': 'bar'}
map \42 :echo <sid>Foobar.foo<cr>
Declaring the s:Foobar script-local variable as g:Foobar global variable does the trick.
|
There is no way (that I know of) to directly access script-local variables outside the context of that script; <SID> only works for functions (and only in mappings).
You could provide indirect access through a function, though:
function! s:FoobarHash()
return s:Foobar
endfunction
function! s:MySurroundingFunctionIWantToKeep()
let s:Foobar={'foo': 'bar'}
map \42 :echo <sid>FoobarHash()['foo']<cr>
endfunction
call s:MySurroundingFunctionIWantToKeep()
Depending on how isolated you want to keep the variable, you could make the “accessor” function more restrictive (only allow certain keys, only allow read access, only allow writes to certain keys, etc).
let s:Foobar={'foo': 'bar', 'baz': 'quux'}
function! s:FoobarAccess(...)
" Provide limited access to the script-local hash s:Foobar
if a:0 == 1
" allow read access to all keys
return s:Foobar[a:1]
elseif a:0 == 2
" allow write access to only 'foo'
if a:1 !=# 'foo'
throw 'FoobarAccess: not allowed to write to key ' . a:1
endif
let old = s:Foobar[a:1]
let s:Foobar[a:1] = a:2
return old
else
throw 'FoobarAccess must take exactly one or two arguments'
endif
endfunction
map \42 :echo <sid>FoobarAccess('foo')<cr>
map \43 :echo <sid>FoobarAccess('foo','new value')<cr>
map \44 :echo <sid>FoobarAccess('baz')<cr>
map \45 :echo <sid>FoobarAccess('baz',1)<cr>
| How to reference a script-local dictionary in a Vim mapping? |
1,347,105,879,000 |
I wrote a ksh function for git checkout (I've removed some irrelevant proprietary components for the sake of the public question, if you're wondering why it's useful to me):
# Checkout quicker
checkout(){
if [ "$1" == "master" ]; then
git checkout master
else
git checkout $1
fi
}
When I look at the function on the command line using functions, though, I get an odd output:
$ functions checkout
checkout()
}
# Checkout quicker
checkout(){
if [ "$1" == "master" ]; then
git checkout master
else
$ <- (this is my PS1, which I'm not writing here because it's big)
Why is the function not displaying properly? Did I break functions by technically using the function name inside the function? I am using ksh93u+ 2012-08-01 on RHEL.
|
Looks like you may have hit the bug later fixed by this commit. I can reproduce it with CentOS 7's ksh93 with:
$ cat a
# Checkout quicker
checkout(){
if [ "$1" == "master" ]; then
git checkout master
else
git checkout $1
fi
}
$ . ./a
$ functions checkout
checkout(){
if [ "$1" == "master" ]; then
git checkout master
else
git checkout $1
fi
}
$ vi a
Editing the file the function was sourced from, adding some text before the definition of the function. Now:
$ cat a
01234567801234567890123456789
}
# Checkout quicker
checkout(){
if [ "$1" == "master" ]; then
git checkout master
else
git checkout $1
fi
}
$ functions checkout
checkout()
}
# Checkout quicker
checkout(){
if [ "$1" == "master" ]; then
git checkout mast$
Quoting that commit's log:
[v1.1] typeset -f <functname>: generate script using sh_deparse()
Problem reproducer:
Locate or write a script with a function definition
Source the script with . or source
Print the function definition with typeset -f; looks good
Edit the file to add or remove characters before or in the
function definition, or rename or remove the file
Print the function definition with typeset -f again; the output
is corrupted!
Cause: ksh remembers the offset and length of the function
definition in the source file, and prints function definitions by
simply dumping that amount of bytes from that offset in the source
file. This will obviously break if the file is edited or (re)moved,
so that approach is fundamentally flawed and needs to be replaced.
| How does my function break `functions`? |
1,347,105,879,000 |
Normally I like to have all of the debug output of a script go to a file, so I will have something like:
exec 2> somefile
set -xv
This works very well in bash, but I have noticed in ksh it behaves differently when it comes to functions. I noticed when I do this in ksh, the output does not show the function trace, only that the function was called.
When doing some additional testing, I noticed the behavior also depends on how the function was declared, if I use the ksh syntax of:
function doSometime {....}
All I see is the function call, however if declare the function using the other method, eg
doSomething() {....}
The trace works as expected. Is it possible to get set -xv to work the same with both types of function declarations? I tried export SHELLOPTS and that did not make a difference either.
I am using ksh93 on Solaris 11.
|
From the documentation:
Functions defined by the function name syntax and called by name execute in the same process as the caller and
share all files and present working directory with the caller. Traps caught by the caller are reset to their
default action inside the function.
Whereas
Functions defined with the name() syntax and functions defined with the function name syntax that are invoked
with the . special built-in are executed in the caller's environment and share all variables and traps with
the caller.
The solution is to not use the function keyword; stick to the standard form of function definitions.
Alternatively, if you're only interested in a few functions, typeset -tf fname will just trace the function fname (if it was defined with the function keyword).
To stop tracing: typeset +tf fname
To trace all such functions in ksh93: typeset -tf $(typeset +f)
To see which functions are traced: typeset +tf
To stop tracing all functions: typeset +tf $(typeset +tf)
| set -xv behavior in ksh vs bash |
1,347,105,879,000 |
Context:
I have an old bash script with a big section parsing its arguments. It happens now that I need to call this section twice, so I plan to move it to a function to avoid code duplication.
The problem:
In that section, set --, shift and $@ are used, meaning that they won't apply to the script anymore, but to the function, which is wrong.
Question:
From within the function, is there any way to get and set the script arguments ?
Scheme:
Something like this:
#!/bin/bash
# > 5000 lines
process_arg()
{
# about 650 lines
# set --
# $@ $* $1 ...
# shift <n>
}
while (( $# > 0 )); do
case $1 in
<cond>)
<some code here>
process_arg
<some more code here>
<other conditions and code here>
*)
<some different code here>
process_arg
<some different more code here>
esac
shift 1
done
|
Disclaimer:
From the discussion above, I implemented a solution. It is far from the one I dreamed of, because of the verbosity of ${args_array[1]} compared to $1, which makes the source less readable. So improvements, or a better solution, are still welcome.
Source:
Tested, something like this:
#!/bin/bash
#########################
# DEBUG
#########################
# set -x
PS4='${xchars:-+} ${BASH_SOURCE}:${LINENO} (${FUNCNAME[@]}) + ' # full stack
#########################
# INITIAL ARGS FOR TEST
#########################
set -- a b c d e f g h
#########################
# UTILITIES
#########################
args_array=( "$@" ) # script args here
args_shift() # Usage readability OK, replaces shift <n>
{
typeset n=${1:-1}
echo "args_shift $1 in ${FUNCNAME[1]} -- ${args_array[@]}"
args_array=( "${args_array[@]:$n}" ) # ${1:-1} unsupported in this context
echo "args_shift $1 in ${FUNCNAME[1]} ++ ${args_array[@]}"
}
args_set() # Usage readability OK, replaces set -- <args>
{
echo "args_set $@ in ${FUNCNAME[1]} -- ${args_array[@]}"
args_array=( "$@" ) # function args here
echo "args_set $@ in ${FUNCNAME[1]} ++ ${args_array[@]}"
}
# Usage
# search/replace OK, and good readability afterward
# shift <n> ---> args_shift <n>
# set -- <args> ---> args_set <args>
# search/replace OK, but bad readability afterward, and refactoring--
# $@ ---> ${args_array[@]}
# $# ---> ${#args_array[@]}
# $1 ---> ${args_array[0]} !!! 1 -> 0
# $2 ---> ${args_array[1]} !!! 2 -> 1
# etc
#########################
# TEST
#########################
f()
{
args_shift
}
g()
{
args_set A B C D
}
# main
echo "main -- ${args_array[@]}"
f
args_shift 2
f
g
args_shift
f
echo "main ++ ${args_array[@]}"
Output:
main -- a b c d e f g h
args_shift in f -- a b c d e f g h
args_shift in f ++ b c d e f g h
args_shift 2 in main -- b c d e f g h
args_shift 2 in main ++ d e f g h
args_shift in f -- d e f g h
args_shift in f ++ e f g h
args_set A B C D in g -- e f g h
args_set A B C D in g ++ A B C D
args_shift in main -- A B C D
args_shift in main ++ B C D
args_shift in f -- B C D
args_shift in f ++ C D
main ++ C D
Remarks:
Works but not the most readable solution, and refactoring not so light, because there are several forms of usage to take into consideration: $1, more or less ${1[:/][^}]} or ${!1[:/][^}]} etc, while avoiding those in function, awk, perl etc.
For some, as variable names are case sensitive in bash and, I think, more or less seldom used, one could use A or _A instead of args_array but, to my taste, ${A[1]} or so is even less readable in a long source than ${args_array[1]}.
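For context, the limitation that forces this workaround: in bash, set -- and shift inside a function operate on the function's own positional parameters, never the caller's:

```shell
#!/bin/bash
# The caller's $@ is untouched by anything the function does --
# hence the need for a shared args_array instead.
f() { set -- X Y Z; shift; echo "inside: $*"; }
set -- a b c
f
echo "outside: $*"   # still: a b c
```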
My situation:
There are at least 616 occurrences to take care of... carefully (some are in functions, awk or perl scripts etc)
for s in shift 'set --' '$@' '${@' '$*' '${*' '$1' '${1' '$2' '${2' '$3' '${3' '$4' '${4' '$5' '${5'; do
printf '%-10s: %d\n' "$s " "$(fgrep $s <script>|wc -l)"
done # |awk '{sum+=$NF};END{print sum}'
shift : 44
set -- : 189
$@ : 39
${@ : 2
$* : 7
${* : 0
$1 : 182
${1 : 79
$2 : 48
${2 : 3
$3 : 15
${3 : 0
$4 : 8
${4 : 0
$5 : 0
${5 : 0
| Get and set the script arguments from within a function in bash |
1,347,105,879,000 |
I would like to ask about passing parameters into functions.
I tried this:
function_name $var1 $var2
but usually (sometimes it printed an error) it didn't make any difference whether I passed them or not. I mean, it worked perfectly when I called it with only function_name. So my question is: is it necessary to pass these parameters, as in the example above?
|
Bash doesn't check the number of arguments passed to a function because there are no prototypes as in C. From https://www.gnu.org/software/bash/manual/bash.html#Shell-Functions:
Shell functions are a way to group commands for later execution using
a single name for the group. They are executed just like a "regular"
command. When the name of a shell function is used as a simple command
name, the list of commands associated with that function name is
executed. Shell functions are executed in the current shell context;
no new process is created to interpret them.
Functions are declared using this syntax:
name () compound-command [ redirections ]
or
function name [()] compound-command [ redirections ]
One can check inside the function whether the number of arguments is correct and return an error if it's not, but that's not Bash's responsibility. See:
#!/bin/bash
function f()
{
echo $1
}
function f1
{
echo $1
}
f
f 2
f1
f1 f1
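A common pattern for doing that check yourself (the names here are illustrative):

```shell
#!/bin/sh
# Missing arguments simply expand to empty; test $# if the function
# genuinely requires its parameters.
f() {
  if [ "$#" -ne 1 ]; then
    echo "usage: f ARG" >&2   # complain on stderr
    return 2
  fi
  echo "$1"
}
f            # usage error, status 2
f hello      # prints "hello"
```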
| What happens if I pass too few parameters to a shell function? |
1,347,105,879,000 |
I know that in shell-scripting an "exit" usually means voluntarily or at least successfully terminating a session (or a process within a session) and that there are several different exit modes; the following are these I know:
1. A simple exit command
If I'm in the first shell-session (shell-session 0) it will usually cause the shell CLI window to close, but if I'm in some sub-session (like shell-session 1 or later) execution will usually just move my user back to the previous session (say 1 → 0).
2. An exit SOME_EXIT-CODE command
I find three main exit codes used in such exiting:
exit 0 (success).
exit 1 (general error such as "divide by zero" and other impermissible operations).
exit 2 (as in Bash 4.x.x - misuse of shell builtins, and an example would be an empty function; myFunc() {}).
I often find these added to the end of command sequences as indicators of their execution's outcome; sometimes as part of unit testing, as in:
domain="$1" && test -z "$domain" && exit 2
# Test if a user passes only one domain as a parameter, when executing this script
3. A non-added script exit
If I'm not wrong, when a Bash script finishes running, its "ending" is actually an "exiting" in the common *nix terminology - the script itself is a session from which the user exits back to the CLI session. Here also, some exit code might be given.
My question
Are there any other "exit modes" in shell scripting in general, and in Bash in particular?
|
an "exit" usually means voluntarily or at least successfully terminating
At least the POSIX text seems to use exit solely for voluntary termination of a process, as opposed to being killed for an outside reason. (See e.g. wait()) A process being killed by a signal hardly counts as a success, so any successful termination must in that sense be an "exit". Though I'd expect those terms to be used less strictly in informal use.
Are there any other "exit modes" in shell scripting in general, and in Bash in particular?
Mode has particular technical meanings in some contexts (e.g. chmod()), but I can't think of one here, so I'm not exactly sure what it is you're asking.
In any case, a shell script might terminate for at least the following reasons:
The script runs to end of the script. The exit status of the script is that of the last command executed.
The script runs the exit builtin command without arguments. Again, the exit status is that of the last command executed.
The script runs the exit command with an argument. The exit status is the value of the argument.
The script references an unset variable while set -u/set -o nounset is in effect. The exit status depends on the shell, but is nonzero. (Bash seems to use 127.)(*)
The script runs a command that fails while set -e/set -o errexit is in effect. The exit status is that of the failing command. (But see BashFAQ 105 for issues with set -e.)
The script runs into a syntax error. The exit status of the shell is nonzero. (Bash seems to use 1.)(*)
The script receives a signal that causes it to terminate. Not all signals cause termination, and signals can be either ignored or a handler can be set within the script with the trap builtin command. This also applies to e.g. hitting Ctrl-C, which sends the SIGINT signal.(*)
In the technical sense, in cases 1 to 6, the shell process running the script exits voluntarily (i.e. the process calls exit()). On the other hand, from the point of view of the script itself, terminating due to set -e, set -u or a syntax error might well be called involuntary. But the shell script is not the same as the shell interpreter.
In 1 to 3, the custom is to use an exit status of zero for a successful completion, and a non-zero value for failures. The exact meaning of the non-zero values depends on the utility. Some might use only zero and one, some might use different non-zero statuses for different situations. For example, grep uses 1 to indicate no match was found, and values greater than 1 to indicate errors. Bash's builtins also use 2 to indicate errors like invalid options. Using a similar custom may be useful, but you'll need to document what the exit status of your script means. Note that the exit status is usually limited to 8 bits, so the range is from 0 to 255.
In 4 to 6, the situation is usually considered some sort of a failure, so the exit status is non-zero. In 7, there is no exit status. Instead, when a process terminates due to a signal, the wait() system call indicates the signal in question. If the parent process is a shell, it usually represents this with an exit status of 128 + <signal number>, e.g. 143 for a child terminated with SIGTERM.
(* Unlike scripts, interactive shells will not exit due to syntax errors or set -u or SIGINT.)
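Cases 2, 3 and 7 are easy to observe directly (a sketch; the 143 value is the 128 + signal-number convention described above, and 127 is the related "command not found" convention):

```shell
# Run throwaway child shells and inspect their exit statuses.
bash -c 'exit 7';                      echo "explicit exit:     $?"  # 7
bash -c 'no_such_cmd_xyz' 2>/dev/null; echo "command not found: $?"  # 127
bash -c 'kill -TERM $$';               echo "killed by SIGTERM: $?"  # 128 + 15 = 143
```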
If I'm in the first shell-session it will usually cause the shell CLI window to close
A terminal emulator will usually close if the process it started exits. But that's up to the terminal emulator, not a function of the shell. A terminal emulator might decide to keep the window open to tell the user that their program terminated, and you could run something other than a shell within a terminal emulator, too.
if I'm in some sub-session,execution will usually just move my user back to the previous session.
If you use an interactive shell to start another shell, the parent shell continues when the child terminates. But again, this isn't related to shells, the same happens if you start an editor or just run any command from an interactive shell. When the child process terminates, the parent shell continues accepting commands from the user.
Bash does keep a variable SHLVL that is increased by one each time Bash starts, so in a sense it does have an internal idea of nested shells. But I don't think the phrase "sub-session" is very common, let alone any sort of numbering of such. (I think SHLVL initializes at 1.)
| What exit modes exist in shell-scripting in general and in Bash in particular? |
1,347,105,879,000 |
I need to run these commands very often:
sudo apt-get install <package>
sudo apt-get remove <package>
Can I make it simple like:
install <package>
remove <package>
I think I need to write a function like this:
function install(){
sudo apt-get install <package>
}
...and then need to copy-paste it to some location I don't know. Can anyone tell me how I can make such an install <package> command available all the time after boot?
|
Use shell aliases, they won't interfere with other scripts/commands, they are only replaced when the command has been typed interactively:
alias install="sudo apt-get install"
You may place this in your shell configuration file (~/.bashrc for example) and it will be defined in all your shell sessions.
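A minimal sketch of a ~/.bashrc fragment covering both shortcuts (apt-get is assumed to exist on the target system; nothing here actually runs it):

```shell
# In ~/.bashrc - defined for every interactive shell you open
alias install='sudo apt-get install'
alias remove='sudo apt-get remove'

# 'type' shows what a name currently resolves to:
type install
```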
| creating simple command for sudo apt-get install? |
1,347,105,879,000 |
I know that I can use unset -f $FUNCTION_NAME to unset a single function in bash / zsh, but how do I unset all functions?
|
In the zsh shell, you may disable all functions using
disable -f -m '*'
(literally, "disable each function whose name matches *").
You may then enable them again with the analogous enable call.
You may also use unset in a similar way to remove the functions completely from the current environment:
unset -f -m '*'
| How do I unset all functions in zsh? |
1,347,105,879,000 |
There are a couple of questions related to the fork bomb for bash :(){ :|: & };: , but when I checked the answers I still could not figure out what the exactly the part of the bomb is doing when the one function pipes into the next, basically this part: :|: .
I understand so far, that the pipe symbol connects two commands by connecting the standard output of the first to the standard input of the second, e.g. echo "Turkeys will dominate the world" | sed 's/s//'.
But I do not get it what the first function is pushing through its standard out, which gets pushed into the second one, after all there are no return values defined inside the function, so what is travelling through the human centipede if the man at the beginning has an empty stomach?
|
Short answer: nothing.
If a process takes in nothing on STDIN, you can still pipe to it. Similarly, you can still pipe from a process that produces nothing on STDOUT. Effectively, you're simply piping a single EOF indicator in to the second process, that is simply ignored. The construction using the pipe is simply a variation on the theme of "every process starts two more". This fork bomb could also be (and sometimes is) also written as:
:(){ :&:; }; :
Where the first recursive call is backgrounded immediately, then the second call is made.
In general, yes, the pipe symbol (|) is used to do exactly what you mentioned - connect STDOUT of the first process to STDIN of the second process. That's also what it's doing here, even though the only thing that ever goes through that pipe is the single EOF indicator.
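The claim that only an EOF travels through the pipe can be checked harmlessly, without any recursion:

```shell
# The left-hand command writes nothing; the right-hand one just sees end-of-file.
true | wc -c              # prints 0: zero bytes arrived on stdin
printf '' | cat | wc -c   # same thing with an explicit empty stream
```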
| What exactly is the function piping into the other function in this fork bomb :(){ :|: & };:? |
1,347,105,879,000 |
I just wrote a function in my ~/.bashrc that will let me create a folder for a new website with one command. The function looks like this:
function newsite() {
mkcd "$*" # mkdir and cd into it
mkdir "js"
mkdir "imgs"
touch "index.html"
touch "main.css"
vim "index.html"
}
Now what I would like to do is, instead of just touching index.html and main.css, I'd like to create basic template files for index.html and main.css. The problem is I have absolutely no idea how to do that. I don't know much about writing to files using bash commands. Typically I'd just open the files in vim and go to town, but I'd like to have something already started when I get into vim...
|
jw013's idea and HaiVu's answer are both correct. However for the sake of completeness for anyone who comes upon this question wanting the answer, here it is;
function newsite() {
mkcd "$*" # mkdir and cd into it
mkdir "js"
mkdir "imgs"
cat > index.html <<'EOI'
<html>
<head>
</head>
<body>
</body>
</html>
EOI
cat > main.css <<'EOI'
body {
font-family: Arial;
}
EOI
vim "index.html"
}
The <<'EOI' thing is called a heredoc, most scripting languages have them.
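One detail worth keeping in mind: the quotes around the delimiter (<<'EOI') are what stop the shell from expanding $ and backticks inside the heredoc body. A quick comparison (names chosen for the example):

```shell
name=world

cat <<'EOF'    # quoted delimiter: body is taken literally
hello $name
EOF

cat <<EOF      # unquoted delimiter: $name is expanded
hello $name
EOF
```

The first cat prints hello $name verbatim, the second prints hello world.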
| Script to create files in a template |
1,347,105,879,000 |
I would like to create a bash function that, given a string, it count the depth level of nested parentheses, and return -1 if the parentheses are not balanced:
function countNested(){
str="$1"
n_left=$(echo $str | grep -o '(' | grep -c '(' )
n_right=$(echo $str | grep -o ')' | grep -c ')' )
# if the n_left is not equal to n_right return -1
[[ $n_left -ne n_right ]] && { echo -1; return -1 ; }
[[ $n_left -ge 1 ]] && echo $((n_left-1))
[[ $n_left -eq 0 ]] && echo 0
}
When I try it:
countNested '((((((5)))'
# output: -1
countNested '((((((((((((((((5+7))))))))))))))))'
# output: 15
I am using grep two times which seems to be expensive. Any ideas how can I increase the performance of that function?
|
You could do the check character-by-character, that'd let you detect cases where there is a right parenthesis without the corresponding left one. You could do it Bash, but if you care about performance, some other tool is probably better. E.g. with awk:
$ cat parens.awk
#!/usr/bin/awk -f
{
n = 0;
max = 0;
for (i = 1; i <= length($0); i++) {
c = substr($0, i, 1);
if (c == "(") n++;
if (c == ")") n--;
if (c == ")" && n < 0) {
printf "mismatching right parenthesis at position %d\n", i > "/dev/stderr";
exit 1;
}
if (n > max) max = n;
}
if (n != 0) {
printf "%d left parentheses left unclosed\n", n > "/dev/stderr";
exit 1;
}
# maximum nesting level
printf "%d\n", max;
exit 0;
}
The output is just the maximum nesting level, or a message to stderr if the input is invalid:
$ echo '((5))' | awk -f parens.awk
2
$ echo '((5)' | awk -f parens.awk
1 left parentheses left unclosed
$ echo '((5)))' | awk -f parens.awk
mismatching right parenthesis at position 6
$ echo '(5))(' | awk -f parens.awk
mismatching right parenthesis at position 4
$ echo '((5)(6))' | awk -f parens.awk
2
$ echo '((5)))' | awk -f parens.awk
mismatching right parenthesis at position 6
Also, the exit status is one on error, so you can also do:
if ! level=$( echo '((5)))' | awk -f parens.awk 2>/dev/null); then
echo invalid parenthesis
fi
(Or just remove the prints for the error messages if you don't care about them.)
If you want to do it in Bash, ${var:i:1} gives the character in var at position i, ${#var} gives the length of the variable.
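Following that hint, a pure-Bash sketch of the same scan. It keeps the question's function name and its -1-on-error convention, but reports the maximum nesting depth like the awk script (the question's n_left-1 arithmetic gives a different number):

```shell
countNested() {
    local s=$1 n=0 max=0 i c
    for (( i = 0; i < ${#s}; i++ )); do
        c=${s:i:1}
        if [[ $c == '(' ]]; then
            (( ++n > max )) && max=$n
        elif [[ $c == ')' ]]; then
            (( --n < 0 )) && { echo -1; return 1; }   # ')' with no matching '('
        fi
    done
    (( n != 0 )) && { echo -1; return 1; }            # unclosed '(' left over
    echo "$max"
}

countNested '((5))'        # 2
countNested '((((((5)))'   # -1
```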
| Bash function to count the number of nested parentheses |
1,347,105,879,000 |
I'm trying to call a self-defined function funk_a in strace but it doesn't seem to find it.
I confirmed that funk_a can be called by itself.
I appreciate any opinions.
$ source ./strace_sample.sh
$ funk_a
Earth, Wind, Fire and Water
$ funk_b
Get on up
strace: Can't stat 'funk_a': No such file or directory
$ dpkg -p strace|grep Vers
Version: 4.8-1ubuntu5
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 14.04.5 LTS
Release: 14.04
Codename: trusty
strace_sample.sh
#!/bin/bash
function funk_a {
echo "Earth, Wind, Fire and Water"
}
function funk_b {
echo "Get on up"
strace -o trace_output.txt -c -Ttt funk_a
}
Thank you.
|
strace can only strace executable files.
funk_a is a function, a programming construct of the shell, not something you can execute.
The only thing strace could strace would be a new shell that evalutes the body of that function like:
strace -o trace_output.txt -Ttt bash -c "$(typeset -f funk_a); funk_a"
(I removed -c as it makes no sense with -Ttt).
But you'll then see all the system calls made by bash to load and initialise (and afterwards to clean up and exit) in addition to that one write system call made by that funk_a function.
Or you could tell strace to trace the pid of the shell while it evaluates the funk_a function:
strace -o trace_output.txt -Ttt -p "$$" &
funk_a
kill "$!"
Though, by the time strace attaches to the PID of the shell, the shell could very well have finished interpreting the function. You could try some synchronisation like
strace -o trace_output.txt -Ttt -p "$$" &
tail -F trace_output.txt | read # wait for some output in trace_output.txt
funk_a
kill "$!"
But even then depending on timing, trace_output.txt would include some of the system calls used interpret tail|read, or kill could kill strace before it has had the time to write the trace for the echo command to the output file.
A better approach could be to wrap the call to funk_a between two recognisable system calls like
strace -fo >(sed -n '1,\|open("///dev/null|d
\|open("/dev///null|q;p' > trace_output.txt
) -Ttt -p "$$" &
sleep 1 # give enough time for strace to start
exec 3< ///dev/null # start signal
funk_a
exec 3< /dev///null # end signal
| strace not finding shell function with "Can't stat" error |
1,347,105,879,000 |
function mv1 { mv -n $1 "targetdir" -v |wc -l ;}
mv1 *.png
It does only move the first .png file it finds, not all of them.
How can I make the command apply to all files that match the wildcards?
|
mv1 *.png first expands the wildcard pattern *.png into the list of matching file names, then passes that list of file names to the function.
Then, inside the function $1 means: take the first argument to the function, split it where it contains whitespace, and replace any of the whitespace-separated parts that contain wildcard characters and match at least one file name by the list of matching file names. Sounds complicated? It is, and this behavior is only occasionally useful and is often problematic. This splitting and matching behavior only occurs if $1 occurs outside of double quotes, so the fix is easy: use double quotes. Always put double quotes around variable substitutions unless you have a good reason not to.
For example, if the current directory contains the two files A* algorithm.png and graph1.png, then mv1 *.png passes A* algorithm.png as the first argument to the function and graph1.png as the second argument. Then $1 is split into A* and algorithm.png. The pattern A* matches A* algorithm.png, and algorithm.png doesn't contain wildcard characters. So the function ends up running mv with the arguments -n, A* algorithm.png, algorithm.png, targetdir and -v. If you correct the function to
function mv1 { mv -n "$1" "targetdir" -v |wc -l ;}
then it will correctly move the first file.
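The split-then-glob behavior described here is easy to observe in isolation (a sketch; globbing is left out so the demo doesn't depend on the files in the current directory):

```shell
f() { printf '<%s>\n' $1; }     # unquoted: the argument is split on whitespace
g() { printf '<%s>\n' "$1"; }   # quoted: the argument stays one word

f 'two words'   # prints <two> then <words>
g 'two words'   # prints <two words>
```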
To process all the arguments, tell the shell to process all arguments and not just the first. You can use "$@" to mean the full list of arguments passed to the function.
function mv1 { mv -n "$@" "targetdir" -v |wc -l ;}
This is almost correct, but it still fails if a file name happens to begin with the character -, because mv will treat that argument as an option. Pass -- to mv to tell it “no more options after this point”. This is a very common convention that most commands support.
function mv1 { mv -n -v -- "$@" "targetdir" |wc -l ;}
A remaining problem is that if mv fails, this function returns a success status, because the exit status of commands on the left-hand side of a pipe is ignored. In bash (or ksh), you can use set -o pipefail to make the pipeline fail. Note that setting this option may cause other code running in the same shell to fail, so you should set it locally in the function, which is possible since bash 4.4.
function mv1 {
local -
set -o pipefail
mv -n -v -- "$@" "targetdir" | wc -l
}
In earlier versions, setting pipefail would be fragile, so it would be better to check PIPESTATUS explicitly instead.
function mv1 {
mv -n -v -- "$@" "targetdir" | wc -l
(( ! ${PIPESTATUS[0]} && ! ${PIPESTATUS[1]} ))
}
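The effect of pipefail on its own is easy to see in isolation (bash or ksh):

```shell
false | true; echo "without pipefail: $?"   # 0 - only the last command counts
set -o pipefail
false | true; echo "with pipefail:    $?"   # 1 - the failing 'false' propagates
set +o pipefail                             # restore the default
```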
| Why does a file move/copy function only move one file at a time when using the “*” wildcard? |
1,347,105,879,000 |
So I tried making a function in a script that creates a new variable for each argument when running the script. This is my code:
#!/bin/bash
# Creating function log
#ARG1=${1}
log() {
echo "You called DA LOG FUNCTION!!!1!!11one111!"
}
log
#echo "${1}"
#echo "$ARG1"
fcta() {
for ((x=0;x<1000;++x)); do
"a$x"=${1}
if [[ ${#} -gt 1 ]]; then
shift
else
x=1001
fi
echo "${a$x}"
# echo "${1}"
}
fcta $@
I get this:
vagrant@localhost vagrant]$./luser-demo10.sh 12 12 12
You called DA LOG FUNCTION!!!1!!11one111!
./luser-demo10.sh: line 25: syntax error near unexpected token `}'
./luser-demo10.sh: line 25: `}'
[04:11----------------------------------------
vagrant@localhost vagrant]$
So this is line 25
# echo "${1}"
} <----- LINE 25
fcta $@
EDIT: Thanks for telling me about the missing "done".
People asked what I was trying to do, well I asked another question for that, since this one has been answered (question was, why did I get a syntax error).
Thanks again.
|
In your function there is a do but no matching done to close the command list.
Try shellcheck to verify your scripts. This is a report of detected bugs and suspicious points in your script:
Line 16:
for ((x=0;x<1000;++x)); do
^-- SC1009: The mentioned syntax error was in this for loop.
^-- SC1073: Couldn't parse this arithmetic for condition. Fix to allow more checks.
^-- SC1061: Couldn't find 'done' for this 'do'.
Line 25:
}
^-- SC1062: Expected 'done' matching previously mentioned 'do'.
^-- SC1072: Unexpected keyword/token. Fix any mentioned problems and try again.
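For completeness, a sketch of the function with the missing done added. The "a$x"=... lines were also invalid; what exactly should be stored is not clear from the question, so the a1, a2, ... variables below are an assumption (printf -v and ${!i} are bash-specific):

```shell
fcta() {
    local i
    for (( i = 1; i <= $#; i++ )); do
        printf -v "a$i" '%s' "${!i}"   # creates a1, a2, ... from the arguments
        echo "arg $i: ${!i}"
    done                               # <- the 'done' the original lacked
}

fcta 12 12 12
```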
| Syntax error while calling a function |
1,347,105,879,000 |
I have a basic script to find the max of 2 numbers using a user-defined function; but, I need to convert it to accept 4 numbers, and I am having a hard time. Here is the script.
#!/bin/bash
echo $1 $2 | awk '
{
print max($1, $2)
}
function max(a, b) {
return a > b ? a: b
}'
You would simply execute it by doing: ./scriptname 1 2 (or whatever two numbers you want) and the output would be the max of the two numbers.
I think I can just do the following.
#!/bin/bash
echo $1 $2 $3 $4 | awk '
{
print max($1, $2, $3, $4)
}
function max(a, b, c, d) {
return a < b ? a: b
}'
I am having trouble with line 7, the "return" line. Any suggestions?
Thanks
-CableGuy
|
You can use the 2-argument function - multiple times:
$ cat scriptname
#!/bin/bash
echo $1 $2 $3 $4 | awk '
function min(a, b) {
return a < b ? a: b
}
{
print min(min(min($1,$2),$3),$4)
}'
then for example
$ ./scriptname 3 1.2 -0.4 77
-0.4
If you're required to write it as a 4-argument function, then I'd suggest something like
function min(a, b) {
return a < b ? a : b
}
function min4(a,b,c,d) {
return min(min(min(a,b),c),d)
}
{
print min4($1,$2,$3,$4)
}
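If the number of values may vary, a loop over awk's fields avoids the fixed-arity functions entirely (a sketch of the same idea):

```shell
echo 3 1.2 -0.4 77 | awk '{
    m = $1
    for (i = 2; i <= NF; i++) if ($i < m) m = $i
    print m
}'
# prints -0.4
```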
| User-defined function for finding max of 4 numbers |
1,347,105,879,000 |
Since it overwrites my history when used in multiple terminals, I want to turn off the functionality fc -W. Unfortunately I have a habit of typing it often.
I think it's not possible to make an alias, since there is a whitespace in fc -W.
So I tried making a function, something like this:
# Make sure to never invoke fc -W
fc(){
for x; do
if [[ "${x}" == -W ]]; then
echo "I'm sorry Dave. I'm afraid I can't do that."
return
fi
done
fc "${@}"
}
However, now the call fc "${@}" calls itself, and I get infinite recursion. Typically I would avoid this by using e.g. /usr/bin/fc, instead of fc, however:
$ type fc
> fc is a shell builtin
How can I avoid the infinite recursion in this case? Or is there a better way to disable one flag of a command?
|
Use builtin:
builtin fc "$@"
This will ensure that the built-in fc command is called.
Style: The diagnostic message should go to the standard error stream and the function should return a non-zero status when failing:
echo 'Sorry, can not do that' >&2
return 1
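Putting both fixes into the questioner's function gives something like this (a sketch; the guard logic is unchanged apart from the style points above):

```shell
fc() {
    local x
    for x; do
        if [[ $x == -W ]]; then
            echo "I'm sorry Dave. I'm afraid I can't do that." >&2
            return 1
        fi
    done
    builtin fc "$@"   # 'builtin' bypasses this function, so no recursion
}
```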
| Turn off one flag of command and infinite recursion |
1,347,105,879,000 |
This would probably never be the BEST approach to something, but I'm wondering if it's even possible.
Something like:
awk '/function_i_want_to_call/,/^$/{print}' script_containing_function | xargs source
function_i_want_to_call arg1 arg2 arg3
Except actually working.
|
First you need to rigorously determine what command will produce the specific part you want to source. For a trivial example, given the file
var1=value1
var2=value2
you could set only var1 using head -n1 filename. This could be a pipeline of arbitrary complexity, if you wanted.
Then run:
source <( pipeline_of_arbitrary_complexity some_filename )
Works only in bash. To do it in POSIX, I think you'd need to make a temp file.
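A concrete run of the idea (bash-only because of the process substitution; the file path is made up for the demo):

```shell
cat > /tmp/partial_demo.sh <<'EOF'
var1=value1
var2=value2
EOF

# Source only the first line of the file:
source <( head -n1 /tmp/partial_demo.sh )

echo "${var1-unset}"   # value1
echo "${var2-unset}"   # unset
```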
| Source only part of a script from another script? |
1,347,105,879,000 |
IMPORTANT: do not use eval! (I learned this later..)
In a function, eval expands sleep to its alias, so I prevent the endless loop this way:
function FUNCexecEcho() {
echo "EXEC: $@";
shopt -u expand_aliases
eval "$@";
shopt -s expand_aliases
};
alias sleep='FUNCexecEcho sleep ';
sleep 10
But then, all other aliases will stop working...
How to prevent expansion only to one arbitrary alias inside this function?
PS.: the endless loop only happens on the second time you execute sleep 10
|
Using eval is wrong in the first place. The shell has already evaluated what you pass to FUNCexecEcho, evaluating a second time is wrong and potentially dangerous. In your code, you're also discarding the exit status of the command.
FUNCexecEcho() {
echo "EXEC: $@"
"$@"
}
(no problem with aliases there unless you define an alias for "$@"). Compare the behaviour in:
FUNCexecEcho echo 'this;rm -rf "$HOME"'
with the two versions. With mine, it gives:
$ FUNCexecEcho echo 'this;rm -rf "$HOME"'
EXEC: echo this;rm -rf "$HOME"
this;rm -rf "$HOME"
I suggest you don't run it with yours if you don't have backups ;-)
| how to prevent alias expansion by `eval` to an arbitrary alias, and keep the endless loop protection on a function? |
1,347,105,879,000 |
I want to construct a function that will change its user input prompt based on its parameter.
my_function takes 1 parameter as db_host after prompting the user for input:
function provide_host () {
echo "Enter NAME OR IP OF ${function_param1} DATBASE HOST: "
read global function_param1_db_host
}
So if I call the function as
function provide_host (primary)
it should prompt as
echo "Enter NAME OR IP OF PRIMARY DATBASE HOST: "
but if I use
function provide_host (secondary)
it prompts
"Enter NAME OR IP OF SECONDARY DATBASE HOST: "
My idea is that I have to use an if statement for that, but I'm not sure if I can use the function's parameter as a variable for prompting the user inside the function.
|
You can use $1 to get the first parameter:
function provide_host () {
echo "Enter NAME OR IP OF $1 DATBASE HOST: "
read global function_param1_db_host
}
or to convert it to upper case:
function provide_host () {
echo "Enter NAME OR IP OF ${1^^} DATBASE HOST: "
read global function_param1_db_host
}
Then call your function with:
provide_host primary
provide_host secondary
However, I'd do it slightly differently. Instead of trying to set a global variable, I'd prompt in stderr and return the IP from the function in stdout:
function provide_host() {
read -p "Enter NAME OR IP OF ${1^^} DATABASE HOST: " host >&2
printf "%s" "$host"
}
primary_db_host=$(provide_host primary)
secondary_db_host=$(provide_host secondary)
| BASH: is it possible to change a prompt of function based on its parameter? |
1,347,105,879,000 |
I have a findn function:
findn () {
find . -iname "*$1*"
}
Using this function has one downside: I cannot append -print0 | xargs -0 <command> (I am using a Mac) after findn filename to extend the functionality of the find command when the filenames contain spaces.
So, is there anyway I can keep both the functions of handy -iname "*$1*" and | xargs command at the same time?
I was thinking of using an alias to do it, but it doesn't have to be an alias.
|
One way with GNU find or compatible (-iname is already a GNU extension anyway) could be to define the function as:
findn() (
if [ -t 1 ]; then # if the output goes to a terminal
action=-print # simple print for the user to see
else
action=-print0 # NUL-delimited records so the output can be post-processed
fi
first=true
for arg do
if "$first"; then
set -- "$@" '('
first=false
else
set -- "$@" -o
fi
set -- "$@" -iname "*$arg*"
shift
done
"$first" || set -- "$@" ')'
exec find . "$@" "$action"
)
Then you can use it as:
findn foo bar
To see the file names that contain foo or bar (change the -o to -a above if you want instead the ones that contain both foo and bar).
And:
findn foo bar | xargs -r0 cat
If you want to apply a command on each file found by findn.
For a variant that does both and and not:
findn() (
if [ -t 1 ]; then # if the output goes to a terminal
action=-print # simple print for the user to see
else
action=-print0 # NUL-delimited records so the output can be post-processed
fi
first=true
for arg do
if "$first"; then
set -- "$@" '('
first=false
else
set -- "$@"
fi
if [ "$arg" = ! ]; then
set -- "$@" !
else
case $arg in
(*[][*?\\]*)
# already contains wildcard characters, don't wrap in *
set -- "$@" -iname "$arg"
;;
(*)
set -- "$@" -iname "*$arg*"
;;
esac
fi
shift
done
"$first" || set -- "$@" ')'
exec find . "$@" "$action"
)
And then:
findn foo bar ! baz
For the filenames that contain both foo and bar and not baz.
In that variant, I also made it so that if the argument contained a wildcard character, it was taken as-is, so you can do:
findn foo ! 'bar*'
To look for files that do not start with bar. If you're using the zsh shell, you can make an alias:
alias findn='noglob findn'
To disable globbing on that command which allows you to write:
findn foo ! bar*
You may want to make that a script (here a sh script is enough as that syntax is POSIX) instead of a function, so it can be called from anywhere instead of just your shell.
| find - how do i make an alias to do something like (find . -iname '*$1*')? |
1,347,105,879,000 |
I am using the following script, so as to call a function that is supposed to iterate over an array.
#!/bin/bash
function iterarr {
for item in "$1"
do
echo "$item"
done
}
myarr=(/dir1/file1.md /dir1/file2.md README.md)
iterarr "${myarr[@]}"
However, when I execute it, it gives the following output.
/dir1/file1.md
Why does it print only the first array entry?
edit: What is more, I would like to be able to use an additional argument (besides the array), so if I use '$@', how do I access the second argument?
Working on Ubuntu 16.04.03 with ...
*$ $(which bash) --version
GNU bash, version 4.3.48(1)-release (x86_64-pc-linux-gnu)
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
|
iterarr "${myarr[@]}" will expand to iterarr '/dir1/file1.md' '/dir1/file2.md' 'README.md' and in your loop you only reference the first argument with "$1". Instead use "$@" to loop over all of the arguments.
#!/bin/bash
function iterarr {
for item in "$@"
do
echo "$item"
done
}
myarr=(/dir1/file1.md /dir1/file2.md README.md)
iterarr "${myarr[@]}"
If you want to process flags, or positional arguments then place them before the array and handle them first, shifting them when done will remove them from "$@",
#!/bin/bash
function iterarr {
echo "first argument is : '$1'"
shift
for item in "$@"
do
echo "$item"
done
}
myarr=(/dir1/file1.md /dir1/file2.md README.md)
iterarr firstarg "${myarr[@]}"
| Function to iterate over array |
1,347,105,879,000 |
My bash script sources a script file (call it file2.sh) according to an argument. (It is either sourced or not.) The script file2.sh contains a function "foo" (call it a modified or improved version of foo).
The script also sources another file (file1.sh) which contains the original function "foo". This one (file1.sh) is always sourced (and it has other functions that are required).
I need the function "foo" in file2.sh to override the "foo" in file1.sh (if file2.sh is sourced).
Furthermore, I need to do this in any scripts that my main script calls. Several of those called script files also source file1.sh. They are expecting the original "foo" function. But I need to make them call the improved "foo" without modifying them.
In other words, if my main script includes file2.sh, I want any script (which is called by my main script) that sources file1.sh to use "foo" from file2.sh instead. And I can't change any of the other files except my main script. (Or if I do change them, I need them to work correctly when file2.sh is not sourced by the main script.)
|
You can flag your function as 'read only' in file2.sh.
Note: this will cause warnings when file1.sh later tries to define (redefine) the function.
Those warnings will appear on stderr and could cause trouble. I don't know if they can be disabled.
Further note: this MIGHT cause the scripts to fail, if they are checking the return status of the function definition. I think that there is also a bash option that can be set that would cause a non-zero return status anywhere to abort execution of the script. Good luck!
Can you modify file1.sh? Simply using conditionals to check if the function is defined before defining it would be a more robust solution.
Here is an example of usage of 'readonly'.
hobbes@metalbaby:~/scratch$ mkdir bash-source-test
hobbes@metalbaby:~/scratch$ cd bash-source-test
hobbes@metalbaby:~/scratch/bash-source-test$ cat > file2.sh
#!/bin/bash
fname(){
echo "file2"
}
readonly -f fname
hobbes@metalbaby:~/scratch/bash-source-test$ cat > file1.sh
#!/bin/bash
fname(){
echo "file1"
}
readonly -f fname
hobbes@metalbaby:~/scratch/bash-source-test$ cat >top.sh
#!/bin/bash
if [ $1 ]; then
source file2.sh
fi
source file1.sh
fname
hobbes@metalbaby:~/scratch/bash-source-test$ chmod +x *.sh
hobbes@metalbaby:~/scratch/bash-source-test$ ./top.sh
file1
hobbes@metalbaby:~/scratch/bash-source-test$ ./top.sh hello
file1.sh: line 4: fname: readonly function
file2
hobbes@metalbaby:~/scratch/bash-source-test$
| Bash source -- select the right function when two sourced files have the same function name? |
1,347,105,879,000 |
I try to put find inside a function and catch an argument passed to this function, with the following minimal working example:
function DO
{
ls $(find . -type f -name "$@" -exec grep -IHl "TODO" {} \;)
}
But, when I execute DO *.tex, I get “find: paths must precede expression:”. But when I do directly:
ls $(find . -type f -name "*.tex" -exec grep -IHl "TODO" {} \;)
then I get all TeX files which contain "TODO".
I tried many things in the DO function, such as \"$@\", '$@', and changing the quote marks, but the behavior stays the same.
So, what can I do to make find work inside the function?
|
There are a few issues in your code:
The *.tex pattern will be expanded when calling the function DO, if it matches any filenames in the current directory. You will have to quote the pattern as either '*.tex', "*.tex" or \*.tex when calling the function.
The ls is not needed. You already have both find and grep that are able to report the pathnames of the found files.
-name "$@" only works properly if "$@" contains a single item. It would be better to use -name "$1". For a solution that allows for multiple patterns, see below.
The function may be written
DO () {
# Allow for multiple patterns to be passed,
# construct the appropriate find expression from all passed patterns
for pattern do
set -- "$@" '-o' '-name' "$pattern"
shift
done
# There's now a -o too many at the start of "$@", remove it
shift
find . -type f '(' "$@" ')' -exec grep -qF 'TODO' {} ';' -print
}
Calling this function like
DO '*.tex' '*.txt' '*.c'
will make it execute
find . -type f '(' -name '*.tex' -o -name '*.txt' -o -name '*.c' ')' -exec grep -qF TODO {} ';' -print
This would generate a list of pathnames of files with those filename suffixes, if the files contained the string TODO.
To use grep rather than find to print the found pathnames, change the -exec ... -print bit to -exec grep -lF 'TODO' {} +. This will be more efficient, especially if you have a large number of filenames matching the given expression(s). In either case, you definitely do not need to use ls.
To allow the user to use
DO tex txt c
your function could be changed into
DO () {
# Allow for multiple patterns to be passed,
# construct the appropriate find expression from all passed patterns
for suffix do
set -- "$@" '-o' '-name' "*.$suffix" # only this line (and the previous) changed
shift
done
# There's now a -o too many at the start of "$@", remove it
shift
find . -type f '(' "$@" ')' -exec grep -qF 'TODO' {} ';' -print
}
| find inside shell function |
1,347,105,879,000 |
In an attempt to go around an annoying aspect of tmux, I have the following code in my .bashrc file:
alias emcs="command emacs"
# Fix emacs in tmux
emacs () {
if [ $TERM != "xterm" ]
then
TERM=xterm emacs "$@"
else
emacs "$@"
fi
return;
}
The alias is simply for easier access to the original emacs command.
The function is supposed to replace emacs . . . with TERM=xterm emacs . . ., regardless of the arguments listed afterward.
My problem is that when I run emacs, it hangs on the command line. If I change the function to "emaacs" or anything other than "emacs" then it works flawlessly. Why is it hanging when I'm using the actual name of the command, and what can I do to make it work?
(If you are wondering why I am doing this, it's because tmux changes the terminal to screen, which for some reason changes the emacs colors where comments and variable names are the same color.)
|
Take a look at your function. In this example, I have changed some names to indict the guilty, and struck some irrelevancies to your problem:
recurse () {
recurse "$@"
}
What do you think this will do when invoked?
To fix this, you can call out the explicit binary:
emacs () {
if [ $TERM != "xterm" ]
then
TERM=xterm /usr/bin/emacs "$@"
else
/usr/bin/emacs "$@"
fi
return;
}
Or you can rely on your path being properly set:
emacs () {
if [ $TERM != "xterm" ]
then
TERM=xterm command emacs "$@"
else
command emacs "$@"
fi
return;
}
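The same `command` trick can be verified with any external program. A minimal sketch, wrapping `date` instead of emacs since the pattern is identical:

```shell
date() { TZ=UTC command date "$@"; }   # `command` skips the function lookup
date +%Z    # prints: UTC
```

Without `command` (or an absolute path), the `date` inside the body would resolve to the function itself and recurse until the shell gives up.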
| How do I make a working emacs alias function? |
1,347,105,879,000 |
When I type the set command on my system, I get this extract of the output:
__colormgr_commandlist='
create-device
create-profile
delete-device
delete-profile
device-add-profile
device-get-default-profile
device-get-profile-for-qualifier
device-inhibit
device-make-profile-default
device-set-kind
device-set-model
device-set-serial
device-set-vendor
find-device
find-device-by-property
find-profile
find-profile-by-filename
get-devices
get-devices-by-kind
get-profiles
get-sensor-reading
get-sensors
get-standard-space
profile-set-filename
profile-set-qualifier
sensor-lock
sensor-set-options
'
__grub_script_check_program=grub-script-check
_backup_glob='@(#*#|*@(~|.@(bak|orig|rej|swp|dpkg*|rpm@(orig|new|save))))'
_xspecs=([freeamp]="!*.@(mp3|ogg|pls|m3u)" [cdiff]="!*.@(dif?(f)|?(d)patch)?(.@([gx]z|bz2|lzma))" [bibtex]="!*.aux" [rgview]="*.@(o|so|so.!(conf|*/*)|a|[rs]pm|gif|jp?(e)g|mp3|mp?(e)g|avi|asf|ogg|class)" [oowriter]="!*.@(sxw|stw|sxg|sgl|doc?([mx])|dot?([mx])|rtf|txt|htm|html|?(f)odt|ott|odm)" [chromium-browser]="!*.@(?([xX]|[sS])[hH][tT][mM]?([lL]))" [tex]="!*.@(?(la)tex|texi|dtx|ins|ltx|dbj)" [netscape]="!*.@(?([xX]|[sS])[hH][tT][mM]?([lL]))"
.../..
_xinetd_services ()
{
local xinetddir=/etc/xinetd.d;
if [[ -d $xinetddir ]]; then
local restore_nullglob=$(shopt -p nullglob);
shopt -s nullglob;
local -a svcs=($( printf '%s\n' $xinetddir/!($_backup_glob) ));
$restore_nullglob;
COMPREPLY+=($( compgen -W '${svcs[@]#$xinetddir/}' -- "$cur" ));
fi
}
dequote ()
{
eval printf %s "$1" 2> /dev/null
}
quote ()
{
local quoted=${1//\'/\'\\\'\'};
printf "'%s'" "$quoted"
}
quote_readline ()
{
local quoted;
_quote_readline_by_ref "$1" ret;
printf %s "$ret"
}
I checked all the files I could think of, such as /etc/profile, /etc/environment, and ~/.bashrc. I didn't find any script or code that generates these.
Do you have any advice on where this comes from?
|
For the functions, Bash can tell you where they came from:
$ help declare
...
-F restrict display to function names only (plus line
number and source file when debugging)
$ shopt -s extdebug
$ declare -F quote_readline
quote_readline 150 /usr/share/bash-completion/bash_completion
(I found this mentioned in an answer on stackoverflow.)
For the environment variables, there are a bunch of good ways to find them here: How to determine where an environment variable came from
Most of those functions seem related to command line completion, my Ubuntu system has them in /usr/share/bash-completion/ as shown above.
FWIW, __colormgr_commandlist seems related to completion too, there's a script containing it here.
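Building on that, here is a small sketch that reports the origin of every function currently defined in a bash session (the `declare -F` listing is re-queried per name so that extdebug can add the line number and source file):

```shell
shopt -s extdebug
declare -F | while read -r _ _ name; do
    declare -F "$name"       # prints: name  line-number  source-file
done
```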
| How to know where shell variables and functions are set? |
1,347,105,879,000 |
I'm working on a bash function to check if a tmux session is running. The function works but if no session is running it outputs "failed to connect to server". How do I output that error to null without appending 1>&2 to every function call?
tmux_checker()
{
if [ -z $(tmux ls | grep -o servercontrol) ]
then
tmux new -d -s servercontrol
fi
}
tmux_checker #> /dev/null 2>&1 or 1>&2
|
Redirect the output in the function itself:
tmux_checker()
{
if [ -z $(tmux ls 2>/dev/null | grep -o servercontrol) ]
then
tmux new -d -s servercontrol
fi
}
tmux_checker
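As a side note, bash (and POSIX sh) also lets you attach the redirection to the function definition itself, so every command in the body inherits it. A sketch with a generic failing command rather than tmux, since the mechanism is the same:

```shell
quiet_check() {
    ls /nonexistent-path        # this error is discarded by the redirect below
    echo "still runs"
} 2>/dev/null

quiet_check     # prints only: still runs
```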
| Bash Script Function Output /dev/null |
1,347,105,879,000 |
So I am trying to create a simple function to replace the standard who command with my own, similar to a function I use to replace the standard cd command.
Goal: Replace original who command with who "$@" | fgrep -v <user> in order to hide a user from it.
Similar example:
function cd () {
builtin cd "$@" && ls
}
The problem is that who is not a built in command like cd, so the above example won't work.
In case it matters, no, this isn't for malicious purposes, just learning.
|
Like thrig commented, the command to run external commands is command.
Your new function could look like:
function who() {
command who "$@" | fgrep -v user
}
| How do I reference an original command, so I can replace it with a function |
1,430,404,323,000 |
I'm fairly new to bash scripting, and wondered: what's the simplest way to run a script's functions by passing the function name as a parameter on the command line?
Example usage:
./myscript function1
./myscript function2
Example contents of myscript:
echo "Example myscript"
function1() {
echo "I am function number 1"
}
function2() {
echo "I am function number 2"
}
if [ $# -eq 0 ]; then
echo "Specify a function. E.g. function1"
exit 1;
fi
The script would only execute the specific function when called for on command line otherwise would show some example usage. It would need to be usable by cron and such processes as well as executed by a user.
|
All arguments passed to a shell script are stored in $@; you can loop through them:
#!/bin/bash
echo "Example myscript"
function1() {
echo "I am function number 1"
}
function2() {
echo "I am function number 2"
}
if [ $# -eq 0 ]; then
echo "Specify a function. E.g. function1"
exit 1;
fi
for func do
[ "$(type -t -- "$func")" = function ] && "$func"
done
[ "$(type -t -- "$func")" = function ] makes sure that we only make the function call when $func is a function.
Example:
$ ./test.sh function1
Example myscript
I am function number 1
$ ./test.sh function1 function2
Example myscript
I am function number 1
I am function number 2
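For reference, `type -t` classifies a name, which is what makes the guard in the loop work. A quick sketch of its possible answers (the exact result for `ls` depends on your setup):

```shell
f() { :; }
type -t f       # prints: function
type -t cd      # prints: builtin
type -t ls      # typically: file (or alias)
type -t nosuch  # prints nothing; exit status is nonzero
```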
| What's the best way to make a bash function in a script as a parameter when running via command line? |
1,430,404,323,000 |
I use the expression "overriding wrapper" to refer to a function foo that overrides some original function foo, and calls this original function (or a copy of it) during its execution.
I have found Stack Exchange threads about this (like this one), but in my case I have the additional requirement that both the original foo as well as the overriding foo are meant to be accessible through FPATH, and autoloaded. (The overriding version presumably would appear earlier in the search sequence, thus shadowing the original version.)
Is there a way to do this?
FWIW, in the particular scenario I'm dealing with, the overriding foo just assigns some none-standard values to some global variables that the original refers to for doing its thing.
|
You can use this function to load the code of a function from a file in the same way that autoload does it, without the restriction that the file name has to match the function name.
## load_from FILE FUNCTION_NAME
load_from () {
eval "$2 () { $(<$1) }"
}
Here's how the wrapper code looks like. $^fpath/somefunction(N) expands to the list of definitions of somefunction in the load path ($^fpath/somefunction expands to the list of /dir/somefunction for each /dir in $fpath, and the glob qualifier (N) restricts the expansion to existing files). Note that this only works if you have a single level of wrapper and the wrapper is in the fpath.
#autoload somefunction
local some_parameter=overridden_value
local autoload_files
autoload_files=($^fpath/somefunction(N))
load_from $autoload_files[2] somefunction_wrapped
somefunction_wrapped "$@"
| How to write an "overriding wrapper" for a function in FPATH? |
1,430,404,323,000 |
Hello ALL and thanks in advance.
I have searched the forum for my situation and have been unable to locate a solution. I've got a script that I am passing arguments/options/parameters to at the command line. One of the values has a space in it, which I have put in double quotes. It might be easier to provide an example. Forgive my usage of arguments/options/parameters.
$: ./test1.ksh -n -b -d "Home Videos"
My problem is setting a variable to "Home Videos" and it being used together. In my example, the -d is to specify a directory. Not all the directories have spaces, but some do in my case.
This is an example of the code I have that is not working as I expect it to.
#!/bin/ksh
Function1()
{
echo "Number of Args in Function1: $#"
echo "Function1 Args: $@"
SetArgs $*
}
SetArgs()
{
echo -e "\nNumber of Args in SetArgs: $#"
echo "SetArgs Args: $@"
while [ $# -gt 0 ]
do
case $1 in
-[dD])
shift
export DirectoryName=$1
;;
-[nN])
export Var1=No
shift
;;
-[bB])
export Var2=Backup
shift
;;
*)
shift
;;
esac
done
Function2
}
Function2()
{
echo "Directory Name: ${DirectoryName}"
}
Function1 $*
When I run this, I'm getting only Home for the DirectoryName instead of Home Videos. Seen below.
$ ./test1.ksh -n -b -d "Home Videos"
Number of Args in Function1: 5
Function1 Args: -n -b -d Home Videos
Number of Args in SetArgs: 5
SetArgs Args: -n -b -d Home Videos
Var1 is set to: No
Var2 is set to: Backup
Directory Name: Home
What I am expecting and I have not been able to get it to happen is:
$ ./test1.ksh -n -b -d "Home Videos"
Number of Args in Function1: 4
Function1 Args: -n -b -d "Home Videos"
Number of Args in SetArgs: 4
SetArgs Args: -n -b -d "Home Videos"
Var1 is set to: No
Var2 is set to: Backup
Directory Name: Home Videos <-- Without double quotes in the final usage.
Any help I can get on this will be greatly appreciated... I've tried escaping the double quotes, without any success.
Thank you for your time and efforts in helping me figure this out.
Regards,
Daniel
|
Using $* or $@ unquoted never makes sense.
"$*" is the concatenation of the positional parameters with the first character (or byte depending on the shell) of $IFS, "$@" is the list of positional parameters.
When unquoted, it's the same but subject to split+glob (or only empty removal with zsh) like any other unquoted parameter expansion, (some shells do also separate arguments in $* even if $IFS is empty).
Here you want to pass the list of arguments as-is to your function, so it's:
SetArgs "$@"
[...]
Function1 "$@"
Note that with ksh88, $IFS has to contain the space character (which it does by default) for that to work properly (a bug inherited from the Bourne shell, fixed in ksh93).
Also note that with some implementations of ksh (like older versions of zsh in ksh emulation),
export DirectoryName=$1
is a split+glob invocation. export is one of those commands in Korn-like shells whose arguments can end up evaluating shell code (through arithmetic evaluation in array indices), so it's one of those cases where it's important to quote variables to avoid introducing command injection vulnerabilities.
Example:
$ (exec -a ksh zsh-4.0.1 -c 'export x=$a' ksh 'foo psvar[0`uname>&2`]')
Linux
Note that [ $# -gt 0 ] is another split+glob invocation which doesn't make sense (less likely to be a problem at least with the default value of $IFS).
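To see the difference concretely, in any Bourne-like shell with the default $IFS:

```shell
set -- -d "Home Videos"
printf '<%s>' $* ; echo     # <-d><Home><Videos>   (word-split)
printf '<%s>' "$@"; echo    # <-d><Home Videos>    (arguments preserved)
```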
| Passing options/args/parameters with spaces from the script to a function within |
1,430,404,323,000 |
I want to add a function to my .vimrc to automatically get the text between double quotes.
If current line is
add_file -vhdl -lib work "../src/abc.vhd"
The function will get ../src/abc.vhd
|
The straightforward solution is yanking the inner double-quotes text object (as per @muru's comment): First move inside the quote with f", then yi".
Alternatively, you can use lower-level functions to extract a pattern from the current line:
:echo matchstr(getline('.'), '"\zs[^"]\+\ze"')
This doesn't change cursor position, doesn't clobber registers.
What's better depends on the use of the text.
| Vim Function: How to get text between double quotes? |
1,430,404,323,000 |
Bash can print the current function name:
$ bash -c 'g(){ echo $FUNCNAME; }; g'
g
However Dash cannot use FUNCNAME:
$ dash -c 'g(){ echo $FUNCNAME; }; g'
It is possible to access the current function name with Dash?
|
With any POSIX shell:
defun() {
eval "
$1() {
FUNCNAME=$1
$(cat)
}
"
}
defun g <<\}
printf '%s\n' "$FUNCNAME"
}
g
Note that you can't call a function defined by defun inside the body of another function defined by defun.
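A quick end-to-end check of the pattern under a plain POSIX sh (the function name hello is made up for the demonstration):

```shell
sh -c '
defun() { eval "$1() { FUNCNAME=$1; $(cat); }"; }
defun hello <<\}
printf "inside %s\n" "$FUNCNAME"
}
hello
'
# prints: inside hello
```

Because the here-document delimiter is quoted, the body reaches eval literally, so "$FUNCNAME" is only expanded when hello actually runs.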
| Bash FUNCNAME equivalent in Dash |
1,430,404,323,000 |
What am I doing wrong...? It's fine if I do this on the command line and then call it but not when I load it from .profile. Linux Mint Qiana, Bash 4.*, if it matters.
function android() { command /opt/android-studio/bin/studio.sh "$@" & disown ; }
export -f android
I've tried shortening the command, extending it, removing the semi-colon and using a newline instead... I guess I haven't found the happy compromise yet. No errors when run on the command line, and the function does work as currently listed above.
Notes: By "load" I mean to open a new terminal session with the same user whose .profile I am editing... and I am using things like function, command and disown because I started with a bare-bones version of this function but it wasn't working so I started adding and removing stuff to try and get the correct combination of things. Everything ran fine on the command line.
|
Traditionally bash functions are placed in ~/.bashrc as this is read by interactive bashes. ~/.profile is only read by login bashes. New windows usually dont run login bashes.
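A common arrangement, sketched below, is to keep the functions in ~/.bashrc and have ~/.profile source it so that login shells pick them up too (this mirrors the default Debian/Ubuntu skeleton files):

```shell
# in ~/.profile
if [ -n "$BASH_VERSION" ] && [ -f "$HOME/.bashrc" ]; then
    . "$HOME/.bashrc"
fi
```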
| Exporting a function from .profile/.bashrc |
1,430,404,323,000 |
I can define a function in bash and use it:
foo() { echo $1; }
foo test
But if I want to collect my functions in one bash script, they are all unavailable:
init.bash
#!/bin/bash
foo() { echo $1; }
export -f foo # This not helps
Using:
./init.bash && foo test # error here
Is there any way to export a script's functions to the parent scope?
Without writing to .bashrc, it's too global
Same as .bashrc but for current bash instance only...
|
You could source the file init.bash. No need to export the function in that file.
$ cat init.bash
foo() { echo $1; }
And use it:
$ . ./init.bash && foo test
test
Sourcing a file would execute commands from it in the current shell context. As such, the functions would be available in the parent.
export would set the attribute for a variable that would be applicable for the current shell and subshells. Not the parent shell. You need to define the variable in the current shell context.
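A short demonstration of the difference; the path /tmp/init.bash and the fresh `bash -c` shells are just for the illustration:

```shell
printf 'foo() { echo "$1"; }\n' > /tmp/init.bash

bash -c '. /tmp/init.bash && foo test'   # prints: test
bash -c 'bash /tmp/init.bash; foo test'  # fails: the child shell defined foo,
                                         # then exited, so foo is unknown here
```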
| Include a bash function into the parent script |