| date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,430,404,323,000 |
I wrote this shell script which sort of confused me a bit...
function func
{
# the variables received are
echo $0: $1 and $2
}
echo in the main script
func ball boy
The name of the script is shell.txt. I expected the result to be
func : ball and boy
However I got
./shell.txt: ball and boy
I have read that the positional parameters are "local" in nature, so how did this result come about?
|
In bash some variables are reserved, such as $0 which gives the command name -- in this instance it is the name of the script (hence ./shell.txt). Another example is $$ which will give the process ID. I believe that $FUNCNAME should print the name of the function being used.
Any variables in the format $1 $2 $3 etc will be whatever positional parameters you have passed to it.
If you removed the $0 variable and replaced it with $FUNCNAME you would get the output you are looking for.
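If it helps, here is a minimal sketch of that fix (any file name works):

```shell
#!/bin/bash
# Sketch: $FUNCNAME holds the name of the currently executing function,
# while $1 and $2 are the positional parameters passed to that function.
func() {
    echo "$FUNCNAME: $1 and $2"
}
func ball boy
# prints: func: ball and boy
```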
Here is a short script, let's save it as passingVariables.sh (we use .sh to show that it is a shell script -- purely superficial but it helps keep things straight):
#!/bin/bash
echo "The first word is $1 and the second word is $2"
Now if I ran it like this:
./passingVariables.sh apple orange
It would spit out the following:
"The first word is apple and the second word is orange"
| Nature of the positional parameters |
1,430,404,323,000 |
I wrote the following bash function:
function to_company()
{
scp ${1} [email protected]://home/username
}
When I do:
$ to_company code_diff.txt
It asks for password and then fails with following message:
scp: //home/username: not a regular file
I tried giving //home/username/ & //home/username/${1} in the script but got the same result.
If I manually execute the above command and pass code_diff.txt in place of ${1}, the file is transferred without issues.
What mistake am I making?
|
Not that it should matter, but the remote path should be /home/username (single forward slash). And as sputnick pointed out, quote your ${1} with "${1}".
I've copied the same command and it works when I test it, so I suspect (given the "not a regular file" error) that you have an extra space between [email protected]: and //home/username.
Another thing to try is to add debugging (by supplying -v to the scp command) and see if that gives any clues:
function to_company()
{
scp -v "${1}" [email protected]:/home/username
}
| Bash function to scp a file not working |
1,430,404,323,000 |
I want to find the last index of any character in the [abc] set in the abcabc string but the search should start from the end of the string:
" Returns the 0th index but I want the 5th.
let a=match('abcabc', '[abc]')
I skimmed through Vim's "4. Builtin Functions" (:h functions), but the only function that looked promising, reverse(), only operates on lists. That's a limitation I don't understand, since functions like len() were designed to work with strings, numbers, and lists alike.
To solve the problem I came up with my following function:
function! s:Rvrs(str)
let a=len(a:str)
let b=a - 1
let c=''
while b >= 0
let c.=a:str[b]
let b-=1
endwhile
return c
endfunction
So I can say let a=match(s:Rvrs('abcabc'), '[abc]').
|
I looked around, but did not find any built in function that looked like it would do what you want.
You might find the following functions useful though: (variations included for overlapping, and non-overlapping matches starting from the beginning or the end of the string; all of them support multi-character patterns with some restrictions or limitations around uses of \zs and/or \ze)
function! s:AllOverlappableMatches(str, pat)
" Restriction: a:pat should not use \ze
let indicies = []
let index = 0
let splits = split(a:str, '\ze'.a:pat, 1)
for segment in splits
if len(segment) == 0
call add(indicies, index)
else
let index += len(segment)
endif
endfor
return indicies
endfunction
function! s:AllOverlappableMatchesFromEnd(str, pat)
" Restriction: a:pat should not use \ze
return reverse(s:AllOverlappableMatches(a:str, a:pat))
endfunction
function! s:AllNonoverlappingMatches(str, pat)
" If a:pat uses \zs, the returned indicies will be based on that
" position.
" If a:pst uses \ze, subsequent matches may re-use characters
" after \ze that were consumed, but not 'matched' (due to \ze)
" in earlier matches.
let indicies = []
let start = 0
let next = 0
while next != -1
let next = match(a:str, a:pat, start)
if next != -1
call add(indicies, next)
let start = matchend(a:str, a:pat, start)
endif
endwhile
return indicies
endfunction
function! s:AllNonoverlappingMatchesFromEnd(str, pat)
" If a:pat uses \zs, the returned indicies will be based on that
" position.
let str = a:str
let indicies = []
let start = len(a:str) - 1
while start >= 0
let next = match(str, '.*\zs' . a:pat, start)
if next != -1
call add(indicies, next)
let str = str[ : next - 1]
endif
let start -= 1
endwhile
return indicies
endfunction
echo s:AllOverlappableMatchesFromEnd('abcabc', '[abc]')
" -> [5, 4, 3, 2, 1, 0]
echo s:AllOverlappableMatchesFromEnd('dabcabc', '[abc]')
" -> [6, 5, 4, 3, 2, 1]
echo s:AllOverlappableMatchesFromEnd('dab - cabc', '[abc]')
" -> [9, 8, 7, 6, 2, 1]
echo s:AllOverlappableMatchesFromEnd('dab - cabce', '[abc]')
" -> [9, 8, 7, 6, 2, 1]
echo s:AllOverlappableMatchesFromEnd('dab - cabc', '[abc]\{2}')
" -> [8, 7, 6, 1]
echo s:AllOverlappableMatches('dab - cabc', '[abc]\{2}')
" -> [1, 6, 7, 8] 0123456789
echo s:AllNonoverlappingMatches('dab - cabc', '[abc]\{2}')
" -> [1, 6, 8] 0123456789
echo s:AllNonoverlappingMatchesFromEnd('dab - cabca', '[abc]\{2}')
" -> [9, 7, 1] 0123456789A
echo s:AllNonoverlappingMatchesFromEnd('ab - cabca', '[abc]\{2}')
" -> [8, 6, 0] 0123456789
echo s:AllNonoverlappingMatchesFromEnd('abcabc', '[abc]\{2}')
" -> [4, 2, 0] 012345
echo s:AllNonoverlappingMatchesFromEnd(' ab c abcd', '[abc]\{2}')
" -> [7, 1] 0123456789
echo s:AllNonoverlappingMatchesFromEnd('abcabc', '[abc]\{2}')
" -> [4, 2, 0] 012345
echo s:AllNonoverlappingMatches( 'abcabcabbc', 'abc')
" -> [0, 3] 0123456789
echo s:AllNonoverlappingMatchesFromEnd( 'abcdabcabbc', 'abc')
" -> [4, 0] 0123456789A
" A multi-character, overlappable pattern
echo s:AllOverlappableMatchesFromEnd( 'aaaabcaaac', 'aaa')
" -> [6, 1, 0] 0123456789
| How to reverse-match a string in the Vim programming language? |
1,430,404,323,000 |
Like HERE I have a file.csv with numbers in quotes:
"0.2"
"0.3339"
"0.111111"
To round the number (3 decimals) this solutions works great:
printf "%.03f\n" $(sed 's/\"//g' file.csv)
But now I want to store sed 's/\"//g' file.csv as a variable
var_sed=$(sed 's/\"//g' file.csv);
printf "%.03f\n" ${var_sed}
Doesn't work. The output is
zsh: bad math expression: operator expected at `0.3339\n0.1...'
0.000
So the problem is that var_sed passes \n to printf "%.03f\n".
The only solution I know is:
var_sed=$(sed 's/\"//g' file.csv);
printf "%.03f\n" $(echo ${var_sed})
Maybe there is a cleaner way.
Also I want to put printf "%.03f\n" in a function like this:
printf_function () { printf "%.03f\n" $1;}
But this is not working:
printf_function () { printf "%.03f\n" $1;};
var_sed=$(sed 's/\"//g' file.csv);
printf_function ${var_sed}
also printf_function $(echo ${var_sed}) is not working.
____________________________________________________________
Why I try to save the file in a variable?
____________________________________________________________
The truth is that I actually want to put the sed command in a function. I'm sorry I put this as a variable to (try to) simplify the problem.
My script is
sed_01 () { sed 's/\"$// ; s/^\"// ; s/something_else//g' $1;};
printf_03 () { printf "%.03f\n" $1;};
printf_03 "$(sed_01 file.csv)"
as mentioned in the comments below, it works in bash.
output:
"0.200"
"0.334"
"0.111"
but in zsh the output: is:
printf_03: bad math expression: operator expected at `0.3339\n0.1...'
0.000
|
printf "%.03f\n" $(sed 's/\"//g' file.csv)
var_sed=$(sed 's/\"//g' file.csv);
printf "%.03f\n" ${var_sed}
This relies on word splitting to give the numbers to printf as separate arguments. The thing is that zsh has dropped automatic word splitting as a silly remnant of the past, and doesn't do it for variable expansions. But it does do it for command substitutions, for whatever reason.
There's also no splitting when assigning from the command substitution to a regular scalar variable, so the newlines in the output of sed get saved in the variable.
That's why your code works with the command substitution, but not with the intermediate variable.
If you do need to load the values in a variable first (and you don't for just printing), consider storing them in an array (zsh):
lines=("${(f)$(sed 's/\"//g' file.csv)}")
printf "%.03f\n" $lines
in Bash, you'd use readarray/mapfile:
mapfile -t lines < <(sed 's/\"//g' file.csv)
printf "%.03f\n" "${lines[@]}"
Or if you want something that mostly works in either, you'll have to rely on word splitting again:
lines=( $(sed 's/\"//g' file.csv) )
printf "%.03f\n" "${lines[@]}"
Note the parentheses, they make that an array assignment, so word splitting again happens. ("mostly" including the usual caveats of a modified IFS affecting the behaviour, and filename globbing also happening.)
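For reference, the Bash variant can be put together as a self-contained sketch (the sample file is created at a made-up temporary path):

```shell
#!/usr/bin/env bash
# Create a sample file like the one in the question (hypothetical path),
# strip the quotes with sed, read the cleaned lines into an array, then
# hand each element to printf as a separate argument.
tmpfile=$(mktemp)
printf '"0.2"\n"0.3339"\n"0.111111"\n' > "$tmpfile"
mapfile -t lines < <(sed 's/"//g' "$tmpfile")
printf "%.03f\n" "${lines[@]}"
# prints: 0.200, 0.334, 0.111 (one per line)
rm -f "$tmpfile"
```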
Then again, something like this might be easier done with just AWK:
$ awk '{gsub(/"/, ""); printf "%.03f\n", $1}' file.csv
0.200
0.334
0.111
| printf "%.3f" ${variable with newlines} - error with \n |
1,430,404,323,000 |
I have this function:
cyan=`tput setaf 6`
reset=`tput sgr0`
function Info()
{
echo "${cyan}$1${reset}"
}
And I use it in my other scripts as simple as Info some message.
However, when I use it to print all items of an array, it only prints the first item:
Info "${ArrayVariable[@]}" # this only prints the first item
echo "${ArrayVariable[@]}" # this prints all of them
How can I preserve all of the variables when using this syntax and this function?
|
In your function, $1 expands to the first argument. When you call your function using
Info some message
... then the value of $1 is some, while the value of $2 is message.
You can keep your function as it is and instead call it with
Info 'some message'
or
Info "$mymessage"
or
Info "${mymessagearray[*]}"
Quoting the whole message ensures that the message string is the first argument and that it, therefore, will be available in $1 inside the function.
In the case of the array mymessagearray above, using [*] in place of [@] gives you a single string with all the elements of the array delimited by the first character of $IFS (a space by default). This single string is quoted (the double quotes in the code), so it's all delivered to your function in $1.
The other way to do it is to expand $* in the string you are printing. The value "$*" is the value of all arguments delimited by the first character of $IFS (a space by default).
Info () {
echo "$cyan$*$reset"
}
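A stripped-down sketch of this variant (tput colors omitted) shows the joining behavior:

```shell
#!/usr/bin/env bash
# "$*" joins all arguments with the first character of IFS (a space by
# default), so every array element passed to the function reaches the output.
Info() {
    echo "$*"
}
arr=(some message with several words)
Info "${arr[@]}"
# prints: some message with several words
```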
Personally, I would opt for using quotes around the arguments instead, ensuring that the message is printed as is, without splitting it on whitespace or performing filename globbing on it (which would happen if you called the function with an unquoted value).
| How to preserve parameter expansion passed to a function? |
1,430,404,323,000 |
I have a directory with files file1.c, file2.c and file3.c. The command find outputs:
$find -name "*.c"
./file1.c
./file2.c
./file3.c
Then I would like to use find without the quotes around *.c. For this I use set -f:
$echo $- # check current options
himBHs
$set -f
$echo $- # check that f option is set
fhimBHs
$find -name *.c
./file1.c
./file2.c
./file3.c
$set +f # unset f option
I tried the same commands inside a function in .bashrc:
find() {
set -f
eval command find $@
set +f
}
but testing it gave the error:
$ . ~/.bashrc && find -name *c
find: paths must precede expression: file1.c
Usage: find [-H] [-L] [-P] [-Olevel] [-D help|tree|search|stat|rates|opt|exec] [path...] [expression
What is the cause for this error in the function? find version: GNU 4.6.0
|
You didn't say, but you must be calling the function like this:
find -name *.c
But globbing hasn't been turned off yet, so the shell expands the *.c before the call. So the find command sees '-name' followed by three arguments, thus the error message.
You could use a backslash instead of quotes.
find -name \*.c
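An alternative sometimes used (not part of the original answer; sketched here assuming bash) is to let an alias run set -f before the rest of the command line is expanded, since alias expansion happens at parse time:

```shell
#!/usr/bin/env bash
# Sketch: the alias prepends `set -f`, which executes before the glob on the
# same command line is expanded; the function restores globbing afterwards.
# (Scripts need `shopt -s expand_aliases`; interactive shells have it on.)
shopt -s expand_aliases
find() {
    command find "$@"
    set +f
}
alias find='set -f; find'

dir=$(mktemp -d)
cd "$dir" && touch file1.c file2.c file3.c
find . -name *.c    # the unquoted glob reaches find unexpanded
```

Note that set +f only restores globbing when find runs in the current shell; in a pipeline each side runs in a subshell, so prefer redirecting output over piping here.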
| set -f inside function |
1,430,404,323,000 |
The other question asks about the limit on building up commands by find's -exec ... {} +. Here I'd like to know how those limits compare to shells' inner limits. Do they mimic system limits or are they independent? What are they?
I'm a Bash user, but will learn of any Unix and Linux shells if only out of curiosity.
|
Does the system-wide limit on argument count apply in shell functions?
No, that's a limit on the execve() system call used by processes to execute a different executable to replace the current one. That does not apply to functions which are interpreted by the current shell interpreter in the same process. That also doesn't apply to built-in utilities.
execve() wipes the memory of the process before loading and starting the new executable. The whole point of functions and builtins is for that not to happen so the function can modify the variables and other parameters of the shell, so they will typically not use execve().
Do they mimic system limits
No.
or are they independent?
Yes.
What are they?
As much as the resource limits for the current shell process allows.
The bash manual says:
There is no maximum limit on the size of an array, nor any requirement that members be indexed or assigned contiguously.
This seems to apply, since function arguments are an internal shell array (not passed to the exec kernel function).
Historically, ksh88 and pdksh had a low limit on array indices, but not on number of function arguments. You could only access $1, ... $9 directly in the Bourne shell, but you could still pass as many arguments as you'd like to functions and for instance loop over all of them with for arg do... or pass them along to another function or builtin with "$@".
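As a quick sanity check in bash (the 500,000 is arbitrary; a few megabytes of arguments is more than a typical 2 MiB ARG_MAX would allow across execve()):

```shell
#!/usr/bin/env bash
# Pass ~500,000 positional parameters to a shell function: it works, because
# the call never leaves the shell process and no execve() is involved.
count_args() { echo "$#"; }
set -- $(seq 1 500000)
count_args "$@"
# prints: 500000
```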
| Does the system-wide limit on argument count apply in shell functions? |
1,430,404,323,000 |
So I'm creating a function that does a for loop in all the files in a directory as a given argument and prints out all the files and directories:
#!/bin/bash
List () {
for item in $1
do
echo "$item"
done
}
List ~/*
However when I run the script it only prints out the first file in the directory.
Any ideas?
|
If you're trying to iterate over files in a directory you need to glob the directory like so:
#!/bin/bash
List () {
for item in "${1}/"*
do
echo "$item"
done
}
Then call it like:
$ List ~
Alternatively, if you want to pass multiple files as arguments you can write your for loop like this:
List () {
for item
do
echo "$item"
done
}
Which can then be called as:
$ List ~/*
What's wrong with your current function:
When you call it with a glob, the shell passes each file in the directory as a separate argument. Let's say your home directory contains file1, file2, and file3. When you call List ~/*, you are essentially calling:
List ~/file1 ~/file2 ~/file3
Your for loop then uses only positional parameter 1, so it effectively runs for item in ~/file1, and the other positional parameters are unused.
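The difference can be sketched side by side (hypothetical function names):

```shell
#!/usr/bin/env bash
# first_only mimics the question's `for item in $1` (first argument only);
# all_args mimics the fixed `for item` (iterates over all positional parameters).
first_only() { for item in $1; do echo "$item"; done; }
all_args()   { for item;      do echo "$item"; done; }
first_only a b c   # prints: a
all_args a b c     # prints: a, b, c on separate lines
```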
Also thanks Ilkkachu for pointing out that you also forgot a / in your hashbang, which I completely missed.
| For loop not working in a function with arguments |
1,430,404,323,000 |
In bash, there is a shell builtin command named caller whose function is described as follows by the help command:
Return the context of the current subroutine call
But, what is a context of a subroutine call?
Could you explain this to non-programmers and what it is good for knowing it?
|
Taken directly from the bash man page:
caller ... displays the line number and source filename of the current subroutine call.
In simple terms, it tells you where you just came from. Think of it like the fairy tale where two kids are exploring the woods and leaving breadcrumbs along the path they take. The caller builtin points them at the last breadcrumb they dropped so they can get back to it. (OK, the kids are, after all, kinda stupid.) Repeated use of this builtin can help lead you all the way back to the command you actually ran, which ended up N levels deep in function calls.
It's basically a recording of the answers to "what function was I in before I got to this one?" at every level of function call.
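A tiny sketch shows the shape of the output; the exact line number and file name depend on the call site:

```shell
#!/usr/bin/env bash
# `caller` with no arguments prints "LINE FILE" for the call site one frame up.
where_am_i() {
    caller
}
where_am_i    # prints something like: 6 ./demo.sh
```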
| what is a context of a subroutine? |
1,430,404,323,000 |
I have gotten quite prolific with the use of the aliases, especially with all the different git commands and their order and interdependencies etc. So, I've created a few alias that run more complex scripts.
alias stash='f() { .... }; f'
Overall very straightforward. However, as a bit of a purist in my development style, I like "well formatted" code. The form is sometimes as important as the function. So, with a simple alias:
alias gca='git commit --amend '
I have no problem listing them straight up in the .bash_aliases file. But for some of the multi-command aliases I would sort of like them to be separate.
Typing help I see
function name { COMMANDS ; } or name () { COMMANDS ; }
but the only way I currently comprehend using the function is in the guise of an alias. But given the colorization of git diff --name-status post I commented on, I can incorporate a script in an SH file and for that example I passed data into it via xargs.
But is it possible (likely) to create "functions" via a script, so instead of them being listed as an "alias" they were listed as an actual function stored in an sh file?
hypothetical:
alias stash='f() { if [[ -z $1 ]]; then git stash list; else git stash $1; fi; }; f'
instead a stash.sh file would have:
function stash
{
if [[ -z $1 ]]; then
git stash list;
else git stash $1;
fi;
}
is something like this possible so in both cases I would simply type "stash" at the prompt but how they are defined is quite different?
|
You could source into your environment a list of needed functions.
Create a file ~/.bash_functions and source it from ~/.bashrc,
the same way as ~/.bash_aliases is sourced:
if [ -f ~/.bash_functions ]; then
. ~/.bash_functions
fi
Then, define as many (and as complex) functions you wish.
You could define aliases (inside ~/.bash_aliases to keep things in order) for the functions. But that is not really needed as you could call sourced functions directly. By defining:
stash() {
if [[ -z $1 ]]; then
git stash list;
else
git stash $1;
fi;
}
You could call it simply by stash or stash this, no alias needed.
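The mechanism can be sketched self-contained (a temporary file stands in for ~/.bash_functions, and a trivial greet function for the git one):

```shell
#!/usr/bin/env bash
# Write a function definition to a file, source it, and the function becomes
# callable by name -- no alias involved.
funcs=$(mktemp)
cat > "$funcs" <<'EOF'
greet() { echo "hello $1"; }
EOF
. "$funcs"
greet world     # prints: hello world
type -t greet   # prints: function
rm -f "$funcs"
```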
| Bash (Git) - Functions, Alias, and SH script files |
1,430,404,323,000 |
vishex ()
{
echo '#!/bin/bash' > $1;
chmod +x $1;
vi $1
}
The goal of the above function is to have an alias for fast and comfortable creation of bash scripts. I would like that at the opening of the file the cursor would be not standing in the Shebang line but on a line below. I've tried something like echo 'blabla\n', echo "blala\n", printf "blala\n" without any result.
|
Use this:
vishex ()
{
[ -e "$1" ] || echo -e '#!/bin/bash\n\n' > "$1";
chmod +x "$1";
vi "+normal G" +startinsert "$1"
}
[ -e "$1" ] checks if the script already exists. If yes echo will not override it.
-e in echo enables interpretation of backslash escapes, such as \n for a newline. Then it inserts 2 newlines after the shebang line.
+normal G runs the ex command G which jumps to the last line in the file.
+startinsert switches directly to insert mode (you can also leave that, as it's not in the question mentioned).
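The non-interactive half of the function can be exercised on its own (vi left out; a made-up temporary path stands in for the script name, and echo -e assumes bash):

```shell
#!/usr/bin/env bash
# Only the guard + shebang-writing part of vishex: create the file with
# trailing blank lines if it doesn't exist yet, and make it executable.
target=$(mktemp -u)    # -u: just generate a name, don't create the file
[ -e "$target" ] || echo -e '#!/bin/bash\n\n' > "$target"
chmod +x "$target"
head -n 1 "$target"    # prints: #!/bin/bash
rm -f "$target"
```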
So, when executing vishex script it looks as follows:
#!/bin/bash
<- cursor is here
~
[...]
~
-- INSERT -- 3,1 All
| Cursor position in vi at opening of the file |
1,430,404,323,000 |
I've got a couple of instances of script.sh running in parallel, doing the same thing, in the background.
I'm trying to use a function to kill all the current running scripts when executed.
So, for example, ./script.sh -start will start the script (which I can run few in parallel) and when I execute ./script.sh -kill will kill all instances of the script.
f() {
procName=`basename $0`
pidsToKill=`ps -ef | grep $procName | grep -v grep | awk '{ print $2 }'`
if [[ $pidsToKill ]]; then
for i in $pidsToKill; do
kill -9 $i
done
echo "Killed running scripts."
else
echo "No opened scripts to kill"
fi
}
For some reason, sometimes it kills a couple of the scripts and sometimes returns an error.
I've figured a way to solve this, but I want to understand why this one doesn't work. Any ideas?
|
The script may be killing itself: the grep $procName also matches the instance of the script that is doing the killing, so it can die before finishing the loop. You might try running the for loop inside a separate subshell, ( for i in $pidsToKill; do kill -9 $i; done; echo All dead. ) &, and then exit your script.
| Killing multiple instances of the script from the script itself |
1,430,404,323,000 |
I have a script I made for work that calls a function that takes an argument. I use the arguments to ssh into our servers. My question is: is there a way to call the function so that if/when I get disconnected from our servers, it will automatically call the script again? So for example, I can ssh into one of our servers. We have a reverse-ssh tunnel set up, so after an hour the connection closes. I want it so that after the disconnection, the script will be called again, prompting for a hostname.
#!/bin/bash
echo "Provide hostname: "
read host
createSSHFunction()
{
ssh "$1"
}
createSSHFunction $host
|
You'll need to loop infinitely, but prevent the script from running again when you are done.
while ((1)); do script.sh; sleep 3; done
The three second sleep gives you an opportunity to break the loop. When you're done with ssh, exit. In three seconds, the script will start again. If you don't want that to happen, Ctrl-C will stop the loop.
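Another option (a sketch, not from the original answer) folds the prompt into the loop itself; end of input on the read, or Ctrl-C during the sleep, breaks out:

```shell
#!/usr/bin/env bash
# Re-prompt for a hostname after every disconnect; a failing read (EOF) ends
# the loop so it can't spin forever without input.
reconnect_loop() {
    while true; do
        read -rp "Provide hostname: " host || break
        ssh "$host"
        sleep 3
    done
}
# reconnect_loop    # uncomment to use; with no input it exits immediately
```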
| Call the script after disconnecting from server |
1,430,404,323,000 |
I want to generate a set of functions in my shell in a for loop, but I can't see how to access a variable inside the function body of the function I'm creating.
In essence, I would like the following
for f in foo ; do $f() { echo $f } ; done
to generate a function foo() { echo foo }, but instead I get foo() { echo $f }.
I've read the section on parameter expansion flags in the zsh manual, but whatever I write after $f() seems to be put verbatim into the generated function's body, so I had no luck.
Maybe this is the wrong way to do this? If not I want to know how I can expand $f inside the function when it's generated.
|
That's typically a case where you need eval.
$ for f (foo) eval "\$f() echo ${(qq)f}"
$ which foo
foo () {
echo 'foo'
}
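For comparison, the same technique in Bash (using printf %q as a stand-in for zsh's (qq) quoting flag):

```shell
#!/usr/bin/env bash
# Expand $f at definition time by building the function body inside an eval;
# printf %q quotes the expanded value safely.
for f in foo bar; do
    eval "$f() { echo $(printf '%q' "$f"); }"
done
foo   # prints: foo
bar   # prints: bar
```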
| Expand variable in function definition in zsh |
1,430,404,323,000 |
mv or cp commands both expect source and destination as arguments.
In case you want to undo the change you made, or just change the source and destination you supplied before, what is the quickest way to do this?
I thought of creating a function that takes command src dest and switching src and dest, but I was wondering if there is a better way to do this.
|
Not a way using cp and mv, but using a feature of GNU bash with readline with the usual (emacs-like) keybindings:
Just like in emacs, you can transpose words with M-t (meta-, alt-), so if you're using bash, undoing mv file_a file_b could be as simple as pressing the up arrow and hitting M-t, which changes the above to mv file_b file_a.
(Now this isn't even a proper solution; I don't know whether this will work when the arguments to mv have spaces or other less usual, special characters. And, just like Michael Mrozek said, it's not possible to undo cp this way. For a real undo, you also have to define exactly what you mean by "undo": for example, what if cp overwrites an existing file? There will be no way to undo it unless you wrap cp in something that keeps backup copies!)
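The function idea from the question could be sketched like this (swap2 is a made-up name, and it only handles a command plus exactly two arguments):

```shell
#!/usr/bin/env bash
# Run a command with its two arguments swapped, e.g. to redo an mv the other
# way around: `swap2 mv file_b file_a` runs `mv file_a file_b`.
swap2() {
    "$1" "$3" "$2"
}
swap2 echo first second   # prints: second first
```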
| Switching source and destination (or undoing the mv, cp operation) |
1,430,404,323,000 |
I am trying to make the zsh prompt reload a function every time a new prompt loads. The function outputs a version of pwd but shorter: if the output of pwd was ~/Downloads/Folder, the function would output ~/D/Folder. The function works but does not reload if I change directories. This is an issue with zsh and not with the function, because the function works fine in ksh and csh. I do not use oh-my-zsh. The function is _collapsed_pwd and the file is $SHELLDIR/cpwdrc.
Here is my .zshrc file
export SHELLDIR="$XDG_CONFIG_HOME/shells"
export ZSH="$SHELLDIR/zsh"
export HISTFILE="$XDG_DATA_HOME"/zsh/history
autoload -U +X compinit && compinit
fpath=($ZSH/plugins/zsh-completions/src $fpath)
compinit -d $XDG_CACHE_HOME/zsh/zcompdump-artix-5.8
source $SHELLDIR/aliasrc
source $SHELLDIR/colorsrc # contains color substitutions
source $SHELLDIR/cpwdrc # contains _collapsed_pwd
source $ZSH/plugins/fzf-tab/fzf-tab.zsh
source $ZSH/plugins/zsh-autosuggestions/zsh-autosuggestions.zsh
setopt prompt_subst
PS1=$(printf "${BOLD}${BLUE}%s${NORM}@%s:[${BLUE}%s${NORM}]:$ " $USER $(hostname) $(_collapsed_pwd) )
Here is the code for the function _collapsed_pwd
#!/bin/sh
## Collapsed Directory
_collapsed_pwd() {
pwd | perl -pe '
BEGIN {
binmode STDIN, ":encoding(UTF-8)";
binmode STDOUT, ":encoding(UTF-8)";
}; s|^$ENV{HOME}|~|g; s|/([^/.])[^/]*(?=/)|/$1|g; s|/\.([^/])[^/]*(?=/)|/.$1|g
'
}
|
You set the content of the prompt once and for all when .zshrc is processed. There is nothing in your code that says to change the content of the prompt when the current directory changes.
One solution is to put the code to change the prompt in a chpwd hook. Remove setopt prompt_subst since you won't be doing any evaluation of the content of PS1.
function set_prompt {
PS1=$(printf "${BOLD}${BLUE}%s${NORM}@%s:[${BLUE}%s${NORM}]:$ " $USER $(hostname) "${$(_collapsed_pwd)//\%/%%}" )
}
chpwd_functions+=(set_prompt)
cd .
Explanations:
cd . triggers the chpwd hook once when zsh starts so as to set the prompt initially.
The double quotes around the command substitution prevents it from being split into separate word if the output contains whitespace.
The ${…//\%/%%} substitution around the output of _collapsed_pwd changes % to %% because % would be interpreted as a prompt escape¹.
Alternatively, do set the prompt_subst option and set PS1 to a string that contains code which will be evaluated each time the prompt is displayed.
setopt prompt_subst
PS1='$(printf "${BOLD}${BLUE}%s${NORM}@%s:[${BLUE}%s${NORM}]:$ " $USER $(hostname) "${$(_collapsed_pwd)//\%/%%}" )'
You can simplify this a lot by using zsh's built-in features to include variable data in the prompt. To start with, here's a prompt that displays the last two components of the current directory:
unsetopt prompt_subst
PS1='%B%F{blue}%n%f%b@%m:[%F{blue}%2~%f]:%(!.#.$) '
To abbreviate directory components, I think you have to run some zsh code, either through a chpwd hook or through prompt_subst. To avoid complexity related to expansion, use a chpwd hook to set psvar and %v to refer to psvar in the prompt string.
unsetopt prompt_subst
PS1='%B%F{blue}%n%f%b@%m:[%F{blue}%1v%f]:%(!.#.$) '
function abbreviate_pwd {
psvar[1]=${(%):-%~}
while [[ $psvar[1] =~ /[^/][^/]+/ ]]; do
psvar[1]=${psvar[1][1,MBEGIN+1]}${psvar[1][MEND,-1]}
done
}
chpwd_functions+=(abbreviate_pwd)
cd .
¹ For extra robustness, $USER (see also $USERNAME automatically set by zsh) and $(hostname) (see also $HOST automatically set by zsh) should also be protected, but they normally don't contain any of the problematic characters, assuming that you don't change the value of IFS to include a character that appears in the host name.
| zsh does not reload functions in the prompt |
1,430,404,323,000 |
I make a lot of presentations that involve many screenshots, and I want an easier way to organize them by project. I'm trying to write a simple function that changes the location where screenshots are saved to the current working directory.
I've written a function in and saved it to ~/.my_custom_commands.sh.
That file currently looks like this:
#!/bin/bash
# changes location of screenshot to current directory
function shoothere() {
defaults write com.apple.screencapture location '. '
killall SystemUIServer
echo 'foo'
}
When I navigate to the directory where I want to save my screenshots and run the function, it does print foo but screenshots do not appear anywhere.
I've also tried replacing '. ' with $1 and running it as $ shoothere ., at which point I get an error Rep argument is not a dictionary. Defaults have not been changed. Googling this error message has gotten me precisely nowhere.
I'm on a Mac running Mojave 10.14.4.
|
This slightly different syntax appears to work for me; the problem is probably that . isn't correctly handled by the service macOS has running in the background:
~/foo $ defaults write com.apple.screencapture location "$(pwd)"
~/foo $ defaults read com.apple.screencapture
{
"last-messagetrace-stamp" = "576625649.15493";
location = "/Users/[redacted]/foo";
}
To reset it back to default, you can use this:
$ defaults delete com.apple.screencapture location
killall SystemUIServer is not necessary at all, as soon as I ran the defaults write command, I was able to observe newly-captured screenshots appearing in the correct directory.
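Folding that back into the original function gives something like this (macOS-only, so the sketch only defines the function without running it):

```shell
#!/usr/bin/env bash
# macOS-only: the `defaults` command won't exist elsewhere, so this block
# just defines the fixed function; the killall line is dropped as unnecessary.
shoothere() {
    defaults write com.apple.screencapture location "$(pwd)"
}
type -t shoothere   # prints: function
```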
| MacOS: Changing screen capture location |
1,430,404,323,000 |
Consider this function:
function add_one(in_ar, each) {
for (each in in_ar) {
in_ar[each]++
}
}
I would like to modify it such that if a second array is provided, it would be
used instead of modifying the input array. I tried this:
function add_one(in_ar, out_ar, each) {
if (out_ar) {
for (each in in_ar) {
out_ar[each] = in_ar[each] + 1
}
}
else {
for (each in in_ar) {
in_ar[each]++
}
}
}
BEGIN {
split("1 2 3 4 5", q)
add_one(q, z)
print z[3]
}
but I get this result:
fatal: attempt to use scalar `z' as an array
|
There are two problems in your script: the variable z isn't initialized, and the test if (out_ar) in your second code snippet is not suited for arrays.
To solve the first problem, you need to assign an array element (like z[1]=1) since there is no array declaration in awk. (You can't use similar statement like declare -A as you would do in bash).
The second problem can be solved, provided you're using GNU awk, with the function isarray() or typeof().
So your code should look like this:
function add_one(in_ar, out_ar, each) {
if (isarray(out_ar)) {
for (each in in_ar) {
out_ar[each] = in_ar[each] + 1
}
}
else {
for (each in in_ar) {
in_ar[each]++
}
}
}
BEGIN {
split("1 2 3 4 5", q)
z[1]=1
add_one(q, z)
print z[3]
}
I recommend looking at the GNU awk manual's documentation on arrays and on type-checking functions such as isarray().
| Detect optional function argument (array) |
1,430,404,323,000 |
Trying to run:
function which_terminal {
return (ps -p$PPID | awk "'NR==2'" | cut -d "' '" -f 11)
}
inside .zshrc to get a varible with which terminal emulator is running so I can configure different themes for different terminal emulators.
when I run this commmand in command line I get exactly the emulator being used. But when I try to add this to the file I get a error like:
which_terminal:1: no matches found: (ps -p16632 | awk 'NR==2' | cut -d ' ' -f 11)
can't find where it's wrong, if anyone can help
|
In the shell, the return value of a function is like the exit status of a command: you can only return a small integer value indicating success (0) or a failure code (> 0).
This status has nothing to do with the output of a command. To run a command and store its output into a variable, use command substitution. To run a command in a function and gets its output as the output of the function, just run the command.
function which_terminal {
ps -p$PPID | awk 'NR==2' | cut -d ' ' -f 11
}
Note that parsing the output of ps is unreliable and overkill. (Plus you got it wrong in the original: you're passing 'NR==2' to awk and ' ' to cut, both of which are invalid arguments; you need either single quotes or double quotes around each of these, not both.) The ps command has options to print out whatever field you want, e.g. comm for the name of the executable (truncated to 16 characters under Linux) or args for the full command line with arguments (and sometimes the full path to the command, depending on how it was invoked).
function which_terminal {
ps -p$PPID -o comm=
}
Or, to store the output into a variable:
parent_process_command=$(ps -p$PPID -o comm=)
The reason for the error you're getting is that zsh is trying to parse what follows return as an argument of a command (here it's an argument of the return keyword, but they're parsed in the same way). It sees an open parenthesis and decides that it's the start of a wildcard expression. That wildcard expression would match a file called ps -p16632, awk 'NR==2' or cut -d -f 11 (with leading/trailing spaces that don't render here), but since you (unsurprisingly) have no files with any of these names, zsh complains that it didn't find a match. return ? would return 3 if you happened to have a file called 3 in the current directory and no other file with a one-character name.
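To make the output-versus-status distinction concrete, here is a small sketch; the function and variable names are illustrative, not taken from the question:

```shell
# Output vs. status: a function "returns" data by printing it,
# and signals success/failure through its exit status
get_name() {
  echo "alice"     # the function's output
  return 0         # the function's status (0 = success)
}

name=$(get_name)   # command substitution captures the output
echo "captured: $name"
get_name > /dev/null
echo "status: $?"  # the numeric exit status, not the output
```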
| Command don't return expected value inside .zshrc |
1,430,404,323,000 |
I have the following code in my ~/.zshrc:
nv() (
if vim --serverlist | grep -q VIM; then
if [[ $# -eq 0 ]]; then
vim
elif [[ $1 == -b ]]; then
shift 1
IFS=' '
vim --remote "$@"
vim --remote-send ":argdo setl binary ft=xxd<cr>"
vim --remote-send ":argdo %!xxd<cr><cr>"
elif [[ $1 == -d ]]; then
shift 1
IFS=' '
vim --remote-send ":tabnew<cr>"
vim --remote "$@"
vim --remote-send ":argdo vsplit<cr>:q<cr>"
vim --remote-send ":windo diffthis<cr>"
elif [[ $1 == -o ]]; then
shift 1
IFS=' '
vim --remote "$@"
vim --remote-send ":argdo split<cr>:q<cr><cr>"
elif [[ $1 == -O ]]; then
shift 1
IFS=' '
vim --remote "$@"
vim --remote-send ":argdo vsplit<cr>:q<cr><cr>"
elif [[ $1 == -p ]]; then
shift 1
IFS=' '
vim --remote "$@"
vim --remote-send ":argdo tabedit<cr>:q<cr>"
elif [[ $1 == -q ]]; then
shift 1
IFS=' '
vim --remote-send ":cexpr system('$*')<cr>"
else
vim --remote "$@"
fi
else
vim -w /tmp/.vimkeys --servername VIM "$@"
fi
)
Its purpose is to install a nv function to start a Vim instance as well as a Vim server.
And if a Vim server is already running, the function should send the file arguments it received to the server.
So far, it worked well.
I have the following mapping in my ~/.vimrc:
nno <silent><unique> <space>R :<c-u>sil call <sid>vim_quit_reload()<cr>
fu! s:vim_quit_reload() abort
sil! update
call system('kill -USR1 $(ps -p $(ps -p $$ -o ppid=) -o ppid=)')
qa!
endfu
Its purpose is to restart Vim, by sending the signal USR1 to the parent shell.
I also have the following trap in my ~/.zshrc which restarts Vim when it catches the signal USR1.
catch_signal_usr1() {
trap catch_signal_usr1 USR1
clear
vim
}
trap catch_signal_usr1 USR1
So far, it worked well too.
But I have noticed that if I suspend Vim by pressing C-z, from the shell, even though the Vim process is still running, I can't resume it (with $ fg) because the shell doesn't have any job.
Here's a minimal zshrc with which I'm able to reproduce the issue:
catch_signal_usr1() {
trap catch_signal_usr1 USR1
vim
}
trap catch_signal_usr1 USR1
func() {
vim
}
And here's a minimal vimrc:
nnoremap <space>R :call Func()<cr>
function! Func()
call system('kill -USR1 $(ps -p $(ps -p $$ -o ppid=) -o ppid=)')
qa!
endfunction
If I start Vim with the function:
$ func
Then, restart Vim by pressing Space R, then suspend it by pressing C-z, once I'm back in the shell, I can see the Vim process running:
$ ps -efH | grep vim
user 24803 24684 10 03:56 pts/9 00:00:01 vim
user 24990 24684 0 03:56 pts/9 00:00:00 grep vim
But I can't resume it:
$ fg
fg: no current job
If I start Vim with the $ vim command instead of the $ func function, I can restart the Vim process, suspend it and resume it. The issue seems to come from the function $ func.
Here's my environment:
vim --version: VIM - Vi IMproved 8.1 Compiled by user
Operating system: Ubuntu 16.04.4 LTS
Terminal emulator: rxvt-unicode v9.22
Terminal multiplexer: tmux 2.7
$TERM: tmux-256color
Shell: zsh 5.5.1
How to start Vim from a function and still be able to resume it after suspending it?
Edit:
More information:
(1) What shows up on your terminal when you type Ctrl+Z?
Nothing is displayed when I type C-z.
(A) If I start Vim with the $ vim command here's what is displayed after pressing C-z:
ubuntu% vim
zsh: suspended vim
I can resume with $ fg.
(B) If I start Vim with the $ func function:
ubuntu% func
zsh: suspended func
I can also resume with $ fg.
(C) If I start Vim with the $ vim command, then restart Vim by pressing Space R:
ubuntu% vim
zsh: suspended catch_signal_usr1
Again, I can resume with $ fg.
(D) But, if I start Vim with the $ func function and restart it by pressing Space R:
ubuntu% func
ubuntu%
Nothing is displayed when I'm back at the prompt, and I can't resume Vim with $ fg.
(2) What does your shell say if you type jobs?
$ jobs has no output. Here's its output in the four previous cases:
(A)
ubuntu% jobs
[1] + suspended vim
(B)
ubuntu% jobs
[1] + suspended (signal) func
(C)
ubuntu% jobs
[1] + suspended (signal) catch_signal_usr1
(D)
ubuntu% jobs
ubuntu%
It seems the issue is specific to zsh at least up to 5.5.1, as I can't reproduce with bash 4.4.
|
The problem is starting a background job from a trap. The job seems to get “lost” sometimes. Changing vim to vim & makes the job be retained sometimes, so there may be a race condition.
You could avoid this by not starting the job from a trap. Set a flag in the trap, and fire up vim outside the trap, in the precmd hook. Here's an adaptation of your minimum example.
restart_vim=
catch_signal_usr1() {
trap catch_signal_usr1 USR1
restart_vim=1
}
precmd () {
if [[ -n $restart_vim ]]; then
restart_vim=
vim
fi
}
trap catch_signal_usr1 USR1
func() {
vim
}
You lose the ability of popping Vim up to the foreground while editing a command prompt, but that doesn't really work anyway since vim and zsh would be competing for the terminal.
In your real code, you may run into trouble because you're starting vim from a subshell. Don't run the nv function in a subshell: use braces { … } around the body, not parentheses. Use local IFS to make the IFS variable local.
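The subshell point can be checked directly. A small illustrative sketch (not code from the question):

```shell
# A function body in ( ) runs in a subshell, so its variable
# assignments are lost; a body in { } runs in the current shell
in_subshell() ( x=changed )
in_current()  { x=changed; }

x=original
in_subshell
echo "$x"    # still: original
in_current
echo "$x"    # now: changed
```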
| How to start Vim from a trap and still be able to resume it after suspending it? |
1,430,404,323,000 |
This is what I want to achieve:
Function:
Func1() {
$1="Hello World"
}
Call function:
local var1
Func1 var1
echo $var1 (should echo Hello World)
I found this example which seems to work, but I guess using eval is not a good idea:
Func1() {
eval $1=$str1
}
How would be the correct way of doing this?
I'm coming from .Net and often use a parameter as a reference. For example, assigning a value back to the parameter which then can be used later on.
In the above example var1 should be assigned "Hello World"
(I'm using sh shell)
|
eval is fine if you use it properly:
Func1() {
eval "$1=\$str1"
}
Is safe as long as Func1 is only called with the variable names you intend it to be passed.
As always, you need to quote your variables. And here, $str1 doesn't need to be expanded before being passed to eval.
If Func1 may be passed arbitrary strings that are not under your control, then that's where you'd get issues with values like reboot;foo and need to sanitize it like:
Func1() {
case $1 in
"" |\
*[!abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_]* |\
[0123456789]* |\
BASH* | LD_* | PATH | IFS | ENV | SHELLOPTS | PERL5LIB | PYTHONPATH |...)
echo >&2 "Can't set that variable"
return 1
esac
eval "$1=\$str1"
}
Coming up with a complete list of problematic variables is a doomed task though.
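For reference, this is how the pattern above is meant to be used; a sketch where var1 is just an illustrative variable name:

```shell
# Assign the global $str1 to whatever variable name is passed in $1.
# The backslash defers expansion of $str1 until eval runs it.
Func1() {
  eval "$1=\$str1"
}

str1="Hello World"
Func1 var1
echo "$var1"    # prints: Hello World
```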
| Save return value from a function in one of its own parameters |
1,430,404,323,000 |
I'm trying to create a bash alias alias backlight='__backlight () { echo "$@"; cd ~/Code/MSI-Backlight; sudo nodejs ~/Code/MSI-Backlight/msi-backlight.js "$@"; }', it works fine with no parameters but breaks when I give it one. It works fine outside of an alias. Does anyone know what's wrong?
|
You should define it as a function and call it with the alias:
function __backlight() {
    echo "$@";
    cd ~/Code/MSI-Backlight;
    sudo nodejs ~/Code/MSI-Backlight/msi-backlight.js "$@";
}
alias backlight='__backlight'
| bash: syntax error near unexpected token |
1,430,404,323,000 |
I tested this:
~$ test() { echo foo |sed -r s/.*(.)/\\1/g; }
~$ test
o
So far so good. But then:
~$ export -f test
~$ bash -c ''
bash: test: line 0: syntax error near unexpected token `('
bash: test: line 0: `test () { echo foo | sed -r s/.*(.)/\\1/g'
bash: error importing function definition for `test'
I know using quotes with sed solves the problem. But bash not exporting a function that runs is alarming and requires explanations, rules and cases.
I would expect bash to be able to handle its own quoting, so I think it can only be a bug.
|
I suspect you have 2 versions of bash on your system, and that when you're calling bash -c '', you're invoking a different version. That or your code was altered when you created the question.
As for why I think this, your code does not work on my system:
$ test() { echo foo |sed -r s/.*(.)/\\1/g; }
bash: syntax error near unexpected token `('
The issue is that you have no quotes around the sed expression, so bash is trying to interpret it as a shell expression. I'm guessing this behavior changed between bash versions, and that your login shell is a different version of bash than whatever is in your $PATH when calling bash -c ''.
You can check this by doing:
$ echo $SHELL
$ which bash
Another possible cause would be if you have some shell options set which are changing the behavior of bash's expression evaluation, and these options are not being used by the bash -c ''.
As for how to fix the issue, when I properly quote the sed expression, it works fine:
$ test() { echo foo | sed 's/.*\(.\)/\1/g'; }
$ export -f test
$ bash -c 'test'
o
(Note: I had to slightly tweak the sed command as it's not a valid command for my version of sed)
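As a rule of thumb, quoting any sed script that contains characters the shell might interpret, such as ( ) * \, avoids the problem entirely. A minimal sketch (the function name is illustrative):

```shell
# Single quotes keep the sed script out of the shell's hands,
# so the function definition parses the same way everywhere
last_char() { printf '%s\n' "$1" | sed 's/.*\(.\)/\1/'; }
last_char foo    # prints: o
```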
| Some bash functions run but can't be exported (no `export` failure either) |
1,430,404,323,000 |
Why do Unix-like systems execute a new process when calling a function rather than a dynamic library? Creating a new process is costly in terms of performance when compared to calling a dynamic library.
|
Unix-like systems don't "call functions by executing new processes". They (now) have shared libraries like pretty much all other relatively modern operating systems.
Shells, on the other hand, do execute other processes to do certain tasks. But not all: they have built-in functions, implemented directly in the shell (or via shared libraries) for the most common and simple tasks (echo, for instance, is implemented as a built-in by a lot of shells).
(The windows cmd shell is no different from Unix shells in this respect, BTW.)
Creating a process in modern Unix-like systems is certainly more expensive than doing an in-process function call, but not by such a huge margin. Kernels are optimized for fast forking, using techniques like copy on write for address space management to speed up "cloning" of processes, and sharing the text (code) pages of dynamic libraries.
If every executable on your machine that could be called from a shell script was implemented as a shared library, either:
starting your shell would take a lot of time (and memory) just to load all that stuff up front (even with caching, the dynamic linker has non-trivial work to do, and libraries have data sections, not only text sections - we're talking hundreds if not thousands of libraries here)
you would have to load each necessary library on-demand – possibly a bit faster than starting a process, but the advantage here is really thin. And the data part of your shared libraries becomes really hard to manage (the global state of your shell now depends on the state of a lot of unrelated code and data loaded in its address space).
So you probably would not gain much for typical usage, and stability/complexity becomes more of an issue.
Another thing is that the separate process model isolates each task very effectively (assuming virtual memory management & protection). In the "everything is a library" model, a bug in any utility library could pollute (i.e. corrupt) the entire shell. A bug in some random utility could kill your shell process completely.
This is not the case for the multi-process model, the shell is shielded from that type of bug in the programs it runs.
Something else: lower coupling. When I look at what's in my /usr/bin directory right now, I have:
ELF 64bit executables,
ELF 32bit executables,
Perl scripts,
Shell scripts (some of those run Java programs),
Ruby scripts and
Python scripts
... and I probably don't have the most fancy system out there. You simply can't mix the first two types in the same process. Having an interpreter in-process for all the other ones simply isn't practical.
Even if you look only at your "native binary" file format, having the interface between the "utilities" being simple streams and exit codes makes things simpler.
The only requirements on the utilities are to implement the operating system's ABI and system calls. You get (nearly) no dependency between the different utilities. That's either extremely hard, or plain impossible, for an in-process interface, unless you impose things like "everything must be compiled with version X of compiler Y, with such and such flags/settings".
There are things for which in-process calls do make a lot of sense performance wise, and those are already, very often, done as built-ins by the shells. For the rest, the separate processes model works very effectively, and its flexibility is a great advantage.
| Why do Unix-like systems execute a new process when calling a new function? |
1,430,404,323,000 |
I'm calling a function and I want to pass up to 100 parameters on to another function. I do not want to pass on the first 3 params; I start with param 4 as the first param for the other program.
I am currently allowing for passing on up to 19 additional with
$function_under_test "$4" "$5" "$6" "$7" "$8" "$9" "${10}" "${11}" "${12}"
"${13}" "${14}" "${15}" "${16}" "${17}" "${18}" "${19}"
but this is not very sustainable for larger sets of params.
I tried
declare -a pass_on_params
for ((a=2; a<$#; a++)); do
pass_on_params+=(${@[a]}) # line 8
done
echo "->" $pass_on_params
but I get
do_test.sh: line 8: ${@[a]}: bad substitution
Full code is:
do_test () {
function_under_test=$1
line_number=$2
expected="$3"
param1="$4"
declare -a pass_on_params
for ((a=2; a<$#; a++)); do
pass_on_params+=(${@[a]})
done
echo "ppppppppp" $pass_on_params
$function_under_test "$4" "$5" "$6" "$7" "$8" "$9" "${10}" "${11}" "${12}" "${13}" "${14}" "${15}" "${16}" "${17}" "${18}" "${19}"
if [ $result -eq $expected ]; then
printf '.'
else
printf 'F'
error_messages=$error_messages"Call to '$function_under_test $param1' failed: $result was not equal to $expected at line $line_number\n"
fi
}
Shell is bash
|
"${@:4}" works for me in bash. You can also assign to another array and do indexing on it:
foo=("$@")
second_function "${foo[@]:3}"
(Note the offset here is 3, not 4: bash arrays are zero-based, so foo[0] holds $1, and the slice starting at index 3 begins with $4.)
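If this has to run in a POSIX shell where the "${@:4}" slice is unavailable, a portable sketch (with illustrative names) is to drop the leading arguments with shift:

```shell
# Discard the first three positional parameters, pass the rest on
pass_rest() {
  shift 3
  printf '%s\n' "$@"    # stand-in for: function_under_test "$@"
}
pass_rest a b c d e     # prints d and e, one per line
```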
| How can I pass on parameters 4..99 to another function |
1,430,404,323,000 |
I have seen on many occasions the name of a function (frankly speaking, I just call it a function because of its typical appearance; such names are sometimes labelled commands or system calls, but I do not know the idea behind labelling them differently)
which contains a number in brackets as part of it, like exec(1), exec(2), exec(3).
What is the meaning behind putting numbers into them?
|
exec here could be a system call, a bash built-in, or something else. The respective man pages related to the system call or the bash built-in refer to exec's man page with a number in brackets. So if I want to refer to the man page of the bash built-in, I would say exec(1), and if I want to refer to the man page of the system call exec(), I would say exec(2).
The number refers to the manual section containing that particular man page.
When you see exec(2) in a man page and want to read about that particular exec, you would say man 2 exec.
| What is the reason for having numbers within the brackets of a function ? [duplicate] |
1,430,404,323,000 |
I made some scripts containing functions which by design need sudo permissions. I have added their paths to .bashrc on Linux and .bash_profile on macOS so that they can be called from anywhere.
But I do not want the user to type sudo each time they want to call those script functions. Is there any way I can imply sudo, so that whenever these functions are called, the terminal assumes they are being called by the root user?
I think I could just add sudo -i at the beginning of the script, or maybe of each function? Or is there any other way of implying sudo? It would also be great to know whether you think implying sudo would be terrible or dangerous, and whether it is recommended.
An example of dangerous-function script that contains some functions which, I am trying to accomplish without specifying sudo
#!/bin/bash
start-one()
{
## do dangerous stuff with sudo
systemctl start dangerous.service
}
start-two()
{
systemctl start dangerous1.service
}
start-launchwizard()
{
systemctl start dangerous2.service
}
## Calling functions one by one...
"$@"
I dont want to call them by sudo dangerous-function start-one
I just want to call them with dangerous-function start-one but still get the same result as the previous one.
|
The "$@" will expand to the list of command line arguments, individually quoted. This means that if you call your script with
./script.sh start-one
it will run start-one at that point (which is your function). It also means that invoking it as
./script.sh ls
it would run ls.
Allowing a user to invoke the script using sudo (or using sudo inside the script) would allow them to run any command as root, if they had sudo access. You do not want this.
Instead, you would need to carefully validate the command line arguments. Maybe something like
foo_func () {
# stuff
printf 'foo:\t%s\n' "$@"
}
bar_func () {
# stuff
printf 'bar:\t%s\n' "$@"
}
arg=$1
shift
case $arg in
foo)
foo_func "$@" ;;
bar)
bar_func "$@" ;;
*)
printf 'invalid sub-command: %s\n' "$arg" >&2
exit 1
esac
Testing:
$ sh script.sh bar 1 2 3
bar: 1
bar: 2
bar: 3
$ sh script.sh baz
invalid sub-command: baz
This would be safer to use with sudo, but you would still not want to execute anything that the user gives you within your various functions directly without sanitising the input. The script above does this by restricting the user to a particular set of sub-commands, and each function that handles a sub-command does not execute, eval, or source its arguments.
Let me say that again in other words: the script does not, and should not, try to execute the user input as code in any way. It should not try to figure out whether an argument corresponds to a function in the current environment that it can execute (functions may have been put there by the calling environment), and it should not execute scripts whose pathnames were given on the command line, etc.
If a script is performing administrative tasks, I would be expecting to have to run it with sudo, and I would not want the script itself to ask me for my password, especially not if it's a script that I may want to run non-interactively (e.g. from a cron job). That is, a script performing administrative tasks requiring root privileges should (IMHO) be able to assume it's running with the correct privileges from the start.
If you want to test this in the script, you could do so with
if [ "$( id -u )" -ne 0 ]; then
echo 'please run this script as root' >&2
exit 1
fi
It then moves the decision of how to run the script with root privileges to the user of the script.
| sudo without sudo, implying sudo in script |
1,430,404,323,000 |
I've been writing a lot of one-off functions recently. On the occasions that I go "hmm, I should save this" I use type <function name> to show the code, and copy and paste it into .bashrc. Is there a faster way to do this, or some standard or command built for this purpose?
FWIW, I'm just doing this on my personal computer running Mint, so conveniences like copy and paste are easy. However, I'm also interested in answers specific to shell-only environments.
|
In Korn-like shells, including ksh, zsh, bash and yash, you can do:
typeset -fp myfunc
To print the definition of the myfunc function.
So you can add it to the end of your ~/.bashrc with:
typeset -fp myfunc >> ~/.bashrc
| Correct way to take a function from the current shell and save it for future use? |
1,567,691,275,000 |
# Print $1 $2 times
function foo() {
for (( i=0; i<$2; i++ )); do
echo -n $1
done
echo
}
# Print $1 $2x$3 times
function bar() {
for (( i=0; i<$3; i++ )); do
foo $1 $2
done
}
bar $1 $2 $3
The ideal output of foobar.sh @ 3 3 is
@@@
@@@
@@@
but the actual output seems to be just
@@@
Changing the variable in bar() from i to j yields the desired output. But why?
|
Because variables are "global" in shell-scripts, unless you declare them as local. So if one function changes your variable i, the other function will see these changes and behave accordingly.
So for variables used in functions --especially loop-variables like i, j, x, y-- declaring them as local is a must. See below...
#!/bin/bash
# Print $1 $2 times
function foo() {
local i
for (( i=0; i<"$2"; i++ )); do
echo -n $1
done
echo
}
# Print $1 $2x$3 times
function bar() {
local i
for (( i=0; i<"$3"; i++ )); do
foo "$1" "$2"
done
}
bar "$1" "$2" "$3"
Result:
$ ./foobar.sh a 3 3
aaa
aaa
aaa
$ ./foobar.sh 'a b ' 4 3
a ba ba ba b
a ba ba ba b
a ba ba ba b
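The effect can be boiled down to this small sketch, with illustrative function names:

```shell
# Without `local`, a function's assignment clobbers the caller's variable
clobber() { i=99; }
keep()    { local i; i=99; }

i=1
clobber
echo "$i"    # 99: the "global" i was overwritten
i=1
keep
echo "$i"    # 1: the function's i was local
```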
| Loop function with arguments in another loop function with arguments |
1,567,691,275,000 |
I've got the following function aliases sourced in zsh and bash consoles:
compose() {
docker-compose $*
}
run() {
compose "run --rm app $*"
}
rails() {
run "rails $*"
}
In bash, running rails c starts a Ruby on Rails console through docker-compose successfully.
In zsh, running rails c results in a command not found error where the hyphens are replaced by underscores:
➜ rails c
No such command: run __rm app rails c
My zsh version:
➜ zsh --version
zsh 5.6.2 (x86_64-apple-darwin18.0.0)
|
It's not zsh that's replacing dashes with underscores, but probably that docker-compose program, or another program called by it.
The problem is that zsh, unlike bash, does not split unquoted variables with IFS by default.
If I define docker-compose as a function that prints each of its argument surrounded by {}, this is what I obtain:
$ cat example
docker-compose() {
echo -n docker-compose
for f in "$@"; do echo -n " {$f}"; done; echo
}
compose() { docker-compose $*; }
run() { compose "run --rm app $*"; }
rails() { run "rails $*"; }
rails c
$ zsh example
docker-compose {run --rm app rails c}
$ bash example
docker-compose {run} {--rm} {app} {rails} {c}
Notice how run, --rm, app, etc. are passed as separate arguments in bash and as a single argument in zsh.
That's because bash did split and trim the $* variable into multiple arguments using spaces (the default value of IFS) as the delimiter. The same effect could be obtained in zsh by using $=* instead of $*, or by the set -o SH_WORD_SPLIT option, but that will make the script zsh-only.
You should use "$@" instead of $* everywhere, unless you have some very special reason not to:
compose() { docker-compose "$@"; }
run() { compose run --rm app "$@"; }
rails() { run rails "$@"; }
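The difference between forwarding with "$@" and wrapping everything in one quoted string can be seen by counting arguments. A minimal sketch (illustrative names, and runnable in a POSIX shell, not only zsh):

```shell
# "$@" forwards each argument separately; "$*" joins them into one word
count_args()   { echo "$#"; }
forward_star() { count_args "$*"; }
forward_at()   { count_args "$@"; }

forward_star run --rm app    # prints: 1
forward_at   run --rm app    # prints: 3
```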
| Why does zsh replace hyphens with underscores in these functions? |
1,567,691,275,000 |
I use Ubuntu 16.04 with Bash and I have a file with 10 functions. Each function does essentially a different task. In the end of each function, I call it this way:
x() {
echo
}
x
The calls add 10 more lines to the file, lines that I would like to spare the file for aesthetic reasons, because I execute all the functions in that file anyway (in the order in which they are already sorted).
Of course, just sourcing (executing in the current session) or bashing (executing in a subsession) isn't enough (gladly, because sometimes one wants to source/bash without running all the functions).
The solution I'd prefer for this minor aesthetic problem would be to call all of the functions, somehow, directly from the command line, right after sourcing/bashing the file.
Is it possible in Bash?
|
If you are only concerned with the line count of your file you could do something like this:
my_func () {
echo "Hello, world"
} && my_func
| Bash: How to call all functions of a file in one single call from the command line? |
1,567,691,275,000 |
I have the following function split in my .bash_profile file.
function split {
name="${$1%.*}"
ext="${$1##*.}"
echo filename=$name extension=$ext
}
Now I should expect that the command split foo.bar will give me
filename=foo extension=bar
But I get the -bash: ${$1%.*}: bad substitution error message. The same however works for a usual shell variable in a shell script, say $x instead of $1, in .bash_profile (I think the same goes for .bashrc as well).
What's wrong, and is there any remedy?
|
Drop the $ preceding the variable name (1) inside the parameter expansion:
name="${1%.*}"
ext="${1##*.}"
you are already referring to the variable with the $ preceding the starting brace {, no need for another one in front of the variable name.
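Putting it together, the corrected function looks like this (renamed to split_name here, since split is also the name of a standard utility):

```shell
# Strip the shortest .suffix for the name, keep only the last suffix for ext
split_name() {
  name="${1%.*}"
  ext="${1##*.}"
  echo "filename=$name extension=$ext"
}
split_name foo.bar    # prints: filename=foo extension=bar
```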
| bash function : splitting name and extension of a file |
1,567,691,275,000 |
I do the following to make history more sensible (i.e. seeing when a command is run can be fairly critical when troubleshooting)
shopt -s histappend; # Append commands to the bash history (~/.bash_history) instead of overwriting it # https://www.digitalocean.com/community/tutorials/how-to-use-bash-history-commands-and-expansions-on-a-linux-vps
export PROMPT_COMMAND="history -a; history -c; history -r; $PROMPT_COMMAND" # -a append immediately, then -c clear history, then -r read history every time a prompt is shown instead of after closing the session.
export HISTTIMEFORMAT="%F %T " HISTCONTROL=ignorespace:ignoreboth:erasedups HISTSIZE=1000000 HISTFILESIZE=1000000000 # make history very big and show date-time
alias h='history'; # Note: 'h 7' will show last 7 lines
This is fine, but I want to be able to get the original history output if I need it. This works for ho ("history original"), but I can no longer do "ho 7"
alias ho="history | awk '{\$2=\$3=\"\"; print \$0}'" # 'history original'
So I tried the following, but this fails with errors:
function ho() { history $1 | awk '{\$2=\$3=\"\"; print \$0}'; } # 'history original'
How can I create an alias or function that will allow me to do ho 7 and I'll just see the last 7 lines?
|
You're almost there with the function. The remaining problem is the quoting: you are escaping the awk variables, but since the script is in single quotes, the backslashes are passed through to awk literally. This is what you're after:
ho() { history "$@" | awk '{$2=$3=""; print}'; }
| bash, pass an argument to the 'history' command |
1,567,691,275,000 |
I have a loop that checks for certain criteria for whether or not to skip to the next iteration (A). I realized that if I invoke a function (skip) that calls continue, it is as if it is called in a sub-process for it does not see the loop (B). Also the proposed workaround that relies on eval-uating a string does not work (C).
# /usr/bin/bash env
skip()
{
echo "skipping : $1"
continue
}
skip_str="echo \"skipping : $var\"; continue"
while read -r var;
do
if [[ $var =~ ^bar$ ]];
then
# A
# echo "skipping : $var"
# continue
# B
# skip "$var" # continue: only meaningful in a `for', `while', or `until' loop
# C
eval "$skip_str"
fi
echo "processed: $var"
done < <(cat << EOF
foo
bar
qux
EOF
)
Method C:
$ source ./job-10.sh
processed: foo
skipping :
processed: qux
Also see:
Do functions run as subprocesses in Bash?
PS1: could someone remind me why < < rather than < is needed after done?
PS2: no tag found for while hence for
|
The problem is that when your function is executed, it is no longer inside a loop. It isn't in a subshell, no, but it is also not inside any loop. As far as the function is concerned, it is a self-contained piece of code and has no knowledge of where it was called from.
Then, when you run eval "$skip_str" there is no value in $var, because you set skip_str at a time when $var was not defined. The following should actually work as you expect; it's just seriously convoluted and risky (if you don't control the input 100%) for no reason:
#! /usr/bin/env bash
while read -r var;
do
skip_str="echo \"skipping : $var\"; continue"
if [[ $var =~ ^bar$ ]];
then
eval "$skip_str"
fi
echo "processed: $var"
done < <(cat << EOF
foo
bar
qux
EOF
)
That... really isn't very pretty. Personally, I would just use a function to do the test and then operate on the test's results. Like this:
#! /usr/bin/env bash
doTest(){
if [[ $1 =~ ^bar$ ]];
then
return 1
else
return 0
fi
}
while read -r var;
do
if doTest "$var"; then
echo "processed: $var"
else
echo "skipped: $var"
continue
fi
## Rest of your code here
done < <(cat << EOF
foo
bar
qux
EOF
)
I could probably give you something better if you explained what your objective is.
Finally, you don't need < < after done, you need < <(). The < is the normal input redirection, and the <() is called process substitution and is a trick that lets you treat the output of a command as though it were a file name.
If you are using the function just to avoid repeating the extra things like echo "skipping $1", you could simply move more of the logic into the function so that you have a loop there. Something like this: link
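A sketch of that last idea, moving the loop itself into the function so that continue is lexically inside it (illustrative, not the linked code):

```shell
# With the loop inside the function, `continue` is legal again
process_all() {
  while read -r var; do
    if [ "$var" = bar ]; then
      echo "skipping : $var"
      continue
    fi
    echo "processed: $var"
  done
}

printf '%s\n' foo bar qux | process_all
```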
| continue: only meaningful in a `for', `while', or `until' loop |
1,567,691,275,000 |
How do I pass an array to a function, especially if it is in the middle somewhere? Both "${b}" and "${b[@]}" seems to pass the first item only, so is there a way to both - call and retrieve it, correctly?
#/usr/bin/env bash
touch a b1 b2 b3 c
f()
{
local a="${1}"
local b="${2}"
local c="${3}"
ls "${b[@]}" # expected b1 b2 b3
}
a=a
b=("b1" "b2" "b3")
c=c
f "${a}" "${b}" "${c}"
f "${a}" "${b[@]}" "${c}"
rm a b1 b2 b3 c
|
In the bash shell, like in the ksh shell whose array design bash copied, "${array[@]}" expands to all the distinct element of the array (sorted by array index), and "$array" is the same as "${array[0]}".
So to pass all the elements of an array to a function, it's f "${array[@]}".
Now, a function's arguments are accessed via "$@", so your functions should be something like:
f() {
ls -ld -- "$@"
}
f "$a" "${b[@]}" "$c"
Another option is to pass your array by name and use named references (another feature bash copied from ksh (ksh93)):
f() {
typeset -n array="$1"
ls -ld -- "${array[@]}"
}
f b
Or for f to take 3 arguments: a filename, an array name and another filename:
f() {
typeset -n array="$2"
ls -ld -- "$1" "${array[@]}" "$3"
}
f "$a" b "$c"
In virtually every other shell with arrays (csh, tcsh, rc, es, zsh, yash, fish), you just use $array to expand to all the elements of the array. In every other shell, arrays are also normal (non-sparse) arrays. A few caveats though: in csh/tcsh and yash, $array would still be subject to split+glob, and you'd need $array:q in (t)csh and "${array[@]}" in yash to work around it, while in zsh, $array would be subject to empty-removal (and again "${array[@]}" or "$array[@]" or "${(@)array}" works around it).
| How do I pass an array as an argument? |
1,567,691,275,000 |
I have a bash script on CentOS 7, and I need to execute some commands as a different user. But it seems sudo works as expected outside the function and doesn't work inside a bash function. I run the script as ssh [email protected] 'bash -s' < script.sh
test(){
sudo -Eu root bash
echo "inside $(whoami)"
# other commands ...
}
test
sudo -Eu root bash
echo "outside $(whoami)"
Running this as
ssh [email protected] 'bash -s' < script.sh
Prints:
outside root
inside centos
root user is given as an example for reproducibility. What is the reason behind this results? How can I execute bunch of commands inside a function as different user?
|
Don't do this.
Getting the two lines of output in the wrong order should have been a hint that something was wrong.
When you execute your script using
ssh [email protected] 'bash -s' < script.sh
the following happens:
The script is passed to bash -s on the shells standard input stream.
The script defines the test function and calls it.
The sudo bash command in the function starts a root shell.
This shell inherits the standard input stream, which is the script.
The root shell continues reading the script from the point after the function call, because that is where the input stream's read position is at this point.
Now you have a root shell started from within the test function, which is executing instructions after the test call.
It starts a second root shell, which inherits the standard input stream (i.e. the shell script stream).
You now have a centos shell executing a root shell, executing a root shell.
The second root shell executes echo "outside $(whoami)" outputting outside root, which is the last line of the script.
There is nothing more to read, so the second root shell terminates.
So does the first root shell.
The original bash -s shell executes echo "inside $(whoami)" (because it's part of the function that it started to execute earlier), outputting inside centos.
The shell function call exits and since the rest of the script has already been read by the two root shells, the original shell has nothing more to read and terminates.
sudo is strictly for executing another command (or starting an interactive shell). The change in user is for that other command only. When the command terminates, you are back as the original user. You can not use sudo to "switch to another user" in the middle of a script and then run a part of that script as that other user (unless, of course, you deliberately write your script to be executed in the bizarre manner untangled above, but that sort of coding belongs in an obfuscated code contest).
To execute a set of commands as root in a script, you must give those commands to the sudo invocation. For example:
sudo bash -c 'echo "Now running as $USER"; echo "whoami outputs $(whoami)"'
After the sudo bash -c command exits, you are back as your original user. Always.
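For longer command blocks, a quoted here-document keeps the quoting manageable. In this sketch, plain bash -s stands in for sudo -u someuser bash -s so it runs without privileges; with sudo, the whole block would run as that user, and you'd be back as yourself afterwards:

```shell
bash -s <<'EOF'
echo "step one as $(whoami)"
echo "step two"
EOF
```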
| How can I change user inside function |
1,567,691,275,000 |
I am wondering if we need to add a shebang line:
#!/bin/bash
on a script, second.sh, which only defines a function and is sourced from another script, script.sh.
For example, with script.sh containing
#!/bin/bash
source second.sh
func1 "make amerika great again "
echo $I_AM_SAY
and second.sh containing only a function that is called from the first script:
X=soon
function func1 {
fun=$1
I_AM_SAY=$fun$X
}
Do we need to define second.sh instead as:
#!/bin/bash
X=soon
function func1 {
fun=$1
I_AM_SAY=$fun$X
}
or as:
#!/usr/bin/env bash
X=soon
function func1 {
fun=$1
I_AM_SAY=$fun$X
}
|
No, you don’t need a shebang line: the running shell sources the script directly, it doesn’t start a new shell (which is the whole point of sourcing a script), so neither it nor the kernel need to know which shell to use to run it.
If you want to prevent the second.sh script from being invoked at all, you can add a
#!/usr/bin/false
shebang line.
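You can check this yourself: a file with no shebang line at all sources fine (the temporary directory and the sample string here are illustrative):

```shell
dir=$(mktemp -d)
cat > "$dir/second.sh" <<'EOF'
X=soon
func1() { I_AM_SAY="$1$X"; }
EOF
. "$dir/second.sh"    # the kernel never sees this file, so no shebang is needed
func1 "see you "
echo "$I_AM_SAY"      # prints: see you soon
```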
| Do we need to define the shell on file that include only functions? |
1,567,691,275,000 |
foo (){
sudo -- sh -c "cd /home/rob; echo \"$@\""
}
I'm trying to make a bash function in .bashrc that will sudo, change to a particular directory and then run a Python command. For demo purposes I have changed this to echo in the above example.
Even though I have quoted $@ to pass all arguments to my echo command, it only works with one:
$ foo 1
1
$ foo 1 2
2": -c: line 0: unexpected EOF while looking for matching `"'
2": -c: line 1: syntax error: unexpected end of file
What gives? How can I pass multiple arguments to this function?
|
Add set -v to your function and you will see what is happening:
$ f (){ set -v; sudo -- sh -c "cd /tmp; echo \"$@\""; }
$ f 1 2
+ sudo -- sh -c 'cd /tmp; echo "1' '2"'
What is happening? Your use of $@ has created two strings that have been single-quoted to cd /tmp; echo "1 and 2". You can use $* if you don't want this splitting. See the bash man page section on Special Parameters:
@ ... When the expansion occurs within double quotes, each parameter
expands to a separate word. ... If ... within a word, the expansion of the
first parameter is joined with the beginning part of the original word,
and the expansion of the last parameter is joined with the last part of
the original word.
So in your case the single string (or word) "xxx$@yyy" expands to 2 strings 'xxx1' and '2yyy'. You can test this using set -- 1 2 3 to set $@ and then use printf ">%s<" "$@" to see the number of words that "$@" becomes (as the printf will repeat the format for each argument):
$ set -- 1 2 3
$ printf ">%s< " "x$@y"
>x1< >2< >3y<
$ printf ">%s< " "x$*y"
>x1 2 3y<
If you want to pass arguments to the sh -c command, one hack is to place them after the command string as arguments, but this may not work on all shells:
f (){ set -v; sudo -- sh -c 'cd /tmp; echo "$@"' sh "$@"; }
f 1 2
+ sudo -- sh -c 'cd /tmp; echo "$@"' sh 1 2
The extra sh argument is because the first arg is taken by the shell as the name of the process.
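The same pattern can be tried without sudo, which makes it easy to check that each argument arrives intact:

```shell
# The extra "sh" fills $0 inside the -c script; 1 and 2 become $1 and $2.
f() { sh -c 'cd /tmp && echo "$@"' sh "$@"; }
f 1 2    # prints: 1 2
```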
| Passing multiple arguments to sudo within function |
1,567,691,275,000 |
I sometimes use characters such as ! and $ in commit messages; they need to be manually escaped, but not if you use single quotes, like this: git commit -m 'My $message here!'. I tried to write a function so that gc followed by any text uses that text as the message, but I had no luck making it use single quotes. With everything I've tried it still seems to use double quotes in the end, so the $message part is hidden, and ! won't work either.
I've tried this:
gc() {
git commit -m ''$*''
}
Also this:
gc() {
message="$*"
git commit -m ''$message''
}
I tried other things too that didn't work; when checking git log, words starting with $ are not shown in the message. It works with single quotes, but it's using double quotes anyway.
I want to run gc My $message here and have all text after gc become the message. I can't force it to use single quotes; it ends up double-quoted, and the output would be My here only.
Is there a way to write this function so it would work the way I want? Not sure if this is possible, thanks!
If there is, you're smarter than Bard, ChatGPT, Claude, and Bing AI, because I asked them all too and after several hours had no luck.
|
TL,DR: don't.
What you're asking for is impossible. When you write gc My $message here, this is an instruction to expand the value of the variable message and use the expansion as part of the arguments of the function.
You can do something like what you want by tricking the shell with an alias that adds a comment marker and calls a function that reads the comment from the shell history. This allows the function to receive arbitrary characters except newlines.
setopt interactive_comments
alias gc='gc_function #'
function gc_function {
emulate -LR zsh
setopt extended_glob
local msg=${history[$HISTCMD]#*[[:space:]]##}
git commit -m "$msg"
}
This is not (and cannot be) fully robust. The version I posted requires a space or other blank character after gc; if you use a different non-word-constituent character, the commit message might not be what you expect.
I do not recommend this. You're making it easy to write a one-line commit message, and hard to write a good commit message. A good commit message explains what the commit does and how. A good commit message is written in an editor. If you write one-line commit messages, you're doing it wrong and you will regret it.
| Run `git commit -m` with single quotes in zsh |
1,567,691,275,000 |
What I would like to do is:
f(){
ssh myserver &&
ls &&
echo 'it works!'
}
However, when I run this function. Only the ssh is executed.
|
Put the list of commands directly after ssh myserver:
ssh myserver 'ls && echo it works'
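The key point is that the whole command list must reach the remote shell as a single argument. In this unprivileged sketch, sh -c plays the role of the shell that ssh would start on the server (run_remote is a hypothetical name; with a real server, the body would be ssh myserver "$1"):

```shell
run_remote() { sh -c "$1"; }
run_remote 'ls / > /dev/null && echo it works'    # prints: it works
```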
| I want a function to ssh into a server and then run a list of commands |
1,567,691,275,000 |
I'm preparing for the LPIC1, exam 102. This question came my way and I absolutely blanked. I knew it when I first took the quiz, now a month and a half later it's all blurred in my head.
What does
function a { echo $1; } ; a a b c output?
A. a
B. a b c
C. a b
D. a a b c
I've tried to reproduce this function by creating a file called 'script':
function a { echo $1;
};
a a b c
saved it, gave it execute permission then tried it out:
$ bash script
a
$
So the correct answer is A, but why? Is it necessary to put an ; after $1? And what's the second ; for? Can someone please explain the syntax of this script?
|
The commands should have been executed on a command line like:
function a { echo $1; } ; a a b c
The second semicolon separates the command list into
function a { echo $1; }
and
a a b c
The first command will create a function with the name 'a' which will echo the first positional parameter.
The semicolon after echo $1 is required to end the command list within the function, since there is no newline to do so.
(see also: man bash -> Compound Commands -> { list; } )
The second command a a b c will call that function (the first a) and hand over 'a b c' as positional parameters to that function.
Since the function only echoes the 1st positional parameter, the correct answer is 'A.'.
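You can verify answer A by pasting the exact line into a bash shell:

```shell
function a { echo $1; } ; a a b c    # prints: a
```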
| Why are these particular semicolons necessary in this function definition and command-line? |
1,567,691,275,000 |
I am using Bash 4.4.20. I typically have a main function in each bash script. If I source this script from a function inside another bash script, will the main function definitions in the two scripts conflict?
#A.sh
main() {
SomeFunction
}
SomeFunction(){
. B.sh
}
main "$@"
#B.sh
main(){
echo Hi
}
main "$@"
Is there any solution without renaming the main function?
|
The other script will redefine main(), yes. Though in this particular case, I'm not sure if it matters, since the main() from script A is running when script B redefines the function. I doubt a shell would allow the redefinition to change the behavior of an already-running function.
That is, given these scripts:
$ cat a.sh
main() {
echo a1
. ./b.sh
echo a2
}
main "$@"
$ cat b.sh
main() {
echo b
}
main "$@"
running a.sh with any shell I can find prints a1, b, a2. If a.sh were to call main again, then it would get the new behaviour, of course.
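The a1, b, a2 sequence is easy to reproduce in a throwaway directory:

```shell
dir=$(mktemp -d); cd "$dir"
printf '%s\n' 'main() { echo a1; . ./b.sh; echo a2; }' 'main "$@"' > a.sh
printf '%s\n' 'main() { echo b; }' 'main "$@"' > b.sh
bash a.sh    # prints a1, b, a2 on separate lines
```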
But even if it doesn't matter here, redefining functions on the fly like that would at least be really confusing.
The better question is, why do you need to source script B in the first place? It would seem clearer to have B as either a collection of functions to be loaded by sourcing, a library of sorts; or to have it as an independent utility, called as a regular program, not sourced.
In the first case, A would explicitly call the functions defined in B as needed.
| Sourced Bash script, each with main function |
1,567,691,275,000 |
I have modified a shell script i found here:
https://github.com/Slympp/ConanLinuxScript
But I'm having trouble with the function "conan_stop".
The script just terminates after
exec kill -SIGINT $pid
The script sends the kill command successfully, but after that it just terminates with no error code or anything.
All the variables in the script are defined earlier in the file.
Full function
function conan_stop {
pid=$(ps axf | grep ConanSandboxServer-Win64-Test.exe | grep -v grep | awk '{print $1}')
if [ -z "$pid" ]; then
echo "[$(date +"%T")][FAILED] There is no server to stop"
else
if [ "$discordBotEnable" = true ]; then
echo "[$(date +"%T")][SUCCESS] Discord bot is enabled"
if [ -n "$botToken" ] && [ -n "$channelID" ]; then
secLeft=$(($delayBeforeShutdown * 60))
while [ $secLeft -gt "0" ]; do
minLeft=$(($secLeft / 60))
echo "[$(date +"%T")][WAIT] Server will be shut down in $minLeft minutes"
python3 $discordScript $botToken $channelID "Servern kommer stängas ner om " $minLeft "minuter."
secLeft=$(($secLeft - 60))
sleep 60
done
python3 $discordScript $botToken $channelID "Servern stängs nu ner."
else
echo "[$(date +"%T")][ERROR] No Discord botToken or channelID found"
fi
fi
echo "[$(date +"%T")][SUCCESS] Existing PIDs: $pid"
exec kill -SIGINT $pid
isServerDown=$(ps axf | grep ConanSandboxServer-Win64-Test.exe | grep -v grep)
cpt=0
while [ ! -z "$isServerDown" ]; do
echo "[$(date +"%T")][WAIT] Server is stopping..."
((cpt++))
sleep 1
isServerDown=$(ps axf | grep ConanSandboxServer-Win64-Test.exe | grep -v grep)
done
echo "[$(date +"%T")][SUCCESS] Server stopped in $cpt seconds"
if [ "$discordBotEnable" = true ]; then
echo "[$(date +"%T")][SUCCESS] Discord bot is enabled"
if [ -n "$botToken" ] && [ -n "$channelID" ]; then
python3 $discordScript $botToken $channelID "Servern stängdes ner efter $cpt sekunder."
else
echo "[$(date +"%T")][ERROR] No Discord botToken or channelID found"
fi
fi
fi
}
|
exec replaces the shell with the given command, like the exec() system call. When the command (the kill, here) stops, the shell no longer exists, so there's no way for the script to continue.
The two exceptions are 1) when exec is given redirections, in which case it just applies them in the current shell, and 2) when the command can't be executed, in which case exec reports an error and, in an interactive shell, returns a nonzero status (a non-interactive shell exits unless bash's execfail option is enabled).
So, exec kill ... is almost the same as kill ... ; exit. Not exactly the same, but close enough in this case.
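A minimal demonstration of the behaviour, with true standing in for the kill command:

```shell
# Everything after a successful exec is unreachable: the shell is replaced.
bash -c 'echo before; exec true; echo after'    # prints only: before
```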
| Problem with shellscript crashing after "exec kill -SIGINT" |
1,567,691,275,000 |
I'm working with a set of scripts with functions treated as readonly.
The functions are more than just a list of commands, for example, there can be loops and change directories and even calls to other functions:
func() {
cd folder/
run command1
mkdir folder2/ ; cd folder2/
run command2
}
For a moment, pretending that I could change the scripts, to show you what I want one might look like this:
func() {
cd folder/
string[0]="command1" ; run command1 |& tee out0.log ; result[0]="$?" ; finished_command_number 0
mkdir folder2/ ; cd folder2/
string[1]="command2" ; run command2 |& tee out1.log ; result[1]="$?" ; finished_command_number 1
}
So, for commands which can be piped, but not for cd or loops, I want to store a string, store the stdout (stderr), store the exit code, and run another command afterwards. However, I cannot add this in, it must be done in my script which is invoking the one with func().
To just get the post command call functionality alone, I've tried copying the function and running it from my script with a trap foo debug, but that doesn't seem to propagate into functions. I don't think literally copying line by line will work because the cd and loops can't really be isolated since they are control statements rather than subshell commands.
If the strings can just be a function of the commands themselves, that's still useful.
|
#!/bin/bash
# test.sh
post() {
echo "post [$BASH_COMMAND] [$?]"
echo "== $RANDOM =="
}
set -o functrace
trap post debug
func() {
. check.sh
tryme |& tee out.txt
}
func
The output can be filtered by the lines marked with random. I should test this further to see how well it works with multiple processes, but it seems to work just fine with short-lived commands. Notice the exit code lags by one command, since the DEBUG trap runs before each command executes.
#!/bin/bash
# check.sh
tryme() {
echo "one"
echo "two"
mkdir -p hello
cd hello/
echo "three"
false
echo "four"
}
===
$ bash test.sh
post [func] [0]
== 22542 ==
post [func] [0]
== 10758 ==
post [. check.sh] [0]
== 9115 ==
post [tryme 2>&1] [0]
== 11979 ==
post [tee out.txt] [0]
== 17814 ==
post [tryme 2>&1] [0]
== 22838 ==
post [echo "one"] [0]
== 5251 ==
one
post [echo "two"] [0]
== 18036 ==
two
post [mkdir -p hello] [0]
== 4247 ==
post [cd hello/] [0]
== 21611 ==
post [echo "three"] [0]
== 24685 ==
three
post [false] [0]
== 8557 ==
post [echo "four"] [1]
== 7565 ==
four
| Trap all commands in function |
1,567,691,275,000 |
Suppose, for example, that fpath is set to
( $HOME/.zsh/my-functions /usr/local/share/zsh/site-functions )
...and that both function-defining files $HOME/.zsh/my-functions/quux and /usr/local/share/zsh/site-functions/quux exist.
(I'll refer to these two versions of quux as "the user's quux" and "the site's quux", respectively.)
Furthermore, let's assume that I've run
autoload -U quux
This means that, if I now run quux, the one that will be run is the user's quux.
The word "overrides" in this post title refers to the fact that, in such a situation, the user's quux "overrides" the site's quux. (I could have used "shadows" instead of "overrides".)
My question is: is there a way for the user's quux to, in turn, invoke the site's quux? (In a typical scenario, the user's quux would massage the arguments passed to the site's quux, and/or massage the output produced by it.)
I'm looking for solutions that do not require modifying anything under /usr/local/share/zsh/site-functions/quux.
IMPORTANT: The fpath used in this question is just an example. In general, all we know is that one function reachable through fpath overrides (shadows) some other such function.
I've experimented with dastardly schemes where, e.g., $HOME/.zsh/my-functions/quux takes the general form
# one-time initialization
local body
body=$( SOMEHOW <???> GET SOURCE CODE OF OVERRIDDEN FUNCTION )
eval "overridden_quux () {
$body
}"
# self-re-definition (MWAH-HA-HA-HA-HAAAA!)
quux () {
local massaged_args
massaged_args=( $( MASSAGE ARGS "$@" ) )
  overridden_quux "${massaged_args[@]}" | MASSAGE OUTPUT
}
# one-time re-invocation
quux "$@"
...but the results are very fragile, to say nothing of the ugliness of the approach.
|
The easy way is to force the loading of the original function, rename it, and redefine it in your .zshrc, rather than having a function with the same name in your fpath. Note that in zsh, you don't need complex tricks involving which, eval and wondering about quoting to rename a function: simply use the functions associative array.
autoload -Uz +X quux
functions[overridden_quux]=$functions[quux]
quux () {
… overridden_quux $@[@] …
}
If you want the function to be autoloaded from a file in fpath, it gets fiddly because you need to load the original without accessing the same fpath entry recursively. I don't have a better solution to propose than locally redefining fpath.
#autoload quux
functions[overridden_quux]=$(
fpath=("${(@)fpath:#$HOME/*}")
autoload -Uz +X quux
print -r -- $functions[quux]
)
quux () {
… overridden_quux $@[@] …
}
| How can a function call the function it "overrides"? |
1,567,691,275,000 |
I'm trying to add an alias in .bashrc file as follows:
...
alias cutf="_cutf"
...
_cutf() {
awk 'NR >= $2 && NR <= $3 { print }' < $1
}
(The function's goal is to show the content of the lines whose number is between $2 and $3 for the $1 file)
When I call cutf in a new bash session I get no output.
Am I missing something?
|
$2 and $3 are in single quotes. Shell doesn't expand variables in single quotes, so they get interpreted by awk. Switch to double quotes:
awk "NR >= $2 && NR <= $3 { print }" < "$1"
Note that you can achieve the same with
sed -n 'X,Yp' file
where X and Y are the line numbers, or similarly in Perl with
perl -ne 'print if X .. Y' file
which are both so short you probably don't need a function for them.
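A variant worth considering: pass the line numbers into awk with -v instead of interpolating them into the program text, which sidesteps quoting problems entirely (the function and file names here are illustrative):

```shell
cutf() { awk -v first="$2" -v last="$3" 'NR >= first && NR <= last' < "$1"; }
printf '%s\n' a b c d > /tmp/cutf_demo.txt
cutf /tmp/cutf_demo.txt 2 3    # prints: b, c
```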
| Bash: How to create an alias in .bashrc for awk with parameters |
1,567,691,275,000 |
For context, I'm using zsh. Every time I use locate, I want to pass the -i and -A flags.
Usually, if I can get away with it, I create an alias with the same name as the existing command to do this. But according to this question, aliases can't accept arguments, so I have to use a function instead. Usually I stop there because the idea of a function with the same name as a command feels wrong to me, though I can't say why.
I was about to finally create such a function when I had this thought: this is a common pattern for me, wanting to default flags for command; is there an easier way of going about it? Perhaps zsh provides a better solution to this problem?
That brought me to another thought: is it an anti-pattern to override an existing command? I've always done it because it allows me to skip an association in my head: e.g., "Why doesn't ll have a man page? Oh yeah: ll really means ls -la. I need to do man ls, not man ll. Etc."
To summarize:
Is it alright/idiomatic to override an existing command with an alias/function?
Does zsh or some other tool provide a more direct way to default flags for a specific command?
|
I am really surprised by that other post you mentioned, as it can be very misleading. Just because an alias cannot take parameters doesn't mean that aliases cannot supply options. Of course you can put options in an alias; it is just restricted, meaning the alias text is substituted in one place, at the front of the command.
$ alias ls='ls -l'
$ ls
# will run: ls -l
$ ls foo
# will run: ls -l foo
The problem that other question poses is if you want to add options to an alias that has an argument.
so if you had an alias:
$ alias movetotrash='mv ~/Trash'
# no way to inject anything between 'mv' and '~/Trash'
So in your case
$ alias locate='locate -i -A'
# will expand to 'locate -i -A' and then whatever else you type.
As for your specific questions:
It is quite common for Linux distributions to ship default "options" via aliases; for example, ls frequently has an alias ls 'ls --color=auto', and for root logins you might see alias mv 'mv -i'. So it can be considered a standard way of providing better defaults, using the same name as the underlying binary. If a user doesn't want an alias that is set in the standard environment, they can use unalias to unset it permanently, or prefix a single invocation with a backslash, as in \mv a b, which prevents alias expansion for that execution (as does using the full path, like /usr/bin/mv a b).
I don't believe that zsh provides any extra capabilities in this area, and certainly nothing that would be a "standard". People often write wrapper shell scripts and sometimes shell functions. But for trivial software, alias is often the solution people use. If the program is complicated enough, it will usually gain an rc file for common user preferences.
I think one tool that tried to make options a bit easier was the popt library, which allowed users to create their own options to software, but the popt library isn't widely used, and I don't think it had the ability to set the default.
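When an alias isn't enough (inside scripts, or when arguments need massaging), the usual alternative is a function wrapper; command bypasses the wrapper so it doesn't call itself. In this sketch, grep stands in for locate so it can run anywhere:

```shell
grep() { command grep -i "$@"; }    # e.g. locate() { command locate -i -A "$@"; }
printf '%s\n' Foo bar | grep foo    # prints: Foo (now case-insensitive by default)
```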
| How can I/Should I default flags when running a command? |
1,567,691,275,000 |
I'm currently running a tiling window manager and I want to be able to use a custom function that is equivalent to one I had when I was using tmux that allowed me to run a command in all visible shells in the current window (E.G. ta cd to/dir)
The command/function was called ta meaning "to all"
I've managed to create the following function:
function ta() {
local current_workspace="$(xdotool get_desktop)"
local to_execute="`if [[ \"$current_workspace\" = \"\$(xdotool get_desktop)\" ]]; then; $@; fi`"
for pts in $(ls /dev/pts | grep -o '[0-9]*'); do
echo "$to_execut" > /dev/pts/$pts
done
}
If I run the command manually like this:
te="`if [[ \"$current_workspace\" = \"\$(xdotool get_desktop)\" ]]; then; xdotool get_desktop; fi`"
echo "$te" > /dev/pts/1
I see the output 0 in my current shell
If I try to run my ta I get nothing, and I've noticed that the only commands that appear to work are the ones that produce output, so the string I'm storing is apparently being executed up front and only its output is written out.
Does any one have any better suggestions?
|
Why what you tried doesn't work
Terminals are two-way communication channels between the terminal provider and the application(s) running in the terminal. The terminal device represents the side of the applications: writing to it is a request for the terminal provider to display what you wrote, and reading from it is a request for the terminal provider to send the user input. So when you run echo "$to_execut" > /dev/pts/$pts, that's just some random program displaying something on the terminal.
The provider's side of the terminal is typically not exposed as a device. How that part works depends on the system and on whether it's a physical terminal or a terminal emulator. On Linux, a terminal emulator opens the device /dev/ptmx. Because this device is multiplexed, you can't just open it and obtain an equivalent handle to what the terminal emulator has. The only way to pretend to be the terminal emulator without the cooperation of the terminal emulator program is to attach a debugger to the process.
Injecting input into a terminal is a bad idea anyway. What if there's a program running in the terminal? What if there's a partially typed command at the shell prompt?
How to type into multiple windows
You can use xdotool to simulate key presses in a window of a terminal emulator. It's cumbersome and dangerous (if the shell isn't waiting at an empty prompt, this could do anything) — just like what you originally tried to do.
This only applies to shells running in a GUI window. If you use tabs in the terminal emulator, only the foreground tab will receive input. If you use a multiplexer such as screen or tmux, only the window that's currently displayed in a GUI window, if any, will receive input.
Using signals
Send a signal to the shell. This is the normal way of getting a process to react. This has limitations: there are only a few different signals, and you can't attach any information to a signal, so the recipient has to perform some predefined action. On the upside, the process gets to choose exactly when to react, and with shells in particular, they'll wait until a “safe” time (not while executing a foreground command).
To make the shell react to a signal, use the trap builtin. There are only a few choices of potential signals:
SIGUSR1 and SIGUSR2 have no conventional meaning. They kill the process by default, so this is potentially dangerous if you can't be sure that you're only sending the signal to processes that handle it.
SIGWINCH is sent by the terminal emulator when the window size changes. It does nothing by default. Shells handle it to update the LINES and COLUMNS variables, but you can set your own trap to run in addition to that. A limitation of SIGWINCH is that the trap doesn't run at all if zsh is currently running a foreground job.
So what you can do is arrange a location, say ~/.zsh_USR1, and have instances of zsh read from that file.
USR1_COMMAND_FILE=$HOME/.zsh_USR1
trap '[[ -r $USR1_COMMAND_FILE ]] && . $USR1_COMMAND_FILE' USR1
Keep in mind that sending SIGUSR1 to all zsh processes will kill any script that hasn't set up this trap! So only send it once you've identified which zsh processes are running in a window you want to target.
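A self-contained bash sketch of the same mechanism (it works identically in zsh; the file path here is arbitrary). The signalled shell runs the trap at a safe point and sources the agreed-upon file:

```shell
cmdfile=$(mktemp)
echo 'echo "ran via USR1"' > "$cmdfile"
trap '[ -r "$cmdfile" ] && . "$cmdfile"' USR1
kill -USR1 $$    # in real use you'd signal the other shells' pids
```

After the kill completes, the shell runs the trap and prints the line from the command file.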
| execute command on all visible shells |
1,567,691,275,000 |
Basically something like
// declare
const my_func = function(param1, param2) { do_stuff(param1, param2) }
// use
my_func('a', 'b');
All in current interactive shell without using files
|
Functions are defined in the same way in an interactive bash shell as in a bash shell script.
Taking your example as a starting point:
my_func () { do_stuff "$1" "$2"; }
You would type that on the command line. Then call that (also on the command line) with
my_func 'something' 'something else' 'a third thing'
Note that you don't declare the argument list as you would do in a language like C or C++. It's up to the function to make intelligent use of the arguments that it gets (and up to you to document the function's use, would it be used for more serious work later).
This would call do_stuff with the first of the three arguments that I passed to my_func.
A function that does something slightly more interesting:
countnames () (
shopt -s nullglob
names=( ./* )
printf 'There are %d visible names in this directory\n' "${#names[@]}"
)
There is nothing stopping you from typing that into an interactive bash shell. This would make the countnames shell function available in the current shell session. (Note that I'm writing the function body in a sub-shell ((...)). I do this because I want to set the nullglob shell option without affecting the shell options set in the calling shell. The names array then also automatically becomes local.)
Testing:
$ countnames
There are 0 visible names in this directory
$ touch file{1..32}
$ ls
file1 file12 file15 file18 file20 file23 file26 file29 file31 file5 file8
file10 file13 file16 file19 file21 file24 file27 file3 file32 file6 file9
file11 file14 file17 file2 file22 file25 file28 file30 file4 file7
$ countnames
There are 32 visible names in this directory
Or using the zsh shell as a more sophisticated helper for this function:
countnames () {
printf 'There are %d visible names in this directory\n' \
"$(zsh -c 'set -- ./*(N); print $#')"
}
To "un-define" (remove) a function, use unset -f:
$ countnames
There are 3 visible names in this directory
$ unset -f countnames
$ countnames
bash: countnames: command not found
| Declare and use ad-hoc function in bash in interactive shell |
1,567,691,275,000 |
I have this script:
#!/bin/bash
BASE_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" > /dev/null 2>&1 && pwd )"
USER_DEF=$(whoami)
function private {
read -p "Enter private chat name: " name
if [[ $name == '' ]] ; then
:
else
if [ -d "$BASE_DIR/chats/private/$name/" ] ; then
pass=$(cat "$BASE_DIR/chats/private/$name/pass")
read -s -p "Enter private chat password: " password
if [[ $password == $pass ]] ; then
chat "private" "$name"
count=$(find $BASE_DIR/chats/private/$name/ -type p)
if [[ "$count" == '' ]] ; then
rm -rf "$BASE_DIR/chats/private/$name"
fi
echo You exited private chat: $name
else
echo Wrong password
fi
fi
fi
unset $options
if [[ -e ./chats/public ]] ; then
options=($(find $BASE_DIR/chats/public -mindepth 1 -type d -printf '%f\n'))
fi
options+=("Enter private room")
options+=("Create public room")
options+=("Create private room")
options+=("Quit")
}
clear
read -r -p "Enter your name [$USER_DEF]: " UD
if [[ $UD = "" ]] ; then
USERNAME=$USER_DEF
else
USERNAME=$UD
fi
clear
echo Welcome back $USERNAME
echo We have this chat in public:
PS3='Please enter your choice: '
if [[ -e ./chats/public ]] ; then
options=($(find $BASE_DIR/chats/public -mindepth 1 -type d -printf '%f\n'))
fi
options+=("Enter private room")
options+=("Create public room")
options+=("Create private room")
options+=("Quit")
while true
do
int_count=1
for el in "${options[@]}"; do
echo "$int_count) $el"
int_count=$(expr $int_count + 1)
done
read -p "$PS3" optional
opt=${options[$(expr $optional - 1)]}
case $opt in
"Enter private room")
private # this is line 150
;;
"Create public room")
create_public
;;
"Create private room")
create_private
;;
"Quit")
echo "Bye, $USERNAME"
exit 0
;;
[a-zA-Z][a-zA-Z0-9]*)
public $opt
;;
esac
done
The problem is: if in the menu I press 1, then Enter, then 1 again, I receive this error:
script.sh: line 150: private: command not found
What does this mean? That I'm not able to use the function more than once?
|
The problem is the line
unset $options
When $options is evaluated unquoted, it expands to the first array element, Enter private room, which is then word-split into Enter, private and room, so the shell undefines your private function.
The correct syntax is
unset options
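The difference is easy to demonstrate (function and variable names taken from the script):

```shell
private() { echo "in private"; }
options=("Enter private room" "Quit")
unset $options    # word-splits to: unset Enter private room  -> removes the function!
type private >/dev/null 2>&1 || echo "private is gone"
unset options     # what was intended: remove the array variable itself
```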
| receiving error 'script.sh: line 150: private: command not found' |
1,567,691,275,000 |
Is there a way to access docstrings in Bash? How does one include a docstring in a function's definition in Bash?
More specifically how do I add and access docstrings to the following function?
funky() {
echo "Violent and funky!"
}
|
If you mean "Bash's equivalent of Python's docstrings", I'm afraid I have to disappoint you: there is no such thing.
However, I must say that implementing an equivalent of the "docstrings" feature would make for a very interesting homework exercise for learning Bash's programmable-completion facility, along with how to override a builtin command such as help to display either such "docstrings" or the normal builtin help output.
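If you want something docstring-like anyway, one homegrown convention (nothing standard; all names here are mine) is an associative array keyed by function name, plus a small doc helper. This needs bash 4+ for declare -A:

```shell
declare -A DOCSTRING
funky() { echo "Violent and funky!"; }
DOCSTRING[funky]='funky -- prints a short message, takes no arguments'

# Look up a function's docstring, with a fallback for undocumented names.
doc() { printf '%s\n' "${DOCSTRING[$1]:-$1: no docstring}"; }
doc funky    # prints the docstring set above
```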
| Accessing function documentation |
1,567,691,275,000 |
I have bash functions foo and bar in my ~/.bashrc.
Function foo calls an external command ext_command that itself takes as one of its arguments another command. I want to pass bar as that command, i.e. I'd want my .bashrc to look something like this:
bar() {
...
}
foo() {
ext_command --invoke bar
}
However, this doesn't work, because the external command, which is not a shell script, doesn't know bar. How can I solve this?
I was thinking to instead do
ext_command --invoke "bash -c 'bar'"
But the Bash in this invocation isn't run as an interactive shell, so it doesn't know bar either.
Hence, I believe one way to solve my problem would be to force Bash to be run as an interactive shell; unfortunately I don't know how to do that.
Another way that I would have thought should definitely work is to use
ext_command --invoke "bash -c 'source ~/.bashrc; bar'"
but for some reason this doesn't work and indeed simply running
bash -c 'source ~/.bashrc; bar'
in an interactive bash session gives
bash: bar: command not found
In any case, I don't like that solution, because I'd like foo to work no matter which file it is sourced from.
|
You generally have these ways to go:
Rewrite the function as a command, i.e. a script of its own. A common practice is to keep a ~/bin directory and include it in your $PATH.
Export the function to the environment and make the other shell get it from there. See Can I "export" functions in bash?
Stick to bar being a sourcable function, but sourcing it from ~/.bashrc may not be the best solution. You might put it in its own file in ~/bin and source it from there. This would make things simple.
If possible, feed the logic to the ext_command in your foo function some other way, e.g. through a here-doc.
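For option 2, the export mechanism can be checked in isolation with a toy bar:

```shell
bar() { echo "hello from bar in a child bash"; }
export -f bar          # publish the function via the environment
bash -c 'bar'          # a fresh, non-interactive bash can now call it
```

Note that this only works when the program ends up invoking bash; a child started as sh or another shell will not pick the function up.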
| How to make my bash function known to external program |
1,567,691,275,000 |
I have the following command to remote into a local server and tail -f the latest log file for an application that I have.
The command works perfectly fine from the command line -
ssh user@hostname 'tail -f $(ls -1r ~/Development/python/twitter-bot/logs/*.log | head -1)'
The problem is that when I make it an alias (or even a function), the $(ls -1r ...) command substitution is evaluated on my local machine, and its result is passed to the remote machine.
alias latestbotlogs="ssh user@hostname 'tail -f $(ls -1r ~/Development/python/twitter-bot/logs/*.log | head -1)'"
function latestbotlogs {
ssh user@hostname 'tail -f $(ls -1r ~/Development/python/twitter-bot/logs/*.log | head -1)'
}
What syntax do I need to use such that the entire command gets evaluated on the remote machine that I am accessing via SSH.
Thanks in advance!
|
For the alias you need some escapes
alias latestbotlogs="ssh user@hostname 'tail -f \\\$\\(ls -1r \\~/Development/python/twitter-bot/logs/*.log \\| head -1\\)'"
or
alias latestbotlogs='ssh user@hostname '\''tail -f $(ls -1r ~/Development/python/twitter-bot/logs/*.log | head -1)'\'
The second version is easier, you don't have to think about all the operators you have to quote.
The function should work as it is.
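The underlying issue (command substitution inside double quotes runs when the alias is defined, not when it is used) is easy to observe:

```shell
shopt -s expand_aliases             # alias expansion is off in scripts
alias baked="echo $(date +%Y)"      # $(date +%Y) runs NOW; the year is baked in
alias deferred='echo $(date +%Y)'   # single quotes defer it to expansion time
alias baked deferred                # inspect the stored bodies
```

The body of baked contains a literal year; the body of deferred still contains the $(date +%Y) text, to be run each time the alias expands.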
| SSH with Command Doesn't Run as an Alias |
1,567,691,275,000 |
In this is the scenario, need to call func1 from Main_Func. How do I call it?
Main_Func() {
<code>
}
Initialize_func() {
func1() {
<code>
}
}
|
For func1 to be defined, you will first have to have called Initialize_func at least once. Then you may call func1 as just func1.
Example:
outer1 () {
echo 'in outer1'
inner
}
outer2 () {
echo 'in outer2'
inner () {
echo 'in inner'
}
}
# First example explained below:
outer1
# Second example explained below:
outer2
outer1
Calling outer1 without calling outer2 in this example will not work since inner is not yet defined:
$ ksh93 script.sh
in outer1
script.sh[3]: inner: not found [No such file or directory]
Calling outer2 first and then outer1 works:
$ ksh93 script.sh
in outer2
in outer1
in inner
ksh will put your func1 function in the same "scope" as the other functions. It is not like C++ or other object-oriented languages, where func1 would somehow become a sub-function or method in some inner scope of Initialize_func.
This is regardless of whether you use the Bourne shell function syntax as above or define your functions using the function keyword of the Korn shell.
| How to call sub function of a different function from current function in ksh? |
1,567,691,275,000 |
I'm learning Bash, and I've written a basic function:
wsgc () {
# Wipe the global variable value for `getopts`.
OPTIND=1;
echo "git add -A";
while getopts m:p option;
do
case "${option}"
in
m)
COMMIT_MESSAGE=$OPTARG
if [ "$COMMIT_MESSAGE" ]; then
echo "git commit -m \"$COMMIT_MESSAGE.\""
else
echo "A commit message is required."
exit
fi
;;
p)
echo "git push"
exit
;;
\?)
echo "Invalid parameter."
exit
;;
esac
done
}
However, I'm struggling with a couple of things:
the if in m) isn't working, in that if I omit the argument, Bash intercedes and kicks me out of the session;
git add -A
-bash: option requires an argument -- m
Invalid parameter.
logout
Saving session...
...copying shared history...
...saving history...truncating history files...
...completed.
[Process completed]
After running: wsgc -m "Yo!" -p, I get kicked out of the session.
git add -A
git commit -m "Yo."
git push
logout
Saving session...
...copying shared history...
...saving history...truncating history files...
...completed.
[Process completed]
Any advice would be much appreciated.
|
the if in m) isn't working, in that if I omit the argument, Bash intercedes and kicks me out of the session;
You specify getopts m:p option. The : after the m means that you need an argument. If you don't provide it, it's an error.
After running: wsgc -m "Yo!" -p, I get kicked out of the session.
What do you mean by you get kicked out of the session? Does the shell vanish? Then that is because you sourced the script instead of executing it.
That being said, I would highly recommend using getopt instead of getopts.
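Since exit in a sourced function closes the whole session, a sketch like the following uses return instead (it echoes the would-be git commands rather than running them, as in the original):

```shell
wsgc() {
    local OPTIND=1 option message= push=
    while getopts m:p option; do
        case $option in
            m) message=$OPTARG ;;
            p) push=yes ;;
            *) echo "usage: wsgc [-m message] [-p]" >&2
               return 1 ;;          # return, not exit: the shell survives
        esac
    done
    echo "git add -A"
    if [ -n "$message" ]; then echo "git commit -m \"$message\""; fi
    if [ -n "$push" ]; then echo "git push"; fi
    return 0
}

wsgc -m "Yo!" -p
```

Making OPTIND local also lets the function be called repeatedly without stale getopts state.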
| Bash: Help honing a custom function |
1,567,691,275,000 |
function projectopen {
local di_files=(*.xcworkspace */*.xcworkspace *.xcodeproj */*.xcodeproj)
    # open the first existing file
ls -d -f -1 $di_files 2>/dev/null \
| head -1 \
| xargs open
}
I wrote a shell function to quickly open an xcworkspace from the terminal. But when I declare di_files as a local variable, the function breaks and logs
projectopen:1: number expected
I use zsh on macOS. Why does this happen, and how do I fix it?
|
In older versions of zsh you cannot initialise an array with local (or typeset/declare) like that, you need to separate it, e.g.
local -a di_files # explicit array
di_files=( ... )
The feature to permit declaration and array assignment together was added in v5.1.
I believe the error you see is because zsh is treating the initialisation as scalar and () as a glob qualifier.
You can also probably replace your elaborate pipeline with the simpler
open "${di_files[1]}"
Finally, including handling for no matching files:
function projectopen {
setopt local_options nullglob
local di_files=(*.xcworkspace */*.xcworkspace *.xcodeproj */*.xcodeproj)
# open first existing file
[ -n "${di_files[1]}" ] && open "${di_files[1]}"
}
With the nullglob option each glob expansion which matches no files is replaced with an empty string (I suspect you may have nonomatch set, a related option).
| Declare as local var will break a function and log out "1: number expected" |
1,567,691,275,000 |
I have this function (defined inside my ~/.zshrc):
function graliases
{
if [[ "$#*" -lt 1 ]]
then
echo "Usage: graliases <regex>"
else
echo "$*"
grep -E '*"$*"*' ~/.dotfiles/zsh/aliases.zsh
fi
}
What this function should do is search the file ~/.dotfiles/zsh/aliases.zsh with a regex provided via parameters. Two stars are appended and prepended to the regex, which should make the match independent of its position in the line. My idea works if I use plain grep:
$ grep -E '*git rebase*' ~/.dotfiles/zsh/aliases.zsh
alias gr='git rebase'
alias gra='git rebase --abort'
alias grc='git rebase --continue'
alias gri='git rebase --interactive'
alias grs='git rebase --skip'
$ grep -E '*ls -la*' ~/.dotfiles/zsh/aliases.zsh
alias lnew='ls -ld *(/om[1,3])' # Show three newest directories. "om" orders by modification. "[1,3]" works like Python slice.
alias lsize='ls -l */**(Lk+100)' # List all files larger than 100kb in this tree
alias lvd='ls -ld **/*(/^F)' # recursively list any empty sub-directories
alias l='ls -lph' # size,show type,human readable
alias la='ls -lAph' # long list,show almost all,show type,human readable
alias lt='ls -lAtph' # long list,sorted by date,show type,human readable
The indentation in this grep example is as it should be, this is not an error. My function should basically do the same, just with the content between the two stars as parameters (in this case, git rebase and ls -la.
But it doesn't do the same, and I don't know why:
$ graliases git branch
git branch
alias lnew='ls -ld *(/om[1,3])' # Show three newest directories. "om" orders by modification. "[1,3]" works like Python slice.
alias findAllIPs="nmap -sP 192.168.1.* | grep -oE '192.168.1.[0-9]*'"
alias findLocalIP="ifconfig | grep -oE 'inet 192.168.1.[0-9]*'"
alias apls="apt list"
alias gcR='git reset "HEAD^"'
alias gdi='git status --porcelain --short --ignored | sed -n "s/^!! //p"'
alias ggf="git ls-files | grep -i"
alias gCl='git status | sed -n "s/^.*both [a-z]*ed: *//p"'
alias gpc='git push --set-upstream origin "$(git-branch-current 2> /dev/null)"'
alias gpp='git pull origin "$(git-branch-current 2> /dev/null)"
&& git push origin "$(git-branch-current 2> /dev/null)"'
alias gwig="git update-index --assume-unchanged"
alias gwuig="git update-index --no-assume-unchanged"
% graliases ls -la
ls -la
alias lnew='ls -ld *(/om[1,3])' # Show three newest directories. "om" orders by modification. "[1,3]" works like Python slice.
alias findAllIPs="nmap -sP 192.168.1.* | grep -oE '192.168.1.[0-9]*'"
alias findLocalIP="ifconfig | grep -oE 'inet 192.168.1.[0-9]*'"
alias apls="apt list"
alias gcR='git reset "HEAD^"'
alias gdi='git status --porcelain --short --ignored | sed -n "s/^!! //p"'
alias ggf="git ls-files | grep -i"
alias gCl='git status | sed -n "s/^.*both [a-z]*ed: *//p"'
alias gpc='git push --set-upstream origin "$(git-branch-current 2> /dev/null)"'
alias gpp='git pull origin "$(git-branch-current 2> /dev/null)"
&& git push origin "$(git-branch-current 2> /dev/null)"'
alias gwig="git update-index --assume-unchanged"
alias gwuig="git update-index --no-assume-unchanged"
According to 1, 2, $* is the correct variable for this use case. Even the line echo "$*" prints out the expected result. Unfortunately, I haven't found an explicit explanation of $* in the zsh manpage, though.
Why does my function not work properly?
|
The grep pattern looks wrong. The rule of thumb on the command line is that everything inside single quotes is taken literally, whereas an unquoted or double-quoted string is expanded by the shell according to its rules (globbing, splitting, parameter expansion, etc.). In your case the command
grep -E '*"$*"*' ~/.dotfiles/zsh/aliases.zsh
means to pass to grep the string *"$*"* literally, so that grep interprets this pattern as a star, followed by a double quote, followed by a dollar sign repeated zero or more times (*), followed by a double quote repeated zero or more times (*). That is not what you expect.
You want to treat $* as a variable (double quoted in this case), so close single quoted string in front of it and open afterwards:
grep -E '*'"$*"'*' ~/.dotfiles/zsh/aliases.zsh
But I don't see the reason for those stars at all (you don't want to grep for stars, do you?); it looks to me like you can just simplify this to
grep -E "$*" ~/.dotfiles/zsh/aliases.zsh
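The effect of the two quoting styles can be seen directly, with positional parameters standing in for the function's arguments:

```shell
set -- git rebase        # simulate: graliases git rebase

literal='"$*"'           # single quotes: the four characters " $ * "
expanded="$*"            # double quotes: the joined arguments

echo "$literal"
echo "$expanded"
```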
| $* variable of zsh function leads to unexpected results |
1,567,691,275,000 |
I have the Linux script shown below. I cannot get the function decrypt to return anything, and I need its result to unzip a file. The function decrypt is passed a string with the name of a zip file. Please give some advice. I should mention that the other functions bring in the files correctly.
m_mode_verbose=1
const_1="ceva"
val="valoare"
decrypt ()
{
PASSPHRASE="xxxx"
encrypted=$1
local decrypt1=`echo $encrypted | awk '{print substr($0,0,64)}'`
echo "$PASSPHRASE"|gpg --no-tty --batch --passphrase-fd 0 --quiet --yes --decrypt -o ${sspTransferDir}/${decrypt1} ${sspTransferDir}/${encrypted} 2> /dev/null
if [ $? -eq 0 ]
then
notify "pgp decrypt of file.pgp succeeded"
else
notify "pgp decrypt of file.pgp failed"
fi
# PASSPHRASE=”your passphrase used for PGP”
# echo "$PASSPHRASE"|gpg --no-tty --batch --passphras
#e-fd 0 --quiet --yes \
#–decrypt -o file.dat file.pgp 2> /dev/null
#if [ $? -eq 0 ]
#then
# echo "pgp decrypt of file.pgp succeeded"
#else
# echo "pgp decrypt of file.pgp failed"
#fi
# echo "testtest $decrypt1"
echo "valoare ="$decrypt1
val=$decrypt1
#eval $decrypt1
$CONST=$decrypt1
echo "local"$CONST
}
process_file()
{
f=$1
echo "Processing $f"
for encrypted in `cat $f`; do
echo "Name of the file: "$i
echo "Decrypted : " $decrypted
decrypted=$(decrypt ${encrypted}) #decrypted = decrypt(encrypted)
# decrypted=decrypt ${encrypted} ${decrypted} #decrypted = decrypt(encrypted)
echo "val ============== " $val
echo "Decrypted after method" $decrypted
unzip -o ${TransferDir}/${decrypted} -d ${ImportRoot}
echo "Path after unzip" $ImportRoot
#rm -f ${decrypted}
echo "After remove" $decrypted
path=${sspTransferDir}/${encrypted}
#rm -f ${sspTransferDir}/${encrypted}
echo "Path to remove" $path
echo "Const ="$CONST
done
}
#main
get_config;
file="output$lang.txt"
echo "file is $file"
get_file_list $file # fills $file with the list of encrypted files corresponding to language $language
process_file $file #process - decrypt,
|
To answer the title of your question, shell functions usually return data by printing it to stdout. Callers capture the return value with retval="$(func "$arg1" "$arg2" "$@")" or similar. The alternative is to pass it the name of a variable to store a value in (with printf -v "$destvar").
If your script isn't working, it might be from quoting problems. You're missing quotes on a lot of variable expansions.
e.g.
echo "valoare ="$decrypt1
# should be:
echo "valoare =$decrypt1"
Your version quotes the literal part, but then leaves the user-data open for interpretation by the shell. Multiple whitespace characters in $decrypt1 collapse to a single space in echo's output.
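Stripped of the gpg details, the two common return conventions look like this (the .gpg-suffix handling is only a stand-in for the real decryption):

```shell
# Convention 1: emit the result on stdout; the caller captures it.
decrypt_name() {
    printf '%s\n' "${1%.gpg}"
}
result=$(decrypt_name "archive.zip.gpg")

# Convention 2: store the result into a variable named by the caller.
store_name() {
    printf -v "$1" '%s' "${2%.gpg}"
}
store_name result2 "archive.zip.gpg"

echo "$result"
echo "$result2"
```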
| Returning a variable from a function [closed] |
1,567,691,275,000 |
I would like to have an alias for the following code:-
g++ *.cc -o * `pkg-config gtkmm-3.0 --cflags --libs`;
but I want to be able to enter the alias followed by the source file name (*.cc) and then the name of the compiled program.
for example:
gtkmm simple.cc simple
should run
g++ simple.cc -o simple `pkg-config gtkmm-3.0 --cflags --libs`
|
What you need isn't an alias, but a function. Aliases do not support parameters in the way you want; the arguments would just be appended, so gtkmm simple.cc simple would end up like:
g++ -o `pkg-config gtkmm-3.0 --cflags --libs` simple.cc simple
and that's not what you try to achieve. Instead a function allows you to:
function gtkmm () {
g++ "$1" -o "$2" `pkg-config gtkmm-3.0 --cflags --libs`
}
Here, $1 and $2 are the first and second arguments:
gtkmm simple.cc simple
         $1        $2
(Note that $0 is not the function name: inside a Bash function, $0 still refers to the name of the shell or script.)
You can test the function using echo.
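For instance, a dry-run variant that prints the compiler invocation instead of running it (pkg-config omitted here so the output is predictable):

```shell
gtkmm() {
    echo g++ "$1" -o "$2"
}

gtkmm simple.cc simple
```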
You can find more functionalities about functions in the Bash online manual.
| Aliasing a command with special parameters [duplicate] |
1,567,691,275,000 |
Namely: I want to alias tail -f to less +F but let tail with any other parameter supplied work the same way as before.
|
This is slightly beyond the powers of what shell aliases provide (assuming bash). You could define a function:
function tail() {
if [ "$1" == '-f' ]; then
shift
less +F "$@"
else
command tail "$@"
fi
}
When you type tail, this will now refer to the function defined
above, which checks its first argument, if any, for equality with
-f, and if it matches, runs less +F on the rest of the original
arguments (shift removes the first of the original arguments,
-f). Otherwise, it calls command tail with all of the original
arguments (the command builtin is necessary to avoid infinite
recursion; without it, tail would refer to the function being
defined).
| Aliasing a command with parameter supplied to another command |
1,567,691,275,000 |
How do I make the following function work correctly
# Install git on demand
function git()
{
if ! type git &> /dev/null; then sudo $APT install git; fi
git $*;
}
by making git $* call /usr/bin/git instead of the function git()?
|
Like this:
# Install git on demand
function git()
{
if ! type -f git &> /dev/null; then sudo $APT install git; fi
command git "$@";
}
The command built-in suppresses function lookup. I've also changed your $* to "$@" because that'll properly handle arguments that aren't one word (e.g., file names with spaces).
Further, I added the -f argument to type, because otherwise it'll notice the function.
You may want to consider what to do in case of error (e.g., when apt-get install fails).
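The effect of the command builtin can be seen with a harmless wrapper:

```shell
echo() { command echo "wrapped:" "$@"; }   # shadow echo with a function

a=$(echo hi)              # goes through the function
b=$(command echo hi)      # bypasses the function entirely

unset -f echo             # remove the wrapper again
printf '%s\n%s\n' "$a" "$b"
```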
| Install-on-Demand Wrapper Function for Executables |
1,567,691,275,000 |
TL;DR: How do I add a grep to a bash function while allowing a variable number of optional inputs?
I find that in repeated grepping of large outputs, I end up clogging the terminal with data. Sometimes, I want to do a grep and quickly go to the top of it while capturing all the data. To this end, I have become accustomed to using "clear;clear; grep [options] [string] [file] ; top". Being lazy, I want to turn this into a function called "cgrep", which performs the two clears and the grep. What I have tried is:
cgrep () {
clear
clear
grep "$1" "$2" "$3"
}
This works, providing that I always use exactly one optional argument. While that isn't a problem most of the time, I became curious about how to feed it 0-N optional arguments without causing issues (such as reading the [string] as [file]).
I've read this post, and tried a similar implementation. However, this didn't quite suit my needs, and it would be better if I could have the optional parameters between cgrep and [string] to make it look more like a traditional grep.
|
In bash and sh "$@" will expand all positional parameters so:
cgrep () {
clear
clear
grep "$@"
}
Note that $@ has to be double quoted to prevent word splitting, globbing, empty removal, etc. from being performed on each parameter.
3.4.2 Special Parameters
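The difference quoting makes is easy to check:

```shell
show() { for arg in "$@"; do printf '<%s>\n' "$arg"; done; }

show "two words" single   # "$@" keeps "two words" as one argument
```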
| Bash Function Grep Mod |
1,567,691,275,000 |
Let's say I write a bash function like so:
function.sh
usage () {echo "No arguments are needed";}
myfunction () {
if [[ $# -qt 0 ]] ;
then
usage
fi
echo "Hello World"
}
Then I source function.sh. However, I have another script with usage() defined there too and I have sourced it too.
I run myfunction -myWorld and I hit the usage() call and it calls the correct one somehow.
How does bash know which usage() to call?
|
The answer to your question, as it currently stands, is that Bash calls the last defined version of the function.
Using two modified versions of your example:
function1.sh
usage () { echo "Usage from function1.sh - No arguments are needed"; }
myfunction1 () {
if [[ $# -gt 0 ]] ;
then
usage
fi
echo "Hello World"
}
function2.sh
usage () { echo "Usage from function2.sh - No arguments are needed"; }
myfunction2 () {
if [[ $# -gt 0 ]] ;
then
usage
fi
echo "Hello World"
}
Then running
$ source function1.sh
$ source function2.sh
$ usage
will give the output
Usage from function2.sh - No arguments are needed
Note that in your script:
-qt should be -gt, and;
some spaces were missing in the usage() - around the echo
| When writing a bash script, how does the script know which usage() to call? |
1,567,691,275,000 |
I use Windows and Linux a lot, and sometimes I type cd\ from Windows muscle-memory, so I tried to alias that, alias cd\='cd /', but that doesn't work (presumably because \ is the escape character in the shell). Is there a way, using an alias or a function, that I could make typing cd\ => cd / ?
|
That would be hard, since the backslash is used to escape the next character, and at end of line, it starts a continuation line. So even if you could make a function called cd\, you'd need to run it as cd\\, or 'cd\'. And with aliases, escaping or quoting part of the name prevents alias expansion...
Anyway, you can't create those aliases or functions in Bash:
$ alias cd\\='echo foo'
bash: alias: `cd\': invalid alias name
$ cd\\ () { echo foo; }
bash: `cd\\': not a valid identifier
You can in Zsh, though, but you need the double-backslash...
% cd\\ () { echo foo; }
% cd\\
foo
Actually it even seems to accept the alias, but you can't use it:
% alias foo\\='echo bar'
% foo\\
zsh: command not found: foo\
% 'foo\'
zsh: command not found: foo\
Bash can run an external command with a backslash in the name, but that doesn't help with cd.
| bash, "\" in an alias or function |
1,567,691,275,000 |
This was my starting point: shell script - Executing user defined function in a find -exec call - Unix & Linux Stack Exchange
But I need to choose between 2 different versions of the function, based on an argument passed to the containing script. I have a working version, but it has a lot of duplicate code. I'm trying to implement it better, but I can't quite figure out how to do that in this context.
Here's the core code:
cmd_force() {
git fetch;
git reset --hard HEAD;
git merge '@{u}:HEAD';
newpkg=$(makepkg --packagelist);
makepkg -Ccr;
repoctl add -m $newpkg;
}
cmd_nice() {
git pull;
newpkg=$(makepkg --packagelist);
makepkg -Ccr;
repoctl add -m $newpkg;
}
if [[ $force == "y" ]] ; then
export -f cmd_force
find . -mindepth 2 -maxdepth 2 -name PKGBUILD -execdir bash -c 'cmd_force' bash {} \;
else
echo "Call this with the -f option in case of: error: Your local changes to ... files would be overwritten by merge"
export -f cmd_nice
find . -mindepth 2 -maxdepth 2 -name PKGBUILD -execdir bash -c 'cmd_nice' bash {} \;
fi
I don't think I should have to have two independent functions. There are only a few lines that differ. The actual functions have a lot more code, but it is completely duplicated between them.
I did not include my code for parsing the argument because I'm learning about getopt and haven't finished that part yet.
|
You can export force too and move the if [[ $force == "y" ]] into the function:
cmd() {
if [[ $force == "y" ]] ; then
git fetch;
git reset --hard HEAD;
git merge '@{u}:HEAD';
else
git pull;
fi
newpkg=$(makepkg --packagelist);
makepkg -Ccr;
repoctl add -m $newpkg;
}
export -f cmd
export force
find . -mindepth 2 -maxdepth 2 -name PKGBUILD -execdir bash -c 'cmd' bash {} \;
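That both the function and the variable make it into the shell spawned by find can be verified in isolation:

```shell
cmd() { echo "force=$force"; }
export -f cmd
export force=y
bash -c 'cmd'    # stands in for the bash -c 'cmd' run by find -execdir
```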
| Executing user defined function in a find -exec call AND choosing version of that function based on arguments |
1,567,691,275,000 |
I'm rocking ZSH as my main shell, but in my .zshrc, I'd like to set up an ssh command with expect so I can ssh into my dev boxes more easily when I flash builds (there's literally no security needed; it's all on an intranet of sorts). I can pass a password to ssh with a #!/usr/bin/expect shell.
Is it kosher to do this?
password=sick_awesome_password6969
function expect_ssh () {
# I enter expect shell at the beginning of this function <==
#!/usr/bin/expect
set timeout 20
set cmd [lrange $argv 1 end]
set password [lindex $argv 0]
eval spawn $cmd
expect "password:"
send "$password\r";
interact
exit 0 # Then escape from it ? <==
}
default_boxssh_subnet=1
function bosh () {
if [[ ! $1 == *"."* ]];
then
# ssh [email protected].$default_boxssh_subnet.$1
expect_ssh 10.10.$default_boxssh_subnet.$1
else
# ssh [email protected].$1
expect_ssh 10.10.$default_boxssh_subnet.$1
fi
}
|
As already mentioned by choroba, that shebang becomes a regular comment line.
Instead of having a separate expect script file, you can use a heredoc:
function expect_ssh () {
expect <<'EOF'
set timeout 20
set cmd [lrange $argv 1 end]
set password [lindex $argv 0]
eval spawn $cmd
expect "password:"
send "$password\r";
interact
exit 0 # Then escape from it ? <==
EOF
}
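The quoted delimiter ('EOF') is what keeps the calling shell from expanding expect's own $variables; compare:

```shell
who=world

cat <<EOF
unquoted: $who
EOF

cat <<'EOF'
quoted: $who
EOF
```

With the unquoted delimiter the shell substitutes $who before cat ever sees the text; with the quoted one the $who reaches the consumer untouched, exactly what expect needs.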
| Change shells in a script function |
1,588,657,028,000 |
I am new to Linux and trying to pass a variable from one function to another in the same bash script.
Below is my code:
#!/bin/bash -x
FILES=$(mktemp)
FILESIZE=$(mktemp)
command_to_get_files(){
aws s3 ls "s3://path1/path2/"| awk '{print $2}' >>"$FILES"
}
command_to_get_filesizes(){
for file in `cat $FILES`
do
if [ -n "$file" ]
then
# echo $file
s3cmd du -r s3://path1/path2/$file | awk '{print $1}'>>"$FILESIZE"
fi
done
}
files=( $(command_to_get_files) )
filesizes=( $(command_to_get_filesizes) )
So in the above code, the first function fills the $FILES variable with its output.
$FILES is then passed as input to the second function, command_to_get_filesizes.
But I am getting a Broken Pipe error.
Can anyone please help me pass a local variable from one function to another?
Output of $FILES is
2016_01
2016_02
2016_03
2016_04
Kindly help!
|
It depends on your use-case on how to transfer data from one function into another one.
I could not reproduce your error - maybe it has something to do with aws or s3cmd. using backticks as subshell is deprecated - you should use $().
If you just want to pass data and you are not interested in storing them to your hard drive you could use global arrays (everything you don't declare otherwise is global):
#!/usr/bin/env bash
command_to_get_files() {
local ifs
# store the internal field separator in order to change it back once we used it in the for loop
ifs=$IFS
# change IFS in order to split only on newlines and not on spaces (this is to support filenames with spaces in them)
IFS='
'
# I don't know the output of this command, but it should work with minor modifications
# used for tests:
# for i in *; do
for file in $(aws s3 ls "s3://path1/path2/" | awk '{print $2}'); do
# add $file as a new element to the end of the array
files+=("${file}")
done
# restore IFS for the rest of the script to prevent possible issues at a later point in time
IFS=${ifs}
}
# needs a non-empty files array
command_to_get_filesizes() {
# check if the number of elements in the files-array is 0
if (( 0 == ${#files[@]} )); then
return 1
fi
local index
# iterate over the indices of the files array
for index in "${!files[@]}"; do
# $(( )) converts the expression to an integer - so not found files are of size 0
filesizes[${index}]=$(( $(s3cmd du -r "s3://path1/path2/${files[${index}]}" | awk '{print $1}') ))
# used for testing:
# filesizes[${index}]=$(( $(stat -c %s "${files[${index}]}") ))
done
}
command_to_get_files
command_to_get_filesizes
# loop over indices of array (in our case 0, 1, 2, ...)
for index in "${!files[@]}"; do
echo "${files[${index}]}: ${filesizes[${index}]}"
done
notes about bash arrays:
get the size of the array: ${#array[@]}
get the size of the first element: ${#array[0]}
get the indices of the array: ${!array[@]}
get the first element of the array: ${array[0]}
for more information about arrays have a look here.
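These expansions can be tried directly:

```shell
arr=(alpha beta gamma)

echo "${#arr[@]}"            # number of elements: 3
echo "${#arr[0]}"            # length of the first element: 5
echo "${!arr[@]}"            # the indices: 0 1 2
echo "${arr[0]}"             # the first element: alpha
```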
another method would be to just echo the names and provide them as parameters to the other function (this is difficult with multi-word filenames)
Using temporary files would result in something like this:
#!/usr/bin/env bash
readonly FILES=$(mktemp)
readonly FILESIZES=$(mktemp)
# at script exit remove temporary files
trap cleanup EXIT
cleanup() {
rm -f "$FILES" "$FILESIZES"
}
command_to_get_files() {
aws s3 ls "s3://path1/path2/" | awk '{print $2}' >> "$FILES"
}
command_to_get_filesizes() {
while read -r file; do
s3cmd du -r "s3://path1/path2/${file}" | awk '{print $1}' >> "$FILESIZES"
done < "$FILES"
}
command_to_get_files
command_to_get_filesizes
| Passing a variable from one function to another in bash script |
1,588,657,028,000 |
I want to use direnv to automatically define a bash function when I switch to a particular directory. Here is the function definition.
seqchart () {
# Create a sequence diagram creation shorthand
f=$1
target_f=${f%.*}.svg
if [ -f "$f" ]; then
diagrams sequence $f ${target_f}
open -a firefox ${target_f}
else
echo "$0: No file specified"
fi
}
I put the above code in the .envrc.
When I cd to the folder, I can see
$ cd sequence_diagrams/
direnv: loading .envrc
But the bash function is not created.
How can I make it happen while keeping everything in a single .envrc?
|
From the FAQ, emphasis mine:
direnv is not loading the .envrc into the current shell. It’s creating a new bash sub-process to load the stdlib, direnvrc and
.envrc, and only exports the environment diff back to the original
shell. This allows direnv to record the environment changes accurately
and also work with all sorts of shells. It also means that aliases and
functions are not exportable right now.
Exporting the function using export -f seqchart might not work either, since I think bash doesn't provide a way to set environment variables of the same form as exported functions (BASH_FUNC_foo%%), and even if it did, I think it only reads those at startup.
| How to create a bash function from within a .envrc? |
1,588,657,028,000 |
I'm trying to monitor some files with entr.
My script based on their examples:
do_it(){ echo Eita!; }
while true; do ls folder/* more-folder/* | entr -pd do_it; done
>> entr: exec do_it: No such file or directory
However, this works:
while true; do ls folder1/* folder2/* | entr -pd echo Eita!; done
What am I doing wrong?
|
Answered by the programmers of entr (https://github.com/eradman/entr/issues/6): "A function can be exported to a subshell, but you cannot execute a function with an external program. If you want to execute shell functions, you can write a loop such as this:"
do_it(){ echo 'Eita!'; }
while true; do
ls folder1/* folder2/* | entr -pd -s 'kill $PPID'
do_it
done
| Entr: trying to trigger function while monitoring file change |
1,588,657,028,000 |
How can I "inject" a function argument into a defined variable, as in this example?
mood="i am $1 happy"
happy ()
{
echo "$mood"
}
happy "very"
Current output:
i am happy
Desired output:
i am very happy
Thanks!
Edit:
The real world example is: I have a lot of translatable strings in another file, like so:
installing="Installing"
installation_started="The installation of <app> started at <date>"
installation_ended="The installation of <app> ended at <date>"
And a function:
apt_get_install ()
{
echo "$installing $1..."
echo "$installation_started"
apt-get -y install "$1"
echo "$installation_ended"
}
apt_get_install <app>
And then I want to inject <app> into the output.
|
This should work. But be very careful with eval: never use it on user input, as it will execute anything.
mood='i am $1 happy'
happy ()
{
eval echo "$mood"
}
happy "very"
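If, as in your edit, the strings carry explicit placeholders such as <app>, parameter expansion can fill them in without eval at all (an illustrative sketch; the render helper is made up):

```shell
installation_started='The installation of <app> started'

# Replace every occurrence of the <app> placeholder with the argument.
render() {
    local template=$1 app=$2
    echo "${template//<app>/$app}"
}

render "$installation_started" nginx
```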
| Pass function argument to defined variable |
1,588,657,028,000 |
I executed the following code in my Bash console in an Ubuntu 16.04 environment:
cat <<-'DWA' > /opt/dwa.sh
DWA() {
test='test'
read domain
find /var/www/html/ -exec cp /var/www/html/${domain} /var/www/html/${test} {} \;
sed -i 's/${domain}/${test}'/g /var/www/html/test/wp-config.php
sed -i 's/${domain}/${test}'/g /var/www/html/test/wp-config.php
mysql -u root -p << MYSQL
create user '${test}'@'localhost' identified by '${psw}';
create database ${test};
GRANT ALL PRIVILEGES ON ${test}.* TO ${test}@localhost;
MYSQL
}
DWA
DWA
Everything was redirected as I desired except the code in the last row (the last DWA, which serves as a function call).
Why was everything except the last DWA function call copied, while only this line wasn't?
Maybe there is some conflict with the DWA before the last one?
|
The last DWA is being removed because you are using this string as your delimiter. The delimiters tell your shell that everything between these matching strings is part of the here document. The delimiters are not part of the document and are therefore stripped when the here document is read. The reason the DWA prior to that remained is that the delimiter must be at the start of the line. I typically see people use EOF or EOL, but this string can be whatever you want, as long as it is unique and does not appear within your document.
I recommend modifying to this:
cat <<-'EOF' > /opt/dwa.sh
#!/bin/bash
DWA() {
test='test'
read domain
find /var/www/html/${domain} -exec cp /var/www/html/${domain} /var/www/html/${test} {} \;
sed -i 's/${domain}/${test}'/g /var/www/html/test/wp-config.php
sed -i 's/${domain}/${test}'/g /var/www/html/test/wp-config.php
mysql -u root -p << MYSQL
create user '${test}'@'localhost' identified by '${psw}';
create database ${test};
GRANT ALL PRIVILEGES ON ${test}.* TO ${test}@localhost;
MYSQL
}
DWA
EOF
If you do actually want those variables to expand prior to being sent to dwa.sh, you should leave EOF unquoted (use <<-EOF instead of <<-'EOF').
I find this page to be a very concise resource for here documents:
http://tldp.org/LDP/abs/html/here-docs.html
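To illustrate the quoting point, here is a small standalone comparison of a quoted and an unquoted delimiter (the variable name is just for the demo):

```shell
name="world"

# Quoted delimiter: the here-doc body is taken literally, nothing expands
quoted=$(cat <<'EOF'
hello $name
EOF
)

# Unquoted delimiter: $name expands when the here-doc is read
unquoted=$(cat <<EOF
hello $name
EOF
)

echo "$quoted"     # hello $name
echo "$unquoted"   # hello world
```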
| cat heredocument copied everything besides function call |
1,588,657,028,000 |
What is FreeBSD's /bin/sh equivalent to bash's:
compgen -A function
which lists the names of the declared functions.
|
FreeBSD's /bin/sh is the Almquist shell, and it has no equivalent to that because the Almquist shell does not have programmable command completion in the first place.
However, if you were looking for an equivalent for typeset -F you would still be out of luck. The Almquist shell has no built-in command for listing the names of available shell functions.
This is in fact the same question as "ash: List functions" and "dash: List declared functions". The Debian Almquist shell, the FreeBSD Almquist shell, and the BusyBox Almquist shell are all the Almquist shell. Whilst there are differences amongst them (in particular the setvar builtin, the let builtin, and suchlike) they do not extend to a difference in this respect.
In fact, no flavour of the Almquist shell has this. So you probably do not need to ask this a fourth time about another Almquist shell. ☺
Further reading
Sven Mascheck (2014-10-11). ash variants. in-ulm.de.
POSIX print function definition
| FreeBSD's sh: List functions |
1,588,657,028,000 |
I am unable to discern why timeout in a function call will cause a loop to stop. I have a "solution", but I am really very intrigued as to how / why this is happening! It seems to be something to do with cat being the command getting timed out?
TL;DR
while read -r line; do ... done < file gets terminated when a timeout occurs on cat, producing the wrong output and exit code. The loop does not loop through every line in the file.
If instead an array is created first of all lines in the file, and then ... is executed in a for line in "${all_lines[@]}"; do, all lines are processed and the output of timeout with respect to exit codes is correct.
Suppose the script grade.sh intends to read all of tests.txt and execute soln.sh, making sure that soln.sh terminates. To demonstrate a "working" example, soln.sh will first sleep.
tests.txt
first
second
third
fourth
fifth
grade.sh
#!/usr/bin/env bash
while read -r line; do
echo "Test: $line"
output="$(timeout 2 ./soln.sh "$line")"
timed_exit=$?
echo " Soln Output: $output"
echo " Timed exit: $timed_exit"
done < "tests.txt"
soln.sh
#!/usr/bin/env bash
if [[ "$1" == "third" ]]; then
sleep 3
fi
echo "[soln running $1]"
expected output
Test: first
Soln Output: [soln running first]
Timed exit: 0
Test: second
Soln Output: [soln running second]
Timed exit: 0
Test: third
Soln Output:
Timed exit: 124
Test: fourth
Soln Output: [soln running fourth]
Timed exit: 0
Test: fifth
Soln Output: [soln running fifth]
Timed exit: 0
If we change soln to do something that will continue forever (waiting for input), the loop ends instead
soln.sh
#!/usr/bin/env bash
if [[ "$1" == "third" ]]; then
cat $(find . -name iamnothere.txt) | wc -l
fi
echo "[soln running $1]"
output terminates early, extra 2, wrong exit code
Test: first
Soln Output: [soln running first]
Timed exit: 0
Test: second
Soln Output: [soln running second]
Timed exit: 0
Test: third
Soln Output: 2
[soln running third]
Timed exit: 0
Hacky fix is to loop through every line first and use a for loop which will bypass this.
"fixed" grade.sh
#!/usr/bin/env bash
all_lines=()
idx=0
while read -r line; do
all_lines[idx]="$line"
(( idx++ ))
done < "tests.txt"
for line in "${all_lines[@]}"; do
echo "Test: $line"
output="$(timeout 2 ./soln.sh "$line")"
timed_exit=$?
echo " Soln Output: $output"
echo " Timed exit: $timed_exit"
done
expected output
Test: first
Soln Output: [soln running first]
Timed exit: 0
Test: second
Soln Output: [soln running second]
Timed exit: 0
Test: third
Soln Output:
Timed exit: 124
Test: fourth
Soln Output: [soln running fourth]
Timed exit: 0
Test: fifth
Soln Output: [soln running fifth]
Timed exit: 0
is this a feature or a bug or am I missing something?
It seems to me like cat is somehow overriding timeout, since the rest of the script gets to execute.
|
cat $(find . -name iamnothere.txt) | wc -l
assuming that iamnothere.txt does not exist, becomes
cat | wc -l
which consumes standard input, the same standard input that the while loop is reading its lines from. The for loop avoids this because it does not read from standard input the way the redirected while loop does. This can be observed by using a bare cat for the second-line case, which shows that the third line has been read by that cat:
$ cat lines
first
secon
third
$ cat looper
#!/bin/sh
while read line; do
x=$(timeout 2 ./doer "$line")
echo "$line out=$x code=$?"
done < lines
$ cat doer
#!/bin/sh
if [ "$1" = secon ]; then
cat
else
echo "$1 pid$$"
fi
$ ./looper
first out=first pid42079 code=0
secon out=third code=0
$
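A common alternative to collecting the lines into an array first (my sketch, not part of the answer above) is to redirect the loop body's stdin away from the file, so a stray cat reads nothing instead of eating the remaining lines:

```shell
# Demo with a throwaway file: without the </dev/null, the bare cat
# would swallow the remaining lines and the loop would run only once.
tmpfile=$(mktemp)
printf 'first\nsecond\nthird\n' > "$tmpfile"

count=0
while read -r line; do
    cat </dev/null >/dev/null   # stand-in for the runaway command
    count=$((count + 1))
done < "$tmpfile"

rm -f "$tmpfile"
echo "$count"   # 3
```

In the original grade.sh this would be `output="$(timeout 2 ./soln.sh "$line" </dev/null)"`. Another option is to feed the loop through a separate file descriptor: `while read -r line <&3; do ...; done 3< tests.txt`.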
| timeout causes while read loop to end when `cat` is timed out |
1,588,657,028,000 |
I am trying to execute few commands in a server by logging in using sshpass command like below.
SSH_ARGS='-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -q'
sshpass -p 'password' ssh ${SSH_ARGS} user@IP 'sudo sed -i "/^server 0.rhel.pool.ntp.org/ { N; s/^server 0.rhel.pool.ntp.org\n/server xxx.xx.xx.xx iburst\n&/ }" /etc/ntp.conf'
sshpass -p 'password' ssh ${SSH_ARGS} user@IP 'sudo sed -i "/^#server 0.rhel.pool.ntp.org iburst/ { N; s/^#server 0.rhel.pool.ntp.org iburst\n/server xxx.xx.xx.xx iburst\n&/ }" /etc/ntp.conf'
echo "File /etc/ntp.conf is now edited"
sshpass -p 'password' ssh ${SSH_ARGS} user@IP 'sudo cat /etc/ntp.conf | grep "server xxx.xx.xx.xx iburst"'
if [ $? = 0 ];
then
sshpass -p 'password' ssh ${SSH_ARGS} user@IP 'sudo service ntpd status;sudo service ntpd restart'
else
echo "File /etc/ntp.conf is not updated in IP"
fi
So instead of repeating sshpass every time, I would like to put sshpass -p 'password' ssh ${SSH_ARGS} user@IP into a variable or function. How can I do that?
I have tried ::
SSH_ARGS='-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -q'
sshpp=`sshpass -p 'password' ssh ${SSH_ARGS} user@IP`
$sshpp 'sudo service ntpd status'
and this ::
SSH_ARGS='-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -q'
sshpp() {
sshpass -p 'password' ssh ${SSH_ARGS} user@IP`
}
sshpp 'sudo service ntpd status'
I tried this as well, but it is not working.
How can I achieve this without repeating sshpass every time?
|
What you seem to be looking for is an alias:
alias sp='sshpass -p "password" ssh $SSH_ARGS user@IP'
You can therefore run your commands like:
sp 'sudo sed -i "/^server 0.rhel.pool.ntp.org/ { N; s/^server 0.rhel.pool.ntp.org\n/server xxx.xx.xx.xx iburst\n&/ }" /etc/ntp.conf'
Note that there may be ways to simplify your life a bit more by making scripts to do these repetitive tasks, but that is beyond the scope of this question.
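For completeness, the function attempt from the question can be made to work too: it just needs the stray backtick removed and "$@" added so the remote command is forwarded as arguments. A sketch with the ssh call stubbed out so it runs standalone (the real body would be `sshpass -p 'password' ssh ${SSH_ARGS} user@IP "$@"`):

```shell
# Stub standing in for: sshpass -p 'password' ssh ${SSH_ARGS} user@IP "$@"
sshpp() {
    echo "remote: $*"
}

out=$(sshpp 'sudo service ntpd status')
echo "$out"   # remote: sudo service ntpd status
```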
| How to use ssh in a function(bash)? |
1,588,657,028,000 |
I wrote a bash script for listing python processes (RAM usage, PID and status) in human-readable form with colored lines. But I have a problem with the script's run time: because of the repeated ps commands it takes too long.
SCRPITS=`ps x | grep python | grep -v ".pyc" | grep -v grep | awk '{print $NF}'`
prepare_text () {
if [[ ${2%.*} -gt ${RAMLIMIT} ]]; then
# RED
TEXT=`printf "\033[31m%-62s %'10d %2s %5s %6s\n\033[0m" "${1}" "${2}" "KB" "${3}" "${4}"`
elif [[ ${2%.*} -gt ${RAMLIMIT}/2 ]]; then
# YELLOW
TEXT=`printf "\033[33m%-62s %'10d %2s %5s %6s\n\033[0m" "${1}" "${2}" "KB" "${3}" "${4}"`
else
# GREEN
TEXT=`printf "\033[32m%-62s %'10d %2s %5s %6s\n\033[0m" "${1}" "${2}" "KB" "${3}" "${4}"`
fi
TEXTBODY+=${TEXT}
}
display () {
printf "$SUBJECT\n"
printf "%-62s %13s %5s %8s\n" "PROCESS" "RAM USAGE" "PID" "STATUS"
printf "===========================================================================================\n"
printf "${TEXTBODY}\n"
}
for SCRIPT in ${SCRPITS}
do
USAGE=`ps aux | grep ${SCRIPT} | grep -v "grep" | awk '{print $6}'`
PID=`ps aux | grep ${SCRIPT} | grep -v "grep" | awk '{print $2}'`
STATUS=`ps aux | grep ${SCRIPT} | grep -v "grep" | awk '{print $8}'`
prepare_text ${SCRIPT} ${USAGE} ${PID} ${STATUS}
done
display
exit $?
I decided to change that approach and rearranged the whole script to shorten the run time, as below:
OIFS=$IFS #save original
IFS='\n'
SCRIPTS=`ps aux | grep python | grep -v ".pyc" | grep -v grep | awk '{print $NF,",",$5,",",$2,",",$8}'`
IFS=${OIFS}
prepare_text () {
if [[ $((${2%.*})) -gt ${RAMLIMIT} ]]; then
# RED
TEXT=`printf "\033[31m%-62s %'10d %2s %5s %6s\n\033[0m" "${1}" "${2}" "KB" "${3}" "${4}"`
elif [[ $((${2%.*})) -gt ${RAMLIMIT}/2 ]]; then
# YELLOW
TEXT=`printf "\033[33m%-62s %'10d %2s %5s %6s\n\033[0m" "${1}" "${2}" "KB" "${3}" "${4}"`
else
# GREEN
TEXT=`printf "\033[32m%-62s %'10d %2s %5s %6s\n\033[0m" "${1}" "${2}" "KB" "${3}" "${4}"`
fi
TEXTBODY+=${TEXT}
}
display () {
printf "$SUBJECT\n"
printf "%-62s %13s %5s %8s\n" "PROCESS" "RAM USAGE" "PID" "STATUS"
printf "===========================================================================================\n"
OIFS=$IFS
IFS=","
set ${SCRIPTS}
for SCRIPT in ${SCRIPTS}
do
prepare_text $1 $2 $3 $4
done
printf "\n\n"
IFS=${OIFS}
printf "${TEXTBODY}\n"
}
display
exit $?
Now I can get all the information I want from ps at once, but I have a problem with formatting and displaying that information.
I can't figure out how I can take each record from ${SCRIPTS}, split it, and pass the fields to the prepare_text function.
I guess I misunderstand something.
|
I suggest you extract the info that you need from ps, nothing else, and let awk (not bash) do the rest: grepping, comparisons, formatting. Example:
ps -ax --no-headers -o pid,vsz,stat,command |
awk -v lim=23000 '
# let awk do the grepping
/bash/ && !/awk/ {
# save first 3 fields
pid=$1
vsz=$2
stat=$3
# rest is command line, possibly spanning multiple fields
for (i=4;i<=NF;++i) $(i-3)=$i
NF-=3
# decide on color
if (vsz>lim) col="\033[31m"
else if (vsz>lim/2) col="\033[33m"
else col="\033[32m"
# printout
printf("%s%-62s %10d KB %5s %6s%s\n",
col, $0, vsz, pid, stat, "\033[0m")
}'
Tweak values, and add in headers as needed.
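To see the field shuffling in isolation, the same awk body can be fed a fixed sample line instead of live ps output (the values are made up for the demo):

```shell
# One fake "pid vsz stat command..." record instead of live ps output
sample='101 30000 S /usr/bin/bash script.sh'

out=$(printf '%s\n' "$sample" |
awk '/bash/ && !/awk/ {
    pid=$1; vsz=$2; stat=$3
    # shift the command-line fields down over the first three
    for (i=4;i<=NF;++i) $(i-3)=$i
    NF-=3
    printf("%s vsz=%d pid=%s stat=%s\n", $0, vsz, pid, stat)
}')
echo "$out"   # /usr/bin/bash script.sh vsz=30000 pid=101 stat=S
```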
| How to split Bash array into arguments |
1,588,657,028,000 |
Overview:
I save my variable in a config file and call them later.
Each entry with the name FailOverVM has a number beside it, like FailOverVM1, and I want to check whether it has data and generate a function named FailOverVM1() that later in the script starts $FailOverVM1Name, which happens to be 'Plex Server'.
I can manually do it like StartVM1() and it works, but I may expand to 15 later and want it to adjust accordingly.
To clarify: I can start the VM with a case statement later (and have), but I can't wrap my head around a variable that is itself a variable. I hope I didn't confuse anyone. Maybe I'm making this WAY more complicated than it is or needs to be.
#!/bin/bash
. "${BASH_SOURCE%/*}/configlocation.conf"
. $Configuration
checkVM1=$(nc -vvz $FailOverVM1IP $FailOverVM1Port 2>&1)
VMCount=$(grep "FailOverVM.Name" /media/VirtualMachines/Current/Configuration.conf | wc -l)
pinggateway=$(ping -q -w 1 -c 1 `ip r | grep default | cut -d ' ' -f 3` > /dev/null && echo ok || echo error = error)
STATE="error";
while [ $STATE == "error" ]; do
#do a ping and check that its not a default message or change to grep for something else
STATE=$(ping -q -w 1 -c 1 `ip r | grep default | cut -d ' ' -f 3` > /dev/null && echo ok || echo error)
#sleep for 2 seconds and try again
sleep 2
done
for i $VMCount; do
if [ -z "$FailOverVM$VMCountName" ];
echo "$FailOverVM$VMCountName"
fi
done
StartVM1(){
if [[ $checkVM1 = "Connection to $FailOverVM1IP $FailOverVM1Port port [tcp/*] succeeded!" ]]; then
echo '$FailOverVM1Name is up'
else
echo "$FailOverVM1Name down"
su -c 'VBoxManage startvm $FailOverVM1Name -type headless' vbox
fi
}
Where I've gotten so far in a test script
#!/bin/bash
. "${BASH_SOURCE%/*}/configlocation.conf"
. $Configuration
Pre='$FailOverVM'
post="FailOverVM"
name="Name"
VMCount=$(grep "FailOverVM.Name" $Configuration | wc -l) #Count entries in config file with FailOverVM*Name
while [[ $i -le $VMCount ]]
do
#if [ -z $Pre$i"Name" ];then #If the variable $FailOverVM*Name is not blank
$post$i=$Pre$i$Name
echo "$post$i" #print it
#else
# echo $Pre$i"Name" "was empty"
#fi
((i = i + 1))
done
Output:
./net2.sh: line 11: FailOverVM=$FailOverVM: command not found
FailOverVM
./net2.sh: line 11: FailOverVM1=$FailOverVM1: command not found
FailOverVM1
./net2.sh: line 11: FailOverVM2=$FailOverVM2: command not found
FailOverVM2
./net2.sh: line 11: FailOverVM3=$FailOverVM3: command not found
FailOverVM3
./net2.sh: line 11: FailOverVM4=$FailOverVM4: command not found
FailOverVM4
./net2.sh: line 11: FailOverVM5=$FailOverVM5: command not found
FailOverVM5
./net2.sh: line 11: FailOverVM6=$FailOverVM6: command not found
FailOverVM6
The problem here is there is no $FailOverVM without a number beside it. And what is up with "command not found" for FailOverVM5 (or any other number)? I didn't know I issued a command. But the biggest problem is it's not grabbing the variable $FailOverVM* from the config file. I need that for the function loop.
New modified script with @dave_thompson_085's help
#!/bin/bash
. "${BASH_SOURCE%/*}/configlocation.conf"
. $Configuration
for i in ${!FailOverName[@]}; do
selip=FailOverIP[${i}]
selport=FailOverPort[${i}]
checkVM[$i]=$(nc -vvz ${!selip} ${!selport} 2>/dev/null)
echo ${!selip}
echo ${!selport}
echo FailOverName[${i}]
done
StartVM() { # first argument to a function is accessed as $1 or ${1}
selname=FailOverName[${i}]
if [[ checkVM[$i] =~ 'succeeded' ]]; then # only need to check the part that matters
echo number $i name ${!selname} already up
else
echo starting number $i name ${!selname}
echo su -c "VboxManager startvm '${!selname}' -headless" vbox # note " because ' $
fi
}
#done
StartVM 1 # and
StartVM 2 # etc
Output
root@6120:~/.scripts# ./net2.sh -v
192.168.1.6
32400
FailOverName[1]
192.168.1.5
80
FailOverName[2]
192.168.1.7
80
FailOverName[3]
192.168.1.1
1030
FailOverName[4]
starting number 4 name finch
su -c VboxManager startvm 'finch' -headless vbox
starting number 4 name finch
su -c VboxManager startvm 'finch' -headless vbox
root@6120:~/.scripts#
Config file
#
FailOverVM1IP='192.168.1.6'
FailOverVM1Port='32400'
FailOverVM1Name='Plex Server'
FailOverVM1NASHDD='/media/VirtualMachines/Current/Plex\ Server/Plex\ Server.vmdk'
FailOverVM1LocalHDD='/home/vbox/VirtualBox\ VMs/Plex\ Server/Plex\ Server.vmdk'
FailOverVM2IP='192.168.1.7'
FailOverVM2Port='32402'
FailOverVM1Name='Plex Server2'
FailOverVM2NASHDD='/media/VirtualMachines/Current/Plex\ Server/Plex\ Server.vmdk'
FailOverVM2LocalHDD='/home/vbox/VirtualBox\ VMs/Plex\ Server/Plex\ Server.vmdk'
FailOverVM3IP='192.168.1.8'
FailOverVM3Port='32403'
FailOverVM3Name='Plex Server3'
FailOverVM3NASHDD='/media/VirtualMachines/Current/Plex\ Server/Plex\ Server.vmdk'
FailOverVM3LocalHDD='/home/vbox/VirtualBox\ VMs/Plex\ Server/Plex\ Server.vmdk'
FailOverVM4IP='192.168.1.9'
FailOverVM4Port='32404'
FailOverVM4Name='Plex Server4'
FailOverVM4NASHDD='/media/VirtualMachines/Current/Plex\ Server/Plex\ Server.vmdk'
FailOverVM4LocalHDD='/home/vbox/VirtualBox\ VMs/Plex\ Server/Plex\ Server.vmdk'
FailOverVM5IP='192.168.1.10'
FailOverVM5Port='32405'
FailOverVM5Name='Plex Server5'
FailOverVM5NASHDD='/media/VirtualMachines/Current/Plex\ Server/Plex\ Server.vmdk'
FailOverVM5LocalHDD='/home/vbox/VirtualBox\ VMs/Plex\ Server/Plex\ Server.vmdk'
FailOverIP[1]=192.168.1.6 FailOverName[1]=robin FailOverPort[1]=32400
FailOverIP[2]=192.168.1.5 FailOverName[2]=bluejay FailOverPort[2]=80
FailOverIP[3]=192.168.1.7 FailOverName[3]=sparrow FailOverPort[3]=80
FailOverIP[4]=192.168.1.1 FailOverName[4]=finch FailOverPort[4]=1030
VM1LogDirLogDir='/media/VirtualMachines/Logs/Plextstart'
PlexServerIP='192.168.1.6'
PlexPort='32400'
mydate=`date '+%Y-%m-%d_%H%M'`
rsyncfrom=
NASPlexvmHDD='/media/VirtualMachines/Current/Plex\ Server/Plex\ Server.vmdk'
LocalPlexvmDHDD='/home/vbox/VirtualBox\ VMs/Plex\ Server/Plex\ Server.vmdk'
PlexVMname='Plex Server'
PlexStartLogDir='/media/VirtualMachines/Logs/Plextstart'
RouterIp='192.168.1.1'
So it sees all the VMs but only executes the last one, and twice at that.
#!/bin/bash
. "${BASH_SOURCE%/*}/configlocation.conf"
. $Configuration
for i in ${!FailOverName[@]}; do
selip=FailOverIP[${i}]
selport=FailOverPort[${i}]
checkVM[$i]=$(nc -vvz ${!selip} ${!selport} 2>&1)
echo ${!selip}
echo ${!selport}
#echo ${i}
#done
StartVM() { # first argument to a function is accessed as $1 or ${1}
selname=FailOverName[${i}]
if [[ $checkVM[$i] =~ 'succeeded' ]]; then # only need to check the part that matters
echo number $i name ${!selname} already up
else
echo starting number $i name ${!selname}
echo su -c "VboxManager startvm '${!selname}' -headless" vbox # note " because ' prevents the variable expansion
fi
}
StartVM
done
Note: checking whether the VM is already running doesn't work yet, but that wasn't the question I asked, so this meets the criteria.
|
Aside: you can eliminate the wc -l by using grep -c FailoverVM.Name configfile.
But if you want to use numbers over 9 decimal (not e.g. 123456789abcdef) your pattern needs to be FailoverVM[0-9][0-9]?Name or FailoverVM[0-9]{1,2}Name in -E extended mode.
Also for i $VMCount is a syntax error; I assume you mean for i in $(seq $VMCount).
You can read a variable indirectly in bash with ! (bang) and another variable containing the name:
for i in $(seq $VMCount); do
selname=FailoverVM${i}Name
selip=FailoverVM${i}IP
selport=FailoverVM${i}Port
echo name ${!selname} is IP ${!selip} and port ${!selport}
done
which is less of a blunderbuss than eval but still clumsy. But you cannot set a variable this way, so you should use an array for that. And you cannot do this for functions, so instead write one function that accepts an argument to tell it which (set of) variables to use:
for i in $(seq $VMCount); do
  selip=FailoverVM${i}IP
  selport=FailoverVM${i}Port
  checkVM[$i]=$(nc -vvz ${!selip} ${!selport} 2>/dev/null)
done
StartVM() { # first argument to a function is accessed as $1 or ${1}
  selname=FailoverVM${1}Name
  if [[ ${checkVM[$1]} =~ 'succeeded' ]]
  # only need to check the part that matters
  then echo number $1 name ${!selname} already up
  else echo starting number $1
       su -c "VBoxManage startvm ${!selname} -headless" vbox
       # note " because ' prevents the variable expansion
  fi
}
...
StartVM 1 # and
StartVM 2 # etc
OTOH if you can change the config to use array variables for everything like this
FailoverIP[1]=10.255.1.1 FailoverName[1]=robin
FailoverIP[2]=10.255.2.2 FailoverName[2]=bluejay
etc
that would make everything much simpler. And then you don't need to re-grep the file to count the entries, you can just use e.g. ${#FailoverName[@]}
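To make the ${!var} indirection concrete, here is a minimal standalone demo using one entry from the config file:

```shell
# One entry from the config, plus the indirection pattern from the answer
FailOverVM1Name='Plex Server'

i=1
selname="FailOverVM${i}Name"   # build the variable *name* as a string
echo "${!selname}"             # Plex Server  (read the variable it names)
```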
| Func name as variable in loop |
1,588,657,028,000 |
FuzzyTime()
{
local tmp=$( date +%H )
case $((10#$tmp)) in
[00-05] )
wtstr="why don't you go to bed"
;;
[06-09] )
wtstr="I see your very eager to start the day"
;;
[10-12] )
wtstr="and a very good day too you"
;;
[13-18] )
wtstr="Good Afternoon"
;;
[19-21] )
wtstr="Good Evening"
;;
[22-23] )
wtstr="it is getting late, it's time to party or go to bed"
;;
*)
wtstr="guess the planet your on has more than a 24 hour rotation"
echo 'case value is:' $tmp
;;
esac
}
The case variable represents hours in a 24-hour context, but numbers like 08 and 17 cause an issue. I resolved 08 by using $((10#$tmp)), but now 17 is an issue; any advice? This is my first bash script ever, so sorry in advance if this is a silly question.
|
[] denotes character ranges:
[10-12] means the digit 1, the range 0-1, and the digit 2; as a whole it matches a single character in the range 0-2, not the numbers 10 through 12.
Use simple comparisons with if-elif-else-fi:
if [ "$tmp" -ge 0 ] && [ "$tmp" -le 5 ]; then
echo "<0,5>"
elif [ "$tmp" -ge 6 ] && [ "$tmp" -le 9 ]; then
echo "<6,9>"
#...
else
#...
fi
(Or you could iterate over an array of range limits if you want every interval, but you might as well hardcode it in this case--as you are trying to do).
Edit: requested array version:
FuzzyTime(){
local needle=$1 #needle is $1
: ${needle:=$( date +%H )} #if needle is empty, set it to "$(date +%H)"
local times=( 0 6 10 13 19 22 24 0 )
local strings=(
"why don't you go to bed"
"I see your very eager to start the day"
"and a very good day too you"
"Good Afternoon"
"Good Evening"
"it is getting late, it's time to party or go to bed"
"guess the planet your on has more than a 24 hour rotation"
)
local b=0
# length(times) - 2 == index of the penultimate element
local B="$((${#times[@]}-2))"
for((; b<B; b++)); do
if ((needle >= times[b] && needle < times[b+1])); then break; fi
done
echo "$needle -- ${strings[$b]}"
}
FuzzyTime "$1"
test:
$ for t in {0..27}; do FuzzyTime "$t"; done
0 -- why don't you go to bed
1 -- why don't you go to bed
2 -- why don't you go to bed
3 -- why don't you go to bed
4 -- why don't you go to bed
5 -- why don't you go to bed
6 -- I see your very eager to start the day
7 -- I see your very eager to start the day
8 -- I see your very eager to start the day
9 -- I see your very eager to start the day
10 -- and a very good day too you
11 -- and a very good day too you
12 -- and a very good day too you
13 -- Good Afternoon
14 -- Good Afternoon
15 -- Good Afternoon
16 -- Good Afternoon
17 -- Good Afternoon
18 -- Good Afternoon
19 -- Good Evening
20 -- Good Evening
21 -- Good Evening
22 -- it is getting late, it's time to party or go to bed
23 -- it is getting late, it's time to party or go to bed
24 -- guess the planet your on has more than a 24 hour rotation
25 -- guess the planet your on has more than a 24 hour rotation
26 -- guess the planet your on has more than a 24 hour rotation
27 -- guess the planet your on has more than a 24 hour rotation
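If you would rather keep a case statement, an alternative sketch (not from the answer above) is to spell the hours out with | alternation, since each pattern then matches the whole number rather than a single character:

```shell
hour_msg() {
    case $((10#$1)) in
        0|1|2|3|4|5)  echo "night"     ;;
        6|7|8|9)      echo "morning"   ;;
        1[0-2])       echo "midday"    ;;   # 10, 11, 12
        1[3-8])       echo "afternoon" ;;   # 13..18
        19|2[01])     echo "evening"   ;;   # 19, 20, 21
        2[23])        echo "late"      ;;   # 22, 23
        *)            echo "other"     ;;
    esac
}

hour_msg 08   # morning  (10# handles the leading zero)
hour_msg 17   # afternoon
```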
| case statement not behaving as expected (fuzzytime() function) |
1,588,657,028,000 |
I want to use an argument in the function I created in my .profile file.
I want to ask for input if no argument is given, otherwise set a variable to $1.
When I check $1 to see if it is empty, I get the following error:
sh[7]: 1: Parameter not set.
From the following line:
if [ ! -n "$1" ]; then
I'm using sh not bash.
EDIT: Ok, here is the first line of code until the end of the if statement:
HOST=`hostname`
cd /opt/dirpath
ll *.sto
if [ x"$1" = x ]; then
# Ask for input
echo "File: \c"; read outFile
else
outFile=$1
fi
I'm editing someone else's code to work with or without arguments.
|
I think there's more to this:
Either that's not the command you're using - or else somewhere else in the function you're doing it differently. That error comes from ${1?}. Or it comes from your test, but only if you first do set -u.
To fix that, stop doing that. Do set +u; fn_name, and see what happens. And if you have any ${1?} expansions in there, that error will not go away until you give the function an argument.
Here are some code examples of how you might reproduce that error:
sh -c 'fn(){ [ ! -n "${1?}" ]; }; fn'
sh: 1: 1: parameter not set
...or...
sh -uc 'fn(){ [ ! -n "$1" ]; }; fn'
sh: 1: 1: parameter not set
...but...
sh -uc 'fn(){ [ ! -n "$1" ]; }
set +u; fn; echo "$?"'
0
...and...
sh -c 'fn(){ [ ! -n "${1?}" ]; }
fn some args; echo "$?"'
1
If the function is setting -u then you should probably edit that out. Or else if your .profile is doing so, same goes. In most cases, set -u is not a desirable persistent shell setting, and this is because that mode is designed to kill shells. It does not provide any simple means of handling the kinds of errors it generates - which is what you're trying to do with [ test ].
| sh - Using Arguments in .profile functions |
1,588,657,028,000 |
In bash I can do:
foo() { echo bar; }
export -f foo
perl -e 'system "bash -c foo"'
I can also access the function definition:
perl -e 'print "foo".$ENV{"BASH_FUNC_foo%%"}'
How do I do the same in fish?
Edit:
With this I can get the function definition:
functions -n | perl -pe 's/,/\n/g' | while read d; functions $d; end
If I can put that in an enviroment variable accessible from Perl, I ought to be able to execute that before executing the command. So similar to:
setenv funcdefs (functions -n | perl -pe 's/,/\n/g' | while read d; functions $d; end)
perl -e 'system($ENV{"funcdefs"},"foo")'
But it seems setting funcdefs ignores the newlines: $ENV{"funcdefs"} is one horribly long line.
The odd part is that it seems fish does support environment variables containing newlines:
setenv newline 'foo
bar'
echo "$newline"
Can I encourage fish to put the output from the command into a variable, but keeping the newlines?
|
Ugly as hell, but works:
function foo
echo bar;
end
setenv funcdefs (functions -n | perl -pe 's/,/\n/g' | while read d; functions $d; end|perl -pe 's/\n/\001/')
perl -e '$ENV{"funcdefs"}=~s/\001/\n/g;system ("fish", "-c", $ENV{funcdefs}."foo")'
| Accessing fish functions from perl |
1,588,657,028,000 |
I'm writing a function, adding it to ~/.zshrc on my Mac. It's in order to more quickly handle commands to youtube-dl.
I have this:
function dlv()
{
cd /Users/admin/Downloads
youtube-dl -f 'best' "$1"
}
But when I make a request I have to input the youtube link with quote marks.
dlv "https://www.youtube.com/watch?v=dQw4w9WgXcQ"
instead of dlv https://www.youtube.com/watch?v=dQw4w9WgXcQ
How can I achieve this?
|
Well, zsh could quote the URLs for you via functions and zle - the line editor:
autoload -Uz url-quote-magic
zle -N self-insert url-quote-magic
autoload -Uz bracketed-paste-magic
zle -N bracketed-paste bracketed-paste-magic
and then when you type or paste a URL in your terminal it'll be automatically quoted.
Another way (that also uses zle) would be to assign a shortcut e.g. Ctrl+Alt+y to a function that builds up the command line for you, i.e. it inserts the command and its options before the quoted URL:
dlv () {
cmd='youtube-dl -f best '
jump=$(( ${#${(qq)BUFFER}} - ${#BUFFER} ))
BUFFER=${cmd}${(qq)BUFFER}
CURSOR+=$(( ${#cmd} + jump ))
}
zle -N dlv
bindkey '^[^y' dlv
You use it like this: you type or paste the URL, then hit Ctrl+Alt+y which quotes the URL, adds youtube-dl -f best in front of it and positions the cursor at the end of line. Then hit Enter
Add the above to your .zshrc to make it permanent.
Other people prefer having a widget/plugin that quotes everything after certain commands (see here or here)... so yes, there are ways to have it quoted for you.
| How to write a function that takes an argument string that does not need to be quoted? |
1,588,657,028,000 |
I have a zsh alias:
gitbs() {
git branch | grep -- $1
}
And I would like to pass the result into git checkout, for example:
git checkout | gitbs state
How can I make this work?
|
A shell pipe passes the output of a command to the input of another command. This won't help you here: you want to pass the output of a command as a command line argument of another command. The tool for that is command substitution. So the basic idea is
git checkout "$(gitbs state)"
(It's still a pipe under the hood, but the reader side of the pipe is the shell itself: it reads the output and then constructs a command line including that output.)
However, the output of gitbs state is not the right format to pass to git checkout: it has extra spaces and sometimes punctuation characters on the same line as the branch name. (Also color formatting codes, but only when the output is a terminal or when git calls a pager automatically, not when the output is a pipe.) Also, if there is no matching branch or more than one, you'll get a somewhat weird error message from git checkout.
To fix this, you can change gitbs to produce the raw branch name(s) as output. Here's a version that keeps the pretty formatting intended for humans if the output is a terminal, and just prints one branch name per line otherwise. It uses git for-each-ref to enumerate branch names. The conditional expression -t 1 tests whether standard output is a terminal.
gitbs () {
if [[ -t 1 ]]; then
git branch
else
git for-each-ref --format='%(refname:lstrip=2)' 'refs/heads/*'
fi | grep -- "$1"
}
With this definition of gitbs, git checkout "$(gitbs state)" will work.
Note the double quotes around the command substitution. Without double quotes (git checkout $(gitbs state)), the output is split into separate arguments at whitespace, so if multiple branches match, the resulting command will be something like git checkout foobar1 foobar2, which will not check out foobar1 but instead will overwrite the current version of the file foobar2 with the version from foobar1 if a file named foobar2 exists.
To avoid this pitfall, it may be better to define a different version of gitbs which requires a single matching branch. You get the benefit of a clearer error message if there are zero or more than one matching branches, although there's still an extra message about the current branch from git checkout. This function puts the list of matching branches in an array:
gitbs1 () {
local branches
branches=($(git for-each-ref --format='%(refname:lstrip=2)' 'refs/heads/*' | grep "$1"))
if (( $#branches == 0 )); then
echo "No branch contains '$1'" >&2
return 3
fi
if (( $#branches > 1 )); then
echo "Multiple branches match '$1':" >&2
print -lr $branches >&2
return 3
fi
echo $branches
}
Then you can safely write git checkout $(gitbs1 state).
If you turn on the option glob_complete (i.e. setopt glob_complete in your .zshrc), then you can type
git branch *foo*Tab
and *foo* will be replaced by the name of the matching branch if there is one. If there are multiple matching branches, you'll get the same kind of menu or cycling behavior as for ordinary (prefix) completion.
| How to pass zsh alias function to pipe |
1,588,657,028,000 |
I have a somewhat difficult time figuring out how - if possible - to return from a higher function, let me show you a POSIX code tidbit:
sudoedit_err ()
{
printf >&2 'Error in sudoedit_run():\n'
printf >&2 '%b\n' "$@"
}
sudoedit_run ()
{
# `sudoedit` is part of `sudo`'s edit feature
if ! command -v sudo > /dev/null 2>&1; then
sudoedit_err "'sudo' is required by this script."
return 1
fi
# primary non-empty arguments check
if ! { [ $# -ge 3 ] && [ -n "$1" ] && [ -n "$2" ] && [ -n "$3" ]; } then
sudoedit_err "Low number of arguments.\\nExpected: \$1 = editor type; \$2 = editor name; \$3, (\$4), ... = file(s).\\nPassed $#: $*"
return 1
fi
...
Important notes first:
These functions are sourced to my shell directly from the .bash_aliases file = which is sourced by my .bashrc in effect.
What I would like: The sudoedit_err be able to return directly, which I am not able to do, I am quite sure I just missed a class of POSIX scripting. üò†Ô∏è
In spite, my default shell is Bash, the solution must be POSIX-compliant.
What I found out:
One can't use exit 1 instead of return 1 = it would exit the terminal.
|
A couple of people have suggested a subshell, which I think is a good idea. Using that, you can introduce a wrapper function that invokes a second function in a subshell. With that, any function that that second function calls can invoke exit to terminate the subshell.
Here's an example based on your original post:
sudoedit_err() {
printf >&2 'Error in sudoedit_run():\n'
printf >&2 '%b\n' "$@"
exit 1
}
_sudoedit_run() {
# `sudoedit` is part of `sudo`'s edit feature
if ! command -v sudo > /dev/null 2>&1; then
sudoedit_err "'sudo' is required by this script."
fi
# primary non-empty arguments check
if ! { [ $# -ge 3 ] && [ -n "$1" ] && [ -n "$2" ] && [ -n "$3" ]; } then
sudoedit_err "Low number of arguments.\\nExpected: \$1 = editor type; \$2 = editor name; \$3, (\$4), ... = file(s).\\nPassed $#: $*"
fi
}
sudoedit_run()
{
(_sudoedit_run "$@")
}
You wouldn't want to call the wrapped function directly since that'd terminate your shell.
| Return from higher function, how - if possible? |
1,588,657,028,000 |
I'm quite new on the shell scripting front and was wondering whether it is possible to call a function which itself then calls another function with none, one or multiple arguments. The first argument would be the name of the function to call; every other argument is an argument for the function to call.
As background, I want to write a shell script that uses some built-in OpenFOAM functions, namely runParallel and runApplication, which, for clarification, I called runSerial in the examples below. Those functions do different things: as the names suggest, they run a command in either serial or parallel mode.
A simulation in OpenFOAM is made up of multiple function calls and all I want to do is shorten the code so that instead of this
#!/bin/sh
# $n_core is a user input how many cores to use
printf 'On how many cores do you want to run the simulation?'
read -r n_core
if [ $n_core -eq "1" ]; then
runSerial "functionOne" # no arguments here
runSerial "functionTwo" "arg1"
runSerial "functionThree" "arg1" "arg2"
...
else
runParallel "functionOne" # no arguments here
runParallel "functionTwo" "arg1"
runParallel "functionThree" "arg1" "arg2"
...
fi
I was wondering whether I could replace that with something like this
#!/bin/sh
runSerialOrParallel()
{
if [ $n_core -eq "1" ]; then
runSerial "$1" "$2" ...
else
runParallel "$1" "$2" ...
fi
}
# $n_core is a user input how many cores to use
printf 'On how many cores do you want to run the simulation?'
read -r n_core
runSerialOrParallel "functionOne" # no arguments here
runSerialOrParallel "functionTwo" "arg1"
runSerialOrParallel "functionThree" "arg1" "arg2"
At the moment I'm stuck with the question on how to account for the arguments for the function which my runSerialOrParallel function should call itself. So if I want functionTwo to be run in either serial or parallel, with one argument for functionTwo itself, how do I make that happen inside runSerialOrParallel?
Any help would be greatly appreciated and please forgive me if there is a profane answer to that question which I could easily have found myself but didn't.
cheers!
(I hope the edit cleared some things up, my bad ..)
|
In Bourne-like shells "$@" (note that the quotes are important!) expands to all the arguments of the script, or function if expanded inside a function, so here:
runSerialOrParallel()
{
if [ "$n_core" -eq 1 ]; then
runSerial "$@"
else
runParallel "$@"
fi
}
Would make runSerialOrParallel invoke runSerial or runParallel with the same arguments it received itself. If the first argument is meant to be a function name and the following ones more arguments to pass to that function, then your runSerial function could be something like:
runSerial() {
printf 'Running "%s" with %u argument%s:\n' "$1" "$(($# - 1))" "${3+s}"
"$@"
}
(note that whether the first argument is a function, external command or builtin makes no difference here).
Or:
runSerial() {
funcName=${1?No function specified}
shift # removes the func name from the arguments
printf 'Running "%s" with %u argument%s:\n' "$funcName" "$#" "${2+s}"
"$funcName" "$@"
}
(the ${2+s} expands to s if the second argument (initially third) is specified to turn "argument" into plural "arguments" when at least two arguments to $funcName are specified).
$ runSerial echo foo bar
Running "echo" with 2 arguments:
foo bar
$ runSerial echo foo
Running "echo" with 1 argument:
foo
$ n_core=1
$ runSerialOrParallel echo foo
Running "echo" with 1 argument:
foo
| call function by name with arguments |
1,588,657,028,000 |
I am trying to create output from my Bash script which includes hostname, SSH protocol, and root login information.
I would like to do it using Bash functions. I have a .sh file but it does not work. Where is the problem in this code?
Server Version: Red Hat 7
The expected output would be:
xyz|hostname|Protocol X|Root Access Denied
I would like to begin the output with "xyz" in order to later parse my output.
#!/bin/bash
host(){
local tmpfile=$(mktemp)
hostname > "$tmpfile"
printf '%s' "$tmpfile"
}
protocol(){
local infile="$1"
cat /etc/ssh/sshd_config | grep Protocol
}
rootlogin(){
local infile="$1"
if [[ $(sudo cat /etc/ssh/sshd_config | grep -i "PermitRootLogin yes" | grep -v "#PermitRootLogin yes") = "PermitRootLogin yes" ]];
echo $host
else
echo "Root Access Denied"
fi
}
}
tmpfile=$( host )
{
host "$tmpfile"
protocol "$tmpfile"
rootlogin "$tmpfile"
} > fonk.out
rm -f "$tmpfile"
|
The poor indentation and layout of your script muddle the question, but the basic answer is
printf '%s|%s|%s|%s\n' "$(field1)" "$(field2)" "$(field3)" "$(field4)"
Refactored into this, and with indentation etc cleaned up, your script becomes
#!/bin/bash
host(){
hostname
}
protocol(){
# avoid useless use of cat; get rid of unused parameter
# ... do you need sudo here too?
grep Protocol /etc/ssh/sshd_config
}
rootlogin(){
# straighten out massive spaghetti pretzel; remove unused parameter
# ... can you avoid sudo cat here?
if sudo cat /etc/ssh/sshd_config |
grep -v "#PermitRootLogin yes" |
grep -i -q "PermitRootLogin yes"
then
# Fix quoting
echo "$host"
else
echo "Root Access Denied"
fi
}
printf '%s|%s|%s|%s\n' "$1" "$(host)" "$(protocol)" "$(rootlogin)" >fonk.out
The last line is somewhat speculative; your current script doesn't seem to print the first field at all, and it's not clear what it's supposed to contain.
This no longer uses a temporary file, but one antipattern in your attempt was to create the temp file in an unrelated function. When you really do need a temp file, it would probably be a good idea to create it separately, then use it as a parameter everywhere. Like
tmp=$(mktemp) || exit
# arrange for temp file to be removed in case of errors or signals, too
trap 'rm -f "$tmp"' EXIT
trap 'exit' ERR HUP INT QUIT TERM
function1 "$tmp"
function2 "$tmp"
: etc
| Using Multiple Function to get an output in a single Line |
1,588,657,028,000 |
unalias removes / disables an alias for the current session; that is, the alias is temporarily disabled. If an alias is wrong, undesired or no longer useful, I simply delete it from .bashrc or .bash_alias and source ~/.bashrc, or close and reopen my terminal.
A usage I have found for unalias was when, after creating an alias in my .bash_aliases, I decided to change the alias to a function.
That is, I have changed alias dothis="action" to dothis () { echo "some text"; action1; action2; }.
But source ~/.bashrc kept on returning a syntax error near unexpected token (' that I couldn't fix, until I figured out that the error was coming from the fact I was using the same name for the original alias and the newly created function (sounds like an obvious error but not so at first sight). The error was gone after I unaliased the original alias: unalias dothis.
Besides this case, in which situations one would need / want to unalias?
|
If an alias is wrong, undesired or no more useful, I simply delete it from .bashrc or .bash_alias and source ~/.bashrc or close and reopen my terminal.
"Why would I want to wash my hands if I can just take a shower"?
Oftentimes that is an impossible or an undesirable action. For instance, suppose you had a bunch of processes running in the background in the current shell which will die if you close it, or imagine you were working on a remote machine, so relaunching the session will require you to re-establish the connection, type in your credentials, and in some cases struggle with two-step authentication.
Also, if you are just "visiting" a system which you don't have a set-up environment in (to troubleshoot someone's problems, for instance), and you are not particularly fond of their idea of making ls into an alias for less, it is so much easier to say unalias ls rather than argue with the user about re-launching the session and editing their configs, or suffering the bindings you don't like.
P.S.
I simply delete it from .bashrc or .bash_alias and source ~/.bashrc
That will not rid you of the existing aliases, unless you do unalias -a first.
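For example, a quick sketch of alias bookkeeping in the current shell (defining and listing aliases works in scripts too, though they only expand there after shopt -s expand_aliases):

```shell
alias ll='ls -l'    # define an alias in the current shell
alias ll            # list it: alias ll='ls -l'
unalias ll          # remove just this one alias
unalias -a          # remove every alias in the current shell
```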
| When and why unalias? |
1,588,657,028,000 |
I'm sure this is fairly elementary, but I can't figure it out.
My script:
#!/bin/bash
sez ()
{
echo $1
spd-say "$1"
}
sez "does this work"
sez "this does work"
What I'm trying to make happen is use spd-say in a function to make the computer talk to me.
The echo portion of my function works. It outputs both lines of text that I feed to it in the expected order. However, the spd-say part doesn't. It only ever says the last line. I'm assuming it's because the second command is "overwriting" the output of the first because it's trying to run them in parallel to the same output. I've tried adding ;wait, &&, and various other things to the end of the sez command, on the next line after, within the function on the spd-say command, etc, but everything I'm trying isn't helping.
What am I doing wrong?
|
I had a similar problem and I found a workaround. Instead of using spd-say I used espeak directly.
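For context, the likely cause is that spd-say hands the text to the speech-dispatcher daemon and returns immediately, so a later message can pre-empt an earlier one. A hedged sketch of the function using espeak, which speaks synchronously (if your spd-say supports a blocking flag such as -w/--wait, that should serialize the messages the same way):

```shell
sez() {
    echo "$1"
    # espeak blocks until it has finished speaking, so consecutive
    # calls play in order instead of cancelling each other
    espeak "$1"
}
```

Called as before, sez "does this work"; sez "this does work" should then speak both lines.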
| Using spd-say in a bash script function |
1,588,657,028,000 |
I'm not so advanced in bash, so I can't make my function work properly.
Here is the code:
archive()
{
for f in $PWD
do
for ((i=1; i++;))
do
7za a "$1".7z $f -pSECRET -mhe
done
done
}
This function should take an arbitrary number of parameters, like
archive foo file1.txt file2.jpg file3.asc ...
Unfortunately I've not figured out how to solve this myself.
And one more thing. It's still hard for me to call one function from inside another, but it would be perfect if someone showed me how to use a dynamic password instead of a constant one:
gpg --gen-random 1 "$1" | perl -ne'
s/[\x00-\x20]/chr(ord($^N)+50)/ge;
s/([\x7E-\xDB])/chr(ord($^N)-93)/ge;
s/([\xDC-\xFF])/chr(ord($^N)-129)/ge;
print $_, "\n"'
Ultimate desired output for command archive foo file1.txt file2.png:
7-Zip (A) [64] 9.20 Copyright (c) 1999-2010 Igor Pavlov 2010-11-18
p7zip Version 9.20 (locale=ru_RU.UTF-8,Utf16=on,HugeFiles=on,8 CPUs)
Scanning
Creating archive .7z
Compressing file1.txt
Compressing file2.png
Everything is Ok
Password for file foo.7z is X;~2\$82uZx@^22nFd}!jrn2]`[GceWx
|
Why are you iterating over $PWD? that is not a list.
To iterate over all arguments to a script or function, use
for ARG in "$@"; do
or the short form
for ARG;
You can use "shift" to save the first parameter to a variable, then use the loop as above to iterate over the rest of the parameters.
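A quick illustration of the two loop forms, with a throwaway show_args function (not from the original answer):

```shell
show_args() {
    for ARG; do                  # short form, identical to: for ARG in "$@"; do
        printf '<%s>\n' "$ARG"
    done
}
show_args one "two words" three  # quoting keeps "two words" as one argument
```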
For the GPG part, you just need to define another function with your GPG code, then call it inside your "archive" function just as if it were a normal shell command:
gen_password () {
gpg --gen-random 1 "$1" | perl -ne'
s/[\x00-\x20]/chr(ord($^N)+50)/ge;
s/([\x7E-\xDB])/chr(ord($^N)-93)/ge;
s/([\xDC-\xFF])/chr(ord($^N)-129)/ge;
print $_, "\n"'
}
archive () {
ARCHIVE_NAME="$1"
PASSWORD=$(gen_password 32)
shift
for ARG; do
7za a "$ARCHIVE_NAME" "$ARG" -p"$PASSWORD" -mhe
done
echo "Created 7z archive with password '$PASSWORD'"
}
| Function for archiving arbitrary files with encryption |
1,588,657,028,000 |
I have this function that I use very often, and it works fine.
Here it is:
cdx () { cd `dirname $1` ; }
However, this does not work with spaces. When I use it like this for example
cdx ~/desktop/folder/file\ file
It returns
usage: dirname path
But what I am passing is, essentially dirname path. So what am I supposed to do to fix this? (It also does the same thing when there are spaces in a folder names)
My first thought was using quotes, like cdx "directory\ whatever", but that did not work either.
|
If you simply write $1, spaces will turn your variable's content into multiple arguments. If you want to preserve spaces you have to quote things.
For your example you probably want:
cdx () { cd -- "$(dirname -- "$1")" ; }
And remember always quote your variables.
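To see what goes wrong without the quotes, try dirname directly on a path containing a space:

```shell
f="some dir/file.txt"
dirname $f || true   # unquoted: word-splits into "some" and "dir/file.txt"
                     # (GNU dirname prints a line per operand; BSD errors out)
dirname -- "$f"      # quoted: prints "some dir", the space preserved
```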
| Why doesn't my function work with spaces? (cd, dirname) [duplicate] |
1,588,657,028,000 |
I am writing a bash function that takes a number of strings, with
each string to be printed on a separate line.
I thought I had to compose the frmt variable for printf appropriately,
depending on the number of strings being passed to the function.
print ()
{
case $1 in
h|-h|-\?|--help)
printf "Prints a text string.\n"
printf "\$1 TEXT Sentence to print.\n"
local -r f=0
;;
*)
local -r f=1
;;
esac
local -r frmt="%s\n"
printf $frmt "$@"
}
I had the idea of repeating a format string as many times as there are function arguments using printf, but was not successful:
local -r frmt=$(printf '%s\n' $(seq $#))
|
No need to build a format string according to the number of arguments.
To quote the link by Mr. Thompson:
The format argument is reused as necessary to convert all the given arguments. For example, the command ‘printf %s a b’ outputs ‘ab’.
Write test code.
When working with things like this, break it down as much as you can, and build up from there. For example in your case:
TESTING
#! /bin/bash -
test1() {
printf 'test1\n'
printf 'ARG: "%s"\n' "$@"
}
test2() {
printf 'test2\n'
shift
printf 'ARG: "%s"\n' "$@"
}
test3() {
printf 'test3\n'
printf '2-ARGS: "%s" "%s"\n' "$@"
}
test1 first a b "c d e"
printf '\n'
test2 first a b "c d e"
printf '\n'
test3 a b c d e
RESULT
test1
ARG: "first"
ARG: "a"
ARG: "b"
ARG: "c d e"
test2
ARG: "a"
ARG: "b"
ARG: "c d e"
test3
2-ARGS: "a" "b"
2-ARGS: "c" "d"
2-ARGS: "e" ""
This should give an idea.
The printf '%s\n' $(seq $#)
local -r frmt=$(printf '%s\n' $(seq $#))
This would result in, for example:
printf '%s\n' $(seq 3)
1
2
3
As always: break down what you test to the smallest component, here printf '%s\n' $(seq $#) (or really seq $#), and test it out in the shell.
| printf format depending on number of parameters |
1,588,657,028,000 |
So for example adding an ending to the command would display the ending:
function work* () {
echo "$1";
}
export -f work*
$ working
ing
|
Perhaps something like the following would do it for you:
function work() {
echo "${1#work}"
}
function err_work() {
[ "${1#work}" != "$1" ] && work $*
}
trap "err_work \$BASH_COMMAND" ERR
The err_work function is then invoked on all command errors, to discover that the failing command starts with "work", and thereby invoke the work function without ending instead, with the initial command line as arguments.
Of course, bash will complain about the initial command before invoking the trap, so that will look a bit ugly; perhaps there is a way to turn that off, though I haven't found one.
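In bash specifically there is a cleaner hook that avoids the complaint entirely: if a function named command_not_found_handle is defined, bash calls it instead of printing the error. A hedged, bash-only sketch (not part of the original answer):

```shell
work() {
    echo "${1#work}"             # strip the "work" prefix from the command name
}
command_not_found_handle() {
    case $1 in
        work?*) work "$@"; return 0 ;;   # workFOO -> work invoked with ending FOO
    esac
    echo "bash: $1: command not found" >&2
    return 127
}
```

With this loaded, typing working prints ing with no error message first.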
| Is there a way to use a function with an ending? |
1,478,106,400,000 |
I've got the following situation:
I'm writing a script that will read its parameters either from a config file (if it exists and the parameter is present) or ask the user to input said parameter if it's not present.
Since I'm doing this for a handful of parameters I thought writing a function would be the way to go.
However, as far as I understand it, a function returns its result value either by echoing it or by assigning it to a global variable. I do want to echo to the screen in the function though, so it'll have to be option two. So I tried this:
# parameters: $1=name of parameter, $2=user prompt, $3=value read from config.cfg
function getParameter {
# If the value couldn't be read from the config file
if [ -z "$3" ]; then
# Echo the prompt
echo "$2"
# Read the user input
read parameter
# If it's empty, fail
if [ -z "$parameter" ]; then
echo "Parameter $1 not found"
exit
# Else, try to assign it to $3 <---- This is where it fails
else
$3="$parameter"
fi
fi
}
I call it like this:
getParameter "Database username" "Please enter database username" $database_username
The config.cfg file is sourced before the function is called and $database_username is one of the optional parameters there.
Now this obviously doesn't work. I can't assign to $3 and since I want the method to be generic, I can't do MY_VARIABLE=$parameter either.
Does anyone have any suggestions how I can achieve all of the below:
Get variable value from either config.cfg or read it from the user input
Do this in a generic fashion, i.e. don't repeat the above code (without a function) for each parameter
|
Alright, looks like I solved my own problem:
function getParameter {
if [ -z "$3" ]; then
    # Use read -p to show a prompt rather than using echo for this
read -p "$2`echo $'\n> '`" parameter
# Print to sdterr and return 1 to indicate failure
if [ -z "$parameter" ]; then
>&2 echo "Parameter $1 not found"
return 1
else
echo $parameter
fi
else
echo $3
fi
}
By using read -p I was able to show a prompt on the console and still be able to return a string from the function by using a regular echo. That way, by calling the function with database_username=$(getParameter ...) I can assign the result to a variable.
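The trick worth spelling out is that $( ) captures stdout only; stderr, which is where read -p writes its prompt and where the error message above goes, still reaches the terminal. A stripped-down sketch with a hypothetical getval:

```shell
getval() {
    >&2 echo "diagnostic: computing a value..."   # stderr: shown, not captured
    echo "the-value"                              # stdout: this becomes the result
}
v=$(getval)                   # the diagnostic still prints to the terminal
echo "captured: $v"           # prints: captured: the-value
```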
| Bash function assign value to passed parameter |
1,478,106,400,000 |
Using a vim function, I would like to check if a program is running using pgrep, and if it is not running then do something. In particular, I want to achieve something like this:
function! CheckIfRunning(mystring)
if 'pgrep "mystring"' shows that something is NOT running
--do something--
endif
endfunction
My guess is that I need to use the 'system()' function, but I'm not sure how. Can somebody help?
EDIT: I would like a solution that uses pgrep in particular, and not some other way
|
function! CheckIfRunning(mystring)
if !system('pgrep "' . a:mystring . '"')
" --do something--
endif
endfunction
Technically ! operates on Numbers, and converts a String to a Number first if given a String. However, if there's no process running, the output of pgrep will be empty, which when converted to a Number is 0. If there are process running, conversion to a Number would give non-zero.
Instead of 'pgrep "' . a:mystring . '"', you could also do 'pgrep ' . shellescape(a:mystring).
| VIM: function that checks if external program is running |
1,478,106,400,000 |
I am trying to write a bash script with a function which you use to send an email from the command line to an address and include a Cc address, a subject line, and an input file. For example, if the function is called "m," the typed command would look like:
m [email protected] [email protected] SubjectLine TextFile.txt
Below is what I have done so far in vi. I am sure I am declaring the variables wrong, and probably much more besides. I am very new to this.
m()
{
mail -s="$1" -t="$2" -s="$3" #I am still unclear about positional parameters
}
|
#!/bin/bash
m() {
to_addr="$1"
cc_addr="$2"
subject="$3"
body="$4"
mail -s "$subject" -c "$cc_addr" "$to_addr" < "$body"
}
if [[ "$#" -eq 4 ]]; then
m "$1" "$2" "$3" "$4"
else
echo "Incorrect number of parameters. Aborting."
echo "Example syntax: $0 [email protected] [email protected] \"Message Subject\" /path/to/messagebody.txt"
exit 1
fi
| Script to send mail using function |
1,478,106,400,000 |
I've tried several modifications to see why it's not working, but I can't find the answer.
Here is my code. It is in French, but it is just a normal function that asks the user if they are ready to start.
#!/bin/ksh
function start
{
echo "Vous etes sur le point de lancer la generation, etes-vous pret(e)? [OUI/NON]"
read touche
case $touche in
[Oo] | [Oo][Uu][Ii] )
echo "Demarage du bash..."
;;
[Nn] | [Nn][Oo][Nn] )
echo "Annulation du bash..."
exit
;;
esac
}
start
This is what I get :
sh start.sh
: unfindable command
»art.sh: line 3: syntax error close to the « symbol
'tart.sh: line 3: `function start
|
My file was saved in DOS Format, I used dos2unix start.sh to convert it.
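The mangled error output is itself the clue: the carriage return makes the terminal overwrite the start of the line, which is why start.sh shows up as »art.sh and 'tart.sh. If dos2unix is not at hand, a hedged sketch for detecting and stripping the CRLF endings (writes two throwaway files in the current directory):

```shell
printf 'echo hi\r\n' > crlf_demo.sh        # fabricate a one-line DOS-style script
od -c crlf_demo.sh                         # shows the literal \r \n at the end of the line
tr -d '\r' < crlf_demo.sh > unix_demo.sh   # strip carriage returns (what dos2unix does)
sh unix_demo.sh                            # now runs cleanly and prints: hi
```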
| A basic function that doesn't work [duplicate] |
1,478,106,400,000 |
I'm making a .bashrc function to get me around the system faster. I used a case statement to state where I want to go.
function da() {
case "$1" in
home) cd ~
;;
eolas) cd /home/eolas/
;;
esac
}
I want to import the cases from a JSON file, example:
{
"cases": {
"home": "~",
"eolas": "/home/eolas/",
"jdan": "/home/jdan/",
"kl": "/.kl/"
}
}
I have searched for a way to do this but so far haven't found one. Is this possible in bash? And if so, how?
|
Not exactly elegant, but seems to work:
in='
{
"cases": {
"home": "~",
"eolas": "/home/eolas/",
"jdan": "/home/jdan/",
"kl": "/.kl/"
}
}
'
case="$(echo "$in" |
perl -pe 's/"cases". \{/case "\$1" in/;
s/: /) eval cd /;
s/,/;;/;
s/[{}]//g')"
eval "
function da() {
$case
esac
}
"
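If jq happens to be installed, an alternative sketch (assuming a hypothetical cases.json file in the current directory, matching the JSON in the question) looks the key up at call time instead of generating a case statement; the extra line expands a leading ~, which jq would return literally:

```shell
da() {
    local target
    target=$(jq -r --arg k "$1" '.cases[$k] // empty' cases.json)
    if [ -z "$target" ]; then
        echo "da: unknown place: $1" >&2
        return 1
    fi
    case $target in "~"*) target=$HOME${target#"~"} ;; esac   # expand a leading ~
    cd "$target"
}
```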
| Bashrc function case statement to import cases from JSON |
1,478,106,400,000 |
I want to detect online network/shell services in my Solaris.
I write following script for this purpose:
compare_ser()
{
if [ "$1" != "" ]; then
echo "True" >> Solaris.txt
fi
}
export -f compare_ser
svcs network/shell | cut -d ' ' -f1 | grep "online" | xargs -n1 bash -c 'compare_ser $@'
When I run svcs network/shell | cut -d ' ' -f1 | grep "online" | xargs -n1 echo in a terminal, I get the following output:
online
online
but my script don't show anything.
What is the problem with it?
|
Use this:
svcs network/shell | cut -d' ' -f1 | grep "online" | xargs -n1 -I{} bash -c 'compare_ser {}'
The {} interpolates each value generated through xargs. Your $@ attempts to interpolate command line arguments - of which there are none.
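A minimal reproduction of the difference, runnable anywhere (online stands in for the svcs output):

```shell
# -I{} substitutes each input line into the command string:
printf '%s\n' online online | xargs -n1 -I{} sh -c 'echo "got: {}"'

# without -I, xargs appends the value as an extra argument, which
# sh -c receives as $0 rather than $1 -- so "$@" is empty:
printf '%s\n' online | xargs -n1 sh -c 'echo "args: $@"'
```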
| Problem with entire function in a script |
1,478,106,400,000 |
I currently have two bash functions, one which uploads, and another that downloads a file. I would like to create a bash script that allows users to specify which of the two they would like to do.
The issue I am having is that the upload and download functions run no matter what. For example:
function upload() {
var=$1
#something goes here for upload
}
function download() {
var=$1
#something here for download
}
main() {
case "$1" in
-d) download "$2";;
-u) upload "$2";;
*) "Either -d or -x needs to be selected"
esac
}
I cannot get only main() to run, suppressing download and upload until needed.
|
You need to call the main function too, and pass the script's command line arguments to it:
#!/bin/sh
upload() {
echo "upload called with arg $1"
}
download() {
echo "download called with arg $1"
}
main() {
case "$1" in
-d) download "$2";;
-u) upload "$2";;
*) echo "Either -d or -u needs to be selected"; exit 1;;
esac
}
main "$@"
No need for the ksh-style function foo declarations here, use foo() instead, as it's standard and more widely supported.
| How to create a main bash script that allows users to input which of 2 functions they want to run? |
1,478,106,400,000 |
Can somebody explain to me why, in the code examples I look through, the return value of a bash function is always echoed in the function and then consumed by my_var=$(my_func arg1 arg2 ..)?
my_func ()
{
echo "$1 , $2, $3"
}
my_var=$(my_func .. .. ..);
instead of using this, which would not open a subshell
declare g_RV #-- global return value for all functions
myfunc ()
{
g_RV="$1 , $2, $3"
}
myfunc .. .. ..; my_var=$g_RV;
because I use the second way, and wonder if it would fail in some cases where the first method works. There must be a reason, because everyone opens a subshell.
EDIT: in response to Kusalananda's and Paul_Pedant's comments
I added a recursive function call with g_RV: the LAF function lists all files of directory $1, or does so recursively if $2 is an INT > 0,
and then function calls inside a function to other functions (look at the sum functions!):
declare g_RV
shopt -s dotglob nullglob
#-- Recursive call with g_RV
#---------------------------
#-- call: LAF [directory STR] [recursive INT (>0 .. true, 0 .. false)] ... List all files in a folder or inclusive all files in subfolders (recursive
LAF ()
{
local file files="" dir_path=$1
if [[ ${dir_path:0:1} != "/" ]]; then dir_path=$(readlink -f "$dir_path"); fi
for file in "$dir_path/"*; do
if [[ -d $file ]]; then (($2)) && { LAF "$file" "$2"; files+=$g_RV; } #-- recursive call
else files+="${file}"$'\n'; fi
done
g_RV=$files
}
LAF "/tmp" 1; my_var1=$g_RV; echo "$my_var1";
#-- function calling other functions with g_RV
#---------------------------------------------
sum_A ()
{
g_RV=$(($1+$2))
}
sum_B()
{
g_RV=$(($1+$2))
}
sum_C ()
{
local -i sum=0;
sum_A 5 6; sum+=$g_RV
sum_B 12 18; sum+=$g_RV
g_RV=$sum
}
sum_C; my_var2=$g_RV; echo "sum: $my_var2";
|
Standard tools and utilities return information via stdout, qualifying the result with the exit status (0=ok, otherwise error). Your functions should do the same so that they can be used in the same consistent manner.
Your example shows a single global variable for all function returns. As soon as you take this approach you cannot interpolate functions in pipelines or even in strings without increasing complexity and reducing readability.
Consider
f() { sed 's/^/> /'; }
g() { nl; }
echo try | f | g # " 1 > try"
If you have a global variable for each function's return (or worse, the same global variable for both) you would need to jump through hoops to achieve the same effect.
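One concrete hoop, sketched here rather than taken from the answer: every element of a pipeline runs in a subshell, so an assignment to a global made inside a pipeline never reaches the caller:

```shell
g_RV=""
setit() { g_RV="value"; }
setit | cat           # the pipe forces setit into a subshell...
echo "g_RV='$g_RV'"   # ...so the assignment is lost: prints g_RV=''
```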
You're running a shell, and by the time you consider all the overhead the number of subprocesses is - or should be - largely irrelevant. If you need speed don't write in shell script.
| return value of a function |
1,478,106,400,000 |
I found this function online. It creates a directory and changes into it.
But I want to know every part of it.
function mkdircd () { mkdir -p "$@" && eval cd "\"\$$#\""; }
|
You can pass in a list of names. It will create directories for each of them, then cd into the last one.
This does not need eval. I would write it like this:
mkdircd () { mkdir -p "$@" && cd "${!#}"; }
${!#} uses indirect expansion: $# is the number of parameters, so ${!#} is the value of the last parameter
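A quick bash-only demonstration of that indirect expansion (last_arg is a throwaway name):

```shell
last_arg() {
    # $# is the parameter count, so ${!#} means "the value of
    # the parameter whose number is $#", i.e. the last one
    printf '%s\n' "${!#}"
}
last_arg one two three   # prints: three
```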
| Please explain below bash function |
1,478,106,400,000 |
As https://stackoverflow.com/a/13864829/ said,
$ if [ -z ${aaa+x} ]; then echo "aaa is unset"; else echo "aaa is set"; fi
aaa is unset
can test if a variable aaa is set or unset.
How can I wrap the checking into a function? In bash, the following nested parameter expansion doesn't work:
$ function f() { if [ -z ${$1+x} ]; then echo "$1 is unset"; else echo "$1 is set"; fi };
$ f aaa
bash: ${$1+x}: bad substitution
Thanks.
|
The -v test in bash will be true if the named variable has been set.
if [ -v aaa ]; then
echo 'The variable aaa has been set'
fi
From help test in bash:
-v VAR True if the shell variable VAR is set.
As a function:
testset () {
if [ -v "$1" ]; then
printf '%s is set\n' "$1"
else
printf '%s is not set\n' "$1"
fi
}
As a script for sourcing:
if [ -v "$1" ]; then
printf '%s is set\n' "$1"
else
printf '%s is not set\n' "$1"
fi
Using this last script:
source ./settest variablename
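The useful property of -v, like the ${aaa+x} test in the question, is that it distinguishes unset from set-but-empty. A self-contained demo (bash 4.2 or later; the function is repeated so the snippet stands alone):

```shell
testset() {
    if [ -v "$1" ]; then echo "set"; else echo "unset"; fi
}

unset aaa
testset aaa    # prints: unset
aaa=""
testset aaa    # prints: set (empty, but set)
```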
| How can I wrap this checking of variable set/unset into a function? |