I came across a bash_profile file that uses `export -f` in the following manner:

```sh
# Run xelatex and BibTeX
function xelatex_bibtex {
  xelatex -shell-escape "${1}.tex" \
    && bibtex "${1}" \
    && xelatex -shell-escape "${1}.tex" \
    && xelatex -shell-escape "${1}.tex"
}
export -f xelatex_bibtex
```

However, functions defined without `export -f` seem to work perfectly fine:

```sh
# Search for synonyms
function syn {
  wn "${1}" -synsn
}
```

What is the role of `export -f`, and what is considered good practice when creating convenience functions in bash_profile with respect to using `export`?
Its role is exactly analogous to that of `export` for variables, i.e. to export the definition to inherited environments. So:

```sh
$ foo() { echo bar; }
$ foo
bar
```

Start a child shell:

```sh
$ bash
```

Now:

```
$ foo
Command 'foo' not found, did you mean:
  command 'roo' from snap roo (2.0.3)
  command 'fio' from deb fio
  command 'fgo' from deb fgo
  command 'fog' from deb ruby-fog
  command 'woo' from deb python-woo
  command 'fox' from deb objcryst-fox
  command 'goo' from deb goo
  command 'fop' from deb fop
See 'snap info <snapname>' for additional versions.
```

whereas the same child shell after exporting the function:

```sh
$ export -f foo
$ bash
$ foo
bar
```
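The same effect can be checked non-interactively. This is a small sketch; the function name `greet` is made up for illustration:

```sh
#!/bin/bash
# Define a function in the current shell.
greet() { echo "hello from an exported function"; }

# Without export -f, a child bash does not see the definition.
bash -c 'greet' 2>/dev/null || echo "child shell: greet not found"

# After exporting, the definition travels through the environment.
export -f greet
bash -c 'greet'
```

The second `bash -c 'greet'` succeeds because exported function definitions are passed to child bash processes via the environment.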
Role of export -f statement when creating functions in bash_profile
In my script, I found that some operations on an array are reusable, so I am considering refactoring the reusable code into a function or a script. However, how can I write a function or script such that I can provide an array as a positional parameter to it, or achieve something similar? Thanks.
Using a name reference in a recent (>= 4.3) version of bash:

```sh
foo () {
    local param1=$1
    local -n arr=$2
    printf 'array: %s\n' "${arr[@]}"
}

myarray=( some items go here )
foo something myarray
```

The name of the array variable is passed as the second parameter to the function. The function declares a name reference variable that receives this name. Any access to that name reference variable will access the variable whose name was passed to the function. This obviously works with more than one array.

Note that in the example above, you cannot pass a variable named `arr` to the function, so some care has to be taken to avoid name clashes (ksh93, which also supports name references, doesn't have this issue due to different scoping).

Note that this approach does not work when calling another shell script. When calling another shell script, the array has to be passed on the command line of that other script, which means it has to be passed as a set of strings. To pass a single array this way is relatively easy, and Hauke Laging shows the basics of how to do this in his answer. If you have full control over the contents of the arrays, you may be able to encode their data as single strings, possibly by delimiting their elements using some delimiter, and then parsing these strings in the target script to re-form the arrays.

Another possibility would be to adopt a JSON "interface" between your scripts, meaning you would encode the data as JSON, pass it on the script's standard input (or similar), and decode the document using jq. This does feel rather cumbersome, though, and it would add a massive overhead.
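Passing a single array to a separate program as a set of strings, as described above, can be sketched like this; the `consumer` function stands in for a separate script and is made up for illustration:

```sh
#!/bin/bash
# Each element of the array becomes one positional parameter of the
# callee when expanded as "${myarray[@]}".
myarray=( "first item" "second item" "third" )

# Stand-in for a separate script: it just counts and prints its arguments.
consumer() {
    printf 'got %d arguments\n' "$#"
    printf '  %s\n' "$@"
}

consumer "${myarray[@]}"
```

Elements containing spaces survive intact because the quoted `"${myarray[@]}"` expansion keeps each element as a separate word.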
How can I provide an array as a positional parameter to a function or script?
`declare -f` shows function definitions in both bash and zsh:

```sh
$ declare -f
VCS_INFO_adjust () {
	# undefined
	builtin autoload -XUz
}
VCS_INFO_bydir_detect () {
	# undefined
	builtin autoload -XUz
}
VCS_INFO_check_com () {
	setopt localoptions NO_shwordsplit
	case $1 in
	(/*) [[ -x $1 ]] && return 0 ;;
	(*) (( ${+commands[$1]} )) && return 0 ;;
	esac
	return 1
}
....
```

`declare -F` shows function names in bash, but not in zsh:

```sh
blueray@blueray-PC:~$ declare -F
declare -f __expand_tilde_by_ref
declare -f __get_cword_at_cursor_by_ref
declare -f __git_eread
declare -f __git_ps1
declare -f __git_ps1_colorize_gitstring
declare -f __git_ps1_show_upstream
declare -f __grub_dir
declare -f __grub_get_last_option
....
```

What might be the reason behind it?
In zsh, `declare -F` declares a double-precision floating point variable:

```sh
$ declare -F myvar
$ echo $myvar
0.0000000000
```

To list all function names in zsh, use `typeset -f +`.

In zsh, the `$functions` special associative array maps function names to their definitions, so `${(k)functions}`, which expands to the keys of that associative array, will also expand to the list of function names.

Bash and zsh are different shells; you can't expect them to behave in exactly the same way.
`declare -F` does not work in zsh
I have aliased pushd in my bash shell as follows, so that it suppresses output:

```sh
alias pushd='pushd "$@" > /dev/null'
```

This works fine most of the time, but I'm running into trouble now using it inside functions that take arguments. For example:

```sh
test() {
    pushd .
    ...
}
```

Running `test` without arguments is fine. But with arguments:

```
> test x y z
bash: pushd: too many arguments
```

I take it that pushd is trying to take `. x y z` as arguments instead of just `.`. How can I prevent this? Is there a "local" equivalent of `$@` that would only see `.` and not `x y z`?
Aliases define a way to replace a shell token with some string before the shell even tries to parse the code. An alias is not a programming structure like a function.

With:

```sh
alias pushd='pushd "$@" > /dev/null'
```

and then:

```sh
pushd .
```

what's going on is that `pushd` is replaced with `pushd "$@" > /dev/null` and then the result is parsed. So the shell ends up parsing:

```sh
pushd "$@" > /dev/null .
```

Redirections can appear anywhere on the command line, so it's exactly the same as:

```sh
pushd "$@" . > /dev/null
```

or:

```sh
> /dev/null pushd "$@" .
```

When you're running that from the prompt, `"$@"` is the list of arguments your shell received, so unless you ran `set arg1 arg2`, that will likely be empty, and it will be the same as:

```sh
pushd . > /dev/null
```

But within a function, that `"$@"` will be the arguments of the function.

Here, you either want to define pushd as a function:

```sh
pushd() { command pushd "$@" > /dev/null; }
```

Or an alias like:

```sh
alias pushd='> /dev/null pushd'
```

or:

```sh
alias pushd='pushd > /dev/null'
```
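A quick sketch of why the function form is safe inside other functions; the `visit` function and the stray arguments are made up for illustration:

```sh
#!/bin/bash
# A function's "$@" refers to the function's own arguments, so the
# wrapper never sees the caller's positional parameters.
pushd() { command pushd "$@" > /dev/null; }
popd()  { command popd  "$@" > /dev/null; }

visit() {
    pushd /tmp     # only /tmp reaches pushd; visit's arguments are ignored
    pwd
    popd
}

visit x y z        # the extra arguments no longer leak into pushd
```

Unlike the alias, the wrapper expands `"$@"` at run time with its own arguments, so `visit x y z` prints only `/tmp`.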
$@ in alias inside script: is there a "local" $@?
I have a script using a pretty long pipe quite a lot of times. The middle of each pipe is the same command chain; only the beginning and the end differ each time it is used:

```sh
Different-command-1 | command A | command B | command C | diff-cmd-2
```

Is there a way to call these commands as a function within the pipe? Like:

```sh
same-commands() {
  command A | command B | command C
}

Different-command-1 | same-commands | diff-cmd-2
Different-command-3 | same-commands | diff-cmd-4
```

This in my case would save quite a lot of lines in my script, but I cannot quite figure out how this could work.
The commands in a function run with the same stdin and stdout as the function itself, so we can just put a pipeline in a function, and then stick the function in another pipeline, as if it were any other command:

```sh
func() {
  tr a x | tr b x
}

echo abc | func | tr c x
```

This prints `xxx`.
Calling a function within a pipe
Example:

```sh
$ echo "This is great"
This is great
$ num2=2
$ num3="Three"
$ echo $num2
2
$ echo $num3
Three
```

Redefining echo:

```sh
$ echo(){ command echo "The command was redefined"; }
$ echo $num2
The command was redefined
$ echo $?
The command was redefined
```

So is this true? Are all commands in the Unix shell functions, and can we manipulate them like that?
No, not all Unix commands are shell functions, but they may be overridden by a shell function. Shell function names are not restricted by the names of built-in or external utilities. Just as you may have several external utilities with the same name under different paths, you may also have a shell function or an alias with that same name.

If there is a function called foo and an external or built-in utility called foo as well, then the function foo will be called.

To force the shell to pick the built-in utility, use `builtin foo`. To avoid using the function foo, use `command foo`. To guarantee that an external utility is used instead of a shell function, alias or built-in utility, use the full path to it, e.g. `/bin/echo`.

Aliases and functions occupy the same namespace, so you can't have an alias and a function with the same name.

In short: Unix commands may be aliases, shell functions, utilities built into the shell, or external utilities (compiled binaries or scripts written in any scripting language). Some "commands" are not commands but keywords, such as `for` and `if` etc. These may also be overridden by aliases and shell functions for extra exciting shell experiences (i.e. don't do that).
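The lookup precedence described above can be observed directly. A small sketch, reusing the `echo` redefinition from the question:

```sh
#!/bin/bash
# Shadow the echo builtin with a function.
echo() { command echo "The command was redefined"; }

echo hi           # resolves to the function
builtin echo hi   # forces the builtin, bypassing the function
command echo hi   # skips functions and aliases (builtin or external)
/bin/echo hi      # forces the external utility by full path
```

`type -a echo` would list the function ahead of the builtin, matching the resolution order the answer describes.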
Are functions equivalent to built in commands in the bash / shell scripting language?
Directory structure:

```
one.pdf
./subdir/two.pdf
./anothersubdir/three.pdf
```

When I type:

```sh
find ./ -type f -name "*.pdf"
```

it retrieves all the PDFs, including those in subdirectories.

Bash function:

```sh
function getext { find ./ -type f -name "$1"; }
```

With this function in bashrc, typing:

```sh
getext *.pdf
```

only retrieves "one.pdf" but not the rest.

Question: what happens here with the function? What is it missing, compared to typing the command directly, that makes it get only the first file and stop? Thank you for your help.
For the same reason that you quote `"*.pdf"` in the arguments to find inside your function, you need to quote it when you call the function:

```sh
getext "*.pdf"
```

Otherwise, the shell will attempt to match `*.pdf` against filenames in the current directory, resulting in it being expanded - in this case to `one.pdf` - before being passed to your function.
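The expansion can be made visible without `find` at all. A minimal sketch using a throwaway directory; the `show` helper and the filenames are made up for illustration:

```sh
#!/bin/bash
# Print the first argument a function actually receives.
show() { printf 'first argument: %s\n' "$1"; }

dir=$(mktemp -d)
cd "$dir" || exit 1
touch one.pdf
mkdir subdir && touch subdir/two.pdf

show *.pdf     # unquoted: the shell expands the glob; the function sees "one.pdf"
show "*.pdf"   # quoted: the function sees the literal pattern
```

The unquoted call never gives the function a pattern to hand to `find`; it only gets whatever the glob matched in the current directory.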
Bash function does not work recursively [duplicate]
I know `!!` re-runs commands, but what exactly would occur if I re-ran a command that had a variable in it?
Well, let's try it:

```sh
$ foo=bar
$ echo $foo
bar
$ foo=qux
$ !-2
echo $foo
qux
$ history
...
  219  foo=bar
  220  echo $foo
  221  foo=qux
  222  echo $foo
  223  history
```

So it appears that the command is added to history before variable expansion occurs.
Why must you be careful when using Bash's built in command history function to re-run previous commands that contain variables?
I am working on a highly portable script that users are expected to source into their shells, forcing me to use POSIX scripting. There are many useful functions in the script; one of them is special, though, as it returns a true or false status to the calling function. Now, I used to use `return 0` in cases like this, but it appears that the much more readable `true` (or `false`, for `return 1`) works too. The question is whether these work the very same way or if there is a difference. Thanks.
`true` is portable, but doesn't by itself return. `return true` is not portable, and also doesn't work reliably.

If you have a function like so:

```sh
f() {
	true
}
```

then yes, it will portably return from the function with an exit status of zero. When falling off the end, the exit status of the last command is returned as the exit status of the function.

But of course this doesn't return with a truthy status, as a `true` in the middle by itself doesn't affect the control flow:

```sh
g() {
	# silly example condition
	if [ 1 = 1 ]; then
		true
	fi
	echo something else
	false
}
```

But you could add an explicit `return`, as `return` without arguments also uses the exit status of the last command:

> The value of the special parameter '?' shall be set to n, an unsigned decimal integer, or to the exit status of the last command executed if n is not specified. If n is not an unsigned decimal integer, or is greater than 255, the results are unspecified.

```sh
h() {
	# silly example condition
	if [ 1 = 1 ]; then
		true
		return
	fi
	echo something else
	false
}
```

Now, if you were to try `return true`, it might appear to do what you wanted in some shells, but in the end it probably doesn't. At least in zsh, it appears to take the argument as an arithmetic expression, so `return true` would use the value of the variable `true`, or zero by default if it's not set:

```sh
% i() { return 1+1; }
% i; echo $?
2
% unset true
% j() { return true; }
% j; echo $?
0
% true=123
% j; echo $?
123
```

If you can arrange for `true` to be always set to 0, and `false` to be set to 1, and trust the users of your script library not to mess with those, then you could use `return true` and `return false` in zsh, or `return "$true"` and `return "$false"` in a standard shell. I wouldn't recommend that with those names, but maybe something like `true_return_value` or `false_return_value` could be used.

Otherwise, you could use `true; return` and `false; return` in a standard shell, but I'm not sure if that's any more readable than `return 0` and `return 1`.
The fact that the truthy return value is zero should be familiar to shell programmers in any case.
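A small sketch of the portable `true; return` / `false; return` pattern discussed above; the function name `is_nonempty` is made up for illustration:

```sh
#!/bin/sh
# Succeeds (status 0) if the argument is non-empty, fails (status 1)
# otherwise, using only POSIX constructs.
is_nonempty() {
    if [ -n "$1" ]; then
        true
        return    # return without an argument reuses the last exit status
    fi
    false
}

if is_nonempty "hello"; then echo yes; fi
if ! is_nonempty "";    then echo no;  fi
```

Callers test the function directly in `if`, never comparing against a literal 0 or 1, which is what makes the zero-is-true convention painless in practice.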
Does `return 0` equal `true` (in sourced script to shell's environment)?
The behaviour of my shell environment changed. Earlier, when pasting a function definition, e.g.

```sh
function exampleFunc {
  echo hello
}
```

to the shell, it would display as formatted and register the new function definition. Now, for some reason, it displays with `>` before each line other than the first:

```
function exampleFunc {
>echo hello
>}
```

I've found that functions containing for loops now fail to be registered. What might be an explanation for this? How might I revert back to the previous mode?

Ubuntu 20.04. This change occurred after installing nushell, but that is maybe unrelated.
This has nothing to do with your installing nushell. It also does not stop the shell from functioning correctly.

The `>` is the default value of the shell's secondary prompt (`PS2`). The secondary prompt is displayed whenever the shell requires further input after the user has pressed the Enter key without completing the current command. This happens only when the shell is in interactive mode.

The POSIX standard says this about PS2:

> Each time the user enters a <newline> prior to completing a command line in an interactive shell, the value of this variable shall be subjected to parameter expansion and written to standard error. The default value is "> ".

In your specific example, the function definition is the command that still needs to be completed. It's not until the user enters the closing curly brace, `}`, at the end of the definition that the shell can execute the command.

You will also get the secondary prompt when pasting commands into the bash shell if "bracketed paste" has been disabled for the Readline library. By default, bracketed paste mode is enabled, meaning the shell will process a pasted chunk of text in one go rather than as individual lines. This behaviour may be disabled (for future shell sessions) by adding the following line to your ~/.inputrc file:

```
set enable-bracketed-paste off
```

Bracketed paste mode is also disabled by default if the terminal is "dumb" or if you are using a release of the bash shell older than 5.0.

Different shells may have a different default value in $PS2. The zsh shell, for example, lets you know what command is currently incomplete by dynamically updating the prompt:

```
$ function foo {
function> for arg do
function for> print -r $arg
function for> done
function> }
$ foo 1 2 3
1
2
3
```

... while the bash shell uses a static `>` string:

```
$ function foo {
> for arg do
> printf '%s\n' "$arg"
> done
> }
$ foo a b c
a
b
c
```

The nushell shell seems to use `:::` as its equivalent to the secondary prompt in POSIX-like shells.
However, the nushell shell does not even attempt to be a POSIX shell. Unsetting the PS2 variable in bash would potentially lead to confusion. For example, if you think you have just invoked a long-running command but have in fact forgotten a closing quotation mark, you would not have any indication that the shell was waiting for you to complete the command.
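The "shell is waiting for more input" state can be reproduced non-interactively. A sketch, assuming bash's usual exit status of 2 for syntax errors; where an interactive shell would show `PS2` and wait, a non-interactive one hits end-of-file and reports an error:

```sh
#!/bin/bash
# Pipe an unterminated function definition into a non-interactive bash.
printf 'function exampleFunc {\n  echo hello\n' | bash 2>/dev/null
echo "exit status: $?"
```

With the closing `}` missing, the child shell cannot complete the command and exits with a syntax error instead of prompting.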
> symbol appearing when interactively defining function in bash
I have defined a test alias as:

```sh
alias testalias='python3 -c "f = open(\"/tmp/testopenfile\", \"w+\"); f.write(\"hi\n\")"'
```

It works fine when I run it directly in the terminal; I can `cat /tmp/testopenfile` normally.

I have also defined a helper function to background a program and mute its output and errors. I thought of using this function with some aliases that take a long time or that run in a while loop (not this one, as it's just an example). It is defined like this:

```
detach is a function
detach () {
    $1 > /dev/null 2> /dev/null &
}
```

I am using bash, and I tried to combine those two things. When I tried `detach testalias`, it doesn't seem to work (/tmp/testopenfile doesn't seem to be created). It looks like testalias is passed directly and not evaluated. What is the hack for making this evaluate before passing?

Also, this code creates the file:

```sh
python3 -c "f = open(\"/tmp/testopenfile\", \"w+\"); f.write(\"hi\n\")" 1>/dev/null 2>/dev/null &
```
Replace the alias with a function:

```sh
testalias() {
    python3 -c 'f = open("/tmp/testopenfile", "w+"); f.write("hi\n")'
}

detach () {
    "$@" > /dev/null 2> /dev/null &
}
```

Bash only looks at the first word of a command for alias expansion, so the function gets the literal argument `testalias`. (I think zsh has "global" aliases which would be expanded anywhere on the command line, but I doubt you'd want e.g. `echo testalias` to expand the alias contents.)

Alias expansion also happens early in the parsing process, way before `$1` is expanded, so when the function runs, `$1` expands to just the same `testalias`, and stays like that. It would probably give you an error about not finding the command testalias, except that stderr was redirected to /dev/null, so you don't see the error.

In fact, an alias in a function is expanded when the function is parsed, not when it's used:

```sh
$ alias foo="echo abc"
$ f() { foo; }
$ alias foo="echo def"
$ g() { foo; }
$ f
abc
$ g
def
```

With testalias a function, it's found when the `testalias` command is looked up.
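The `"$@"` detail in the `detach` rewrite matters on its own: it keeps multi-word commands intact. A small sketch; the helper names are made up for illustration:

```sh
#!/bin/bash
run_first_word() { $1; }     # expands only its first argument as the command
run_all()        { "$@"; }   # runs all of its arguments as one command

run_all printf '%s-%s\n' a b       # each argument stays a separate word
run_first_word 'echo hello world'  # relies on word-splitting one string
```

`run_all` is the robust form: arguments with spaces or quoting survive, whereas the `$1` form forces you to cram the whole command into one word-split string.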
How to evaluate bash alias before being passed to bash function?
I'm running an Abaqus job on the Ubuntu command line using two files (file1.inp and file2.f) as follows:

```sh
abaqus job=file1 user=file2.f
```

Since I'm doing this a lot with different files, I wanted to make it easier:

```sh
myfunc file1 file2.f
```

where myfunc is a bash function that takes the file names and runs the abaqus command `abaqus job=file1 user=file2.f`. I'd appreciate any assistance in figuring it out.
```sh
myfunc () {
    abaqus job="$1" user="$2"
}
```

This calls abaqus with arguments constructed from the two arguments given to the function.

With a bit of error checking (making sure that the correct number of arguments is passed):

```sh
myfunc () {
    if [ "$#" -ne 2 ]; then
        printf '%s: Expecting 2 arguments, got %s\n' "${FUNCNAME[0]}" "$#" >&2
        return 1
    fi

    abaqus job="$1" user="$2"
}
```

You could even give your function the name `abaqus`, but then you would have to make sure that you call the actual abaqus command with `command abaqus job="$1" user="$2"` inside the function so that you don't get infinite recursion.
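The argument-check pattern above can be exercised without Abaqus installed. A sketch that substitutes `printf` for the real command (the printed line is just a stand-in, not real abaqus output):

```sh
#!/bin/bash
myfunc () {
    if [ "$#" -ne 2 ]; then
        printf '%s: Expecting 2 arguments, got %s\n' "${FUNCNAME[0]}" "$#" >&2
        return 1
    fi
    # Stand-in for: abaqus job="$1" user="$2"
    printf 'abaqus job=%s user=%s\n' "$1" "$2"
}

myfunc file1 file2.f                       # ok
myfunc file1 || echo "usage error caught"  # wrong arity: returns 1
```

Because the error goes to stderr and the failure is signalled via the return status, the function composes cleanly with `||` and `if`.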
Run a bash function that takes two files names as variables from command line
I have a script which writes some content into a file with cat and EOF. While this works within a bash script, it doesn't work if I put it inside a function.

Working code:

```sh
cat << EOF | sudo tee /etc/network/interfaces
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet dhcp
EOF
```

Its syntax highlighting seems fine.

Not working code:

```sh
function someFunctions {
    cat << EOF | sudo tee /etc/network/interfaces
    auto lo
    iface lo inet loopback
    auto eth0
    iface eth0 inet dhcp
    EOF
}

someFunctions
```

This one's syntax highlighting doesn't seem fine; my editor (Atom) shows everything as green, which means it loses its syntax highlighting, but I couldn't find what's wrong. How can I fix this?
The EOF here-doc end marker must either be at the beginning of the line, or be indented with TAB characters only:

```sh
someFunctions() {
	sudo tee /etc/network/interfaces <<-'EOF'
		auto lo
		iface lo inet loopback
		auto eth0
		iface eth0 inet dhcp
	EOF
}
```

I've removed the `function` keyword, as it's non-standard, and the `cat`, as it adds no value. I've also used `<<-'EOF'` (instead of `<<EOF`) so that leading TAB characters are stripped and the content of the here-doc is not evaluated for variables and other substitutions. If you want variable substitution, don't quote the EOF, and use `<<-EOF` instead.
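The quoting rule mentioned above ('EOF' vs EOF) can be checked without sudo. A minimal sketch writing to a temporary file instead of /etc/network/interfaces:

```sh
#!/bin/bash
tmp=$(mktemp)
name=world

# Quoted delimiter: the body is taken literally, no expansions.
cat <<'EOF' > "$tmp"
literal: $name
EOF

# Unquoted delimiter: $name is expanded inside the body.
cat <<EOF >> "$tmp"
expanded: $name
EOF

cat "$tmp"
```

The first line of the file keeps the literal `$name`, while the second contains its expanded value, which is exactly the difference between `<<-'EOF'` and `<<-EOF` in the answer.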
sudo redirection doesn't work when in a function
I stored the following script in a file and created an alias to that file in the user's bashrc, then sourced that bashrc:

```sh
#!/bin/bash

domain="$1" && test -z "$domain" && exit 2

environment() {
    read -sp "Please enter the app DB root password:" dbrootp_1 && echo
    read -sp "Please enter the app DB root password again:" dbrootp_2 && echo
    if [ "$dbrootp_1" != "$dbrootp_2" ]; then echo "Values unmatched. Please try again." && exit 2 fi

    read -sp "Please enter the app DB user password:" dbuserp_1 && echo
    read -sp "Please enter the app DB user password again:" dbuserp_2 && echo
    if [ "$dbuserp_1" != "$dbuserp_2" ]; then echo "Values unmatched. Please try again." && exit 2 fi
}

environment
```

When I run the `domain` alias, I get this output:

```
+ /user/nwsm/nwsm.sh x
/user/nwsm/nwsm.sh: line 12: syntax error near unexpected token `}'
/user/nwsm/nwsm.sh: line 12: `}'
```

Why is there a syntax error? I didn't recognize a syntax error. Do you? Maybe there is another problem. I don't see any alien characters in Nano (nor in Visual Studio Code).
I seem to have missed 2 semicolons (`;`) before each `fi` (the closure of the if statement). These are the correct if-then statements:

```sh
if [ "$dbrootp_1" != "$dbrootp_2" ]; then echo "Values unmatched. Please try again." && exit 2; fi

if [ "$dbuserp_1" != "$dbuserp_2" ]; then echo "Values unmatched. Please try again." && exit 2; fi
```

Note the semicolon before each `fi`, near the end of each line. If the Bash error on stderr had been something like "Expected a semicolon in lines 6 and 10", I might not have published this question and answer. It seems writing a Bash if-then statement is slightly more verbose than, say, in JavaScript.
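The semicolon is only needed because the whole statement sits on one line; a newline serves as the same separator. A minimal sketch with made-up values:

```sh
#!/bin/bash
a=one
b=two

# One-line form: a ';' must separate the command from 'fi'.
if [ "$a" != "$b" ]; then echo "Values unmatched."; fi

# Multi-line form: the newline before 'fi' does the same job, no ';' needed.
if [ "$a" != "$b" ]; then
    echo "Values unmatched."
fi
```

Both forms print the same message; the syntax error in the question arose only because `exit 2` and `fi` were jammed onto one line with nothing separating them.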
Syntax error near unexpected token `}' in a Bash function with an if-then statement [closed]
Tracking down strange behavior in a bash script resulted in the following MWE:

```sh
set -o errexit
set -o nounset
set -x

my_eval() {
    eval "$1"
}

my_eval "declare -A ASSOC"
ASSOC[foo]=bar

echo success
```

This fails with:

```
line 9: foo: unbound variable
```

Yet it works if `eval` is used in place of `my_eval` (and, obviously, if the declare is done directly, without any indirection). Why does eval-ing a declare statement in a function not work the same as doing it outside of a function?

I'm using GNU bash, version 4.3.46(1)-release (x86_64-pc-linux-gnu), from the popular Ubuntu distribution of Linux.
A glance at the man page tells us:

> The -g option forces variables to be created or modified at the global scope, even when declare is executed in a shell function.

Thus, if your script said:

```sh
my_eval "declare -gA ASSOC"
```

it (and you) would be happier. The point is that the declare statement takes its scope from where it is executed/evaluated, not from where it is written.
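A runnable sketch of the difference; the variable names are made up for illustration:

```sh
#!/bin/bash
my_eval() { eval "$1"; }

my_eval "declare -A LOCAL_ONLY"    # local to my_eval: gone after it returns
my_eval "declare -gA GLOBAL_ONE"   # -g: created at the global scope

GLOBAL_ONE[foo]=bar
echo "${GLOBAL_ONE[foo]}"

# Without -g, the variable never existed out here:
declare -p LOCAL_ONLY 2>/dev/null || echo "LOCAL_ONLY is not declared"
```

Inside the function, `declare` behaves like `local`, which is why the question's `ASSOC` vanished as soon as `my_eval` returned.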
why doesn't eval declare in a function work in bash?
I need to set up a function in zsh that would edit a different file based on some input at the command line. I want to simplify my aliases so that I don't have multiple aliases doing the same thing with a slight variation. I am specifically trying to set up a function to open, in my editor, different versions of the php.ini file (for PHP 5.4, 5.5, 5.6 and 7.0).

Right now, I have defined the following aliases:

```sh
alias editphpini54="subl /usr/local/etc/php/5.4/php.ini"
alias editphpini55="subl /usr/local/etc/php/5.5/php.ini"
alias editphpini56="subl /usr/local/etc/php/5.6/php.ini"
alias editphpini70="subl /usr/local/etc/php/7.0/php.ini"
```

What I would like to do is set up a function called editphpini that takes a version number as input and intelligently opens the right file. This could then be future-proof as well, as long as the basic path remains the same.

So, I want to be able to type:

```sh
editphpini 54
```

and have the function parse that command and load the php.ini file located in /usr/local/etc/php/5.4/php.ini. Using the above example, I would then be able to substitute 55, 56 or 70 at the end of the command to issue the relevant variant.

In my thoughts, the function would take the XX version number and insert it into the command `subl /usr/local/etc/php/X.X/php.ini`, using the XX to define the version number.

Honestly, I have tried multiple things and nothing seems to work, so rather than just list all my failed attempts, I'd appreciate it if someone could point me in the right direction.

Thank you, Ali
```sh
function editphpini() {
  local version=$( echo $1 | sed 's/^\(.\)/\1./' )
  subl /usr/local/etc/php/${version}/php.ini
}
```

Usage:

```
% editphpini 54
```
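The sed step is the whole trick: it inserts a dot after the first character, turning 54 into 5.4. It can be checked in isolation; the `to_version` helper name is made up for illustration:

```sh
#!/bin/sh
to_version() {
    # Insert a "." after the first character: 54 -> 5.4, 70 -> 7.0
    echo "$1" | sed 's/^\(.\)/\1./'
}

to_version 54
to_version 70
```

Note this assumes single-digit major versions; an input like `81` maps to `8.1`, but `10.1`-style versions would need a different rule.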
ZSH function to edit a file based on an input at the cli
I want to retry a command 5 times with an interval of 20 seconds, and I want the command to be passed as a function parameter. How do I do that? And once the function is written, how do I pass the value to the function? I want my current code to be converted to a function which takes a set of parameters. How do I write and call this function in my shell script?

My current code is like this:

```sh
trialNumber=0
until [ $trialNumber -ge 5 ]
do
    # This line is my command; it may vary in its number of parameters,
    # or be a different command entirely.
    ssh $USERID@$HOST $SCRIPT_LOCATION/runme.sh
    [ $? -eq 0 ] && break
    trialNumber=$[$trialNumber+1]
    sleep 20
done
```

(The above code is embedded in many places; I want to move it into a function.)
```sh
retry() {
    trialNumber=$1 delay=$2; shift 2
    while [ "$trialNumber" -gt 0 ]; do
        "$@" && return
        ret=$?
        sleep "$delay"
        trialNumber=$(($trialNumber - 1))
    done
    return "$ret"
}

retry 5 20 ssh "$USERID@$HOST" "$SCRIPT_LOCATION/runme.sh"
```

Though the last sleep in case of failure is not necessary. It may be better as:

```sh
retry() {
    trialNumber=$1 delay=$2; shift 2
    until "$@"; do
        ret=$?
        trialNumber=$(($trialNumber - 1))
        [ "$trialNumber" -gt 0 ] || return "$ret"
        sleep "$delay"
    done
}
```
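A quick sketch exercising the second `retry` variant above with a command that fails twice and then succeeds; the `flaky` function is a made-up test fixture, and the delay is set to 0 so the example runs instantly:

```sh
#!/bin/bash
retry() {
    trialNumber=$1 delay=$2; shift 2
    until "$@"; do
        ret=$?
        trialNumber=$(($trialNumber - 1))
        [ "$trialNumber" -gt 0 ] || return "$ret"
        sleep "$delay"
    done
}

attempts=0
flaky() {
    attempts=$((attempts + 1))
    [ "$attempts" -ge 3 ]   # fails on attempts 1 and 2, succeeds on 3
}

retry 5 0 flaky && echo "succeeded after $attempts attempts"
```

Because `flaky` is run via `"$@"` in the current shell (no subshell), its counter updates are visible to the caller, which is exactly what lets `retry` wrap arbitrary commands with their arguments.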
Converting a loop of code to function
I'm attempting to write a function that writes to arrays whose name is passed in. Given the following bash function:

```sh
function writeToArray {
    local name="$1"
    echo "$name"
    declare -a "$name"
    ${name[0]}="does this work?"
}
```

Running it like this:

```sh
writeToArray $("test")
```

I get two errors:

```
bash: declare: `': not a valid identifier
=does this work?: command not found
```

I am expecting to be able to do this:

```sh
writeToArray $("test")
for item in "${test[@]}"; do
    echo "item"
    echo "$item"
done
```

This should print:

```
item
does this work?
```

How could I properly configure this to write the array (named test in the example), such that this array named test is readable outside the function?
You'd use a nameref for that:

```sh
writeToArray() {
    local -n writeToArray_name="$1"
    writeToArray_name[0]="does this work?"
}
```

Testing:

```sh
bash-5.0$ test[123]=qwe
bash-5.0$ writeToArray test
bash-5.0$ typeset -p test
declare -a test=([0]="does this work?" [123]="qwe")
```

With older versions of bash which didn't have namerefs yet, you could use eval:

```sh
writeToArray() {
    eval "$1[0]='does this work?'"
}
```

When writeToArray is invoked with `test` as argument, eval is passed `test[0]='does this work?'` as its argument, and that in turn is evaluated as code in the shell language, where it assigns `does this work?` to the element of index 0 of the test array (this also works for associative arrays; scalar variables are converted to arrays).

Note that `$("test")` is syntax to capture and expand the output of the `test` command, and split+glob it when in list context. `test` (aka `[`) produces no output when not passed any arguments, so `$("test")` expands to the empty string, and split+glob gives you nothing at all out of it.

Here, it's the name of your variable that you want to pass to writeToArray, so `test`, not its contents (`"$test"`) nor the output of a command by the same name (`"$(test)"`), let alone the output of a command by the same name subject to split+glob as with your `$("test")` attempt.
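Both variants can be exercised side by side. A small sketch, with made-up array names:

```sh
#!/bin/bash
# Nameref variant (bash >= 4.3).
writeToArray() {
    local -n writeToArray_name="$1"
    writeToArray_name[0]="does this work?"
}

# eval variant (works in older bash too).
writeToArrayEval() {
    eval "$1[0]='does this work?'"
}

writeToArray one
writeToArrayEval two
echo "${one[0]}"
echo "${two[0]}"
```

In both calls the array *name* is passed as a plain word, which is the key point the answer makes about `$("test")`.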
Bash create parameter named array within function
I have this simple script, which does nothing more than:

- check if an email matches a specific pattern
- in that case, add a tag to a taglist
- before quitting, print that taglist

```sh
set -e

lista_tag=()
in_file="/tmp/grepmail-classify.txt"

# save stdin to file, to use it multiple times
cp /dev/stdin $in_file

# CLASSIFY
res=$(grepmail -B "some regex pattern" < $in_file)
if [ ! -z "$res" ]
then
    lista_tag+=("PUSH")
fi

res=$(grepmail -B "some other regex pattern" < $in_file)
if [ ! -z "$res" ]
then
    lista_tag+=("MERGIFY")
fi

# ⁝ Many many more similar patterns

# output them comma separated
echo ${lista_tag[*]}
```

As you can see, there is a case for refactoring and abstraction: the `res=` and `if .. fi` parts are repeated. But I am not sure how to safely pass commands around. What I think I would like to do is to invoke a function like this (or similar):

```sh
classify '"grepmail -B "somepattern"' 'MYTAG'
```

But it is tricky! I have read the FAQ but I am not sure it covers my case. So here is the question: what is the correct way to pass commands around (if there is any)? How would the `res=` part of such a function look?
> classify '"grepmail -B "somepattern"' 'MYTAG'

This is hard to get to work, for exactly the reasons mentioned in BashFAQ 050. But we can make it work if we put the "tag" argument first, since that allows us to use the rest for the command:

```sh
#!/bin/bash
lista_tag=()

classify() {
    local tag="$1"
    shift
    res=$( "$@" < "$in_file")
    if [ -n "$res" ]; then
        lista_tag+=("$tag")
    fi
}

classify PUSH grepmail -B "some regex pattern"
classify MERGIFY grepmail -B "some other regex pattern"
```

The key here is that we don't stick the arguments of the command into one string, but keep them separate. `"$@"` is magic: it expands to all the positional parameters separately. After shifting out the tag, the rest are your command.

You can't stick the redirection there in the same way, though, as that would require running the command through eval and quoting it appropriately. Which you'd also need to do carefully for any user-provided input, since otherwise you'd have a high chance of leaving a command execution vulnerability there.

Anyway, since the `grepmail -B` part seems constant, just pass the tag and the pattern:

```sh
#!/bin/bash
lista_tag=()
in_file=foo.txt

classify() {
    local pattern="$1"
    local tag="$2"
    if [[ -n "$(grepmail -B "$pattern" < "$in_file")" ]]; then
        lista_tag+=("$tag")
    fi
}

classify "some regex pattern" PUSH
classify "some other regex pattern" MERGIFY
```
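The same tag-first pattern can be exercised without grepmail, substituting plain `grep` over a throwaway file; the file contents and tags here are made up for illustration:

```sh
#!/bin/bash
lista_tag=()
in_file=$(mktemp)
printf 'subject: please merge this\n' > "$in_file"

classify() {
    local tag="$1"
    shift
    # "$@" is the command, word by word; no eval needed.
    if [ -n "$( "$@" < "$in_file" )" ]; then
        lista_tag+=("$tag")
    fi
}

classify MERGIFY grep -i "merge"
classify PUSH    grep -i "push"     # no match: tag not added

echo "${lista_tag[*]}"
```

Only the matching pattern contributes its tag, and the command plus its options travel through `"$@"` without ever being flattened into a single string.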
How to pass commands around?
As pointed out in this question, the prototype for the ioctl function inside a Linux kernel module is (version 1):

```c
int ioctl(struct inode *i, struct file *f, unsigned int cmd, unsigned long arg);
```

or (version 2):

```c
long ioctl(struct file *f, unsigned int cmd, unsigned long arg);
```

I would like to use them in a kernel module which implements a character device driver.

Are both the above prototypes suitable in this case? If yes, why? If no, how do I choose the right one?

What header/source file(s) contain these prototypes? In other words: what is the official reference file for these prototypes?

I'm running Ubuntu 20.04 on x86_64, and these are my available header files:

```
/usr/include/asm-generic/ioctl.h
/usr/include/linux/ioctl.h
/usr/include/linux/mmc/ioctl.h
/usr/include/linux/hdlc/ioctl.h
/usr/include/x86_64-linux-gnu/sys/ioctl.h
/usr/include/x86_64-linux-gnu/asm/ioctl.h
```

The only significant line is in /usr/include/x86_64-linux-gnu/sys/ioctl.h:

```c
extern int ioctl (int __fd, unsigned long int __request, ...) __THROW;
```

but I can't find here any clue about the above two alternative prototypes.
> Are both the above prototypes suitable in this case? If yes, why? If no, how to choose the right one?

They are not both suitable. Only version 2 is currently available in the kernel, so this is the version that should be used.

> What header/source file(s) contain these prototypes? In other words: what is the official reference file for these prototypes?

They are in include/linux/fs.h (a path relative to the kernel source code root directory), inside the struct file_operations definition:

```c
long (*unlocked_ioctl) (struct file *, unsigned int, unsigned long);
```

That is: the member unlocked_ioctl must be a pointer to a function

```c
long ioctl(struct file *f, unsigned int cmd, unsigned long arg);
```

which is exactly version 2. If a function my_ioctl() is defined inside a kernel module using version 1 instead, a compiler error will be generated:

```
error: initialization of ‘long int (*)(struct file *, unsigned int, long unsigned int)’ from incompatible pointer type ‘long int (*)(struct inode *, struct file *, unsigned int, long unsigned int)’ [-Werror=incompatible-pointer-types]
  .unlocked_ioctl = my_ioctl
                    ^~~~~~~~
```

Some additional comments:

Version 1 was the only one until kernel 2.6.10, where struct file_operations only had:

```c
int (*ioctl) (struct inode *, struct file *, unsigned int, unsigned long);
```

This ioctl function, however, took the Big Kernel Lock (BKL): it locked the whole kernel during its operation. This is undesirable. So, from 2.6.11:

```c
int (*ioctl) (struct inode *, struct file *, unsigned int, unsigned long);
long (*unlocked_ioctl) (struct file *, unsigned int, unsigned long);
```

A new way to use ioctls was introduced which did not lock the kernel. Here the old ioctl with the kernel lock and the new unlocked_ioctl coexist. From 2.6.36, the old ioctl has been removed, and all drivers should be updated accordingly, to only use unlocked_ioctl. Refer to this answer for more information.
In a recent kernel release (5.15.2), it seems that there are still a few files using the old ioctl:

linux-5.15.2$ grep -r "ioctl(struct inode" *
Documentation/cdrom/cdrom-standard.rst:        int cdrom_ioctl(struct inode *ip, struct file *fp,
drivers/staging/vme/devices/vme_user.c:static int vme_user_ioctl(struct inode *inode, struct file *file,
drivers/scsi/dpti.h:static int adpt_ioctl(struct inode *inode, struct file *file, uint cmd, ulong arg);
drivers/scsi/dpt_i2o.c:static int adpt_ioctl(struct inode *inode, struct file *file, uint cmd, ulong arg)
fs/fuse/ioctl.c:static int fuse_priv_ioctl(struct inode *inode, struct fuse_file *ff,
fs/btrfs/ioctl.c:static noinline int search_ioctl(struct inode *inode,
fs/ocfs2/refcounttree.h:int ocfs2_reflink_ioctl(struct inode *inode,
fs/ocfs2/refcounttree.c:int ocfs2_reflink_ioctl(struct inode *inode,
net/sunrpc/cache.c:static int cache_ioctl(struct inode *ino, struct file *filp,

vme_user.c, dpt_i2o.c and cache.c, however, have:

static const struct file_operations adpt_fops = {
    .unlocked_ioctl = adpt_unlocked_ioctl,

and then

static long adpt_unlocked_ioctl(struct file *file, uint cmd, ulong arg)
{
    struct inode *inode;
    long ret;

    inode = file_inode(file);
    mutex_lock(&adpt_mutex);
    ret = adpt_ioctl(inode, file, cmd, arg);

So they use the old version inside the new one (getting the inode from the available data, as suggested by Andy Dalton in the comments). As regards the files inside fs: they seem not to use a struct file_operations; also, their functions are not the ioctl defined in

int (*ioctl) (struct inode *, struct file *, unsigned int, unsigned long);

because they take different parameters (fuse_priv_ioctl in fs/fuse/ioctl.c, search_ioctl in fs/btrfs/ioctl.c, ocfs2_reflink_ioctl in fs/ocfs2/refcounttree.c), so they may only be used internally in the driver. So, the assumption in the linked question that two versions are available for the ioctl function inside a Linux kernel module is wrong.
Only unlocked_ioctl (version 2) must be used.
Two different function prototypes for Linux kernel module ioctl
1,478,106,400,000
I'm new to bash. I'm confused about the way functions work in this language. I have written this code:

#!/usr/bin/env sh
choice_func() {
    echo "$choice"
}
echo "Enter your choice:"
read choice
choice_func

While investigating my code, I realized that I had forgotten to pass the value of choice as input when calling choice_func(). But it works properly! How is it possible that the function has not been given the input but can still echo it?
You read a value into the variable choice in the main part of the script. This variable has global scope (for want of a better word), which means that it will be visible inside the function too. Note that if you were to read the value inside the function, then the variable would still have global scope and be visible outside the function (after the function call). This would be the case unless you declared it as local with local choice inside the function. For more information about scoping of shell variables, see e.g. What scopes can shell variables have? To pass the value in the variable choice to the function, use

choice_func "$choice"

The function can then access this value in $1, its first positional parameter:

choice_func () {
    echo "$1"
}

or

choice_func () {
    local choice="$1"
    # choice is now local and separate from the variable
    # with the same name in the main script.
    echo "$choice"
}

This is the proper way to pass a value to a function without relying on global variables in a shell script.
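The scoping rules described above can be checked with a short self-contained snippet (all names here are invented for the demo): a variable set in the main part of the script is visible inside a function even when no argument is passed, while a local variable vanishes when the function returns.

```shell
#!/usr/bin/env bash
choice="global value"          # set in the main part: global scope

choice_func () {
    echo "inside: $choice"     # sees the global even with no argument
}

with_local () {
    local hidden="function only"
    echo "inside: $hidden"
}

choice_func                          # -> inside: global value
with_local                           # -> inside: function only
echo "outside: ${hidden-<unset>}"    # -> outside: <unset>
```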
Function can echo the value that has not received as input
1,478,106,400,000
I made a function in bash and when I call it, it crashes with an unbound variable error. I don't understand cause the variables that are said to be unbound are declared. Moreover, it seems to be triggered randomly like some times it crashes on line 66, some times it crashes on line 76 and some other times it crashes on line 86. Here is the function: #!/usr/bin/env bash function setConfigLS() { declare DFLT_CFG_FILE="${WEB_DOCUMENT_ROOT}/application/config/config.php" declare DFLT_ARRAY='config' declare cfgFile="$DFLT_CFG_FILE" declare array="$DFLT_ARRAY" declare value key arg declare -a args=() while (( $# > 0 )); do arg="$1" && shift case "$arg" in --file=*) cfgFile="${arg#*=}" ;; -f|--file) cfgFile="$1" shift ;; --value=*) value="${arg#*=}" ;; -v|--value) value="$1" shift ;; --key=*) key="${arg#*=}" ;; -k|--key) key="$1" shift ;; --array=*) array="${arg#*=}" ;; -a|--array) array="$1" shift ;; -h|--help) echo >&2 'Set a LimeSurvey configuration option.' echo >&2 '' echo >&2 'Usage:' echo >&2 ' setConfigLS [options...] <KEY> <VALUE>' echo >&2 ' setConfigLS [options...] --value=<VALUE> --key=<KEY>' echo >&2 '' echo >&2 'Options:' echo >&2 ' --file, -f <CONFIG_FILE> LimeSurvey configuration file.' echo >&2 " Default: ${DFLT_CFG_FILE}" echo >&2 ' --array, -a <ARRAY> Name of array containing the configuration.' echo >&2 " Default: ${DFLT_ARRAY}" echo >&2 ' --key, --k <KEY> Key of the configuration option to set. (required)' echo >&2 ' --value, -v <VALUE> Value of the configuration option. (required)' echo >&2 ' --help, -h Prints this message.' 
echo >&2 '' return 0 ;; *) args+=( "$arg" ) ;; esac done if [ -z "$key" ]; then # line 66: key: unbound variable if (( ${#args} > 0 )); then key="${args[0]}" args=( "${args[@]:1}" ) else echo 'Error: `--key` is required' >&2 return 1 fi fi if [ -z "$value" ]; then # line 76: value: unbound variable if (( ${#args} > 0 )); then value="${args[0]}" args=( "${args[@]:1}" ) else echo 'Error: `--value` is required' >&2 return 1 fi fi if (( ${#args} > 0 )); then # line 86: args: unbound variable echo 'Error: too many arguments' >&2 return 1 fi array="${array//\//\\\/}" value="${value//$'\n'/\\$'\n'}" ssed -Ri "$cfgFile" \ -e 's~^(\s*)('"${array}"'\s*=>\s*array\s*\()((?:\([^)]*\)|[^)])+)~\1\2\n\1 \3\n\1~' ssed -Ri "$cfgFile" \ -e '/^\s*'"${array}"'\s*=>\s*array\s*\([^)]*$/ { :a n s~^((?:\s*(?:[^,/\s]|/[^/]))+)(\s*//.*)?$~\1,\2~ s~^(\s*)//\s*('"${key//~/\\~}"'\s*=>)~\1\2~ /^\s*\)/ { i \ '"${key}"'=>'"${value}"', bq } /^\s*'"${key//\//\\\/}"'\s*=>/ { s~>.*~>'"${value//~/\\~}"',~ bq } ba :q }' } I tried replacing declare value key arg to... declare value= declare key= declare arg= ...but it didn't change anthing. I'm a little bit confused! Did I miss something? Is there something I'm not seeing? Edit 1 The function is called from an entrypoint script of a docker image based on ubuntu 18.04. In fact, I use this image. The function's file is copied to /opt/docker/functions/set-config-ls.sh. 
Here is the script from which the function is called: #!/usr/bin/env bash set -eu declare FUNC_DIR='/opt/docker/functions' declare APP_DIR="${WEB_DOCUMENT_ROOT}" declare DB_SETUP_PHP="/opt/docker/db_setup.php" source "${FUNC_DIR}/tty-loggers.sh" source "${FUNC_DIR}/yes-no.sh" source "${FUNC_DIR}/file-env.sh" source "${FUNC_DIR}/set-config-ls.sh" source "${FUNC_DIR}/env-list-vars.sh" #################################################################### ########################## Setup Variables ######################### fileEnv 'LIMESURVEY_DB_TYPE' 'mysql' fileEnv 'LIMESURVEY_DB_HOST' 'mysql' fileEnv 'LIMESURVEY_DB_PORT' '3306' fileEnv 'LIMESURVEY_TABLE_PREFIX' '' fileEnv 'LIMESURVEY_ADMIN_NAME' 'Lime Administrator' fileEnv 'LIMESURVEY_ADMIN_EMAIL' '[email protected]' fileEnv 'LIMESURVEY_ADMIN_USER' '' fileEnv 'LIMESURVEY_ADMIN_PASSWORD' '' fileEnv 'LIMESURVEY_DEBUG' '0' fileEnv 'LIMESURVEY_SQL_DEBUG' '0' fileEnv 'MYSQL_SSL_CA' '' fileEnv 'LIMESURVEY_USE_INNODB' '' # if we're linked to MySQL and thus have credentials already, let's use them fileEnv 'LIMESURVEY_DB_NAME' "${MYSQL_ENV_MYSQL_DATABASE:-limesurvey}" fileEnv 'LIMESURVEY_DB_USER' "${MYSQL_ENV_MYSQL_USER:-root}" if [ "${LIMESURVEY_DB_USER}" = 'root' ]; then fileEnv 'LIMESURVEY_DB_PASSWORD' "${MYSQL_ENV_MYSQL_ROOT_PASSWORD:-}" else fileEnv 'LIMESURVEY_DB_PASSWORD' "${MYSQL_ENV_MYSQL_PASSWORD:-}" fi if [ -z "${LIMESURVEY_DB_PASSWORD}" ]; then logError 'error: missing required LIMESURVEY_DB_PASSWORD environment variable' >&2 logError ' Did you forget to -e LIMESURVEY_DB_PASSWORD=... ?' 
>&2 logError '' >&2 logError ' (Also of interest might be LIMESURVEY_DB_USER and LIMESURVEY_DB_NAME.)' >&2 exit 1 fi declare -A CONNECTION_STRINGS=( [mysql]="mysql:host=${LIMESURVEY_DB_HOST};port=${LIMESURVEY_DB_PORT};dbname=${LIMESURVEY_DB_NAME};" [dblib]="dblib:host=${LIMESURVEY_DB_HOST};dbname=${LIMESURVEY_DB_NAME}" [pgsql]="pgsql:host=${LIMESURVEY_DB_HOST};port=${LIMESURVEY_DB_PORT};user=${LIMESURVEY_DB_USER};password=${LIMESURVEY_DB_PASSWORD};dbname=${LIMESURVEY_DB_NAME};" [sqlsrv]="sqlsrv:Server=${LIMESURVEY_DB_HOST};Database=${LIMESURVEY_DB_NAME}" ) if [ -z "${CONNECTION_STRINGS[${LIMESURVEY_DB_TYPE}]}" ]; then logError "error: invalid database type: ${LIMESURVEY_DB_TYPE}" >&2 logError " LIMESURVEY_DB_TYPE must be either \"mysql\", \"dblib\", \"pgsql\" or \"sqlsrv\"." >&2 exit 1 fi #################################################################### ######################## Download LimeSurvey ####################### if [ ! -f "${APP_DIR}/.RELEASE_${LIMESURVEY_GIT_RELEASE}" ] || isYes "${LIMESURVEY_FORCE_FETCH}"; then find "$APP_DIR" -maxdepth 1 -type f -name '.RELEASE_*' -delete logInfo "Retrieving LimeSurvey... (this operation may take a while)" >&2 wget -O "/tmp/lime.tar.gz" \ --progress="$( [ -t 1 ] && echo 'bar:noscroll' || echo 'dot:mega' )" \ "https://github.com/LimeSurvey/LimeSurvey/archive/${LIMESURVEY_GIT_RELEASE}.tar.gz" logInfo "Extracting files from archive..." 
>&2 tar -xzf "/tmp/lime.tar.gz" \ --strip-components=1 \ --keep-newer-files \ --exclude-vcs \ --to-command='sh -c '\'' mkdir -p "$(dirname "'"${APP_DIR}"'/$TAR_FILENAME")" && touch "'"${APP_DIR}"'/$TAR_FILENAME" && dd of="'"${APP_DIR}"'/$TAR_FILENAME" >/dev/null 2>&1 && echo "'"${APP_DIR}"'/$TAR_FILENAME" '\' | xargs -I '{}' touch -t 195001010000 '{}' chown -R "${APPLICATION_USER}:${APPLICATION_GROUP}" "$APP_DIR" rm "/tmp/lime.tar.gz" touch ".RELEASE_${LIMESURVEY_GIT_RELEASE}" fi #################################################################### ######################### LimeSurvey Setup ######################### # Install BaltimoreCyberTrustRoot.crt.pem if [ ! -f "${APP_DIR}/BaltimoreCyberTrustRoot.crt.pem" ]; then logInfo "Downloading BaltimoreCyberTrustroot.crt.pem..." curl -fsSLo "${APP_DIR}/BaltimoreCyberTrustRoot.crt.pem" \ "https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem" fi if [ ! -f "${APP_DIR}/application/config/config.php" ]; then logWarn "No config file for LimeSurvey" logWarn " Copying default config file..." 
# Copy default config file but also allow for the addition of attributes echo " 'attributes' => array()," | awk '/lime_/ && c == 0 { c = 1; system("cat") } { print }' \ "${APP_DIR}/application/config/config-sample-${LIMESURVEY_DB_TYPE}.php" \ > "${APP_DIR}/application/config/config.php" fi # Set LimeSurvey configs setConfigLS -a 'db' -k 'connectionString' "'${CONNECTION_STRINGS[${LIMESURVEY_DB_TYPE}]}'" setConfigLS -a 'db' -k 'tablePrefix' "'${LIMESURVEY_TABLE_PREFIX}'" setConfigLS -a 'db' -k 'username' "'${LIMESURVEY_DB_USER}'" setConfigLS -a 'db' -k 'password' "'${LIMESURVEY_DB_PASSWORD}'" setConfigLS -a 'urlManager' -k 'urlFormat' "'path'" setConfigLS -k 'debug' "${LIMESURVEY_DEBUG}" setConfigLS -k 'debugsql' "${LIMESURVEY_SQL_DEBUG}" if [ -n "${MYSQL_SSL_CA}" ]; then setConfigLS -a 'db' 'attributes' \ "array(PDO::MYSQL_ATTR_SSL_CA => '${APP_DIR//\//\\\/}\/${MYSQL_SSL_CA}', PDO::MYSQL_ATTR_SSL_VERIFY_SERVER_CERT => false)" fi declare cfg key val for ENV_VAR in $(envListVars "limesurvey\."); do val="$(envGetValue "$ENV_VAR")" cfg="${ENV_VAR#limesurvey.}" cfg="${cfg%%.*}" key="${ENV_VAR#limesurvey.*.}" setConfigLS -a "$cfg" "$key" "$val" done mkdir -p "${APP_DIR}/upload/surveys" chown -R "${APPLICATION_USER}:${APPLICATION_GROUP}" \ "${APP_DIR}/tmp" "${APP_DIR}/upload" "${APP_DIR}/application/config" #################################################################### #################### LimeSurvey Database Setup ##################### if [ -n "${LIMESURVEY_USE_INNODB}" ]; then # If you want to use INNODB - remove MyISAM specification from LimeSurvey code sed -i "/ENGINE=MyISAM/s/\(ENGINE=MyISAM \)//1" \ "${APP_DIR}/application/core/db/MysqlSchema.php" fi logInfo "Waiting for database..." >&2 while ! 
curl -sL "${LIMESURVEY_DB_HOST}:${LIMESURVEY_DB_PORT:-3306}"; do sleep 1; done DBSTATUS=$(TERM=dumb php -f "$DB_SETUP_PHP" -- \ "${LIMESURVEY_DB_HOST}" "${LIMESURVEY_DB_USER}" "${LIMESURVEY_DB_PASSWORD}" \ "${LIMESURVEY_DB_NAME}" "${LIMESURVEY_TABLE_PREFIX}" "${MYSQL_SSL_CA}" \ "${APP_DIR}") &>/dev/null if [ "${DBSTATUS}" != "DBEXISTS" ] && [ -n "${LIMESURVEY_ADMIN_USER}" ] && [ -n "${LIMESURVEY_ADMIN_PASSWORD}" ]; then logInfo 'Database not yet populated - installing Limesurvey database' >&2 su - "${APPLICATION_USER}" \ -c php -f "${APP_DIR}/application/commands/console.php" -- \ install "${LIMESURVEY_ADMIN_USER}" "${LIMESURVEY_ADMIN_PASSWORD}" \ "${LIMESURVEY_ADMIN_NAME}" "${LIMESURVEY_ADMIN_EMAIL}" verbose fi if [ -f "${APP_DIR}/application/commands/UpdateDbCommand.php" ]; then logInfo 'Updating database...' >&2 su - "${APPLICATION_USER}" -c php "${APP_DIR}/application/commands/console.php" updatedb else logWarn 'WARNING: Manual database update may be required!' >&2 fi if [ -n "${LIMESURVEY_ADMIN_USER}" ] && [ -n "${LIMESURVEY_ADMIN_PASSWORD}" ]; then logInfo 'Updating password for admin user...' >&2 su - "${APPLICATION_USER}" \ -c php -f "${APP_DIR}/application/commands/console.php" -- \ resetpassword "${LIMESURVEY_ADMIN_USER}" "${LIMESURVEY_ADMIN_PASSWORD}" fi Here is the output of bash --version: GNU bash, version 4.4.20(1)-release (x86_64-pc-linux-gnu) Copyright (C) 2016 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> This is free software; you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Edit 2 I put what I could on github. Here is the commit. I'm not entirely sure but I think it should work if you clone the repo and run the launch script.
This doesn't set a value for any of value, key or arg:

declare value key arg

So, if the assignment to key in the case isn't reached:

while (( $# > 0 )); do
    arg="$1" && shift
    case "$arg" in
        --key=*)
            key="${arg#*=}"
            ;;

then key will still be unset ("unbound") after the loop, and since the script has set -u, it'll throw an error when it's used:

if [ -z "$key" ]; then   # line 66: key: unbound variable

Initializing the variables to empty strings (as with declare key= value= arg=) would remove that issue. However, you also have this reference to args:

if [ -z "$key" ]; then           # line 66: key: unbound variable
    if (( ${#args} > 0 )); then

Note that that refers to args, not args[@]: you're taking the length of the zeroth element of the array args, not the number of elements in it. But if args is empty, that zeroth element doesn't exist; again, an error.
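The two pitfalls named in the answer (unset variables under set -u, and ${#args} versus ${#args[@]}) can be reproduced in isolation; the variable names below are invented for the demo. Note that ${#args[@]} is the element count, while ${#args} is the character length of args[0]:

```shell
#!/usr/bin/env bash
set -u

key=          # initialised to empty: [ -z "$key" ] is now safe under set -u
args=()

[ -z "$key" ] && echo "key is empty, but bound"

# ${#args[@]} counts elements and is safe even for an empty array;
# ${#args} is the length of ${args[0]}, which may not exist at all:
echo "elements: ${#args[@]}"      # -> elements: 0

args=( "one two" "three" )
echo "len of args[0]: ${#args}"   # -> len of args[0]: 7
echo "elements: ${#args[@]}"      # -> elements: 2
```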
Random unbound variable error within function
1,478,106,400,000
I have been trying all day with no success to get bash to receive arguments: the closest reference to this I could find is: How to pass parameters to an alias? If I execute:

rename -v -n 's/^the.//' *

it does exactly what I need, but I would like to turn it into an alias that receives the "the." string at run time. Is there a way of doing this? Any ideas would be welcome! I have tried this, but with no success:

alias rp="_rp(){ rename 's/"$1"//' *; unset -f _rp; }; _rp"
You can't use arguments in an alias. (You can append items after it, but that then just complicates this situation.) Here's what the man page (man bash) says about them:

The first word of each simple command, if unquoted, is checked to see if it has an alias. If so, that word is replaced by the text of the alias. [...] There is no mechanism for using arguments in the replacement text. If arguments are needed, a shell function should be used. [...] For almost every purpose, aliases are superseded by shell functions.

So, instead of an alias you should use a function:

rp() { rename "s{$1}{}" *; }    # No "{}" characters in the substitution

Usage:

rp 'the.'    # Quotes optional but recommended. Remember . represents any character
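The "append only" behaviour of aliases can be demonstrated directly. Everything below is invented for the demo (and the rp stand-in only echoes its pattern rather than calling rename, so the snippet runs anywhere); note that in a non-interactive script, alias expansion additionally requires shopt -s expand_aliases:

```shell
#!/usr/bin/env bash
shopt -s expand_aliases        # aliases are off by default in scripts

alias greet='echo Hello'

# Anything after the alias is simply appended as extra words:
greet world                    # -> Hello world

# For real parameter substitution, a function is needed:
rp () { echo "renaming with pattern: s/$1//"; }
rp 'the.'                      # -> renaming with pattern: s/the.//
```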
bash alias rename function with arguments
1,478,106,400,000
I'm using bash shell on Ubuntu Linux. I have this in my script

output_file=create_test_results_file "$TFILE1"

Through echo statements, I have verified that the value of $TFILE1 is a file path, e.g. /tmp/run_tests.sh1.7381.tmp. But when I run my script, somehow the contents of the file are being passed to my function, whose contents are

#!/bin/bash
create_test_results_file () {
    RESULTS_INPUT_FILE=$1
    OUTPUT_FILE="/tmp/output`date +%m`_`date +%d`_`date +%y`.txt"
    touch $OUTPUT_FILE
    marker=""
    num_passed=0
    num_failed=0
    while read p; do
        if [[ $p == *"√"* ]]; then
            if [[ $p == *"PASSED"* ]]; then
                num_passed=$((num_passed+1))
            elif [[ $p == *"WARNING"* ]]; then
                num_failed=$((num_failed+1))
            fi
        elif [ $num_passed -gt 0 -o $num_failed -gt 0 ]
        then
            echo "second branch"
            echo "$marker PASSED: $num_passed, WARNING: $num_failed" >> $OUTPUT_FILE
            marker=$p
            num_passed=0
            num_failed=0
        else
            marker=$p
        fi
    done <"$RESULTS_INPUT_FILE"
    # Add un-added lines
    if [ $num_passed -gt 0 -o $num_failed -gt 0 ]
    #if [ \( "$num_passed" -gt 0 -o "$num_failed" -gt 0 \) -a \( -z "$marker" \) ]
    then
        echo "$marker PASSED: $num_passed, FAILED: $num_failed" >> $OUTPUT_FILE
    fi
    echo $OUTPUT_FILE
}

because I get errors like

/tmp/run_tests.sh1.7381.tmp: line 1: Validation: command not found
/tmp/run_tests.sh1.7381.tmp: line 2: 2017-04-20: command not found
/tmp/run_tests.sh1.7381.tmp: line 3: Login: command not found
/tmp/run_tests.sh1.7381.tmp: line 4: $'\E[1': command not found

The words "Validation", "2017-04-20", and so on, are all contents of the file. What's the correct way to pass in the file path as an argument and not have it be interpreted literally?
The command line for calling your function:

output_file=create_test_results_file "$TFILE1"

This will assign the value create_test_results_file to the variable output_file before running the command "$TFILE1". I believe you might have wanted to do

output_file=$( create_test_results_file "$TFILE1" )

This assigns the output of create_test_results_file "$TFILE1" to the variable output_file. There are several things one could comment upon in this script, but I'll pick this line:

OUTPUT_FILE="/tmp/output`date +%m`_`date +%d`_`date +%y`.txt"

This is better written as

OUTPUT_FILE=$( date +"/tmp/output%m_%d_%y.txt" )

Also related: Security implications of forgetting to quote a variable in bash/POSIX shells; Why does my shell script choke on whitespace or other special characters?
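The difference between the two call forms can be shown with a minimal sketch (function and argument names are invented): a plain var=name arg runs arg as a command with var=name in its environment, while var=$( name arg ) actually calls the function and captures its output.

```shell
#!/usr/bin/env bash
make_results_name () {
    # builds and prints a (made-up) output file name from its argument
    printf '/tmp/output_%s.txt\n' "$1"
}

# WRONG: output_file=make_results_name "demo" would run "demo" as a
# command, never calling the function.

# RIGHT: capture the function's stdout with command substitution:
output_file=$( make_results_name "demo" )
echo "$output_file"     # -> /tmp/output_demo.txt
```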
How do I pass a file path to a function instead of the contents of the file?
1,478,106,400,000
This function essentially aims to do what

alias "git log"="git log --name-status"

would do, had it been possible. Since it is not possible to alias something with spaces in it, I chose to write a shell function:

git() {
    case $# in
        1)
            case "$1" in
                log) git log --name-status ;;
                *) git "$@" ;;
            esac
            ;;
        *) git "$@" ;;
    esac
}

However, whenever I run this, the terminal emulator crashes after ~1 second. What might be the reason for this? I am using mintty and Cygwin.
You're recursively calling git the function. Use command git for the internal calls so that the function isn't used for them.
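The fix, then, is to write command git inside the function body so the inner calls bypass the function. As a hedged, self-contained sketch of the same pattern, uname stands in for git below so the snippet runs even where git is not installed (the case structure is simplified compared to the original):

```shell
#!/usr/bin/env bash
# Without `command`, the inner call would hit this function again and
# recurse forever; `command` forces lookup of the real binary.
uname () {
    case "${1-}" in
        banner) echo "kernel: $(command uname -s)" ;;
        *)      command uname "$@" ;;
    esac
}

uname banner           # dispatched through the function exactly once
uname -r >/dev/null    # falls through to the real binary
```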
Terminal emulator crashes with function with nested case statements?
1,478,106,400,000
What am I missing here? I have created a simple array:

declare -a appArray=( "item1 -a -b" "item2 -c -d" )

If I echo this I can see it all:

echo ${appArray[@]}
> item1 -a -b item2 -c -d

I then create a function as follows:

fc_DEBUG () { if [ $1 -eq 1 ] ; then echo $2; fi; };

It is designed to sit in a bash script, and if I set a DEBUG variable it will echo the text back. So I can use it throughout the script without needing to manually add / remove things. It works fine with basic data, e.g.

fc_DEBUG $DEBUG "This is DEBUG text"

If I call this with the array however, I only get part of the array:

fc_DEBUG $DEBUG "${appArray[@]}"
> item1 -a -b
${appArray[@]} gets expanded before fc_DEBUG runs. So the second argument the function sees is the first element of the array. To be explicit, the three arguments fc_DEBUG sees are

$DEBUG "item1 -a -b" "item2 -c -d"

(replace $DEBUG with the words resulting from the split+glob operator applied to the actual value of $DEBUG, as you forgot to quote it). In technical terms, the array is passed by value, not by reference.

fc_DEBUG () {
    if [ "$1" -eq 1 ] ; then
        shift
        echo "$@"
    fi
}

Now, the first argument is dropped from the argument list with shift, and the rest of the arguments is printed. Call it with a quoted array:

fc_DEBUG "$DEBUG" "${appArray[@]}"
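The shift-based version above can be verified end to end; printf '%s\n' is used below instead of echo so each remaining argument prints on its own line, making the element boundaries visible:

```shell
#!/usr/bin/env bash
declare -a appArray=( "item1 -a -b" "item2 -c -d" )

fc_DEBUG () {
    if [ "$1" -eq 1 ]; then
        shift                    # drop the debug flag
        printf '%s\n' "$@"       # one line per remaining argument
    fi
}

# The quoted expansion becomes one argument per array element:
fc_DEBUG 1 "${appArray[@]}"
# -> item1 -a -b
# -> item2 -c -d
```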
display array in a function - not working
1,478,106,400,000
I studied this article called Returning Values from Bash Functions.

Data

Lorem.
\begin{document}
hello
\end{document}

Case #1 which does not work

Code

#!/bin/bash
function getStart {
    local START="$(awk '/begin\{document\}/{ print NR; exit }' data.tex)"
}
START2=$(getStart)
echo $START2

which returns falsely an empty line. I expect 1. Why does the script return an empty line?

Case #2 which works

Code

#!/bin/bash
function getStart {
    local START="$(awk '/begin\{document\}/{ print NR; exit }' data.tex)"
    echo $START
}
getStart

which prints correctly 1.

Output of choroba's answer

#!/bin/bash
function getStart {
    local START="$(awk '/begin\{document\}/{ print NR; exit }' data.tex)"
    echo $START
}
START2=$(getStart)
echo $START2

gives the line number only once, which is not the expected result. I think it should do it twice.
$(...) (aka "command substitution") captures the output of the ... command. Assigning a value to a variable produces no output, so there's nothing to capture. In case #2, echo produces the output.

getStart () {
    local l=Hallo
    echo $l
}

v=$(getStart)
echo $v

To answer your update: the function outputs Hallo. This output is captured by the command substitution, because that's what command substitution does, so up to v=$(getStart), the script produces no output. Then the line echo $v outputs Hallo.
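A self-contained variant of the question's setup, with the file name and contents invented for the demo, shows that it is the echo inside the function whose output the command substitution captures:

```shell
#!/usr/bin/env bash
tmpfile=$(mktemp)
printf '%s\n' 'Data Lorem.' '\begin{document}' 'hello' '\end{document}' > "$tmpfile"

getStart () {
    local start
    start=$(awk '/begin\{document\}/{ print NR; exit }' "$1")
    echo "$start"    # without this echo, $(getStart ...) captures nothing
}

START2=$(getStart "$tmpfile")
echo "$START2"       # -> 2  (\begin{document} is on line 2 here)
rm -f "$tmpfile"
```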
Returning local values from Bash variables?
1,427,278,252,000
I have the following simple script. In this script, I am assigning a value to a global variable inside a function. I can clearly see that the value is being assigned to the variable via a debug statement. However, when I echo the variable at the end, it's always empty.

function getValue() {
    local key=$1
    local configFile=$2
    keyValuePair="$(egrep "$key" "$configFile")"
    if [ ! "$?" -eq 0 ]
    then
        echo "Cannot find the value for the specifed key"
        return 1;
    fi
    value="$(cut -d"=" -f"2" <<< "$keyValuePair")"
    echo "$value"
    return 0;
}

function configuer() {
    if [ ! -f "$RMCFGFILE" ]
    then
        echo "Cannot file RMGCFG file."
        return 1;
    fi
    #Recyclebin configuration
    value=$(getValue "recyclebin" $RMCFGFILE)
    if [ ! "$?" -eq 0 ]
    then
        echo "$value"
        return 1;
    fi
    RECYCLEBIN="$value"    **#### I am assigning here**
    return 0;
}

RECYCLEBIN=
RMCFGFILE="/home/sas/.rm.cfg"
LOCALEFILE=""

CONFIG_RESULT=$(configuer)
if [ ! "$?" -eq 0 ]
then
    echo "$CONFIG_RESULT"
    exit 1;
fi
echo "Configuration success"

eval echo "Recyclebin: ${RECYCLEBIN}"    **##No value**

Does anyone see what's going wrong here?
You're expecting:

CONFIG_RESULT=$(configuer)

to assign a value to $RECYCLEBIN because you...

RECYCLEBIN="$value"

... in the configuer() function. It's true that the function does assign a value to $RECYCLEBIN, but that value only persists for the duration of the $(subshell) in which you set it. It will not apply any changes to its parent shell's environment - which is where you call it. When you:

eval echo "Recyclebin: ${RECYCLEBIN}"

eval parses all of its arguments out into a space separated string and attempts to run the result as a shell command. So "${RECYCLEBIN}" disappears because - in the current shell environment - it was last set to the '' null string like:

RECYCLEBIN=

So on its execution of the statement all it does is:

echo Recyclebin:

Which is functionally no different than...

echo "Recyclebin: ${RECYCLEBIN}"

... anyway, because $RECYCLEBIN is empty.
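The subshell behaviour can be reduced to a few lines (the path assigned below is invented for the demo): the same assignment is lost when the function runs inside $( ... ), but survives when the function is called directly in the current shell.

```shell
#!/usr/bin/env bash
RECYCLEBIN=

configuer () {
    RECYCLEBIN="/tmp/trash"   # visible only in the shell that runs the function
    echo "configured"
}

CONFIG_RESULT=$(configuer)                 # runs configuer in a subshell
echo "after \$(...): '$RECYCLEBIN'"        # -> after $(...): ''

configuer >/dev/null                       # direct call: same shell
echo "after direct call: '$RECYCLEBIN'"    # -> after direct call: '/tmp/trash'
```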
Value assigned inside a function variable is always empty
1,427,278,252,000
I have a bash function called numeric that returns either 1 or 0.

numeric () {
    # compute k either 1 or 0
    echo "$k"
}

How can I use this function in a conditional statement to check if a variable var is numeric?
Remember that in the context of shell conditional expressions, a return value of 0 means "success" or "true", and a non-zero value means "failure" or "false", so I would recommend adapting the function so that it returns 0 if the argument is a numerical value. Assuming that the "conditional statement" is an if construct, the following will work:

if numeric "$var"
then
    # Code if $var is numeric
else
    echo "$var is not numeric"
fi
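As a hedged sketch of the adapted function (the answer does not specify its body), a common glob-based test can supply the exit status; note this particular pattern only accepts non-negative integers:

```shell
#!/usr/bin/env bash
# Exit-status version of numeric: 0 ("true") for non-negative integers.
numeric () {
    case $1 in
        ''|*[!0-9]*) return 1 ;;   # empty, or contains a non-digit
        *)           return 0 ;;
    esac
}

for var in 42 foo 3.14 007; do
    if numeric "$var"; then
        echo "$var is numeric"
    else
        echo "$var is not numeric"
    fi
done
# -> 42 is numeric / foo is not / 3.14 is not / 007 is numeric
```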
Use custom test function in bash conditional statement
1,427,278,252,000
This site says that functions are faster than aliases, but he rightly points out that aliases are easier to understand - when you want something very simple and do not need to consider passing arguments, aliases are convenient and sensible. That being the case, my personal profile is about 1,000 lines and serves both as a source of functions and tools that I use a lot, and as a means of keeping techniques that I can refer to and reuse for other tasks, with both aliases and functions in there. A problem though is that aliases take precedence over functions, and re-definitions of aliases and functions can cause problems (e.g. if I have a function called gg and then later on in the script, by accident, I have an alias called gg - But also if a function is redefined later, again as a function, it overrides the previous definition). The profile loads, but I end up with problems. One solution could be to eliminate all aliases and only use functions (does anyone do that, I'd be curious to know, because if I want to do alias m=man that's more intuitive and sensible than function m() { man $@; }?), but I still have the problem of function redefinitions in that case. Is there a way to parse a script with the goal of answering: "for each declaration of an alias or function, show me all lines that contain a re-declaration (either alias or function) of that item"?
Try something like this: $ cat find-dupes.pl #!/usr/bin/perl use strict; #use Data::Dump qw(dd); # Explanation of the regexes ($f_re and $a_re): # # Both $f_re and $a_re start with '(?:^|&&|\|\||;|&)' to anchor # the remainder of the expression to the start of the line or # immediately after a ;, &, &&, or ||. Because it begins with # '?:', this is a non-capturing sub-expression, i.e. it just # matches its pattern but doesn't return what it matches. # $f_re has two main sub-expressions. One to match 'function name ()' # (with 'function ' being optional) and the other to match # 'function name () {' (with the '()' being optional). # # Each sub-expression contains more sub-expressions, with one of # them being a capture group '([-\w.]+)' and the rest being # non-capturing (they start with '?:'). i.e. it returns the # function name as either $1 or $2, depending on which subexp # matched. my $f_re = qr/(?:^|&&|\|\||;|&)\s*(?:(?:function\s+)?([-\w.]+)\s*\(\)|function\s+([-\w.]+)\s+(?:\(\))?\s*\{)/; # $a_re matches alias definitions and returns the name of # the alias as $1. my $a_re = qr/(?:^|&&|\|\||;|&)(?:\s*alias\s+)([-\w.]+)=/; # %fa is a Hash-of-Hashes (HoH) to hold function/alias names and # the files/lines they were found on. i.e an associative array # where each element is another associative array. Search for # HoH in the perldsc man page. my %fa; # main loop, read and process the input while(<>) { s/#.*|^\s*:.*//; # delete comments s/'[^']+'/''/g; # delete everything inside ' single-quotes s/"[^"]+"/""/g; # delete everything inside " double-quotes next if /^\s*$/; # skip blank lines while(/$f_re/g) { my $match = $1 // $2; #print "found: '$match':'$&':$ARGV:$.\n"; $fa{$match}{"function $ARGV:$."}++; }; while(/$a_re/g) { #print "found: '$1':'$&':$ARGV:$.\n"; $fa{$1}{"alias $ARGV:$."}++; }; close(ARGV) if eof; }; #dd \%fa; # Iterate over the function/alias names found and print the # details of duplicates if any were found. 
foreach my $key (sort keys %fa) { my $p = 0; # Is this function/alias ($key) defined more than once on # different lines or in different files? if (keys %{ $fa{$key} } > 1) { $p = 1; } else { # Iterate over the keys of the second-level hash to find out # if there is more than one definition of a function/alias # ($key) in the same file on the same line ($k) foreach my $k (keys %{ $fa{$key} }) { if ($fa{$key}{$k} > 1) { $p = 1; # break out of the foreach loop, there's no need to keep # searching once we've found a dupe last; }; }; }; # print the details if there was more than one. print join("\n\t", "$key:", (keys %{$fa{$key}}) ), "\n\n" if $p; }; The commented-out Data::Dump, print, and dd lines were for debugging. Uncomment them to get a better idea of what this script does and how it works. The output of the dd function from the Data::Dump module is particularly interesting as it shows you the structure (and contents) of the %fa HoH. Data::Dump is not included with perl, it's a library module you need to install. You didn't mention what distro you're using but if you're using debian/ubuntu/mint/etc, you can install it with sudo apt install libdata-dump-perl. Other distros probably have it packaged under a slightly different name. Otherwise, you can install it with cpan. Example output (using a file containing your aliases from your comment plus a few dummy functions): $ cat yorsub.aliases function foo () { echo ; } bar () { echo ; } bar () { echo ; } function baz () { echo ; } && quux () { echo ; } ; alias xyz=abc; type tmux &> /dev/null && alias t='tmux' alias cd-='cd -'; alias cd..='cd ..'; alias u1='cd ..'; alias u2='cd ../..'; alias u3='cd ../../..'; alias u4='cd ../../../../..'; alias u5='cd ../../../../../..'; alias u6='cd ../../../../../../..' alias back='cd -'; alias cd-='cd -'; alias .1="cd .."; alias .2="cd ../.."; alias .3="cd ../../.."; alias .4="cd ../../../.."; alias .5="cd ../../../../.."; alias .6='cd ../../../../../../..' function cd.. { cd .. 
; } function abc () { xyx "$@" }; abc () { xyz } ; function abc { xyz }; alias abc=xyz $ ./find-dupes.pl yorsub.aliases abc: function yorsub.aliases:8 alias yorsub.aliases:8 bar: function yorsub.aliases:3 function yorsub.aliases:2 cd-: alias yorsub.aliases:6 cd..: alias yorsub.aliases:6 function yorsub.aliases:7
Finding duplicate aliases and functions in a script (.bashrc etc)
1,427,278,252,000
In bash, when I'd like to have a glimpse at what an already defined shell function does, I can:

$ type myFunctionName

For a function myFunctionName, it gives me the type of the name (a function), but also prints the source of this shell function on the terminal. Very handy. When I do the same in zsh, it only gives me its type, not its shell code. Is there a way to ask zsh to print the source of a shell function given its name?
For both zsh and bash (and ksh) you can use typeset -f myFunctionName to get the function definition:

% x()
function> {
function> echo x
function> }
% typeset -f x
x () {
    echo x
}
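Since the same builtin exists in bash, a quick self-check can be run there too (function name invented; the exact indentation of the printed definition varies by shell and version):

```shell
#!/usr/bin/env bash
x () {
    echo "hello from x"
}

typeset -f x    # prints the stored definition, e.g.:
# x ()
# {
#     echo "hello from x"
# }
```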
Is there a zsh command to output shell function code, like `type` in bash [duplicate]
1,427,278,252,000
Within my bash code I have a part of sed + awk code, which iteratively does some operation on an input file and adds the results to another txt file (both files had been created by the same bash script, and can be stored as different variables).

#sed removing some lines in input file "${file}".xvg, defined in the beginning of bash script
sed -i '' -e '/^[#@]/d' "${file}".xvg

# AWK measure XMAX and YMAX in the input file
# adding these outputs as two lines in another batch.bfile, which is going to be used for something
awk '
NR==1{
  max1=$1
  max2=$2
}
$1>max1{max1=$1}
$2>max2{max2=$2}
END{printf "WORLD XMAX %s\nWORLD YMAX %s\n",max1+0.5,max2+0.5'} "${file}".xvg >> "${tmp}"/batch.bfile

Is it possible to combine both (sed + awk) actions into some function (defined in the beginning of my bash script) and then use it as a one-line command within the script (in a more sophisticated case it will be applied to many files within a FOR loop)? Here is an example of my version:

#!/bin/bash
#folder with batch file
home=$PWD
tmp="${home}"/tmp

## define some functions for file processing
bar_xvg_proc () {
    ##AWK procession of XVG file: only for bar plot;
    sed -i '' -e '/^[#@]/d' ${file}
    # check XMAX and YMAX for each XVG
    awk '
    NR==1{
      max1=$1
      max2=$2
    }
    $1>max1{max1=$1}
    $2>max2{max2=$2}
    END{printf "WORLD XMAX %s\nWORLD YMAX %s\n",max1+0.5,max2+0.5'} ${file} >> "${tmp}"/grace2.bfile
}

###
bar_xvg_proc "${home}"/test.xvg

and here is an error from sed

sed: -i may not be used with stdin

BUT if I define my test.xvg as a new variable $file="${home}"/test.xvg before calling my function in the script - it works well. How could I use this function directly with the input file (without a specific variable assigned to the file)?
Here is my xvg file: # Created by: # :-) GROMACS - gmx cluster, 2019.3 (-: # # Executable: /usr/local/bin/../Cellar/gromacs/2019.3/bin/gmx # Data prefix: /usr/local/bin/../Cellar/gromacs/2019.3 # Working dir: /Users/gleb/Desktop/DO/unity_or_separation # Command line: # gmx cluster is part of G R O M A C S: # # Good gRace! Old Maple Actually Chews Slate # @ title "Cluster Sizes" @ xaxis label "Cluster #" @ yaxis label "# Structures" @TYPE xy @g0 type bar 1 94 2 31 3 24 4 24 5 15 6 6 7 6 8 5 9 4 10 4 11 3 12 3 13 3 14 3 15 2 16 2 17 2 18 2 19 1 20 1 21 1 22 1 23 1 24 1 25 1
Just change ${file} to "$1" inside your function and it'd do what you want. Also then consider changing this: bar_xvg_proc () { ##AWK procession of XVG file: only for bar plot; sed -i '' -e '/^[#@]/d' "$1" # check XMAX and YMAX for each XVG awk ' NR==1{ max1=$1 max2=$2 } $1>max1{max1=$1} $2>max2{max2=$2} END{printf "WORLD XMAX %s\nWORLD YMAX %s\n",max1+0.5,max2+0.5'} "$1" >> "${tmp}"/grace2.bfile } to this: bar_xvg_proc () { ##AWK procession of XVG file: only for bar plot; # check XMAX and YMAX for each XVG awk ' /^[#@]/ { next } (++nr)==1{ max1=$1 max2=$2 } $1>max1{max1=$1} $2>max2{max2=$2} END{printf "WORLD XMAX %s\nWORLD YMAX %s\n",max1+0.5,max2+0.5'} "${@:--}" >> "${tmp}"/grace2.bfile } You never need sed when you're using awk and using "${@:--}" that way lets you have a function that will work whether you pass multiple file names to it or pipe a stream to it as it's telling awk to use stdin if no file is present. Idk if you should really be using >> instead of > at the end of that, and you might want to do the output redirection outside of the function.
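To make the files-or-stdin behaviour concrete, here is a stripped-down sketch of that pattern (the function name maxcols, the sample data, and dropping the WORLD prefix are my simplifications):

```shell
maxcols() {
  # "${@:--}" expands to the arguments if any were given, else to "-",
  # so awk reads the named files or falls back to stdin
  awk '/^[#@]/ { next }
       (++nr) == 1 { max1 = $1; max2 = $2 }
       $1 > max1 { max1 = $1 }
       $2 > max2 { max2 = $2 }
       END { printf "XMAX %s YMAX %s\n", max1, max2 }' "${@:--}"
}
printf '5 7\n9 2\n' > data.xvg          # a tiny sample file
maxcols data.xvg                        # XMAX 9 YMAX 7
printf '1 94\n2 31\n3 24\n' | maxcols   # XMAX 3 YMAX 94
```

The same function thus works whether called with file arguments or as the receiving end of a pipe.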
bash: functions containing part of AWK code
1,427,278,252,000
I found (on Google) this perfectly working line to replace every occurrence in all files in my directory and subdirectories: grep -lr previoustext | xargs sed -i 's/previoustext/newtext/g' It works great. But now I'm trying to use it in a function in my bash_aliases file as follows: freplace() { grep -lr previoustext | xargs sed -i 's/previoustext/newtext/g'; } However, when I call freplace previoustext newtext in my terminal, nothing happens. The text is not replaced. Any idea why it doesn't work?
If you want to pass arguments to a function, you need to use positional parameters to pick them up. freplace() { grep -lr "$1" | xargs sed -i "s/$1/$2/g" } Note that it doesn't work for strings containing / or other characters special to sed.
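One way around the / caveat is to escape the strings before handing them to sed. This is only a sketch, assuming GNU grep/sed/xargs; the escaping expressions are my own and cover common cases, not newlines in the arguments:

```shell
freplace() {
  # Escape characters special to a sed pattern, and & / \ in the replacement
  local from to
  from=$(printf '%s' "$1" | sed 's/[][\.*^$/]/\\&/g')
  to=$(printf '%s' "$2" | sed 's/[&/\]/\\&/g')
  grep -lr -- "$1" . | xargs -r sed -i "s/$from/$to/g"
}
```

Filenames containing whitespace would still need `grep -lrZ` paired with `xargs -0`.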
bash function to replace every occurence of text in directory and subdirectories
1,427,278,252,000
I'd like to be able to have $SECONDS shown with hours, minutes, seconds in an environment variable, so I only need to use, e.g. $RUNTIME in various places in the script rather than have the whole thing every time I want to use it. I don't know what formatting to use to allow it to go into a variable: export RUNTIME="$(($SECONDS / 3600))hrs $((($SECONDS / 60) % 60))min $(($SECONDS % 60))sec" So I can simply: echo "The script ran for: $RUNTIME" Thanks.
Defining RUNTIME as a variable wouldn't help, as it would always hold a constant value: the run time at the moment it was defined. Try a shell function instead: runtime() { printf "%dhrs %dmin %dsec\n" $((SECONDS / 3600)) \ $(((SECONDS / 60) % 60)) \ $((SECONDS % 60)); } runtime and call it / use it with a "command substitution".
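A deterministic variant of that idea takes the elapsed seconds as an optional argument (the parameter default is my addition), which also makes it easy to demo without waiting an hour:

```shell
runtime() {
  # $1: elapsed seconds; defaults to bash's running $SECONDS counter
  local s=${1:-$SECONDS}
  printf '%dhrs %dmin %dsec' $((s / 3600)) $(((s / 60) % 60)) $((s % 60))
}
echo "The script ran for: $(runtime 3725)"   # The script ran for: 1hrs 2min 5sec
```

Inside a real script you would simply call `$(runtime)` with no argument.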
Can I put $SECONDS into an environment variable in a bash script?
1,427,278,252,000
I want to pass arguments from a loop into a function func. For simplicity let's say we are working with the loop for x in {1..5} do for y in {a..c} do echo $x$y done done I'm just echoing because I don't know what to do. I'd like to run the equivalent of func 1a 1b 1c 2a 2b 2c 3a 3b 3c 4a 4b 4c 5a 5b 5c. How do I do this?
func {1..5} would be equivalent to func 1 2 3 4 5. In general, the list of words in a for statement is just like any list of words in a command, so you can just replace the loop with a single invocation of the command, with whatever list you used there moved to the command arguments. Also, you can use multiple brace expansions together: {1..5}{a..c} would create the list 1a 1b 1c 2a 2b 2c 3a 3b 3c 4a 4b 4c 5a 5b 5c (as distinct words), so in the case you show, func {1..5}{a..c} should work. If your loop does something more complex to create the arguments to the final command, you can use an array to collect them (in Bash/ksh/zsh). Assuming we have generate_arg that has to be run to produce the arguments to func: args=() for i in {1..5}; do args+=( "$(generate_arg "$i")" ) done func "${args[@]}" (Using an array is better than concatenating the values to a string in that it keeps values with whitespace intact.)
How to pass list of arguments into a function
1,427,278,252,000
I have a shell function (in .bashrc) which creates a temp file, executes the arguments (including any sequence of pipelines), redirects the output to the temp file and then opens it in VS Code. I invoke the shell function as Temp "ls | nl" In the following code, I tried to make it work. I break the entire string with IFS on spaces and store it in an array. Now I want to execute the entire sequence of pipelines and redirect the final stdout to the temp file. How can I achieve this? I know that a for loop won't work here, but I want to print the entire array. Temp() { a="$(mktemp --suffix=.md "/tmp/$1-$(GetFilename "$2")-XXX")" IFS=' ' read -r -a Array <<< "$1" nArray=${#Array[@]} # Size of array for ((i=0; i<nArray; i=i+1)); do done #"${@}" > "$a" code -r "$a" } In an interactive terminal session the following works: cat <<EOF $(ls | nl) EOF So I tried a heredoc instead of the for loop inside the function: cat > "$a" <<EOF $($1) EOF But this puts quotes around | and thus fails. The above would work if we could remove the quotes somehow. This also doesn't work: cat > "$a" <<EOF $(echo $1 | sed "s/'//g") EOF
If you want a function to execute a shell command given as an argument, you'll need to use eval on the string, e.g. this prints FOO: eval_arg() { eval "$1" } eval_arg "echo foo |tr a-z A-Z" Just expanding "$1" won't do, since shell grammar like pipes, quotes and redirections aren't processed after parameter expansions. Splitting the first argument to an array or to $@ doesn't really change this. So, $(ls | nl) has a verbatim pipe character, which is processed in the shell, but in $($1), any special characters within the value of $1 aren't processed (apart from word splitting and globs). There are no quotes in play here. As for that last example, $(echo $1 | sed "s/'//g") splits the value of $1 on whitespace (word splitting with the default IFS), then joins the words with spaces (echo), passes that as input to sed which removes any single quotes there. Hence: $ set -- "'foo bar'" $ $(echo $1 | sed "s/'//g") foo bar The outer quotes in the set command are processed by the shell, while the inner quotes are part of the value. Note that just set -- 'foo bar' would not result in $1 containing any quotes, the only quotes here would be removed as part of the normal shell processing.
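A sketch closer to the question's Temp function, with eval doing the work. To stay self-contained, the final `code -r` editor call is replaced by a plain cat, and the function name temp_eval is my own:

```shell
temp_eval() {
  # Evaluate a command string and capture its output in a temp file
  local tmp
  tmp=$(mktemp --suffix=.md) || return
  eval "$1" > "$tmp"
  printf '%s\n' "$tmp"      # hand the file name back to the caller
}
f=$(temp_eval 'echo foo | tr a-z A-Z')
cat "$f"                    # FOO
```

The pipeline inside the quoted string is parsed and run by eval, so no IFS splitting or quote stripping is needed.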
Shell Function: Sequence of Pipelines as Argument
1,427,278,252,000
I am writing a shell function which makes an external API call via cURL (the external API syntax isn't under my control). I've approached it like this (simplified): #!/bin/sh template_get_entry='get_entry:%s' template_set_entry='set_entry:%s=%s' curlheaders='-H stuff' curluri="https://www.domain.com:1234/api.php" # make an API call to get entry "foo" call_api "$template_get_entry" "foo" # make an API call to set entry "foo" to "bar" call_api "$template_set_entry" "foo" "bar" call_api() { apicmd="$( printf "$1" "$2" "$3" )" result="$( eval "/usr/local/bin/curl" "$curlheaders" "-d" "$apicmd" "$curluri" )" retcode="$?" .....stuff..... } There are 2 problems with this code. First, the number of args is variable. If the format string has fewer placeholders than the number of arguments supplied, printf reuses the format string for the excess arguments, appending extra copies of it to the output. I can't see the correct way to work around this. Second, because I've used eval, this creates a knock-on problem, in that retcode will surely pick up the return code from eval and not from curl, and I don't know the right way to prevent/fix that. How should I do something like this, which needs a variable number of args?
You could try to fill the format strings with zero length specifiers up to the maximum expected parameter count: template_get_entry='get_entry:%s %0.0s'
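To see the trick in action (the template is adapted from the question; %0.0s prints its argument with field width and precision zero, i.e. nothing):

```shell
template='get_entry:%s%0.0s'
printf "$template\n" "foo"           # get_entry:foo   (missing arg -> empty)
printf "$template\n" "foo" "bar"     # get_entry:foo   (bar consumed invisibly)
printf 'get_entry:%s\n' "foo" "bar"  # without the filler the format repeats: two lines
```

So padding the template with as many %0.0s as the maximum argument count makes both the short and the long call produce one clean line.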
Using shell 'printf' where the format string is in a variable and doesn't have a fixed number of field placeholders?
1,427,278,252,000
I do not understand the following expression. function abc(){ .............. ............... [[ -f $filename]] && return 0 || return 1 } As per the tutorial, if a file with the name in the filename variable exists then this function returns 1, otherwise it returns 0. I understand the && and || operators, but how does this statement get the desired result? As I see it, in case [[ -f $filename ]] evaluates to false, then one operand of && is false, so the result of the AND is false. Now it goes to the OR, and if the first operand is 0 it returns the result of the second operand, so it should return 1, but instead it is returning 0. How is this being evaluated?
Both return statements on that last line of the function can be removed. [[ -f "$filename" ]] This is the last statement in the function with both return's removed (note the quoted variable expansion and the added space before ]]). The "exit value" of the function will be the result of this statement. If the file $filename exists, the function will exit with a value of zero (signifying "success", "yes", "ok" etc.), otherwise it will exit with a value of one (or more generally, non-zero, signifying "failure" of some kind). Don't mix || and && on the same line unless you know what it does. In a command line as command1 && command2 || command3 the last command would be executed if either of the previous commands failed (returned non-zero). It's better to write if command1; then command2 else command3 fi if this is what you meant. This matters in commands like [[ -f "$filename" ]] && echo "exists" || touch "$filename" This would try to execute the touch command if the echo failed, which it may do if there's nowhere to output the string to (a write error occurs).
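A minimal sketch of relying on the last command's status, as described above (the function name and the file paths are arbitrary):

```shell
check_file() {
  # The function's exit status is the status of its last command
  [[ -f "$1" ]]
}
if check_file /etc/hosts; then
  echo "exists"
else
  echo "missing"
fi
```

No explicit return is needed; the `[[ -f ... ]]` test's own zero/non-zero status becomes the function's.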
Return value of function in UNIX
1,427,278,252,000
Suppose I have a directory called Titlepage that has many files named titlepage_1.pdf, titlepage_2.pdf ... titlepage_n.pdf, along with their tex files. I have a bash function that swaps two filenames (e.g. alterpdf 2 3 swaps the filename titlepage_2.pdf with titlepage_3.pdf, and does the same for the corresponding tex files): function swap(){ mv $1 $1._tmp && mv $2 $1 && mv $1._tmp $2; } function alterpdf(){ swap titlepage_$1.pdf titlepage_$2.pdf; swap titlepage_$1.tex titlepage_$2.tex; } Now I want a function (e.g. filepush new.pdf 2 3) that can insert new.pdf between titlepage_2.pdf and titlepage_3.pdf under the name titlepage_3.pdf, with each following file's number increased: titlepage_3.pdf to titlepage_4.pdf, titlepage_4.pdf to titlepage_5.pdf and so on. There is also a .tex file for every pdf in that directory. Edit I want to implement the following: Titlepage$ ls titlepage_1.tex titlepage_1.pdf titlepage_2.tex titlepage_2.pdf titlepage_3.tex titlepage_3.pdf Titlepage$ vim new.tex Titlepage$ pdflatex new.tex Titlepage$ ls new.pdf new.tex titlepage_1.tex titlepage_1.pdf titlepage_2.tex titlepage_2.pdf titlepage_3.tex titlepage_3.pdf Titlepage$ push new.pdf 2 3 Titlepage$ ls titlepage_1.tex titlepage_1.pdf titlepage_2.tex titlepage_2.pdf titlepage_3.tex titlepage_3.pdf titlepage_4.tex titlepage_4.pdf Edit: proposed solution ls *.pdf | grep -Eo '[0-9]+' | sort -n This gives the indexes of the files. Starting from the tail, increment each index by 1 for each file down to the insertion point (2nd argument of push). Then rename the target file (new.pdf) to titlepage_<3rd_argument_of_push>.pdf
OK, here's a script that does something like what you want. #!/bin/bash NEWFILES=${1} INSERT_IDX=${2} PREFIX="titlepage_" # just in case prefixnum=${filebase//[^0-9]/} case $prefixnum in (*[![:blank:]]*) echo "invalid prefix, contains numbers"; exit 1;; esac # check input arguments if [ ! $# -eq 2 ]; then echo "USAGE: insert.sh <newfiles> <insertion_index>" echo 'example: insert.sh "new.pdf new.tex" 2' exit 1 fi ## process infiles for infile in ${NEWFILES} do ext=${infile##*.} for file in $(ls -1 ${PREFIX}[0-9]*.${ext} | sort -rV) do filebase=${file%.*} number=${filebase//[^0-9]/} numberless=${filebase//[0-9]/} if [ "${numberless}${number}.${ext}" != "${file}" ]; then echo "invalid file ${file}" echo "${numberless}${number}.${ext}" "${file}" exit 1 fi if [ ${number} -ge ${INSERT_IDX} ] then echo "$file" "${numberless}$((${number} + 1))".$ext mv "$file" "${numberless}$((${number} + 1))".$ext else echo ${file} fi done echo "${infile}" "${PREFIX}${INSERT_IDX}.${ext}" mv "${infile}" "${PREFIX}${INSERT_IDX}.${ext}" done
How to make a function in bash that insert a new filename between others?
1,427,278,252,000
Below is the script i drafted, that will work based on the SIDs it will get from ps -ef | grep pmon Once the SID is grepped, it will pass the SID to dbenv() to set the necessary parameters, and it also cuts the DB_VERSION from /etc/oratab entries. Based on the version, if 12 or 11 then the script should execute a block, or if the version is 10 or 9, it should execute a block. 12 or 11 has the alert logs under TRACE_FILE's value, 10 or 9 wont have any output for TRACE_FILE, so 10 and 9 should clear the alert log based on BDUMPs value. So I have drafted the below script and it works fine, I feel the script has got lot of repetition where i applied logic for DB_VERSION. Any ideas on how could this script be enhanced ############################################################################################################################################################# #!/bin/bash ############################################################################################################################################################# TODAY=`date +%Y-%m-%d` DATE=`date +%Y%b%d` YESTERDAY=`date -d '-1 day' +%b%Y` YDAY=`date -d '-1 day' +%Y%b%d` HOST=`hostname` LOG_LOCATION="/home/oracle/utility_script/dba_maint/logs" mkdir -p ${LOG_LOCATION} LOG_FILE="${LOG_LOCATION}/oracle_files_cleanup_${DATE}.log" rm ${LOG_FILE} 2>/dev/null dbenv () { ORACLE_HOME=`cat /etc/oratab | grep ^$ORACLE_SID | cut -d":" -f2`; export ORACLE_HOME PATH=$ORACLE_HOME/bin:$PATH ; export PATH LD_LIBRARY_PATH=$ORACLE_HOME/lib ; export LD_LIBRARY_PATH DB_VERSION=`cat /etc/oratab | grep "^$ORACLE_SID" | cut -d":" -f2 | rev | cut -d"/" -f2| rev | cut -d"." 
-f1`; export DB_VERSION } dbcheck() { sqlplus / as sysdba << EOF &>${LOG_LOCATION}/dbcheck.out exit EOF } sql_plus() { sqlplus -s / as sysdba << EOF &>/dev/null SET NEWPAGE NONE; set lines 200 pages 300; set feedback off; set heading off; spool ${LOG_LOCATION}/$1.log $2 exit EOF } for SID in `ps -eaf | grep pmon | grep -v grep | awk '{print $8}' | sort | cut -d"_" -f3` do ORACLE_SID=${SID} ; export ORACLE_SID dbenv ${ORACLE_SID} #-- Passing the ORACLE_SID to dbenv function to source the database. if [ ${DB_VERSION} -eq 11 -o ${DB_VERSION} -eq 12 ] then dbcheck DB_CHECK=`cat ${LOG_LOCATION}/dbcheck.out | egrep "ORA|SP2|idle"` LOWER_SID=`echo ${ORACLE_SID} | tr '[A-Z]' '[a-z]'` #-- Queries to fetch the proper log location from database ADUMP="select DISPLAY_VALUE from v\$parameter where name='audit_file_dest';" BDUMP="select DISPLAY_VALUE from v\$parameter where name='background_dump_dest';" CDUMP="select DISPLAY_VALUE from v\$parameter where name='core_dump_dest';" UDUMP="select DISPLAY_VALUE from v\$parameter where name='user_dump_dest';" TRACE_FILE="select DISPLAY_VALUE from v\$parameter where name='diagnostic_dest';" #-- Calls the sql_plus function with the parameters as the logname and SQL query sql_plus "adump_${ORACLE_SID}" "${ADUMP}" sql_plus "bdump_${ORACLE_SID}" "${BDUMP}" sql_plus "cdump_${ORACLE_SID}" "${CDUMP}" sql_plus "udump_${ORACLE_SID}" "${UDUMP}" sql_plus "trace_${ORACLE_SID}" "${TRACE_FILE}" #-- Remove any empty lines after the log location ADUMP_LOC=`cat ${LOG_LOCATION}/adump_${ORACLE_SID}.log | sed 's/[[:blank:]]*$//'` BDUMP_LOC=`cat ${LOG_LOCATION}/bdump_${ORACLE_SID}.log | sed 's/[[:blank:]]*$//'` CDUMP_LOC=`cat ${LOG_LOCATION}/cdump_${ORACLE_SID}.log | sed 's/[[:blank:]]*$//'` UDUMP_LOC=`cat ${LOG_LOCATION}/udump_${ORACLE_SID}.log | sed 's/[[:blank:]]*$//'` TRACE_LOC=`cat ${LOG_LOCATION}/trace_${ORACLE_SID}.log | sed 's/[[:blank:]]*$//'` #-- If the Database is not in idle state or without any errors, start housekeeping if [ -z "${DB_CHECK}" ] 
then echo -e "\t\t\t\t HOUSEKEEPING for database : ${ORACLE_SID}" >>${LOG_FILE} echo -e "\t\t\t\t ============ === ======== = =============" >>${LOG_FILE} #-- Cleans .aud files older than 60 days in ADUMP location if [ ! -z "${ADUMP_LOC}" ] then echo -e "\t\t\tAdump cleanup" >> ${LOG_FILE} fi #-- Cleans .trm or .trc files older than 60 days in BDUMP location if [ ! -z "${BDUMP_LOC}" ] then echo -e "\n\n\t\t\tBdump cleanup" >> ${LOG_FILE} fi #-- Cleans .trm or .trc files older than 60 days in CDUMP location if [ ! -z "${CDUMP_LOC}" ] then echo -e "\n\t\t\tCdump cleanup" >> ${LOG_FILE} fi #-- Cleans .trm or .trc files older than 60 days in UDUMP location if [ ! -z "${UDUMP_LOC}" ] then echo -e "\n\t\t\tUdump cleanup" >> ${LOG_FILE} fi #-- Rotates the Database alert log on 01st of every month. if [ `date +%d` -eq 01 ] then if [ ! -z "${TRACE_LOC}" ] then echo -e "\n\t\t\tALERT LOG ROTATION" >> ${LOG_FILE} fi fi #-- Rotates the Listener log on 01st of every month. if [ `date +%d` -eq 01 ] if [ ! 
-z "${TRACE_LOC}" ] then echo -e "\n\t\t\tLISTENER LOG ROTATION" >> ${LOG_FILE} fi fi else echo -e "ERROR : Please fix the below error in database - ${ORACLE_SID} on host - ${HOST} \n ${DB_CHECK}" >> ${LOG_LOCATION}/house_keeping_fail_${ORACLE_SID}_${DATE}.log fi elif [ ${DB_VERSION} -eq 10 -o ${DB_VERSION} -eq 9 ] then dbcheck DB_CHECK=`cat ${LOG_LOCATION}/dbcheck.out | egrep "ORA|SP2|idle"` #-- Queries to fetch the proper log location from database ADUMP="select DISPLAY_VALUE from v\$parameter where name='audit_file_dest';" BDUMP="select DISPLAY_VALUE from v\$parameter where name='background_dump_dest';" CDUMP="select DISPLAY_VALUE from v\$parameter where name='core_dump_dest';" UDUMP="select DISPLAY_VALUE from v\$parameter where name='user_dump_dest';" #-- Calls the sql_plus function with the parameters as the logname and SQL query sql_plus "adump_${ORACLE_SID}" "${ADUMP}" sql_plus "bdump_${ORACLE_SID}" "${BDUMP}" sql_plus "cdump_${ORACLE_SID}" "${CDUMP}" sql_plus "udump_${ORACLE_SID}" "${UDUMP}" #-- Remove any empty lines after the log location ADUMP_LOC=`cat ${LOG_LOCATION}/adump_${ORACLE_SID}.log | sed 's/[[:blank:]]*$//'` BDUMP_LOC=`cat ${LOG_LOCATION}/bdump_${ORACLE_SID}.log | sed 's/[[:blank:]]*$//'` CDUMP_LOC=`cat ${LOG_LOCATION}/cdump_${ORACLE_SID}.log | sed 's/[[:blank:]]*$//'` UDUMP_LOC=`cat ${LOG_LOCATION}/udump_${ORACLE_SID}.log | sed 's/[[:blank:]]*$//'` #-- If the Database is not in idle state or without any errors, start housekeeping if [ -z "${DB_CHECK}" ] then #-- Cleans .aud files older than 60 days in ADUMP location if [ ! -z "${ADUMP_LOC}" ] echo -e "\t\t\tAdump cleanup" >> ${LOG_FILE} fi #-- Cleans .trm or .trc files older than 60 days in BDUMP location if [ ! -z "${BDUMP_LOC}" ] then echo -e "\n\n\t\t\tBdump cleanup" >> ${LOG_FILE} fi #-- Cleans .trm or .trc files older than 60 days in CDUMP location if [ ! 
-z "${CDUMP_LOC}" ] then echo -e "\n\t\t\tCdump cleanup" >> ${LOG_FILE} fi #-- Cleans .trm or .trc files older than 60 days in UDUMP location if [ ! -z "${UDUMP_LOC}" ] then echo -e "\n\t\t\tUdump cleanup" >> ${LOG_FILE} fi #-- Rotates the ${DB_VERSION} version Database alert log on 01st of every month. if [ `date +%d` -eq 01 ] then if [ ! -z "${BDUMP_LOC}" ] then echo -e "\n\t\t\tALERT LOG ROTATION" >> ${LOG_FILE} fi fi else echo -e "ERROR : Please fix the below error in database - ${ORACLE_SID} on host - ${HOST} \n ${DB_CHECK}" >> ${LOG_LOCATION}/house_keeping_fail_${ORACLE_SID}_${DATE}.log fi fi done exit $? #---------------------------------------------------------------------END-----------------------------------------------------------------------------------#
This question probably belongs on https://codereview.stackexchange.com/ instead of here, but here are my recommendations: use $() rather than backticks for command substitution. you (almost) never need to pipe grep into awk. For example, instead of: ps -eaf | grep pmon | grep -v grep | awk '{print $8}' you can do: ps -eaf | awk '/pmon/ && ! /grep/ {print $8}' Similarly, piping grep into cut is usually better done with awk. e.g. instead of: cat /etc/oratab | grep ^$ORACLE_SID | cut -d":" -f2 use awk -F: "/^$ORACLE_SID/ {print \$2}" /etc/oratab (normally you wouldn't escape the $ of $2 in an awk script because it's more usual to single-quote the entire awk script. In this case, we're double-quoting the awk script so that we can use the bash variable $ORACLE_SID in awk, so we need to backslash-escape awk's $2 to prevent the shell from replacing it with its own $2) you don't need to pipe ps into grep or awk anyway. You can just do ps h -o cmd -C pmon instead. Or use pgrep. sed can read files by itself, you don't need to pipe cat into sed. So can grep and awk and perl and cut and every other standard text-processing tool. [ -n "$var" ] is the same as [ ! -z "$var" ]. -z tests for empty string, -n tests for non-empty string. there are several occasions where you haven't double-quoted your variables. you should (almost) always double-quote variables when you use them. single-quotes are for fixed, literal strings. double-quotes are for when you want to interpolate a variable or command substitution into a string. indenting with 8 characters is excessive. use 2 or 4 spaces per indent level. or set the tab stop in your editor to 2 or 4 spaces. it's a good habit to use lowercase or MixedCase for your own variables, leaving ALLCAPS variable names for standard utilities and common programs.
tools like sqlplus, mysql, psql, etc are very convenient for doing scripted database queries in sh or bash etc but you have to be extremely careful about any variables you use with SQL commands - especially if the values in the variables come from user-supplied data, or other "untrusted" sources. It is very easy to break a script if the input data is unvalidated and unsanitised. It is just as easy to create an SQL injection bug. For non-trivial SQL queries, you should probably learn perl or python or some other language with a database library that supports placeholders to avoid any issues with quoting of variables in sql commands.
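For the oratab lookup specifically, passing the shell variable in with awk -v sidesteps the double-quoting gymnastics mentioned above. A sketch against a fake oratab file (the paths and SIDs are made up for the demo):

```shell
printf 'DB1:/u01/app/oracle/product/11.2.0\nDB2:/u01/app/oracle/product/12.1.0\n' > oratab
ORACLE_SID=DB2
# -v imports the shell value; $1 == sid matches the whole first field,
# which is stricter than the /^.../ regex approach
oracle_home=$(awk -F: -v sid="$ORACLE_SID" '$1 == sid { print $2 }' oratab)
echo "$oracle_home"   # /u01/app/oracle/product/12.1.0
```

With -v there is nothing to escape, and a SID that happens to be a prefix of another SID can no longer match the wrong line.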
Execute a block based on the output of a variable [closed]
1,427,278,252,000
A function definition is a command. When a function definition is run, I thought that the function body would be kept intact, as if the function body is single quoted. I knew I was wrong, when I understood the following from the Bash manual in G-Man’s answer to my Alias and functions question: Aliases are expanded when a function definition is read, not when the function is executed, …. So alias expansion is performed in the function body, while parameter expansion is not, as shown by the following example: $ alias myalias=cat $ echo $var 3 $ myfunc() { myalias myfile; echo $var; } $ declare -fp myfunc myfunc () { cat myfile; echo $var } Similarly, a call to a function is also a command. aliases defined in a function are not available until after that function is executed. so alias definition is executed only when executing, i.e., calling the function, not when running the function definition. alias expansions and alias definition execution are just two of the shell operations listed below. My questions are: What shell operations are performed inside the function body, and what operations are not performed, when running a function definition? when calling a function, i.e., executing a function? Are there any shell operations which are performed during both running the definition of a function and calling the function? Or do running the definition of a function and calling the function perform non-overlapping sets of shell operations? The possible operations are listed in the following quote from the Bash manual: 3.1.1 Shell Operation The following is a brief description of the shell’s operation when it reads and executes a command. Basically, the shell does the following: Reads its input from a file (see Section 3.8 [Shell Scripts], page 39), from a string supplied as an argument to the -c invocation option (see Section 6.1 [Invoking Bash], page 80), or from the user’s terminal. 
Breaks the input into words and operators, obeying the quoting rules described in Section 3.1.2 [Quoting], page 6. These tokens are separated by metacharacters. Alias expansion is performed by this step (see Section 6.6 [Aliases], page 88). Parses the tokens into simple and compound commands (see Section 3.2 [Shell Commands], page 8). Performs the various shell expansions (see Section 3.5 [Shell Expansions], page 21), breaking the expanded tokens into lists of filenames (see Section 3.5.8 [Filename Expansion], page 30) and commands and arguments. Performs any necessary redirections (see Section 3.6 [Redirections], page 31) and removes the redirection operators and their operands from the argument list. Executes the command (see Section 3.7 [Executing Commands], page 35). Optionally waits for the command to complete and collects its exit status (see Section 3.7.5 [Exit Status], page 38).
<rant> “running a function definition” and “running the definition of a function” are not common idioms.  I expect that most people would refer to that simply as “defining a function”.  OK, that phrase might be interpreted to refer to conceptually specifying the characteristics and interfaces (parameters and other inputs, processing, and outputs) of a function (this is an activity performed by a person, or a group of people) choosing the implementation of a function; i.e., what commands to use in the function body.  This is an activity performed by a person, or a group of people, possibly using paper, whiteboards, and/or blackboards. typing the implementation of a function into a text editor or directly into an interactive shell (this, of course, is also an activity performed by a person, or possibly a group of people, or possibly a very intelligent and dexterous cat, dog or monkey) If you fear confusion with the above, you might be better served by phrases like “reading a function definition” and “processing the definition of a function”. You say, “A function definition is a command.”  It’s not clear what a claim like that even means — it’s a matter of semantics — but, at the semantic level, it’s debatable.  Section 3.2 of the Bash manual (Shell Commands) lists six kinds of shell commands: Simple Commands, Pipelines, Lists, Compound Commands, Coprocesses, and GNU Parallel.  Function definitions don’t fit well into any of those categories. </rant> Getting to your question, and looking at the seven steps / operations, Reading input obviously has to happen before anything else can happen.  It’s, again, a matter of semantics — you could say that reading input from a file or other stream text source is part of “reading a function definition” or that it is a prerequisite / prelude to “processing the definition of a function”. Breaking the input into words and operators is clearly part of “processing the definition of a function”. 
Breaking the input into tokens is, of course, a prerequisite / prelude to parsing the tokens (step 3, below). The manual says, “Alias expansion is performed by this step,” and you show in the question that you know that alias expansion happens when the function definition is read. Parsing the tokens.  This must at least start during the “processing the definition of a function” phase for the shell to realize that it is looking at a function definition and not a simple command.  Beyond that, we can do this simple experiment: type either of the following into a shell: myfunc1() { myfunc2() { < or ; } } It will fail with a “syntax error” before you get a chance to type the }.  (> and & produce the same effect.)  So clearly this step is part of processing the definition of the function. Performing the various shell expansions is part of calling the function. You show in the question that you know that parameter expansion does not happen when the function definition is read. It’s equally trivial to demonstrate that pathname / filename expansion does not happen when the function definition is read.  In your example, change echo $var (which should be echo "$var", but that’s another matter) to echo *. If you look at the function definition, you’ll see that it still says echo * and has not been expanded. Change directory (cd) and/or create file(s) and/or delete file(s) and/or rename file(s), and then call (execute) the function.  You’ll see that it outputs the current list of files1 in the current directory, and doesn’t reflect the contents of the directory where/when you defined the function. Performing redirections is part of calling the function.  This is also trivial to verify.  In your example, change echo $var to echo hello > myfile. If you look at the function definition, you’ll see that it still says  > myfile. Check the directory, and you’ll see that myfile has not been created (yet). Executing the command — do you really need to ask? 
Waiting for the command to complete can’t possibly happen until the command has been executed (step 6), so this obviously is performed only when calling the function. In other words, exactly what @ilkkachu said. The one exceptional special case I can think of is that steps 2 and 3 (lexical analysis and parsing) occur (again) when executing a function if the function contains eval statements — but I believe that that’s outside the scope of what your question is really asking. ____________ 1 excluding hidden (dot-) files, unless you have said shopt -s dotglob
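The call-time behaviour of step 4 can be checked directly (the variable and function names here are arbitrary):

```shell
var=3
myfunc() { echo "$var"; }   # the definition stores the text, not the value
var=42
myfunc                      # prints 42: expansion happens at call time
```

If parameter expansion happened when the definition was read, the function would keep printing 3.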
Which of the following shell operations are performed inside the function body when running a function definition and when calling a function?
1,427,278,252,000
Say I've got this: #!/bin/sh function show_help { cat<<% Usage: $0 [-h] % } # the following lines is pseudo-code if argument contains "-h" show_help otherwise do_stuff If I run ./test it does some stuff as intended, but if I run ./test -h, it produces Usage: show_help [-h] but I intended to let it produce Usage: ./test [-h] so how can I achieve this by modifying only the show_help function? I don't want to modify the script itself, so I won't just add SCRIPT_NAME=$0 under the shebang line. I hope the solution is some kind of a builtin variable like $PWD or function like pwd, does there really exist one?
I can't reproduce this. The question is tagged with '/bash'. With bash, $0 is always the name of the script, so if this is in /tmp/test #!/bin/bash function show_help { cat <<% Usage: $0 [-h] % } show_help then bash /tmp/test gives me Usage: /tmp/test [-h]. If I use ksh93 /tmp/test I do get Usage: show_help [-h], due to the ksh setting $0 when you declare a function in a non POSIX manner. Switching to a portable function declaration #!/bin/bash show_help() { cat <<% Usage: $0 [-h] % } show_help and you get Usage: /tmp/test [-h] from both ksh and bash. So there are multiple errors in the original script. The #!/bin/sh should be #!/bin/bash if the question is about bash, the function declaration is incorrect, and the syntax of the if at the end is just wrong.
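A quick way to check this yourself is to write the two-line script to a throwaway file and run it (the /tmp path is only for the demo):

```shell
script=$(mktemp /tmp/t.XXXXXX)
cat > "$script" <<'EOF'
show_help() { printf 'Usage: %s [-h]\n' "$0"; }
show_help
EOF
bash "$script"    # prints the script's path, not the function name
```

The output is "Usage: /tmp/t.XXXXXX [-h]" with the actual generated file name, confirming that bash keeps $0 as the script name inside functions.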
Get the name of the shell script from inside a function
1,427,278,252,000
Assume I archive several files with these functions: gen_password () { gpg --gen-random 1 "$1" | perl -ne' s/[\x00-\x20]/chr(ord($^N)+50)/ge; s/([\x7E-\xDB])/chr(ord($^N)-93)/ge; s/([\xDC-\xFF])/chr(ord($^N)-129)/ge; print $_, "\n"' } archive () { ARCHIVE_NAME="$1" PASSWORD=$(gen_password 32) 7za a -p"$PASSWORD" -mhe -- "$ARCHIVE_NAME" "$@" echo "Created 7z archive with password '$PASSWORD'" } This works well, and I tried to upload the encrypted archive to a file-sharing server. Here is the script that uploads the content of a file to the server (source): upload() { if [ $# -eq 0 ]; then echo "No arguments specified. Usage:\necho transfer /tmp/test.md\ncat /tmp/test.md | transfer test.md"; return 1; fi tmpfile=$( mktemp -t transferXXX ); if tty -s; then basefile=$(basename "$1" | sed -e 's/[^a-zA-Z0-9._-]/-/g'); curl --progress-bar --upload-file "$1" "https://transfer.sh/$basefile" >> $tmpfile; else curl --progress-bar --upload-file "-" "https://transfer.sh/$1" >> $tmpfile ; fi; cat $tmpfile; rm -f $tmpfile; } So I'm trying to pipe the encrypted archive in a naive way: archive 1.rar pass.tar.gz d7432.png foo.7z | upload But there is one problem: the encrypted archive is unreachable for upload and the command exits with no result. So, the question is: how should I pipe the file to have it uploaded correctly?
Since your upload() function is expecting a parameter ($1) to use as the archived filename, pass it along in your commandline: archive foo.7z 1.rar pass.tar.gz d7432.png && upload foo.7z If foo.7z is a variable parameter for archive() as well, simply pass the same variable to upload(): archive "$archivename" 1.rar pass.tar.gz d7432.png && upload "$archivename" I would recommend the && glue, as you probably don't want to try to upload the archive file if the archive() function did not succeed. Here is a sample function for .bashrc: share() { ARCHIVE_NAME="$1" archive "$ARCHIVE_NAME" "$@" && upload "$ARCHIVE_NAME" }
Pipe encrypted archive to uploader
1,427,278,252,000
In order to launch a new terminal and run a zsh function in it, I am trying to run the following command from within an urxvtc terminal (urxvtd is running as a systemd service) urxvtc -e zsh -c "my-zsh-defined-function" which doesn't work, as the function is unknown. I need to explicitly source my zshrc to make it work urxvtc -e zsh -c "source ~/.zshrc; my-zsh-defined-function" The problem is, I don't understand why. Shouldn't zsh source .zshrc here, just as it does when I run urxvtc and then type my-zsh-defined-function?
It shouldn't, since you're not running zsh interactively. Quoting man zsh (section STARTUP/SHUTDOWN FILES): [I]f the shell is interactive, commands are read from /etc/zshrc and then $ZDOTDIR/.zshrc. You could try using -i: -i Force shell to be interactive. It is still possible to specify a script to execute.
Why urxvtc doesn't accept zsh functions when called with a "-c" argument?
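The same principle can be demonstrated without urxvt or zsh at all — a function defined in a startup-style file is invisible to a fresh non-interactive shell unless that file is sourced first (hypothetical greet function; bash is used here as a stand-in):

```shell
rc=$(mktemp)
echo 'greet() { echo hello from rc; }' > "$rc"

# A non-interactive shell does not read rc files, so the function is unknown...
bash -c 'greet' 2>/dev/null || echo "greet: unknown"

# ...unless the file is sourced explicitly, as in the workaround above.
bash -c "source '$rc'; greet"    # prints "hello from rc"
```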
1,427,278,252,000
What about using this solution? A function runs in a loop. In that loop I have another function which also uses a loop. When the second function gets a NO answer from the user, it sends break 2 to stop the loop and proceed with the main script actions. The function uses variables which are set in a file. So, is it a good idea to use variables as parameters for functions?
One alternative that might be cleaner is to have answer return 0 or return 1, depending on whether the user said yes or no. Then test the value of answer in the place where you call it, and only do the action if answer returned 0. Based on your earlier script, it would look something like this (note that the functions must be defined before the loop, since the loop calls them as soon as it runs): function answer() { while true; do printf '%s' "$1" read response case $response in [yY][eE][sS]|[yY]) return 0 ;; [nN][oO]|[nN]) return 1 ;; *) printf "Please, enter Y(yes) or N(no)!\n" ;; esac done } function tomcat_running() { check_tomcat_status [ -n "$RUN" ] } function user_wants_to_stop_tomcat() { answer "WARNING: Tomcat still running. Kill it? " } while tomcat_running && user_wants_to_stop_tomcat; do echo "$tomcat_status_stopping" kill $RUN sleep 2 done
Using break command as argument to function [closed]
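A runnable miniature of the pattern in the answer above — the function's exit status drives the loop directly, with a counter standing in for the user's answer:

```shell
count=0
user_wants_more() {
  count=$((count+1))
  [ "$count" -lt 3 ]    # the function's status is the status of this last test
}

while user_wants_more; do
  echo "iteration $count"
done
# prints "iteration 1" and "iteration 2", then the loop ends
```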
1,427,278,252,000
I'm trying to write a find and cd function like so: findcd () { cd "$(dirname "$(find '$1' -type '$2' -name '$3')")" } to be called like so: findcd . f [FILE_NAME] But it's seeing the dollar sign and expecting more arguments as opposed to executing what's inside. I'm just starting with writing aliases and functions, so any advice would be super helpful!
Try this: findcd () { cd "$(dirname "$(find "$1" -type "$2" -name "$3")")" } The problem with your original attempt is that you had the variables single quoted so they were not being expanded. Also note that this will not work if you have more than one find result.
how to write function with nested commands [duplicate]
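A variant of the fixed function that just prints the directory instead of changing into it, with head -n 1 guarding against the multiple-results caveat mentioned in the answer (the name findcd_dir is hypothetical):

```shell
findcd_dir() {
  dirname "$(find "$1" -type "$2" -name "$3" | head -n 1)"
}

# Exercise it against a throwaway tree containing a space in the path
d=$(mktemp -d)
mkdir -p "$d/sub dir"
touch "$d/sub dir/target.txt"

findcd_dir "$d" f target.txt    # prints "<tmpdir>/sub dir"
```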
1,427,278,252,000
I'm trying to learn bash scripting using the freeCodeCamp tutorial for beginners on YouTube. I'm stuck at the point where he shows how to create a function. He saved a command with an option in a variable #!/bin/bash showuptime(){ up=$(uptime -p | cut -c4-) since=$(uptime -s) cat << EOF ---------- This machine has been up for ${up} It has been running since ${since} ---------- EOF } showuptime I'm not able to replicate this on my MacBook M1 with zsh in the terminal. I'm using #!/bin/zsh and it works only if I don't pass the -p and -s options. Error: uptime: illegal option -- p usage: uptime uptime: illegal option -- s usage: uptime
To clear a few things up: By the "variables of a function" one would normally mean the arguments passed to the function (none here), which are accessed via $1, $2, and so on. What you have are variables in your function, and because they are not marked local, they are not even limited to the function. The $( ) construct is called command substitution. It does not store the command in any way but instead executes what is inside it (uptime -p | cut -c4- in your first case) and assigns the output the command produced to the variable. The uptime command is executed at this point. The command output is stored, not the command. As Marcus mentioned, the error comes from the uptime command not supporting those options. Since macOS is related to BSD, the FreeBSD man page for uptime can give an orientation. On FreeBSD uptime accepts no options at all, and it is probably the same for macOS. To answer the question title: the best way to store a command with options is to use an array. my_stored_command=(ls -lt --human-readable /home) can then later be run as "${my_stored_command[@]}", which expands the array to its individual elements again, forming a command invocation.
How to store command with option on variable of a function in zsh?
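A minimal, predictable check of the array approach from the answer above, with printf standing in for ls so the output is fixed:

```shell
# Store the command name and its options/arguments as array elements
my_stored_command=(printf '%s\n' "hello world")

# "${array[@]}" expands back to the original words, spaces preserved
"${my_stored_command[@]}"    # prints "hello world"
```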
1,427,278,252,000
I am attempting to write a function which will take two commands as inputs, time the executions of both of them, then output those times to a text file. After reading this post, I got most of the way there, and can write my execution times to a file one at a time. The problem I have now is getting the function to accept two multi-word inputs. My current code looks like: timeDiff () { { time "$1" ; } 2> ~/file.txt { time "$2" ; } 2>> ~/file.txt } If I run these lines sequentially with the functions I want, everything is fine. The file is overwritten with the time info of the two functions. Here are some of the attempts and problems I have with this: timeDiff grep "str" file1 query database lookup.sql This will cause my file to have a warning on grep, some times, and a bash: str: command not found. timeDiff 'grep "str" file1' 'query database lookup.sql' This will cause my file to have bash:grep "str" file1: No such file or directory, some times, and bash:query database lookup.sql: No such file or directory. I think that this is related to how I need quotation marks for my grep, but perhaps there's a better way of writing the inputs for the functions. I'm a beginner, so I'm eager to learn! Thanks!
With: { time "$1" ; } 2> ~/file.txt With a shell such as bash that has time as a keyword, you're using the time keyword to time the evaluation of the "$1" shell code. "$1" is code that executes the command whose name is in the first positional parameter to the function: With: timeDiff grep "str" file1 query database lookup.sql You're invoking timeDiff with grep as first argument and str as second argument (and file1 as third, etc.) So time "$1" will time the execution of the command called grep without arguments. grep requires at least one argument, so it will complain. In timeDiff 'grep "str" file1' 'query database lookup.sql' This time $1 will be grep "str" file1 but it's very unlikely there's a command by that name. What it looks like you want is to time the evaluation of shell code passed in the first and second arguments like: timeDiff () { time eval " $1" time eval " $2" } 2> ~/file.txt Or possibly: timeDiff () { eval "time { $1; }" eval "time { $2; }" } 2> ~/file.txt To evaluate shell code consisting of: time { contents-of-$1; } To time the command group whose body is the contents of the first positional argument. Also beware that in: { time cmd; } 2> file The timing will end up in file, but also the errors of cmd (and the errors of the shell failing to run cmd if any). To get only the timing in file, you'd need something like: { time cmd 2>&3 3>&-; } 3>&2 2> file
Compare Execution Times of Two Functions
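A quick sanity check of the second variant from the answer, with echo standing in for the real commands and a temp file instead of ~/file.txt:

```shell
out=$(mktemp)

timeDiff() {
  eval "time { $1; }"
  eval "time { $2; }"
} 2> "$out"    # timings go to the function's stderr, hence into the file

timeDiff 'echo first' 'echo second'    # stdout: first, second
grep -c '^real' "$out"                 # two timing blocks were written
```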
1,427,278,252,000
I have the following bash function that returns 0 when the variable verbos is defined. I have read the bash manual, which says that for return [N], if N is omitted, the return status is that of the last command executed within the function. How can I use a bare return at the end (or none at all), with the status taken from the return status of [ -n "$vb" ]? tesverbos () { vb="${verbos+vbset}" if [ -n "$vb" ]; then return 0 else return 1 fi }
This should work tesverbos () { vb="${verbos+vbset}" test -n "$vb" }
Using bash return depending on return status of last command executed within a function
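To see why that works — a function without an explicit return hands back the exit status of its last command:

```shell
tesverbos() {
  vb="${verbos+vbset}"
  test -n "$vb"    # last command: its status becomes the function's status
}

unset verbos
tesverbos; echo $?    # prints 1 (verbos is unset)

verbos=anything
tesverbos; echo $?    # prints 0 (verbos is set)
```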
1,427,278,252,000
I have added a number of functions which I source from my .bashrc. For instance, I use export -f calc I also have another function, usage_calc, where I comment out the export call # export -f usage_calc But I can still call usage_calc. What is happening?
You're using bash as your shell. The function is defined in or via your .bashrc, which makes the function available to your shell. The export has no relevance here: exporting a function only matters for child bash processes (for example scripts or bash -c commands), which then inherit the function via the environment.
Calling a function from terminal without export from sources file [duplicate]
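The difference only becomes visible when a child bash process is started — a short demonstration (export -f is bash-specific):

```shell
calc_demo() { echo 42; }

calc_demo                                   # same shell: works, prints 42
bash -c 'calc_demo' 2>/dev/null \
  || echo "child shell: command not found"  # not exported yet

export -f calc_demo
bash -c 'calc_demo'                         # child shell now inherits it: 42
```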
1,427,278,252,000
I've got a function defined in my fish shell: function cl --wraps=cd cd $argv && ls -l --color=auto end According to man function, the --wraps option "causes the function to inherit completions from the given wrapped command." However, when I type cl and start to tab-complete, I'm shown options which include non-directories (like .c files). However, when I type cd and then tab-complete, I'm only shown directories. Did I define my function incorrectly?
You're hitting this issue which was fixed in fish shell version 3.3.0. Upgrade to a newer fish and it should fixed.
Function tab-completion not matching that of wrapped command
1,427,278,252,000
I have the following code in myscript.sh, the purpose being to execute a function on a file with the .md extension when that file is modified: function my_function(){ echo "I\'ve just seen $1" } typeset -fx my_function find . -name "*.md" | entr -r -s "my_function $0" entr documentation: * [...] the name of the first file to trigger an event can be read from $0.* I expect that when I change README.md the output will be: "I've just seen README.md" In reality, when I launch the script with bash myscript.sh and change README.md, the following output appears: I've just seen myscript.sh Why?
You are calling entr from a shell script. When the shell performs variable substitution (or parameter expansion in the manual), $0 will expand to the filename of the script. To protect the $0 from variable substitution by the shell, escape it with a backslash: find . -name "*.md" | entr -r -s "my_function \$0" Edit: As @roaima reminded, another way is to use single quotes, which protects the entire quoted text: find . -name "*.md" | entr -r -s 'my_function $0' NB: If the SHELL environment variable is set to something incompatible with bash, typeset -fx my_function might not work at all for entr (for example, it doesn't work with SHELL=/bin/mksh). Also, consider adding a shebang to your script, and leaving out the unnecessary word function: #!/bin/bash my_function(){ echo "I've just seen $1" } typeset -fx my_function find . -name "*.md" | entr -r -s "my_function \$0"
Use file marker with entr
1,427,278,252,000
I'm running Ubuntu on WSL2. I frequently download zipped homework files from my school's website. They go into my Downloads folder on Windows. I want them copied into a particular path on my Linux filesystem, unzipped, and then renamed to put my name in front of the filename. This is what I have so far: # params: $1: filename, $2: week_x, $3: day_y hwcopy() { cp $1 /home/myName/homework/$2/$3 rm -r $1 cd /home/myName/homework/$2/$3 unzip $1 } I have a function addname but the problem is if I call addname $1, that will just rename the zipped folder. Here's a solution I thought of: hwcopy() { cp $1.zip /home/myName/homework/$2/$3 rm -r $1.zip cd /home/myName/homework/$2/$3 unzip $1.zip addname $1 } I suppose this would work, but it's a bit annoying because after autocompleting the file name for $1, I would need to hit backspace 4 times every time I wanted to call the function to get rid of the .zip in the argument. Is there a simple way to do this? I'm new to Linux so I don't know how to do this but I was thinking maybe there's a way to save $1 as a string inside the function and then cut off the last 4 characters, and then pass that into everything else?
With bash, use parameter expansion with the % character to remove a suffix: $ file="myfile.zip" $ echo "${file%.zip}" myfile You could use a wildcard in the pattern: $ file="myfile.zip" $ echo "${file%.*}" myfile With a double %, the longest matching suffix is removed instead of the shortest: $ file="myfile.tar.gz" $ echo "${file%.*}" myfile.tar $ echo "${file%%.*}" myfile
access name of unzipped file inside a function
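Wrapped into a hypothetical helper that hwcopy could use, so the tab-completed name.zip can be passed unchanged:

```shell
strip_ext() {
  printf '%s\n' "${1%.*}"    # shortest ".*" suffix removed
}

strip_ext homework.zip       # prints "homework"
strip_ext archive.tar.gz     # prints "archive.tar"

f=archive.tar.gz
printf '%s\n' "${f%%.*}"     # longest match removed: prints "archive"
```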
1,427,278,252,000
So I have a function that I wish to run from my command line. cat foo.sh #!/bin/bash echo foobar I export it to my PATH variable and change to a different directory. export PATH=${PATH}:/home/usr/scripts/foo.sh mkdir test && cd test sh foo.sh sh: foo.sh: No such file or directory If i run it like so foo.sh I get bash: foo.sh: command not found. I could run it with the absolute path, but I didn't think I would need to if I added it to my $PATH. What mistake am I making here? Thanks.
Several issues here. The $PATH consists of colon-separated directories, not files. You shouldn't declare a script as a bash script and then use sh to run it. Generally speaking, a file that you're going to call as if it were a standard utility wouldn't have an extension. (Extensions are optional in many situations anyway.) # Create a directory mkdir -p "$HOME/bin" # Create a script in that directory (don't call it "test" :-O) cat <<'EOF' >"$HOME/bin/myscript" #!/bin/bash echo This is myscript EOF # Make it executable chmod a+x "$HOME/bin/myscript" # Add the directory to the PATH export PATH="$PATH:$HOME/bin" # Now run the script as if it's a normal command myscript The caution against using a script called test is that /bin/test already exists as a valid command. Furthermore, in many shells test is a built-in that will override your own script regardless of the directory order in $PATH.
Unable to run file from command line after adding to PATH
1,427,278,252,000
I am looking for a way to simplify the different php versions and also paths of Composer. I found an approach in this answer that looks very appropriate for me. I have tried to implement the following for understanding function composer () { if [ "$PWD" == "/home/vhosts/domainName/httpdocs" ]; then /usr/bin/php7.3 lib/composer elif [ "$PWD" == "/home/vhosts/domainName2/httpdocs" ]; then /usr/bin/php5.6 composer.phar else composer fi; } This works fine, but I am now looking for a way to transfer the stdin, so that a "composer install" is possible. I hope the question is understandable 😅
Pass the arguments of the function on to the program you call from the function: composer () { if [ "$PWD" = "/home/vhosts/domainName/httpdocs" ]; then /usr/bin/php7.3 lib/composer "$@" elif [ "$PWD" = "/home/vhosts/domainName2/httpdocs" ]; then /usr/bin/php5.6 composer.phar "$@" else command composer "$@" fi } The "$@" will expand to the arguments of the function, individually quoted (the double quotes are essential). Also note that when calling just composer, you will have to use command composer as you would otherwise call the function recursively. I have also fixed some minor syntax things to make the function portable. With this function, doing composer install in /home/vhosts/domainName/httpdocs would result in /usr/bin/php7.3 lib/composer install Another variant of the function: composer () { case $PWD in "/home/vhosts/domainName/httpdocs") /usr/bin/php7.3 lib/composer "$@" ;; "/home/vhosts/domainName2/httpdocs") /usr/bin/php5.6 composer.phar "$@" ;; *) command composer "$@" esac }
Path dependent commands in .bashrc
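Why the quoted "$@" matters — it forwards every argument intact, including ones containing spaces (hypothetical helpers for illustration):

```shell
show_args() { printf '[%s]' "$@"; echo; }

forward() {
  show_args "$@"    # each original argument is passed through as one word
}

forward install --no-dev "two words"    # prints "[install][--no-dev][two words]"
```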
1,427,278,252,000
It is known that parsing the output of ls is generally a bad idea and one solution is to use globbing instead of ls to 'safely' loop through files in a directory. for path in /path/to/search/*; do ... # Do more filtering ... echo "$path" done This function will further filter some of the results that the glob matches and output the remaining paths. However, if I want to reuse this logic elsewhere, can I use its output via a function safely and loop over that? function myglob() { for path in /path/to/search/*; do ... # Do more filtering ... echo "$path" done } function myExample() { results=$(myglob) for i in "$results"; do echo "$i" done } Or must I always duplicate the glob logic and make minor changes to the logic inside?
No, you can't reuse the loop's output as you have shown, as that replicates the issue with ls exactly, as well as adds issues with echo possibly interpreting backslashes in filenames. Instead, if you're using a shell language that has arrays and name references (like in bash 4.3+), you can do it slightly differently: myglob () { declare -n list="$1" list=( /path/to/search/* ) } myexample () { local results=() myglob results for pathname in "${results[@]}"; do printf '%s\n' "$pathname" done # or shorter, just # printf '%s\n' "${results[@]}" } Here, the myglob function takes the name of a variable in list, which is a name reference variable. This means that any use of list will in fact use the named variable. The myexample function then calls myglob with the string results. The list variable in myglob will therefore reference the results variable, and store the expanded pattern in it. The function then goes on to use results as an array of items. If myglob needs to do filtering: myglob () { declare -n list="$1" list=() for pathname in /path/to/search/*; do # decide whether to use "$pathname" or not # then, if it is to be used, list+=( "$pathname" ) done } That is, loop over the expanded pattern and append items to the list if they are to be returned to the caller.
Using paths from a function that uses globbing
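A self-contained check of the name-reference mechanism (bash ≥ 4.3), with placeholder strings instead of real glob results:

```shell
fill() {
  declare -n out="$1"             # 'out' now aliases the caller's variable
  out=(alpha "beta gamma" delta)  # stand-in for the filtered pathnames
}

main() {
  local results=()
  fill results
  echo "${#results[@]}"    # prints 3
  echo "${results[1]}"     # prints "beta gamma" -- spaces survive
}
main
```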
1,427,278,252,000
I have written a below block of code #!/bin/bash TABLE_NAME="${1}" COL_NAME="${2}" FIELD_VALUES_1SQ_FUNC() { FIELD_VALUES_1SQS=`sqlplus -s sab/admin@TERM << EOF SET FEEDBACK OFF; SET HEADING OFF; Select TESTING.FIELD_VALUES_TEMP_1SQ.NEXTVAL from dual; exit; EOF` FIELD_VALUES_1SQ=`echo ${FIELD_VALUES_1SQS} | tr -d ' '` } RT_SEQ_CHECK_FUNC() { RT_SEQ_CHECKS=`sqlplus -s sab/admin@TERM << EOF SET FEEDBACK OFF; SET HEADING OFF; Select * from TESTING.FIELD_VALUES where FIELD_ROW_ID='${1}' and TF_ID='${2}'; exit; EOF` RT_SEQ_CHECK=`echo ${RT_SEQ_CHECKS} | tr -d ' '` } RT_FIELD_IDS_FUNC() { RT_FIELD_IDS=`sqlplus -s sab/admin@TERM << EOF SET HEADING OFF; SET FEEDBACK OFF; select max(TF_ID) from TESTING.TABLE_FIELD where field_id in(select field_id from TESTING.FIELD_DOMAIN where name='${2}') and table_id in (select table_id from TESTING.TABLE where name='${1}'); EXIT; EOF` RT_FIELD_ID=`echo ${RT_FIELD_IDS} | tr -d ' '` } FIELD_VALUES_1SQ_FUNC RT_FIELD_IDS_FUNC ${TABLE_NAME} ${COL_NAME} RT_SEQ_CHECK_FUNC ${FIELD_VALUES_1SQ} ${RT_FIELD_ID} if [ -z "${RT_SEQ_CHECK}" ] then echo "Sequence values doesn't exist |--${RT_SEQ_CHECK}--|" else echo "SEQUNCE VAlue exists |--${RT_SEQ_CHECK}--|" fi echo "TF_ID=${FIELD_VALUES_1SQ}" echo "FIELD_ROW_ID=${RT_FIELD_ID}" exit $? In my script, at first I am calling the function FIELD_VALUES_1SQ_FUNC to generate a sequence number. Second, I am calling RT_FIELD_IDS_FUNC ${TABLE_NAME} ${COL_NAME} where it will get some value. Third, the function RT_SEQ_CHECK_FUNC ${FIELD_VALUES_1SQ} ${RT_FIELD_ID} is called, where it checks if the value is there in database. If the value is there, then I should call the FIELD_VALUES_1SQ_FUNC() again to generate a new sequence value and check it with RT_SEQ_CHECK_FUNC ${FIELD_VALUES_1SQ} ${RT_FIELD_ID} function unless the value is not found for that select in FIELD_VALUES_1SQ_FUNC() function. Any ideas on how this can be achieved!
What you're looking for is called a while loop. Consider this simple example: n=0 while [ $n -lt 5 ]; do echo Not done yet n=$(($n+1)) done A while loop does two things, and by implication the programmer must do a third thing. The while loop tests the condition: is n less than 5? If the condition is true, then: the body of the while loop is executed once the while loop goes back to step 1 and tests the condition again If the condition is not true, the loop terminates and script execution continues with the statement that follows the done keyword of the loop. The third thing, the one that is the programmer's responsibility, is to do something inside the body of the loop that will (or might) change the status of the conditional expression. In the simple example above, that step is the n=$(($n+1)) statement. Without this, the loop will become infinite because the condition is initially true and never changes. Try running the script with that line commented out and see what happens. Then press CtrlC. To tailor this example to your specific problem, I think you'll want to negate the test [ -z "${RT_SEQ_CHECK}" ] for your while condition. By that, I mean that when [ -z "${RT_SEQ_CHECK}" ] is true, that means ${RT_SEQ_CHECK} is zero-length, and that's when you want to stop looping. Fortunately test has the -n option which is the exact opposite of the -z option. So in very broad terms, your while loop will look loosely like this: FIELD_VALUES_1SQ_FUNC RT_FIELD_IDS_FUNC ${TABLE_NAME} ${COL_NAME} RT_SEQ_CHECK_FUNC ${FIELD_VALUES_1SQ} ${RT_FIELD_ID} while [ -n "${RT_SEQ_CHECK}" ]; do FIELD_VALUES_1SQ_FUNC RT_FIELD_IDS_FUNC ${TABLE_NAME} ${COL_NAME} RT_SEQ_CHECK_FUNC ${FIELD_VALUES_1SQ} ${RT_FIELD_ID} done Finally, I have what I hope is a constructive comment on the structure of your code. You tend to use global variables to return a value from a function, and then refer to those global variables in the main body of your code. This can make the code difficult to read and follow. Rather than coding in this style: STEP1() { DATE=$(date) } STEP2() { echo "today is $DATE" } STEP1 STEP2 Try this: STEP1() { date } STEP2() { echo "today is $1" } DATE="$(STEP1)" STEP2 "$DATE" Again, applying that to your code might result in something sort of like this: FIELD_VALUES_1SQ_FUNC() { sqlplus -s sab/admin@TERM << EOF | tr -d ' ' SET FEEDBACK OFF; SET HEADING OFF; Select TESTING.FIELD_VALUES_TEMP_1SQ.NEXTVAL from dual; exit; EOF } RT_SEQ_CHECK_FUNC() { sqlplus -s sab/admin@TERM << EOF | tr -d ' ' SET FEEDBACK OFF; SET HEADING OFF; Select * from TESTING.FIELD_VALUES where FIELD_ROW_ID='${1}' and TF_ID='${2}'; exit; EOF } RT_FIELD_IDS_FUNC() { sqlplus -s sab/admin@TERM << EOF | tr -d ' ' SET HEADING OFF; SET FEEDBACK OFF; select max(TF_ID) from TESTING.TABLE_FIELD where field_id in (select field_id from TESTING.FIELD_DOMAIN where name='${2}') and table_id in (select table_id from TESTING.TABLE where name='${1}'); EXIT; EOF } FIELD_VALUES_1SQ="$(FIELD_VALUES_1SQ_FUNC)" RT_FIELD_ID="$(RT_FIELD_IDS_FUNC ${TABLE_NAME} ${COL_NAME})" RT_SEQ_CHECK="$(RT_SEQ_CHECK_FUNC ${FIELD_VALUES_1SQ} ${RT_FIELD_ID})"
Iterate the if else statement until a condition is success
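The regenerate-and-recheck pattern from the answer, reduced to something runnable — next_value stands in for the sequence generator and check_taken for the database lookup, which comes back empty on the third attempt:

```shell
attempts=0
next_value() { attempts=$((attempts+1)); }
check_taken() {
  if [ "$attempts" -lt 3 ]; then RT_SEQ_CHECK=taken; else RT_SEQ_CHECK=; fi
}

next_value
check_taken
while [ -n "$RT_SEQ_CHECK" ]; do    # keep going while a value already exists
  next_value
  check_taken
done
echo "$attempts"    # prints 3
```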
1,427,278,252,000
I’m trying to create an ls-like function. I started with this alias, which works fine: alias l="/usr/bin/ls -lF --color=always | tr -s ' ' | cut -d ' ' -f 9-" However, converting it to a function results in no colors: l() { local _c= [ -t 1 ] && _c=--color=always /usr/bin/ls -lF $_c "$@" | tr -s ' ' | cut -d ' ' -f 9- } Even removing all differences between the alias and the function, it remains colorless: l() { /usr/bin/ls -lF --color=always | tr -s ' ' | cut -d ' ' -f 9- } The only colored variant is one without a pipe: l() { /usr/bin/ls -lF --color=always } What prevents the color from passing through the pipes in functions?
Are you sure you haven't got an l alias that is either interfering with the function definition or being used instead of the function? Aliases take precedence over functions, and they are even expanded when a function definition is parsed.
escape sequence behaving differently in function
1,427,278,252,000
Sometimes when I'm copying and pasting a command from a site, I accidently copy the leading "$" or "#" by accident. Is there a Fish Function I could make that would check if one of those is included in a command and automatically remove it before running it? For example, if I copy and paste $ sudo apt install foo bar poo, I will get the error: Commands may not contain variables. In fish, please use 'eval $'.
Sure: function '$'; eval $argv; end Then myprompt$ $ echo hello world hello world
Is there a Fish Function I can make to eliminate leading "$"/"#" from commands copied from sites?
1,427,278,252,000
I am in the process of writing bioinformatics pipelines. These pipelines take in input files and pass them through multiple packages. Say there is a list of files that goes file1, file2, file3... file n, and for each I want to apply a pipeline that goes function1 file1 | function2 | function3 > file1.output, but I want to do it for the whole list of files using a for loop, and allocate the output file names accordingly. What commands and syntax should I use?
According to the information in your question, this should do what you requested: for i in file1 file2 file3 file4 fileN do function1 "$i" | function2 | function3 > "$i".output done
Creating a function that reads off an input list of files
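A runnable miniature of that loop, with tr stages standing in for the real bioinformatics tools and throwaway input files:

```shell
cd "$(mktemp -d)"

step1() { tr 'a-z' 'A-Z' < "$1"; }   # reads the file named by its argument
step2() { tr '-' '_'; }              # filters stdin, like the later stages

printf 'data-%s' one > one.txt
printf 'data-%s' two > two.txt

for i in *.txt; do
  step1 "$i" | step2 > "$i".output
done

cat one.txt.output    # prints "DATA_ONE"
```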
1,427,278,252,000
I’m working on a function; it works, but it is ugly. One thing that could be changed is being able to know the name of the screen. Using screen -dms minecraft java ….jar now starts a screen session named with what appears to be random numbers..hostname. Next is the voodoo that happens to strip the name from screen -ls and use it. Then there is awk. There has to be a better way. say_this() { REEN="$(ssh -p 8989 192.168.1.101 screen -ls)" echo $REEN > log/log.txt AWK="$(awk 'FNR == 1 { print $6 }' log/log.txt)" NAME="$(echo $AWK)" echo $1 ssh -p 8989 192.168.1.101 screen -S $NAME -p 0 -X stuff \"$1^M\" } say_this "say test" say_this "say !@#$%^&*()<>?This string should work!"
You're using a lot of variables and a log file unnecessarily. I'm not sure about the stuff after stuff, but I bet it can be simpler: say_this() { local name="$(ssh -p 8989 192.168.1.101 screen -ls | awk 'NR==2 {print $1}')" echo "$1" ssh -p 8989 192.168.1.101 screen -S "$name" -p 0 -X stuff "$1" }
Ugly bash function to send commands and "say" anything in screen over ssh. Is there a better way?
1,384,859,076,000
I tried to have a switch if either an option is set or not while getopts "s:u:d:e:ch" _OPTION; do case $_OPTION in ... c) isCSet="Y" then I'm calling my function : myFunction $isCSet then in my function I'm doing : echo $1 but I don't have anything in. How can I solve this problem?
You might have forgotten to initialize isCSet, e.g.: isCSet=N while getopts s:u:d:e:ch _OPTION; do case $_OPTION in ... c) isCSet=Y;; ...
Using variable in KSH function
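A trimmed-down, runnable version of the pattern (shown in bash-compatible syntax; only -c and -s are handled, and OPTIND is reset so the function can be called repeatedly):

```shell
parse() {
  OPTIND=1       # getopts keeps state between calls unless this is reset
  isCSet=N
  while getopts s:c _OPTION; do
    case $_OPTION in
      s) sval=$OPTARG ;;
      c) isCSet=Y ;;
    esac
  done
  echo "$isCSet"
}

parse -c        # prints Y
parse -s foo    # prints N
```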
1,384,859,076,000
I tried to write a small ksh script: fDestExists (){ cd /tmp read vANSWER?" >> Do you want to create a repository in pwd ? Type YES or NO" echo " |----> $(fGetDatum) You typed: " $vANSWER if [ "$vANSWER" = "YES" ]; then read vANSWER2?" >> Type your repository's name." mkdir -p $vANSWER2 cd $vANSWER2 echo " |----> Logs will be copied in pwd." elif [ "$vANSWER" = "NO" ]; then echo " |----> Logs will be copied in pwd." else echo " |----> You typed a wrong answer; exiting." exit 1 fi pwd #return } Several questions here. How can I use pwd's value in my echo? To return a value, I read it is feasible to use echo [yourValue] on the last line. Then, where you call the function, I guess I can use $?. So how can I do the same behaviour with pwd?
1: you can directly use the PWD variable, e.g.: echo " |----> Logs will be copied in $PWD." 2: $? is used to retrieve the last command's return value, which is numerical. There is no way to pass a string here; the return value should be 0, meaning success, or something different, meaning some failure. Use return 0 or a non-zero value such as return 1 if you want to convey that information. As you are modifying the script's current directory, it will be available as $PWD on the caller's side anyway.
Function, return value using pwd in KSH
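To hand a string such as the working directory back to the caller, print it and capture it with command substitution — and because $( ) runs in a subshell, the function's cd does not disturb the caller (a sketch using /tmp):

```shell
where_logs() {
  cd /tmp || return 1
  pwd                     # "returned" by being printed
}

before=$PWD
dir=$(where_logs) && echo "Logs will be copied in $dir"
[ "$PWD" = "$before" ] && echo "caller's directory unchanged"
```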
1,384,859,076,000
The following (nested) function/s function hpf_matrix { # Positional Parameters Matrix_Dimension="${1}" Center_Cell_Value="${2}" # Define the cell value(s) function hpf_cell_value { if (( "${Row}" == "${Col}" )) && (( "${Col}" == `echo "( ${Matrix_Dimension} + 1 ) / 2" | bc` )) then echo "${Center_Cell_Value} " else echo "-1 " fi } # Construct the Row for Cols 1 to "Matrix_Dimension" function hpf_row { for Col in `seq ${Matrix_Dimension}` do echo -n "$(hpf_cell_value)" done } # Construct the Matrix echo "MATRIX ${Matrix_Dimension}" for Row in `seq ${Matrix_Dimension}` do echo "$(hpf_row)" done echo "DIVISOR 1" echo "TYPE P" } work/s fine, both as a standalone code and inside a script. I.e. hpf_matrix 5 18 will return MATRIX 5 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 18 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 DIVISOR 1 TYPE P and it will even work (with various values) as requested in: Kernel_Size=5 Center_Cell_Default=18 ; Center_Level=Default eval Center_Cell="Center_Cell_${Center_Level}" HPF_MATRIX_ASCII=`hpf_matrix ${Kernel_Size} ${!Center_Cell}` echo "${HPF_MATRIX_ASCII}" However, integrating without any changes the above pieces of code (the hpf_matrix function and feeding the "${HPF_MATRIX_ASCII}") inside a larger bash script, errors-out with the following message: ((: 1 2 3 4 5 == 1 2 3 4 5 : syntax error in expression (error token is "2 3 4 5 == 1 2 3 4 5 ") Minor Update If I understand correctly, for whatsoever is the reason behind, the line for Row in seq ${Matrix_Dimension} as well as the line for Col in seq ${Matrix_Dimension} are printed as "1 2 3 4 5" instead of "1" "2" "3" "4" "5". What is wrong in this case? I would like to keep a nested structure for the function unless it is clearly wrong.
In the large script, in which the above function was integrated to work as part of it, and prior of defining the hpf_matrix function, the IFS has been changed to IFS=, without taking care to reset it back before using the unquoted command substitution in the function! An explanation on Using unquoted command substitution ($(...)) without setting $IFS here: https://unix.stackexchange.com/a/88259/13011. A solution to this also here: https://unix.stackexchange.com/a/92188/13011.
Why does a working standalone nested function/script not work inside a larger script? [duplicate]
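The failure mode is easy to reproduce: once IFS is set to a comma, unquoted expansions no longer split on whitespace (hypothetical words helper that counts the fields an unquoted expansion produces):

```shell
words() { set -- $1; echo $#; }   # deliberately unquoted, as in the question

IFS=,
words "a b c"     # prints 1 -- no splitting on spaces with IFS=','

unset IFS         # restore default whitespace splitting
words "a b c"     # prints 3
```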
1,384,859,076,000
I have a bash script set up to monitor a number of UDP streams and convert it into actionable data. My problem is that I need to set the script to periodically check to see if the stream capture is running and restart it if it isn't. The challenge is to create a new process name or ID for each stream capture and check to see if it's running. Here's a watered down version of what I've got. I'm hoping someone can tell me if I'm on the right track or not: Subscriber () { processName="$1$2$4"; echo "$processName"; pgrep $processName; if [[ $? -ne 0 ]] ; then echo "Subcription for $1 with IP $2 not found, restarting." ; while read -re -t 43200 doc; do <Code to analyze stream> done < <(bash -c "exec -a $processName <Commands to capture stream as JSON doc>") else echo "Subcription for $1 wtih IP $2 found to be running, skipping." ; fi } while read line; do Subscriber $line; done < $flatFile Ideally, I'd like to get a process ID or name for the entire string of commands listed after exec -a, but it currently only taking the first command, which seems to be sort of working, but I'm not confident it will do what I'm wanting it to do. The flatFile reference is a dynamically updated flatfile listing several hundred streams I'm monitoring.
The name of a process on Linux at least is changed every time the process executes a command, but is changed to the basename of the file that is executed, not to the argv[0] that bash allows passing with exec -a. pgrep can match on the arg list (joined with spaces) with the -f option though, but note that like for grep, pgrep does regular expression matching, so you'd need to construct a regexp to match the argv[0] you want. Or you could make sure the name you pass to exec -a doesn't contain regexp characters (including . common in IPv4 addresses). Subscriber () { process_arg0="$1$2$4" # change all regex operators and space to _ process_arg0="${process_arg0//[][ .\\+?\$^()*{\}]/_}" printf>&2 '%s\n' "$process_arg0" if ! LC_ALL=C pgrep -f "^$process_arg0( |\$)"; then printf>&2 '%s\n' "Subscription for $1 with IP $2 not found, restarting." while IFS= read -r -t 43200 doc; do <Code to analyze stream> done < <( exec -a "$process_arg0" <Commands to capture stream as JSON doc> ) else printf>&2 '%s\n' "Subscription for $1 with IP $2 found to be running, skipping." fi } (also removing some other obvious errors like unquoted quotes, usage of echo, nonsensical -e option for read, missing IFS= for read, variable data embedded in interpreter code). In any case, matching processes by name or arg list is very brittle and dangerous as any process can assign themselves any name or arg list they want, and the approach used here introduces race conditions. Using proper process overseers / supervisors such as your init system (systemd, upstart and co) or dedicated tools such as runit, daemontools, supervisor, start-stop-daemon would likely be more appropriate.
Running a function as process with a set process name or id
1,384,859,076,000
I want to be able to call the following function as nico-usage with a numeric value to print a different string. Can this be cleaned up or made easier?

    nico-usage () {
        local docstrg_lang=" {-V, --version}, {-u, --usage}, {-h, --help} -s SCAL, --scale SCAL"
        local docstrg_usage=" nicolaus -s 0.5 -aq 3"
        usg=$1
        if (( usg == 1 )); then
            echo "$docstrg_lang"
        elif (( usg == 2 )); then
            echo "$docstrg_usage"
        else
            echo "$docstrg_lang"
        fi
    }
If the question is how to make it pretty, here is a variant:

    nico-usage () {
        if (( $1 == 2 )) ; then
            echo -e "\nnicolaus -s 0.5 -aq 3"
        else
            echo -e "\n{-V, --version}, {-u, --usage}, {-h, --help} -s SCAL, --scale SCAL"
        fi
    }
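A case statement also reads naturally for this kind of dispatch; this variant is my own sketch, not from the answer, and is renamed with an underscore for portability:

```shell
# Sketch of the same dispatch with a case statement (not from the answer).
nico_usage() {
  case ${1-} in
    2) printf '\n%s\n' 'nicolaus -s 0.5 -aq 3' ;;   # usage example
    *) printf '\n%s\n' '{-V, --version}, {-u, --usage}, {-h, --help} -s SCAL, --scale SCAL' ;;
  esac
}

nico_usage 2
```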
function that allows different outputs dependent on argument values
1,384,859,076,000
Let's say I have two files main.sh and sub.sh in the same folder with the following contents:

main.sh:

    #!/usr/bin/env bash
    export PARAMETER="main"

    my_func(){
        echo "$PARAMETER $1"
    }
    export -f my_func

    # Run the other script
    ./sub.sh

sub.sh:

    #!/usr/bin/env bash
    PARAMETER="sub"
    my_func $PARAMETER

If we run main.sh, it will output sub sub, but I want it to output main sub. Is there a way for the function defined in main.sh to always use the value which was declared in main.sh, without restricting the usage of that parameter name in scripts started by main.sh? I'm looking for solutions where I don't need to modify sub.sh, as in my real life problem there are plenty of sub.sh-like scripts.

Edit: Although I already got my question answered, here is a bit of background: main.sh is an entrypoint script for a docker image. This entrypoint is basically a big case statement, and chooses which sub.sh scripts to run based on $1. I was thinking about creating a utility function in main.sh to avoid retyping a repetitive routine in sub.sh scripts. I'm not the only one who will add sub.sh-like scripts to the image, and that's why I wanted to avoid touching those scripts - I wanted to give other developers a tool, without them needing to be aware of it. My first thought was copying the value of PARAMETER to a parameter with a unique name, like my_func_PARAMETER. This would most likely ensure that the function always works as expected. That's when I wondered whether it was possible to export functions with parameters already expanded.
You can create functions with parameters already expanded. And as you can overwrite them later (globally or in a subshell) you can kind of "export them with parameters already expanded".

    #!/usr/bin/env bash
    export PARAMETER="main"

    eval "my_func(){
        echo \"$PARAMETER \$1\"
    }"
    export -f my_func

    # Run the other script
    ./sub.sh

or

    #!/usr/bin/env bash
    export PARAMETER="main"

    my_func(){
        echo "$PARAMETER $1"
    }
    export -f my_func

    # Run the other script
    (
        eval "my_func(){
            echo \"$PARAMETER \$1\"
        }"
        ./sub.sh
    )

Obviously that doesn't make much sense. But neither does overwriting a variable right before you want to use its old content:

    #!/usr/bin/env bash
    LOCAL_PARAMETER="sub"
    my_func "$LOCAL_PARAMETER"
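The effect of baking a value into the function body with eval can be seen in isolation; this toy greet function is made up:

```shell
# Made-up demonstration of defining a function with a value already expanded.
greeting="hello"
# The double quotes let $greeting expand now; \$1 stays a positional parameter.
eval "greet() { echo \"$greeting \$1\"; }"
greeting="changed"       # later changes no longer affect the function body
greet world              # -> hello world
```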
Is there a way to export functions with parameters already expanded?
1,384,859,076,000
I have a function plist that is able to call the head and tail commands. But for processing regions I call a different function pregion.

    # --- plist ---
    ("-H"|"--head") local -r hn="$2" ; shift 2 ;;
    ("-T"|"--tail") local -r tm="$2" ; shift 2 ;;
    ("--FS") local fs="$2" ; shift 2 ;;              # field separator
    ("--incl") local incl+=("$2") ; shift 2 ;;       # file type suffix
    ("--excl") local excl+=("$2") ; shift 2 ;;       # file type suffix
    ("--RP") local pn=$2 ; shift 2 ;;
    ("--RQ") local qn=$2 ; shift 2 ;;
    ("--dyn"|"--dynamic") local dyn="1" ; shift 1 ;;
    ("-C"|"--context") local ctx=$2 ; shift 2 ;;
    ("-d"|"--directory") local fdir=$2 ; shift 2 ;;
    (--) shift; break ;;
    ...
    if [[ -v hn ]]; then
        head -v -n "$hn"
    elif [[ -v tm ]]; then
        tail -v -n "$tm"
    elif [[ -v dyn ]]; then
        pregion "$@"   # requires original options here
    fi

With head and tail, I only use options -H, -T, --FS, and --incl. Because I am using shift when processing options, I need to have a copy of the original plist input arguments, because I cannot simply pass "$@" to pregion.

This will call head or tail:

    plist -H 8 ./01cuneus
    plist -T 13 ./01cuneus

Examples of calling pregion:

    plist --dyn -C 8 "Martin" ./01cuneus
    plist --incl .texi --incl .org --RP 8 --RQ 13 ./01cuneus
    plist --incl .texi --incl .org --dyn -C 8 "Martin" ./01cuneus
Copy the original args to an array, and use that with pregion, e.g.:

    plist() {
        local -a origargs=("$@")
        ...
        case ...
        esac
        ...
        if ...
        elif [[ -v dyn ]]; then
            pregion "${origargs[@]}"
        fi
    }
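Saving "$@" into an array before any shift keeps every argument intact, including ones containing spaces; a minimal standalone check (the function name and arguments are made up):

```shell
# Made-up demo: an array copy of "$@" survives later shifts, spaces and all.
show_orig() {
  local -a origargs=("$@")
  shift 2                          # consume two options, as plist would
  printf '%s\n' "${origargs[@]}"   # still prints every original argument
}

show_orig --dyn -C '8 lines' Martin
```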
Bash function calling another function that requires passing user-defined options
1,384,859,076,000
So I am trying to sort files with specific extensions into specific folders (ones that have been chosen by the user via command line arguments). Let's say $1 (.jpg), $2 (.docx), etc. The script is all working and fine, but I am trying to write a loop that sorts these files into their folders (simply based on their extensions, so .jpg into the jpg folder; basically mv .$1 $1). How can I write the loop so it will always add +1 to the command line argument index until there are no more command line arguments (let's say there are 5), and, when there is no more argument, simply move the unassigned files into a folder selected for them? Here is what I was trying to do:

    function sorting {
        count = 1
        while [ $count -le 5 ]; do
            mv .$count $count
            count=$((count +1))
        done
    }

Then when I tried to call the function in the script I used:

    sorting
You don't need an index to the arguments... Just do

    for file in "$@"
    do
        # apply command to "$file"
    done
    # work here on remaining files in directory

The $file variable will take the successive values of your input arguments. Note the "$@" syntax; this ensures that if arguments (file names) contain spaces they will still be processed as one unit.
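Putting that loop to work on the original problem, a hedged sketch (the function name, folder layout, and "misc" fallback are invented):

```shell
# Invented sketch: move files into per-extension folders named after each
# command-line argument ("jpg", "docx", ...); leftovers go to "misc".
sort_by_ext() {
  local ext f
  for ext in "$@"; do
    mkdir -p "$ext"
    for f in *."$ext"; do
      [ -e "$f" ] && mv -- "$f" "$ext"/   # the glob may match nothing
    done
  done
  mkdir -p misc
  for f in *; do
    [ -f "$f" ] && mv -- "$f" misc/       # whatever regular files are left
  done
}
```

Called as `sort_by_ext jpg docx` in the directory to be tidied.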
Using loop in script for command line arguments
1,384,859,076,000
I have the following function in a bash script:

testcur.sh:

    #!/bin/bash
    function valcurl {
        if [[ $1 != "" ]]
        then
            tbl=$2   # can be multiple values
            data=/home/data
            btic=$data/$tbl"_btic"
            kline=$data/$tbl"_kline"
            if [[ "$1" == "btic" ]]
            then
                errbtic=$data/$tbl"_btic_err"
            elif [[ "$1" == "kline" ]]
            then
                errkline=$data/$tbl"_kline_err"
            fi
            # how do I replace the parameter $1 to call the variable?
            cat $1 | jq . 2> $"err"$1
            if [[ -z $"err"$1 ]]
            then
                echo "correct"
            else
                echo "contain error"
            fi
        else
            echo "Not var found, only btic or kline"
        fi
    }
    valcurl $1 $2

Is this possible or is there another way?
It seems that you want the error output to go to a specific file depending on one of the arguments to your function. For this, you don't need two separate variables:

    if [[ "$1" == "btic" ]]; then
        err="$data/${tbl}_btic_err"
    elif [[ "$1" == "kline" ]]; then
        err="$data/${tbl}_kline_err"
    fi

or just

    err="$data/${tbl}_${1}_err"

Then call jq:

    jq . <"$1" 2>"$err"

I noticed several issues with your code and I have tried to sort them out below:

    #!/bin/bash

    valcurl () {
        if [ "$1" != 'btic' ] && [ "$1" != 'kline' ]; then
            echo 'first argument has to be btic or kline' >&2
            return 1
        fi

        if [ -z "$2" ]; then
            echo 'second argument must not be empty' >&2
            return 1
        fi

        datadir=/home/data
        datafile="$datadir/${2}_$1"
        err="${datafile}_err"

        if jq . <"$datafile" 2>"$err"; then
            echo 'correct'
        else
            echo 'contains error'
        fi
    }

    valcurl "$1" "$2"

I have assumed that it is the data file constructed by using the first and second argument that you want to send through jq, and that you want to check jq's exit status for failures. jq would fail if $datafile is not a valid JSON file. I've also shortened the code a bit by not creating separate variables for each variation of $1, and then added some validation of the arguments.
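The single-variable path construction is easy to check in isolation; the values here are invented:

```shell
# Invented values; shows the one-variable form of the error-file path.
datadir=/home/data
tbl=table1
kind=btic                       # would be "$1" inside the function
err="$datadir/${tbl}_${kind}_err"
printf '%s\n' "$err"            # -> /home/data/table1_btic_err
```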
Shell script: Call variable with parameters/arg
1,384,859,076,000
I created a function to log the results of a script and added an argument to the script. You may look at it at https://docs.laswitchtech.com/doku.php?id=documentations:linux:pxelinux

In this script, I added an argument --screen to launch the same script with all the arguments into a screen with the -L switch.

    Enable_Screen(){
        Check_Package screen
        ScreenCMD="./pxelinux.sh"
        CMDOptions="$@"
        CMDOptions=${CMDOptions// --screen/}
        CMD="$ScreenCMD $CMDOptions"
        if [ $debug = "true" ]; then
            echo -e "${ORANGE}[DEBUG][EXECUTE] screen -S PXE_Linux -L $CMD ${NORM}"
        fi
        screen -S PXE_Linux -L $CMD
        mv screenlog.0 pxelinux.screen.log
        exit 0
    }

Now I would like to add an option to the argument to append the log. An example of how I execute the script:

    ./pxelinux.sh --debug --screen --install-pxelinux

Now this is the example I would like to use:

    ./pxelinux.sh --debug --screen append --install-pxelinux

Since this is an option for the screen function, I do not want it to be forwarded to the screen I am creating. In the screen function, you can see that I remove the --screen from the list of arguments, and now I would need to remove append as well if it shows up in the arguments, but only if it's in the options of the --screen argument, because append is an option to the argument --screen and may or may not be enabled.
Basically, I used this convention for my arguments:

    --argument => execute a function in the script
    argument   => option for the previously stated --argument

Put more simply:

script.sh

    #!/bin/bash

    Config_Network(){
        echo -e "
    source /etc/network/interfaces.d/*

    # The loopback network interface
    auto lo
    iface lo inet loopback

    # The primary network interface
    allow-hotplug eth0
    auto eth0
    iface eth0 inet static
        address $1
        netmask $2
        gateway $3
    " | tee -a /etc/network/interfaces
    }

    Update_System(){
        Command="apt-get update"; Executing "$Command"
        Command="apt-get upgrade -y"; Executing "$Command"
    }

    Restart_System(){
        shutdown -r now
    }

    Check_Package(){
        if [ $(dpkg-query -W -f='${Status}' $1 2>/dev/null | grep -c "ok installed") -eq 0 ]; then
            Command="apt-get install $1 -y"; Executing "$Command"
        fi
    }

    Executing(){
        if [ $debug = "true" ]; then
            if eval $1;then
                echo -e "${GREEN}[DEBUG ][$(date)][SUCCESS][EXECUTING] $1 ${NORM}" | tee -a $logfile
            else
                echo -e "${RED}[DEBUG ][$(date)][ERROR ][EXECUTING] $1 ${NORM}" | tee -a $logfile
            fi
        else
            if eval $1;then
                echo -e "${GREEN}[DEBUG ][$(date)][SUCCESS][EXECUTING] $1 ${NORM}"
            else
                echo -e "${RED}[DEBUG ][$(date)][ERROR ][EXECUTING] $1 ${NORM}"
            fi
        fi
    }

    while test $# -gt 0
    do
        case "$1" in
            --config-network) netconf ;;
            --update) Update_System ;;
            --restart) Restart_System ;;
            --*) exit ;;
        esac
        shift
    done
    exit 0

Now when I execute script.sh, I want to be able to pass $1 $2 $3 to the netconf function no matter where it is in the statement.

    ./script.sh --config-network 10.10.10.10 255.255.255.0 10.10.10.1 --update --restart
    ./script.sh --restart --config-network 10.10.10.10 255.255.255.0 10.10.10.1 --update
    ./script.sh --update --restart --config-network 10.10.10.10 255.255.255.0 10.10.10.1 --update
I found a solution to my issue: test whether the following three arguments contain -- in them, and, since in this case I was looking for IPs and a netmask, add a second test for that. So the first if validates that the following arguments are not functions in the script, and then the second if validates the arguments I want to pass to the function.

    --config-network)
        if [[ $2 != *--* && $3 != *--* && $4 != *--* ]]; then
            if [[ $2 =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ && $3 =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ && $4 =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
                Config_Network $2 $3 $4
            else
                Config_Network
            fi
        else
            echo "Bad [argument] $1 $2 $3 $4"
            Display_Help
            exit
        fi
        ;;
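The dotted-quad test can be wrapped in a small helper and exercised directly; this is a sketch, and note that the pattern only checks "four dot-separated digit groups", so octets above 255 also pass:

```shell
# Sketch of the answer's regex as a reusable predicate (bash [[ =~ ]]).
# Note: 999.0.0.1 would also pass, since only the shape is checked.
looks_like_ipv4() {
  [[ $1 =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]
}

looks_like_ipv4 10.10.10.10 && echo yes    # -> yes
looks_like_ipv4 --update    || echo no     # -> no
```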
how to remove characters from variables to remove --function option1 option2
1,384,859,076,000
I am creating a shell application that asks a lot of questions, and I am using read -p "<my_question>" <myvar> several times. The problem is that I also want to verify whether the answer is empty. So, I thought of creating a generic function to ask and verify whether the answer is empty or not; if it is, the function calls itself recursively until the user provides something. When I 'fixed' the variable name as, let's say, 'userdatabase', things work wonderfully well. Here are the function declaration and usage:

    ask() {
        read -p "$1" $2
        if [[ -z $userdatabase ]]; then
            echo Empty is not allowed
            ask "$1" $2
        else
            echo $userdatabase
        fi
    }

    ask "Provides the user database: " userdatabase

Of course, I don't want to put "userdatabase" as the variable name for all questions that the application will make. So, I have noticed that I need a kind of "dynamic" variable. Making things a little more dynamic, it becomes:

    ask() {
        read -p "$1" $2
        if [[ -z $$var ]]; then
            echo Empty is not allowed
            ask "$1" $2
        else
            echo $$var
        fi
    }

    ask "Provides the user database: " $var

But when I use the utility, I receive something like SOMENUMBERvar. Obviously I am not using the "dynamic variable" in shell in the right way. So, how do I create a function that receives the question statement and a variable name that will be filled with the variable from the read -p command?
I would like to share something that I discovered later. I tried to add a further step to the answer validation: also validate that the path exists, and I could not adapt the Rakesh Sharma solution for this. Finally, I found exactly what I was looking for, that is, a way to deal with a "dynamic variable", and the real way to do this is using ${!var}. Here is the final version of my function, and its usage:

    ask_validate() {
        read -p "$1" $2
        if [ -z ${!2} ]; then
            echo Empty answer is not allowed.
            ask_validate "$1" $2
            return
        fi
        if ! [ -d ${!2} ]; then
            echo You need to provide an existing path
            ask_validate "$1" $2
            return
        fi
        echo The var name is: $2
        echo The var value is: ${!2}
    }

    ask_validate "Please provide the host path: " host_path
    ask_validate "Please provide the virtual host configuration path: " virtualhost_path

    echo The host path is $host_path
    echo The virtual host configuration path is $virtualhost_path
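Indirect expansion on its own, with made-up variable names:

```shell
# Made-up demo of bash indirect expansion: ${!name} expands the variable
# whose name is stored in $name.
host_path=/tmp
name=host_path
printf '%s\n' "${!name}"     # -> /tmp

printf -v "$name" '%s' /var  # printf -v assigns through the name, like read would
printf '%s\n' "$host_path"   # -> /var
```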
Create generic function to ask a question and verify if answer is empty
1,384,859,076,000
I am writing a script which displays input options in a while loop provided by a function user_input(), sets values depending on user input, and then calls another function user_info(). If a user made a mistake, I am trying to offer them a chance to go back and correct their input. So if a user set $var by mistake to "Yes", they can go back and reset the option! Assuming the user resets $var to "No", is there any way not to resume at code (2), but rather to jump to the elif statement and run code (3)? If the question is not clear I can post my code to make it clearer. Thanks a lot:

    user_input(){
        while true; do
            input option $var
        done
        user_info
    }

    user_info(){
        some code
        if [ "${var}" = "Yes" ]; then
            code (1)
            if [ "${option}" = "back" ]; then
                user_input
            fi
            code (2)
        elif [ "${var}" = "No" ]; then
            code (3)
        fi
    }
I think I found a solution by adding return after each call of the user_input() function, like this. Please correct me if I made a mistake. Thanks a lot:

    user_input(){
        while true; do
            input option $var
        done
        user_info
    }

    user_info(){
        some code
        if [ "${var}" = "Yes" ]; then
            code (1)
            if [ "${option}" = "back" ]; then
                user_input
                return
            fi
            code (2)
        elif [ "${var}" = "No" ]; then
            code (3)
        fi
    }
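A toy demonstration (entirely made up) of why the return matters: without it, execution falls through to the code after the recursive call.

```shell
# Made-up demo: after re-invoking itself, the function must return, or the
# code below the call runs a second time with the stale state.
handle() {
  if [ "$1" = back ]; then
    handle ok
    return          # without this, "code(2) for back" would also print
  fi
  echo "code(2) for $1"
}

handle back          # -> code(2) for ok
```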
resume running a script after function call
1,384,859,076,000
I'm new to shell programming and I have created a script that opens a connection to a server of mine. I want to have this script listen for an input from a client node and use that to run a function. This is my process: run script > opens listener > on second computer use netcat to connect > run a function in the script on the server called nodefunction. I have server_port coded to '4444'.

File name: run_hangman

    nc -l -k -v -p 4444 | bash hangman

File name: hangman

    #!/bin/bash
    msg_timeout=0
    host_ows=1
    server_port=4444
    dubOws=xxx.xxx.xxx.xxx

    initServer() {
        hostIP=`ip -o addr show dev "eth0" | awk '$3 == "inet" {print $4}' | sed -r 's!/.*!!; s!.*\.!!'`
        hostOws=`echo $hostIP | cut -d . -f 4`
    }

    servermsg(){ #message //if = --n, echos on same line
        if [ "$1" != "--n" ]
        then
            echo `date +"%T"` "[SERVER] "$1
        else
            echo -n `date +"%T"` "[SERVER] "
        fi
    }

    owsmsg(){ #message //if = --n, echos on same line
        if [ "$1" != "--n" ]
        then
            echo `date +"%T"` "[OWS] "$1
        else
            echo -n `date +"%T"` "[OWS] "
        fi
    }

    playermsg() {
        if [ "$1" != "--n" ]
        then
            echo `date +"%T"` "[PLAYER] "$1
        else
            echo -n `date +"%T"` "[PLAYER] "
        fi
    }

    question(){ #question, read, example
        servermsg "$1"
        if [ -n "$3" ]
        then
            servermsg "$3"
        fi
        read $2
        echo ""
    }

    owsArray(){
        for targetOws in $player_list
        do
            owsArray+=("OWS"$targetOws)
        done
        echo -n ${owsArray[*]}
        echo
    }

    openSocket() {
        servermsg "Starting the Game Listener"
        servermsg "Opening Listener on port "$server_port
        #nc -k -l $server_port |bash
        #nc -kl -q 1 -p $server_port
        # This should create the listener & This is where everything stops.
        servermsg "Now listening on port "$server_port
    }

    initServer
    owsmsg "Starting server on OWS"$hostOws"..."
    question "Enter all the OWSs that will play:" player_list "Example: 1 9 14 23"
    echo $player_list
    question "Type a category hint:" game_cat "Example: Type of Animal"
    question "Type your word:" game_word "Example: zebra"
    question "How many guesses:" game_guesses "Example: 7"
    servermsg "OWS"$host_ows "has created a Hangman session"
    servermsg "Players are:"; servermsg --n; owsArray
    servermsg "Your word is "${#game_word}" letters long and players have "$game_guesses" guesses"
    question "If this is all correct press enter, or CTRL+C to cancel"
    openSocket
    # I think I need a while loop here to read the raw input and run the playermsg function with the input?

I run the run_hangman file and then I connect to it via my node computer. I enter the following line to send "1 2 3", because that is what I need. I also can't enter "1 2 3" directly into the window running run_hangman, as pressing enter just goes to a new line.

    echo "1 2 3" >/dev/tcp/xxx.xxx.xxx.xxx/4444

The server shows that it connected:

    Listening on [0.0.0.0] (family 0, port 4444)
    14:52:24 [OWS] Starting server on OWS225...
    14:52:24 [SERVER] Enter all the OWSs that will play:
    14:52:24 [SERVER] Example: 1 9 14 23
    Connection from [xxx.xxx.xxx.xxx] port 4444 [tcp/*] accepted (family 2, sport 41564)
    Connection closed, listening again.
    1 2 3

Now once it gets to openSocket it will allow me to send one more echo and then it closes on the server. I need what I presume is a while statement that listens for an input like "playermsg 'has started a game'" and actually runs that function on the server. Will I be able to get this to run? It almost seems like it has to be in the background. I've been using nc(1) for reference, and some websites said to try -d, but that didn't work either.
I got it figured out. I did indeed need a while statement.

    openSocket
    while read -r value; do
        val=${value:10}
        if [[ "$value" == playermsg* ]]; then
            val=${value:10}
            playermsg $val
        elif [[ "$value" == servermsg* ]]; then
            val=${value:10}
            servermsg $val
        else
            echo "Returned "$value
            echo "Value was "$val
        fi
    done

So now on the second computer I simply run

    echo "playermsg testing" >/dev/tcp/xxx.xxx.xxx.xxx/4444

Then the server displays the following:

    15:29:36 [SERVER] Starting the Game Listener
    15:29:36 [SERVER] Opening Listener on port 4440
    15:29:36 [SERVER] Now listening on port 4440
    15:29:37 [PLAYER] testing
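The dispatch loop can be tested without netcat by feeding it lines on stdin; this cut-down version (handler bodies simplified, names invented) shows the idea:

```shell
# Cut-down, network-free sketch of the dispatch loop: read one line, treat
# the first word as a command name, pass the rest to a handler.
playermsg() { echo "[PLAYER] $*"; }
servermsg() { echo "[SERVER] $*"; }

dispatch() {
  local cmd rest
  while read -r cmd rest; do
    case $cmd in
      playermsg) playermsg "$rest" ;;
      servermsg) servermsg "$rest" ;;
      *)         echo "unknown: $cmd $rest" ;;
    esac
  done
}

printf 'playermsg testing\nservermsg hello there\n' | dispatch
```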
netcat daemon for calling functions in sh script
1,384,859,076,000
Not sure why this is producing an error. This is test code emulating my real code. I want to write a wrapper for find and want to allow for any argument, so I'm wrapping each arg in single quotes.

    #!/bin/bash

    function find2 {
        ARGS="/usr/bin/find"
        while [[ $# -gt 0 ]]; do
            ARGS="$ARGS '$1'"
            shift
        done
        echo CALLING: $ARGS
        $ARGS
    }

    find2 /tmp/test -name "hello.c"       # THIS IS THE DESIRED IMPLEMENTATION (doesn't work)
    find '/tmp/test' '-name' 'hello.c'    # THIS IS FOR DEBUGGING (works)

I want find2 to work, but it doesn't. I get the following output:

    CALLING: /usr/bin/find '/tmp/test' '-name' 'hello.c'
    /usr/bin/find: `\'/tmp/test\'': No such file or directory
    /usr/bin/find: `\'-name\'': No such file or directory
    /usr/bin/find: `\'hello.c\'': No such file or directory

However, if I use the exact same command (produced by find2) directly, it works fine:

    /tmp/test/hello.c

Not sure what is going on.
(in Bash) You can change to an array of values:

    find2() {
        ARGS="/usr/bin/find"
        ARGS+=( "$@" )
        echo CALLING: "${ARGS[@]}"
        "${ARGS[@]}"
    }
    find2 /tmp/test -name "hello.c"

But this works and is quite simpler:

    find2() {
        ARGS=( "/usr/bin/find" "$@" )
        echo CALLING: "${ARGS[@]}"
        "${ARGS[@]}"
    }
    find2 /tmp/test -name "hello.c"

Of course, the direct way also works (in any shell with functions):

    find2() { /usr/bin/find "$@"; }
    find2 /tmp/test -name "hello.c"

Why did the original code fail? To "see" what the code is doing you may use set -x or, better yet, replace the echo with printf, like this:

    find2() {
        ARGS="/usr/bin/find"
        ARGS+=( "$@" )
        printf '<%s> ' CALLING: "${ARGS[@]}"; echo
        "${ARGS[@]}"
    }
    find2 /tmp/test -name "hello.c"

When you execute it, you see:

    $ ./script.sh
    <CALLING:> </usr/bin/find> </tmp/test> <-name> <hello.c>

Each argument is a separate element (note the position of the <>). However, in your original code (adding printf):

    function find2 {
        ARGS="/usr/bin/find"
        while [[ $# -gt 0 ]]; do
            ARGS="$ARGS '$1'"
            shift
        done
        printf '<%s> ' CALLING: "${ARGS[@]}"; echo
        $ARGS
    }
    find2 /tmp/test -name "hello.c"

You will get, on execution:

    $ ./script.sh
    <CALLING:> </usr/bin/find '/tmp/test' '-name' 'hello.c'>

All the values are one long text line, not separate arguments (note the position of the <>).
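The core pitfall, that quotes embedded in a string are not re-parsed after expansion, can be shown in a couple of lines (using /bin/echo so both forms run the same binary):

```shell
# Quotes stored inside a string stay literal after word splitting; an array
# keeps each argument as one word.
args="/bin/echo 'a b'"
$args                    # -> 'a b'   (two words, 'a and b', quotes literal)

arr=(/bin/echo 'a b')
"${arr[@]}"              # -> a b    (one argument containing a space)
```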
bash script function argument problem [duplicate]
1,331,066,898,000
I've just set up a new machine with Ubuntu Oneiric 11.10 and then run

    apt-get update
    apt-get upgrade
    apt-get install git

Now if I run git --version it tells me I have git version 1.7.5.4, but on my local machine I have the much newer git version 1.7.9.2. I know I can install from source to get the newest version, but I thought that it was a good idea to use the package manager as much as possible to keep everything standardized. So is it possible to use apt-get to get a newer version of git, and what is the right way to do it?
You have several options:

- Wait until the version you need is present in the repository you use.
- Compile your own version and create a deb.
- Find a repository that provides the version you need for your version of your distribution (e.g. the Git PPA).
- If you don't need any particular feature from the newer version, stay with the old one.

If a newer version is available in the repositories you use, then apt-get update && apt-get upgrade (as root) updates to the latest available version.

For those who don't know what a PPA is, link
How can I update to a newer version of Git using apt-get?
1,331,066,898,000
Is there a way to color output for git (or any command)? Consider:

    baller@Laptop:~/rails/spunky-monkey$ git status
    # On branch new-message-types
    # Changes not staged for commit:
    #   (use "git add <file>..." to update what will be committed)
    #   (use "git checkout -- <file>..." to discard changes in working directory)
    #
    #   modified:   app/models/message_type.rb
    #
    # no changes added to commit (use "git add" and/or "git commit -a")
    baller@Laptop:~/rails/spunky-monkey$ git add app/models

And

    baller@Laptop:~/rails/spunky-monkey$ git status
    # On branch new-message-types
    # Changes to be committed:
    #   (use "git reset HEAD <file>..." to unstage)
    #
    #   modified:   app/models/message_type.rb
    #

The output looks the same, but the information is totally different: the file has gone from unstaged to staged for commit. Is there a way to colorize the output? For example, files that are unstaged are red, staged are green? Or even Changes not staged for commit: in red and # Changes to be committed: in green?

Working in Ubuntu.

EDIT: Googling found this answer which works great: git config --global --add color.ui true. However, is there any more general solution for adding color to a command output?
You can create a section [color] in your ~/.gitconfig with e.g. the following content

    [color]
      diff = auto
      status = auto
      branch = auto
      interactive = auto
      ui = true
      pager = true

You can also fine control what you want to have coloured in what way, e.g.

    [color "status"]
      added = green
      changed = red bold
      untracked = magenta bold

    [color "branch"]
      remote = yellow

I hope this gets you started. And of course, you need a terminal which supports colour. Also see this answer for a way to add colorization directly from the command line.
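The same settings can be written with the git config command line instead of editing the file by hand; in this sketch HOME is pointed at a scratch directory so a real ~/.gitconfig is left untouched (requires git on PATH):

```shell
# Equivalent settings via the git config CLI (requires git).
# HOME is redirected to a scratch directory so the real ~/.gitconfig is safe.
export HOME=$(mktemp -d)
git config --global color.ui true
git config --global color.status.added green
git config --global color.status.changed "red bold"
git config --global --get color.status.added    # -> green
```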
How to colorize output of git?
1,331,066,898,000
I have a script which runs rsync with a Git working directory as destination. I want the script to have different behavior depending on whether the working directory is clean (no changes to commit) or not. For instance, if the output of git status is as below, I want the script to exit:

    git status
    Already up-to-date.
    # On branch master
    nothing to commit (working directory clean)
    Everything up-to-date

If the directory is not clean then I would like it to execute some more commands. How can I check for output like the above in a shell script?
Parsing the output of git status is a bad idea because the output is intended to be human-readable, not machine-readable. There's no guarantee that the output will remain the same in future versions of Git or in differently configured environments.

UVV's comment is on the right track, but unfortunately the return code of git status doesn't change when there are uncommitted changes. It does, however, provide the --porcelain option, which causes the output of git status --porcelain to be formatted in an easy-to-parse format for scripts, and it will remain stable across Git versions and regardless of user configuration.

We can use empty output of git status --porcelain as an indicator that there are no changes to be committed:

    if [ -z "$(git status --porcelain)" ]; then
        # Working directory clean
    else
        # Uncommitted changes
    fi

If we do not care about untracked files in the working directory, we can use the --untracked-files=no option to disregard those:

    if [ -z "$(git status --untracked-files=no --porcelain)" ]; then
        # Working directory clean excluding untracked files
    else
        # Uncommitted changes in tracked files
    fi

To make this more robust against conditions which actually cause git status to fail without output to stdout, we can refine the check to:

    if output=$(git status --porcelain) && [ -z "$output" ]; then
        # Working directory clean
    else
        # Uncommitted changes
    fi

It's also worth noting that, although git status does not give a meaningful exit code when the working directory is unclean, git diff provides the --exit-code option, which makes it behave similarly to the diff utility, that is, exiting with status 1 when there were differences and 0 when none were found.
Using this, we can check for unstaged changes with:

    git diff --exit-code

and staged, but not committed changes with:

    git diff --cached --exit-code

Although git diff can report on untracked files in submodules via appropriate arguments to --ignore-submodules, unfortunately it seems that there is no way to have it report on untracked files in the actual working directory. If untracked files in the working directory are relevant, git status --porcelain is probably the best bet.
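Put together as a small function and exercised on a throwaway repository (git must be on PATH; this is a sketch combining the checks above, not verbatim from the answer):

```shell
# Sketch combining the answer's checks; requires git on PATH.
is_clean() {
  # Empty porcelain output (and a zero exit) means nothing staged,
  # modified, or untracked.
  local out
  out=$(git status --porcelain) && [ -z "$out" ]
}

dir=$(mktemp -d)
cd "$dir"
git init -q .
is_clean && echo clean        # fresh repo: nothing to report
touch newfile
is_clean || echo dirty        # an untracked file makes it dirty
```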
Determine if Git working directory is clean from a script
1,331,066,898,000
Is there a similar piece of software to SourceTree, a GUI for git, for Linux? I know about Giggle, git cola, etc. I'm looking for a beautiful, easy to use GUI for git.
A nice alternative is SmartGit. It has very similar features to SourceTree and has built in 3-column conflict resolution, visual logs, pulling, pushing, merging, syncing, tagging and all things git :)
GUI for GIT similar to SourceTree
1,331,066,898,000
I'm trying to create a user without a password like this:

    sudo adduser \
      --system \
      --shell /bin/bash \
      --gecos 'User for managing of git version control' \
      --group \
      --disabled-password \
      --home /home/git \
      git

It's created fine. But when I try to log in as the git user I'm asked for a password:

    su git
    Password:...

When I leave it empty I get an error:

    su: Authentication failed

What's wrong?
You've created a user with a "disabled password", meaning that there is no password that will let you log in as this user. This is different from creating a user that anyone can log in as without supplying a password, which is achieved by specifying an empty password and is very rarely useful.

In order to execute commands as such "system" users who don't log in normally, you need to hop via the root account:

    su -c 'su git -c "git init"'

or

    sudo -u git git init

If you want certain users to be able to run commands as the git user without letting them run commands as root, set up sudo (run visudo as root and add a line like %gitters ALL = (git) ALL).
Creating a user without a password
1,331,066,898,000
In a git repository, I have set up my .gitmodules file to reference a github repository:

    [submodule "src/repo"]
        path = src/repo
        url = repourl

When I run git status on this repo, it shows:

    On branch master
    Your branch is up-to-date with 'origin/master'.
    Changes not staged for commit:
      (use "git add <file>..." to update what will be committed)
      (use "git checkout -- <file>..." to discard changes in working directory)

        modified:   src/repo (new commits)

If I cd into src/repo and run git status there, it says that there is nothing to commit. Why is my top-level git repo complaining?
It's because Git records which commit (not a branch or a tag, exactly one commit represented by its SHA-1 hash) should be checked out for each submodule. If you change something in the submodule dir, Git will detect it and urge you to commit those changes in the top-level repository. Run git diff in the top-level repository to show what Git actually thinks has changed.

If you've already made some commits in your submodule (thus "clean" in the submodule), it reports the submodule's hash change:

    $ git diff
    diff --git a/src/repo b/src/repo
    index b0c86e2..a893d84 160000
    --- a/src/repo
    +++ b/src/repo
    @@ -1 +1 @@
    -Subproject commit b0c86e28675c9591df51eedc928f991ca42f5fea
    +Subproject commit a893d84d323cf411eadf19569d90779610b10280

Otherwise it shows a -dirty hash change which you cannot stage or commit in the top-level repository. git status also claims the submodule has untracked/modified content:

    $ git diff
    diff --git a/src/repo b/src/repo
    --- a/src/repo
    +++ b/src/repo
    @@ -1 +1 @@
    -Subproject commit b0c86e28675c9591df51eedc928f991ca42f5fea
    +Subproject commit b0c86e28675c9591df51eedc928f991ca42f5fea-dirty

    $ git status
    On branch master
    Changes not staged for commit:
      (use "git add <file>..." to update what will be committed)
      (use "git checkout -- <file>..." to discard changes in working directory)
      (commit or discard the untracked or modified content in submodules)

        modified:   src/repo (untracked content)

    no changes added to commit (use "git add" and/or "git commit -a")

To update which commit is recorded to be checked out for the submodule, you need to stage and commit the submodule path in the top-level repository, in addition to committing the changes inside the submodule:

    git add src/repo
Git submodule shows new commits, submodule status says nothing to commit
1,331,066,898,000
I have a git mirror on my disk, and when I want to update my repo with git pull it gives me this error message:

    Your configuration specifies to merge with the ref '3.5/master' from the remote, but no such ref was fetched.

It also gives me:

       1ce6dac..a5ab7de  3.4/bfq -> origin/3.4/bfq
       fa52ab1..f5d387e  3.4/master -> origin/3.4/master
       398cc33..1c3000a  3.4/upstream-updates -> origin/3.4/upstream-updates
       d01630e..6b612f7  3.7/master -> origin/3.7/master
       491e78a..f49f47f  3.7/misc -> origin/3.7/misc
       5b7be63..356d8c6  3.7/upstream-updates -> origin/3.7/upstream-updates
       636753a..027c1f3  3.8/master -> origin/3.8/master
       b8e524c..cfcf7b5  3.8/misc -> origin/3.8/misc
     * [neuer Zweig]     3.8/upstream-updates -> origin/3.8/upstream-updates

When I run make menuconfig it reports Linux version 3.5.7. What does this mean? How can I update my repo?
Check the branch you are on (git branch) and check the configuration for that branch (in .../.git/config). You probably are on the wrong branch, or your configuration for it tells it to merge with a (now?) non-existent remote branch.
git pull from remote but no such ref was fetched?
1,331,066,898,000
I want to put my home directory (~) under source control (git, in this case), as I have many setting files (.gitconfig, .gitignore, .emacs, etc.) in there I would like to carry across machines, and having them in Git would make it nice for retrieving them. My main machine is my MacBook, and the way that OS X is set up, there are many folders I want to ignore (Documents, Downloads, .ssh). There are also folders which are already using Git (.emacs.d). My thought was to just add all these directories to my .gitignore file, but that seems kind of tiresome, and could potentially lead to some unforeseen consequences. My next thought was to periodically copy the files I want to store into some folder in home, then commit that folder. The problem with that will be that I have to remember to move them before committing. Is there a clean way to do this?
I have $HOME under git. The first line of my .gitignore file is /* The rest are patterns to not ignore using the ! modifier. This first line means the default is to ignore all files in my home directory. Those files that I want to version control go into .gitignore like this: !/.gitignore !/.profile [...] A trickier pattern I have is: !/.ssh /.ssh/* !/.ssh/config That is, I only want to version .ssh/config - I don't want my keys and other files in .ssh to go into git. The above is how I achieve that. Edit: Added slashes to start of all paths. This makes the ignore patterns match from the top of the repository ($HOME) instead of anywhere. For example, if !lib/ was a pattern (don't ignore everything in the lib directory) and you add a file .gitignore, previously the pattern (!.gitignore) was matching that. With the leading slash (!/.gitignore), it will only match .gitignore in my home directory and not in any subdirectories. I haven't seen a case where this makes a practical difference with my ignore list, but it appears to me to be more technically accurate.
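git check-ignore is a convenient way to verify a whitelist like this; a runnable sketch in a throwaway repository (file names made up):

```shell
demo=$(mktemp -d)
cd "$demo"
git init -q .
printf '%s\n' '/*' '!/.gitignore' '!/.profile' '!/.ssh' '/.ssh/*' '!/.ssh/config' > .gitignore
mkdir .ssh
touch .profile .bashrc .ssh/config .ssh/id_rsa

git check-ignore .bashrc .ssh/id_rsa    # both printed: ignored
git check-ignore .profile .ssh/config || echo "whitelisted entries are not ignored"
```

git check-ignore -v additionally prints which .gitignore line decided each file, which is handy when the negation patterns interact in surprising ways.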
Tips for putting ~ under source control
1,331,066,898,000
I want to copy my c directory with all subdirectories excluding ./git subdirectory. I do it using rsync : echo "copy c and sh files " rsync -a --include='*.c' --include='*.sh' --include='*/' --exclude='*' ~/c/ ~/Dropbox/Public/c # remove .git directory = do not send it to dropbox. Thx to Tomasz Sowa rm -rf ~/Dropbox/Public/c/.git Can I do it better?
Just add an explicit exclude for .git: rsync -a --exclude='.git/' --include='*.c' --include='*.sh' --include='*/' --exclude='*' ~/c/ ~/Dropbox/Public/c Another option is to create ~/.cvsignore containing the following line along with any other directories you'd like to exclude: .git/
How to use rsync to backup a directory without git subdirectory
1,331,066,898,000
When I do git push I get the command prompt like Username for 'https://github.com': then I enter my username manually like Username for 'https://github.com': myusername and then I hit Enter and I get prompt for my password Password for 'https://[email protected]': I want the username to be written automatically instead of manually having to type it all the time. I tried doing it with xdotool but it didn't work out. I have already done git config --global user.name myusername git config --global user.email [email protected] but still it always asks for me to type manually
Actually, what you did there only sets up the author information for commits; it doesn't store any credentials. Credentials can be stored in two ways: using the git credential functions: https://git-scm.com/docs/git-credential-store changing the origin URL to "https://username:[email protected]". A third alternative is to use an SSH key (as @StephenKitt said). For GitHub configuration, you can find all the needed information in the GitHub help pages
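A runnable sketch of the second option in a throwaway repository (URL and names are hypothetical); note that both the URL form and the store helper keep secrets in plaintext on disk:

```shell
demo=$(mktemp -d)
cd "$demo"
git init -q .
git remote add origin "https://[email protected]/myusername/project.git"
git remote get-url origin            # the username now travels with the URL
git config credential.helper store   # passwords land in ~/.git-credentials, unencrypted
```

With the username embedded in the URL, git only ever prompts for the password; with the store helper enabled, even that is asked just once.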
Storing username and password in Git
1,331,066,898,000
I performed a git commit command and it gave me the following reply: 7 files changed, 93 insertions(+), 15 deletions(-) mode change 100644 => 100755 assets/internal/fonts/icomoon.svg mode change 100644 => 100755 assets/internal/fonts/icomoon.ttf mode change 100644 => 100755 assets/internal/fonts/icomoon.woff I know files can have user / group / other rwx permissions and those can be expressed as three octal digits, like "644" or "755". But why is git showing six digits here? I've read the following articles but didn't find an answer: Wikipedia's article on "File system permissions" How do I remove files saying “old mode 100755 new mode 100644” from unstaged changes in Git? Unix permissions made easy Chmod permissions (flags) explained: 600, 0600, 700, 777, 100 etc..
The values shown are the 16-bit file modes as stored by Git, following the layout of POSIX types and modes. The 16 bits are split into (high to low): a 4-bit object type (valid values in binary are 1000 for a regular file, 1010 for a symbolic link and 1110 for a gitlink), 3 unused bits, and a 9-bit unix permission (only 0755 and 0644 are valid for regular files; symbolic links and gitlinks have value 0 in this field). That list doesn't mention directories; they are represented using object type 0100. Gitlinks are used for submodules. Each digit in the six-digit value is octal, representing three bits; 16 bits thus need six digits, the first of which only represents one bit. For 100755 the bit fields are 1000 (type), 000 (unused), 111101101 (permissions 755); for 100644 they are 1000, 000, 110100100 (permissions 644). Git doesn't store arbitrary modes, only a subset of the values are allowed, from the usual POSIX types and modes (in octal, 12 for a symbolic link, 10 for a regular file, 04 for a directory) to which Git adds 16 for gitlinks. The mode is appended, using four octal digits. For files, you'll only ever see 100755 or 100644 (although 100664 is also technically possible); directories are 040000 (permissions are ignored), symbolic links 120000. The set-user-ID, set-group-ID and sticky bits aren't supported at all (they would be stored in the unused bits). See also this related answer.
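The split can be checked mechanically; a small sketch in plain shell that slices a six-digit mode into the two-digit object type and the four-digit permission field:

```shell
decode_mode() {
  mode=$1
  perm=${mode#??}          # drop the first two octal digits -> e.g. "0755"
  type=${mode%"$perm"}     # what remains is the object type  -> e.g. "10"
  case $type in
    10) kind="regular file" ;;
    12) kind="symbolic link" ;;
    16) kind="gitlink (submodule)" ;;
    04) kind="directory" ;;
     *) kind="unknown" ;;
  esac
  echo "$mode -> $kind, permissions $perm"
}
decode_mode 100755    # 100755 -> regular file, permissions 0755
decode_mode 100644    # 100644 -> regular file, permissions 0644
decode_mode 120000    # 120000 -> symbolic link, permissions 0000
```

This is just string slicing on the octal form, mirroring the bit layout described above.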
File permission with six octal digits in git. What does it mean?
1,331,066,898,000
I found a collection of slackbuilds, some i need there are on GitHub. https://github.com/PhantomX/slackbuilds/ I don't want to get all git. git clone https://github.com/PhantomX/slackbuilds.git But only get a slackbuild, for this one. How to do this? Is it possible?
You will end up downloading the entire history, so I don't see much benefit in it, but you can checkout specific parts using a "sparse" checkout. Quoting this Stack Overflow post: The steps to do a sparse clone are as follows: mkdir <repo> cd <repo> git init git remote add -f origin <url> I'm going to interrupt here. Since I'm quoting another post, I don't want to edit the quoted parts, but do not use -f with git remote add. It will do a fetch, which will pull in the entire history. Just add the remote without a fetch: git remote add origin <url> And then do a shallow fetch like described later. This creates an empty repository with your remote, and fetches all objects but doesn't check them out. Then do: git config core.sparseCheckout true Now you need to define which files/folders you want to actually check out. This is done by listing them in .git/info/sparse-checkout, eg: echo "some/dir/" >> .git/info/sparse-checkout echo "another/sub/tree" >> .git/info/sparse-checkout [...] You might want to have a look at the extended tutorial and you should probably read the official documentation for sparse checkout. You might be better off using a shallow clone. Instead of a normal git pull, try: git pull --depth=1 origin master I had an occasion to test this again recently, trying to get only the Ubuntu Mono Powerline fonts. The steps above ended up downloading some 11 MB, where the Ubuntu Fonts themselves are ~900 KB: % git pull --depth=1 origin master remote: Enumerating objects: 310, done. remote: Counting objects: 100% (310/310), done. remote: Compressing objects: 100% (236/236), done. remote: Total 310 (delta 75), reused 260 (delta 71), pack-reused 0 Receiving objects: 100% (310/310), 10.40 MiB | 3.25 MiB/s, done. Resolving deltas: 100% (75/75), done. From https://github.com/powerline/fonts * branch master -> FETCH_HEAD * [new branch] master -> origin/master % du -hxd1 . 11M ./.git 824K ./UbuntuMono 12M . A normal clone took about 20 MB. 
There's some savings, but not enough. Using the --filter + checkout method in Ciro Santilli's answer really cuts down the size, but as mentioned there, downloads each blob one by one, which is slow: % git fetch --depth=1 --filter=blob:none remote: Enumerating objects: 52, done. remote: Counting objects: 100% (52/52), done. remote: Compressing objects: 100% (49/49), done. remote: Total 52 (delta 1), reused 35 (delta 1), pack-reused 0 Receiving objects: 100% (52/52), 14.55 KiB | 1.32 MiB/s, done. Resolving deltas: 100% (1/1), done. From https://github.com/powerline/fonts * [new branch] master -> origin/master * [new branch] terminus -> origin/terminus % git checkout origin/master -- UbuntuMono remote: Enumerating objects: 1, done. remote: Counting objects: 100% (1/1), done. remote: Total 1 (delta 0), reused 0 (delta 0), pack-reused 0 Receiving objects: 100% (1/1), 1.98 KiB | 1.98 MiB/s, done. remote: Enumerating objects: 1, done. remote: Counting objects: 100% (1/1), done. remote: Total 1 (delta 0), reused 1 (delta 0), pack-reused 0 Receiving objects: 100% (1/1), 581 bytes | 581.00 KiB/s, done. remote: Enumerating objects: 1, done. remote: Counting objects: 100% (1/1), done. remote: Total 1 (delta 0), reused 1 (delta 0), pack-reused 0 Receiving objects: 100% (1/1), 121.43 KiB | 609.00 KiB/s, done. remote: Enumerating objects: 1, done. remote: Counting objects: 100% (1/1), done. remote: Total 1 (delta 0), reused 1 (delta 0), pack-reused 0 Receiving objects: 100% (1/1), 100.66 KiB | 512.00 KiB/s, done. remote: Enumerating objects: 1, done. remote: Counting objects: 100% (1/1), done. remote: Total 1 (delta 0), reused 1 (delta 0), pack-reused 0 Receiving objects: 100% (1/1), 107.62 KiB | 583.00 KiB/s, done. remote: Enumerating objects: 1, done. remote: Counting objects: 100% (1/1), done. remote: Total 1 (delta 0), reused 1 (delta 0), pack-reused 0 Receiving objects: 100% (1/1), 112.15 KiB | 791.00 KiB/s, done. remote: Enumerating objects: 1, done. 
remote: Counting objects: 100% (1/1), done. remote: Total 1 (delta 0), reused 1 (delta 0), pack-reused 0 Receiving objects: 100% (1/1), 454 bytes | 454.00 KiB/s, done. remote: Enumerating objects: 1, done. remote: Counting objects: 100% (1/1), done. remote: Total 1 (delta 0), reused 1 (delta 0), pack-reused 0 Receiving objects: 100% (1/1), 468 bytes | 468.00 KiB/s, done. % du -hxd1 . 692K ./.git 824K ./UbuntuMono 1.5M . TL;DR: Use all of --filter, sparse checkout and shallow clone to reduce the total download, or only use sparse checkout + shallow clone if you don't care about the total download and just want that one directory however it may be obtained.
Is it possible to clone only part of a git project?
1,331,066,898,000
In order to get coloured output from all git commands, I set the following: git config --global color.ui true However, this produces an output like this for git diff, git log whereas commands like git status display fine Why is it not recognizing the escaped color codes in only some of the commands and how can I fix it? I'm using iTerm 2 (terminal type xterm-256color) on OS X 10.8.2 and zsh as my shell zsh --version zsh 5.0.0 (x86_64-apple-darwin12.0.0) git --version git version 1.7.9.6 (Apple Git-31.1)
You're seeing the escape sequences that tell the terminal to change colors displayed with the escape character shown as ESC, whereas the desired behavior would be that the escape sequences have their intended effect. Commands such as git diff and git log pipe their output into a pager, less by default. Git tries to tell less to allow control characters to have their control effect, but this isn't working for you. If less is your pager but you have the environment variable LESS set to a value that doesn't include -r or -R, git is unable to tell less to display colors. It normally passes LESS=-FRSX, but not if LESS is already set in the environment. A fix is to explicitly pass the -R option to tell less to display colors when invoked by git: git config --global core.pager 'less -R' If less isn't your pager, either switch to less or figure out how to make your pager display colors. If you don't want git to display colors when it's invoking a pager, set color.ui to auto instead of true.
git diff displays colors incorrectly
1,331,066,898,000
I feel like a kid in the principal's office explaining that the dog ate my homework the night before it was due, but I'm staring some crazy data loss bug in the face and I can't figure out how it happened. I would like to know how git could eat my repository whole! I've put git through the wringer many times and it's never blinked. I've used it to split a 20 Gig Subversion repo into 27 git repos and filter-branched the foo out of them to untangle the mess and it's never lost a byte on me. The reflog is always there to fall back on. This time the carpet is gone! From my perspective, all I did is run git pull and it nuked my entire local repository. I don't mean it "messed up the checked out version" or "the branch I was on" or anything like that. I mean the entire thing is gone. Here is a screen-shot of my terminal at the incident: Let me walk you through that. My command prompt includes data about the current git repo (using prezto's vcs_info implementation) so you can see when the git repo disappeared. The first command is normal enough: » caleb » jaguar » ~/p/w/incil.info » ◼  zend ★ » ❯❯❯ git co master Switched to branch 'master' Your branch is up-to-date with 'origin/master'. There you can see I was on the 'zend' branch, and checked out master. So far so good. You'll see in the prompt before my next command that it successfully switched branches: » caleb » jaguar » ~/p/w/incil.info » ◼  master ★ » ❯❯❯ git pull remote: Counting objects: 37, done. remote: Compressing objects: 100% (37/37), done. remote: Total 37 (delta 25), reused 0 (delta 0) Unpacking objects: 100% (37/37), done. From gitlab.alerque.com:ipk/incil.info + 7412a21...eca4d26 master -> origin/master (forced update) f03fa5d..c8ea00b devel -> origin/devel + 2af282c...009b8ec verse-spinner -> origin/verse-spinner (forced update) First, rewinding head to replay your work on top of it... >>> elapsed time 11s And just like that it's gone. 
The elapsed time marker outputs before the next prompt if more than 10 seconds have elapsed. Git did not give any output beyond the notice that it was rewinding to replay. No indication that it finished. The next prompt includes no data about what branch we are on or the state of git. Not noticing it had failed I obliviously tried to run another git command only to be told I wasn't in a git repo. Note the PWD has not changed: » caleb » jaguar » ~/p/w/incil.info » ❯❯❯ git fetch --all fatal: Not a git repository (or any parent up to mount point /home) Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set). After this a look around showed that I was in a completely empty directory. Nothing. No '.git' directory, nothing. Empty. My local git is at version 2.0.2. Here are a couple tidbits from my git config that might be relevant to making out what happened: [branch] autosetuprebase = always rebase = preserve [pull] rebase = true [rebase] autosquash = true autostash = true [alias] co = checkout For example I have git pull set to always do a rebase instead of a merge, so that part of the output above is normal. I can recover the data. I don't think there were any git objects other than some unimportant stashes that hadn't been pushed to other repos, but I'd like to know what happened. I have checked for: Messages in dmesg or the systemd journal. Nothing even remotely relevant. There is no indication of drive or file system failure (LVM + LUKS + EXT4 all look normal). There is nothing in lost+found. I didn't run anything else. There is nothing in the history I'm not showing above, and no other terminals were used during this time. There are no rm commands floating around that might have executed in the wrong CWD, etc. Poking at another git repo in another directory shows no apparent abnormality executing git pulls. What else should I be looking for here?
Yes, git ate my homework. All of it. I made a dd image of this disk after the incident and messed around with it later. Reconstructing the series of events from system logs, I deduce what happened was something like this: A system update command (pacman -Syu) had been issued days before this incident. An extended network outage meant that it was left re-trying to download packages. Frustrated at the lack of internet, I'd put the system to sleep and gone to bed. Days later the system was woken up and it started finding and downloading packages again. Package download finished sometime just before I happened to be messing around with this repository. The system glibc installation got updated after the git checkout and before the git pull. The git binary got replaced after the git pull started and before it finished. And on the seventh day, git rested from all its labors. And deleted the world so everybody else had to rest too. I don't know exactly what race condition occurred that made this happen, but swapping out binaries in the middle of an operation is certainly not nice nor a testable / repeatable condition. Usually a copy of a running binary is stored in memory, but git is weird and something about the way it re-spawns versions of itself I'm sure led to this mess. Obviously it should have died rather than destroying everything, but that's what happened.
How did `git pull` eat my homework?
1,331,066,898,000
If I tar a folder that is a git repository, can I do so without including the .git related files? If not, how would I go about doing that via a command?
Have a look at git help archive or git archive --help. The git subcommand archive allows you to make archives containing only files tracked by git. This is probably what you are looking for. One of many examples listed at the end of the manual: git archive --format=tar.gz --prefix=git-1.4.0/ v1.4.0 >git-1.4.0.tar.gz A current version of git supports creating archives in the following formats: tar tar.gz or tgz zip See git archive --list for a list of formats supported on your system.
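If you'd rather not involve git at all (for instance to include untracked files too), GNU tar and recent bsdtar can skip version-control metadata themselves; a runnable sketch with a made-up directory layout:

```shell
demo=$(mktemp -d)
mkdir -p "$demo/project/.git"
touch "$demo/project/main.c" "$demo/project/.git/config"

# --exclude-vcs drops .git (and .svn, .hg, ...) from the archive.
tar --exclude-vcs -czf "$demo/project.tar.gz" -C "$demo" project
tar -tzf "$demo/project.tar.gz"    # project/ and project/main.c; no .git
```

Unlike git archive, this also packs files git doesn't track, which may or may not be what you want.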
Tar a folder without .git files?
1,331,066,898,000
I have for many years had my entire $HOME directory checked into subversion. This has included all my dotfiles and application profiles, many scripts, tools and hacks, my preferred basic home directory structure, not a few oddball projects and a warehouse worth of random data. This was a good thing. While it lasted. But it's gotten out of hand. The basic checkout is the same across dozens of systems, but not all that stuff is appropriate for all my machines. It doesn't even all play nicely with different distros. I'm in the process of cleaning house -- separating the data out where it belongs, splitting out some scripts as separate projects, fixing some broken links in stuff that should be automated, etc. My intent is to replace subversion with git for the toplevel checkout of $HOME, but I'd like to pare this down to just the things I'd like to have on ALL my systems, meaning dotfiles, a few directories and some basic custom scripts. In reading up online a lot of people seem to be doing this using the symlink approach: clone into a subdirectory then create symlinks from $HOME into the repository. Having had my $HOME under full version control for over a decade, I don't like the idea of this approach and I can't figure out why people seem so averse to the straight checkout method. Are there pitfalls I need to know about specific to git as a top level checkout for $HOME? P.S. Partly as an exercise in good coding, I'm also planning on making my root checkout public on GitHub. It's scary how much security sensitive information I've allowed to collect in files that ought to be sharable without a second thought! WiFi password, un-passphrased RSA keys, etc. Eeek!
Yes, there is at least one major pitfall when considering git to manage a home directory that is not a concern with subversion. Git is both greedy and recursive by default. Subversion will naively ignore anything it doesn't know about, and it stops processing folders either up or down from your checkout when it reaches one that it doesn't know about (or that belongs to a different repository). Git, on the other hand, keeps recursing into all child directories, making nested checkouts very complicated due to namespace issues. Since your home directory is likely also the place where you check out and work on various other git repositories, having your home directory in git is almost certainly going to make your life an impossible mess. As it turns out, this is the main reason people check out their dotfiles into an isolated folder and then symlink into it. It keeps git out of the way when doing anything else in any child directory of your $HOME. While this is purely a matter of preference if checking your home into subversion, it becomes a matter of necessity if using git. However, there is an alternate solution. Git allows for something called a "fake root" where all the repository machinery is hidden in an alternate folder that can be physically separated from the checkout working directory. The result is that the git toolkit won't get confused: it won't even SEE your repository, only the working copy. By setting a couple of environment variables you can tip off git where to find the goods for those moments when you are managing your home directory. Without the environment variables set, nobody is the wiser and your home looks like its classic file-y self. To make this trick flow a little smoother, there are some great tools out there. The vcs-home mailing list seems like the de facto place to start, and the about page has a convenient wrap-up of howtos and people's experiences. Along the way are some nifty little tools like vcsh and mr.
If you want to keep your home directory directly in git, vcsh is almost a must-have tool. If you end up splitting your home directory into several repositories behind the scenes, combine vcsh with mr for a quick and not very dirty way to manage it all at once.
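The bare-bones version of the fake-root trick needs nothing but git itself; a runnable sketch that uses a throwaway $HOME so it can't touch your real files (paths and names are made up):

```shell
HOME=$(mktemp -d); export HOME
cd "$HOME"
echo 'export EDITOR=vi' > .profile

git init -q --bare "$HOME/.dotfiles"       # repository machinery lives here
cfg() { git --git-dir="$HOME/.dotfiles" --work-tree="$HOME" "$@"; }

cfg config status.showUntrackedFiles no    # keep `cfg status` from listing all of $HOME
cfg add .profile
cfg -c user.name=me -c [email protected] commit -q -m "Track .profile"
cfg ls-files                               # .profile
```

Because there is no .git directory in $HOME, plain git commands run in subdirectories never see this repository; only the cfg wrapper does.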
Are there pitfalls to putting $HOME in git instead of symlinking dotfiles?
1,331,066,898,000
I'm using git. I did a normal merge, but it keeps asking this: # Please enter a commit message to explain why this merge is necessary, # especially if it merges an updated upstream into a topic branch. And even if I write something, I can't exit from here. I can't find docs explaining this. How should I do?
This depends on the editor you're using. If it's vim, you can press Esc and type :wq, or press Esc and then Shift+ZZ. Both save the file and exit. You can also check ~/.gitconfig for the configured editor; in my case (cat ~/.gitconfig): [user] name = somename email = [email protected] [core] editor = vim excludesfile = /home/mypath/.gitignore_global [color] ui = auto # other settings here
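If you'd rather avoid vim entirely, point git at an editor you know; a runnable sketch in a throwaway $HOME so your real configuration is untouched:

```shell
HOME=$(mktemp -d); export HOME
git config --global core.editor nano   # or leafpad, emacs, "code --wait", ...
git config --global core.editor        # prints: nano
```

The next time git needs a merge or commit message, it will open that editor instead.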
How to exit a git merge asking for commit message?
1,331,066,898,000
How do I remove a file from a git repository's index without removing the file from the working tree? If I had a file ./notes.txt that was being tracked by git, I could run git rm notes.txt. But that would remove the file. I'd rather git just stopped tracking the file.
You could just use git rm --cached notes.txt. This will keep the file but remove it from the index.
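A runnable sketch of the whole sequence in a throwaway repository:

```shell
demo=$(mktemp -d)
cd "$demo"
git init -q .
echo "remember the milk" > notes.txt
git add notes.txt
git -c user.name=me -c [email protected] commit -q -m "add notes"

git rm --cached notes.txt       # removes the file from the index only
echo notes.txt >> .gitignore    # optional: stop git listing it as untracked
ls notes.txt                    # still on disk
git ls-files                    # no longer tracked
```

Without the .gitignore entry, git status would keep offering notes.txt as an untracked file after the removal is committed.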
How to remove a file from the git index
1,331,066,898,000
I have a regex I stuck in my .gitignore, similar to: (Big|Small)(State|City)-[0-9]*\.csv It didn't work, so I tested it against RegexLab.NET. I then found the gitignore man page, which led me to learn that gitignore doesn't use regexes, but rather fnmatch(3). However, fnmatch doesn't seem to have an equivalent of capture groups. Is this feasible or do I need to break this into three lines?
There's no way to express this regular expression with the patterns that gitignore supports. The problem is not the lack of capture groups (in fact, you are not using capture groups as such), the problem is the lack of a | operator. You need to break this into four lines. BigState-[0-9]*.csv SmallState-[0-9]*.csv BigCity-[0-9]*.csv SmallCity-[0-9]*.csv Note that the patterns match e.g. BigState-4foo.csv, since * matches any sequence of characters. You can't do better than this with glob patterns, unless you're willing to match only a fixed number of digits.
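Shell case globs behave like gitignore's fnmatch patterns for these expressions, so the four lines are easy to sanity-check without a repository:

```shell
classify() {
  case $1 in
    BigState-[0-9]*.csv|SmallState-[0-9]*.csv|BigCity-[0-9]*.csv|SmallCity-[0-9]*.csv)
      echo ignored ;;
    *) echo kept ;;
  esac
}
classify BigState-2023.csv    # ignored
classify BigState-4foo.csv    # ignored: "*" happily matches "foo" too
classify MediumCity-1.csv     # kept
```

Inside a real repository, git check-ignore -v gives the same answer and names the .gitignore line responsible.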
What is the .gitignore pattern equivalent of the regex (Big|Small)(State|City)-[0-9]*\.csv
1,331,066,898,000
─[$] cat ~/.gitconfig [user] name = Shirish Agarwal email = [email protected] [core] editor = leafpad excludesfiles = /home/shirish/.gitignore gitproxy = \"ssh\" for gitorious.org [merge] tool = meld [push] default = simple [color] ui = true status = auto branch = auto Now I want to put my git credentials for github, gitlab and gitorious so each time I do not have to lookup the credentials on the browser. How can this be done so it's automated ? I am running zsh
Using SSH The common approach for handling git authentication is to delegate it to SSH. Typically you set your SSH public key in the remote repository (e.g. on GitHub), and then you use that whenever you need to authenticate. You can use a key agent of course, either handled by your desktop environment or manually with ssh-agent and ssh-add. To avoid having to specify the username, you can configure that in SSH too, in ~/.ssh/config; for example I have Host git.opendaylight.org User skitt and then I can clone using git clone ssh://git.opendaylight.org:29418/aaa (note the absence of a username there). Using gitcredentials If the SSH approach doesn't apply (e.g. you're using a repository accessed over HTTPS), git does have its own way of handling credentials, using gitcredentials (and typically git-credential-store). You specify your username using git config credential.${remote}.username yourusername and the credential helper using git config credential.helper store (specify --global if you want to use this setup everywhere). Then the first time you access a repository, git will ask for your password, and it will be stored (by default in ~/.git-credentials). Subsequent accesses to the repository will use the stored password instead of asking you. Warning: This does store your credentials plaintext in your home directory. So it is inadvisable unless you understand what this means and are happy with the risk.
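For hosted services like GitHub and GitLab the SSH user is always git, so per-host entries in ~/.ssh/config mostly reduce to choosing a key; a sketch (the key file names are hypothetical):

```
Host github.com
    User git
    IdentityFile ~/.ssh/id_ed25519_github

Host gitlab.com
    User git
    IdentityFile ~/.ssh/id_ed25519_gitlab
```

With this in place, git clone [email protected]:user/repo.git picks the right key automatically and never asks for a username.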
how to set up username and passwords for different git repos?
1,331,066,898,000
I would like to keep track of changes in /etc/ Basically I'd like to know if a file was changed, by yum update or by a user and roll it back if I don't like the chage. I thought of using a VCS like git, LVM or btrfs snapshots or a backup program for this. What would you recommend?
It sounds like you want etckeeper from Joey Hess of Debian, which manages files under /etc using version control. It supports git, mercurial, darcs and bazaar. git is the VCS best supported by etckeeper and the VCS users are most likely to know. It's possible that your distribution has chosen to modify etckeeper so its default VCS is not git. You should only be using etckeeper with a VCS other than git if you're in love with the other VCS.
How to keep track of changes in /etc/
1,331,066,898,000
I have the following call structure: Jenkins runs fab -Huser@host set_repository_commit_hash:123abc. set_repository_commit_hash runs git fetch with pty = False. The child process ssh [email protected] git-upload-pack 'user/repository.git' never finishes. I've tried running git fetch in a local clone and that succeeds, but running ssh [email protected] git-upload-pack 'user/repository.git' just returns the following and hangs: 00ab84249d3bb20930c185c08848c60b71f7b28990d6 HEADmulti_ack thin-pack side-band side-band-64k ofs-delta shallow no-progress include-tag multi_ack_detailed agent=git/1.8.4 0041cb34b1c8ca75d478df38c794fc15c5f01cc6377e refs/heads/branch_name 004012577068adf47015001bfa0cff9386d6cdf497ce refs/heads/[...] 003f84249d3bb20930c185c08848c60b71f7b28990d6 refs/heads/master [a couple more lines like the ones above, then:] 0000 Is this a known SSH/Git/Fabric/Jenkins problem? I did strace it, but I haven't recorded the session. I believe it was stuck on a read. Possibly relevant links: Jenkins issue 14752: SCM Polling / Max # of concurrent polling = 1 hangs github polling Why would git-upload-pack (during git clone) hang? tortoisegit issue 1880: tortoisegit fetch hangs due to running/never-exiting tortoisegitplink (especially comment #7) What is this random never-ending 'git-upload-pack' process?
This problem appears to have gone away on its own, as can be expected by a rapidly evolving piece of software. Since I have not observed this issue for probably a couple years now I'd like to extend my thanks to whoever fixed it and consider this question obsolete. If you are experiencing this issue with recent Git versions please consider asking a separate question, since it is likely not the exact same issue.
git-upload-pack hangs indefinitely
1,331,066,898,000
I've found tons of sites that explain how to have git warn you when you're changing line endings, or miscellaneous other techniques to prevent you from messing up an entire file. Assume it's too late for that -- the tree already has commits that toggle the line endings of files, so git diff shows the subtraction of the old file followed by the addition of a new file with the same content. I'm looking for a git configuration option or command-line flag that tells diff to just ignore those -- if two lines differ only by whitespace, pretend they're the same. I need this config option/flag to work for anything that relies on file differences -- diff, blame, even merge/rebase ideally -- I want git to completely ignore trailing whitespace, particularly line endings. How can I do that?
For diff, there's git diff --ignore-space-at-eol, which should be good enough. For diff and blame, you can ignore all whitespace changes with -w: git diff -w, git blame -w. For git apply and git rebase, the documentation mentions --ignore-whitespace. For merge, it looks like you need to use an external merge tool. You can use this wrapper script (untested), where favorite-mergetool is your favorite merge tool; after a conflicted merge, run git -c mergetool.nocr.cmd='/path/to/wrapper/script "$BASE" "$LOCAL" "$REMOTE" "$MERGED"' mergetool --tool=nocr. The result of the merge will be in unix format; if you prefer another format, convert everything to that different format, or convert $MERGED after the merge. #!/bin/sh set -e BASE=$1 LOCAL=$2 REMOTE=$3 MERGED=$4 TEMP=$(mktemp) tr -d '\015' <"$BASE" >"$TEMP" mv -f "$TEMP" "$BASE" TEMP=$(mktemp) tr -d '\015' <"$LOCAL" >"$TEMP" mv -f "$TEMP" "$LOCAL" TEMP=$(mktemp) tr -d '\015' <"$REMOTE" >"$TEMP" mv -f "$TEMP" "$REMOTE" favorite-mergetool "$@" To minimize trouble with mixed line endings, make sure text files are declared as such. See also Is it possible for git-merge to ignore line-ending differences? on Stack Overflow.
Ignore whitespaces changes in all git commands
1,331,066,898,000
I'm getting a bizarre error message while using git: $ git clone [email protected]:Itseez/opencv.git Cloning into 'opencv' Warning: Permanently added the RSA host key for IP address '192.30.252.128' to the list of known hosts. X11 forwarding request failed on channel 0 (...) I was under the impression that X11 wasn't required for git, so this seemed strange. This clone worked successfully, so this is more of a "warning" issue than an "error" issue, but it seems unsettling. After all, git shouldn't need X11. Any suggestions?
It looks like you have ssh configured to always attempt to use X11 forwarding. The error message is GitHub telling you that you can't do X11 forwarding from their servers. Look for ForwardX11 yes in ~/.ssh/config or /etc/ssh/ssh_config and set it to no. This will prevent ssh from attempting to use X11 forwarding for every connection.
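If you want forwarding elsewhere but not for GitHub, you can scope the setting instead of disabling it globally; a sketch for ~/.ssh/config (the host-specific block must come before any global ForwardX11 line, since ssh uses the first value it finds for each option):

```
Host github.com
    ForwardX11 no
```

This silences the warning for GitHub connections while leaving X11 forwarding available for the hosts that actually have a display to offer.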
"X11 forwarding request failed" when connecting to github.com
1,331,066,898,000
Is there a way to make tree not show files that are ignored in .gitignore?
This might help: it lists git-ignored files and converts them into an (almost) compatible filter for tree: function tree-git-ignore { # tree respecting gitignore local ignored=$(git ls-files -ci --others --directory --exclude-standard) local ignored_filter=$(echo "$ignored" \ | egrep -v "^#.*$|^[[:space:]]*$" \ | sed 's~^/~~' \ | sed 's~/$~~' \ | tr "\\n" "|") tree --prune -I ".git|${ignored_filter: : -1}" "$@" }
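The filter-building pipeline in the middle can be exercised on its own, without git or tree; a stand-alone sketch fed with made-up ignore entries instead of git ls-files output:

```shell
ignored='node_modules/
build/

# a comment
/dist'

# Drop comments and blank lines, strip leading/trailing slashes,
# then join the entries with "|" as tree's -I option expects.
filter=$(echo "$ignored" \
  | grep -Ev "^#.*$|^[[:space:]]*$" \
  | sed 's~^/~~' \
  | sed 's~/$~~' \
  | tr '\n' '|')
echo ".git|${filter%|}"    # .git|node_modules|build|dist
```

The ${filter%|} expansion trims the trailing "|" that tr leaves behind, which is what the bash-only ${ignored_filter: : -1} in the function above does.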
Have tree hide gitignored files