1,501,772,142,000
I can use the following command to get a list of directories and their sizes, sorted from largest to smallest (in the example I renamed the directories to numbers to make this easier to understand):

$: du -sk [a-z]* 2>/dev/null | sort -nr
413096  one
106572  two
97452   three
76428   four
55052   five
45068   six
33680   seven
23220   eight
17716   nine

I'm writing a program that requires these directories as input, from largest to smallest, but for matters of convenience it needs them all on one line. Is there a command that will allow me to sort the directories from largest to smallest on one line, without the sizes? I would like the output to be like this:

one two three four five six seven eight nine
If you are confident that the directory names do not contain whitespace, then it is simple to get all the directory names on one line:

du -sk [a-z]*/ 2>/dev/null | sort -nr | awk '{printf $2" "}'

Getting the information into Python

If you want to capture that output in a Python program and turn it into a list, using Python 2.7 or later:

import subprocess
dir_list = subprocess.check_output("du -sk [a-z]*/ 2>/dev/null | sort -nr | awk '{printf $2\" \"}'", shell=True).split()

In Python 2.6:

import subprocess
subprocess.Popen("du -sk [a-z]*/ 2>/dev/null | sort -nr | awk '{printf $2\" \"}'", shell=True, stdout=subprocess.PIPE).communicate()[0].split()

We can also take advantage of Python's features to reduce the amount of work done by the shell and, in particular, to eliminate the need for awk:

subprocess.Popen("du -sk [a-z]*/ | sort -nr", shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE).communicate()[0].split()[1::2]

One could go further and read the du output directly into Python, convert the sizes to integers, and sort on size. It is simpler, though, just to do this with sort -nr in the shell.

Specifying a directory

If the directories whose size you want are not in the current directory, there are two possibilities:

du -sk /some/path/[a-z]*/ 2>/dev/null | sort -nr | awk '{printf $2" "}'

and also:

cd /some/path/ && du -sk [a-z]*/ 2>/dev/null | sort -nr | awk '{printf $2" "}'

The difference between these two is whether /some/path is included in the output or not.
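As a sanity check, the one-line pipeline can be exercised in a throwaway sandbox (a sketch; the directory names big/mid/small and their sizes are made up for the demo):

```shell
# Build three directories of clearly different sizes in a temp sandbox,
# then print their names on one line, largest first.
tmp=$(mktemp -d)
cd "$tmp" || exit 1
mkdir big mid small
dd if=/dev/zero of=big/f   bs=1024 count=300 2>/dev/null
dd if=/dev/zero of=mid/f   bs=1024 count=100 2>/dev/null
dd if=/dev/zero of=small/f bs=1024 count=10  2>/dev/null
du -sk [a-z]*/ 2>/dev/null | sort -nr | awk '{printf $2" "}'
echo
```

Expected order is big/ mid/ small/ (the trailing slashes come from the [a-z]*/ glob).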
Listing directories based on size from largest to smallest on single line
I need to compare a command's output with a string. This is the scenario:

pvs_var=$(pvs | grep "sdb1")

so $pvs_var is:

/dev/sdb1  vg_name  lvm2  a--  100.00g  0

if [[ $($pvs_var | awk '{ print $2 }') = vg_name ]]; then
    do something
fi

The issue is that the output of the if statement is:

-bash: /dev/sdb1: Permission denied

I don't understand this behavior. Thank you
You are attempting to execute the contents of $pvs_var as a command, rather than passing the string to awk. To fix this, add an echo or printf in your if statement:

if [[ $(echo "$pvs_var" | awk '{ print $2 }') = vg_name ]]; then
    do something
fi
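A self-contained illustration of the fix (the pvs output line below is hard-coded, since pvs itself needs LVM and root):

```shell
# Simulated pvs output; in the real script this comes from: pvs | grep "sdb1"
pvs_var="/dev/sdb1 vg_name lvm2 a-- 100.00g 0"

# $pvs_var must be fed to awk as text (via echo), not run as a command.
if [ "$(echo "$pvs_var" | awk '{ print $2 }')" = vg_name ]; then
    echo "volume group matched"
fi
```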
How to treat a command output as text
I was trying to export the Python environment requirements, and this is what I intended to run:

conda list -e > requirements.txt

However, I mistakenly typed this instead:

conda list -e -> requirements.txt

It still works, but the file has fewer lines in it. I want to know what exactly happened. I searched, but I couldn't find an explanation of the - in this case.
The -e option doesn't take any argument after it, so the - is just a regular argument to list. The first and only positional argument conda list has is a regular expression, which causes it to List only packages matching this regular expression. In your case, it will have listed only packages matching - (so, containing a hyphen in their name). That output was then redirected into requirements.txt as you intended. It will be shorter than conda list -e's output because there are some non-matching packages that aren't included.
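conda isn't needed to see the tokenization at work; a tiny stand-in function shows how the shell splits -> requirements.txt into an argument - plus a redirection (show_args and the file name are invented for the demo):

```shell
show_args() { echo "argc=$# argv=$*"; }

out=$(mktemp)
# "->" is not one token: "-" is passed as a regular argument,
# while "> $out" is taken as an output redirection.
show_args -e -> "$out"
cat "$out"     # argc=2 argv=-e -
```

So the command really received two arguments, -e and -, exactly as conda list did.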
What's going on with this dash '-' thing?
I want to modify my output in bash for a better view: simply put a \n before it. How can I change it in .bashrc? For example, this is the default:

root@comp:$ abc
bash: abc: command not found

I want this:

root@comp:$ abc

bash: abc: command not found
You can trap the DEBUG pseudo-signal:

trap 'printf "\n"' DEBUG

The trapped command printf "\n" will be run before each command is executed, unlike PROMPT_COMMAND, which is run after the command is executed. You can add this to your ~/.bashrc to make it permanent. Example:

$ abc
No command 'abc' found, did you mean: ....
$ trap 'printf "\n"' DEBUG
$ abc

No command 'abc' found, did you mean: ....
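The effect can be checked in a non-interactive bash as well (a sketch; the trap prints a visible marker instead of a bare newline so the ordering is obvious):

```shell
# The DEBUG trap fires before each simple command, so the marker
# appears before the command's own output.
bash -c 'trap "printf \"[before] \"" DEBUG; echo hello'
# prints: [before] hello
```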
Newline ("\n") before output in bash
I want to run a command and show the output on the screen while also writing it to a log file. Currently I use tee -a, but the problem is that tee doesn't preserve colours, and I have not been able to find a way to do that.
tee doesn't know anything about colors. But some applications produce colored output only when their output goes to a terminal, not when it goes to a regular file or to a pipe. In such cases, check whether the application can be told to produce colored output anyway. For example, under OSX, for ls, you need to set the environment variable CLICOLOR_FORCE.

If an application behaves differently when its output is a terminal and can't be configured, then run it in a terminal. The script utility runs a command in a terminal and records the output, escape sequences, screen redraws and all:

script -q brew.log brew …

(But doesn't brew keep logs already? Maybe if you set HOMEBREW_LOGS?)
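The root cause is easy to demonstrate without any particular application: programs decide with the isatty check, which the shell exposes as test -t (a sketch):

```shell
# Mimic an application that colors its output only on a terminal.
emit() {
    if [ -t 1 ]; then
        echo "stdout is a terminal: would emit colors"
    else
        echo "stdout is a pipe/file: plain output"
    fi
}

emit          # terminal branch (when the script itself is attached to a tty)
emit | cat    # through a pipe, as with tee: plain branch
```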
Show output on terminal and output to log file, without using tee
I often use dpkg or aptitude combined with grep when I want to list certain packages available or installed on my system, but I noticed that when I add | grep, the output lines look a little bit different. Here's a pure dpkg output, the first command was typed when the terminal was smaller, the second one when the terminal was maximized: As you can see, the output differs depending on the size of the window -- the spaces are reduced in the case of a smaller one. Now, what happens when we add | grep: A part of the first output was dropped to the second line. But when I maximised the terminal and typed the command once more, the line is in one piece. Moreover, the columns have the same fixed size (the same spaces between them). This is an aptitude output: Both commands were typed in the maximised window, but the grep line has narrower columns, and some text of the third column was cut off. Why does it happen? Is there a way to stop grep from resizing the lines? I don't know how to add an image without changing its parameters, I hope you see what I'm talking about.
It is not grep changing the output; it is dpkg and aptitude. They check whether the output goes to a terminal or to some other command. If it is a terminal, they adapt their own output width to match the terminal size. If the output does not go to a terminal, the command has no idea what column size would be appropriate (the output might as well end up in some file). The same happens with ls: compare ls and ls | cat.

There is no general way to solve this, but some commands have specific options for it. For example, aptitude has --disable-columns and -w:

--disable-columns
    This option causes aptitude search and aptitude versions to output their results without any special formatting. In particular: normally aptitude will add whitespace or truncate search results in an attempt to fit its results into vertical "columns". With this flag, each line will be formed by replacing any format escapes in the format string with the corresponding text; column widths will be ignored.

-w <width>, --width <width>
    Specify the display width which should be used for output from the search command (by default, the terminal width is used).

The man page of dpkg says:

COLUMNS
    Sets the number of columns dpkg should use when displaying formatted text. Currently only used by -l.
Why does grep change the length of output lines?
I have a Python script. It has a SimpleLogger with sys.stdout as output_stream:

logger = SimpleLogger(level=LogLevel.DEBUG)

When I run it in the console, I get the logs properly, but whenever I redirect the output to a file, nothing appears in the target. I tried multiple ways:

python server.py > /tmp/x.log 2>&1
python server.py > /tmp/x.log

In both cases, /tmp/x.log is empty. I also tried nohup python server.py, but nothing was written to nohup.out.
This is probably just due to buffering. You will only see something in the file when enough output has accumulated. You can try using python -u to ask for unbuffered output, or set the environment variable PYTHONUNBUFFERED to any non-empty string, as documented in the Python command line documentation, or add a .flush() call after each .debug() or similar call.
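The buffering effect can be reproduced from the shell: write a line, then kill the process before it exits cleanly; only the -u run has flushed to the file (a sketch, assuming python3 is on the PATH):

```shell
buffered=$(mktemp); unbuffered=$(mktemp)

# Block-buffered: "hello" sits in the stdio buffer and is lost on SIGTERM.
python3 -c 'import time; print("hello"); time.sleep(30)' > "$buffered" &
pid=$!

# Unbuffered: "hello" reaches the file immediately.
python3 -u -c 'import time; print("hello"); time.sleep(30)' > "$unbuffered" &
pid2=$!

sleep 1
kill "$pid" "$pid2"
wait 2>/dev/null

wc -c < "$buffered"     # nothing was flushed before the kill
wc -c < "$unbuffered"   # "hello\n" was written straight away
```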
python script output won't be directed to file
I understand that set -x puts a Bash user into debug mode, and I feel that working full time in debug mode will help me handle possible problems in Bash better. I experience a problem when working with set -x: when I try to use the native tab completion of my distro (Ubuntu 16.04) to complete a directory's name, I get a very long, messy output. For example, $PWD is /var/www/html/ and I run either of these:

cd ~/u[tab completion to complete u to ulcwe]
cd ~ && cd u[tab completion to complete u to ulcwe]

In both examples I'll get a very long and messy output:

+ return 0
+ local -a toks
+ local quoted x tmp
+ _quote_readline_by_ref '~/u' quoted
+ '[' -z '~/u' ']'
+ [[ ~/u == \'* ]]
+ [[ ~/u == \~* ]]
+ printf -v quoted '~%q' /u
+ [[ ~/u == *\\* ]]
+ [[ ~/u == \$* ]]
++ compgen -d -- '~/u'
+ x='~/ulcwe'
+ read -r tmp
+ toks+=("$tmp")
+ read -r tmp
+ [[ -d != -d ]]
+ [[ -n '' ]]
+ [[ 1 -ne 0 ]]
+ compopt -o filenames
+ COMPREPLY+=("${toks[@]}")
+ return 0
lcwe/

Note the lcwe at the end. The above output is just part of a much larger output. How can I keep working in debug mode full time (set -x) but without all that output when performing tab completion?
This is due to the programmable completion feature of the shell that you are using.

If you're happy with basic tab completion for commands and filenames in bash, then running complete -r in your ~/.bashrc will remove any "fancy" (programmable) completion that involves calling functions etc. in the current shell's environment. Basic filename completion (like what you're doing in the examples in your question) will still work after turning off programmable completion.

Programmable completion is available in some shells and allows one to hook up callback functions that examine the state of the command line etc. to figure out what strings the user might want to insert next. For example, typing ssh and then Space and Tab may invoke a function that parses your ~/.ssh/config file for possible hostnames to connect to, or typing git checkout and then Space and Tab may cause a function to run a git command to figure out what branches are available in the current repository.

Some users rely on programmable completion for speed and/or productivity reasons, but yes, if you have set -x active in the interactive shell session, these actions will produce trace output in the terminal.

I've never been a friend of programmable completion in any shell, as I don't want a simple Tab press to do "magic behind my back" in all sorts of interesting ways. I also think it's a bit lazy, but that's definitely only my personal opinion.
set -x: tab completion results in messy output when working in debug mode
If you redirect the nohup'd application like this:

nohup bash -c "printf \"command\n\"" &> /dev/null

the nohup.out file is not created, but the terminal I ran the command in does not get any output either. How can I keep the terminal output from the command but not create the nohup.out file?
To get both STDOUT and STDERR on the console and in a nohup.out file, execute the following before executing your nohup command:

exec > >(awk '{ print $0; fflush();}' | tee -a nohup.out)
exec 2> >(awk '{ print $0; fflush();}' | tee -a nohup.out >&2)
nohup bash -c "printf \"command\n\"" &

EDIT: If you want nohup.out to not be created, then try this:

exec > >(awk '{ print $0; fflush();}')
exec 2> >(awk '{ print $0; fflush();}')
nohup bash -c "printf \"command\n\"" &

This will not create nohup.out while still displaying STDOUT and STDERR on the console. Put the above lines in a script and then run it in the background.
How to not create the nohup.out file, but keep the terminal output?
I split a gz file using gunzip piped to split:

time gunzip -c file.gz | split -l 500 -d -a 4 - pref_

which generates the following files:

pref_0000
pref_0001

I'd like to pipe those files on to zip them again. I tried the following:

gunzip -c file.gz | split -l 500 -d -a 4 - pref_ | echo "file produced:" -   # Nothing
gunzip -c file.gz | split -l 500 -d -a 4 - pref_ | echo -
gunzip -c file.gz | split -l 500 -d -a 4 - pref_ | echo

Those do not work. How can I get output from the split command? I expect to get the produced file names.
You could use the --filter option of split to invoke zip on each split file:

gunzip -c file.gz | split -l 500 -d -a 4 - pref_ --filter='zip $FILE'
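A runnable variant using gzip in the filter (zip may not be installed everywhere; $FILE is set by GNU split for each chunk, and the extra echo reports the produced names on stdout, which was the other thing asked for):

```shell
cd "$(mktemp -d)" || exit 1

# Two chunks of five lines each; each chunk is compressed and announced.
seq 1 10 |
  split -l 5 -d -a 4 - pref_ --filter='gzip > $FILE.gz; echo "file produced: $FILE"'

ls pref_*
```

Note the single quotes around the filter: $FILE must be expanded by the shell that split spawns, not by the current one.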
redirect split output
I'm learning a bit of Bash by using xargs to list whois records for a bunch of IP addresses. The command used is:

echo "$1" | tr "\n" "\0" | xargs -0 -n 1 -t -P 3 -I % sh -c 'echo "\n 44rBegin whois record -- \n"; whois -h whois.arin.net % ; echo "\n 44rEnd whois record -- \n"'

The executed commands are:

sh -c echo "\n 44rBegin whois record -- \n"; whois -h whois.arin.net 206.190.36.45 ; echo "\n 44rEnd whois record -- \n"
sh -c echo "\n 44rBegin whois record -- \n"; whois -h whois.arin.net 212.146.69.237 ; echo "\n 44rEnd whois record -- \n"
sh -c echo "\n 44rBegin whois record -- \n"; whois -h whois.arin.net 77.238.184.24 ; echo "\n 44rEnd whois record -- \n"

I want the output to appear as if each block of commands executed by sh -c ran sequentially. Instead my output is something like:

 44rBegin whois record
 44rBegin whois record
 44rBegin whois record
whois1 output
 44rEnd whois record --
whois2 output
 44rEnd whois record --
whois3 output
 44rEnd whois record --

How can I fix this problem?
Any time you have multiple processes writing to the same terminal (or file) in parallel, you run the risk of their output getting interspersed (unless you arrange for some sort of locking, or use low-level system calls like write to files opened in append-only mode).

As a first step, you can minimize, but not totally eliminate, the problem by having each shell invocation use command substitution: run the whois command as a subprocess, capture its output, then emit everything combined in one printf operation:

xargs -0 -n 1 -P 3 -I %% sh -c 'printf "\n%s\n%s\n%s\n" " 44rBegin whois record -- " "$(whois -h whois.arin.net %%)" " 44rEnd whois record -- "'

Even better, if you have the flock program available, you can use it to lock each call to that combined printf:

xargs -0 -n 1 -P 3 -I %% sh -c 'who="$(whois -h whois.arin.net %%)"; flock /tmp/who.lock printf "\n%s\n%s\n%s\n" " 44rBegin whois record -- " "$who" " 44rEnd whois record -- "'
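Why the single combined printf helps: one write of a small record is atomic on a pipe, so records come out whole even under -P 3. A whois-free sketch (the begin/body/end markers are made up):

```shell
# Three parallel workers, each emitting a three-line record via ONE printf.
# Small single writes to a pipe are atomic, so records don't interleave.
seq 3 | tr '\n' '\0' |
  xargs -0 -n 1 -P 3 -I %% sh -c \
    'rec="$(printf "begin %%\nbody %%\nend %%")"; printf "%s\n" "$rec"'
```

The records may appear in any order, but each begin line is followed by its own body and end lines.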
Is the output mixed like this because of xargs and how can I fix it?
I want to implement something like this Q/A, but for a sub-shell. Here is a minimal example of what I'm trying:

(subshell=$BASHPID
(kill $subshell &
wait $subshell 2>/dev/null) &
sleep 600)
echo subshell done

How can I make it so that only subshell done is printed, instead of:

./test.sh: line 4:  5439 Terminated              ( subshell=$BASHPID; ( kill $subshell && wait $subshell 2> /dev/null ) & sleep 600 )
subshell done

Edit: I may be wrong on the terminology here; by subshell I mean the process within the first set of brackets.

Update: I want to post the snippet from the actual program for context; the above is a simplification:

# If the subshell below is killed or returns an error, the connected variable won't be set
(if [ -n "$2" ];then
    # code to setup wpa configurations here
    # If wifi key is wrong kill subshell
    subshell=$BASHPID
    (sudo stdbuf -o0 wpa_supplicant -Dwext -i$wifi -c/etc/wpa_supplicant/wpa_supplicant.conf 2>&1 \
        | grep -m 1 "pre-shared key may be incorrect" \
        && kill -s PIPE "$subshell") &
    # More code which does the setup necessary for wifi
) && connected=true
# later json will be returned based on if connected is set
A few notes:

- wait $subshell won't work, as $subshell is not a child of the process you're running wait in. In any case, you're not waiting for the process doing the wait, so it doesn't matter much.
- kill $subshell is going to kill the subshell, but not sleep if the subshell had managed to start it by the time kill was run. You could however run sleep in the same process with exec.
- You can use SIGPIPE instead of SIGTERM to avoid the message.
- Leaving a variable unquoted in list contexts has a very special meaning in bash.

So having said all that, you can do:

(
  subshell=$BASHPID
  kill -s PIPE "$subshell" &
  sleep 600
)
echo subshell done

(replace sleep 600 with exec sleep 600 if you want the kill to kill sleep and not just the subshell, which in this case might not even have had time to run sleep by the time you kill it).

In any case, I'm not sure what you want to achieve with that. sleep 600 & would be a more reliable way to start sleep in the background if that's what you wanted to do (or (sleep 600 &) if you wanted to hide that sleep process from the main shell).

Now with your actual sudo stdbuf -o0 wpa_supplicant -Dwext -i"$wifi" -c/etc/wpa_supplicant/wpa_supplicant.conf command, note that sudo spawns a child process to run the command (if only because it may need to log its status or perform some PAM session tasks afterwards). stdbuf will however execute wpa_supplicant in the same process, so in the end you'll have three processes (in addition to the rest of the script) in wpa_supplicant's ancestry:

1. the subshell
2. sudo, as a child of 1
3. wpa_supplicant (which was earlier running stdbuf), as a child of 2

If you kill 1, that doesn't automatically kill 2. If you kill 2, however, unless it's with a signal like SIGKILL that can't be intercepted, that will kill 3, as sudo happens to forward the signals it receives to the command it runs. In any case, that's not the subshell you'd want to kill here; it's 3, or at least 2.
Now, if it's running as root and the rest of the script is not, you won't be able to kill it so easily. You'd need the kill to be done as root, so you'd need:

sudo WIFI="$wifi" bash -c '
  (echo "$BASHPID" &&
     exec stdbuf -o0 wpa_supplicant -Dwext -i"$WIFI" -c/etc/wpa_supplicant/wpa_supplicant.conf 2>&1
  ) | {
    read pid &&
      grep -m1 "pre-shared key may be incorrect" &&
      kill -s PIPE "$pid"
  }'

That way, wpa_supplicant will be running in the same $BASHPID process as the subshell, as we're making sure of that with exec. We get the pid through the pipe and run kill as root.

Note that if you're ready to wait a little longer,

sudo stdbuf -o0 wpa_supplicant -Dwext -i"$wifi" -c/etc/wpa_supplicant/wpa_supplicant.conf 2>&1 |
  grep -m1 "pre-shared key may be incorrect"

would have wpa_supplicant killed automatically with a SIGPIPE (by the system, so no permission issue) the next time it writes something to the pipe after grep is gone. Some shell implementations do not wait for sudo after grep has returned (leaving it running in the background until it gets SIGPIPEd), and with bash, you can also get that behavior using the grep ... <(sudo ...) syntax, where bash doesn't wait for sudo either after grep has returned. More at Grep slow to exit after finding match?
Silently kill subshell?
What stream does the perf command use? I've been trying to capture it with

(perf stat -x, -ecache-misses ./a.out>/dev/null) 2> results

following https://stackoverflow.com/q/13232889/50305, but to no avail. Why can I not capture the output... it's like letting some fish get away!!
Older versions of perf (~2.6.x)

I'm using perf version 2.6.35.14-106.

Capture all the output

I don't have the -x switch on my Fedora 14 system, so I'm not sure if that's your actual problem or not. I'll investigate on a newer Ubuntu 12.10 system later on, but this worked for me:

$ (perf stat -ecache-misses ls ) > stat.log 2>&1
$
$ more stat.log
maccheck.txt  sample.txt  stat.log

 Performance counter stats for 'ls':

             13209 cache-misses

        0.018231264 seconds time elapsed

I only want perf's output

You could try this; the output from ls gets redirected to /dev/null, while the output from perf (both STDERR and STDOUT) goes to the file stat.log:

$ (perf stat -ecache-misses ls > /dev/null ) > stat.log 2>&1
$ more stat.log

 Performance counter stats for 'ls':

             12949 cache-misses

        0.022831281 seconds time elapsed

Newer versions of perf (3.x+)

I'm using perf version 3.5.7.

Capturing only perf's output

With the newer versions of perf there are dedicated options for controlling where messages get sent. You can send them to a file via the -o|--output option; simply give that switch a filename to capture the output.

-o file, --output file
    Print the output into the designated file.

The alternative is to redirect the output to an alternate file descriptor, 3 for example. All you need to do is set up this alternate file handle prior to streaming to it.

--log-fd
    Log output to fd, instead of stderr. Complementary to --output, and mutually exclusive with it. --append may be used here.
Examples:

3>results  perf stat --log-fd 3          -- $cmd
-or-
3>>results perf stat --log-fd 3 --append -- $cmd

So if we wanted to collect the perf output for the ls command, we could use this:

$ 3>results.log perf stat --log-fd 3 ls > /dev/null
$
$ more results.log

 Performance counter stats for 'ls':

          2.498964 task-clock                #    0.806 CPUs utilized
                 0 context-switches          #    0.000 K/sec
                 0 CPU-migrations            #    0.000 K/sec
               258 page-faults               #    0.103 M/sec
           880,752 cycles                    #    0.352 GHz
           597,809 stalled-cycles-frontend   #   67.87% frontend cycles idle
           652,087 stalled-cycles-backend    #   74.04% backend cycles idle
         1,261,424 instructions              #    1.43  insns per cycle
                                             #    0.52  stalled cycles per insn [55.31%]
     <not counted> branches
     <not counted> branch-misses

       0.003102139 seconds time elapsed

If you use the --append version, the output of multiple commands will be appended to the same log file, results.log in our case.

Installing perf

Installation is pretty trivial.

Fedora:

$ yum install perf

Ubuntu/Debian:

$ apt-get install linux-tools-common linux-tools

References

- System wide profiling
- Tracing on Linux
- perf: Linux profiling with performance counters
- Counting with perf stat
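The --log-fd plumbing can be exercised without perf (which needs hardware counters and often elevated permissions): the stand-in below writes its "stats" to fd 3 the way perf stat --log-fd 3 would (measure and its messages are invented for the demo):

```shell
cd "$(mktemp -d)" || exit 1

# Stand-in for "perf stat --log-fd 3 ls": normal output on fd 1,
# statistics on fd 3.
measure() {
    echo "regular program output"
    echo "stats: 42 cache-misses" >&3
}

# fd 3 is opened onto results.log before the command runs;
# the program's own stdout is discarded.
3>results.log measure > /dev/null
cat results.log    # stats: 42 cache-misses
```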
What stream does perf use?
How would you change the output of the time command from:

real    0m12.304s
user    0m10.187s
sys     0m1.699s

to:

12.30s

EDIT: Using bash (OSX)
Put this in .bashrc:

export TIMEFORMAT=%Rs

Then source ~/.bashrc.

Run type time to find out that time is a shell keyword. Then run man bash (as bash is your shell) and search for "time".
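Since time is a keyword of the shell, its report lands on stderr, which is where you'd capture the reformatted value (a sketch):

```shell
# TIMEFORMAT only affects the "time" keyword, hence the explicit bash.
bash -c '
    TIMEFORMAT=%Rs        # %R = elapsed real time; "s" is a literal suffix
    log=$(mktemp)
    { time sleep 0.2; } 2> "$log"
    cat "$log"            # e.g. 0.204s
'
```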
How to change output of time command?
FILE-A has 100,000 lines. FILE-B is 50 search terms. I'm looking to search FILE-A (CSV or TXT) for the various terms from FILE-B (CSV or TXT) AND -- here is the kicker -- save the results in individual TXT files named after the search terms from FILE-B.

Example:

FILE-A
123
45678
1239870
2349878
39742366876
41967849
789
910
2378
6723

FILE-B
1
2
23
78

Results = "1.txt" with all matching lines from FILE-A, "2.txt" with all lines matching from FILE-A, "23.txt", "78.txt" and so on. So if FILE-B has 50 search terms, I would end up with 50 TXT files, named after the search terms, assuming at least one hit with each term in FILE-A.

I have tried

fgrep -f FILE-B.txt FILE-A.csv >> output.txt

This puts all the lines of FILE-A matching the search terms from FILE-B into one output.txt. I'm instead looking to separate them into individual text files.
Grep + Xargs

xargs -d '\n' sh -c '
    for term; do
        grep "$term" fileA > "$term.txt"
    done
' xargs-sh < fileB

Improved by cas.

Grep + Shell

Generally, using shell loops to read a file is bad practice, but here fileB is much smaller than fileA, so it won't significantly hurt performance:

while IFS= read -r term; do
    grep "$term" fileA > "$term.txt"
done < fileB

Awk

awk 'NR==FNR{pat[$0];next}{for(term in pat){if($0~term){print>term}}}' fileB fileA

NR==FNR{pat[$0];next} reads the first file given as an argument and puts each line in the array pat. {for(term in pat){if($0~term){print>term}}} is self-explanatory: for each term in the array, test whether the current line matches the term, and if so, print it to a file named accordingly.

Not all Awks will allow many files to be open at the same time. One way to tackle this, as suggested by Ed Morton, is to use a close statement together with the append operator:

awk 'NR==FNR{pat[$0];next}{for(term in pat){if($0~term){print>>term;close(term)}}}' fileB fileA
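The awk variant can be checked against a subset of the question's sample data (note it names the output files after the bare term, without the .txt suffix used in the grep loops):

```shell
cd "$(mktemp -d)" || exit 1

printf '%s\n' 123 45678 1239870 2349878 789 910 > fileA
printf '%s\n' 1 78 > fileB

awk 'NR==FNR{pat[$0];next}{for(term in pat){if($0~term){print>>term;close(term)}}}' fileB fileA

cat 1    # lines containing "1":  123, 1239870, 910
cat 78   # lines containing "78": 45678, 2349878, 789
```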
Search File A with Terms from File B and save output to individual TXT files based off search term from File B
I'm quite new to bash and I am trying to learn it by creating some small scripts. I created a small script to look up the DNS entries for multiple domains at the same time. The domains are given as arguments.

COUNTER=0
DOMAINS=()
for domain in "$@"
do
    WOUT_WWW=$(dig "$domain" +short)
    if (( $(grep -c . <<<"$WOUT_WWW") > 1 )); then WOUT_WWW="${WOUT_WWW##*$'\n'}" ; fi
    WITH_WWW=$(dig "www.${domain}" +short)
    if (( $(grep -c . <<<"$WITH_WWW") > 1 )); then WITH_WWW="${WITH_WWW##*$'\n'}" ; fi
    DOMAINS[$COUNTER]="$domain|$WOUT_WWW|$WITH_WWW"
    COUNTER=$(($COUNTER+1))
done

Now I just want to loop through the new "multidimensional" array and print the output like a MySQL table:

+------------------------------+
|  Row 1  |  Row 2  |  Row 3   |
+------------------------------+
|  Value  |  Value  |  Value   |
+------------------------------+

How can I do that?
Using perl's Text::ASCIITable module (which also supports multi-line cells):

print_table() {
    perl -MText::ASCIITable -e '
        $t = Text::ASCIITable->new({drawRowLine => 1});
        while (defined($c = shift @ARGV) and $c ne "--") {
            push @header, $c;
            $cols++
        }
        $t->setCols(@header);
        $rows = @ARGV / $cols;
        for ($i = 0; $i < $rows; $i++) {
            for ($j = 0; $j < $cols; $j++) {
                $cell[$i][$j] = $ARGV[$j * $rows + $i]
            }
        }
        $t->addRow(\@cell);
        print $t' -- "$@"
}

print_table Domain 'Without WWW' 'With WWW' -- \
            "$@" "${WOUT_WWW[@]}" "${WITH_WWW[@]}"

Where the WOUT_WWW and WITH_WWW arrays have been constructed as:

for domain do
    WOUT_WWW+=("$(dig +short "$domain")")
    WITH_WWW+=("$(dig +short "www.$domain")")
done

Which gives:

.---------------------------------------------------------------------.
| Domain            | Without WWW    | With WWW                       |
+-------------------+----------------+--------------------------------+
| google.com        | 216.58.208.142 | 74.125.206.147                 |
|                   |                | 74.125.206.104                 |
|                   |                | 74.125.206.106                 |
|                   |                | 74.125.206.105                 |
|                   |                | 74.125.206.103                 |
|                   |                | 74.125.206.99                  |
+-------------------+----------------+--------------------------------+
| stackexchange.com | 151.101.65.69  | stackexchange.com.             |
|                   | 151.101.1.69   | 151.101.1.69                   |
|                   | 151.101.193.69 | 151.101.193.69                 |
|                   | 151.101.129.69 | 151.101.129.69                 |
|                   |                | 151.101.65.69                  |
+-------------------+----------------+--------------------------------+
| linux.com         | 151.101.193.5  | n.ssl.fastly.net.              |
|                   | 151.101.65.5   | prod.n.ssl.us-eu.fastlylb.net. |
|                   | 151.101.1.5    | 151.101.61.5                   |
|                   | 151.101.129.5  |                                |
'-------------------+----------------+--------------------------------'
Bash output array in table
I haven't had much luck finding an answer to my problem, but maybe I'm not asking for it correctly. I have a process I start up like the following:

nohup ping 127.0.0.1 > log.txt 2>&1 &

Pretty simple command: send all the stdout to log.txt. The only problem is that the command I'm running appends data to log.txt every nth second. I'd like to output just one line to my log.txt file instead of appending to it and running the risk of filling my drive up. Is there a way to output just one line to log.txt that continuously gets overwritten?
Here is a quick and dirty solution that only keeps the last line of output in the log file:

ping localhost | while IFS= read -r line; do
    printf '%s\n' "$line" > log.txt
done

Beware that you now probably have all kinds of race conditions when trying to access the file for reading. "Locking" the file against concurrent access might help. For more information about locking, this question on Stack Overflow might be a good start: How do I synchronize (lock/unlock) access to a file in bash from multiple scripts?
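Minus the ping, the loop reduces to this, which shows that only the last line survives in the log (a sketch):

```shell
log=$(mktemp)

# Each iteration truncates and rewrites the file, so only the most
# recent line remains.
seq 5 | while IFS= read -r line; do
    printf '%s\n' "$line" > "$log"
done

cat "$log"    # 5
```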
Limit stdout from a continuously running process
When I play music with vlc or cvlc in a terminal or console, there is always this (shown below) non-stopping output that prevents me from issuing commands by pressing the ENTER key. I want to disable it. I tried to start vlc with the -q switch for quiet mode, but it only gets rid of the [ ] bracket parts; the rest still remains and continues to grow. So, how can I make vlc completely stop showing this information and still be able to execute command-line commands like next, play, random etc.?

VLC media player 2.0.7 Twoflower (revision 2.0.6-54-g7dd7e4d)
[0x255e418] dummy interface: using the dummy interface module...
libdvdnav: Using dvdnav version 4.2.0
libdvdread: Encrypted DVD support unavailable.
libdvdread: Attempting to use device /dev/sdb1 mounted on /run/media/easl/freyja for CSS authentication
libdvdread: Could not open input: Permission denied
libdvdread: Can't open /dev/sdb1 for reading
libdvdread: Device /dev/sdb1 inaccessible, CSS authentication not available.
libdvdnav:DVDOpenFilePath:findDVDFile /VIDEO_TS/VIDEO_TS.IFO failed
libdvdnav:DVDOpenFilePath:findDVDFile /VIDEO_TS/VIDEO_TS.BUP failed
libdvdread: Can't open file VIDEO_TS.IFO.
libdvdnav: vm: failed to read VIDEO_TS.IFO
[0x24966b8] main playlist: stopping playback
TagLib: MPEG::Header::parse() -- Invalid sample rate.
TagLib: ID3v2.4 no longer supports the frame type TDAT. It will be discarded from the tag.
TagLib: MPEG::Header::parse() -- Invalid sample rate.
TagLib: MPEG::Header::parse() -- Invalid sample rate.
TagLib: ID3v2.4 no longer supports the frame type TDAT. It will be discarded from the tag.
TagLib: MPEG::Header::parse() -- Invalid sample rate.
You should be able to get rid of the output from the libraries by piping stderr away:

cvlc -q mymedia 2> /dev/null

As for the commands, I'm not sure vlc accepts commands from plain stdin, but it sounds like the rc interface might be what you're looking for:

cvlc -q -Irc mymedia 2> /dev/null
How to disable VLC output in command-line mode?
I want to get the vsftpd version into a shell variable. I can get it to the console with ease:

# vsftpd -version
vsftpd: version 2.2.2

I can also get a lot of other info into a variable:

# i=`bash --version 2>&1 | head -n1`; echo "=$i=";
=GNU bash, version 4.1.2(1)-release (x86_64-redhat-linux-gnu)=

(please note the output is between "=" signs). This simple way does not work with vsftpd:

# i=`vsftpd -version 2>&1`; echo "=$i=";
vsftpd: version 2.2.2
==

Please note $i is "" here. What am I doing wrong?
Interestingly enough, my vsftpd writes the version string to stdin. So you probably need to do a rather unusual redirection of stdin to stdout:

i=`/usr/sbin/vsftpd -version 0>&1`

How to find this out: run it under strace (you'll need to do it as root) and look for the string. In my case the log ends like this:

$ strace /usr/sbin/vsftpd -version
...
brk(0)                            = 0x7f835332d000
brk(0x7f835334e000)               = 0x7f835334e000
write(0, "vsftpd: version 3.0.2\n", 22) = 22
exit_group(0)                     = ?
+++ exited with 0 +++

The first argument to write() is the file descriptor (0/1/2 stand for stdin/stdout/stderr respectively).
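Without vsftpd around, the odd redirection is easy to mimic with a stand-in that misbehaves the same way, writing its banner to fd 0 (the writer function and its version string are invented for the demo):

```shell
# Stand-in for vsftpd: writes its version string to stdin (fd 0).
writer() { echo "demo: version 1.0" >&0; }

# Plain capture misses it, because command substitution only collects fd 1:
i=$(writer 2>/dev/null)
echo "=$i="            # ==

# Duplicating fd 1 onto fd 0 makes the text land in the captured stream:
i=$(writer 0>&1)
echo "=$i="            # =demo: version 1.0=
```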
How can I get vsftpd version into shell variable?
1,501,772,142,000
How to see from which file descriptor output is coming? $ echo hello hello $ echo hello 1>&2 hello Both are going to /dev/pts/0, but there are 3 file descriptors: 0, 1 and 2.
Ordinary output happens on file descriptor 1 (standard output). Diagnostic output, as well as user interaction (prompts etc.), happens on file descriptor 2 (standard error), and input comes into the program on file descriptor 0 (standard input). Example of output on standard output/error: echo 'This goes to stdout' echo 'This goes to stderr' >&2 In both instances above, echo writes to standard output, but in the second command, the standard output of the command is redirected to standard error. Example of filtering (removing) one or the other (or both) of the channels of output: { echo 'This goes to stdout' echo 'This goes to stderr' >&2 } >/dev/null # stderr will still be let through { echo 'This goes to stdout' echo 'This goes to stderr' >&2 } 2>/dev/null # stdout will still be let through { echo 'This goes to stdout' echo 'This goes to stderr' >&2 } >/dev/null 2>&1 # neither stdout nor stderr will be let through The output streams are connected to the current terminal (/dev/pts/0 in your case), unless redirected elsewhere as shown above.
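One way to see which descriptor a line arrives on is to send the two streams to separate files; a small self-contained sketch:

```shell
# Temporary files stand in for the terminal, one per stream
outfile=$(mktemp); errfile=$(mktemp)
{
  echo 'This goes to stdout'
  echo 'This goes to stderr' >&2
} > "$outfile" 2> "$errfile"
cat "$outfile"
cat "$errfile"
```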
How to see from which file descriptor output is coming?
1,501,772,142,000
I would like to send the entire file plus an extra line to stdout. How can I do this nicely? So far I did: for LINE in $(cat $INPUT_FILE) do echo $LINE done echo $EXTRA_LINE How can I do this the bash way (for real)?
How about cat -- "$INPUT_FILE" echo "$EXTRA_LINE"
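If the combined output needs to feed another pipeline, the two commands can be grouped so they share a single stdout; a sketch that counts the resulting lines:

```shell
input=$(mktemp)                       # stand-in for $INPUT_FILE
printf 'line1\nline2\n' > "$input"
EXTRA_LINE='extra line'
total=$( { cat -- "$input"; echo "$EXTRA_LINE"; } | wc -l )
echo "$total"
```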
How to concat file and a line on-fly?
1,501,772,142,000
In the past, I've used bash consistently, because it's everywhere. But recently I started to try zsh. I don't want to give up updating my .bashrc file, which is rsync'ed to all my servers. So, in my .zshrc, I sourced my old .bashrc using the command source ~/.bashrc. Everything goes well, except that every time I open a new terminal window with zsh, a bunch of information is printed to the screen. It looks like this: pms () { if [ -n "$1" ] then user="$1" else user="zen" fi python /Users/zen1/zen/pythonstudy/creation/project_manager/project_manager.py $user show "$2" } pmcki () { if [ -n "$1" ] then user="$1" else user="zen" fi python /Users/zen1/zen/pythonstudy/creation/project_manager/project_manager.py $user check_in "$2" } zen1@bogon:~|⇒ These are function definitions in my .bashrc. They're triggered by source ~/.bashrc in my .zshrc file. What I want is for .zshrc to source my .bashrc quietly, with all stderr and stdout output hidden. Is it possible to do that? How?
emulate -R ksh -c 'source ~/.bashrc' This tells zsh to emulate ksh while it's loading .bashrc, so it'll by and large apply ksh parsing rules. Zsh doesn't have a bash emulation mode, ksh is as close as it gets. Furthermore when a function defined in .bashrc is executed, ksh emulation mode will be enabled during the evaluation of the function as well. Hopefully this will solve the errors that you're getting when zsh reads your .bashrc. If it doesn't, it should be easy to tweak your .bashrc so that it works well under both shells for the most part. Make a few parts conditional, such as prompt settings and key bindings which are radically different. if [[ -z $ZSH_VERSION ]]; then bind … PS1=… fi If you really want to hide all output, you can redirect it to /dev/null (source ~/.bashrc >/dev/null 2>&1), but I don't recommend it: you're just hiding errors that indicate that something isn't working, that doesn't make that thing work.
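The conditional guard suggested above can be exercised from any shell; $ZSH_VERSION is only set when zsh itself is running the file (plain POSIX [ ] is used here so the snippet also runs under sh or bash):

```shell
if [ -z "${ZSH_VERSION-}" ]; then
  shell_kind='bash-like'     # bash, sh, dash, ... take this branch
else
  shell_kind='zsh'
fi
echo "$shell_kind"
```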
Source .bashrc in zsh without printing any output
1,501,772,142,000
Unlike the answer to this question (Can a bash script be hooked to a file?) I want to be able to see the content of files that haven't been created yet, as or after they are created. I don't know when they will be created or what they will be named. That solution is for a specific file and actually mentions in the question title creating a "hook" to a specific file. My question is different because I don't want to hook anything, and what I do want is not specific to a particular file. My question's title specifies "...as they are created", which should be a clue that the files I am interested in do not exist yet. I have an application that users use to submit information from a website. My code creates output files when the user is finished. I want to be able to see the content of these files as they are created, similar to the way tail -f works, but I don't know ahead of time what the filenames will be. Is there a way to cat files as they are created, or would I have to somehow create an endless loop that uses find with the -newermt flag? Something like this is the best I can come up with so far: #!/bin/bash # news.sh while true do d=$(date +"%T Today") sleep 10 find . -newermt "$d" -exec head {} + done For clarification, I don't necessarily need to tail the files. Once they are created and closed, they will not be re-opened. Existing files will never change and get a new modification time, and so I am not interested in them.
If on Linux, something like this should do what you are looking for: inotifywait -m -e close_write --format %w%f -r /watch/dir | while IFS= read -r file do cat < "$file" done
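Where inotify is not available, one round of the questioner's find -newer idea can be made reliable with a marker file instead of a parsed timestamp; a sketch:

```shell
dir=$(mktemp -d)
marker="$dir/.last-check"
touch "$marker"
sleep 1                          # make sure a new file gets a later mtime
echo 'new content' > "$dir/output.log"
new=$(find "$dir" -type f -newer "$marker" ! -name '.last-check')
cat "$new"
touch "$marker"                  # the next poll only reports newer files
```

In a real watcher this would run inside a loop with a sleep, like the script in the question.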
Is there a way to cat files as they are created? [duplicate]
1,501,772,142,000
I saw this line while reading a blog on IFS that is : for i in $(<test.txt) And thought that $(<test.txt) prints the file contents to STDOUT. I maybe wrong in this, but out of curiosity I tried to do it on shell. So picked up a random file named array having random data and First did a cat array that gave me this : amit@C0deDaedalus:~/test$ amit@C0deDaedalus:~/test$ cat array 1) Ottawa Canada 345644 2) Kabul Afghanistan 667345 3) Paris France 214423 4) Moscow Russia 128793 5) Delhi India 142894 And then did $(<array) that gave me this : amit@C0deDaedalus:~/test$ $(<array) 1) Ottawa Ca: command not found I only know that < is used for input redirection but not getting exactly what is being interpreted by shell as a command here. Can anyone explain the concept behind this weird output in shell ? Update :- On running set -x it gives this : amit@C0deDaedalus:~/test$ $(<array) + '1)' Ottawa Canada 345644 '2)' Kabul Afghanistan 667345 '3)' Paris France 214423 '4)' Moscow Russia 128793 '5)' Delhi India 142894 + '[' -x /usr/lib/command-not-found ']' + /usr/lib/command-not-found -- '1)' 1): command not found + return 127 amit@C0deDaedalus:~/test$
The $(command) syntax executes command in a subshell environment and replaces itself with the standard output of command. And, as Bash Manual says, $(< file) is just a faster equivalent of $(cat file) (that's not a POSIX feature, though). So when you run $(<array), Bash performs that substitution, then it uses the first field as the command's name and the rest of the fields as command's arguments: $ $(<array) 1): command not found I don't have any 1) command/function, so it prints an error message. But in your specific scenario, you are getting a different error message probably because you modified the IFS variable: $ IFS=n; $(<array) 1) Ottawa Ca: command not found Edit 1 My guess is that your IFS was somehow modified, so that's why your shell tried to execute 1) Ottawa Ca instead of 1). After all, you were reading an IFS-related article. I wouldn't be surprised if your IFS ended up with a weird value. The IFS variable controls what is known as word splitting or field splitting. It basically defines how the data will be parsed by the shell in an expansion context (or by other commands like read). Bash manual explains this topic: 3.5.7 Word Splitting The shell scans the results of parameter expansion, command substitution, and arithmetic expansion that did not occur within double quotes for word splitting. The shell treats each character of $IFS as a delimiter, and splits the results of the other expansions into words using these characters as field terminators. If IFS is unset, or its value is exactly <space><tab><newline>, the default, then sequences of <space>, <tab>, and <newline> at the beginning and end of the results of the previous expansions are ignored, and any sequence of IFS characters not at the beginning or end serves to delimit words. 
If IFS has a value other than the default, then sequences of the whitespace characters space, tab, and newline are ignored at the beginning and end of the word, as long as the whitespace character is in the value of IFS (an IFS whitespace character). Any character in IFS that is not IFS whitespace, along with any adjacent IFS whitespace characters, delimits a field. A sequence of IFS whitespace characters is also treated as a delimiter. If the value of IFS is null, no word splitting occurs. Explicit null arguments ("" or '') are retained and passed to commands as empty strings. Unquoted implicit null arguments, resulting from the expansion of parameters that have no values, are removed. If a parameter with no value is expanded within double quotes, a null argument results and is retained and passed to a command as an empty string. When a quoted null argument appears as part of a word whose expansion is non-null, the null argument is removed. That is, the word -d'' becomes -d after word splitting and null argument removal. Note that if no expansion occurs, no splitting is performed. Here are some examples about IFS and command substitution usage: Example 1: $ IFS=$' \t\n'; var='hello world'; printf '[%s]\n' ${var} [hello] [world] $ IFS=$' \t\n'; var='hello world'; printf '[%s]\n' "${var}" [hello world] In both cases, IFS is <space><tab><newline> (the default value), var is hello world and there's a printf statement. But note that in the first case word splitting is performed, while in the second case it is not (because double-quotes inhibit that behavior). Word splitting occurs in non-quoted expansions. Example 2: $ IFS='x'; var='fooxbar'; printf '[%s]\n' ${var} [foo] [bar] $ IFS='2'; (exit 123); printf '[%s]\n' ${?} [1] [3] Neither ${var} nor ${?} contain any whitespace character, so one may think that word splitting wouldn't be an issue in such cases. But that's not true because IFS can be abused. IFS can hold virtually any value and it's easy to abuse. 
Example 3: $ $(echo uname) Linux $ $(xxd -p -r <<< 64617465202d75) Sat Apr 28 12:46:49 UTC 2018 $ var='echo foo; echo bar'; eval "$(echo "${var}")" foo bar This has nothing to do with word splitting, but note how we can use some dirty tricks to inject code. Related questions: Why does my shell script choke on whitespace or other special characters? Security implications of forgetting to quote a variable in bash/POSIX shells
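The splitting described above is easy to watch with set --, which word-splits its unquoted arguments:

```shell
var='fooxbar'
old_ifs=$IFS
IFS='x'
set -- $var        # unquoted, so the expansion is split on IFS
first=$1
second=$2
IFS=$old_ifs
echo "$first $second"
```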
Why is shell treating a part of the output of $(<file) as a command?
1,501,772,142,000
I'm using find -mtime -2 to find files modified in the past 24 hours or less. The output looks like this. /home/user/logs/file-2014-08-22.log /home/user/logs/file-2014-08-23.log I need to save the first line of the output to a variable and then save the second line to a separate variable. I can't just pipe it. I know you might suggest | grep 22 or 23 but this is part of a bash script that will run many times, there will be a different set of files with different names next time so grep would be too specific. Could awk accomplish this? If so, how? .
Assuming that there are no spaces or the like in any of your filenames, there are a couple of ways of doing this. One is just to use an array: files=( $(find -mtime -2) ) a=${files[1]} b=${files[2]} files will be an array of all the paths output by find in order, indexed from zero. You can get whichever lines you want out of that. Here I've saved the second and third lines into a and b, but you could use the array elements directly too. An alternative if you have GNU find or another with the printf option is to use it in combination with read and process substitution: read junk a b junk < <(find -printf '%p ') This one turns all of find's output into a single line and then provides that line as the input to read, which saves the first word (path) into junk, the second into a, the third into b, and the rest of the line into junk again. Similarly, you can introduce the paste command for the same effect on any POSIX-compatible system: read junk a b junk < <(find -mtime -2 | paste -s) paste -s will convert its input into a single tab-separated line, which read can deal with again. In the general case, if you're happy to execute the main command more than once (not necessary here), you can use sed easily: find | sed -n 2p That will print only the second line of the output, by suppressing ordinary output with -n and selecting line 2 to print. You can also stitch together head and tail for the same effect, which will likely be more efficient in a very long file. All of the above have the same effect of storing the second and third lines into a and b, and all still have the assumption that there are no spaces, tabs, newlines, or any other characters that happen to be in your input field separator (IFS) value in any of the filenames. Note though that the output order of find is undefined, so "second file" isn't really a useful identifier unless you're organising them to be ordered some other way. It's likely to be something close to creation order in many cases, but not all.
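A POSIX-compatible variant without arrays reads the lines one at a time, with the same whitespace-free-name caveat (the paths below are made-up stand-ins for find's output):

```shell
list=$(printf '%s\n' /home/user/logs/a.log /home/user/logs/b.log)
{
  read -r a
  read -r b
} <<EOF
$list
EOF
echo "$a"
echo "$b"
```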
Saving individual output lines into variables
1,501,772,142,000
Scenario: I have SSH'ed to a machine, opened a new screen session, and fired off a script. Some days later I SSH back to that machine, re-attach the screen session and look at the output that has been generated; however, I can't scroll back through the output. From what I can see, screen stores one "screen's worth" of stdout output. If my script has generated 100 lines of output in 48 hours, I can't see it all, just the last 40-odd lines or so. Is there a way to make screen store all stdout from the script I leave running, so I can re-attach the screen and PgUp/PgDn through it as if it were a script running on my local machine? Perhaps screen isn't the most optimal way to do this? Is there a better method for leaving scripts running on remote machines after log out, and being able to re-attach to that process at a later date and view all the output?
Screen does keep a log of past output lines; it's called the “scrollback history buffer” in the Screen documentation. To navigate through the scrollback, press C-a ESC (copy). You can use the arrow and PgUp/PgDn keys to navigate, as well as other keys to search and copy text. Press ESC to exit scrollback/copy mode. By default, Screen only keeps 100 lines' worth. Put a defscrollback directive in your .screenrc to change this figure. If you want a complete log from your script, save it in a file. Run it inside Screen to be able to connect back to the parent shell and easily see if the script is still running, suspend and restart it, etc. Or fork it off with nohup. To monitor the log file while it's growing, use tail -f.
Leave remote command running storing output
1,501,772,142,000
I know that there is a possibility to use tee to copy the contents of output to a file and still output it to the console. However, I can't seem to find a way to prepare shell scripts (like a fixed template) without using tee for either each command in the script or executing the script through a pipe to tee. Thus, I have to call the script each time with a pipe to tee instead of the script doing this automatically for me. I tried using a modified shebang with a pipe but had no success, and I can't seem to find a way to accomplish this. So instead of calling the script like this: ./myscript.sh |& tee scriptout.txt I would like to have the same effect just by calling it like this: ./myscript Of course the script needs to know the filename, which is set in a variable inside the script. How can I accomplish this?
You could wrap the contents of your script in a function and pipe the functions output to tee: #!/bin/bash { echo "example script" } | tee -a /logfile.txt
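A quick check that the grouped tee both appends to the log file and still passes the output through:

```shell
logfile=$(mktemp)        # stand-in for /logfile.txt
seen=$(
  {
    echo "example script"
  } | tee -a "$logfile"
)
echo "$seen"
```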
Prepare shell script for output to file and console
1,501,772,142,000
It would appear that the output from hping is fully buffered when piped to perl for further line-by-line processing, so piping hping to perl doesn't work. hping --icmp-ts example.ca | perl -ne 'if (/Originate=(\d+) Receive=(\d+) Transmit=(\d+)/) { ($o, $r, $t) = ($1, $2, $3); } if (/tsrtt=(\d+)/) { print $r - $o, " ", $o + $1 - $t, "\n"; }' How do I change hping from being fully buffered to being line buffered when piped? Not a duplicate of the following question, since no solution works in OpenBSD base: Turn off buffering in pipe
There are two common solutions, stdbuf and unbuffer. stdbuf is from GNU coreutils; it was added in version 7.5 in 2009, so it has made its way to all current non-embedded Linux systems apart from CentOS 5. It has also been in FreeBSD since version 8.4. No other unix variant has adopted it yet that I know of, and in particular not OpenBSD as of 5.4. unbuffer is an Expect script, and as such is available everywhere Expect is available, which includes pretty much any unix. All BSD variants have it in their port collection, in the expect package. So install the expect package and run unbuffer hping … | perl …
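On a GNU system the stdbuf variant can be sketched with a plain shell loop standing in for hping (stdbuf itself is the only non-POSIX piece):

```shell
# Line-buffer the producer's stdout so each line crosses the pipe as
# soon as it is printed, instead of waiting for a full buffer block
first=$(stdbuf -oL sh -c 'echo one; echo two' | head -n 1)
echo "$first"
```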
turn off buffering for `hping` in OpenBSD
1,501,772,142,000
I am modifying the text following tail -f. I have the following program that monitors a file: tail -vf -c5 /tmp/index \ | cat -n \ | sed s/^[^0-9]*\\\([0-9]\\\)/__\\\1__/g - \ ; The sed successfully changes tail's output. From another terminal I can now do: RED='\033[0;31m'; NC='\033[0m'; printf "I ${RED}love${NC} Stack Overflow\n" 1>>/tmp/index; And the tail monitoring program will display the update, in color. But what I want is for the sed program to add colors to the output by itself. I have tried a bunch of different setups but to no avail, mostly involving adding backslashes here and there. How do I make the tail + sed pipeline add colors to the output?
I wrote a little script years ago that adds colors based on an input regular expression. I have posted it before here. If you paste that script into a file named color in a directory in your $PATH and make it executable, you can do: tail -vf -c5 /tmp/index | cat -n | color -l '^\s*\d+' Or, more manually: tail -vf -c5 /tmp/index | cat -n | perl -pe 's#^.*?\d+#\x1b[31m$&\x1b[0m#;' Or, in sed: tail -vf -c5 /tmp/index | cat -n | sed -E 's#^[^0-9]*[0-9]*#\x1b[31m&\x1b[0m#;' Or you could just do the whole thing in perl: tail -vf -c5 /tmp/index | perl -pe 's#^#\x1b[31m $. \x1b[0m#;'
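A portable variant of the sed colouring inserts the ESC byte via printf instead of relying on sed understanding \x1b (which is a GNU sed extension):

```shell
esc=$(printf '\033')                      # the literal ESC byte
colored=$(printf '42 hello' | sed -E "s#^[0-9]+#${esc}[31m&${esc}[0m#")
printf '%s\n' "$colored"                  # the leading number now prints in red
```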
Tail -f | sed. Modify text for color
1,501,772,142,000
I want to enter the current date (ideally in the form YYYY-MM-DD hh:mm) as an option in a bash script: I tried /usr/local/bin/growlnotify -t "PMK" -m date but then the variable -m inserts the string "date" in the output. How can I tell the script that it has to use the output value of the "date" command? (I'm using MacOS X 10.6 and the growlnotify command is used to display a popup window with 2 strings ("PMK" and a second one where I'd like to get the current date/time) http://growl.info/ )
writing date as an argument to another command will not get you the output of that command, just the string you typed. In bash you can insert the result from a command by including it in $( ). That leaves that you need to specify form (format) that you want to get from date, and that can be deduced from man date (FORMAT section): date '+%Y-%m-%d %H:%M' This will give you a 24h clock (There are other ways to get this result, as Costas indicated, but this way you can easily change the characters between the year representation e.g. Germans often want /). The full invocation would then be (there is no need to quote PMK, but don't forget the $( ...)): /usr/local/bin/growlnotify -t PMK -m "$(date '+%Y-%m-%d %H:%M')"
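A quick sanity check that the substitution yields the YYYY-MM-DD hh:mm shape the question asks for:

```shell
# Capture the formatted date exactly as growlnotify would receive it
stamp=$(date '+%Y-%m-%d %H:%M')
echo "$stamp"
```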
How to enter date as an option in a bash command? [duplicate]
1,648,150,879,000
So I have 2 different Linux installations. One of them is Ubuntu and the second one is Kali. When I run the date command with no options/arguments on my Ubuntu install I get: michal@ubuntu:~$ date Thu 24 Mar 2022 07:56:23 PM CET When I run the date command with no options/arguments on my Kali install I get: ┌──(michal㉿kali)-[~] └─$ date Thu Mar 24 07:58:34 PM CET 2022 The locale settings are the same on both machines: Ubuntu locale settings: michal@ubuntu:~$ locale LANG=en_US.UTF-8 LANGUAGE= LC_CTYPE="en_US.UTF-8" LC_NUMERIC="en_US.UTF-8" LC_TIME="en_US.UTF-8" LC_COLLATE="en_US.UTF-8" LC_MONETARY="en_US.UTF-8" LC_MESSAGES="en_US.UTF-8" LC_PAPER="en_US.UTF-8" LC_NAME="en_US.UTF-8" LC_ADDRESS="en_US.UTF-8" LC_TELEPHONE="en_US.UTF-8" LC_MEASUREMENT="en_US.UTF-8" LC_IDENTIFICATION="en_US.UTF-8" LC_ALL= and Kali locale settings: ┌──(michal㉿kali)-[~] └─$ locale LANG=en_US.UTF-8 LANGUAGE=en_US:en LC_CTYPE="en_US.UTF-8" LC_NUMERIC="en_US.UTF-8" LC_TIME="en_US.UTF-8" LC_COLLATE="en_US.UTF-8" LC_MONETARY="en_US.UTF-8" LC_MESSAGES="en_US.UTF-8" LC_PAPER="en_US.UTF-8" LC_NAME="en_US.UTF-8" LC_ADDRESS="en_US.UTF-8" LC_TELEPHONE="en_US.UTF-8" LC_MEASUREMENT="en_US.UTF-8" LC_IDENTIFICATION="en_US.UTF-8" LC_ALL= Why is the date command output different on the two machines? I want to PERMANENTLY change the Kali output to be the same as my current Ubuntu output: michal@ubuntu:~$ date Thu 24 Mar 2022 07:56:23 PM CET Which file needs to be edited? Where are those settings? I've tried to follow the steps from this thread: How can I change the default date format (using LC_TIME)? but I don't understand what "date's texinfo also explicitly recommends to set LC_TIME to C in order to produce locale independent output." means.
You can tell date how it should format its output:
%% a literal %
%a locale's abbreviated weekday name (e.g., Sun)
%A locale's full weekday name (e.g., Sunday)
%b locale's abbreviated month name (e.g., Jan)
%B locale's full month name (e.g., January)
%c locale's date and time (e.g., Thu Mar 3 23:05:25 2005)
%C century; like %Y, except omit last two digits (e.g., 20)
%d day of month (e.g., 01)
%D date; same as %m/%d/%y
%e day of month, space padded; same as %_d
%F full date; like %+4Y-%m-%d
%g last two digits of year of ISO week number (see %G)
%G year of ISO week number (see %V); normally useful only with %V
%h same as %b
%H hour (00..23)
%I hour (01..12)
%j day of year (001..366)
%k hour, space padded ( 0..23); same as %_H
%l hour, space padded ( 1..12); same as %_I
%m month (01..12)
%M minute (00..59)
%n a newline
%N nanoseconds (000000000..999999999)
%p locale's equivalent of either AM or PM; blank if not known
%P like %p, but lower case
%q quarter of year (1..4)
%r locale's 12-hour clock time (e.g., 11:11:04 PM)
%R 24-hour hour and minute; same as %H:%M
%s seconds since the Epoch (1970-01-01 00:00 UTC)
%S second (00..60)
%t a tab
%T time; same as %H:%M:%S
%u day of week (1..7); 1 is Monday
%U week number of year, with Sunday as first day of week (00..53)
%V ISO week number, with Monday as first day of week (01..53)
%w day of week (0..6); 0 is Sunday
%W week number of year, with Monday as first day of week (00..53)
%x locale's date representation (e.g., 12/31/99)
%X locale's time representation (e.g., 23:13:48)
%y last two digits of year (00..99)
%Y year
%z +hhmm numeric time zone (e.g., -0400)
%:z +hh:mm numeric time zone (e.g., -04:00)
%::z +hh:mm:ss numeric time zone (e.g., -04:00:00)
%:::z numeric time zone with : to necessary precision (e.g., -04, +05:30)
%Z alphabetic time zone abbreviation (e.g., EDT)
In your case the command would be as follows: date +"%a %d %b %Y %r %Z" By setting an alias, you can change the behavior of the date command: alias date='date +"%a %d %b %Y %r %Z"' You can put the alias in your ~/.bashrc to make the change permanent for your current user.
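Note that aliases are not expanded in non-interactive scripts; a small function is a drop-in alternative that works in both contexts. The wrapper name below is made up, the format string is the one from the answer, and an English locale is assumed:

```shell
# Hypothetical wrapper; unlike `alias date=...` it also works inside scripts
mydate() {
  date +"%a %d %b %Y %r %Z"
}
out=$(mydate)
echo "$out"
```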
PERMANENTLY change the output format of the 'date' command in my Kali Linux
1,648,150,879,000
This question: Save all the terminal output to a file, except after the fact. Meaning that instead of preparing to record or pipe all output to a file, I am dealing with output that has already taken place, and that I omitted to record to a file. Rather than spend minutes scrolling up 7000 lines of output, copying and pasting that to a document, I have to think there is an easier way to get the current output. Considering that this may depend upon the terminal emulator: I am using Konsole and zsh in this case. How can I save the terminal output to a file after the fact?
With konsole, File->Save output as works, as does CTRL-SHIFT-S, but you will only save what is in the buffer.
Save all terminal output to a file, after the fact
1,648,150,879,000
I have a command running for a long time, which I don't want to disturb. However, I would like to keep check on the process (most of the time remotely). I am constantly monitoring the process via commands like top, iotop, stat etc. The process is a terminal based process which wasn't started via screen or tmux or similar. So the only way to check the output is using physical access. I know that /proc contains lot of info about the process. So I was wondering if it also can display the output (or even just the last batch of output -- char/word/line). I searched in /proc/<pid>/fd, but couldn't find anything useful. Below is the output of ls -l /proc/26745/fd/* lrwx------ 1 user user 64 Oct 28 13:19 /proc/26745/fd/0 -> /dev/pts/17 lrwx------ 1 user user 64 Oct 28 13:19 /proc/26745/fd/1 -> /dev/pts/17 lrwx------ 1 user user 64 Sep 27 22:27 /proc/26745/fd/2 -> /dev/pts/17 Any pointers?
I would use strace for that: strace -qfp PID -e trace=write -e write=1,2 That will trace all write(2) system calls of PID and its child processes, and hexdump the data written to file descriptors 1 and 2. Of course, that won't let you see what the process has already written to the tty, but it will start monitoring all writes from that point on. Also, strace is not amenable to changing its output format -- you should explore using gdb(1) or write a small program using ptrace(2) if you need more flexibility.
Get latest output of a running command
1,648,150,879,000
When my computer is slowing down, I usually run ps aux --sort -rss to find out which process consumes too much memory. There may be a lot of processes. How do I see only a few of them?
You can tell ps to sort its output. But this seems to be a use case for top or htop. Press M to sort processes by memory and P to sort by CPU time.
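If you want to stay with ps, trimming the sorted list with head gives the same "only a few" view (procps ps with --sort assumed):

```shell
# Header line plus the four largest resident-set sizes
top5=$(ps aux --sort=-rss | head -n 5)
printf '%s\n' "$top5"
```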
How to decrease ps aux output to a few lines?
1,648,150,879,000
For an automated mailing, to notify the respective members of a specific group, I am looking for commands or a single command line which provides a list of email addresses that can be processed further. Currently I am able to look up the directory in a way like: ldapsearch -h dc.example.com -p 389 -D "EXAMPLE\admin" -x -w "password" -b "DC=example,DC=com" -s sub "(&(objectCategory=person)(objectClass=user)(sAMAccountName=*)(memberOf=CN=Developers,OU=Role_Groups,OU=Groups,DC=example,DC=com))" mail \ | grep "mail:" \ | cut -d " " -f 2 This gives me the email addresses of all group members, but not in the format for further processing that I was looking for. [email protected] [email protected] [email protected] ... How do I get the results on one line, i.e. comma or semicolon separated? [email protected];[email protected];[email protected];... Replacing newlines with commas using tr or sed wasn't working (for me).
After some research I found the paste command. Adding | paste -sd ";" made it work the way I was looking for. The final command line I'm using now is ldapsearch -h dc.example.com -p 389 -D "EXAMPLE\admin" -x -w "password" -b "DC=example,DC=com" -s sub "(&(objectCategory=person)(objectClass=user)(sAMAccountName=*)(memberOf=CN=Developers,OU=Role_Groups,OU=Groups,DC=example,DC=com))" mail \ | grep "mail:" \ | cut -d " " -f 2 \ | paste -sd ";"
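The paste -sd step in isolation, with made-up addresses standing in for the ldapsearch output:

```shell
joined=$(printf '%s\n' a@example.com b@example.com c@example.com | paste -sd ';')
echo "$joined"
```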
How to create a list of email addresses from ldapsearch result for further processing?
1,648,150,879,000
I have a process running on a Linux machine that I would like to access the output from. It's in its own container that I'm using docker exec to su into. With ps I can see: UID PID PPID C STIME TTY TIME CMD root 1 0 0 18:45 ? 00:00:00 python ./manage.py runserver I thought I could do this with fg 1 but that did not work. How can I capture the output of this already running process?
Use docker logs <yourcontainer> on the host to read its stdout. Add --follow to keep the output going.
Getting output from a running process
1,648,150,879,000
I have a command that outputs the following: READY Listening.... HELLO READY Listening.... TEST It is speech recognition from pocketsphinx_continuous. I need that output redirected to a file, and it does not seem to be coming from stdout or stderr because I have tried adding 1>log.txt and 2>log.txt and every time they are blank. Here is the kicker: when I add 1>log.txt to the command, there is no longer any output to the console, but log.txt is still blank. Also, when I add | tee log.txt it does not show up in the console and the file is still blank. Is this output coming from stdout, and if so, why is it not being redirected to the file? This question is related to my other question here: Redirect Output of Pocketsphinx_continuous to a file Pocketsphinx is weird and using its arguments to redirect output is not possible for me; in this question I just want to know where this output is coming from (stdout, stderr or some other place), and how to redirect that output. EDIT ls -l /proc/PID/fd/ returns: lrwx------ 1 pi pi 64 Jan 4 04:12 0 -> /dev/pts/2 lrwx------ 1 pi pi 64 Jan 4 04:12 1 -> /dev/pts/2 lrwx------ 1 pi pi 64 Sep 27 22:27 2 -> /dev/pts/2 lrwx------ 1 pi pi 64 Jan 4 04:12 4 -> /dev/snd/pcmC0D0c
The hideous command: script -q -f -c "pocketsphinx_continuous -samprate 48000 -nfft 2048 -hmm /usr/local/share/pocketsphinx/model/en-us/en-us -lm 9745.lm -dict 9745.dic -inmic yes -logfn /dev/null" words.txt & works for me; it logs: READY.... Listening... HELLO to the file, and it is easy to weed out the unwanted stuff. EDIT: If anyone is doing this themselves in the future, remove the & at the end of that command if you are executing it in a console; if you are executing it via Python or another language, keep the & at the end.
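The script-based capture can be tested with a harmless command in place of pocketsphinx_continuous (util-linux script assumed; it wraps the command in a pseudo-terminal, which is why output that only ever appeared "in the console" still lands in the typescript file):

```shell
log=$(mktemp)
script -q -c 'echo READY' "$log" > /dev/null
cat "$log"
```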
Output is in console but not part of stdout or stderr
1,648,150,879,000
My script takes 2 command-line arguments and then asks just a couple of questions; after these questions the script runs by itself. I'm able to pass the command-line arguments by just doing: -bash-3.2$ nohup ./Script.sh 21 7 nohup: appending output to `nohup.out' -bash-3.2$ Is there any way to supply the answers to these to-be-asked questions with nohup?
Instead of using nohup, you could have your script ask these questions interactively and then background and disown the remainder of whatever else it has to do. Example $ more a.bash #!/bin/bash read a echo "1st arg: $a" read b echo "2nd arg: $b" ( echo "I'm starting" sleep 10 echo "I'm done" ) & disown Sample run: $ ./a.bash 10 1st arg: 10 20 2nd arg: 20 I'm starting $ Check on it: $ ps -eaf|grep a.bash saml 6774 1 0 01:02 pts/1 00:00:00 /bin/bash ./a.bash saml 6780 10650 0 01:02 pts/1 00:00:00 grep --color=auto a.bash 10 seconds later: $ I'm done
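The background-and-continue part of the answer in miniature (disown is left out here so the sketch can wait for the background job and inspect the result):

```shell
tmp=$(mktemp)
(
  # the interactive questions were already answered above this point
  echo "working..."
  echo "done" > "$tmp"
) &
bg=$!
wait "$bg"
cat "$tmp"
```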
How do I Nohup an interactive shell-script?
1,648,150,879,000
The program seems cool, but giving it a red color really makes it look like my computer is on fire. I think using grep or similar piping command can do the trick, but I see that it prints ASCII escape codes for colors and removes the special formatting of the output that makes it look like fire.
cacafire ASCII fire (aafire) was created with the now dead libaa. That may actually predate terminal color codes (I don't remember). However, the newer varient is the still maintained libcaca which provides cacafire.
Is it possible to color the output of `aafire`?
1,648,150,879,000
I’m fairly new to scripting and I have a high-level question of how to do something. I want to design a simple game similar to space invaders, with these two main features for now: Bullets drop down from the top of the screen every second. The user can move the home ship around at the bottom line of the screen and dodge the bullets. The issue is that I can't find a way to constantly move the bullets down while still getting input. Most of the code from the main loop: while true #gets user input and updates screen. do read -r -sn1 -t 1 USERIN case $USERIN in D) moveleft ;; C) moveright ;; esac updatescreen #(my code makes sure that at least a second has passed since the last "updatescreen" execution before running this) done I want the screen to update every second, independently of the user input. Right now, the program is choppy when it updates because sometimes it has to wait longer for the input to timeout. It would be nice to have a lower read timeout but if I lower it I get an error: "invalid timeout specification." Is there a way around this or maybe a whole different program organization?
I found a solution. Basically, you run the update loop in the background and have the main loop in the foreground. They can communicate with each other using trap/kill commands. I uploaded a .sh file to github to give a full example. Here's a modified outline of how it works though: Note: you have to use ctrl-c to escape. #!/bin/bash update() { trap "move_left;" $LEFT trap "move_right;" $RIGHT while true; do #output whatever needed done } read_in() { trap "return;" SIGINT SIGQUIT while true; do read -s -n 1 key case $key in C) kill -$RIGHT $game_pid ;; D) kill -$LEFT $game_pid ;; esac done } move_left() { #update position variable } move_right() { #update position variable } update & game_pid=$! read_in kill -9 $game_pid Use the example script linked above for a working example version but there you have it! Just took a little reworking of the program architecture.
Is there a way to execute commands while getting user input?
1,648,150,879,000
I am outputting results from an android shell command to a file, with MS-windows cmd via ADB.exe. It outputs the correct results, but I am getting an extra line between each result. It looks normal in interactive cmd (without extra lines), but when it is saved to a file the additional lines show up. I am using Notepad++ to view the file output. When viewing all symbols, it is showing a CR(carriage return) at the end of each printed line and a CR LF for each blank line. Is it possible to output the results to a file without the extra lines, and if so what could be causing this? Interactively, output direct to terminal D:\>adb shell "ls -l" drwxr-xr-x root root 2009-12-31 19:00 acct drwxrwx--x system cache 2020-03-12 07:14 cache lrwxrwxrwx root root 1969-12-31 19:00 charger -> /sbin/healthd dr-x------ root root 2009-12-31 19:00 config Redirecting to file D:\>adb shell "ls -l" > test.log drwxr-xr-x root root 2009-12-31 19:00 acct drwxrwx--x system cache 2020-03-12 07:14 cache lrwxrwxrwx root root 1969-12-31 19:00 charger -> /sbin/healthd dr-x------ root root 2009-12-31 19:00 config
Try adb shell -T "ls -l" > test.log or, if it fails with error: device only supports allocating a pty: adb shell "ls -l >/data/local/tmp/list"; adb pull /data/local/tmp/list test.log Not all devices support the ssh-inspired -t and -T options, even if your adb client program does. This isn't Windows-specific: even on a Unix system, adb shell "ls -l" > test.log will create a file with undesirable extra carriage returns at the end of the lines.
Remove extra blank lines from CMD adb shell, when redirected to file
1,648,150,879,000
I would like to save a Ruby program coloured terminal output into a png file, output doesn't fit into the screen height, so it is scrollable. Is it possible to save the whole or part of the scrollable terminal window area (not only the visible part of course, but scrolling a bit upwards) into a png file?
You don't have to use a real screen of limited size. Create a virtual screen of the size needed to show all your output at once, then dump that screen or terminal. For example: $ Xvfb :1 -screen 0 100x4000x24 -noreset & $ xterm -geometry 10x200 -display :1 -e \ sh -c 'echo $WINDOWID >/tmp/id;ls -l /etc;sleep 99' & $ DISPLAY=:1 convert x:$(cat /tmp/id) /tmp/out.png $ identify /tmp/out.png /tmp/out.png PNG 79x2604 ... This creates a 100 by 4000 pixel screen, with an xterm 200 lines high. The WINDOWID variable is exported by xterm, and can be given to ImageMagick's convert program to copy the image to a png file. The identify command shows that the whole xterm contents were captured, which would not be the case on my real screen of only 1080 pixels.
How can I save a scrollable terminal window (RoxTerm) into a png image?
1,648,150,879,000
I am on a Debian 8 system trying to determine the processes and their respective runtime of a certain user: $ ps -u <user> PID TTY TIME CMD 26038 ? 00:00:00 php5-fpm 26052 ? 00:00:00 php5-fpm 26950 ? 00:00:00 php5-fpm 27344 ? 00:00:00 php5-fpm 28292 ? 00:00:00 php5-fpm 28691 ? 01:54:21 python3 /usr/lo $ which ps # ps is not aliased or so. /bin/ps Now I additionally want the elapsed time for the respective processes. So I tried: $ ps -o cmd=,etime= -u <user> php-fpm: pool <user> 00:36 php-fpm: pool <user> 00:36 php-fpm: pool <user> 00:24 php-fpm: pool <user> 00:18 php-fpm: pool <user> 00:04 python3 /usr/local/bin/fixw 17:39:44 However I want to have the short process names php-fpm as in the first output, not the long names of the second one. I could not find any way to do it, reading ps's man page. How do I get the CMD of output #1 with the elapsed time of output #2? Solution @StephenKitt's answer was the missing hint. However, I needed to modify it slightly. Strange output: $ ps -o comm=,etime= -u <user> ,etime= php5-fpm php5-fpm php5-fpm php5-fpm php5-fpm ps python3 /usr/lo Separated options work: $ ps -o comm= -o etime= -u <user> php5-fpm 00:55 php5-fpm 00:27 php5-fpm 00:24 php5-fpm 00:13 php5-fpm 00:08 python3 /usr/lo 17:49:38
Despite the column name, the output you’re looking for is comm rather than cmd: ps -o comm= -o etime= -u <user> cmd is a non-standard alias for args and shows the command with all its arguments, including any modifications the process has made. comm is the process name which on Linux by default is the first 15 bytes of the base name of the last file the process executed (or is the same name as its parent process if it didn't execute any file, or the task name for kernel tasks), and can be modified with the PR_SET_NAME prctl(). Note that etime is the amount of time since the process was spawned, and a process can (and often does) run more than one command in their lifetime, so if you see cmd 00:03, that doesn't necessarily mean cmd has been running for 3 seconds, the process that is currently running cmd could very well have run some other command before. $ sh -c 'sleep 3; exec ps -o comm= -o etime=' zsh 13:48 ps 00:03 The process running ps was previously running sh during which it spent most of its time waiting for another process running sleep (after having run sh for a very short time).
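A quick way to see the difference between the two columns is to print both for the current shell; this is illustrative only, since the exact values depend on your shell and ps implementation:

```shell
# comm: the short process name; args (aliased as cmd on Linux): the full
# command line of the process.
short=$(ps -o comm= -p $$)
full=$(ps -o args= -p $$)
printf 'comm: %s\nargs: %s\n' "$short" "$full"
```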
ps: get short command name and elapsed time
1,648,150,879,000
I have a bash script that reads filenames, takes a selection of data, build a table, and then adds the header. Unfortunately, at the point to add the header and give the output file, I have the following error message: ./big_table_rcp.sh: line 153: /tmp/out: Permission denied It is linked with the following line: | cat - out_${scenario}.txt > /tmp/out && mv /tmp/out ${gauge}_${scenario}.txt Does anyone know how to give access to the output file?
You may be getting permission errors because you don't have permissions to access /tmp/out or the /tmp directory. Before the offending line, include something like ls -l /tmp | grep out to see what permissions the /tmp/out file has. In addition, instead of using /tmp/out, use mktemp. tmpfile=`mktemp` your code here | cat - out_${scenario}.txt > "$tmpfile" && mv "$tmpfile" ${gauge}_${scenario}.txt From man mktemp: Create a temporary file or directory, safely, and print its name.
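A minimal, self-contained sketch of the mktemp pattern (the file name /tmp/result_example.txt is made up for the demo):

```shell
# Create a safe temporary file, write into it, then move it into place.
tmpfile=$(mktemp)
printf 'hello\n' > "$tmpfile"
mv "$tmpfile" /tmp/result_example.txt
cat /tmp/result_example.txt
```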
Permission denied on file under /tmp
1,648,150,879,000
All I wanted is to grep for specific lines in an ongoing log and re-direct it to some file.. tailf log | grep "some words" Now, I want the above command output to get re-directed to some file in on-going basis.... I tried, tailf log | grep "some words" >> file But that doesn't seem to work. What am I missing?
The issue is buffering. Use the --line-buffered option to force grep to flush the buffer after every line: tailf log | grep --line-buffered "some words" >> file
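The effect can be simulated with a finite source in place of tailf (printf stands in for the log stream, and the file name is invented):

```shell
# With --line-buffered, each matching line is flushed to the file as soon
# as it is produced instead of sitting in grep's output buffer.
printf 'some words here\nnothing to see\nsome words again\n' \
  | grep --line-buffered "some words" >> /tmp/grepdemo.txt
cat /tmp/grepdemo.txt
```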
how do i redirect output from tailf & grep to a file [duplicate]
1,648,150,879,000
I have the following in an expect script spawn cat version expect -re 5.*.* set VERSION $expect_out(0,string) spawn rpm --addsign dist/foo-$VERSION-1.i686.rpm The cat command is getting the version correctly however it appears to be adding a new line. Since I expect the output to be the following: dist/foo-5.x.x-1.i686.rpm but am getting including the error at the begining the following: cannot access file dist/foo-5.x.x -1.i686.rpm Why is expect adding a new line to the cat command output and is there any way to have this not be done or to fix the output of the cat command?
TCL can read a file directly without the complication of spawn cat: #!/usr/bin/env expect # open a (read) filehandle to the "version" file... (will blow up if the file # is not found) set fh [open version] # and this call handily discards the newline for us, and since we only need # a single line, the first line, we're done. set VERSION [gets $fh] # sanity check value read before blindly using it... if {![regexp {^5\.[0-9]+\.[0-9]+$} $VERSION]} { error "version does not match 5.x.y" } puts "spawn rpm --addsign dist/foo-$VERSION-1.i686.rpm"
Cat in expect script adds new line to end of string
1,648,150,879,000
I often use thelast command to check my systems for unauthorized logins, this command: last -Fd gives me the logins where I have remote logins showing with ip. From man last: -F Print full login and logout times and dates. -d For non-local logins, Linux stores not only the host name of the remote host but its IP number as well. This option translates the IP number back into a hostname. Question: One of my systems is only showing a few days worth of logins. Why is that? What can I do when last only gives me few days? Here is the output in question: root ~ # last -Fd user pts/0 111-111-111-111. Wed Oct 8 20:05:51 2014 still logged in user pts/0 host.lan Mon Oct 6 09:52:01 2014 - Mon Oct 6 09:53:41 2014 (00:01) user pts/0 host.lan Sat Oct 4 10:11:39 2014 - Sat Oct 4 10:12:13 2014 (00:00) user pts/0 host.lan Sat Oct 4 09:31:07 2014 - Sat Oct 4 10:11:00 2014 (00:39) user pts/0 host.lan Sat Oct 4 09:26:04 2014 - Sat Oct 4 09:28:16 2014 (00:02) wtmp begins Sat Oct 4 09:26:04 2014
It is likely that logrotate has archived the log(s) of interest and opened a new one. If you have older wtmp files, specify one of those, as for example: last -f /var/log/wtmp-20141001
last command only shows few days worth of logins
1,648,150,879,000
I have the following simple script: echo "-------------------------- SOA --------------------------------" echo " " echo -n " ---------> "; dig soa "$1" +short | awk '{print $3}' The output is something like this: -------------------------- SOA -------------------------------- ---------> 2019072905 Now my question is can I make an "echo" command after the dig and the output to be something like this: -------------------------- SOA ----------------------------- ---------> 2019072905 <------------- I have tried to search for similar cases but was not able to find any related. Would this be possible? Thanks in advance.
I would do the whole thing in printf instead (note the bash shebang: the <<< here-strings used below are not POSIX sh): #!/bin/bash header='-------------------------- SOA --------------------------' headerLength=$(awk '{print length()}' <<<"$header") value=$(dig soa "$1" +short | awk '{print $3}') valueString="-----------> $value <-------------" valueLength=$(awk '{print length()}' <<<"$valueString") offset=$(((headerLength + valueLength)/2+1)) printf "%s\n\n%${offset}s\n" "$header" "$valueString" This has the advantage of always appearing centered no matter what the length of your value is (using a slightly modified version that just sets value=$1 to illustrate): $ foo.sh 2019072905 -------------------------- SOA -------------------------------- -----------> 2019072905 <------------- $ foo.sh "some random long string" -------------------------- SOA -------------------------------- -----------> some random long string <------------- $ foo.sh "foo" -------------------------- SOA -------------------------- -----------> foo <-------------
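The centering hinges on printf field widths; in isolation (the string is illustrative):

```shell
# "%20s" right-aligns the string in a 20-character field, which is how a
# computed offset can push a value string toward the middle of a header.
printf '%20s\n' '-----> hi <-----'
```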
Next command output on the same line? Bash script
1,648,150,879,000
I would like to do something similar to the following: which someapplciation | cd outputfrompreviouscommand The command which provides a directory and I would like to be able to make that output my current working directory without using a programming language i.e. awk, bash, perl, etc. and to only use the pipe command. To further give an example: which vi provides the output /some/dir I would like my working directory to be moved to that directory which I can test by using the pwd command which should have the output that matches of /some/dir.
You can't use a pipe if the second command you are running doesn't read from its standard input. However, you can do something like cd $(which someapplication) or, since you need a directory name for cd and not an executable name: cd $(dirname $(which someapplication)) The $(...) shell operator executes the command within parentheses and substitutes its output into the command line.
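For example, with a representative path (the exact location of vi varies by system):

```shell
# which returns the executable's full path; dirname strips the final
# component to leave the directory you can cd into.
dirname /usr/bin/vi
```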
Pipe output from one command to another command's non standard input [duplicate]
1,648,150,879,000
In many cases after I find a file using the find command I then want to open the file or cat it or maybe print it. How can I operate on the result from find? For example, : find . -name "myfile.txt" ./docs/myfile.txt : find . -name "myfile.txt" | less does not work because it feeds the string "./docs/myfile.txt" to less, not the contents of the file at the specified path.
Similar to @coffeeMug, this is the more up-to-date way to doing this as it is apparently faster: find . -name "*.log" -exec ls -l '{}' + I'll also point you to CommandLineFu, which is always helpful with these things.
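A throwaway demonstration of the same -exec form (directory and file names are invented):

```shell
# Only the *.log files are handed to ls; the "+" batches the matches into
# as few ls invocations as possible.
demo=$(mktemp -d)
touch "$demo/a.log" "$demo/b.log" "$demo/skip.txt"
find "$demo" -name "*.log" -exec ls -l '{}' +
rm -rf "$demo"
```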
Access a file located with find
1,648,150,879,000
When I run a command like tail ~/SOMEFILE I get, for example: testenv@vps_1:~# tail ~/SOMEFILE This is the content of SOMEFILE. But what if I want to have a carriage return between: testenv@vps_1:~# and the output of: This is the content of SOMEFILE. So the final result would be like this: testenv@vps_1:~# tail ~/SOMEFILE This is the content of SOMEFILE. Or this: testenv@vps_1:~# tail ~/SOMEFILE This is the content of SOMEFILE. Or this: testenv@vps_1:~# tail ~/SOMEFILE This is the content of SOMEFILE. Note: The first example show one line of spacing between the two parts, the second example show two lines, and the third three lines. Is there a way to make sure the tail output (or any other output) for that matter would be spaced down as I've shown in the examples, just for this particular command (not for all commands of course), in Bash?
The simplest option would be printing manually those extra newlines, something like: printf '\n\n\n'; tail ~/SOMEFILE But if you want to: Do this just for tail Not write extra commands with every tail invocation Have a simple yet full control over the quantity of newlines then I recommend you to add a function to your aliases/rc file. For example: # Bash version # In Bash we can override commands with functions # thanks to the `command` builtin tail() { # `local` limit the scope of variables, # so we don't accidentally override global variables (if any). local i lines # `lines` gets the value of the first positional parameter. lines="$1" # A C-like iterator to print newlines. for ((i=1; i<=lines; i++)); do printf '\n' done # - `command` is a bash builtin, we can use it to run a command. # whose name is the same as our function, so we don't trigger # a fork bomb: <https://en.wikipedia.org/wiki/Fork_bomb> # # - "${@:2}" is to get the rest of the positional parameters. # If you want, you can rewrite this as: # # # `shift` literally shifts the positional parameters # shift # command "${@}" # # to avoid using "${@:2}" command tail "${@:2}" } #=============================================================================== # POSIX version # POSIX standard does not demand the `command` builtin, # so we cannot override `tail`. new_tail() { # `lines` gets the value of the first positional parameter. lines="$1" # `i=1`, `[ "$i" -le "$lines" ]` and `i=$((i + 1))` are the POSIX-compliant # equivalents to our C-like iterator in Bash i=1 while [ "$i" -le "$lines" ]; do printf '\n' i=$((i + 1)) done # Basically the same as Bash version shift tail "${@}" } So you can call it as: tail 3 ~/SOMEFILE
Display console output 1 or more lines below
1,648,150,879,000
I have a bash script that is rsyncing a large directory and the --progress function is great, but is it possible to show all this output on a single line? ie. as the files transfer, just spit it the --progress output onto the same line as the last, so that I can watch the progress without the screen scrolling?
Wrapper script Here is a draft of a wrapper script, written in Perl, that will emulate a PTY (so rsync should behave exactly as it would in a terminal), and parses the output so it can keep a running two-line display of the filename and transfer status. It looks like this: src/test.c 142 100% 0.19kB/s 0:00:00 (xfr#28, to-chk=0/30) The first line (filename, src/test.c) will change depending on the current filename output by rsync. The 2nd line will change whenever rsync outputs an updated status line. N.B.: I opted for a 2-line display (yet one that will still not scroll!) instead of a 1-line display, as at least in my typical usage, I end up with long path/filenames that would be too wide when combined with the status line. However, as you'll see below, it would be easy to modify to combine the file/pathname and status into one line. When rsync exits, the script exits with the same exit code (so you can still trap errors, etc.) Rationale Based on discussions with the OP, built-in rsync options were inadequate, their version of rsync is older, and their needs are unique. Thus, I felt that a custom script was the only way to accomplish their goal. Other options would be to use any of the many existing rsync "backup" wrapper utilities already out there, although I am not aware of any that support similar output. 
Source code #!/usr/bin/env perl # Custom progress wrapper for rsync use 5.012; use strict; use warnings; use autodie; use IPC::Run qw/run start pump finish harness/; my $RSYNC=`which rsync`; # Try to get rsync location from PATH chomp $RSYNC; my ($in,$out); # Input and output buffers my $h = harness [ $RSYNC, @ARGV ], '<pty<', \$in, '>pty>', \$out; local $| = 1; # Autoflush output print "\n\n\e[2A\e[s"; # Make room and save cursor position my ($file, $status) = ('',''); # Will hold filename and status lines while ($h->pump) { parse() } parse(); # Don't miss leftover output $h->finish; exit $h->result; # Pass through the exit code from rsync # Parse and display file/status lines from rsync output sub parse { for (split /[\n\r]+/, $out) { $file = $_ if /^\S/; $status = $_ if /^\s/; print "\e[u\e[0J$file\n$status\n"; } $out = ''; # Clear output for next pump } Prerequisites The script requires two non-standard modules: IPC::Run, and IO::Pty. Both of these can be installed with cpan, which comes with Perl. Many, including me, prefer cpanm, which can be installed with the following one-liner: curl -L https://cpanmin.us | perl - App::cpanminus Then, you would run: cpanm IPC::Run IO::Pty Supported terminal types This will work in practically any modern terminal, as it uses simple ANSI cursor movement and clearing codes to continually overwrite the bottom few lines of the screen. Usage Same as rsync itself. Note that you need to specify --progress yourself, but you could easily edit in some default arguments by changing the $h = harness ... line: my $h = harness [ $RSYNC, '--progress', @ARGV ], '<pty<', \$in, '>pty>', \$out; rsync binary location The script attempts to determine the rsync binary's location automatically with which, which will work in nearly all environments. You can also edit the my $RSYNC='...' line to specify a custom location if desired or required (important: change the backticks (`) to single quotes(') in that case.) 
Troubleshooting / extending Error output is not specifically handled, but could be, with some minor modifications to the script. While reasonably robust, this is obviously a "quick" effort that cannot account for all possible outputs from the incredibly complex rsync utility. You may need to adapt it to suit your needs somewhat, which is hopefully reasonably straightforward: all of the output comes into the $out variable, which you can process according to your needs. Conversion to 1-line display instead of 2-line display As mentioned above, I opted for a 2-line non-scrolling display to better accommodate long pathnames. However, converting the output to a 1-line display is trivial. Simply change the print ... line in the parse() sub to something like this: printf "\e[u\e[0J%-30.30s %s\n", $file, $status; or, to do away with ANSI movement codes altogether: printf "\r%-30.30s %-40.40s", $file, $status; STDOUT->flush; # $| = 1 won't help you here Then you'll see something like this instead: src/test.c 142 100% 0.19kB/s 0:00:00 (xfr#28, to-chk=0/30) You might notice that the %-30.30s is a rather arbitrary printf width, and you'd be right. You can employ something like the answer from this question to get the terminal width so you can grow/shrink that size accordingly.
Is it possible to show rsync output on a single line?
1,648,150,879,000
I tried to put the output of my program into a text file. It appends the echo commands to the file properly, but the imagemagick "compare" command is not appended to the file. It just prints the "PSNR value" returned by compare to the terminal. Is there a way to append this command's output to the text file too? Also, if I call my script with "./script.sh > test.txt" it prints nothing but the echos to the file, and the compare results still go to the terminal. Here's a part of my code: ls images/toconvert/ > lsout.txt while read LINE do echo ====================== $LINE ==================== >> psnrdiff.txt echo Jpeg2000 >> psnrdiff.txt compare -metric PSNR images/toconvert/$LINE images/converted/$LINE.jp2 images/psnrDiffs/$LINE.jp2.png >> psnrdiff.txt done < lsout.txt
Various imagemagick commands write their output to STDERR instead of STDOUT. You can redirect STDERR to STDOUT to capture the output: compare -metric PSNR .... >> psnrdiff.txt 2>&1
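A toy version of the same redirection (echo stands in for compare, and the file name is made up):

```shell
# The second echo writes to stderr and would bypass the file without 2>&1.
{ echo to-stdout; echo to-stderr >&2; } >> /tmp/redirdemo.txt 2>&1
cat /tmp/redirdemo.txt
```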
Shell Script Output not written to file properly
1,648,150,879,000
How can I use the output of sed in another script? For example (this doesn't work): sed -n "$COUNTER",1p /domains.csv | wget or sed -n "$COUNTER",1p /domains.csv > /myScript.sh As far as I know > lets me take the output and put it in a file, I'm just unsure of how to use this output as an argument in another script. (I am aware I can declare the output as a variable, and then use it on the next line. I'm interested in how to do this as one "command")
The | takes output and redirects it into stdin. wget needs a command line argument, not stdin so you want to pipe to xargs which will build a command line from stdin. sed -n "$COUNTER",1p /domains.csv | xargs wget Alternatively you can tell wget to take input on stdin sed -n "$COUNTER",1p /domains.csv | wget -i -
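To see how xargs turns stdin into command-line arguments, here echo stands in for wget (the host names are made up):

```shell
# xargs collects the lines from stdin and appends them as arguments to a
# single echo invocation.
printf 'example.com\nexample.org\n' | xargs echo
```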
Using sed output in another script or command
1,648,150,879,000
How can I terminate a process upon specific output from that process? For example, running a Java program with java -jar xyz.jar, I want to terminate the process once the line "Started server on port 8000" appears on stdout.
That can be accomplished with the following script considering that grep -m1 doesn't work for you: #!/bin/bash java -jar xyz.jar &> "/tmp/yourscriptlog.txt" & processnumber=$! tail -F "/tmp/yourscriptlog.txt" | awk '/Started server on port 8000/ { system("kill '$processnumber'"); exit }' Basically, this script redirects the stdout of your java code to a file with the command &> "/tmp/yourscriptlog.txt", the trailing & on the first line makes your code run as a background process and on the next line we have the number of this process with $!. Having the process number and a log file to tail, we can finally kill the process when the desired line is printed (the exit lets awk stop once the match is seen).
Terminate process upon specific output
1,648,150,879,000
I use the command varioutput=$(awk '{print $j}' OFS=, data/damper.test_temp1.csv) because I want to extract the j th value of a line of many values seperated by , . But when I want to use $varioutput it gives me always the whole line. What am I doing wrong? Actually I want to use for ((j=1; j<=20; j++)); do varioutput=$(awk -F, -v jj="$j" '{print $jj}' data/damper.test_temp1.csv) done So I am quite confused now weather I should use it like above or not? I get an error, but when I use it like for ((j=1; j<=20; j++)); do varioutput=$(awk -F, '{print $j}' data/damper.test_temp1.csv) done I still get the line. Now the following was the solution for ((j=1; j<=20; j++)); do varioutput=$(awk -F, -v k=$j '{print $k}' data/damper.test_temp1.csv) done
You’re specifying the output field separator, not the input field separator; use this instead: varioutput=$(awk -F, '{print $j}' data/damper.test_temp1.csv) (or set FS instead of OFS). I’m also assuming that j is a placeholder above, and that you’re replacing it statically with the appropriate value (for example, print $4). To use a shell variable inside the awk script, you must pass it in with -v, e.g. for a shell variable RANK: varioutput=$(awk -F, -v j=$RANK '{print $j}' data/damper.test_temp1.csv) Generally speaking, if you start using AWK for small pieces of a script as in this example, it’s better to use AWK for more of the script.
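The -v mechanism in isolation, with inline sample data instead of the CSV file:

```shell
# Each loop iteration passes the shell variable j into awk as k, selecting
# a different comma-separated field.
for j in 1 2 3; do
  printf 'one,two,three\n' | awk -F, -v k="$j" '{print $k}'
done
```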
awk save value as variable
1,648,150,879,000
I'm running a process and I'm counting the number of threads with ps huH p <PID_OF_U_PROCESS> | wc -l I can run this thread with watch like this; watch -n 1 ps huH p <PID_OF_U_PROCESS> | wc -l This will output the number of threads the process is running, but usually that number doesn't change. How can I only print the new number to screen if it changed from the last time the command was run? For example: 64 65 64 (a few minutes go by) 65 Etc.
You could just pipe to uniq: while ps -o nlwp= -p "$pid"; do sleep 1; done | uniq
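Since uniq collapses only adjacent duplicates, it fits the polling loop exactly; with made-up sample values:

```shell
# The repeated 64s collapse to one line; the change to 65 and back to 64
# still shows as separate lines.
printf '64\n64\n64\n65\n64\n' | uniq
```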
Watch: Only print to screen if output has changed since last output
1,648,150,879,000
I would like to echo a list all in one line, TAB separated (like ls does with files in one folder) for i in one two some_are_very_long_stuff b c; do echo $i; done will print one line per word: one two some_are_very_long_stuff b c instead I would like to break it, like ls without options does: mkdir /tmp/test cd /tmp/test for i in one two some_are_very_long_stuff b c z; do touch $i; done ls will output b one two c some_are_very_long_stuff z
You could use the columns command from GNU autogen. $ seq 60 | columns 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 With zsh, you can use print -C: $ print -C4 {1..20} 1 6 11 16 2 7 12 17 3 8 13 18 4 9 14 19 5 10 15 20 $ print -aC4 {1..20} 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 And if you need to sort them first (like ls does): $ print -oC4 {1..20} 1 14 19 5 10 15 2 6 11 16 20 7 12 17 3 8 13 18 4 9
How do I echo a line with linebreak at the end at window border? [duplicate]
1,648,150,879,000
I currently have 2 bash command line strings I use to gather data needed for a certain task. I was trying to simplify and have only one command used to gather the data without using a script. First I cat a file and pipe into a grep command and only display the integer value. I then copy and paste that value into an equation which will always be constant except for the grepped value from the first command. 1st command: cat /proc/zem0 |grep -i isrs|grep -Eo '[0-9]+$' 2nd command: echo $(( (2147483633-"**grep value**")/5184000 )) I'm stumped as to how I can accomplish this. Any guidance on this would be greatly appreciated!
Here it is as one command: echo $(( (2147483633 - $(grep -i isrs /proc/zem0 | grep -Eo '[0-9]+$') )/5184000 )) How the simplification was done First consider this pipeline: cat /proc/zem0 | grep -i isrs This can be simplified to: grep -i isrs /proc/zem0 Thus, the whole of the first command becomes: grep -i isrs /proc/zem0 | grep -Eo '[0-9]+$' The last change is to substitute the first command into the second using command substitution: $(...). Thus, we replace: echo $(( (2147483633-"**grep value**")/5184000 )) with: echo $(( (2147483633-$(grep -i isrs /proc/zem0 | grep -Eo '[0-9]+$'))/5184000 )) One more simplification If your grep supports perl-style regular expressions, such as GNU grep, then, as suggested by jimmij in the comments, one more simplification is possible: echo $(( (2147483633-$(grep -Pio 'isrs.*?\K[0-9]+$' /proc/zem0))/5184000 ))
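The nesting pattern in isolation, with echo standing in for the grep pipeline and an arbitrary value of 33:

```shell
# A command substitution feeding shell arithmetic; (2147483633-33)/5184000
# truncates toward zero, as $(( )) integer division always does.
echo $(( (2147483633 - $(echo 33)) / 5184000 ))
```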
Using the output of `grep` as variable in second command
1,389,251,246,000
I want to run a command (rsync -av) and only display 10 lines of output at any time. I have found a similar post, but I don't want to redirect it to a file. I want the output to be displayed simultaneously, but only 10 lines at most should be visible. For example: The output has 100 lines, when its finished. At the beginning, it shows the line 1-10. Then 2-11 --> 3-12 --> 4-13 until 91-100 I tried it with: rsync -av /source /destination | tail -f -n 10 It's something like this: rsync -av /source /destination | while read i; do echo -en "\r$i"; done but with 10 lines instead of 1.
I've done such tricks in the build system of an embedded Linux distro. In my case, it was a little different. The build script restricted the scrolling region (via VT100 escape sequences) so that the log was shown in the top N-4 lines of the terminal. The bottom four lines were turned into a static area that was updated with the build progress: what is currently building, percentage progress, and such. A way achieve what you're looking for is this: Move the cursor to the bottom row of the terminal. Print ten blank lines to scroll away any existing material. Set the scrolling region to the bottom ten lines. Run the command. Reset the scrolling region. The escape sequences can be found in numerous references. The following is something I just banged up. It works with bash on an Ubuntu VM I have here. It relies on $(( ... )) arithmetic, and stty supporting -g to save tty settings in a serialized string. I avoided using \e in printf to denote the escape character, which would make this less portable. We interrogate the number of lines from the terminal because the LINES variable might not be exported. (We could instead parse out the rows parameter from the output of stty -a; then we could avoid the whole dance of putting the tty in raw mode, and obtaining the terminal emulator's response using dd. On the other hand, this method works even if the rows value from the tty driver is incorrect.) Save this script as, say last10, make it executable and then try for instance last10 find /etc. 
#!/bin/bash # save tty settings saved_stty=$(stty -g) restore() { stty $saved_stty # reset scrolling region printf "\033[1;${rows}r" # move to bottom of display printf "\033[999;1H" } trap restore int term exit # move to bottom of display printf "\033[999;1H" printf "\n\n\n\n\n\n\n\n\n\n" # Query the actual cursor position printf "\033[6n" # read tty response tty_response= stty raw isig -echo while true; do char=$(dd bs=1 count=1 2> /dev/null) if [ "$char" = "R" ] ; then break; fi tty_response="$tty_response$char" done stty $saved_stty # parse tty_response get_size() { cols=$3 rows=$2 } save_IFS=$IFS IFS='[;R' get_size $tty_response IFS=$save_IFS # set scrolling region to 10 lines printf "\033[$((rows-9));${rows}r" # move to bottom of display printf "\033[999;1H" # run command "$@"
Show a sliding window of output from a program
1,389,251,246,000
Scenario: $ cat libs.txt lib.a lib1.a $ cat t1a.sh f1() { local lib=$1 stdbuf -o0 printf "job for $lib started\n" sleep 2 stdbuf -o0 printf "job for $lib done\n" } export -f f1 cat libs.txt | SHELL=$(type -p bash) parallel --jobs 2 f1 Invocation and output: $ time bash t1a.sh job for lib.a started job for lib.a done job for lib1.a started job for lib1.a done real 0m2.129s user 0m0.117s sys 0m0.033s Here we see that execution of f1 was indeed in parallel (real 0m2.129s). However, diagnostic output looks like execution was sequential. I expected the following diagnostic output: job for lib.a started job for lib1.a started job for lib.a done job for lib1.a done Why does diagnostic output look like sequential execution rather than parallel execution? How to fix the diagnostic output so that it does look like parallel execution?
From the man pages of GNU parallel:

--group
    Group output. Output from each job is grouped together and is only printed when the command is finished. Stdout (standard output) first followed by stderr (standard error). This takes in the order of 0.5ms CPU time per job and depends on the speed of your disk for larger output. --group is the default. See also: --line-buffer --ungroup --tag

[...]

--line-buffer
--lb
    Buffer output on line basis. --group will keep the output together for a whole job. --ungroup allows output to mixup with half a line coming from one job and half a line coming from another job. --line-buffer fits between these two: GNU parallel will print a full line, but will allow for mixing lines of different jobs.

So you should add either --line-buffer or --ungroup to your parallel command (according to your preferred behavior):

$ grep parallel t1a.sh
cat libs.txt | SHELL=$(type -p bash) parallel --line-buffer --jobs 2 f1

$ bash t1a.sh
job for lib.a started
job for lib1.a started
job for lib.a done
job for lib1.a done
GNU parallel: why does diagnostic output look like sequential execution rather than parallel execution?
1,389,251,246,000
I'm using gfind running on MacOS, to find text files. I am trying to see the filenames only and the birthdate of my gfind results, and then sorting them by date, and paging them. Is this achievable? For the moment I am trying

gfind . -name "*.txt" -printf "%f \t\t %Bc\n"

But the results are the following:

todo.txt        Fri Mar 4 17:47:41 2022
99AC1EF5-6BE3-556B-8254-84A8764819E0.txt        Fri Mar 4 17:49:08 2022
chrome_shutdown_ms.txt          Fri Mar 4 17:48:07 2022
index.txt       Fri Mar 4 17:48:05 2022
index.txt       Fri Mar 4 17:48:05 2022
index.txt       Fri Mar 4 17:48:06 2022
index.txt       Fri Mar 4 17:47:46 2022
index.txt       Fri Mar 4 17:48:01 2022
index.txt       Fri Mar 4 17:48:01 2022
index.txt       Fri Mar 4 17:48:05 2022
index.txt       Fri Mar 4 17:48:05 2022
index.txt       Fri Mar 4 17:48:06 2022
index.txt       Fri Mar 4 17:48:06 2022
index.txt       Fri Mar 4 17:47:46 2022
index.txt       Fri Mar 4 17:48:06 2022
LICENSE.txt     Fri Mar 4 17:48:07 2022
english_wikipedia.txt   Fri Mar 4 17:48:07 2022
female_names.txt        Fri Mar 4 17:48:07 2022
male_names.txt          Fri Mar 4 17:48:07 2022

Is there a way to tabulate the output in order to show some consistency as to what it looks like? I would like to only show the filenames and the birthdate in a more elegant way. Thanks a lot in advance!
Here, you can use:

gfind . -name '*.txt' -printf '%-40f %Bc\n'

or

gfind . -name '*.txt' -printf '%40f %Bc\n'

To print the file name left-aligned or right-aligned, padded with spaces to a length of 40 bytes (not characters, nor columns unfortunately). That would align them as long as file names don't contain control, multi-byte, zero-width or double-width characters, and are not longer than 40 bytes. Note that if you put the date first (here using the mtime (%T), not the Btime (%B), which I doubt is what you want as it doesn't reflect anything useful in the life of the file), and use a more useful and unambiguous timestamp format like the standard YYYY-MM-DDTHH:MM:SS[.subsecond]+ZZZZ, then you don't have to worry about alignment and it makes the sorting easier:

find . -name '*.txt' -printf '%TFT%TT%Tz %f\n'
Properly tabulating 'find's output with printf and sorting them by date
1,389,251,246,000
I am running a MySQL script file from a shell script in Linux (CentOS 7). While I can capture the result in a file, I am not able to capture result metadata. Example: My test.sql file looks like this:

USE dbname;
SELECT * FROM `test`;
INSERT INTO test values (3,'Test');

My test.sh script looks like this:

#!/bin/bash
mysql --password=<pwd> --user=<username> --host=<host domain> < test.sql > out.txt

When I execute test.sh from the command line, I can capture the output in out.txt. But MySQL also generates metadata like no of rows affected for commands like INSERT. I am unable to capture that for the last SQL command (please refer to my SQL file example above).
You can increase verbosity. This should be enough: -vv

Reason: it checks with isatty() and, if it finds it is not printing to a terminal, enters batch mode. The verbosity from man and --help:

--verbose, -v
    Verbose mode. Produce more output about what the program does. This option can be given multiple times to produce more and more output. (For example, -v -v -v produces table output format even in batch mode.)

--batch, -B
    Print results using tab as the column separator, with each row on a new line. With this option, mysql does not use the history file. Batch mode results in nontabular output format and escaping of special characters. Escaping may be disabled by using raw mode; see the description for the --raw option.

Depending on what you want you might also want --raw.

Chasing the rabbit

Else one would have to fake a tty, e.g. by using script:

0<&- script -qefc "mysql -u user --password='xxx' --host=host" < test.sql > out.txt

that would capture everything - but then again someone might want that.

Excluding input

For programs that use the library function isatty(), and do not have an override flag for this, one can fake a tty this way as well (compile a minimal C snippet):

echo 'int isatty(int fd) { return 1; }' | \
gcc -O2 -fpic -shared -ldl -o faketty.so -xc -
strip faketty.so    # not needed, but ...
chmod 400 faketty.so    # not needed, but ...

Then run by:

LD_PRELOAD=./faketty.so mysql -u user --password='xxx' --host=host < test.sql > out.txt

or add a shell wrapper, for example faketty:

#! /bin/sh -
LD_PRELOAD=/path/to/faketty.so "$@"

Then

$ faketty mysql ... < foo > bar
How capture MySQL result metadata in a file when run from shell script
1,389,251,246,000
When I run ras-mc-ctl --summary I get the following output:

No Memory errors.
No PCIe AER errors.
No Extlog errors.
No devlink errors.
Disk errors summary:
	0:0 has 15356 errors
	0:2064 has 4669 errors
	0:2816 has 594 errors
No MCE errors.

Now, I'm not particularly concerned about these errors given that presumably even my CD/DVD drive which I haven't used has them given that I only have 3 SATA devices and it is one of them, but I am regardless curious, how does this number notation line up with my physical drives? If I do lsblk I see a similar syntax which has the header MAJ:MIN (presumably Major:Minor), but the numbers there don't line up at all with the ones here. The numbers in lsblk have 8 as major for all my disks and 11 as major for my CD/DVD drive, which does not line up with the numbers given to me by ras-mc-ctl. How do I figure out which drives the numbers in ras-mc-ctl --summary correspond to and what do they mean?
lsblk will give you MAJ:MIN numbers. To calculate the equivalent for ras-mc-ctl, do:

d = (MAJ * 256) + MIN

To go from ras-mc-ctl to lsblk, do:

MAJ = int(d / 256)
MIN = d % 256

For your case:

MAJ = (2064 / 256) = 8
MIN = (2064 % 256) = 16
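The same arithmetic can be checked at a shell prompt. This sketch uses the 2064 device number from the question; 8:16 corresponds to /dev/sdb on a typical Linux system:

```shell
# Convert the combined device number that ras-mc-ctl reports (e.g.
# 2064) into the MAJ:MIN pair that lsblk shows.
d=2064
MAJ=$((d / 256))
MIN=$((d % 256))
echo "$MAJ:$MIN"                # 8:16

# ...and back from lsblk's MAJ:MIN to ras-mc-ctl's single number:
echo $(( MAJ * 256 + MIN ))     # 2064
```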
Interpret disk errors output from ras-mc-ctl --summary
1,389,251,246,000
I'm extracting daily backup archives. I want to see only the new files since the last day. The archives contain lot of already existing files, which I don't want to overwrite, so I use the --skip-old-files option, which is fine. But I'd like to list only those files that were actually extracted and omit those that were skipped because they already exist. Example: My current command is:

tar --verbose --skip-old-files --extract --file=2019-02-10.tar.gz

and the output is (where file1 and file2 already existed and file3 was new):

file1.zip
tar: file1.zip: skipping existing file
file2.zip
tar: file2.zip: skipping existing file
file3.zip

I need only the file3.zip in the output. Is it possible?
If this is the only process writing to the directory then you could create a temporary file, extract the files not in verbose mode, then look at those with a change time newer than the temp file, e.g.

MYTMP=$(mktemp)
tar --skip-old-files --extract --file=2019-02-10.tar.gz
find . -cnewer $MYTMP
rm $MYTMP
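If the directory cannot be guaranteed quiet, an alternative sketch (not from the answer above) is to diff the archive's member list against what already exists on disk; with --skip-old-files those missing members are exactly the files tar would extract. The archive and file names here are made up for the demo:

```shell
# Demo setup: make a tiny archive, then delete one file so it counts
# as "new" relative to the working directory.
cd "$(mktemp -d)"
echo a > file1.zip; echo b > file2.zip; echo c > file3.zip
tar -cf backup.tar file1.zip file2.zip file3.zip
rm file3.zip

# List archive members that are not already present on disk.
tar -tf backup.tar | while IFS= read -r f; do
    [ -e "$f" ] || printf '%s\n' "$f"
done
# file3.zip
```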
Show only the really extracted (non-skipped) files using tar
1,389,251,246,000
I'm using a piped command to migrate a big production DB from one host to another using this command: mysqldump <someparams> | pv | mysql <someparams> And I need to extract the line 23 (or let's say the first X lines) (saved as file or simply in bash output) from the SQL passing from one server to another. What I've tried: Concatenate in output less, at least to see the output scrolling, but no luck mysqldump <someparams> | pv | mysql <someparams> | less Read about sed, but it's not useful to me Using head to write to a file, but it is empty mysqldump <someparams> | pv | mysql <someparams> | head -n 25 > somefile.txt The only requirement I have is that I cannot save this .sql file. Any idea? Thanks
With zsh:

mysqldump <someparams> | pv > >(sed '22,24!d' > saved-lines-22-to-24.txt) | mysql <someparams>

With bash (or zsh):

mysqldump <someparams> | pv | tee >(sed '22,24!d' > saved-lines-22-to-24.txt) | mysql <someparams>

(though beware that as bash doesn't wait for that sed process, it's not guaranteed that saved-lines-22-to-24.txt will be complete by the time you run the next command in the script). Or you could have sed do the writing:

mysqldump <someparams> | pv | sed '22,24 w saved-lines-22-to-24.txt' | mysql <someparams>

To have it as output, with zsh:

{mysqldump <someparams> | pv > >(sed '22,24!d' >&3) | mysql <someparams>} 3>&1

or bash/zsh:

{ mysqldump <someparams> | pv | tee >(sed '22,24!d' >&3) | mysql <someparams>; } 3>&1
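The sed address range used in these pipelines is easy to try on a toy stream; here lines 2-3 of a five-line input stand in for lines 22-24 of the dump:

```shell
# '2,3!d' deletes every line outside the 2-3 range, keeping only it.
printf '%s\n' one two three four five | sed '2,3!d'
# prints:
# two
# three
```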
Get first N lines of output from a pipe operation
1,389,251,246,000
I have a Debian on beaglebone without an X server and I need to get rid of any console output to the framebuffer device. I was trying few things I found, like console=null or kernel argument vga=0, but without luck. Any advice?
You do not have a vga in your BeagleBone. In my Lamobo R1 (ARM like the BB) I am passing to the kernel the parameters

sunxi_ve_mem_reserve=0 sunxi_g2d_mem_reserve=0 sunxi_fb_mem_reserve=0 console=ttyS1,115200n8

and took out the ones:

console=tty1 disp.screen0_output_mode=1920x1080p60

Why these parameters:

sunxi_ve_mem_reserve=0 - This eliminates the reserved memory for the video acceleration engine, saving 80MB.
sunxi_g2d_mem_reserve=0 - This eliminates the reserved memory for the 2D acceleration engine.
sunxi_fb_mem_reserve=0 - This sets the amount of total reserved memory for the framebuffer to 0.
console=ttyS1,115200n8 - Using the console via a PL2303HX USB to UART TTL cable.
console=tty1 - Took it out because the terminal output was using the framebuffer.

Since you are not interested in video output, you might also be interested in a BeagleBone Green, a BeagleBone without a HDMI connector. https://beagleboard.org/green
Turning off console output?
1,389,251,246,000
I want to get the result of the time command into a text file but it's not working, it only puts blank space in the text file. I already tried these commands:

A-
$ x=`time`
$ echo $x > log.txt
$ cat log.txt
$

B-
$ time > log.txt

real	0m0.000s
user	0m0.000s
sys	0m0.000s
$ cat log.txt
$

C-
$ time > log.txt 2>&1

real	0m0.000s
user	0m0.000s
sys	0m0.000s
$ cat log.txt
$

What I really want is this:

$ time > log.txt
$ cat log.txt
real	0m0.000s
user	0m0.000s
sys	0m0.000s
Use the external time command instead of the builtin:

/usr/bin/time -po log.txt true
\time -po log.txt true    # simpler way

For example:

$ \time -po log.txt true
$ cat log.txt
real 0.00
user 0.00
sys 0.00
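If you do want to stay with the bash builtin, another well-known trick is to wrap the command in a group: the builtin writes its report to the shell's stderr, and the group lets that stream be redirected to a file. A minimal sketch:

```shell
# Run in a scratch directory; the builtin 'time' keyword needs bash.
cd "$(mktemp -d)"
bash -c '{ time sleep 1; } 2> log.txt'
cat log.txt    # real/user/sys report, captured in the file
```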
Sending `time` command into text file [duplicate]
1,389,251,246,000
I would like to be able to write the output of Portage commands, along with other commands, that I perform in tty (that is, the screen-wide terminals started with Ctrl+Alt+Fn where n represents an integer between 1 and 6. These terminals are started using the getty command, to my knowledge) where there is no clipboard, to a text file. Now I read on the Ubuntu forums that maybe the Unix command cat might be able to do this, if properly used. Unfortunately, following the command suggested there does not seem to add the complete output of the emerge command to a text file. See I ran:

emerge dev-qt/qtwayland > cat >> /home/fusion809/output.txt

where fusion809 is my username, and it only wrote four lines of output to output.txt, namely:

Calculating dependencies ....... .. ....... done!
[ebuild   R    ] media-libs/mesa-11.0.4  USE="-wayland*" ABI_X86="32*"
[ebuild   R    ] dev-qt/qtgui-5.5.1  USE="-egl* -evdev* -ibus*"
[ebuild   R    ] dev-qt/qtwayland-5.5.1  USE="-egl*"

I also tried:

emerge dev-qt/qtwayland > /home/fusion809/output.txt

and:

emerge dev-qt/qtwayland >> /home/fusion809/output.txt

both of which wrote the same output to output.txt.
You're on the right track. In Unix/Linux there's also an error stream. Every command gets standard input, standard output, and standard error. You've been working with standard output. To also capture the standard error stream from the command use 2>. For example: emerge dev-qt/qtwayland > emerge.out 2> emerge.err Now if you want the standard output and error to go into the same file, use 2>&1 to tell the shell to send the standard error output to the same place as the standard output: emerge dev-qt/qtwayland > emerge.out 2>&1 Also, if you need to reference and learn more, you can always look this up in the shell man page man sh. Thanks for the informative and well-thought question!
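A toy demonstration of the redirections above, with echo standing in for emerge:

```shell
# Work in a scratch directory. The group writes one line to each
# stream; each stream is redirected to its own file.
cd "$(mktemp -d)"
{ echo "normal output"; echo "an error" >&2; } > out.txt 2> err.txt
cat out.txt    # normal output
cat err.txt    # an error
```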
How do I write the output of Portage to a txt file?
1,389,251,246,000
How do I set it so the clear command clears the output that is stored in RAM? From my understanding, konsole keeps screen output in RAM. I want to clear that when I clear the visible part of the screen with the clear command.
Clearing the RAM used by whatever running process is a facility rarely offered to the user. Moreover, unless you precisely know the code the process is running, it is just impossible to know what is being stored where. The visible part of the screen, as well as some variable amount of lines that were previously displayed (1000 per default), are kept in the scrollback buffer, which can be cleared altogether via the View > Clear Scrollback and Reset menu (or keying in Ctrl+Shift+K if you kept default shortcuts) (see §2.1.3). Keep in mind that while no log is kept by konsole, the user might well have:

Copied parts of the screen into the clipboard, (*1)
Saved parts of the screen into whatever file via the File > Save Output As menu option or whatever other mean offered by the shell,

Clearing these parts cannot obviously be achieved by konsole.

1: Selectively clearing the clipboard history would be part of another topic. This is actually possible from the command line thanks to dbus. For example, if running Klipper, firing

qdbus org.kde.klipper /klipper org.kde.klipper.klipper.clearClipboardHistory

would wipe it entirely.
have clear screen clear konsole ram
1,389,251,246,000
I have a daemon that prints some information on to the terminal. I can see this information by typing systemctl status bot.service. This is working well, but this command doesn't listen to the new output, so if I want to see the new output generated, then I need to retype the command. Is there a way to always listen to the daemon and let the output display in the terminal without retyping the command?
There are two ways. You need elevated powers for both (e.g. use sudo, or be a member of the systemd-journal group).

Use journalctl:

journalctl -fu bot

Find the log the output goes to and tail -f it. Very likely it's /var/log/syslog. Then do:

tail -f /var/log/syslog

There will be other entries intermixed though.
How to listen to a daemon output?
1,389,251,246,000
I'm writing a script that shuts down the Apache service, performs a function, and turns it back on. I'd like to get some type of interactive confirmation for each process. I'm not familiar with printf, but I believe it would be required for what I want to do. Here's a portion of the script below. As I turn off the service, I would like it to say what it will do, then report OK or NOT OK on the same line, i.e.

Shutting down Apache service:

a few seconds later...

Shutting down Apache service: OK

#!/bin/bash
# Turn off Apache
echo "Shutting down Apache service:"
service apache2 stop
if ( $(ps -ef | grep -v grep | grep apache2 | wc -l) > 0 )
then
    echo "NOT OK!"
else
    echo "OK!"
fi
In bash, you can use the -n option of echo not to print the end of line:

echo -n Shutting down Apache service:

or, use printf:

printf '%s' 'Shutting down Apache service'

To include the newline in printf, put \n into the template:

printf '%s\n' 'NOT OK!'
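Put together, the two-half status line might look like this sketch (sleep stands in for the actual service stop, and the verdict here is hard-coded for the demo):

```shell
# Print the message without a trailing newline, do the work, then
# print the verdict -- it lands on the same line.
printf '%s' 'Shutting down service: '
sleep 1                       # stand-in for 'service apache2 stop'
printf '%s\n' 'OK!'
# prints: Shutting down service: OK!
```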
Interactive bash script
1,389,251,246,000
I have a professor who stores homework assignments in many files spread across different sub-directories in a lecture folder with the header "TODO:" I'd like to output all these todo's to a single text file in nano instead of navigating from one assignment file to another. I tried to make an alias for this command, since I use it so much, but whenever I try to execute it, the cursor just blinks and nothing happens. alias todo='cd /home/csc103/Desktop/shared/csc103-lectures && grep -Rw "TODO:" --after-context=6 --include="*.cpp" . > todo.txt && nano todo.txt' What am I doing wrong here? Edit by "Nothing Happens" I mean that the cursor keeps blinking and the next prompt doesn't come up. As in the left terminal pane in the image below. However, when I force-quit the process with ctrl-C I do end up in the directory I wanted the todo command alias to take me to. And there is a todo.txt file in there. Also, if it's of any relevance I'm issuing these commands on an Arch Linux install in VirtualBox.
I figured it out, thanks for the pointers guys. The problem here was that grep was recursively searching for "TODO:" in the todo.txt file and then writing those results back to the todo.txt file. When I opened todo.txt it was filled with the same text looped over and over again. Evidently, I should have used the --exclude="todo.txt" option in grep. After adding that, it works perfectly.
Grep alias piped to nano. Nothing happens when command is issued
1,389,251,246,000
This is a follow up to Replace one entire column in one file with a single value from another file combined with R Pass Variable from R to Unix. I am running several scripts (Perl, python and R) in one Unix script and need to pass outputs of these scripts to Unix and combine information from different files these scripts create. I have a working code which is a combination of the above mentioned questions: The output from the R script called getenergie.R is a filename. There are several such filenames which are returned and I need to write into each of these files and replace column 11 of these files with a value which comes from another file (COP1A_report1) and is called value.

RES=($(./../getenergies.R))
for pdbnames in "${RES[@]}"
do
    # write the one value from COP1A_report1 into column 11 of a file and save as TESTING
    value=$(awk -F, 'NR==1{print $2;exit}' ./../COP1A_report1)
    awk '{$11 = v} 1' v="$value" ${pdbnames} > TESTING
    printf "$value ${pdbnames}\n"
done

What I need is a way to loop over this so that it writes one value from COP1A_report1 (row $2, line 1) into column 11 of a file called like the first filename stored in $pdbnames, save it as a unique file and go to COP1A_report1 (row $2, line 2), write that into column 11 of a file called like the second filename stored in $pdbnames and so on... What is the smartest way of doing that? I can imagine something like the code below, but something is wrong with the syntax and I do not get any errors, just an empty value variable. Any ideas?

counter=1
RES=($(./../getenergies.R))
for pdbnames in "${RES[@]}"
do
    value=$(awk -F, 'NR==$counter{print $2;exit}' ./../COP1A_report1)
    # NR=1 needs to be changed to go through the entire list...
    awk '{$11 = v} 1' v="$value" ${pdbnames} > TESTING$counter
    counter=$(echo $counter+1 |bc)
    printf "$value ${pdbnames}\n"
done
You're trying to pass $counter into the awk script, so you need to use double-quotes rather than single-quotes. Single-quotes are for literal strings, double-quotes are for strings with variables in them. Because you're using double-quotes here, that means you have to escape $2 as \$2 in the awk script so that the shell doesn't substitute its second arg (if any, empty string otherwise) into the awk script.

value=$(awk -F, "NR==$counter{print \$2;exit}" ./../COP1A_report1)

You should also quote other variables when you use them, e.g. awk ... > "TESTING$counter" - lack of quotes was harmless in this instance, but always quoting your variables is a good habit to get into. Same for counter=$(echo "$counter"+1 |bc) - not quoting is harmless here, but still bad practice. Finally, you're missing the -v from the awk command when you set v="$value". It should be -v v="$value".

counter=1
RES=($(./../getenergies.R))
for pdbnames in "${RES[@]}"
do
    value=$(awk -F, "NR==$counter{print \$2;exit}" ./../COP1A_report1)
    awk -v v="$value" '{$11 = v} 1' "$pdbnames" > "TESTING$counter"
    counter=$(echo "$counter"+1 |bc)
    printf "$value $pdbnames\n"
done
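As an aside, the line-by-line pairing can also be done without re-running awk for every counter value: read the report once with the shell's read builtin. This is only a sketch; the comma-separated layout with the value in the second field is assumed from the question, and a stand-in report file replaces ./../COP1A_report1:

```shell
# Stand-in for the report file (second comma-separated field holds
# the value for each line).
cd "$(mktemp -d)"
printf 'x,101,rest\nx,202,rest\n' > report1

# Read the report once; $((...)) replaces the call to bc.
counter=1
while IFS=, read -r _ value _; do
    printf 'row %d: value=%s\n' "$counter" "$value"
    counter=$((counter + 1))
done < report1
# row 1: value=101
# row 2: value=202
```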
combining output from different scripts into different files in a loop
1,389,251,246,000
I'm calling a URL using wget. This URL gives me a response: it's a message ID. I want to write the logs to a log file, with the message ID as well. Also the log should be appended each time. I'm trying to do it in my shell script. Is it possible to do this? If so, how can I do it?
wget -O - $url --append-output=logfile >> logfile

Specifying - as the filename for -O writes the output to stdout. My shell has no problem using logfile for both append operations. It might work for you as well.
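A close equivalent that may read more clearly is tee -a, which appends the response to the log while still passing it through. In this sketch a shell function stands in for the wget call, so the made-up message ID is deterministic:

```shell
cd "$(mktemp -d)"

# fetch() is a stand-in for: wget -q -O - "$url"
fetch() { echo "message-id-42"; }

fetch | tee -a logfile > /dev/null    # first run
fetch | tee -a logfile > /dev/null    # second run appends
cat logfile
# message-id-42
# message-id-42
```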
wget, logging the output and the response
1,389,251,246,000
I found these ways, for example, to output colored text in a simple way to the screen:

RED="\033[0;31m"    # Red color (via ANSI escape code);
NC='\033[0m'        # No color (via ANSI escape code);
echo -e "${RED}This text is red. ${NC}"    # -e flag allows backslash escapes;

or:

printf '\e[1;34m%-6s\e[m' "This is blue text"

I also found:

tput setaf 1; echo "this is red text"

But I never used tput and I'm not sure it's shipped with all major distros (Debian, CetnOS, Arch, and so forth). My question: How to output colored text in a given named, common color (like "Red") in a simple way I could count on to work on all major distros, without using "messy" color codes?
Codes (ANSI colour codes)

The codes are not distro dependent. They are terminal dependent. Some terminals will not support them. However they are probably supported by most.

Names

Use variables to give names, e.g.

red="$(tput setaf 1)"
echo "${red}hello"

Note: don't use capitals for shell variables; capitals should be reserved for environment variables.

Names and codes

The codes refer to a colour number. The colour for each number is defined by the terminal and is non-standard. The user can change it.
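A sketch of that naming pattern with a fallback, so scripts keep working when output is piped or the terminal lacks colour support (the variable names are just examples):

```shell
# Name the colours once; fall back to empty strings when tput cannot
# talk to a capable terminal, so plain output still works.
red=$(tput setaf 1 2>/dev/null || true)
green=$(tput setaf 2 2>/dev/null || true)
reset=$(tput sgr0 2>/dev/null || true)

printf '%sfail%s and %sok%s\n' "$red" "$reset" "$green" "$reset"
```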
Output colored text in all major distros without using non-word color codes (like \033[0;31m)
1,389,251,246,000
grep include /etc/nginx/nginx.conf

output:

include /etc/nginx/modules-enabled/*.conf;
	include mime.types;
	include /etc/nginx/conf.d/*.conf;
	include /etc/nginx/sites-enabled/*;

desired output:

include /etc/nginx/modules-enabled/*.conf;
include mime.types;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
awk '/include/ {$1 = $1; print}' < your-file

Would do that. By assigning something to $1, that forces awk to rebuild the record by joining the fields (which by default are separated by whitespace (at least space and tab, possibly others depending on locale and awk implementation)) with OFS (space by default). A sed equivalent:

sed -E '/include/!d; s/[[:space:]]+/ /g; s/^ //; s/ $//' < your-file

[[:space:]] includes [[:blank:]] which includes at least space and tab; [[:space:]] also includes vertical spacing characters¹, which is potentially useful here if the file has MSDOS CRLF line endings in that it would remove those spurious CRs at the end of lines.

¹ such as vertical tab, form feed which shouldn't occur in your input, line feed, the line delimiter which would not be in the record awk processes as awk processes the contents of each line in turn
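The field-rebuild trick is easy to try on a synthetic line:

```shell
# Tabs and runs of spaces collapse to single spaces once awk rebuilds
# the record from its fields.
printf 'include    /etc/nginx/mime.types;\n\tinclude\t\tconf.d/*.conf;\n' |
    awk '/include/ {$1 = $1; print}'
# include /etc/nginx/mime.types;
# include conf.d/*.conf;
```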
How to remove multiple white spaces between words?
1,389,251,246,000
I'm doing some hands on pen testing and following some guides to get an understanding of the tools of the trade. I'm following along with the guide here and I understand everything except for the last page. I need assistance understanding sudo -l below. I know that it details what the current user can do. However, what does the output below mean? And how about the command below (excluding touch)? It kind of confuses me because after running that command (exploit?), I was able to get root. From my understanding, the line is saying to run the command as root or elevate to root, zip the file called exploit, and place it in tmp/exploit. I believe I'm wrong but that's where my understanding of that line stops. I'm confused as to how I got root with that command and what that line is doing.
For your first question, the indicated lines of output are telling you that you are permitted to run /bin/tar and /usr/bin/zip via sudo as the root user without even needing to provide zico's password. For your second question, we get the answer from zip's manual page: --unzip-command cmd Use command cmd instead of 'unzip -tqq' to test an archive when the -T option is used. So, since you're privileged to run zip as the root user through sudo, the exploit is simply telling zip "hey, when you're testing this archive, use the command sh -c /bin/bash to test it, would you?" and it's helpfully doing so, giving you a root shell. The exploit file is just there to provide zip something to compress, so that there would be something to "test". It's never being run or anything and indeed in your demonstration is simply an empty file. $ sudo -u root zip /tmp/exploit.zip /tmp/exploit -T --unzip-command="sh -c /bin/bash" is instructing sudo to, as the root user, run this command: $ zip /tmp/exploit.zip /tmp/exploit -T --unzip-command="sh -c /bin/bash" This command will take the file /tmp/exploit and put it into a new archive, /tmp/exploit.zip. The -T switch tells zip to then Test the integrity of the archive, and the --unzip-command switch is telling zip how to test the archive. This last thing is the actual exploit: because zip is being run as root, running sh -c /bin/bash gives you a shell as the root user.
Understanding sudo and possible exploit
1,389,251,246,000
I'd like to make fixed height output from any command using piping:

some_command | magic_command -40

If, for example, some_command prints 3 lines, magic_command should add 37 newlines, or if some_command prints 50 lines, magic_command should cut extra lines (like head -40).
POSIXly:

{ command; while :; do echo; done; } | head -n 40

On GNU system:

{ command; yes ""; } | head -n 40
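Wrapped into a function, the GNU variant might look like this sketch (the function name and the argument convention are just an assumption standing in for magic_command):

```shell
# Pad or truncate stdin to exactly N lines: GNU 'yes ""' supplies
# endless blank lines, head cuts everything to size.
fixed_height() { { cat; yes ""; } | head -n "$1"; }

printf 'a\nb\nc\n' | fixed_height 5    # 3 real lines + 2 blank ones
```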
Padding output with newlines
1,389,251,246,000
i'd like to display the output of lsb_release -ds in my Conky display. ftr, in my current installation that would output Linux Mint 18.3 Sylvia. i had thought of assigning the output of that command to a local variable but it seems Conky doesn't do local vars. maybe assigning the output of that command to a global (system) variable? but that's a kludge and it's not at all clear that Conky can access global vars. sounds like an exec... might do it but the docs stress that that's resource inefficient and since this is a static bit of info (for any given login session) it seems a waste to keep running it over and over. so, what to do? suggestions most welcome.
You should prefer the execi version of exec, with an interval, where you can give the number of seconds before repeating:

${execi 999999 lsb_release -ds}
how to display lsb_release info in Conky?
1,389,251,246,000
I work in Maxima a lot (start it on the terminal with "rlwrap .../maxima" and sometimes I want to save a few (several) screens worth (scrolling) of calculations. I realize I can use xmaxima, a variant that can then save it to a text file - that works. But I also sometimes use scipy/python in the terminal, or even others. In general, is there a way to save several screens of interactive program input/output from the bash terminal to a file (possibly preserving 'word art', or 2D display)? I use terminator, though not sure it matters. Also, sometimes I work on a debian system and other times on Linux Mint.
This is what the script tool is for. It will save an entire terminal session - inputs and outputs:

$ script sessionlog.txt
[ do stuff ]
$ exit
$ ls
sessionlog.txt
save several bash screens of program input/output
1,389,251,246,000
I just ran fsck on my newly bought SD card for my Raspberry Pi and it shows the following output:

fsck from util-linux 2.25.2
fsck.fat 3.0.27 (2014-11-12)
/dev/mmcblk0p1: 75 files, 2539/7673 clusters

It doesn't say clean anywhere and exits with error code 1333 (if I do echo $! after running fsck). Is this bad or not? I don't know, sorry.
The return status is in $?. $! holds the PID of the last background process.
Don't know if fsck failed or not
1,389,251,246,000
I searched a lot and finally came here to ask my question. Recently my screen got damaged and now has some rows of dead pixels. They are located at the very top of the screen and making it very difficult to read for example the url of a website. So I wondered if it would be possible to set the output from 1680x1050 to 1680x950 and just don't use the first 100 pixels that were damaged?
No, you cannot just select the pixels that are undamaged; screens don't work like that, unfortunately. If you change the resolution it will reduce the quality of the entire screen's display itself. But the screen can be replaced if it is a flat monitor or a laptop.
Trim monitor output (because of dead pixels)
1,389,251,246,000
I have fired an ls -1 command that runs and displays a long list of values. When the command ends I cannot see the output which is outside the screen vertical length. How can I see those previous entries? Is there a way to see the output progressively, like:

Display the first 15 rows.
User hits a keystroke.
Then display the next 15 records.
You can pipe the output of ls to less as follows:

$ ls | less

Then you are able to use less to browse the output, for example with Page Up and Page Down. You can exit less by pressing q. Type man less to find out more ways to scroll the output.
How can I see output from a command progressively?
1,389,251,246,000
Completely new to bash, so any assistance is much appreciated. I'm looking for a script to do the following; it's a pretty simple script but I can't seem to get it: I want to run a command; this command will return either successful or some other string in the output. If the output does not contain the word successful, I want it to sleep for 5 min and run again until it does contain successful. It would look something like this:

until (SOMECOMMAND) &> /dev/null
do
    if (SOMECOMMAND contains successful); break;
    else sleep 300
done
echo -e "\nThe command was successful."
You could do something like:

#!/bin/bash

output=
count=0
until [[ $output =~ successful ]]; do
    output=$(somecommand 2>&1)
    ((count++))
    sleep 300
done
printf '\n%s\n' "Command completed successfully after $count attempts."

This will check if output contains successful; if you want to check that the output is exactly "successful" you can change the =~ to ==. $( ... ) is command substitution which is being used to set the parameter output to the...output of somecommand.
poll for the right output of a command
1,389,251,246,000
In a case where I write a query for dpkg-query to list me some package, I would like it to return me the package name if it finds something. If it doesn't find anything, I don't want it to output:

no package found matching {package-name}

I would like it to output nothing at all. The reason for this is because my query is in a script and if it returns that, my script breaks. How can I achieve this?
You can redirect dpkg-query's stderr to /dev/null to silence the error message, as in dpkg-query --list <package> 2>/dev/null.
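The same pattern demonstrated with a command that is guaranteed to fail: the error text is discarded while the exit status survives for the script to branch on:

```shell
# Error text goes to the bit bucket; the exit status still tells the
# script whether anything was found.
if out=$(ls /no/such/path 2>/dev/null); then
    echo "found: $out"
else
    echo "nothing found"
fi
# prints: nothing found
```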
How to ignore warnings from dpkg-query when it doesn't find anything?
1,389,251,246,000
I'd like to be able to export the output (and error messages) of my cygwin terminal in a file, especially since I have to click a lot of buttons in order to "mark" the stuff in the cygwin terminal (and it's desirable to minimize the amount of clicking I do).
The stderr output of an executable can be redirected to a file with the following syntax: mycommand 2> error.txt If you want to redirect stdout (i.e. the regular program output) to the file, the command should be: mycommand > output.txt To redirect both the stderr and stdout output to the same file (similar to as seen on terminal), use: mycommand > output_and_error.txt 2>&1
For cygwin, how do I export the output in a terminal into a file?
1,389,251,246,000
I want to save the output from a script so I can view it later. However, when I save the output to a file (script > some/file) and view it later, there's no color, even though script originally outputs in several colors to make the output easier to read. It makes sense that the color isn't saved since the resulting file is just plain text, but is there any way I can take any given script, and reproduce the output later with the same styling without invoking the script again?
Some programs (including whatever you put in your script) detect whether the output is a terminal or a file, and turn off colors in the latter case. If you run your script under the script program, that avoids this problem by capturing all of the characters into a file named typescript, e.g., script -c ./myscript (where ./myscript is your own script) and later cat typescript. Depending on the system, the script program may use different arguments. The first (defaulting to typescript) is where the script program writes its output. The one on Debian/Ubuntu/etc. is in a package named "bsdutils", and the command must be given using a -c option, e.g., script -c ./myscript. On a BSD system, there is no -c option, and the command is given as parameters after the file, e.g., script typescript ./myscript. Though supported on (probably) every POSIX system, script is not part of POSIX.
Maintain color from script output for later viewing
1,389,251,246,000
Let's say I am calling ls -la, which produces a very long output. Is there any key/command which lets my console scroll up to the first line of the output?
If the output is very long you could use the less command like below: your_command_here | less And then scroll all the way down by pressing keys like Enter, Space etc. For more see the less manpage. You could even use more: your_command_here | more more works like less but uses different key combinations to page through the text. For more see the more manpage. Now you might remember that very old quote: less is more
Go to first line of console output of a command
1,380,717,179,000
I am running a java program from a OS X 10.8 bash terminal, and are trying to rederect the output it is producing. However, when either running this throug a pipe, or rederecting it to a file, the output is blank, however I see the output in the terminal. To illustrate this: > java program.java 13/10/02 14:18:30 WARN some 13/10/02 14:18:30 INFO log 13/10/02 14:18:30 INFO messages ... > java program.java > log > cat log > Can the java program be set up so that it is writing to another stream than stdout, but a stream that still produces output in the terminal. Is such a thing possible?
There are three standard files open for each program: stdin (standard input), stdout (standard output), and stderr (standard error). Writes to both stdout and stderr are shown in the terminal by default. It is a common convention to write errors and log messages to stderr instead of stdout in order not to mix log or error messages with actual program output. You can redirect stderr using 2>, for instance: command 2> log
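A small shell sketch of that convention — demo is a stand-in for the java program, writing data to stdout and a log line to stderr; redirecting only stdout to a file leaves the log line visible on the terminal:

```shell
# stand-in for the java program: data on stdout, a log message on stderr
demo() {
  echo "program output"
  echo "13/10/02 14:18:30 INFO log message" >&2
}

demo > out.txt   # "program output" lands in the file;
                 # the INFO line still appears on the terminal (stderr untouched)
cat out.txt
```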
Pipe not picking up stdout
1,380,717,179,000
Environment: aptitude called in a script. I'm having problems with this command: aptitude search '?virtual' |grep ^v |grep -v i386|sort|uniq In particular if I do: aptitude search '?virtual' |grep ^v |grep -v i386|sort|uniq|grep adblock I get (as one of the results): v adblock-plus-element-hiding-hel - instead of what I want: v adblock-plus-element-hiding-helper - How do I get aptitude to print the full package name in a script?
You need to tell aptitude not to do any special column formatting. --disable-columns This option causes aptitude search and aptitude versions to output their results without any special formatting. In particular: normally aptitude will add whitespace or truncate search results in an attempt to fit its results into vertical “columns”. With this flag, each line will be formed by replacing any format escapes in the format string with the corresponding text; column widths will be ignored. So do this instead: $ aptitude search '?virtual' --disable-columns|grep ^v |grep -v i386|sort|uniq v a52dec - v a52dec-dev - v aac-tactics - v aalib1 - v aalib-bin - v acl-dev - v ada-compiler - v aide-binary - v alphy - ...
Have aptitude search print full package name
1,380,717,179,000
I know that you can connect to various background processes to watch their console output, but is there a way to view the output of all processes at once? Likely it would scroll quickly and be hard to read, but is it possible?
Well, you can spawn several processes in your shell in the background and then (if they all use their stdout or stderr) you can get lots of information intermingled in the console - and by intermingled I mean it can possibly even mix data from several processes in the middle of a line. What you are probably looking for is logging to a file (system services usually use something in /var/log) and then viewing the file(s). There are a couple of utilities for this: tail (important option -F, which monitors the file and prints any lines added), less can (in the follow mode) do the same interactively (i.e. you can switch back and forth between following the file and scrolling back). most is another interesting file pager utility, more is the "classic" one found almost everywhere (even on DOS and Windows). Last but not least, tee might be of interest - it duplicates its stdin to stdout and to a file, which can often come in handy.
How to view output for ALL processes simultaneously?
1,380,717,179,000
I am working on vxlan tunneling between Linux - commercial routers. I need to debug some interface settings. The command sudo ip -d link show DEV gives me a great output but the output format is like a long single line as below. katabey@leaf-1:mgmt:~$ sudo ip -d link show vxlan_10 11: vxlan_10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9216 qdisc noqueue master bridge state UNKNOWN mode DEFAULT group default qlen 1000 link/ether 52:6d:3d:aa:b5:bf brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65535 vxlan id 10010 local 10.1.1.1 srcport 0 0 dstport 4789 nolearning ttl 64 ageing 300 udpcsum noudp6zerocsumtx noudp6zerocsumrx bridge_slave state forwarding priority 8 cost 100 hairpin off guard off root_block off fastleave off learning off flood on port_id 0x8002 port_no 0x2 designated_port 32770 designated_cost 0 designated_bridge 8000.50:0:0:3:0:3 designated_root 8000.50:0:0:3:0:3 hold_timer 0.00 message_age_timer 0.00 forward_delay_timer 0.00 topology_change_ack 0 config_pending 0 proxy_arp off proxy_arp_wifi off mcast_router 1 mcast_fast_leave off mcast_flood on neigh_suppress on group_fwd_mask 0x0 group_fwd_mask_str 0x0 group_fwd_maskhi 0x0 group_fwd_maskhi_str 0x0 vlan_tunnel off isolated off addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 It would be great to have the output like vxlan id 10010 local 10.1.1.1 srcport 0 dstport 4789 I remember a couple of years back Linux system engineers I used to work with doing command | python ... but I was not able to find/recall the command. (I have Python installed). Any other solutions (especially single liners) are welcome.
Try: your-command | grep -Eo '(vxlan id|srcport|dstport) [0-9]+|local [0-9.]+'
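To see what that pattern extracts, here is the same regex run over a trimmed sample of the ip output from the question:

```shell
line='vxlan id 10010 local 10.1.1.1 srcport 0 0 dstport 4789 nolearning ttl 64'
echo "$line" | grep -Eo '(vxlan id|srcport|dstport) [0-9]+|local [0-9.]+'
# vxlan id 10010
# local 10.1.1.1
# srcport 0
# dstport 4789
```

Separately, if your iproute2 build is new enough to support JSON output, ip -j -d link show DEV can be pretty-printed with python3 -m json.tool — that may be the command | python one-liner you half-remember.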
Formatting command output that is a long single line
1,380,717,179,000
Below command outputs a table of space delimited text, is there a tool to remove the unnecessary spacing here while still keeping the columns aligned? $ sudo ss -ltpn State Recv-Q Send-Q Local Address:Port Peer Address:Port LISTEN 0 32 10.218.108.1:53 *:* users:(("dnsmasq",pid=10242,fd=9)) LISTEN 0 128 *:22 *:* users:(("sshd",pid=1111,fd=3)) LISTEN 0 32 fd42:9324:ab98:50fb::1:53 :::* users:(("dnsmasq",pid=10242,fd=13)) LISTEN 0 32 fe80::c024:c5ff:fe68:999e%lxdbr0:53 :::* users:(("dnsmasq",pid=10242,fd=11)) LISTEN 0 128 :::22 :::*
Turns out that ss can do this for you if you just pipe the output or redirect it to a file. For example, on my system, without piping, I get this: $ sudo ss -ltpn State Recv-Q Send-Q Local Address:Port Peer Address:Port Process LISTEN 0 128 0.0.0.0:53939 0.0.0.0:* users:(("spotify",pid=4152748,fd=115)) LISTEN 0 10 0.0.0.0:57621 0.0.0.0:* users:(("spotify",pid=4152748,fd=96)) LISTEN 0 128 0.0.0.0:22 0.0.0.0:* users:(("sshd",pid=822,fd=3)) LISTEN 0 128 127.0.0.1:10391 0.0.0.0:* users:(("Enpass",pid=2193055,fd=38)) LISTEN 0 5 127.0.0.1:631 0.0.0.0:* users:(("cupsd",pid=818,fd=8)) LISTEN 0 5 127.0.0.1:9292 0.0.0.0:* users:(("emacs",pid=178419,fd=13)) LISTEN 0 4096 0.0.0.0:111 0.0.0.0:* users:(("rpcbind",pid=314,fd=4),("systemd",pid=1,fd=106)) LISTEN 0 5 127.0.0.1:34512 0.0.0.0:* users:(("purevpnd",pid=839,fd=6)) LISTEN 0 128 [::]:22 [::]:* users:(("sshd",pid=822,fd=4)) LISTEN 0 5 [::1]:631 [::]:* users:(("cupsd",pid=818,fd=7)) LISTEN 0 4096 [::]:111 [::]:* users:(("rpcbind",pid=314,fd=6),("systemd",pid=1,fd=128)) If I simply pipe to cat, however, I get: $ sudo ss -ltpn | cat State Recv-Q Send-Q Local Address:Port Peer Address:PortProcess LISTEN 0 128 0.0.0.0:53939 0.0.0.0:* users:(("spotify",pid=4152748,fd=115)) LISTEN 0 10 0.0.0.0:57621 0.0.0.0:* users:(("spotify",pid=4152748,fd=96)) LISTEN 0 128 0.0.0.0:22 0.0.0.0:* users:(("sshd",pid=822,fd=3)) LISTEN 0 128 127.0.0.1:10391 0.0.0.0:* users:(("Enpass",pid=2193055,fd=38)) LISTEN 0 5 127.0.0.1:631 0.0.0.0:* users:(("cupsd",pid=818,fd=8)) LISTEN 0 5 127.0.0.1:9292 0.0.0.0:* users:(("emacs",pid=178419,fd=13)) LISTEN 0 4096 0.0.0.0:111 0.0.0.0:* users:(("rpcbind",pid=314,fd=4),("systemd",pid=1,fd=106)) LISTEN 0 5 127.0.0.1:34512 0.0.0.0:* users:(("purevpnd",pid=839,fd=6)) LISTEN 0 128 [::]:22 [::]:* users:(("sshd",pid=822,fd=4)) LISTEN 0 5 [::1]:631 [::]:* users:(("cupsd",pid=818,fd=7)) LISTEN 0 4096 [::]:111 [::]:* users:(("rpcbind",pid=314,fd=6),("systemd",pid=1,fd=128)) I also get the same output if I just redirect to a 
file: sudo ss -ltpn > file. For a more general solution, you can use column. For example, given this input file: $ cat file State Recv-Q Send-Q Local Address:Port Peer Address:Port Process LISTEN 0 128 0.0.0.0:53939 0.0.0.0:* users:(("spotify",pid=4152748,fd=115)) LISTEN 0 10 0.0.0.0:57621 0.0.0.0:* users:(("spotify",pid=4152748,fd=96)) LISTEN 0 128 0.0.0.0:22 0.0.0.0:* users:(("sshd",pid=822,fd=3)) LISTEN 0 128 127.0.0.1:10391 0.0.0.0:* users:(("Enpass",pid=2193055,fd=38)) LISTEN 0 5 127.0.0.1:631 0.0.0.0:* users:(("cupsd",pid=818,fd=8)) LISTEN 0 5 127.0.0.1:9292 0.0.0.0:* users:(("emacs",pid=178419,fd=13)) LISTEN 0 4096 0.0.0.0:111 0.0.0.0:* users:(("rpcbind",pid=314,fd=4),("systemd",pid=1,fd=106)) LISTEN 0 5 127.0.0.1:34512 0.0.0.0:* users:(("purevpnd",pid=839,fd=6)) LISTEN 0 128 [::]:22 [::]:* users:(("sshd",pid=822,fd=4)) LISTEN 0 5 [::1]:631 [::]:* users:(("cupsd",pid=818,fd=7)) LISTEN 0 4096 [::]:111 [::]:* users:(("rpcbind",pid=314,fd=6),("systemd",pid=1,fd=128)) I can pass it through column -t to pretty print it: $ column -t -N"State,Recv-Q,Send-Q,Local Address:Port,Peer Address:Port,Process" <(tail -n +2 file) State Recv-Q Send-Q Local Address:Port Peer Address:Port Process LISTEN 0 128 0.0.0.0:53939 0.0.0.0:* users:(("spotify",pid=4152748,fd=115)) LISTEN 0 10 0.0.0.0:57621 0.0.0.0:* users:(("spotify",pid=4152748,fd=96)) LISTEN 0 128 0.0.0.0:22 0.0.0.0:* users:(("sshd",pid=822,fd=3)) LISTEN 0 128 127.0.0.1:10391 0.0.0.0:* users:(("Enpass",pid=2193055,fd=38)) LISTEN 0 5 127.0.0.1:631 0.0.0.0:* users:(("cupsd",pid=818,fd=8)) LISTEN 0 5 127.0.0.1:9292 0.0.0.0:* users:(("emacs",pid=178419,fd=13)) LISTEN 0 4096 0.0.0.0:111 0.0.0.0:* users:(("rpcbind",pid=314,fd=4),("systemd",pid=1,fd=106)) LISTEN 0 5 127.0.0.1:34512 0.0.0.0:* users:(("purevpnd",pid=839,fd=6)) LISTEN 0 128 [::]:22 [::]:* users:(("sshd",pid=822,fd=4)) LISTEN 0 5 [::1]:631 [::]:* users:(("cupsd",pid=818,fd=7)) LISTEN 0 4096 [::]:111 [::]:* users:(("rpcbind",pid=314,fd=6),("systemd",pid=1,fd=128))
How to remove unneccessary spaces from table output in shell while still keeping columns aligned?
1,380,717,179,000
Task: I'm using grep to search some text files, piping the results from one grep (excluding some lines) to another (matching some lines) + displaying some context using the -C parameter as shown below: grep -v "Chapter" *.txt | grep -nE -C1 " leaves? " Problem: This works very well when printing the results, but produces very large files (~ several GB) and takes forever when I write it to file like so: grep -v "Chapter" *.txt | grep -nE -C1 " leaves? " > out.txt Troubleshoot: The grep returns only 1345 lines (according to wc), printout takes a few seconds The output in the large output files looks legit, aka actual results from input files. Replacing the -C operator with -A or -B produces good output files in the KB size. Questions: Why is this happening? Is there something about the -C that breaks things this way? Or is there a different issue I am overlooking? Any hints appreciated! Running this in the MacOS terminal. I was following this man.
Try changing the directory where you're writing out.txt. For example change this command to this: $ grep -v "Chapter" *.txt | grep -nE -C1 " leaves? " > /tmp/out.txt Example Here you can see what's happening when you enable verbose output in your Bash shell. $ set -x $ grep -v "Chapter" *.txt | grep -nE -C1 " leaves? " > out.txt + grep --color=auto -nE -C1 ' leaves? ' + grep --color=auto -v Chapter file01.txt file02.txt file03.txt file04.txt file05.txt file06.txt file07.txt file08.txt file09.txt file10.txt out.txt Notice that it's taking the argument *.txt and expanding it, and it includes the file out.txt. So you're literally parsing this file as you write out to it. Why? If you think about what a shell does when output from one command is piped into the next it makes sense. The shell parses the commands you just gave it, looking for pipes (|). When it encounters them it has to run the ones from the right in order to set up the redirection of STDIN/STDOUT between the commands occurring within the pipes. You can use the sleep command to see how the shell parses things as more pipes are added: $ sleep 0.1 | sleep 0.2 | sleep 0.3 | sleep 0.4 + sleep 0.2 + sleep 0.3 + sleep 0.4 + sleep 0.1 $ sleep 0.1 | sleep 0.2 | sleep 0.3 | sleep 0.4 | sleep 0.5 + sleep 0.2 + sleep 0.3 + sleep 0.4 + sleep 0.5 + sleep 0.1 Doing this with echo + writing to a file also shows the order via the file accesses & the stat command: $ echo "1" > file1 | echo "2" > file2 | echo "3" > file3 | echo "4" > file4 + echo 2 + echo 3 + echo 4 + echo 1 $ stat file* | grep -E "File|Access: [[:digit:]]+" + grep --color=auto -E 'File|Access: [[:digit:]]+' + stat file1 file2 file3 file4 File: ‘file1’ Access: 2018-08-11 23:55:20.868220474 -0400 File: ‘file2’ Access: 2018-08-11 23:55:20.865220576 -0400 File: ‘file3’ Access: 2018-08-11 23:55:20.866220542 -0400 File: ‘file4’ Access: 2018-08-11 23:55:20.867220508 -0400
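The glob/redirection interplay can be demonstrated with a single command in a scratch directory: within one command, the glob is expanded before the redirection creates the file, so the output file only matches the glob on subsequent runs (in a pipeline, the stage creating the file is launched alongside the first stage, so its glob can already see it):

```shell
cd "$(mktemp -d)"
echo data > a.txt

ls *.txt > out.txt   # glob expanded first, so only a.txt matched this time
cat out.txt          # a.txt

ls *.txt > out.txt   # now out.txt already exists and matches the glob too
cat out.txt          # a.txt, then out.txt
```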
Outputting context (-C) for grep produces massive files
1,380,717,179,000
Say I've got a text file file.txt and program program.rb. program.rb writes stuff to file.txt as program.rb finds it. How can I view the population of the text file from the terminal?
This is commonly done with tail -f: $ tail -f file.txt The tail utility outputs the tail of a file or stream, i.e. the last part of it (the 10 last lines by default). With the -f flag, it will not exit once it has arrived at the end, but continue polling the file or stream for new data and output it whenever it arrives. This is commonly done to manually monitor log files, or, as in your case, to view the partial result of a running program. See also question 291932 to learn the difference between tail -f and tail -F.
How can I view the file output of a program in a text file as it's being populated?
1,380,717,179,000
I'm using Amazon Linux and writing a script in bash. I want to output both stderr / stdout (preferably in the order that they occur) to a file as well as the console. However, this command isn't working ... node test.js 2>&1 >> /tmp/output | tee --append /tmp/output The output gets sent to the file, but it is not getting output to the console as it is happening. How can I correct the above to view the output?
The >> /tmp/output already sends all output to the file, leaving nothing to be sent to tee. So the command should read node test.js 2>&1 | tee --append /tmp/output.
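A quick sketch of the corrected pipeline with a stand-in command, showing that tee now receives the combined stdout+stderr stream and writes it to both the terminal and the file (-a is the portable spelling of --append):

```shell
log=$(mktemp)
# stand-in for `node test.js`: one line on stdout, one on stderr
{ echo "stdout line"; echo "stderr line" >&2; } 2>&1 | tee -a "$log"
cat "$log"   # both lines were also appended to the file
```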
How do I output stderr/stdout of my script to both a file and the console?
1,380,717,179,000
I wanted to capture to a file the errors being returned on the command line from grep. For example, grep foo.lookup No such file in directory I want to output that to a log file. This is my shell script: lookUpVal=1 var1=$(grep $lookUpVal foo.lookup) >>lookup.log 2>$1 It creates the file lookup.log but doesn't write the error on it.
If I understand it correct, you want to capture the output of grep into a variable and append any error to the logfile. You could say: var1=$(grep $lookUpVal foo.lookup 2>>lookup.log) The $(...) syntax denotes command substitution, i.e. outputs the result of the command into a variable. By default it would capture the STDOUT of the command into the variable and the STDERR is printed to the console. In order to redirect the STDERR to a file, you would need to perform the redirection within the command itself, i.e. within $(...).
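A runnable sketch of that redirection, using a path that is guaranteed not to exist so the error actually fires: the variable stays empty, and grep's complaint lands in the log file.

```shell
log=$(mktemp)
var1=$(grep 1 /no/such/foo.lookup 2>>"$log")
echo "captured value: '$var1'"
cat "$log"   # something like: grep: /no/such/foo.lookup: No such file or directory
```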
Redirect grep error output to file
1,380,717,179,000
I have a very simple ksh script and at certain points I want to write to a log file. I use the following commands in two places... print "Directory listing 1:\n" > ${LogFile} ll >> ${LogFile} (Note: The second time this command is used print Directory listing 2) My problem is, when I view the log file afterwards, only the second execution of these commands work! So there's no "Directory listing 1" and accompanying "ll" output. I have tested and tested the script to ensure that there's nothing wrong my logic. I've added print test commands just before each so I know they get executed. Is there something I've done wrong or I'm not realising?
Whenever you do a redirection with > (your first line), the ${LogFile} is truncated to zero length and then written. If I understand right, you do the above twice, so the first output is overwritten by the second. What you have to do is something along these lines: > ${LogFile} # This just truncates if there was anything there, writes nothing ... echo "First round" >> ${LogFile} ls -l >> ${LogFile} ... echo -e "\nSecond round" >> ${LogFile} ls -l >> ${LogFile} ...
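The difference is easy to see with a throwaway file: each > starts the file over, while >> keeps what is already there.

```shell
log=$(mktemp)
echo "first"  > "$log"    # file now contains: first
echo "second" > "$log"    # truncated again — "first" is gone
echo "third" >> "$log"    # appended after "second"
cat "$log"
# second
# third
```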
Unsure about the behaviour of my script when writing to log file
1,380,717,179,000
Say, for example that after running a number of commands: $ cd /opt/something $ find . -name *aa | grep 11 $ clear $ <more commands go here> there was a part of the output that was needed but not saved; the command which produced it as well as the arguments might not be recalled entirely. Is there a way to search through stdout (even though clear may have been called several times).
stdout is transient, or ephemeral. Unless you take some action to save it, it's gone and inaccessible as soon as it's output. If you want to re-use the output of a command multiple times, then you need to save it somewhere - a scalar variable (e.g. with command substitution), an array variable (e.g. with readarray/mapfile), or a file (e.g. with redirection or tee). When working with filenames, you should always use NUL as the separator, as all other characters are valid characters in pathnames (most common tools these days have a -0, -z, -Z, or similar option for working with NUL-separated input & output). This means that it's unsafe to use command substitution with find (although it's fine with other programs that produce text that isn't filenames). e.g. find . -name '*aa' -print0 > files.list0 grep -z 11 files.list0 or readarray -d '' -t files < <(find . -name '*aa' -print0) printf '%s\0' "${files[@]}" | grep -z 11 BTW, the *aa should be quoted in that find command, otherwise the shell will expand it to all matching files (and only the first will belong to the -name predicate, the rest will be errors).
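A self-contained sketch of the readarray approach (bash 4.4+ for readarray -d ''), run against a scratch directory; one filename deliberately contains a space, which whitespace-splitting approaches would break on:

```shell
#!/usr/bin/env bash
dir=$(mktemp -d)
touch "$dir/report 11aa" "$dir/other-aa" "$dir/skip-me.txt"

# collect NUL-separated matches into an array, then filter with grep -z
readarray -d '' -t files < <(find "$dir" -name '*aa' -print0)
printf '%s\0' "${files[@]}" | grep -z 11 | tr '\0' '\n'
# prints the path ending in "report 11aa", space intact
```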
How to search for strings in the output of previously run commands