| date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,448,891,626,000 |
How can I make tools like grep treat multiple lines joined with a backslash as a single long line?
Let's say I have some files that look like this:
testvar=test1 \
test2 \
test3
othervar=oth1 \
oth2
If I now grep for testvar it will return testvar=test1 \,
but is there any way I can have it return testvar=test1 test2 test3?
More or less how to treat those multiple escaped lines as a single line?
Note: If this can't be done with grep, hints about how to do this with other tools like sed or awk to get the same result are also welcome.
|
With awk, setting the field and record separators to match your use case:
awk 'BEGIN { FS="\n"; RS ="" } /testvar/' yourfile
testvar=test1 \
test2 \
test3
See the manual entry for multiple line records.
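The paragraph-mode match above still prints the backslashes. If you want the joined form testvar=test1 test2 test3, here is a sketch (any POSIX awk) that folds the continuation lines into one logical line before matching:

```shell
# Join backslash-continued lines into a single logical line, then match.
# The sample input below reproduces the file from the question.
printf 'testvar=test1 \\\ntest2 \\\ntest3\nothervar=oth1 \\\noth2\n' |
awk '{
    # While the line ends in a backslash, strip it and append the next line
    while (sub(/\\$/, "")) {
        if ((getline nextline) <= 0) break
        $0 = $0 nextline
    }
}
/testvar/'
# -> testvar=test1 test2 test3
```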
| Can I make grep treat escaped multiline as a single line? |
1,448,891,626,000 |
The following shell script works but removes colored formatting generated by rspec:
#!/bin/bash
OUTPUT=`rspec`
echo "$OUTPUT"
How to preserve the colors?
|
It's common for programs with colorized output to disable it if they're not being run directly in a TTY, since you might be piping the output to a log file or to another process that expects plain text. Typically the programs offer a switch to manually force colors enabled, and rspec has one (--color), but for some reason it ignores it if you're not running in a TTY, which is really unusual behavior.
I think your only options are to edit rspec to take out that check (see def color in rspec-core-2.11.1/lib/rspec/core/configuration.rb), or run it within a program that will trick it into thinking it has a TTY, like expect.
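One such program (assuming a Linux system with util-linux) is script, which runs its child command under a freshly allocated pseudo-TTY, so isatty() checks pass. A sketch, with a stand-in command in place of rspec (BSD/macOS script takes different options):

```shell
#!/bin/bash
# script(1) allocates a pty for the child, so color detection succeeds:
#   -q  quiet (no "Script started/done" messages)
#   -e  return the child's exit status
#   -c  the command to run; /dev/null discards the typescript file
# Replace `echo hi` with `rspec` for the real use case.
OUTPUT=$(script -qec 'echo hi' /dev/null)
echo "$OUTPUT"
```

Note that script captures raw terminal output, so carriage returns may appear in the captured string.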
| How to capture text formatting in bash? |
1,448,891,626,000 |
I need to produce JSON configuration files
with echo and tee called from my Python script.
By trial-and-error I've found out that I have to use single quotes.
Yet I don't understand all the behaviour that I came across
when using Python's run().
The following code prints my questions:
#!/usr/bin/env python3
from subprocess import run
conf_file="""{
"alt-speed-down": 50,
}"""
print("Question 1. It does not work with double quotes. Why?")
run(f"""echo "{conf_file}" """, shell=True)
print("It works with single quotes.")
run(f"""echo '{conf_file}'""", shell=True)
conf_file="""{
\"alt-speed-down\": 50,
}"""
print("""Question 2. It does not work with double quotes, even when I escape the quotes.
Whereas when I type in my shell:
echo "\"This is a quoted string.\""
it works. Why?
""")
run(f"""echo "{conf_file}" """, shell=True)
print("""Question 3. It works with single quotes, even with escaped quotes.
whereas when I type in my shell:
echo '\"this is quoted\"'
I get the backslashes printed. Why aren't
the backslashes printed when called with Python's run()?""")
run(f"""echo '{conf_file}'""", shell=True)
I use Bash as my shell. Why does escaping double quotes differ when done from my Bash shell compared to Python's run()? Am I not accessing my Bash shell by specifying
shell=True in run()?
P.S. I know that generating JSON with the json module is a way to do this, but in my case it is mostly copying already existing JSON from
my backed-up configuration files. I want to avoid reading such JSON files into a string in my script - the script is meant to be run on a newly reinstalled OS where such backups won't initially be available. That is why I need to have many string variables in my Python script that store such JSON configuration files.
|
About the quotes, leaving aside the newlines, this:
conf_file="""{ "alt-speed-down": 50, }"""
assigns the string { "alt-speed-down": 50, } to the variable. Then when you run run(f"""echo "{conf_file}" """, shell=True), the shell sees the string
echo "{ "alt-speed-down": 50, }"
which is different from the one with single quotes:
echo '{ "alt-speed-down": 50, }'
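You can see this directly in a shell: in the first command the inner quotes just close and reopen the double-quoted string, so they pair up around the spaces and never reach echo.

```shell
echo "{ "alt-speed-down": 50, }"   # prints: { alt-speed-down: 50, }
echo '{ "alt-speed-down": 50, }'   # prints: { "alt-speed-down": 50, }
```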
conf_file="""{ \"alt-speed-down\": 50, }"""
Here, the backslashes escape the double-quotes, and are removed by Python, so this is equivalent to the first one. Escaping the quotes isn't necessary here, but would be if you had "{ \"alt-speed-down\": 50, }" instead.
If you want to have the backslashes intact in the Python string, you need to use r'' strings, e.g. r'{ \"alt-speed-down\": 50, }' (or the same with double-quotes, r"{ \"alt-speed-down\": 50, }" actually works too, and the backslashes aren't removed, even though they're required to not end the quoted string.)
In the shell, backslashes aren't processed within single quotes, so
echo '\"this is quoted\"'
passes to echo the string \"this is quoted\". Though some implementations of echo would process escapes like \n, regardless of what happens in the shell command line processing.
Whereas with
run(f"""echo '{conf_file}'""", shell=True)
you have no backslashes in sight.
In short, the quoting rules are different between the shells and Python.
See:
https://docs.python.org/3/tutorial/introduction.html#strings
What is the difference between the "...", '...', $'...', and $"..." quotes in the shell?
Why is printf better than echo?
Like mentioned in the comments, there's likely better ways of producing JSON (or YAML, or whatever) from Python than manually printing strings. E.g. the json module:
>>> import json
>>> obj = {}
>>> obj["alt-speed-down"] = 50
>>> json.dumps(obj)
'{"alt-speed-down": 50}'
| Why is escaping double quotes from Python's run (with shell=True) different than in Bash? |
1,448,891,626,000 |
I had a problem where I wanted to find the length of each path from a find command. My first attempt was to run something like this:
find . -exec sh -c "echo {} | wc -c" \;
I got this idea from this answer. (The command above is not my question, I'm just using it as an example and it's completely contrived. Also, I may need more than one pipe sometimes.)
But when I ran it, there were errors in the output, likely due to special characters in the output paths. Unfortunately, I don't have the knowledge to troubleshoot which paths caused the issue and the error messages weren't informative. Regardless...
I later stumbled upon this answer:
The find command executes the command directly. The command, including the filename argument, will not be processed by the shell or anything else that might modify the filename. It's very safe.
That seems very convenient. So convenient, in fact, that the -exec sh -c ... "cure" seems worse than the disease.
So my question is, what should I do when I need to pipe commands with find and my paths may have special characters? Is there a generic solution to this problem where I don't have to consider a bunch of caveats? I am using bash.
Note: This is a similar question: how best to send the output of a find + exec command to a pipeline? The difference is, I'm not necessarily trying to pipe the output outside of -exec. i.e., if find ... -exec ... foo {} | bar \; is the way to go, that's perfectly fine by me. I'm just looking for a generic path of least resistance, the structure of the command isn't important to me.
|
Pass the filename as an argument to the shell script:
find . -exec sh -c 'printf "%s\n" "$1" | wc -c' sh {} \;
or for multiple files per shell invocation:
find . -exec sh -c 'for x in "$@"; do printf "%s\n" "$x" | wc -c; done' sh {} +
Your command
find . -exec sh -c "echo {} | wc -c" \;
would insert the filename as-is into the shell command line. It'd only work for filenames that don't contain whitespace or characters that are special to the shell. E.g., filenames like Don't stop me now.mp3 or this&that.txt would cause problems. (The first would produce an unterminated quoted string, and the second would start echo in the background and then try to run a command called that.txt.)
On the other hand, sh -c ... sh {} \; (or ... {} +) has find pass the filename(s) as distinct argument(s) to the shell, where they'll then be available in the positional parameters and can be used without them getting mixed with the shell syntax. ("$1" for the first, "$@" for the whole list. Remember the quotes.)
For the case of checking the filename length, you could also get it with "${#var}" in the shell, except that it gives the length in characters as per the current locale, while wc -c counts bytes.
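To see the safe form in action, here is a small demo (the /tmp/findtest path and the filename are made up) with a name containing a shell-special character:

```shell
# set up a demo directory with an awkward filename
rm -rf /tmp/findtest && mkdir -p /tmp/findtest
: > '/tmp/findtest/this&that.txt'

# safe: the path travels as $1 - data, not shell syntax
find /tmp/findtest -type f -exec sh -c 'printf "%s\n" "$1" | wc -c' sh {} \;
# -> 28 (the 27-character path plus the newline added by printf)
```

The unquoted-{} version from the question would instead try to run a background job plus a command called that.txt for this file.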
| How do I handle strange path characters when piping find output? |
1,448,891,626,000 |
The OSC52 escape sequence tells the terminal to put arbitrary text in the system clipboard. I want to use this fact to be able to copy to the local clipboard from a remote Vim session through ssh, as explained here. However, I am getting conflicting information on whether the terminal I use, urxvt (also called rxvt-unicode) supports this escape sequence.
I have found a perl script that may or may not implement this functionality, called clipboard-osc, and I have added it to my urxvt configuration file ~/.Xdefaults with the line URxvt.perl-ext-common: clipboard-osc. I haven't been able to make it work, and the information on this perl script (and on escape sequences in urxvt in general) is very scarce.
So, does urxvt support the OSC52 escape sequence for clipboard integration? And if so, how can I use it and what are the possible pitfalls to avoid?
|
I figured out a solution to my problem. I will post it here for future reference.
urxvt does not support the OSC52 escape sequence by default, which is a pity. However, urxvt is highly extensible through perl scripts, so there are perl scripts out there that add support for OSC52. An example is this small script by GitHub user parantapa. With this, you can add support for OSC52 in two simple steps:
Copy the script to ~/.urxvt/ext/52-osc
Source it in urxvt by adding the following line to your ~/.Xdefaults configuration file: URxvt.perl-ext-common: 52-osc
For completeness and future-proofing, here is the full script.
#! perl
=head1 NAME
52-osc - Implement OSC 52 ; Interact with X11 clipboard
=head1 SYNOPSIS
urxvt -pe 52-osc
=head1 DESCRIPTION
This extension implements OSC 52 for interacting with system clipboard
Copied from GitHub user parantapa, who also reports most code came from:
http://ailin.tucana.uberspace.de/static/nei/*/Code/urxvt/
=cut
use MIME::Base64;
use Encode;
sub on_osc_seq {
my ($term, $op, $args) = @_;
return () unless $op eq 52;
my ($clip, $data) = split ';', $args, 2;
if ($data eq '?') {
my $data_free = $term->selection();
Encode::_utf8_off($data_free); # XXX
$term->tt_write("\e]52;$clip;".encode_base64($data_free, '')."\a");
}
else {
my $data_decoded = decode_base64($data);
Encode::_utf8_on($data_decoded); # XXX
$term->selection($data_decoded, $clip =~ /c/);
$term->selection_grab(urxvt::CurrentTime, $clip =~ /c/);
}
()
}
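With the extension loaded, you can test it from the shell without involving Vim. A sketch: base64-encode some text and wrap it in the OSC 52 sequence (\033] starts the OSC, 52;c; targets the clipboard selection, BEL terminates it). Whether anything lands in the clipboard depends on the terminal supporting OSC 52.

```shell
# encode the payload and emit the OSC 52 sequence to the terminal;
# "hello" should end up in the X11 clipboard if the extension is active
printf '\033]52;c;%s\a' "$(printf hello | base64)"
```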
| Does urxvt support the OSC52 escape sequence? |
1,448,891,626,000 |
I have used the answer to zsh kill Ctrl + Backspace, Ctrl + Delete to configure the following key bindings:
Ctrl+Backspace: delete until the beginning of current word,
Ctrl+Delete: delete until the end of current word,
Ctrl+Shift+Delete: delete until the end of the line.
This has been done using these commands:
$ bindkey -M emacs '^[[3;5~' kill-word
$ bindkey -M emacs '^H' backward-kill-word
$ bindkey -M emacs '^[[3;6~' kill-line
To know how to encode the keys (i.e., the ^[[3;5~ part), I used the "trick" detailed in the answer: "type Ctrl+C Ctrl+Delete to see what the value is on your system".
Problem
I would like to bind Ctrl+Shift+Backspace to the backward-kill-line command (i.e. delete everything between the cursor and the beginning of the line).
However, when I type Ctrl+C Ctrl+Shift+Backspace, my prompt only shows ^H — i.e. the same key combination as Ctrl+Backspace.
|
Your terminal sends the same escape sequence for Ctrl+Shift+Backspace as for Ctrl+Backspace, so there's no way for zsh to distinguish between the two. The only solution is to configure your terminal to send different escape sequences. Not all terminals permit this.
Some terminals, such as xterm, rxvt, iTerm2 and Emacs term, allow you to configure escape sequences for each keychord manually. Consult your terminal's documentation.
For example, for xterm, you can put the snippet below in your .Xresources. Load it with xrdb -merge ~/.Xresources. Many environments load this when you log in; if yours doesn't, add this command to your X11 startup file.
XTerm.VT100.translations: #override \
Ctrl Shift <Key>BackSpace: string("\033[27;6;8~") \n
Then you can use this escape sequence¹:
bindkey -M emacs '^[[27;6;8~' backward-kill-line
With terminals based on vte, including Gnome-terminal, Guake and Terminator, you're out of luck. They don't have any way to configure key bindings. They may be willing to add ad hoc support for a specific key though.
¹ I chose this sequence to be compatible with xterm's modifyOtherKeys mode. I'd normally recommend to enable modifyOtherKeys, which is mostly backwards compatible, but the particular key chord you want is only enabled at level 2, which is a pain to cope with (e.g. Ctrl+letter doesn't send the corresponding control character).
| How to key bind 'backward-kill-line' to Ctrl+Shift+Backspace? |
1,448,891,626,000 |
I'm trying to post-process the output of script into a more readable form, similar to Removing control chars (including console codes / colours) from script output, but I've noticed that col doesn't always work.
For instance,
$ cat -v uncolored
foo^H^H^Hbfoo^H^H^Hafoo^H^H^Hr^M
$ col -bp < uncolored
baroo
Why doesn't col -bp output just bar? Where are the extra two os coming from?
|
^H in this case is backspace, AKA dec/hex 8 or oct 10 or \b. All it is doing is moving the cursor; take this example:
$ printf 'bravo\10\10X'
braXo
We have moved the cursor back 2, but we only wrote over one letter, the v. We didn't write over the o, so it remains. If you want to get rid of the rest of the letters, you have to overwrite them with something, usually a space character:
$ printf 'bravo\10\10X '
braX
http://wikipedia.org/wiki/Backspace#%5eH
| col produces incorrect output |
1,448,891,626,000 |
In multi-color-compatible terminals, one can set a color from a 256-color palette by using ESC[38;5;Nm, and any RGB color using ESC[38;2;R;G;Bm.
I've been wondering though, where do the "2" and "5" numbers come from and why exactly "2" and "5"?
|
The 2 and 5 come from ITU T.416 (the same as ISO 8613-6), entitled Open Document Architecture (ODA) and Interchange Format: Character Content Architectures.
Quoting from ISO/IEC 8613-6 : 1994 (E), page 41:
The first parameter element indicates a choice between:
0 implementation defined (only applicable for the character foreground colour)
1 transparent;
2 direct colour in RGB space;
3 direct colour in CMY space;
4 direct colour in CMYK space;
5 indexed colour.
and there are several paragraphs after that explaining what parameters would follow this parameter (but that wasn't the question).
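So a terminal that supports both forms renders these two sequences similarly: parameter 5 is followed by one palette index, parameter 2 by three RGB components.

```shell
# 38;5;N   -> indexed colour N from the 256-colour palette
# 38;2;R;G;B -> direct colour in RGB space
printf '\033[38;5;196mindexed red\033[0m  \033[38;2;255;0;0mdirect red\033[0m\n'
```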
Further reading:
Why only 16 (or 256) colors? (ncurses FAQ)
Can I set a color by its number? (xterm FAQ)
| In the SGR number 38 and 48, where do the 2 and 5 numbers come from? |
1,448,891,626,000 |
For example, I need to change a " into the word quote, i.e. to change
a string with a " at some point
into
a string with a quote at some point
I have tried:
$ echo 'a string with a " at some point' | awk 'sub(",quote)'
awk: cmd. line:1: sub(",quote)
awk: cmd. line:1: ^ unterminated string
awk: cmd. line:1: sub(",quote)
awk: cmd. line:1: ^ syntax error
$
$ echo 'a string with a " at some point' | awk 'sub(\",quote)'
awk: cmd. line:1: sub(\",quote)
awk: cmd. line:1: ^ backslash not last character on line
awk: cmd. line:1: sub(\",quote)
awk: cmd. line:1: ^ syntax error
whereas
$ echo 'a string with a " at some point' |
awk 'sub("string","rope")'
=>
a rope with a " at some point
works for replacing the word string with rope.
|
echo 'duck " cat' | sed 's/"/quote/'
Or in awk, since sub takes a regular expression, mark it as such with the usual // form:
echo 'duck " cat' | awk 'sub(/"/,"quote")'
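Note that sub only replaces the first match on each line; if every double quote should be replaced, use gsub (the trailing 1 is the usual awk idiom for "print every line"):

```shell
echo 'say "hi" and "bye"' | awk '{ gsub(/"/, "quote") } 1'
# -> say quotehiquote and quotebyequote
```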
| How to substitute a " in awk? |
1,448,891,626,000 |
I am using cssh (cluster-ssh) to ssh to multiple machines simultaneously. Everything works great, except that cssh intercepts the F10 key (which in cssh opens the menu).
This is very unfortunate, because I am using F10 a lot, for example to close midnight commander.
Is there a way to configure cssh so that it ignores F10 and lets it through?
I am using LXDE/Openbox on Debian Wheezy
UPDATE: in the past, I had a similar problem with the terminator terminal emulator eating F10 when using midnight commander. This problem was resolved by adding the following to my /usr/share/themes/Clearlooks/gtk-2.0/gtkrc
binding "NoKeyboardNavigation" {
unbind "<shift>F10"
}
class "*" binding "NoKeyboardNavigation"
This however has no effect on cssh. Therefore I suspect, this is not caused by the window manager, but rather by cssh itself.
|
This behavior isn't actually part of cssh, but rather the widget toolkit it's using, Tk, which is why it doesn't show up in the list of configurable hotkeys and setting use_hotkeys to no doesn't disable it. I couldn't find a non-programmatic way to fix it, but if you're building cssh yourself (not hard) you can make a small change to the code to rebind F10 so it does nothing. Add the following line to lib/App/ClusterSSH.pm in the create_menubar() function:
$windows{main_window}->bind("all", "<Key-F10>" => sub {});
Patch:
diff --git a/lib/App/ClusterSSH.pm b/lib/App/ClusterSSH.pm
index cc71507..de4706e 100644
--- a/lib/App/ClusterSSH.pm
+++ b/lib/App/ClusterSSH.pm
@@ -1737,6 +1737,7 @@ sub create_menubar() {
my ($self) = @_;
$self->debug( 2, "create_menubar: started" );
$menus{bar} = $windows{main_window}->Menu();
+ $windows{main_window}->bind("all", "<Key-F10>" => sub {});
$windows{main_window}->configure( -menu => $menus{bar}, );
$menus{file} = $menus{bar}->cascade(
| cssh intercepts F10 |
1,448,891,626,000 |
I have always been using the method below to highlight the background of text in the shell.
tput smso;printf " TEXT ";tput rmso;
How can I achieve the same thing without using tput (I mean some formatting like \e[0m for colors in printf)?
|
The escape sequences for that may be terminal specific. That's the whole point of using tput. tput looks up the correct escape sequence in a database based on the value of the $TERM variable.
On my terminal:
$ tput smso | sed -n l
\033[3m$
$ tput rmso | sed -n l
\033[23m$
So I could do:
$ printf '\033[3m%s\033[23m\n' "stand out"
But I can't be sure that would work on other terminals.
If you don't want to call tput each time, you can run it once and store the output:
smso=$(tput smso) rmso=$(tput rmso)
printf '%s\n' "${smso}stand out${rmso}"
Note that smso is "Start Mode Stand-Out"; it is not for reverse video, though many terminals use reverse video to make text stand out. If you want reverse video, it's tput rev (cancelled by tput sgr0); if you want to set the background colour, use tput setab 4 for ANSI colour codes (4 being the ANSI colour number for blue).
| highlighting text in shell |
1,448,891,626,000 |
I need to execute a command with escaped argument(s) using sh -c. I know the string looks pretty ugly but simple ones don't cause a problem.
The output of the echo when passed to sh -c is different than when run standalone, and I am struggling to figure out how to fix it.
Background:
[1] I am escaping the arguments to the echo command since this is passed in and I wanted to make sure extra commands were not inserted.
[2] I am using sh -c as I wanted to have the ability to use shell facilities like | or nohup, etc.
[3] Probably not relevant but this is being run through an ssh connection to a remote server.
The command and output of the sh -c command is:
sh -c "echo \a\\\\\b\;\c"
;c
The command and output of the stand alone command is:
echo \a\\\\\b\;\c
a\\b;c
Any help would be much appreciated!
|
\ is used to escape a few characters (like \ itself, ", $, `) and remove newline characters inside double quotes, so
sh -c "echo \a\\\\\b\;\c"
is like
sh -c 'echo \a\\\b\;\c'
For that sh, \ is used to escape every character, so it becomes the same as
echo 'a\b;c'
UNIX-conformant echos expand \b into a backspace character (^H) which, when sent to a terminal, moves the cursor back one character to the left, so the ; overwrites the a, which is why you see ;c.
echo is a non-portable command whose behavior varies greatly from shell to shell and system to system. It is best avoided especially if given arguments that may contain backslashes or may start with a dash. Use printf instead.
sh -c 'printf "%s\n" "a\\\\b;c"'
to output a\\b;c.
(3 backslashes would have been enough, but it's safer to remember that within double-quotes, you need two backslashes to get one).
For echo implementations that expand escape sequences like yours, you'd have to double the number of backslashes.
sh -c 'echo "a\\\\\\\\b;c"'
If you want to avoid the double quotes, you'll only need to escape the ; character which is the only one special to the shell (in addition to \ which you've already escaped in the double-quote version).
sh -c 'echo a\\\\\\\\b\;c'
If you want to use double quotes instead of single quotes, again you'll have to double the number of backslashes.
sh -c "echo a\\\\\\\\\\\\\\\\b\\;c"
And if you want to get rid of the quotes, you need to escape the space and ; again for the outer shell.
sh -c echo\ a\\\\\\\\\\\\\\\\b\\\;c
| Shell escape characters for sh -c |
1,448,891,626,000 |
TL;DR - I know how to overwrite lines of output normally, but none of the methods I've used previously (e.g. printf '\e[1A\e[2K') or have found online seem to work when the line being overwritten is longer than the width of the terminal (i.e. a line-wrap was triggered). Disabling line-wrap effectively truncates the displayed text. Is there some other trick, tool, or approach to handle this scenario that I'm missing?
I have a bash script that I share between my desktop (Fedora) and my phone (Termux on Android). Functionality-wise, I don't have any issues and everything works as expected. But the script is fairly lengthy and the terminal output is a bloody mess. Recently, I learned that I could use ANSI Escape Codes from bash to overwrite a previous line of output and significantly clean up the output from my script, while still having an idea of its progress and seeing any errors it encounters. My understanding of these is still pretty basic but from my testing it seems that printf recognizes \e as the ASCII escape character and then, based on this, ESC[#A "moves cursor up # lines" and ESC[2K "erases the entire line".
Anyway, where I ran into an issue was that I was expecting all but the last line to be overwritten, and instead multiple other lines were still being displayed. Initially, I thought this was due to some Termux bug, but I later was able to confirm it is due to the size (width) of the terminal (I am able to recreate the issue in gnome-terminal by resizing my window or by increasing the length of the text). Basically, what I am seeing is that if the line of output that I wish to overwrite is longer than the width of the terminal, then the line "wraps" the remaining text onto a new line and the overwrite only replaces the wrapped portion of the text.
Here is a snippet that recreates the issue I'm having in my script:
# create an array with variable-length texts to simulate status messages
arrStatusTexts=( );
for i in {10..200..10}; do
arrStatusTexts+=("$(printf '%*s\n' $i ' ' | tr ' ' X)");
done
# print out status at each step, overwriting output of each previous step as we go
echo "";
echo "--------------------";
echo "Steps of process"
echo "--------------------";
for ((i=0; i < ${#arrStatusTexts[@]}; i++)); do
[[ 0 != "$i" ]] && printf '\e[1A\e[2K';
printf 'Step %s of %s: %s\n' "$(($i + 1))" "${#arrStatusTexts[@]}" "${arrStatusTexts[$i]}";
done
From my desktop terminal window, before running the above, I get:
$ stty size
34 135
Edit: From both my desktop and from termux, my TERM variable shows as:
$ echo "$TERM"
xterm-256color
What I would like to see as the final output after running the above for loop is something like:
Step 20 of 20: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
What I actually see after for loop completes is:
Step 13 of 20: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Step 14 of 20: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Step 15 of 20: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Step 16 of 20: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Step 17 of 20: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Step 18 of 20: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Step 19 of 20: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Step 20 of 20: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Basically it is overwriting steps 1-12 correctly because those are smaller than the width of my terminal window. But for steps 13 onward, the length of each line "wraps" and only the wrapped portion is cleared.
Edit 2022-Sep-01: Found more oddities. Based on answers here and here, using the \e[6n sequence, I was attempting to get the cursor position prior to each printf statement in order to see if that would somehow be useful.
altering the above to:
# from: https://unix.stackexchange.com/a/183121/379297
function pos () {
local CURPOS
read -sdR -p $'\E[6n' CURPOS
CURPOS=${CURPOS#*[} # Strip decoration characters <ESC>[
echo "${CURPOS}" # Return position in "row;col" format
}
export -f pos;
arrCursorPos=( );
for ((i=0; i < ${#arrStatusTexts[@]}; i++)); do
# get cursor position
arrCursorPos+=("$(pos)");
printf 'Step %s of %s: %s\n' "$(($i + 1))" "${#arrStatusTexts[@]}" "${arrStatusTexts[$i]}";
done
for ((i=0; i < ${#arrCursorPos[@]}; i++)); do
echo "cursor pos $(($i + 1)): '${arrCursorPos[$i]}'";
done
The output I got for the status messages is the same as above but here's the output of the 2nd array which holds cursor positions:
cursor pos 1: '19;1'
cursor pos 2: '20;1'
cursor pos 3: '21;1'
cursor pos 4: '22;1'
cursor pos 5: '23;1'
cursor pos 6: '24;1'
cursor pos 7: '25;1'
cursor pos 8: '26;1'
cursor pos 9: '27;1'
cursor pos 10: '28;1'
cursor pos 11: '29;1'
cursor pos 12: '30;1'
cursor pos 13: '31;1'
cursor pos 14: '33;1'
cursor pos 15: '34;1'
cursor pos 16: '34;1'
cursor pos 17: '34;1'
cursor pos 18: '34;1'
cursor pos 19: '34;1'
cursor pos 20: '34;1'
At first, I thought this was not working correctly because I was getting the same pos for indexes 15-20. After clearing the screen (Ctrl+L) and rerunning a few times, I saw different output:
cursor pos 1: '11;1'
cursor pos 2: '12;1'
cursor pos 3: '13;1'
cursor pos 4: '14;1'
cursor pos 5: '15;1'
cursor pos 6: '16;1'
cursor pos 7: '17;1'
cursor pos 8: '18;1'
cursor pos 9: '19;1'
cursor pos 10: '20;1'
cursor pos 11: '21;1'
cursor pos 12: '22;1'
cursor pos 13: '23;1'
cursor pos 14: '25;1'
cursor pos 15: '27;1'
cursor pos 16: '29;1'
cursor pos 17: '31;1'
cursor pos 18: '33;1'
cursor pos 19: '34;1'
cursor pos 20: '34;1'
And realized what is going on. For the last few array elements, it is getting to the last row (in my case 34 - as is reported in col1 by stty size). At that point, any new lines that are output cause the displayed text to scroll and I still end up on the last line (34). So this method does seem like it might be a reliable way of keeping track of the initial cursor position.
I also tried an alternate approach (suggested here, here, and here involving exec < /dev/tty and stty. Using the function here and modifying the snippet to:
function extract_current_cursor_position () {
export $1
exec < /dev/tty
oldstty=$(stty -g)
stty raw -echo min 0
echo -en "\033[6n" > /dev/tty
IFS=';' read -r -d R -a pos
stty $oldstty
eval "$1[0]=$((${pos[0]:2} - 2))"
eval "$1[1]=$((${pos[1]} - 1))"
}
export -f extract_current_cursor_position;
arrCursorPos=( );
for ((i=0; i < ${#arrStatusTexts[@]}; i++)); do
# get cursor position
extract_current_cursor_position pos1
arrCursorPos+=("${pos1[0]} ${pos1[1]}");
printf 'Step %s of %s: %s\n' "$(($i + 1))" "${#arrStatusTexts[@]}" "${arrStatusTexts[$i]}";
done
for ((i=0; i < ${#arrCursorPos[@]}; i++)); do
echo "cursor pos $(($i + 1)): '${arrCursorPos[$i]}'";
done
Rerunning this after clearing the screen, I got:
cursor pos 1: '10 0'
cursor pos 2: '11 0'
cursor pos 3: '12 0'
cursor pos 4: '13 0'
cursor pos 5: '14 0'
cursor pos 6: '15 0'
cursor pos 7: '16 0'
cursor pos 8: '17 0'
cursor pos 9: '18 0'
cursor pos 10: '19 0'
cursor pos 11: '20 0'
cursor pos 12: '21 0'
cursor pos 13: '22 0'
cursor pos 14: '24 0'
cursor pos 15: '26 0'
cursor pos 16: '28 0'
cursor pos 17: '30 0'
cursor pos 18: '32 0'
cursor pos 19: '32 0'
cursor pos 20: '32 0'
This one seems like it would work too. Though I am not sure why the extract_current_cursor_position function subtracts 2 from the y and 1 from the x values. I would probably need to look into that or remove the subtractions.
I still have to look into ncurses options (e.g. tput). I did already check that the ncurses package is at least available on Termux but I'll come back and fill in more info as I test more.
The obvious changes that I could make are:
1. Don't change my script and just live with the multi-line clutter like I've been doing. But I would strongly prefer to fix my output, unless it becomes too much work.
2. Shorten all the status texts and reduce printing variables in output till everything is smaller than the smallest screen's width (stty size on my phone reports 17 48 so e.g. limit everything to 48 chars per line). While I don't mind some refactoring and putting in more effort, the idea of doing hundreds of text changes seems very tedious and teaches me nothing about what is actually going on. I would rather save this as a last resort for if there turns out not to be a better way. That and this would probably lose meaningful info or require me making other compromises to how things are displayed.
3. Same as 2 but use printf -v msg to put the output in a variable then print truncated text e.g. printf '%s\n' "${msg:0:48}". Same issues as with 2, but I would also likely lose some meaningful info.
4. Similar to above but rather than truncating, just keep track of the previous message's length and divide by terminal width to determine the number of printf '\e[1A\e[2K'; statements I should use. Still a bit of work but less than if I had to edit every message and should give a better end result.
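The arithmetic for option 4 can be sketched like this (the width and length values are made up for the demo; at runtime they would come from tput cols / stty size and from the previously printed message):

```shell
cols=48   # terminal width; at runtime: cols=$(tput cols)
len=73    # length of the status line printed on the previous iteration
rows=$(( (len + cols - 1) / cols ))   # ceiling division: rows the line occupied
echo "$rows"   # -> 2, so emit two '\e[1A\e[2K' up-and-clear sequences
```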
Curious if there is an easier way to go about this that I am missing. Maybe some way to "grep" text that was already printed and clear to a specific offset from the current position (adding a unicode char or asterisk or something to the beginning of each status line would be pretty easy search/replace). Or some command or builtin that I am unaware of? My google fu hasn't turned up any obvious solutions besides what I listed above.
I don't need a fully POSIX-compliant solution; just one that works in bash with standard tools (python/perl/awk one-liners are all fair game... but rewriting my entire bash script into one of those is not). Considering this domain usually deals with desktop questions, I'm not expecting familiarity with Termux but if the solution requires a graphical session (e.g. won't work on ssh) or a tool only available on x86_64 architectures (Termux uses ARM builds) then it probably won't work. Most common bash/linux tools seem to work fine there tho. My desktop currently has bash 5.1.8 (and I will be upgrading to Fedora 36 soon) and Termux has the ARM version of bash 5.1.16.
|
About half-way through writing a solution to your question I realised that this is why (n)curses terminal virtualisation is so much easier: it hides all the nasty terminal dependant details (well most of them, well some of them). Using terminfo directly is painful - but better than raw escape sequences.
This is bash code. It shouldn't be hard to rewrite as sh though.
# Output a printf style format string and arguments and return the cursor
# to the beginning of the line. DO NOT use newline `\n`.
#
lineOut() {
local rows cols len lines
# Number of rows/columns on the terminal device
rows=$(tput lines)
cols=$(tput cols)
# Output
printf "$@"
# How many lines we wrote
len=$(printf "$@" | expand | wc -m)   # expand tabs so each counts as its display width
lines=$(( len / cols ))
if tput am
then
# Cursor does not wrap when writing to the last column
len=$(( len - (cols * lines) ))
[[ $len -eq 0 ]] && (( lines-- ))
fi
# Move up the necessary number of lines to column 1
printf '\r'
for (( ; lines > 0; lines-- )); do tput cuu1; done
}
# Populate the arrStatusTexts (demo only) to simulate status messages
arrStatusTexts=()
for i in {10..200..10}; do
arrStatusTexts+=("$(printf '%*s\n' "$i" ' ' | tr ' ' X)")
done
# Output
for ((i=0; i < ${#arrStatusTexts[@]}; i++)); do
lineOut 'Step %s of %s: %s' "$(($i + 1))" "${#arrStatusTexts[@]}" "${arrStatusTexts[$i]}";
sleep 1
done
printf '.\n'
The terminfo codes used by tput are documented in man 5 terminfo. Running with a terminal type that does not have the necessary sets of control codes (try export TERM=dumb, for example) the solution degrades cleanly.
While working on a solution I found u7 (user7), which happens to trigger the "What is your cursor position?" question of the terminal:
# Magic to read the current cursor position (origin 1,1)
tput u7; read -t1 -srd'[' _; IFS=';' read -t1 -srd'R' y x
It's no longer relevant to the solution proposed here but it may come in useful as a bonus.
| Accurately overwriting previous lines of bash terminal output when text is wider than terminal (e.g. wraps mutliple lines)? |
1,448,891,626,000 |
I started tweaking around with my bash prompt lately and I find myself not understanding how the escape character works. I have the following:
PS1="\[$RED\]\342\224\214\342\224\200"
In this I get it, \[ escapes the [ character and \xxx escapes my UTF-8 characters. But in the following line I get a weird result:
PS1+="$([ \$? != 0 ] && echo \[$RED\]\342\234\227\ )"
This will always print X in my prompt, yet if I escape the first $ it will print it only when exit status of any command is non zero. I do not understand why. Wasn't $(commands) supposed to output the result of given commands? If I escape it like so \$() is the whole sequence escaped or just the dollar sign? If I don't escape why doesn't it print $? It just prints the X. I have the same question for the $ inside the square brackets. Why do I have to escape it?
Also I believe this qualifies as another question but is there any way of printing the actual exit status in my prompt?
|
In a double-quoted string, command substitutions ($(...)) and variable expansions ($foo) are processed, and the backslash in front of the dollar sign prevents that, removing the backslash. This happens during the assignment PS1="$(...)" or PS1="\$(...)".
But the same expansions are also processed when the prompt is printed, so if the dollar sign was escaped on assignment, the resulting PS1 has an unescaped dollar
sign, and the expansion happens when the prompt is printed.
So, with an unescaped command substitution, the command only runs once, when the prompt is assigned. With the backslash, it runs every time the prompt is printed.
The difference should be easy to test with these two:
PS1="$(date) "
PS1="\$(date) "
The \[ sequence is different, though. It's only relevant when the prompt is processed, not in regular double-quoted strings. It's used to mark parts of the prompt that don't print any visible characters. Moreover, it only works in the prompt before expansions, so something like PS1='$(echo "\[...\]")' will likely not do what you want.
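As for the final part of the question (showing the actual exit status): since expansions run each time the prompt is printed, it's enough to keep $? unexpanded when assigning PS1 (single quotes, or an escaped dollar sign). A minimal sketch:

```shell
# Single quotes keep $? literal in the assignment,
# so it's expanded freshly before every prompt.
PS1='[$?] \$ '
```

After a failing command the next prompt reads e.g. [1] $.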
| How does the escape character '\' work in bash prompt? |
1,448,891,626,000 |
Using tmux to send commands along from one terminal to another, I realize that
$ tmux send -t mySession "text" ENTER
correctly sends text, but
$ tmux send -t mySession "up" ENTER
sends text again, probably because up is interpreted not as text, but as the key name for the Up arrow key.
Similarly,
$ tmux send -t mySession "3" ENTER
correctly sends 3, but
$ tmux send -t mySession "-3" ENTER
tmux: unknown option -- 3
usage: send-keys [-lRM] [-t target-pane] key
fails with this error message, and this naive try to escape
$ tmux send -t mySession "\-3" ENTER
sends 3 again, not the expected -3.
Anyway, I'm pretty sure that I've missed something about the way tmux interprets and understand its argument. What am I missing here?
How do I ensure that mytmuxcommand "<text>" ENTER will always be interpreted as "send actual <text> then send ENTER key"?
|
To send a string literally you can use the -l option to send-keys, but as you might still have more options after the -l, you need to give it something like '' (an empty string) as the first key so that a following argument beginning with - is no longer parsed as an option.
You cannot mix and match the literal with keynames like Enter, so finally you need to give two commands, eg:
tmux send-keys -t session -l '' -3 \; send-keys -t session Enter
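Wrapped up as a small helper (the function name tmux_type is my own invention, not a tmux command; it just packages the technique above):

```shell
# Send TEXT to SESSION literally, then press Enter.
# The '' is a dummy first key so a TEXT beginning with '-'
# is not parsed as an option.
tmux_type() {
  tmux send-keys -t "$1" -l '' "$2" \; send-keys -t "$1" Enter
}
```

Usage: tmux_type mySession -3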
| Escape keywords with tmux send |
1,448,891,626,000 |
terminfo(5) manual page describes a set of capabilities wnum (maximum number of definable windows), cwin (define a window), wingo (go to window), wind (resize current window), but only one terminal definition in terminfo master file uses any of them (tvi9065; it sets wnum=0).
Do any hardware terminals, or terminal emulators, exist that support these capabilities?
|
Short: few terminals have provided these features. Good luck on finding one.
Long: determining if a terminal supports the windowing features is misleading, because the features which are most-used from terminfo are those used in curses. But it's a (weak) clue.
Both ncurses and the AT&T SVr4 terminal descriptions include a few.
Just considering these capabilities listed in terminfo(5):
maximum_windows wnum MW maximum number of
definable windows
create_window cwin CW define a window #1
from #2,#3 to #4,#5
goto_window wingo WG go to window #1
set_window wind wi current window is
lines #1-#2 cols
#3-#4
ncurses lists only a few using wind (none using the others, since stating zero windows is redundant):
Data General d412-unix, etc. (see manual for d413)
Datapoint dp8242
SCO (e.g., OpenServer) sco-ansi
Those particular entries were added a while back (nothing recent):
# 10.1.14 (Sat Nov 22 19:59:03 EST 1997)
# * add vt220-js, pilot, rbcomm, datapoint entries from esr's 27-jun-97
# version.
# * add hds200 description (Walter Skorski)
# * add EMX 0.9b descriptions
# * correct rmso/smso capabilities in wy30-mc and wy50-mc (Daniel Weaver)
# * rename xhpterm back to hpterm.
# 1998/9/26
# * format most %'char' sequences to %{number}
# * adapt IBM AIX 3.2.5 terminfo - T.Dickey
# * merge Data General terminfo from Hasufin <[email protected]> - TD
# 2002-05-25
# * add kf13-kf48 strings to cons25w -TD
# * add pcvt25-color entry -TD
# * changed a few /usr/lib/tabset -> /usr/share/tabset.
# * improve some features of scoansi entry based on SCO's version -TD
# * add scoansi-new entry corresponding to OpenServer 5.0.6
There also is a comment on hds200 which indicates that wind was possible, but conflicted with another use.
The AT&T terminal descriptions likewise had few that used windowing. The SCO terminfo which was the source for much of ncurses in 1995-1996 had a commented-out wind in the description of the Concept AVT:
# Info:
# Concept AVT with status line. We get the status line using the
# "Background status line" feature of the terminal. We swipe the
# first line of memory in window 2 for the status line, keeping
# 191 lines of memory and 24 screen lines for regular use.
# The first line is used instead of the last so that this works
# on both 4 and 8 page AVT's. (Note the lm#191 or 192 - this
# assumes an 8 page AVT but lm isn't currently used anywhere.)
#
avt+s|concept avt status line changes,
is3=\E[2w\E[2!w\E[1;1;1;80w\E[H\E[2*w\E[1!w\E2\r\n,
tsl=\E[2;1!w\E[;%p1%dH\E[2K, fsl=\E[1;1!w, eslok, hs,
dsl=\E[0*w, lm#191, smcup=\E[2;25w\E2\r, rmcup=\E[2w\E2\r\n,
.wind=\E[%i%p1%{1}%+%d;%p2%d;%p3%{01}%+%d;%p4%{01}%+%dw
A comment on one which I had from an OSF/1 machine says this:
# EXECUTION ENVIRONMENT:
#
# This entry does not use any of the fancy windowing stuff of the
# 2626. Indeed, terminfo does not yet handle such stuff. Since
# changing any window clears memory, it is probably not possible to
# use this for screen opt. ed is incredibly slow most of the time.
# It may due to the exact padding.
#
# Since the terminal uses xoff/xon this is intended only for cost
# computation, so that the terminal will prefer el or even dl1 which
# is probably faster! \ED\EJ\EC is also being used -
# apparently ed is only extra slow on the last line of the window.
# The padding probably should be changed.
hp2626|hp2626a|hp2626p|2626|2626a|2626p|2626A|2626P|hp 2626,
but (see manual) on closer inspection you probably would notice that each of these terminals has a different feature which wind does not entirely address (it's too broad a range of possibilities, and too few terminals that provided window-like features to justify their use in screen-optimization).
| Which terminals or terminal emulators support hardware windows? |
1,448,891,626,000 |
I want to write a bash script that takes in two parameters, running them as commands and dump their output into files. myscript.sh contents:
$1 > file1.tmp
$2 > file2.tmp
This works fine for the following example:
myscript.sh 'echo Hello World' 'echo Hello World 2'
file1.tmp now contains the text Hello World and file2.tmp contains text Hello World 2
However this breaks down when using a command that contains a pipe:
myscript.sh 'ls | grep stuff' 'ls | grep morestuff'
ls: cannot access |: No such file or directory
ls: cannot access grep: No such file or directory
ls: cannot access stuff: No such file or directory
ls: cannot access |: No such file or directory
ls: cannot access grep: No such file or directory
ls: cannot access morestuff: No such file or directory
It appears the pipe is being escaped because I get similar output as the first error if I for example run this:
ls \|
ls: cannot access |: No such file or directory
How can I unescape the pipe in my script?
|
You're passing a piece of shell code into the script. To the script, this is just text. For it to be interpreted as a full-blown shell command, you will have to eval it:
eval "$1" >file1.tmp
eval "$2" >file2.tmp
This works when $1 and $2 are simple things like echo hello because they are simple commands (not lists or compound commands).
An analogy with another programming language would be if you passed a piece of C code into a C program as a text string. The code would have to be compiled (and linked) somehow before it could be executed as part of the program.
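An equivalent sketch that hands each argument to a child shell instead of eval-ing it in the current one (the same caution about trusting the input applies):

```shell
#!/bin/bash
# myscript.sh: run each argument as a complete shell command line
bash -c "$1" > file1.tmp
bash -c "$2" > file2.tmp
```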
| unescape pipe in parameters |
1,448,891,626,000 |
New to BASH and Linux and need some quick help. I have written a quick bash script that asks the user for a username and password from the command line and passes that info to a remote server later via SSH. Problem is, one user has a ' character in the password and the value is causing EOF errors when using SSH. The following is my code to capture the username and password
echo "Please type in username:"
read username
read -s -p "Enter Password: " password
and the following is where I send the information to the remote server. Note, there is an array of servers I must send this to.
echo "Adding username and password..."
ssh root@${dssAssocArray[$key]} "echo username=$username > /etc/smbcredentials"
ssh root@${dssAssocArray[$key]} "echo password=$password >> /etc/smbcredentials"
Is there some simple way to dereference the ' (if that's the correct term) or any other special character that might cause an escape?
|
Just make it:
echo "Adding username and password..."
ssh "root@${dssAssocArray[$key]}" 'cat > /etc/smbcredentials' << EOF
username=$username
password=$password
EOF
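If you prefer to keep the two separate commands, bash's printf %q can quote arbitrary values (including ') so they survive the remote shell unharmed. A sketch, assuming bash on both ends:

```shell
# %q produces shell-quoted output, e.g. it's -> it\'s,
# so the remote shell reconstructs the original value.
ssh "root@${dssAssocArray[$key]}" "printf 'username=%s\npassword=%s\n' \
  $(printf %q "$username") $(printf %q "$password") > /etc/smbcredentials"
```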
| How to interpret ' character in a string passed through SSH |
1,448,891,626,000 |
I'm writing a shell script that appends binary contents to a file. I tried using this command:
echo -en '\x61\x62\x63..' >> /tmp/myfile
but that caused the following output:
-en \x61\x62\x63..
Is there any way I could append the contents to a file rather than having to remove all contents every time?
Note
I'm trying to do this on a system that only has /bin/sh, this command works fine with bash but not when using the default shell.
|
In bash, echo is a builtin that understands the -e and -n options, so you get that behavior there. Your sh also uses a builtin echo, but it's a different implementation that doesn't recognize those options and prints them literally.
So try using /bin/echo rather than the bare echo (i.e. with the /bin/ prefix), or use printf, which behaves consistently across shells.
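A more portable route is printf, which interprets octal escapes in the format string in every POSIX sh (\141 \142 \143 are the bytes 0x61 0x62 0x63, i.e. abc):

```shell
# Appends the raw bytes 0x61 0x62 0x63 to the file,
# with no dependence on echo's flag handling.
printf '\141\142\143' >> /tmp/myfile
```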
| /bin/sh - echo >> operator not working as expected |
1,448,891,626,000 |
I'm using gnome-terminal as my terminal emulator, which sets TERM=xterm-256color. The key sequences sent when pressing the function keys F1, F2, … are as printed by
$ infocmp -L1 | grep _f | sort -V
key_f1=\EOP,
key_f2=\EOQ,
key_f3=\EOR,
key_f4=\EOS,
# gap 4
key_f5=\E[15~,
# gap 5
key_f6=\E[17~,
key_f7=\E[18~,
key_f8=\E[19~, # no gap 4 here
key_f9=\E[20~,
key_f10=\E[21~,
# gap 5
key_f11=\E[23~,
key_f12=\E[24~,
# gap 4
key_f13=\E[1;2P,
key_f14=\E[1;2Q,
key_f15=\E[1;2R, # no gap 5 here
key_f16=\E[1;2S,
# gap 4
key_f17=\E[15;2~,
# gap 5
key_f18=\E[17;2~,
…
I wonder why there are gaps (marked with # gap …) in the key sequences. The "missing" key sequences (for instance \E[16~) are not used otherwise, as indicated by infocmp -L1 | grep -F '\E[16~').
Modern keyboards group function keys into groups of four keys each, so a gap between these groups is understandable. Some historic keyboards grouped some of their function keys into groups of five keys each, so a gap between those groups is understandable too. However, the gaps are sometimes (but not always) between groups of 4 and sometimes (but not always) between groups of 5 keys, which causes some of the function keys to build groups of just 1 or 2 keys – I haven't heard of any keyboard with such a layout.
What is the reason for these key sequences? Compatibility between keyboard layouts (with groups of 4 or 5 function keys each) seems unlikely, as gaps from the one layout would break the other layout and vice versa.
|
gnome-terminal copies the most commonly-used keyboard configuration of xterm.
In turn, xterm uses key assignments to match DEC VT220, e.g., its LK201 keyboard. Those account for most of the gaps. The different encoding used in F1-F4 and F13-F16 are not from the VT220. F1-F5 on a VT220 are local function keys by default (normally do not send anything to the host).
Rather, the F1-F4 codes are the top row of the VT100 numeric keypad which did not fit into the X keyboard configuration because NumLock was reserved. F13-F16 adapt that workaround to xterm's modified function keys (e.g., where the shift modifier causes xterm to send the ;2 part of the key sequence).
Some of the VT220 keys have other names (e.g., F16 is the Help key), but xterm has no use for those names (because they are application-specific). If it did use those names in the terminal description, there would be more gaps in the listing. But the VT220-specific gaps relate to the groups of function-keys on the DEC hardware terminal.
$ infocmp -L1 | grep _f | sort -V
key_f1=\EOP,
key_f2=\EOQ,
key_f3=\EOR,
key_f4=\EOS,
# gap 4
key_f5=\E[15~,
# gap 5
key_f6=\E[17~,
key_f7=\E[18~,
key_f8=\E[19~, # no gap 4 here
key_f9=\E[20~,
key_f10=\E[21~,
# gap 5
key_f11=\E[23~,
key_f12=\E[24~,
# gap 4
key_f13=\E[1;2P,
key_f14=\E[1;2Q,
key_f15=\E[1;2R, # no gap 5 here
key_f16=\E[1;2S,
# gap 4
key_f17=\E[15;2~,
# gap 5
key_f18=\E[17;2~,
…
The xterm-r6 terminal description does not have these changes for F1-F4 (and F13-F16), but shows the expected gaps:
> infocmp -L1 xterm-r6 | grep key_f | sort -V
key_f1=\E[11~,
key_f2=\E[12~,
key_f3=\E[13~,
key_f4=\E[14~,
key_f5=\E[15~,
...
key_f6=\E[17~,
key_f7=\E[18~,
key_f8=\E[19~,
key_f9=\E[20~,
key_f10=\E[21~,
key_f11=\E[23~,
key_f12=\E[24~,
key_f13=\E[25~,
key_f14=\E[26~,
...
key_f15=\E[28~,
key_f16=\E[29~,
...
key_f17=\E[31~,
key_f18=\E[32~,
key_f19=\E[33~,
key_f20=\E[34~,
key_find=\E[1~,
(gnome-terminal isn't actually a VT220 emulator because it lacks almost all of the character-set features, but rather an xterm-imitator).
| Historical reason for function keys having non-consecutive key sequences in xterm |
1,448,891,626,000 |
I need to identify what this escape sequences represents. I see this sequence is autogenerating on my server's console, but I'm not sure what is the reason for that.
Escape sequence:
^[[[D
I've checked this chart of escape sequences as reference: http://ascii-table.com/ansi-escape-sequences-vt-100.php , but haven't found anything matching.
|
NOTE: This is as I understand things, so it might be a bit off!
The characters ^[ typically signifies the Escape key itself. That's a Control (^) + a open square bracket ([).
excerpt from Escape characters - ASCII escape character
The ASCII "escape" character (octal: \033, hexadecimal: \x1B, or ^[, or, in decimal, 27) is used in many output devices to start a series of characters called a control sequence or escape sequence. Typically, the escape character was sent first in such a sequence to alert the device that the following characters were to be interpreted as a control sequence rather than as plain characters, then one or more characters would follow to specify some detailed action, after which the device would go back to interpreting characters normally. For example, the sequence of ^[, followed by the printable characters [2;10H, would cause a DEC VT102 terminal to move its cursor to the 10th cell of the 2nd line of the screen. This was later developed to ANSI escape codes covered by the ANSI X3.64 standard. The escape character also starts each command sequence in the Hewlett Packard Printer Command Language.
On systems where you're using UTF-8 this escape sequence is actually 2 characters, so it's now ^[ followed by an additional [.
excerpt from ANSI escape code - Sequence Elements
There is a single-character CSI (155/0x9B/0233) as well. The ESC+[ two-character sequence is more often used than the single-character alternative, for details see C0 and C1 control codes. Only the two-character sequence is recognized by devices that support just ASCII (7-bit bytes) or devices that support 8-bit bytes but use the 0x80–0x9F control character range for other purposes. On terminals that use UTF-8 encoding, both forms take 2 bytes (CSI in UTF-8 is 0xC2, 0x9B)[discuss] but the ESC+[ sequence is clearer.
Knowing the above 2 pieces of information this would make your escape sequence Esc+[+D which works out to be, big surprise, the Cursor Backward (left arrow) sequence.
excerpt from ANSI Escape sequences
Esc[ValueD Cursor Backward: Moves the cursor back by the specified
number of columns without changing lines. If the cursor is
already in the leftmost column, ANSI.SYS ignores this
sequence.
References
Esc Key
End-of-transmission character
Control Characters
Escape Characters
ANSI escape code
ANSI Escape sequences - (ANSI Escape codes)
| Weird escape sequence |
1,448,891,626,000 |
I have a settings and a zsh session
~ bindkey | grep help
"^[H" run-help
"^[h" run-help
Why when i press "Control + [ + h" word under cursor removes and nothing happens, but if i press "Alt + h" man page opens correctly?
|
^[ denotes the Escape character. Check here: https://en.wikipedia.org/wiki/ASCII
In your case it seems your Alt key works as a synonym for the Escape key:
https://en.wikipedia.org/wiki/Alt_key
| Problem with understanding keys bindings |
1,448,891,626,000 |
Here are my tries:
The last one worked, but it breaks copy-pasting (adding a lot of spaces when copied). Is there a better way?
Copyable text:
$ PS1='\['$'\x1b[0m\]$ '
$ echo -e "\x1b[41;37mWarning text\x1b[0m"; echo Normal text
Warning text
Normal text
$ echo -ne "\x1b[41;37mWarning text"$'\n'"\x1b[0m"; echo Normal text
Warning text
Normal text
$ echo -ne "\x1b[41;37mWarning text"$'\n'"\x1b[47;30m"; tr </dev/zero \\0 \ |head -c 80; echo -ne "\x1b[A"; echo Normal text
Warning text
Normal text
$
$ t="Warning text";echo -ne "\x1b[41;37m";echo -n "$t";{ tr </dev/zero \\0 \ |head -c $(bc <<<"$(stty -a <&3|grep -Po '(?<=columns )[0-9]+')-$(wc -c<<<"$t")+1"); } 3<&0;echo -e "\x1b[0m";echo "Normal text"
Warning text
Normal text
$
|
Found the solution myself (in this related question). Use this:
echo -e '\x1b[41;37mWarning text\x1b[K\x1b[0m';echo Normal text
The documentation says about \x1b[K:
K EL Erase line (default: from cursor to end of line).
ESC [ 1 K: erase from start of line to cursor.
ESC [ 2 K: erase whole line.
| How to change the background color for exactly one line? |
1,448,891,626,000 |
Suppose:
a=b; b=c; c=d
Then eval echo \$$a produces the output:
c
If we want to extract the output d using just input a, I tried the following way:
(i) eval echo \$(eval echo \$$a) produces the error:
syntax error near unexpected token '('
(ii) eval echo \$\(eval echo \$$a\) produces the output:
c
I am not able to understand why backslash-escaping the bracket got rid of the error.
Also, could someone please explain why I didn't get the output as d in the second instance?
|
First, a word of caution:
From a security standpoint, it's a really bad idea to use eval in any shell script unless you know exactly what you're doing. (And even then, there are virtually zero instances where it is actually the best solution.) As a beginner to shell scripting, please just forget that eval even exists.
For further reading, see Eval command and security issues.
To get the output d, you could use:
eval echo \$${!a}
Or:
eval eval echo \\\$\$$a
Where you went wrong was in passing the unescaped parentheses characters to echo. If they are preceded by an unquoted $, it is command substitution. But if the $ is quoted and not the parentheses, it isn't valid shell syntax.
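For completeness, a quick trace of both forms (bash-specific; ${!a} is bash's indirect expansion):

```shell
a=b; b=c; c=d
echo "${!a}"              # indirect expansion: the value of $b, i.e. c
eval echo \$${!a}         # eval sees 'echo $c': prints d
eval eval echo \\\$\$$a   # first pass yields 'eval echo \$$b'; second 'echo $c': prints d
```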
| Using the eval command twice |
1,448,891,626,000 |
I want to list the components of the current working directory in a text file
ls -1 > textfile
Output looks fine with more.
1010661085645
1010729039145
1010747080245
1010849051345
1010859053445
1011046075845
However when I view this textfile with emacs, several strange characters appear
[0m[01;34m1010661085645[0m
[01;34m1010729039145[0m
[01;34m1010747080245[0m
[01;34m1010849051345[0m
[01;34m1010859053445[0m
[01;34m1011046075845[0m
Can anyone explain what is going on here?
|
Those "strange characters" are escape sequences for coloring the output.
This will print that number in blue color:
echo -e '\033[01;34m1010729039145\033[0m'
See man console_codes for more details.
You can tell ls in which cases it shall output colors:
--color[=WHEN]
colorize the output; WHEN can be 'never', 'auto', or 'always' (the default); more info below
Looks like your ls is actually an alias for ls --color=always. (Type type ls to verify.)
Usually ls --color=auto will do the right thing: It will print colors on screen but no into files.
If you really want the color in your file, your have to decide, if you want to see the actual escape sequences or the interpreted colors.
For example the less command will default to printing the actual sequences, but you can tell it to show the colors instead with the -R option:
-R or --RAW-CONTROL-CHARS
Like -r, but only ANSI "color" escape sequences are output in "raw" form.
Try less textfile vs. less -R textfile.
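If a file already contains the sequences, you can strip them after the fact; a sketch using bash's $'…' quoting so sed receives a real ESC byte (the regex covers color/attribute sequences ending in m):

```shell
# Remove ANSI SGR (color) sequences like ESC[01;34m from the file.
sed $'s/\033\\[[0-9;]*m//g' textfile > textfile.clean
```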
| Strange characters when the output of `ls` is redirected to a file |
1,448,891,626,000 |
I keep a list of photos burned to dvds for incremental backups but I got into a strange situation where a file was already burned into a dvd but tar keeps saying it's not. Here the steps to reproduce:
mkdir tarbug && cd tarbug
mkdir -p photos/{1,2,3}
touch "photos/1/P06-23-08_15.21[1].jpg"
touch "photos/2/P06.jpg"
touch "photos/3/P01.jpg"
Create a list of all files (normally I append to list once a dvd is burned):
find photos -type f > list
Now give me new files excluding files in list and don't show directories:
tar cf - photos -X list | tar tf - | grep -v '/$'
Result:
photos/1/P06-23-08_15.21[1].jpg
That shouldn't show as it's in the file list. I figured out it's the brackets that need to be escaped with a backslash, but how am I supposed to know that? Is there any list of characters tar doesn't like or uses for something else?
|
If you look at tar's manpage, you'll see:
-X, --exclude-from FILE
exclude patterns listed in FILE
The info doc gives further details.
So list needs to be a list of patterns, not file names. A pattern, as normal, means a shell wildcard pattern. So, at minimum, *, ?, [ and ] are special. Possibly { and } as well.
The documentation also mentions you can change this by passing --no-wildcards. Then the names will be matched literally, not as patterns. --no-wildcards needs to go before the -X.
[This is GNU tar I checked. Other tar implementations probably differ.]
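To see the difference, here's the question's scenario reduced to one bracketed file name (GNU tar assumed):

```shell
cd "$(mktemp -d)"
mkdir -p photos/1
touch 'photos/1/P06-23-08_15.21[1].jpg'
find photos -type f > list

# By default [1] is a wildcard class matching the single character 1,
# so the literal name "...21[1].jpg" is NOT matched and NOT excluded.
# With --no-wildcards the names in 'list' match literally:
tar -cf - --no-wildcards -X list photos | tar -tf -
# only the directories photos/ and photos/1/ are listed
```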
| What characters TAR doesn't like? |
1,448,891,626,000 |
This question is inspired by this one on SU. How can I print bold or color using lp and ANSI escape sequences? I know how to display, for example, bold text in the terminal:
$ echo -e '\033[01;1mthis text will be bold\033[00;0m,this will not'
this text will be bold,this will not
However, when I pipe this directly to lp I get a file that looks like this:
01;1mthis text will be bold00;0m,this will not
So, I figure the way to do this would be to use groff to create a postscript file and print that. It looks like groff should be able to do this, I know it can correctly convert a man page to a ps file and keep whatever is bold in the man page bold in the postscript. However, the groff documentation is enormous and kind of esoteric to someone with no postscript experience. I have tried various combinations of options, all of which result in a postscript file that looks like the line above:
echo -e '\033[01;1mthis text will be bold\033[00;0m,this will not' | groff >a.ps
echo -e '\033[01;1mthis text will be bold\033[00;0m,this will not' | groff -c >a.ps
echo -e '\033[01;1mthis text will be bold\033[00;0m,this will not' | groff -Pc >a.ps
echo -e '\033\[01;1m\]this text will be bold\033\[00;0m\],this will not' | groff -Tascii >a.ps
echo -e '\033[01;1mthis text will be bold'| groff -man >a.ps
echo -e '\033[01;1mthis text will be bold'| groff -mdoc >a.ps
echo -e '\033[01;1mthis text will be bold'| groff -me >a.ps
echo -e '\033[01;1mthis text will be bold'| groff -mm >a.ps
echo -e '\033[01;1mthis text will be bold'| groff -ms >a.ps
So, how can I use lp and groff or any other tool combination to print bold or colored text from the terminal?
|
That's more the reverse of what groff is designed to do.
What you're looking for can be achieved at least with this combination of tools:
aha
wkhtmltopdf
pdf2ps from ghostscript
Like:
printf '\e[31;1mfoo\e[mbar\n' |
aha |
wkhtmltopdf - - |
pdf2ps - output.ps
A bit overkill but it does the trick. You can probably skip the last part as PDF is as easily printed as postscript nowadays:
printf '\e[31;1mfoo\e[mbar\n' |
aha |
wkhtmltopdf - output.pdf
Or you can feed it directly to lp for printing.
| Can groff create a ps file with interpreted ANSI escape characters? |
1,586,378,066,000 |
From time to time it happens that I cat a binary either from curl or from the local filesystem. In most cases the broken terminal can be fixed with reset. In other cases, particularly if the binary is large, the terminal will be stuck for several minutes printing output like this:
aka
c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;
2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;
2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;
2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;2c1;
I have three questions regarding this scenario;
What does 2c1 mean and why is the terminal printing this?
Have you seen a cat in the wild, guarding against this undesired behavior in an interactive session?
Do you have any suggestions on how to program such a cat (in C or Go)
My initial instinct was to wrap cat in a function to detect this, but I soon realized that it is fairly difficult to get right and would have numerous edge-cases.
function cat() {
# warn user if
# - argument 1 is a large executable
# - argument 1 to the previous command in a pipe chain looks like a large binary
# abort if
# - session is interactive and we are able to detect 2c1 garbage
}
A practical solution could be to always use less (with LESSPIPE) when looking at "unsafe" input, but this question is not about pagers. I am aware of less and lesspipe. I use them actively every day. Perhaps less+lesspipe is the solution to this problem, one that the author(s) of less implemented some 20-30 years ago facing the same issue.
However, cat is different from a "pager" in more than one way... Primarily cat is non-interactive. This is significant to me.
The suggestion about less+lesspipe is genuinely good (imho) in practical terms, but I am more concerned with the nitty-gritty of control characters, special escape sequences and how different terminals handle these inputs.
I am more interested in the technical nitty-gritty details of control characters and how terminals or shells interpret "garbage" and control characters. I am not asking "how would you solve this problem". I am asking "why is the terminal handling binary files like this".
|
You interact with a terminal or terminal emulator through a serial line or pseudo-tty device (which emulates a serial line).
Though there is a software module in the kernel that sits in the middle as a kind of adaption layer and does some transformation (discussed briefly later), you typically:
send a stream of bytes to the terminal over that serial line which the terminal interprets either as glyphs to render on its screen or special instructions to change its behaviour
in return, the terminal sends a stream of bytes to the computer over another wire on that serial line to tell the computer what you typed or to respond to some of the control queries it received.
For instance, a terminal could be configured as an ISO8859-1 (aka latin1) terminal, which means that when it receives the bytes 0x53 0x74 0xe9 0x70 0x68 0x61 0x6e 0x65, it renders the S, t, é, p, h, a, n, e glyphs at the current cursor position on its screen. Conversely, when the user types S, the terminal sends the 0x53 byte.
Byte values in the range 0 to 0x1f are interpreted as control characters. That is, they are not represented as glyphs, but have special meaning.
For instance:
0x7 (BEL) generate an audio or video alert
0x8 (BS) moves the cursor to the left
0xa (LF) moves the cursor down
0xd (CR) moves the cursor to the first column of the screen
0x9 (TAB) moves the cursor to the next tabulation
There are only 32 control characters in that range and most terminals have many more features or way you can control them. So beside those, you can send sequences of more than one byte to control your terminal. For most terminals and for most of those sequences, the first byte is 0x1b (ESC) followed by one or more bytes.
For instance, while there are control characters to move the cursor left or down as seen above, there is none to move it to the right or up (as originally, in tele-typewriters, right would be done with "space", but in CRT terminals that erases what's under the cursor, and you wouldn't go up with a tele-typewriter as that would likely create a paper jam), so escape sequences had to be introduced for those, on most terminals 0x1b 0x5b 0x43 and 0x1b 0x5b 0x41 respectively (incidentally, that's also the byte sequence many terminals send upon pressing the Right and Up keys, for those that have such keys).
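You can inspect the raw bytes of any such sequence with od, e.g. the cursor-right sequence just mentioned:

```shell
# Dump the bytes of ESC [ C in hex.
printf '\033[C' | od -An -tx1
# 1b 5b 43   -> ESC, '[', 'C'
```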
Now among the escape sequences that terminals support are some that:
change the text or background colour and other graphic rendering attributes
change the charset. For instance, there's no Greek character in latin1, and terminals (from the pre-Unicode days and still today) support switching to a different charset to display letters of other languages, or box drawing characters.
set the position of tab stops
can query information from the terminal such as the cursor position, colour, window title, size...
can affect how input is processed. For instance, some terminals support entering a mode whereby upon pressing Shift+A for instance, it doesn't send a 0x41 (ASCII A) character but a sequence of bytes that encodes information about modifiers (shift, alt, ctrl...) and keycode.
some X11 terminal emulators recognise escape sequences to change the font, the window size, display JPEG images, send the screen contents to a printer...
In a text file, you usually only have bytes (or byte sequences if UTF-8 or other multibyte charsets) representing graphical characters. The only control characters you'll find in text files are NL (0xa, aka LF) and TAB (0x9).
When you do cat file.txt, cat just reads the contents of file.txt and writes it to its stdout. If stdout is a serial or pseudo-tty device file (/dev/ttyS0, /dev/pts/0 for instance) that has a terminal line discipline pushed onto it as would be the case if you run that command from an interactive shell in a terminal emulator, the line discipline translates those NLs to CR+NL (though NLNL may be translated to just CRNLNL) so the terminal upon receiving CRNL will move the cursor to the start and then down.
So the text in the contents of the file will be displayed on the terminal screen provided the text in the file is encoded in the character set of the terminal.
Now, bytes in executable files or other random binary files are not intended to represent characters, they can have any value including ones in the range 0 to 31, so when sent to a terminal, the terminal will do what it's told and interpret them as control characters, which may make it do anything as listed above and much more and render it completely unusable.
To guard against that, first you don't send those files to a terminal as that wouldn't make sense, or if you don't know whether a file may be a text file (or a file intended to be viewed verbatim by a terminal with escape sequences intended to be interpreted by a terminal) or not, you can use a tool that either removes the control characters (at least all those but TAB and NL) or give them a visual graphical representation.
That's what the -v and -t options as supported by many cat implementations do. Where with -v, all but NL and TAB are converted to some ^X notation for bytes 0 to 31 and 0x7f, M-^X for bytes 0x80 to 0x9f and 0xff and M-X for bytes 0xa0 to 0xfe which are common visual representations of non-ASCII characters. And -t does it just for TAB (changed to ^I).
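As a quick illustration of that -v behaviour (assuming a cat implementation that supports -v, such as GNU or BSD cat), you can feed it some control bytes and see the visual representation:

```shell
# A colour escape sequence followed by a BEL, made visible by cat -v:
# ESC (0x1b) is shown as ^[, BEL (0x7) as ^G; printable bytes pass through.
printf '\033[31mred\033[0m\007\n' | cat -v
# ^[[31mred^[[0m^G
```

Run directly (without cat -v), the same bytes would instead colour the word and ring the terminal bell.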
Or you can use a pager such as less or vim's view which do that by default (at least as long as you don't use the -r/-R raw options) and are a bit smarter in that they don't transform non-ASCII characters that are meant to have graphical representations in your locale and make it clearer what bytes have been transformed by using colouring or standout modes.
Or you can use tools dedicated to previewing non-text files such as hexdump -C or xxd.
See also the l command of sed which does something similar to cat -vte and is standard (contrary to cat -vte) in a less ambiguous way:
sed -n l < a-file
| Is there a safe cat? |
1,586,378,066,000 |
If I want the ANSI color 0 to be red, in the urxvt terminal, I need to pass the sequence \e]4;0;red\a to the latter:
printf '\e]4;0;red\a'
I found the general syntax here:
OSC 4 ; c ; spec BEL
Inside tmux, it doesn't work, maybe because it's consumed by tmux before it reaches the terminal. So, I need to protect it via another sequence found here:
printf '\ePtmux;\e\e]4;0;red\a\e\\'
Now if I want to apply a style to the text, like underlining it for example, whether I'm inside or outside tmux doesn't matter. The same sequence seems to always work:
printf '\e[4m underline \e[0m'
I thought that maybe this difference could be explained because I've set up some options in ~/.tmux.conf.
In particular, I set the option terminal-overrides to add and set the unofficial terminfo extensions Ss and Se to change the shape of the cursor inside tmux as explained in man tmux (section TERMINFO EXTENSIONS):
set-option -as terminal-overrides ',*:Ss=\E[%p1%d q:Se=\E[2 q'
But when I tried to apply a style to the text inside tmux, I started the latter without any configuration:
tmux -Ltest -f/dev/null
Inside tmux, why don't you need to protect the sequence \e[4m underline \e[0m like you need to for \e]4;0;red\a?
|
tmux is not XTerm (even if you are using it inside XTerm). It acts as its own terminal emulator (and, of course, multiplexer) on top of whatever terminal you happen to be using. The page you linked is XTerm control sequences, which (while very useful) is not applicable to every terminal in existence. For screen, the online manual page screen(1) lists the control sequences it accepts. tmux(1) does not contain a similar section, but its source code includes tools/ansicode.txt, an old (1984) description of various terminal control sequences, which reflects at least what its authors aimed for, even if it does not strictly document the current behaviour.
In any case, the SGR sequences for setting text attributes are more universally supported than the "Operating System Commands", such as the one you use to change the color palette. From the same linked page:
CSI Pm m Character Attributes (SGR)
Ps = 0 -> Normal (default).
Ps = 1 -> Bold.
Ps = 2 -> Faint, decreased intensity (ISO 6429).
Ps = 3 -> Italicized (ISO 6429).
Ps = 4 -> Underlined
Ps = 5 -> Blink (appears as Bold in X11R6 xterm).
Ps = 7 -> Inverse.
Ps = 8 -> Invisible, i.e., hidden (VT300).
Ps = 9 -> Crossed-out characters (ISO 6429).
Ps = 2 1 -> Doubly-underlined (ISO 6429).
Ps = 2 2 -> Normal (neither bold nor faint).
Ps = 2 3 -> Not italicized (ISO 6429).
Ps = 2 4 -> Not underlined.
Ps = 2 5 -> Steady (not blinking).
Ps = 2 7 -> Positive (not inverse).
Ps = 2 8 -> Visible, i.e., not hidden (VT300).
Ps = 2 9 -> Not crossed-out (ISO 6429).
Ps = 3 0 -> Set foreground color to Black.
Ps = 3 1 -> Set foreground color to Red.
Ps = 3 2 -> Set foreground color to Green.
Ps = 3 3 -> Set foreground color to Yellow.
Ps = 3 4 -> Set foreground color to Blue.
Ps = 3 5 -> Set foreground color to Magenta.
Ps = 3 6 -> Set foreground color to Cyan.
Ps = 3 7 -> Set foreground color to White.
Ps = 3 9 -> Set foreground color to default (original).
Ps = 4 0 -> Set background color to Black.
Ps = 4 1 -> Set background color to Red.
Ps = 4 2 -> Set background color to Green.
Ps = 4 3 -> Set background color to Yellow.
Ps = 4 4 -> Set background color to Blue.
Ps = 4 5 -> Set background color to Magenta.
Ps = 4 6 -> Set background color to Cyan.
Ps = 4 7 -> Set background color to White.
Ps = 4 9 -> Set background color to default (original).
(I have a feeling Pm was meant to be Ps to match the items.)
It makes sense that tmux would support these directly, as they are often used by applications, and users might be frustrated by lack of support.
It might also be worth noting that the Linux console uses a different escape sequence to set palette index 0 to red: \033]P0ff0000\033\\. In general it is OSC P n rr gg bb ST where n is the palette index (in hex) and rr gg bb is the color (also in hex).
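For illustration, a sketch of emitting that Linux-console palette sequence with printf; this only has an effect on the actual Linux console, so it's shown here through cat -v so the bytes are merely made visible:

```shell
# Build the Linux-console sequence OSC P n rrggbb ST, setting palette
# index 0 to red (n=0, rrggbb=ff0000). ESC renders as ^[ under cat -v.
printf '\033]P0ff0000\033\\' | cat -v
# ^[]P0ff0000^[\
```

On a virtual console, dropping the `| cat -v` part would actually change palette entry 0.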
| Why is there no need to escape a sequence to apply a style to the text in a terminal inside tmux? |
1,586,378,066,000 |
What does this command? I know that, the CSI n ; m H is for move the cursor to n row and m column, but what does command from title? ^[[H^[[2J ?
|
That's a visual representation (where ^[ represents the ESC character) of the sequence to clear the screen and bring the cursor to the top in xterm-like terminals at least:
$ TERM=xterm tput clear | cat -v
^[[H^[[2J
To find out about those escape sequences, look at the ctlseqs.txt document shipped with xterm. There, you'll find:
ESC [
Control Sequence Introducer (CSI is 0x9b).
and:
CSI Ps ; Ps H
Cursor Position [row;column] (default = [1,1]) (CUP).
and:
CSI Ps J Erase in Display (ED).
Ps = 0 -> Erase Below (default).
Ps = 1 -> Erase Above.
Ps = 2 -> Erase All.
Ps = 3 -> Erase Saved Lines (xterm).
(note that ^[[2J doesn't clear the saved lines or alternate screen).
tput clear (or clear) on xterm-like terminals does the same as printf '\e[H\e[2J'. For that it queries the terminfo database to know what the sequence of character is for the clear capability for the terminal whose name is stored in the $TERM environment variable. If you dump the entry for the xterm terminal in the terminfo database with infocmp xterm, you'll see in it:
$ infocmp -1 xterm | grep clear
clear=\E[H\E[2J,
Which is another way to find out about a given escape sequence:
$ infocmp -L -1 | grep J
clear_screen=\E[H\E[2J,
clr_eos=\E[J,
(here using the Long name for the capabilities). Then, you can do man 5 terminfo for a description of those capabilities.
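For instance, you can rebuild the sequence by hand and confirm the visual form from the question (^[ is how cat -v renders the ESC byte):

```shell
# Emit the "cursor home" + "erase all" bytes directly and make them visible:
printf '\033[H\033[2J' | cat -v
# ^[[H^[[2J
```

Sent to an xterm-like terminal without cat -v, those bytes clear the screen just like tput clear does.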
| Terminal - ^[[H^[[2J - caret square bracket H caret square bracket 2 J |
1,586,378,066,000 |
su "$uname" -c "cat > ~/scripts/syncdownloads.sh <<ENDMASTER
"#!/bin/bash"
login="$flogin"
pass="$fpass"
host="$fhost"
remote_dir="$fremote_dir"
local_dir="$flocal_dir"
base_name="$(basename "$0")"
lock_file="/tmp/$base_name.lock"
trap "rm -f $lock_file exit 0" SIGINT SIGTERM
if [ -e "$lock_file" ]
then
echo "$base_name is running already."
exit
else
touch "$lock_file"
lftp -p 22 -u "$login","$pass" sftp://"$host" << EOF
set sftp:auto-confirm yes
set mirror:use-pget-n 5
mirror -c -P5 "$remote_dir" "$local_dir"
quit
EOF
rm -f "$lock_file"
trap - SIGINT SIGTERM
exit
fi
ENDMASTER"
The above is my part of my code, basically I want a way to insert the values at the top of the file that the user has entered (from login to local_dir) but then leave the rest of the lines till the ENDMASTER statement exactly as they appear now.
I've tried escaping every line, individually and all together, with single and double quotes, but when I run the bash script it always expands the commands from base_name onwards regardless, and when I open the generated file the rest is blank after trap.
I am new to bash so any help would be greatly appreciated as I cannot find anything relevant on-line.
|
How about turning parameter substitution in the here-document off? (See Example 19-7 on www.tldp.org for more details.)
Maybe your script will look better like this (although it is still pretty dense):
su "$uname" -c "cat > ~/scripts/syncdownloads.sh << 'ENDMASTER'
###### Using `$()` to create a sub shell so that we don't have to escape
###### special characters.
$(
###### The parameter substitution is on here
cat <<INNERMASTER
#!/bin/bash
login="$flogin"
pass="$fpass"
host="$fhost"
remote_dir="$fremote_dir"
local_dir="$flocal_dir"
INNERMASTER
###### No parameter substitution
cat <<'INNERMASTER'
base_name="$(basename "$0")"
lock_file="/tmp/$base_name.lock"
trap "rm -f $lock_file exit 0" SIGINT SIGTERM
if [ -e "$lock_file" ]
then
echo "$base_name is running already."
exit
else
touch "$lock_file"
lftp -p 22 -u "$login","$pass" sftp://"$host" << EOF
set sftp:auto-confirm yes
set mirror:use-pget-n 5
mirror -c -P5 "$remote_dir" "$local_dir"
quit
EOF
rm -f "$lock_file"
trap - SIGINT SIGTERM
exit
fi
INNERMASTER
)
ENDMASTER"
This way you don't have to escape anything.
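The behaviour being toggled here can be sketched in isolation; quoting the here-document delimiter is what turns expansion off:

```shell
name=world

# Unquoted delimiter: $name is expanded inside the here-document.
expanded=$(cat <<EOF
hello $name
EOF
)

# Quoted delimiter: the body is taken literally, no expansion at all.
literal=$(cat <<'EOF'
hello $name
EOF
)

printf '%s\n%s\n' "$expanded" "$literal"
# hello world
# hello $name
```

That is why the inner script above mixes one heredoc of each kind: expansion exactly where it's wanted, literal text everywhere else.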
| Bash script to create script with values embedded |
1,586,378,066,000 |
In Bash 4.2.25, the set and env output is not escaped, so shell escapes and any non-printable characters won't be copy-pasteable. Take for example this shell session:
$ export foo=$'a\nbar=\baz'
$ env | grep -A 1 foo
foo=a
baraz
Ditto for for example colors - They are printed literally, and can mess up the terminal. How do you print all variables with their values in a way that the output can be copied and pasted to give the same environment?*
* Obviously with the standard caveats about readonly variables, special variables like $_ and the like.
|
You could do:
printvars() (
eval 'declare() { printf declare; printf " %q" "$@"; echo; }'"
$(declare -p)"
)
printvars
That could be easily extended to omit read-only variables like:
printvars() (
eval 'declare() {
[[ $1 = *r* ]] && return
printf declare; printf " %q" "$@"; echo
}'"
$(declare -p)"
)
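The key ingredient in that trick is bash's %q printf format specifier, which re-quotes a value so it can be pasted back as shell input. A minimal bash-specific sketch:

```shell
# A value containing a newline, which plain env/set output would mangle:
foo='a
bar=baz'

# %q emits it in a copy-pasteable, shell-quoted form:
printf 'foo=%q\n' "$foo"
# foo=$'a\nbar=baz'
```

Pasting that foo=$'a\nbar=baz' line back into bash recreates the variable exactly.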
| How to print shell variables and values to be able to copy/paste them? |
1,586,378,066,000 |
I have something like this:
A=$(curl https://mysite.com)
and the curl request returns the string \"Hello World\".
When I now want to print A to the console using one of:
echo "$A"
printf '%s' "$A"
the \ characters disappear and it just says "Hello World". What can I do to get \"Hello World\" on the console?
|
If printf '%s' "$A" doesn't show any backslash then they're not there. You can check with curl https://mysite.com alone.
Maybe you were confused by the output of:
bash-5.0$ typeset -p A
declare -- A="\"Hello World\""
That's bash outputting shell code that could be used to define that $A variable. In the syntax of the shell language, the \ is there to escape the following " to tell the shell that that " is part of the data and not the closing quote.
It could also have output:
declare -- A='"Hello World"'
Which does just the same (and is safer). Or A=\"Hello\ World\" or $'"Hello World"', etc., which are other forms of quoting in the shell language syntax.
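A minimal bash-specific reproduction of where such backslashes can appear without being in the data:

```shell
A='"Hello World"'          # double quotes in the value, no backslashes

declare -p A               # prints shell *source code* for the assignment:
# declare -- A="\"Hello World\""

printf '%s\n' "$A"         # prints the actual contents:
# "Hello World"
```

The backslashes in the declare -p output belong to the quoting syntax, not to the stored value.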
| Print Variable containing backslashes |
1,586,378,066,000 |
If I have the following string and copy it with CTRL+SHIFT+C
https://test.invalid/?foo=bar()&baz=$quz{}
And I paste that into the terminal I see the following,
https://test.invalid/\?foo\=bar\(\)\&baz\=$quz\{\}
However, I don't want the ?, (, ), {, }, and = escaped, as I'm using the paste string to fill out curl,
curl "CTRL+SHIFT+C"
How can I disable this character escaping behavior?
|
The problem isn't kitty. If you run /bin/sh and paste you can test that. The problem, in my case, was actually zsh. And specifically oh-my-zsh which has this in the ~/.zshrc conf,
# Uncomment the following line if pasting URLs and other text is messed up.
# DISABLE_MAGIC_FUNCTIONS=true
Uncommenting that fixed my problem.
https://github.com/ohmyzsh/ohmyzsh/issues/5499 original issue, but still broken for me with massively newer stuff.
| How can I disable kitty's paste from escaping? |
1,586,378,066,000 |
I don't quite understand why I am getting some odd behavior trying to generate a screenshot file name when using grim/slurp.
If I do this:
grim -g "$(slurp)" "$HOME/screenshot-$(date).png"
I get:
screenshot-Sat 27 Jun 2020 06:02:36 PM EDT
Which is ok except that the time does not sort in my file folder. So, I tried this:
grim -g "$(slurp)" "$HOME/screenshot-$(date +\"%y%m%d_%T\").png"
But in that case I get this:
screenshot-"200627_19:35:39.png"
Can someone explain why quotation marks are appearing around the whole date string (none are in the expression), and how to elminate them?
|
$() creates a new quoting context, so you don't need to escape quotes inside it. That's one of the reasons why $() is preferred to ``, in which mixing nesting and quotes would lead to a nightmare.
For comparison:
$ echo "screenshot-$(date +\"%y%m%d_%T\").png"
screenshot-"200701_14:56:19".png
$ echo "screenshot-`date +\"%y%m%d_%T\"`.png"
screenshot-200701_14:56:27.png
You can leave the backslashes out and just use "screenshot-$(date +"%y%m%d_%T").png". Or, since % and the others aren't special, just leave the inner quotes out altogether:
$ echo "screenshot-$(date +%y%m%d_%T).png"
screenshot-200701_14:57:23.png
| Unintended quotation marks appearing in grim file name |
1,586,378,066,000 |
Is it possible to escape a space for input on a command? I want to do this:
!git\ clone
I love using ! to run past commands but not being able to use the space often limits the functionality.
|
You can use
!?git clone
From man bash:
!?string[?]
Refer to the most recent command preceding the current position in the history list containing string. The trailing ? may be omitted if string is followed immediately by a newline.
Also try Alt + . that repeats the last argument of the previous command (and pressing it again moves to the preceding command), and Alt + Ctrl + y which repeats the first argument of the previous command, or n-th if preceded by Alt + n.
| How do I escape spaces when using bash history interaction? |
1,586,378,066,000 |
I want to substitute this word:
${ARRAY1[@]} with $1 using the vim substitute command.
I did
:%s:\$\{ARRAY1[\@]\}:\$1:g
But it gives me an error about the number of repetitions. I also tried "" but that doesn't work.
How to do?
Thanks
|
:%s/\${ARRAY1\[@\]}/$1/
worked for me. Apparently, you must escape [ and ] but not { and }.
I always use / instead of : as seperation, but
%s:\${ARRAY1\[@\]}:$1:g
works as well.
| How to escape correctly this word on vim? |
1,586,378,066,000 |
Consider a double-quoted command substitution with backslash escapes inside it, like this:
echo "$(echo '\\')"
It prints out \\, whereas I would have expected it to print out only one backslash. My thinking (which is incorrect) was that it went through the whole string, replacing all backslash escapes, and then it would do the command substitution. Evidently, the order is opposite.
However, if we do
echo "$(echo '\')$"
it prints out \$. I would've expected that if it's doing the command substitutions first and then evaluating that string's backslash escapes, that the \ and the $ might combine to make a single $ in the end. But it doesn't.
Where do backslash escapes fit into the order of things?
(The context for this question is that I'm working on something that will properly escape regex characters in a string for insertion into a sed command.)
|
I think it's best understood via the output of echo $(echo '\\') (i.e. a variant without the outer quotes), which also results in \\. The point is that the output of a command substitution is never re-interpreted as shell syntax once $(...) has been expanded: the backslashes produced by the inner echo are plain data by the time the outer echo sees them. (The same applies to escape characters stored in variables; they are not re-interpreted when the variable is expanded.)
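The same point can be sketched with printf (avoiding echo, whose backslash handling varies between shells): whatever a command substitution outputs is plain data by the time the outer command sees it.

```shell
# The inner command outputs exactly one literal backslash:
bs=$(printf '%s' '\')

# Appending a $ does not create an "escaped dollar"; the substitution's
# output is never re-scanned for escapes or expansions:
printf '%s\n' "$bs"'$'
# \$
```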
| Command substitutions vs backslash escapes in a quoted string |
1,586,378,066,000 |
I have a ruby terminal script that I would like to print out hyperlinks. I achieve that like so:
puts("\e]8;;https://example.com\aThis is a link\e]8;;\a")
This works perfectly fine in a "normal" terminal (gnome-terminal btw) window.
But I need to run this script within GNU screen, where the escape-sequence simply has no effect. Other sequences (like colors for example) work fine, the hyperlink one (which according to the source might be a gnome-terminal-only thing) doesn't. (screen is running inside gnome-terminal)
How can I get screen to acknowledge my link sequence and display it properly?
|
You can pass through some text to the terminal screen itself runs in by putting it inside an ESC P, ESC \ pair (\033P%s\033\\ in printf format).
So you should bracket inside \eP..\e\\ pairs all the parts of the sequence, except for the text which will appear on the screen ("This is a link"):
printf '\eP\e]8;;https://example.com\a\e\\This is a link\eP\e]8;;\a\e\\\n'
printf '\eP\e]8;;%s\a\e\\%s\eP\e]8;;\a\e\\\n' https://example.com 'This is a link'
Or, from C:
puts("\eP\e]8;;https://example.com\a\e\\This is a link\eP\e]8;;\a\e\\");
printf("\eP\e]8;;%s\a\e\\%s\eP\e]8;;\a\e\\\n", url, title);
Putting the visible link text, too, inside \eP..\e\\ pairs may result in screen losing track of the cursor position.
This is documented in the GNU screen manual:
ESC P (A) Device Control String
Outputs a string directly to the host
terminal without interpretation
The "string" should be terminated with an ST ("string terminator") escape, i.e. \e\\, hence the \eP..\e\\ pairs.
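A hypothetical little helper wrapping this pattern (the function name is my own invention); it's shown through cat -v so the bytes are visible rather than acted on:

```shell
# Wrap a payload in a DCS (ESC P ... ESC \) pair so that GNU screen
# passes it through, uninterpreted, to the outer terminal.
screen_dcs() { printf '\033P%s\033\\' "$1"; }

screen_dcs 'payload' | cat -v
# ^[Ppayload^[\
```

In the hyperlink case, each non-visible part of the OSC 8 sequence would be passed through such a wrapper, while "This is a link" stays outside it.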
| Using terminal escape-sequences within GNU screen |
1,586,378,066,000 |
I have an interesting challenge for how to escape quotes in a bash script.
My bash script has a long curl call with a large -d json structure passed.
#!/bin/bash
Value4Variable=Value4
curl -s -X POST -H "Content-Type: application/json" -H "Accept: application/json" -d \
'{"Variable1":"Value1",
"Variable2":"Value2\'s", # escaping the quote doesn't work because of the previous single quote and backslash
"Variable3":"Value3",
"Variable4":"'"$Value4Variable"'",
"Variable5":"Value5"
}' \
https://www.hashemian.com/tools/form-post-tester.php
What's the best way to add a single quote into the json structure? I've tried various combinations but no success.
|
There are multiple ways to escape long strings with different quotes. The simplest is to end the single quote and escape the single quote:
'...'\''...'
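For instance, to get a literal single quote into the single-quoted JSON fragment:

```shell
# '...'\''...' : close the string, add an escaped quote, reopen the string.
printf '%s\n' '{"Variable2":"Value2'\''s"}'
# {"Variable2":"Value2's"}
```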
But there are some nicer things you can do. Heredocs are a good way to avoid the quoting issue:
curl -s -X POST https://www.hashemian.com/tools/form-post-tester.php \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-d @- <<EOF
{
"Variable1":"Value1",
"Variable2":"Value2's",
"Variable3":"Value3",
"Variable4":"$Value4Variable",
"Variable5":"Value5"
}
EOF
@- tells curl to read from stdin, and <<EOF starts the heredoc which will be fed into curl's stdin. The nice thing about the heredoc is that you do not need to escape any quotes and can use bash variables inside it, sidestepping the need to worry about how to escape the quotes and making the whole thing much more readable.
| bash escaping quotes |
1,586,378,066,000 |
#!/bin/bash
wineuser=tom
su $wineuser -c "sed -i '$ialias ptgui "wine ~/.wine/drive_c/Program\ Files/PTGui/PTGui.exe\"' /usr/people/$wineuser/config/cshrc.csh"
The acutal line inserted to tom's cshrc.csh should look like
alias ptgui 'wine ~/.wine/drive_c/Program\ Files/PTGui/PTGui.exe'
In the sed command '$ialias...', the $i part means "insert before the last line of the file" ($ addresses the last line, i is sed's insert command); it is not a shell variable.
Tips or manuals explaining multi-level escapes also welcome.
|
The Theory
The rules are :
inside a '-delimited string, nothing gets interpreted and nothing but a ' has special meaning. This means that only a ' needs escaping, but it also means that, in order to escape it, you need the '\'' construct. (The first ' ends the string, the following \' adds a literal ' (the escape prevents the start of a new string) and the last ' starts a new string. Since those three strings are not separated by a delimiter (usually a space), they will be seen as a single continuous string.)
Inside a " string, variable expansion occurs and escapes are interpreted. Therefore, if you want a literal \, you will need to escape it. Also, if you want a literal $, you also need to escape it (else it will be interpreted as variable expansion).
You can escape your command differently by combining those two quoting style as you want. One possible solution for your specific question is listed below.
Also, keep in mind that sed also interprets \ as an escape character, meaning you also need to escape that character when using it literally inside a sed script.
Example
Personally, I find it easier to start with the innermost code unit and work my way outwards, quoting each time in the inner code unit the special characters for the outer quoting method used.
For example, if you wanted to replace the string It's a test by It's my test in the file /home/user/I have spaces in my name.txt and put the result in the file /home/user/NoSpaces.txt, you could use " quoting around it to make it easier, like so :
sed "s/It's a test/It's my test/" "/home/user/I have spaces in my name.txt" >/home/user/NoSpaces.txt
Now, if you wanted to store this inside a variable, you would have to add another quoting level.
You could do it using ' (which mean only having to escape other ' characters in your current string):
myvar='sed "s/It'\''s a test/It'\''s my test/" "/home/user/I have spaces in my name.txt" >/home/user/NoSpaces.txt'
You could also do it using by using " style quotes:
myvar="sed \"s/It's a test/It's my test/\" \"/home/user/I have spaces in my name.txt\" >/home/user/NoSpaces.txt"
The rule of thumb is to always go with the solution that looks the cleanest to you.
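You can check that the two quoting styles above build the identical string:

```shell
# The same command string written once with '-quoting, once with "-quoting:
a='sed "s/It'\''s a test/It'\''s my test/"'
b="sed \"s/It's a test/It's my test/\""

[ "$a" = "$b" ] && echo identical
# identical
```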
A solution
That being said, for your specific problem, you could try this :
cmd="sed -i '\$ialias ptgui '\\''wine ~/.wine/drive_c/Program\\\\ Files/PTGui/PTGui.exe'\\' /usr/people/$wineuser/config/cshrc.csh"
su "$wineuser" -c "$cmd"
Note that I would suggest using sed's append command instead of insert, but that is outside the scope of the question.
| How do I properly escape this long su + sed command? |
1,586,378,066,000 |
I am trying to figure out why this happens in bash.
Ok this is easy enough.
$ echo -e 'a\txy\bc'
a xc
Ok this is easy enough.
$ echo -e 'a\txy\b\b\b\b\b\b\b\b\bc'
ac xy
Ok this is easy enough.
$ echo -e 'a\txy\b\b\b\b\b\b\b\b\b\bc'
c xy
Now, why has c not dropped off the left end?
$ echo -e 'a\txy\b\b\b\b\b\b\b\b\b\b\b\bc'
c xy
I expected the output to be:
<a tab>xy
But clearly that isn't the case. Anyone got a pointer as to what might be happening? Thanks.
|
This is nothing to do with the echo command. You'd see this same behaviour if you wrote the output using cat, printf, or some other program. This is an aspect of your terminal.
And terminals can differ amongst themselves in this regard. The terminfo database will or won't have, for your terminal, an auto_left_margin capability, known as bw in termcap. That tells programs whether backspace can be used to wrap around the left margin, as it can on some terminals. If you'd used a terminal with automatic left margins, the c would have appeared on the previous line.
And if you'd reprogrammed your tabstops, you'd have seen yet further different behaviour.
Interesting things can happen when one combines TAB and BS, by the way. The 25-year-old warning in the termcap manual about backspacing over the margin when there's no automatic left margin capability reported or when the cursor is on the first row, reprinted everywhere from the System V Interface Definition to the FreeBSD manual, may seem quaint and overcautious at first blush; but the world has known terminal control code processing that did not get this quite right.
Further reading
Zeyd M. Ben-Halim, Eric S. Raymond, and Thomas E. Dickey. terminfo. FreeBSD Manual pages.
https://superuser.com/a/711019/38062
Jonathan de Boyne Pollard (2001). The CSRSS Backspace Bug in Windows NT 4/NT 2000/NT XP.. Frequently Given Answers.
| echo with backspace |
1,586,378,066,000 |
If I do:
ssh-keygen -N password123\$ -f bobskeys
Is \ escaping the $ character or becoming part of the password?
Or rather, will bash be doing any escaping before ssh-keygen gets the password value?
Do I need to escape the $ character?
I'm running Centos 5.5 x64 and bash 3.2.25
|
When you type:
ssh-keygen -N password123\$ -f bobskeys
the shell will execute ssh-keygen with arguments -N password123$ -f and bobskeys.
If you want to pass password123\$ as an argument you need to single-quote it:
ssh-keygen -N 'password123\$' -f bobskeys
or backslash the backslash:
ssh-keygen -N password123\\$ -f bobskeys
Otherwise the ssh-keygen process will not see the backslash.
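You can inspect what actually reaches the command by printing the argument with printf:

```shell
printf '%s\n' password123\$     # backslash consumed by the shell
# password123$
printf '%s\n' 'password123\$'   # single quotes preserve the backslash
# password123\$
printf '%s\n' password123\\$    # an escaped backslash also survives
# password123\$
```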
| Is the `\` character escaping or becoming part of my ssh key password |
1,586,378,066,000 |
According to the manpage for lesskey,
the following keys are bound to left-scroll and right-scroll:
\e[ left-scroll
\e] right-scroll
\e( left-scroll
\e) right-scroll
\kl left-scroll
\kr right-scroll
The arrow keys and Esc-( / Esc -) work fine,
as does Esc-] for scroll-right,
but Esc-[ does not work for scroll-left.
Instead, it just shows this in the command line prompt:
ESC[
Adding this line to ~/.lesskey functions as a workaround:
\e[ left-scroll
But why doesn't it work by default?
I have observed this behavior in XFCE4 on Debian stretch as well as Kubuntu 18.04,
as well as a variety of terminal emulators.
$ less --version
less 487 (GNU regular expressions)
Copyright (C) 1984-2016 Mark Nudelman
less comes with NO WARRANTY, to the extent permitted by law.
For information about the terms of redistribution,
see the file named README in the less distribution.
Homepage: http://www.greenwoodsoftware.com/less
I considered filing a bug report,
but the list of known bugs and feature requests
suggests that it is a known behavior:
Enhancement requests
[ . . . ]
Ref number: 175
Implemented in version: 322
Add alternate command for ESC-[.
Is this actually a limitation in less,
or is it a quirk in how terminals handle the Escape key?
|
It's because ESC [ also happens to be the start of the sequence of characters sent by several function keys on some terminals, such as Home or End or PageUp... Those are also bound to some action in less.
So when less receives a ESC [, it's waiting for more (without timing out like some other applications do).
On my terminal ESC[6~ is PageDown. If I press Alt+[, I see ESC [ like you do. But then I can enter 6 and ~ and that does scroll down.
If you run TERM=vt100 less instead of less, where the vt100 terminfo page doesn't have any entry for Home or End... you see ESC [ works.
If however, you add \e[ left-scroll to your ~/.lesskey (and run lesskey to compile it into ~/.less), you'll notice the ESC [ works to scroll left, but all the Home, PageUP... stop working, they all scroll left and the extra characters they send cause some beeping.
Looking at the code, it's all about the order in which the tables of commands are processed. The ~/.less command table is added last, but inserted at the head, so its entries are processed before the builtin ones.
Upon a ESC [ input, if the entry for \e[ left-scroll is found first, it is processed, if the one for \kD forw-screen (which on many terminals translates to \e[6~ forw-screen) is found first, then it's only a prefix match, and less waits for more input.
So, it is kind of a bug. The fix would be to add a timeout. Adding a short timeout would however mean giving up on multi-key commands; a long timeout to give enough time for a user to enter a multi-key command would probably make ESC [ too unresponsive.
Now, terminals that have PageUp, etc, would also generally have Left and Right key which are also bound to left-scroll, right-scroll, so you may want to use that instead for scrolling.
| Why doesn't Esc-[ scroll left in `less`? |
1,586,378,066,000 |
I'm pretty confused about the circumstances under which I need to escape metacharacters in pathnames. Consider the following example:
I'm in the following working directory:
$ echo $PWD
/home/helpermethod/foo[b]ar/foo/bar
Now I want to strip off everything after foo[b]ar, so that I'm left with /home/helpermethod/foo[b]ar. I'm using parameter substitution to perform this task:
$ path=$PWD
$ basename=foo[b]ar
$ stripped_path=${path%$basename/*}/$basename
This doesn't seem to work because the basename string needs to be properly escaped:
$ basename=foo\[b\]ar
$ stripped_path=${path%$basename/*}/$basename
Okay, now I have the stripped_path I was looking for
$ echo "$stripped_path"
/home/helpermethod/foo[b]ar
But if I now test if this is a valid directory
$ [[ -d $stripped_path ]]
the test command always returns false (i.e. a return value != 0). What's the problem here? Do I need to unescape the stripped_path?
|
stripped_path=${path%"$basename"/*}/$basename
Use double quotes to do literal string matching as opposed to pattern matching. One of the cases where you need to quote variables. Another case is in your:
echo $PWD
above which should have been:
echo "$PWD"
Or even better:
printf '%s\n' "$PWD"
Or
pwd
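A minimal sketch of the quoted-vs-unquoted difference, using a hypothetical path:

```shell
path='/home/user/foo[b]ar/sub/dir'
base='foo[b]ar'

# Quoted: matched literally, so the brackets are just characters.
printf '%s\n' "${path%%"$base"*}"
# /home/user/

# Unquoted: treated as a glob pattern, where [b] matches the single
# letter b, so it would only ever match a literal "foobar" -- no match here.
printf '%s\n' "${path%%$base*}"
# /home/user/foo[b]ar/sub/dir
```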
| Confused about when to escape metacharacters in pathnames |
1,586,378,066,000 |
I captured the output of a script that uses tput to draw certain things on screen. When I perform cat myoutput then everything is well seen (looks like terminal reinterprets it from beginning), but when I edit or pipe that output I see plenty of ansi sequences and all the stuff previous to destructive printing like tput clear and the like.
How can I postprocess it so that I only get the final "render"?
Even better: the origin of this is that I am currently teeing my script so it prints everything to a file as well as to the terminal
with exec > >(tee /dev/tty)
is there a way to tell the stdout channel to "render" everything before saving?
|
What you want is a program that understands these terminal control sequences, and is able to render the final view. Well, such a program is called a terminal emulator. Some of them are graphical – like the program you launch to use your shell, e.g., gnome-terminal or alacritty, others are primarily headless.
The older screen or the more modern tmux are the relevant ones here.
write an "outer" script:
create a named pipe
Start your "inner" script (so the one that outputs stuff) in tmux, in the background
in your outer script, read from the fifo (this blocks because nothing has been written),
once that read finishes, instruct tmux to output a screenshot
in your inner script, write something to the named pipe to signal you're at a state to be taken a screenshot of
Putting it together, something like
#!/usr/bin/zsh
# outer script
# SPDX-License-Identifier: WTFPL
# Make a temporary directory
tmpdir="$(mktemp -d)"
# Prepare the FIFO
fifo="${tmpdir}/fifo"
mkfifo "${fifo}"
# Start the inner script in tmux
tmux new-session -d -s "${tmpdir}" -e "FIFO=${fifo}" ./inner-script.sh …
#^ ^ ^ ^------+-----^ ^------+--------^ ^ ^
#| | | | | | |
#\------run tmux, the terminal emulator
# | | | | | |
# \---in tmux, run the "new-session" command to, well, get a new session
# | | | | |
# \---detach that session, i.e. *don't* connect it to the current terminal
# | | | |
# \--specify the session name. Conveniently, we abuse the name
# of our temporary directory, as it's inherently unique
# | | |
# \-- for the started command, set the environment
# variable FIFO to the name of our FIFO
# | |
# \-- launch your script
# …with its arguments--/
# Wait for something to be written to our FIFO
cat "${fifo}" > /dev/null
# instruct tmux to take a "pane shot"
tmux capture-pane -t "${tmpdir}" -p > "/path/to/capture"
# ^------+-----^ ^ ^---------+--------^
# | | |
# \-------------------------------- target (session name as above)
# | |
# \----------------------- print to stdout
# |
# \----------- redirect stdout to file
# Finally, clean up session and temporary directory
tmux kill-session -t "${tmpdir}"
rm -rf "${tmpdir}"
You only need to add the writing of something to the fifo to your inner-script.sh, e.g., echo done > "${FIFO}"; sleep 100.
If you already have a "recorded" output, your inner-script.sh might simply be cat recording.txt; echo done > "${FIFO}"; sleep 100
| How to "render" ouput from a command playing with tput so only the final terminal-postprocessed result is kept? [duplicate] |
1,586,378,066,000 |
There are no issues to solve (that I know of). I am trying to better understand ssh.
On my system the ssh escape character is tilde (~). If I ssh somewhere I can use it as expected:
me@local$ ssh remote
me@remote$ # here I press tilde, control+z
[1]+ Stopped ssh remote
me@local$
All is well. However, if I run vim on the remote, suddenly vim sees the escape character. It seems like the escape character is no longer sent to ssh but instead through ssh, as if it was not an escape character.
me@local$ ssh remote
me@remote$ vim
# vim opens, here I press tilde, control+z
[1]+ Stopped vim
me@remote$
My question is, how is vim affecting the behavior of ssh's escape character?
|
From comments:
How does it work if you press the Return/Enter key before you press the tilde and Control+Z keys? - Sotto Voce
if I do this, it does put ssh in the background. Why does Return/Enter change the behavior? And where is this documented? - kill -9
Answering in reverse because the 2nd question has a more concise answer than the 1st:
Where is this (pressing Return before ~) documented?
The ESCAPE CHARACTERS section of the Openssh man page describes the command escape sequence to make the ssh client program take actions rather than forward keystrokes to the remote server. The second paragraph has this sentence:
The escape character must always follow a newline to be interpreted as special.
I wouldn't blame you if you find this to be a little vague and easy to overlook. It used to be worse! I first encountered the ~. exit command in the 1990s with the SunOS 4.x tip command, whose man page said:
A tilde (~) appearing as the first character of a line is an escape signal which directs tip to perform some special action.
This description left the reader to infer that, for the ~ key to be the first character of a line, the preceding keystroke would have to be Return/Enter. Inference is not a good thing in documentation. Most people missed the point, and were frustrated when tip seemed to ignore their ~. (quit) commands. The ssh man page has improved the description to be a little more explicit.
The reason these ~ escape sequences (which seem to date back to rlogin in 1981's 4.1BSD Unix) have the requirement to be preceded by Return/Enter is to prevent the appearance of ~. (perhaps in a file upload to the remote server) from unintentionally disconnecting the session.
Why do my remote sessions running vim seem to need the extra Return keypress?
(my paraphrase of your question)
I can't tell you precisely why your remote session invoking vim needs an "extra" Return keypress compared to running some other command. But I can offer a couple of educated guesses:
When you've run vim on the remote server and pressed Return, you've typed some other keystrokes before you typed ~Ctrl-Z. Perhaps you pressed the arrow or the h, j, k, l cursor movement keys. Or Ctrl-L to refresh the display.
If you haven't pressed any keys between invoking vim and typing ~Ctrl-Z, then perhaps the vim program is sending a character sequence to your terminal program asking for status and your terminal program is sending back characters as the answer. This kind of query and answer are typically not displayed by terminal programs. The answer sequence comes from your terminal program just like your keystrokes, and your ssh client sees other characters preceded the ~ instead of Return.
The second guess is less likely than the first, but I have encountered it on some occasions.
Ultimately...
The most reliable way to send an escape sequence/command to your ssh client program is to think of the escape character as actually being two characters: Return followed by tilde. For most commands, this makes the sequence three keystrokes instead of two.
| Vim intercepts ssh escape |
1,586,378,066,000 |
That's a pretty weird title but I'm having trouble articulating this question:
When I run kitty --version in my terminal it prints its version out to stdout, however the text is styled and colored:
In order to achieve this the process had to output ANSI escape codes to stdout, however I don't see them when I hexdump the output:
$ kitty --version | xxd -g 1 -c 10 -u
00000000: 6B 69 74 74 79 20 30 2E 31 39 kitty 0.19
0000000a: 2E 31 20 63 72 65 61 74 65 64 .1 created
00000014: 20 62 79 20 4B 6F 76 69 64 20 by Kovid
0000001e: 47 6F 79 61 6C 0A Goyal.
I'd expect to see at least a few escape characters and other ANSI sequences here but I don't. This leads me to believe that kitty is able to "predict" whether its output will appear in a terminal that can process the escape codes.
How is it able to do that? Or is it a feature of the terminal emulator perhaps?
|
Read man isatty, or https://linux.die.net/man/3/isatty
isatty - test whether a file descriptor refers to a terminal
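The same check is available at the shell level: test's -t operator succeeds only when the given file descriptor is a terminal. A small sketch:

```shell
# [ -t 1 ] succeeds only when file descriptor 1 (stdout) is a terminal.
if [ -t 1 ]; then
    echo "stdout is a terminal: safe to emit colors"
else
    echo "stdout is redirected: emit plain text"
fi

# Inside a command substitution, stdout is a pipe, so the check always fails:
mode=$(if [ -t 1 ]; then echo tty; else echo pipe; fi)
echo "$mode"   # prints: pipe
```

This is exactly the kind of check a program like kitty performs (via isatty) before deciding whether to emit escape codes.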
| How can processes eliminate escape codes when its output is piped? [duplicate] |
1,586,378,066,000 |
This is the $PS1 of my Bash shell on a freshly installed Ubuntu 18.04:
\[\e]0;\u@\h: \w\a\]${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\] \$
I can well understand every other part of the prompt:
${debian_chroot:+($debian_chroot)}: If it's set, show it, but add parentheses around it; if unset, show nothing
\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\] \$: Standard user@host:cwd $ prompt with colors enabled using CSI escape sequences
I'm confused about the first part:
\[\e]0;\u@\h: \w\a\]
AFAIK, \[ and \] expand to \001 and \002 to tell GNU readline that the characters between them should not be counted for "length of prompt". The question would boil down to the meaning of this:
\e]0;\u@\h: \w\a
It's <ESC>]0;user@host: cwd<ALARM>. What does that do? (Note: CSI is <ESC>[ not <ESC>], or I would have understood)
|
This is an XTerm escape sequence, which sets the icon name and window title. It is supported by most graphical terminal emulators (and some non-graphical terminal emulators too).
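For reference, the sequence can be emitted by hand; this sketch sets the title to a made-up string:

```shell
# OSC 0 ; <text> BEL sets both the icon name and the window title.
# (\033 = ESC, \007 = BEL; printf handles the octal escapes portably.)
printf '\033]0;%s\007' "user@host: /some/dir"
```

That is the same sequence the prompt builds dynamically from \u, \h and \w on each prompt redraw.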
| What does the leading part of Bash's default PS1 in Ubuntu mean? |
1,586,378,066,000 |
I'm trying to get a DSR (Device Status Report) on BusyBox (to get the terminal size), but echo '\x1b[6n' does not report anything; it just outputs \x1b[6n.
|
I see two issues:
To make echo handle escape sequences you need to add the option -e. This isn't a speciality of the BusyBox shell, it applies to bash and other implementations, too.
Deducing from this SO question the ANSI code should be \x1b[6n.
Additionally it may be a good idea to suppress the finishing newline. Putting it all together I think the call should look as follows:
echo -en "\x1b[6n"
BusyBox limitations
Older versions of BusyBox don't support \e and \x escape sequences in echo; in this case \033 must be used. This seems to be fixed in 1.23.1.
Make sure to set the following in the configuration:
Busybox Settings
Busybox Library Tuning
Query cursor position from terminal → enabled
SSH/PuTTY limitations
SSH swallows the returned position. To see the answer in your remote terminal you can use
echo -en "\e[6n"; cat
and press Ctrl + C after that.
| How to get Device Status Report on Busybox?; |
1,586,378,066,000 |
I have written the following meta-project file for qmake, designed to build all .pro files in the subdirectories below it (for historical reasons, and because of other toolchains, the file names do not match the folder names). It essentially boils down to something like this (leaving aside project-specific stuff):
THIS_FILE=make-all.pro
TEMPLATE=subdirs
FIND= "find -name \'*.pro\' -printf \'%p\\n\'"
AWK= "awk \'{$1=substr($1,3); print}\'"
RMTHIS= "awk \'!/$$THIS_FILE/\'"
SUBDIRS= $$system($$FIND | $$AWK | $$RMTHIS )
I need to get a list of folders that contain .pro files within a Bash script, so I decided to copy the method
#!/bin/bash
FIND="find -name '*.pro' -printf '%h\\n'"
AWK="awk '{\$1=substr(\$1,3);printf}'"
SUBDIRS=$($FIND | $AWK)
Apparently this doesn't work, awk was spewing error invalid char '' in the expression. Trying to execute same lines in Bash directly had shown that awk actually works only if double quotes are used
find -name '*.pro' | awk "{\$1=substr(\$1,3);printf}"
Replacing the line in question with
AWK='awk "{$1=substr($1,3);printf}" '
gave no working result, the output of script is empty, unlike the output of manually entered command. Apparently
find -name '*.pro'
in the script finds only files in the current folder, while its counterpart on the bash command line finds them in subfolders. What is wrong, and why does qmake work differently as well?
|
I'm not familiar with qmake syntax, but from your sample it uses quotes and variables in very different ways than shells. So you can't just use the same code.
http://unix.stackexchange.com/questions/131766/why-does-my-shell-script-choke-on-whitespace-or-other-special-characters covers what you need to know, so here I'll just summarize what's relevant for your question.
In a nutshell, you cannot simply stuff a shell command into a string. Shells such as bash do not parse strings recursively. They parse the source code and build commands as lists of strings: a simple command consists of a command name (path to an executable, function name, etc.) and its arguments.
When you use an assignment like AWK="awk '{\$1=substr(\$1,3);printf}'", this sets the variable AWK to a string; when you use the variable as $AWK outside double quotes, this turns the value of the variable into a list of strings by parsing it at whitespace but it does not parse it as shell code, so characters like ' end up being literally in the argument. This is rarely desirable, which is why the general advice on using variables in the shell is to put double quotes around variable expansions unless you know that you need to leave them off and you understand what this entails. (Note that my answer here does not tell the whole story.)
In bash, you can stuff a simple command into an array.
#!/bin/bash
FIND=(find -name '*.pro' -printf '%h\\n')
AWK=(awk '{$1=substr($1,3);print}')
SUBDIRS=$("${FIND[@]}" | "${AWK[@]}")
But usually the best way to store a command is to define a function. This is not limited to a simple command: this way you can have redirections, variable assignments, conditionals, etc.
#!/bin/bash
find_pro_files () {
find -name '*.pro' -printf '%h\\n'
}
truncate_names () {
awk '{$1=substr($1,3);print}'
}
SUBDIRS=$(find_pro_files | truncate_names)
I'm not sure what you're trying to do with this script (especially given that you change the find and awk code between code snippets), but there's probably a better way to do it. You can loop over *.pro files in subdirectories of the current directory with
for pro in */*.pro; do
subdir="${pro%/*}"
basename="${pro##*/}"
…
done
If you want to traverse subdirectories recursively, in bash, you can use **/*.pro instead of */*.pro, but beware that this also recurses into symbolic links to directories.
| Escaping double quotes for variables in bash and qmake |
1,586,378,066,000 |
Has anyone tried something like
network \
--activate \
--onboot=yes \
--bootproto=static \
--etc. \
--etc. \
--etc....
In order to make a kickstart file more readable? Do backslash escapes work?
I'm looking at CentOS 7, so latest and greatest, more or less.
|
Nope, the only places in a kickstart file that you can use line continuations are the %pre and %post scripts. That is because whatever you place between %pre (or %post) and its %end is just given as-is to the interpreter (possibly given by --interpreter).
Every other line in the kickstart file (apart from %packages and %addon) is executed line-by-line.
Notes:
I never used any addons, not sure if you can actually place something meaningful between %addon and its %end and whether that can have line continuations.
You can use \ and then preprocess your file, for example with sed -z 's/\\\s*\n//g'* (the g is needed because with -z the whole file is a single record, so without it only the first continuation would be joined)
* (-z is specific to GNU sed)
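A quick sketch of that preprocessing step on a made-up fragment (GNU sed, as noted):

```shell
# Create a small fragment using backslash line continuations
printf 'network \\\n--activate \\\n--onboot=yes\n' > ks-fragment.cfg

# -z reads the whole file as one record, so the substitution can span
# newlines; each backslash-newline continuation is joined into one line.
sed -z 's/\\\s*\n//g' ks-fragment.cfg
# prints: network --activate --onboot=yes
```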
| Multiline entries in a kickstart file using backslash escapes |
1,586,378,066,000 |
One of my Emacs keybindings is C-', which works well in GUI. In terminal however, it is not being recognised. I understand that I need to figure out the actual characters sent to the terminal by C-' and map it in the emacs config.
Following the advice of Where do I find a list of terminal key codes to remap shortcuts in bash?, sed -n l is returning an empty line to me, even without the ending $. Does Terminal not recognise the C-' sequence at all?
|
Terminals transmit bytes, not keys. Keychords like Ctrl+' have to be encoded as sequences of bytes. Apart from printable characters with no modifier or with just Shift, most keychords have no corresponding characters, and are instead transmitted as escape sequences, beginning with the escape character (the character with the byte value 27, which you can write as \e in Emacs strings). But many keychords don't have a traditional standard escape sequence, and many terminals either don't transmit these keychords or strip information about modifiers (transmitting Ctrl+' as just the ' character).
Some terminals allow you to configure escape sequences for each keychord. In Terminal.app, you can do this from the keyboard preferences.
For Ctrl+', pick either \033[39;5~ or \033[27;5;39~: these are two emerging standards, the libtermkey scheme and the xterm scheme. See Problems with keybindings when using terminal for more information.
Emacs translates escape sequences into its internal key representation through input-decode-map or local-function-key-map (or function-key-map before Emacs 23). Put either of these in your init file:
(define-key input-decode-map "\033[39;5~" [(control ?\')])
(define-key input-decode-map "\033[27;5;39~" [(control ?\')])
| Find OS X terminal key combination/escape sequence for Ctrl-' |
1,586,378,066,000 |
I have a huge uncompressed tar file (1 TB) and I want to check it, so I am trying to extract it to see if everything goes well. Since this is going to take a long time, I'd like to have some info printed on screen while extracting. Too bad the checkpoint actions suggested here don't work as intended. If I try this command:
tar -xf big_fat_backup.tar --checkpoint=10000 --checkpoint-action=ttyout='%{%Y-%m-%d %H:%M:%S}t (%d sec): #%u, %T%*\r'
The meta characters in the string are not expanded (except for the %u), and I have the following output:
%{%Y-%m-%d %H:%M:%S}t (%d sec): #10000, %T%*
I simply copied from the manual, so what am I doing wrong?
Bonus question: If I create the archive with the -W switch and no errors are printed, should I be sure that the archive was written correctly?
|
These meta characters for --checkpoint-action were introduced in version 1.28, which was released a week ago.
A way to get approximate progress status on demand is to check the position of the tar process in its input file. You can see that with lsof -p1234 where 1234 is the PID of the tar process. On Linux, you can check the pos: line of /proc/1234/fdinfo/3.
If you want a progress report on screen, you can filter the archive through pv.
<big_fat_backup.tar pv -bt | tar -xf -
If you want to be sure that the archive is written correctly, check the exit status of the tar command. This goes for any other command as well: an exit status of 0 means success, a nonzero value means a failure.
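A minimal sketch of relying on the exit status (throwaway file names, tiny archive built just for illustration):

```shell
# Build a tiny archive in a temp directory, then extract and check the status
tmp=$(mktemp -d) && cd "$tmp" || exit 1
echo hello > file.txt
tar -cf backup.tar file.txt
rm file.txt

# tar exits 0 only if the whole archive was read and extracted cleanly
if tar -xf backup.tar; then
    echo "archive OK"
else
    echo "archive damaged or unreadable" >&2
fi
```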
| Tar actions not working as intended |
1,586,378,066,000 |
I have several binary files with the character 0x04 in them, and I'd like to add an escape \ character before each. Is there a script I can use to do this without needing to manually edit each one?
|
You can use GNU sed as in the following example:
for file in /path/*; do
sed -i 's/\x04/\\&/g' "$file"
done
Be aware that -i option modify the file in place, so be sure to have a backup, something should go wrong.
| Modifying a set of binary files |
1,586,378,066,000 |
Why is this checkmark not printed correctly when executing my shell script, even though echo alone outputs correctly?
script:
#!/bin/sh
YELLOW='\033[1;33m'
NC='\033[0m' # no color
echo "enter your provider's auth code"
stty -echo
read authcode
oathtool --totp -b $authcode | xclip -i
echo "${YELLOW}\u2714 code copied to clipboard${NC}"
terminal output: \u2714 code copied to clipboard
standalone command:
echo "\u2714 checkmark"
terminal output:
✔ checkmark
My shell is zsh 5.8 (x86_64-debian-linux-gnu)
using the rxvt-unicode-256color terminal
|
Your shebang has /bin/sh, and I doubt /bin/sh is zsh on your system. zsh is not one of Debian's approved sh implementations. And anyway I would advise against using non-standard-sh syntax in sh script.
\u2714 is not standard echo syntax. It will (likely) be standard inside $'...' in the next major revision of the POSIX standard, but not in arguments to echo or printf, and anyway Debian's default sh implementation (dash) doesn't support $'...' yet.
\uXXXX is recognised by zsh's printf (in the format argument or arguments for %b), print (except with -raw) and echo (except with -E or if the bsdecho option is enabled) builtins and inside its $'...' (that's the first shell where those \uXXXX sequences were introduced inside $'...', the $'...' quotes themselves being from ksh93).
Change the shebang to #! /bin/zsh - if you want to use zsh features in your script.
Then, you can use more zsh features such as:
print -P '%B%F{yellow}\u2714 whatever%f%b'
Where %F{yellow} with Prompt expansion is expanded to the correct code for your terminal to change the Foreground colour to yellow (and %f to reset the foreground colour to the default, and %B / %b to enable / disable bold like with your \033[1m).
In zsh, you can also do:
IFS= read -rs 'authcode?enter auth code: '
To issue a prompt and enable silent input more reliably.
In any case, like in sh, you'll likely want -r and IFS= to avoid read mangling the input.
You could also use set -o pipefail for that oathtool | xclip pipeline to be considered as having failed if either oathtool or xclip failed (not just xclip).
Note that if the locale's character encoding doesn't have the U+2714 character, print will fail with a zsh: character not in range error.
In standard sh + utilities syntax, the syntax should rather be:
#! /bin/sh -
bold_yellow='\033[1;33m'
nc='\033[m' # no color
if [ "$(locale charmap)" = UTF-8 ]; then
check_mark='\0342\0234\0224'
else
check_mark='[X]'
fi
saved_tty_settings=$(stty -g)
trap 'stty "$saved_tty_settings"' EXIT ALRM TERM INT QUIT
stty -echo
printf >&2 "enter your provider's auth code: "
IFS= read -r authcode &&
secret=$(oathtool --totp -b "$authcode") &&
printf %s "$secret" | xclip -i &&
printf '%b\n' "${bold_yellow}${check_mark} code copied to clipboard${nc}"
Here hardcoding the colouring escape sequences, and hardcoding the UTF-8 encoding of U+2714 only if the locale's charmap is UTF-8.
Or you could do:
check_mark=$(printf '\342\234\224' | iconv -f UTF-8 2> /dev/null) ||
check_mark='[X]'
To get the check_mark in the locale's charmap if it has one (and UTF-8 is supported on the system) and [X] otherwise. GB18030 is the only other charmap that I know that has U+2714. It's also an encoding of Unicode, used mostly in China (like in the zh_CN.gb18030 locale on Debian).
| echo checkmark in shellscript (zsh) how-to |
1,586,378,066,000 |
This is my service
[Unit]
Description=Cleanup service I made
[Service]
ExecStart=/home/me/scripts/cleanup.sh -d /home/me/scripts/testfolder/ -f ".*/*.[0-9]\{4\}-[0-9]\{2\}-[0-9]\{2\}-[0-9]\{2\}-[0-9]\{2\}-[0-9]\{2\}.*.log"
User=root
Group=root
The service runs, but my logs are getting FLOODED with
/lib/systemd/system/cleanup.service:5: Ignoring unknown escape sequences: ".*/*.[0-9]\{4\}-[0-9]\{2\}-[0-9]\{2\}-[0-9]\{2\}-[0-9]\{2\}-[0-9]\{2\}.*.log"
I have tested the script, I know it works. And I know it works in crontab as well.
What I tried:
Tried putting everything in double quotes
Changed ExecStart to "/bin/bash -c ""
Tried using the systemd-escape chars for directory path (replacing "/" with "-"). However this is for paths not the regex.
That entry is a regex, I cant really change its entry. Do I need to escape in a different way? Or simpler, how do I tell systemd to ignore the problem and stop dumping "Ignoring unknown sequences" into syslog. Just run the script as is?
|
You need to escape each backslash with another backslash.
... ".*/*.[0-9]\\{4\\}-[0-9]\\{2\\}- ...
See man systemd.service section COMMAND LINES, or in more recent versions systemd.syntax.
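Applied to the unit above, the ExecStart line would become something like this (a sketch showing only the doubled backslashes; everything else unchanged):

```ini
ExecStart=/home/me/scripts/cleanup.sh -d /home/me/scripts/testfolder/ -f ".*/*.[0-9]\\{4\\}-[0-9]\\{2\\}-[0-9]\\{2\\}-[0-9]\\{2\\}-[0-9]\\{2\\}-[0-9]\\{2\\}.*.log"
```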
| Stop systemd from flooding logs with "Ignoring unknown escape sequence" |
1,586,378,066,000 |
I use a string that contains \ in a sed expression and I want to keep it in the output of sed
$ A=w
$ B="\ "
$ echo word | sed "s/$A/$B/"
ord
I want to obtain \ ord instead of ord.
What is the most elegant way of doing it?
|
You need to escape twice, once for the shell and once for sed: in double quotes the shell turns \\\  into \\ , and sed's replacement then interprets \\ as a single literal backslash:
$ A=w
$ B="\\\ "
$ echo word | sed "s/$A/$B/"
\ ord
| How to obtain "\ " instead " " after a sed substitution? [duplicate] |
1,586,378,066,000 |
I am using the script command to record everything from the terminal. But when I open the generated file, it has lots of junk characters.
Can anyone help me remove these junk characters from the file, or suggest an alternate way?
This file look like this:
ossvm10(0)> ls -lrt /usr/opt/temip/mmexe/mcc_fcl_pm.exe^M
^[[00m-rwxr-xr-x 1 root root 387517 Feb 18 2013 ^[[00;32m/usr/opt/temip/mmexe/mcc_fcl_pm.exe^[[00m^M
^[[m^[]0;temip@ossvm10:/home/dharmc^G[/home/dharmc]^M
ossvm10(0)> script -a unit_testing_TEMIPTFRLIN_00202_CR#9961.txtsum /usr/opt/temip/mmexe/mcc_fcl_pm.exe^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^[[1P^H^[[1P^H^[[1P^H^[[1P^H^[[1P^H^[[1P^H^[[1P^H^[[1P^H^[[1P^H^[[1P^H^[[1P^H^[[1P^H^[[1P^H^[[1P^H^[[1P^H^[[1P^H^[[1P^H^[[1P^H^[[1P^H^[[1P^H^[[1P^H^[[1P^H^[[1P^H^[[1P^H^[[1P^H^[[1P^H^[[1P^H^[[1P^H^[[1P^H^[[1P^H^[[1P^H^[[1P^H^[[1P^H^[[1P^[[1P^H^H^[[1P^H^[[1P^H^[[1P^H^[[1P^H^[[1P^H^[[1P^H^[[1P^H^[[1P^H^[[1P^H^[[1P^H^[[1P^H^[[1P^H^[[1P^H^[[1P^H^[[1P^H^[[1P^[[1P^H^G^G^G^G^G^G^G^G^M
06046 379^M
^[]0;temip@ossvm10:/home/dharmc^G[/home/dharmc]^M
|
You can simply run:
dos2unix <filename>
This will remove all the ^M characters from the file. ^M is the carriage-return character generated in a DOS environment. The command dos2unix just converts the file from DOS to Unix format.
To remove the ^H and ^G characters (the actual backspace and bell bytes, which is how script records them), use GNU sed:
sed -i 's/\x08//g; s/\x07//g' <filename>
| How to remove junk characters from the file generated by script command in linux [duplicate] |
1,586,378,066,000 |
In the answer to this question:
https://superuser.com/questions/163515/bash-how-to-pass-command-line-arguments-containing-special-characters
It said that we have to put an arg of a function in double quotes to escape the entire argument (as in "[abc]_[x|y]").
But what if the special character is " at the start (as in "[abc]_[x|y]")? We can't do the following:
program ""[abc]_[x|y]" anotherargument
How can I escape the " in this case?
|
If I understand you correctly, you have a regex pattern in a variable and you would like grep to use it without giving any special meaning to regex metacharacters. If this is the case, the -F (fixed strings) option to grep is what you want:
grep -F "$var" your_file
Your system may also have a special command (fgrep) that is equivalent to the above:
fgrep "$var" your_file
| How to escape a Whole Variable? |
1,463,773,129,000 |
I have a command with a working .desktop Exec key as follows:
Exec=env XDG_CONFIG_HOME=/home/bean/.config/gedit/ gedit %U
I would like to use the $HOME variable instead and it works in the terminal but not when used in the .desktop file. Please correct me if I'm wrong, but I assume this is because of improper "escape characters".
I have tried numerous variations of the command with \ and {} but to no avail. After looking around I'm not even sure if what I want can be accomplished.
|
There are no special characters there that you would need to escape, so no it's not an escaping issue. I just read through the freedesktop.org .desktop file specification and it does not explicitly mention environment variables, neither to allow nor disallow them. Nevertheless, neither $HOME nor ~ seem to be expanded in the exec field of a .desktop file.
So, it looks like what you're attempting is not possible.
| Proper use of escape characters in desktop file |
1,463,773,129,000 |
I'm working on my ANSI formatting code for jQuery Terminal. It's almost working, but I have one issue related to the 0A ANSI code that should move the cursor (and the other 0 cursor codes: B C D E F).
I've tested with the ervy library, and it makes a difference if I remove \x1b\[0[A-D] from the string before outputting to the terminal.
I'm not sure if I process the ANSI escapes correctly: I'm splitting the output into lines; before each line I increase the y position and set x to 0, and when there is a cursor ANSI escape code I move the cursor. I'm using an array of arrays to hold the output of the screen. (Not sure, but I think it would be faster if I don't join the lines into single strings until the end; I created it that way because it was my first idea, not because of speed.)
I'm not sure what I should do if there is a 0.
Here is the output of two plots from ervy library
Correct plot
plot without zero codes.
I'm trying to debugg my code (I think that on one point the plot look on the second screen), but I don't know how to process 0 ANSI escape codes.
I'm using Wikipedia as reference.
|
The behavior seen in DEC VTs is easier to understand with ZDM (zero default mode) in mind. From ECMA-48:
A parameter value of 0 represents a default parameter value which may be different from 0.
For cursor movement sequences the spec defines the default as "1", therefore these are all equal in ZDM:
CSI A (omitted param defaults to 1)
CSI 0 A (0 has the special meaning of the default value)
CSI 1 A (param happens to be the default value)
As far as I am aware, all CSI sequences implemented in DEC devices follow the ZDM scheme.
Later on ZDM was removed from the specs, thus "0" now should read as number and not as special placeholder for a default value anymore. But the DEC devices did not switch that behavior. Thus it boils down to the question, whether a VT100+ compatible emulator can be spec conform at all.
| What ANSI escape 0x1b[0A and other 0 value codes should do? |
1,463,773,129,000 |
I have a very simple chat-like tool that runs within a GNU screen session. The screen window is split, the top part is running tail -f file.txt and the bottom part is running a script with the following content:
#!/bin/bash
while : ; do
read -p "Message: " msg
ctime=$(date +"%H:%M:%S")
echo "[${ctime}] User: ${msg}" >> file.txt
done
Very simple, but gets the job done with the requirements I have. There's only one problem: When I press the ESC or any of the arrow keys, it inserts an escape-sequence, like ^[[D for example. And this messes up the file, resulting in terrible output.
So my question is simple: How can I escape the input from read so it's safe to write to the file?
I've tried echo "[${ctime}] User: ${msg}" | strings >> file.txt, which made it a lot better: there were no big mess-ups anymore (e.g. nothing was overwritten or wrongly output), but things are still not perfect (e.g. entering te^[[Dst would turn into te\n[Dst, the \n being an actual new line).
|
How about a slightly different approach? Rather than remove the escape characters and sequences, you can allow users to use them to edit the input line with read -e.
If you want, you can take this even further by recording chat message history, like this:
...
read -e -p "Message: " msg
history -s "$msg"
...
With this, if someone makes a typo in a message, they can hit up-arrow, use left- and right-arrow to edit and fix the typo, then hit return to send the corrected message.
| How to safely escape a variable string (user input) in bash? |
1,463,773,129,000 |
I'm trying to get the output of one function and pass it to another.
set -x
OUTPUT=$(git diff --name-only --diff-filter=AM develop... | sed 's/.*/"&"/')
./bin/phpcs $OUTPUT
My main problem is that the first function returns a list of files, and file names may contain spaces. So I'm wrapping them in double quotes, but when I pass the result to my other function extra single quotes are added.
Output:
++ git diff --name-only --diff-filter=AM develop...
++ sed 's/.*/"&"/'
+ OUTPUT='"test file.php"
"testfile.php"'
+ ./bin/phpcs '"test' 'file.php"' '"testfile.php"'
Main end goal to have call equal to:
./bin/phpcs "test file.php" "testfile.php"
|
Your issue is that you are adding double quotes to the text outputted by git diff. This mangles the pathnames and makes it even harder to correctly parse the file list, especially if any pathname happens to contain quotes.
The single quotes that you see in the tracing output are just added by bash to make it easier to see how strings are delimited (e.g. to show that "test, file.php", and "testfile.php" are three separate arguments). Remember that the tracing output of a shell is debugging output meant to give you a hint of what the shell is doing, and is not generally the actual code run by the shell.
To safely use the pathnames that git diff --name-only outputs, consider also using the -z option. This causes git to output the pathnames as a nul-delimited list. You may then use xargs -0 to execute any command on the elements of that list:
git diff --name-only -z --diff-filter=AM develop... |
xargs -0 ./bin/phpcs
You could also read the list of files into an array. For example, in the bash shell:
readarray -d '' -t names < <( git diff --name-only -z --diff-filter=AM develop... )
./bin/phpcs "${names[@]}"
This obviously relies on git diff not outputting too many names.
| Bash script: Avoid single quotes added to string |
1,463,773,129,000 |
In the following scenario, which is the way to pass ${MY_ENV_VAR} in the payload?
I will have to escape:
a) the single quotes of the payload
b) the double quotes of the value of the text json field
I need ${MY_ENV_VAR} to be interpolated, of course.
#!/bin/bash
COMMAND=${MY_ENV_VAR}
curl -X POST --data-urlencode 'payload={"channel": "#alerts", "username": "k8s-cronjobs-bot", "text": "Command ${MY_ENV_VAR} run with success", "icon_emoji": ":ghost:"}' ${SLACK_WEBHOOK}
|
With jq:
$ payload='{"channel": "#alerts", "username": "k8s-cronjobs-bot", "text": "", "icon_emoji": ":ghost:"}'
$ MY_ENV_VAR='"foo"'
$ echo "$payload" | jq --arg cmd "$MY_ENV_VAR" '.text = "Command " + $cmd + " run with success"'
{
"channel": "#alerts",
"username": "k8s-cronjobs-bot",
"text": "Command \"foo\" run with success",
"icon_emoji": ":ghost:"
}
So your script would look like:
#!/bin/bash
COMMAND=${MY_ENV_VAR}
payload='{"channel": "#alerts", "username": "k8s-cronjobs-bot", "text": "", "icon_emoji": ":ghost:"}'
payload=$(echo "$payload" | jq -r --arg cmd "$COMMAND" '.text = "Command " + $cmd + " run with success"')
curl -X POST --data-urlencode "payload=$payload" "${SLACK_WEBHOOK}"
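For completeness, a variant sketch that builds the whole payload from scratch with jq -n (same assumed field values as above), so there is no hand-written JSON template at all:

```shell
# Build the whole payload with jq -n, letting jq do all JSON escaping.
# $MY_ENV_VAR may safely contain quotes, backslashes, newlines, etc.
payload=$(jq -n --arg cmd "${MY_ENV_VAR}" \
    '{channel: "#alerts",
      username: "k8s-cronjobs-bot",
      text: "Command \($cmd) run with success",
      icon_emoji: ":ghost:"}')
curl -X POST --data-urlencode "payload=${payload}" "${SLACK_WEBHOOK}"
```

The \($cmd) construct is jq's string interpolation, which escapes the value correctly no matter what it contains.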
| Pass variable within json payload in shell script |
1,463,773,129,000 |
I have many files in a directory whose names begin with a parenthesis. They are generated by Dropbox due to conflicts. No combination of escaping seems to help:
rm -rf "(*"
rm -rf "\(*"
rm -rf \(*
As frostschutz mentioned, they do not seem to be ASCII characters. How can I find out if this is the case, and what is the workaround?
|
The shell interprets the commandline with certain rules which you have to consider here:
You can escape shell metacharacters with \ so that it behaves like an ordinary character.
You can use single or double quotes and inside these most (with double quotes) or all (with single quotes) shell metacharacters lose their special meaning.
Quotes don't have to be at word boundaries, so that rm th"is fil"e would be the same as rm "this file".
The characters []?* can be used for filename expansion. They may not be quoted or escaped for this purpose.
So possible solutions for your case are rm -rf '('*, rm -rf "("* and rm -rf \(*. I don't know why the last one didn't work in your case. Perhaps there is some whitespace in front of the parenthesis?
With the following line you should be able to see if there are any funny characters in your filenames:
for i in *; do od -c <<< "$i"; done
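If bash is available, printf '%q' is another quick way to expose odd characters, since it prints each name with shell quoting applied (a sketch, complementary to the od loop above):

```shell
# Print every name with shell quoting: spaces and other odd bytes
# become visible as escapes instead of blending in.
for i in *; do
    printf '%q\n' "$i"
done
# e.g. a name "a b" prints as: a\ b
```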
| Deleting all files that begins with parenthesis |
1,463,773,129,000 |
The problem is simply as follow:
watch "psql -d postgresql://user:pass@host:5432/dbname -c 'select id,name from table where name <> 'not available' order by id;'"
this 'not available' comparison has to be single quoted as is for Postgres. But I cannot figure out a way to escape these singles quotes properly as the psql command (I mean, the select...) is itself already single quoted, and is also inside the doubles quotes required by the call of psql for watch.
How to solve this?
I almost seen every possible syntax errors by using \ or multiple double/single quotes for escaping these single quotes.
|
You can't escape single-quotes inside single-quotes. You can, however, fake it with:
'\''
Explanation: that's an end-single-quote (i.e. to end the current quoting), followed by an escaped single-quote, and then start single-quote again. it works the same as, say, 'a'b'c' (quoted a then an unquoted b then a quoted c - all together, that's just abc...and 'a'\''b' is just a'b)
watch "psql -d postgresql://user:pass@host:5432/dbname -c \
'select id,name from table
where name <> '\''not available'\''
order by id;'"
(newlines added to improve readability. the sql command will work the same with or without them)
Note: when using postgres (or sqlite or mysql, etc) it's best to use a language that supports placeholders, so you don't need to worry about quoting. Their CLIs are good for interactive queries and some scripting (in SQL, not sh) but the nested quoting required to pass sql code from sh is clumsy and easy to get wrong (not impossible, just far more effort than it would be in other languages).
e.g. in perl DBI (I haven't included any of the boilerplate login stuff, just the select with a prepared statement and a placeholder):
# you could use string literals, but i'll use some
# variables for this example
my $exclude = 'not available';
my $sql = 'select id,name from table where name <> ? order by id';
my $sth = $dbh->prepare($sql);
$sth->execute($exclude);
The ? is a placeholder, DBI will replace it with whatever value you provide - quoting it automatically if required, depending on the data type of the column. i.e. just give it the data for the column, let it worry about quoting and escaping.
BTW, you can have more than one, you just have to supply the right types of arguments in the right order. Also, some DBI drivers - including DBD::Pg - allow you to use named placeholders like :name or numbered placeholders like $1, $2 - these look the same, and work roughly the same as shell positional parameters, but they're not; they're placeholders in an sql statement)
my $sql = 'select id,name from table where name <> $1 order by id';
shell is a terrible language for anything that isn't just feeding data and/or filenames into other programs and co-ordinating the execution of other programs. Anything even moderately complicated is going to be a PITA due to the care and attention you have to pay to quoting, whitespace, word-splitting etc issues. You will run into the same kinds of problems with nested quotes in ssh commands, or in a find .. -exec sh -c '...' {} + command and many other instances where you have to nest multiple levels of quotes.
| Escaping single quotes inside a single quoted sub-command, itself inside a double quoted command |
1,463,773,129,000 |
I am trying to programmatically create a file by printf-ing different things to it. (i.e. printf %s\\n hostname >> file.txt)
I would like to send an Esc code to clear the screen (it would be the first line).
In bash, I just use printf $'\033[2J'$'\033[;H' (\033 being the octal code for Esc) and it clears the screen. Everything works as it should; as the first line in the file, when you cat the file, it clears the screen first.
In tcsh (the default shell for FreeBSD root), I can't seem to figure out how to "escape the escape" and get the equivalent of the bash escape codes.
I have been experimenting with (printf and echo)
echo %{\e[2J]}
echo \e
echo \\e
echo %{\033[2J}
If anyone can point me in the right direction, I would be very appreciative!.
Thanks!
|
printf expands those in the format (first) argument by itself, no need for those ksh93-style $'...' quotes. So:
printf '\33[2J\33[H'
Note that printf is not a builtin in tcsh, so you'd be calling the printf command on the filesystem. You might as well call the tput or clear commands then, but in tcsh that's not needed as tcsh has built-in support for termcap/terminfo, so you can do:
echotc clear
That will query the terminfo or termcap database for the right escape sequence to send for the current terminal (according to $TERM) which is usually better than using a hard-coded one.
If you wanted to use tcsh's echo built-in, you could do:
set echo_style = both # meant to be the default in tcsh
echo -n '\033[2J\033[H'
Or:
set echo_style = sysv # or both
echo '\033[2J\033[H\c'
| tcsh - echo escape code for escape |
1,463,773,129,000 |
If I want to bind a key-mapping to a function or widget in zsh I have learnt that I first have to hit Ctrl+v - at a prompt, then enter the key sequence I want to use, then use the output in my key-binding command.
So for example if I want to map Ctrl+xCtrl+v to the action of opening the current command line contents in an editor, I need to
hit Ctrl+v - to enter "dump key mode"
hit Ctrl+xCtrl+v
In my case this produces ^X^E
take the ^X^E and use it in my keybinding command, e.g.
bindkey "^X^E" edit-command-line
Why is this necessary and what is actually happening "behind the scenes" when I do this?
|
When you press Ctrl-V, the shell will start by ignoring keyboard interrupts and simply take the pressed key combination as the input character. This is easily possible as ASCII is designed to hold all control characters.
Of course, on display it has to cheat a bit and show the ^ followed by the corresponding key or otherwise it would output control characters instead of what you need to see.
Note that the bindkey documentation shows that it supports two notations for control characters: (examples refer to Ctrl-X)
caret notation which is to explicitly write the caret (^) followed by the corresponding control character textually (not needing the Ctrl-V method in this question); example: ^X
C- followed by the control character; example: C-x. This causes some key combinations to require escaping (even if you don't use it). You should probably read the whole screen and bindkey manual.
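Putting the two notations side by side, the binding from the question could be written either way; this is a zsh config sketch (the autoload/zle lines are the usual setup for edit-command-line and are assumed, not taken from the answer):

```shell
# In ~/.zshrc: both bindkey lines bind Ctrl-x Ctrl-e to edit-command-line.
autoload -Uz edit-command-line
zle -N edit-command-line
bindkey '^X^E' edit-command-line      # caret notation
bindkey '\C-x\C-e' edit-command-line  # C- notation, equivalent
```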
| Understanding what is happening when I dump a terminal character sequence with Ctrl-v? |
1,463,773,129,000 |
In a bash script I am rasterizing a PDF page by page into single files, in the end the single resulting PNGs are merged again into a PDF like this:
convert -monitor /path/to/1.png /path/to/2.png /path/to/3.png ... output.pdf
The only problem that script still has is the inability to correctly handle files with spaces. Here are some of the things I tried:
newfiles=$(sed -r -e 's| |\\ |g' <<< "$tmppath$curr.png")
echo "DEBUG: newfiles : $newfiles"
filearray[$curr-1]="$newfiles"
echo "DEBUG: filearray: ${filearray[*]}"
This yields (per page/file):
DEBUG: newfiles : /tmp/pngpdf/file\ with\ spaces/1.png
DEBUG: filearray: /tmp/pngpdf/file\ with\ spaces/1.png
Later on, I have two debug messages
echo "DEBUG: filearray: ${filearray[*]}"
echo "DEBUG: ${filearray[0]}, ${filearray[1]}, ${filearray[2]}, ..."
to see how filearray looks like with multiple files/pages:
DEBUG: filearray: /tmp/pngpdf/file\ with\ spaces/1.png /tmp/pngpdf/file\ with\ spaces/2.png /tmp/pngpdf/file\ with\ spaces/3.png /tmp/pngpdf/file\ with\ spaces/4.png /tmp/pngpdf/file\ with\ spaces/5.png
DEBUG: /tmp/pngpdf/file\ with\ spaces/1.png, /tmp/pngpdf/file\ with\ spaces/2.png, /tmp/pngpdf/file\ with\ spaces/3.png, ...
And I can clearly see the following:
For every file there is exactly one array element.
Every space is preceded by a \.
I am putting the whole command into a variable first to see what would get executed:
execcmd="convert -monitor ${filearray[@]} output.pdf"
An example might look like that:
convert -monitor /tmp/pngpdf/file\ with\ spaces/1.png /tmp/pngpdf/file\ with\ spaces/2.png /tmp/pngpdf/file\ with\ spaces/3.png /tmp/pngpdf/file\ with\ spaces/4.png /tmp/pngpdf/file\ with\ spaces/5.png output.pdf
But executing that with $execcmd is throwing numerous errors at me:
convert.im6: unable to open image `/tmp/pngpdf/file\': No such file or directory @ error/blob.c/OpenBlob/2638.
convert.im6: no decode delegate for this image format `/tmp/pngpdf/file\' @ error/constitute.c/ReadImage/544.
convert.im6: unable to open image `with\': No such file or directory @ error/blob.c/OpenBlob/2638.
convert.im6: no decode delegate for this image format `with\' @ error/constitute.c/ReadImage/544.
convert.im6: unable to open image `spaces/1.png': No such file or directory @ error/blob.c/OpenBlob/2638.
convert.im6: unable to open file `spaces/1.png' @ error/png.c/ReadPNGImage/3667.
convert.im6: unable to open image `/tmp/pngpdf/file\': No such file or directory @ error/blob.c/OpenBlob/2638.
convert.im6: no decode delegate for this image format `/tmp/pngpdf/file\' @ error/constitute.c/ReadImage/544.
convert.im6: unable to open image `with\': No such file or directory @ error/blob.c/OpenBlob/2638.
convert.im6: no decode delegate for this image format `with\' @ error/constitute.c/ReadImage/544.
convert.im6: unable to open image `spaces/2.png': No such file or directory @ error/blob.c/OpenBlob/2638.
convert.im6: unable to open file `spaces/2.png' @ error/png.c/ReadPNGImage/3667.
convert.im6: unable to open image `/tmp/pngpdf/file\': No such file or directory @ error/blob.c/OpenBlob/2638.
convert.im6: no decode delegate for this image format `/tmp/pngpdf/file\' @ error/constitute.c/ReadImage/544.
convert.im6: unable to open image `with\': No such file or directory @ error/blob.c/OpenBlob/2638.
convert.im6: no decode delegate for this image format `with\' @ error/constitute.c/ReadImage/544.
convert.im6: unable to open image `spaces/3.png': No such file or directory @ error/blob.c/OpenBlob/2638.
convert.im6: unable to open file `spaces/3.png' @ error/png.c/ReadPNGImage/3667.
convert.im6: unable to open image `/tmp/pngpdf/file\': No such file or directory @ error/blob.c/OpenBlob/2638.
convert.im6: no decode delegate for this image format `/tmp/pngpdf/file\' @ error/constitute.c/ReadImage/544.
convert.im6: unable to open image `with\': No such file or directory @ error/blob.c/OpenBlob/2638.
convert.im6: no decode delegate for this image format `with\' @ error/constitute.c/ReadImage/544.
convert.im6: unable to open image `spaces/4.png': No such file or directory @ error/blob.c/OpenBlob/2638.
convert.im6: unable to open file `spaces/4.png' @ error/png.c/ReadPNGImage/3667.
convert.im6: unable to open image `/tmp/pngpdf/file\': No such file or directory @ error/blob.c/OpenBlob/2638.
convert.im6: no decode delegate for this image format `/tmp/pngpdf/file\' @ error/constitute.c/ReadImage/544.
convert.im6: unable to open image `with\': No such file or directory @ error/blob.c/OpenBlob/2638.
convert.im6: no decode delegate for this image format `with\' @ error/constitute.c/ReadImage/544.
convert.im6: unable to open image `spaces/5.png': No such file or directory @ error/blob.c/OpenBlob/2638.
convert.im6: unable to open file `spaces/5.png' @ error/png.c/ReadPNGImage/3667.
convert.im6: no images defined `output.pdf' @ error/convert.c/ConvertImageCommand/3044.
Obviously, it does not recognize what I want to do correctly. The backslashes are themselves escaped and, thus, the spaces re-gain their argument-delimiting powers. When putting that command into bash directly it runs smoothly (as expected):
$ convert -monitor /tmp/pngpdf/file\ with\ spaces/1.png /tmp/pngpdf/file\ with\ spaces/2.png /tmp/pngpdf/file\ with\ spaces/3.png /tmp/pngpdf/file\ with\ spaces/4.png /tmp/pngpdf/file\ with\ spaces/5.png output.pdf
Load/Image//tmp/pngpdf/file with spaces[1.png]: 584 of 585, 100% complete
Load/Image//tmp/pngpdf/file with spaces[2.png]: 584 of 585, 100% complete
Load/Image//tmp/pngpdf/file with spaces[3.png]: 584 of 585, 100% complete
Load/Image//tmp/pngpdf/file with spaces[4.png]: 584 of 585, 100% complete
Load/Image//tmp/pngpdf/file with spaces[5.png]: 584 of 585, 100% complete
Mogrify/Image//tmp/pngpdf/file with spaces[5.png]: 4 of 5, 100% complete
resize image[output.pdf]: 180 of 181, 100% complete
resize image[output.pdf]: 180 of 181, 100% complete
resize image[output.pdf]: 180 of 181, 100% complete
resize image[output.pdf]: 180 of 181, 100% complete
That leaves me very puzzled and I have no idea how to fix that. I tried saving the single filenames quoted in filearray as well such that I would get DEBUG: filearray: "/tmp/pngpdf/file with spaces/1.png" "/tmp/pngpdf/file with spaces/2.png" "/tmp/pngpdf/file with spaces/3.png" "/tmp/pngpdf/file with spaces/4.png" "/tmp/pngpdf/file with spaces/5.png" to no avail. Probably bash escapes these special characters as well. I am sure that it must have to do with how bash works, but it seems my understanding is not yet good enough to fix that on my own. So, I'd be very glad if someone could enlighten me. :)
PS: I've been looking for an answer to my question in several places (Shell script issue with filenames containing spaces, Using a generated list of filenames as argument list — with spaces, Looping through files with spaces in the names?, Bash scripting and files with spaces in them, Why does my shell script choke on whitespace or other special characters?, Trouble in script with spaces in filename). Unfortunately, I was not able to locate an answer to my question in there. Maybe it's just too late ;)
|
execcmd="convert -monitor ${filearray[@]} output.pdf"
But executing that with $execcmd is throwing numerous errors at me
Don't call $execcmd, as it has already lost the differentiation between filenames and space-separated parts of filenames. Instead, execute the command itself with quoted arguments:
convert -monitor "${filearray[@]}" output.pdf
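A minimal sketch of the same idea with hypothetical file names, showing that the values never need embedded backslashes; quoting the expansion is what preserves the spaces:

```shell
#!/bin/bash
# Each name is stored as ONE array element, spaces and all.
filearray=()
for curr in 1 2 3; do
    filearray+=( "/tmp/pngpdf/file with spaces/${curr}.png" )
done
printf '<%s>\n' "${filearray[@]}"   # one line per file, spaces intact
# convert -monitor "${filearray[@]}" output.pdf
```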
| bash: Passing several space containing filenames |
1,463,773,129,000 |
When I need to programmatically open in emacs a file that has a space in its name, how can I do that? I've tried these commands from a Ruby script, inside backquotes or popen(…):
emacs "foo bar"
or
emacs foo\ bar
opens two files each named foo and bar, but I want to open a file named foo bar.
|
You're having problems with quoting because you're calling popen with a commandline, which is passed to the shell. Ruby's string parser is either eating the double quotes or eating the backslash. You can either call popen with an array of strings, which will bypass the shell, or you can write emacs foo\\ bar which will escape the backslash that you want Ruby to leave for the shell to see.
| Opening a file with space from Ruby |
1,463,773,129,000 |
The command below works:
$ echo '{ "a": [ { "b": "1" }, { "b": "2" } ] }' | jq -r '.a[0].b'
1
But if I try to get the values of all the b elements under a I get the following error:
$ echo '{ "a": [ { "b": "1" }, { "b": "2" } ] }' | jq -r '.a[*].b'
jq: error: syntax error, unexpected '*' (Unix shell quoting issues?) at <top-level>, line 1:
.a[*].b
jq: 1 compile error
How should I escape the wildcard? I've tried several variants without success.
Using wildcard as array index is a valid option according to https://support.smartbear.com/alertsite/docs/monitors/api/endpoint/jsonpath.html#:~:text=JSONPath%20is%20a%20query%20language,that%20need%20to%20be%20verified.
|
The array iterator in jq is .[]. The asterisk character is not required. The following command should get you the intended output:
echo '{ "a": [ { "b": "1" }, { "b": "2" } ] }' | jq -r '.a[].b'
Result:
1
2
Further reference: jq Manual
As to why the syntax is different, jq is a tool with its own syntax for querying. It is not based on the JSONPath standard.
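For comparison, an equivalent query using map, which first collects the b values into an array and then iterates over it (a sketch; the output is identical here):

```shell
echo '{ "a": [ { "b": "1" }, { "b": "2" } ] }' | jq -r '.a | map(.b) | .[]'
# prints:
# 1
# 2
```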
| Correct escaping of wildcard when using jq |
1,463,773,129,000 |
Ok-- so I've got a mystery on my hands.
I'm currently manually PXE installing via IPMI, working in the terminal, and throughout the process of watching the terminal display information regarding the installation, I keep seeing ^[OP being repeated over and over--like someone is sitting on the keyboard.
The closest I've gotten to discovering what key this could be is this question
However, I can find no documentation of what the OP portion represents.
This happens on multiple devices I've used with different keyboards, so it is not a keyboard malfunction. It also does not occur on other machines running the same installers.
Any insight would be greatly appreciated!
EDIT: (expanding off of the accepted answer) Ok so extension on the mystery. I'm pretty sure these codes only return if the system does not know how to deal with the input. It only shows up at a random point during the installation--and when I try to hit the F1 key to force the ^[OP to show up, it does not (where the arrow keys will continue to show ^[[A, etc.) This would lead me to believe it is able to handle the F1 key somewhere within the boot (to be expected for a boot program). Any explanation on how this mystery location could be getting the character code to show up?
|
It is a representation of the F1 key...
Try for instance to execute an arbitrary command that hangs your terminal and then press F1. For example:
root@martinipc:somewhere/# read
^[OP^[OP^[OP
As you can see, I pressed F1 3 times and then ^[OP appeared 3 times.
Maybe your machine required you to press the F1 key before the point you are at, or maybe there is a temporary problem with the keyboard detection on the motherboard that is emulating presses of the F1 key.
Another possibility occurs if you pressed it only one time, but the system is lagging badly. This can cause this strange behavior too: you press a and get something like aaaaaaaa.
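You can reproduce the caret rendering non-interactively: feed the same three bytes that F1 sends through cat -v, which displays the ESC byte as ^[ (a quick sketch):

```shell
# F1 sends ESC O P (three bytes); cat -v renders ESC as ^[.
printf '\033OP' | cat -v
# prints: ^[OP
```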
| What is the meaning of the ^[OP in terminal? |
1,463,773,129,000 |
I am looking for a way to create a command line using 'at' (atd) that schedules a task which in turn schedules another task to do "more stuff" after a reboot (a one-time task).
What I am looking for right now is a way to make it work just by chaining together "at" schedules.
My problem is that I somehow get lost with the quoting/escaping in bash.
what I got so far and what works on a manual execution is:
echo "somecommand -someOptions 2>&1 | mail -s \"$HOST after reboot: somecommand -someOptions\" [email protected]" | at now +5 minutes ;\
#do admin stuff before reboot in less than 5 mins here ;\
touch /fastboot ; reboot
as mentioned, that works fine.
I now would like to encapsulate the whole thing into ANOTHER NEW 'AT' COMMAND, so that the former command line can be scheduled to a specific starting time and no longer requires a manual execution.
I assume it must be possible to encapsulate/include the whole thing into another "at" command, so I would like to find out how to properly quote/escape it so that it will work.
I tried this (not working):
echo "echo \"somecommand -someOptions 2>&1 | mail -s \\"$HOST after reboot: somecommand -someOptions\\" [email protected]\" | at now +5 minutes ;\
#do admin stuff before reboot in less than 5 mins here ;\
touch /fastboot ; reboot ; " | at now + 1 minute
I see that it schedules 2 jobs, does the admin stuff before reboot and reboots, but the 2nd (later) at job is scrambled/crashed because of some bad quoting/escaping I guess. I would like to know and learn more about what is going on here and where I did wrong on the escaping. I guess I could use single quotes on the outer at, since in this case I don't need any variable expansions, but let's assume that I would like to use variable expansion there, too. In that case, how should I escape/quote this thing properly?
|
You forgot to backslash the double quote when you wrote \\" which should be \\\". But you can avoid this by using a third type of shell quote, the here-document:
at now + 1 minute <<\!
echo "somecommand -someOptions 2>&1 | mail -s \"$HOST after reboot: somecommand -someOptions\" [email protected]" | at now +5 minutes
#do admin stuff before reboot in less than 5 mins here
touch /fastboot ; reboot
!
| how to properly schedule a task that schedules a task with 'at' (atd) ? or how to properly quote/escape in bash |
1,463,773,129,000 |
When using shell in a box this is returned from the server
\r\n\u001B]0;kuba@jcubic:~\u0007\u001B[01;32mkuba@jcubic\u001B[00m:\u001B[01;34m~\u001B[00m$
What do ESC]0; and \u0007 do?
|
These are XTerm Control Sequences, so from that list ESC ] is an "Operating System Command", and then down in that section one finds:
OSC Ps ; Pt BEL
Ps = 0 -> Change Icon Name and Window Title to Pt.
The use of unicode (\u...) is a bit odd, though the low number values used here can be looked up in man ascii:
$ man ascii | egrep -i '1b|007'
007 7 07 BEL '\a' (bell) 107 71 47 G
033 27 1B ESC (escape) 133 91 5B [
| What \u001B]0;kuba@jcubic: ~\u0007 escape code do? |
1,463,773,129,000 |
I have a debug trap that runs every time I enter a command in bash that sets the window title to indicate what command is running. I'm leaving out all the configuration details and boil it down to:
export PS1="\[\e]0;$GENERATED_WINDOW_TITLE\a\]$GENERATED_PROMPT"
This works incredibly well, with only one snag: if the bash shell is running in an environment that does not support this feature, the GENERATED_WINDOW_TITLE is printed on the screen with each prompt. This happens any time I'm running bash from a non-X terminal.
How can bash tell if this escape sequence is supported?
|
I don't think there's a terminfo capability for that. In practice, testing the value of TERM should be good enough. That's what I do in my .bashrc and .zshrc, and I don't recall it being a problem.
case $TERM in
(|color(|?))(([Ekx]|dt|(ai|n)x)term|rxvt|screen*)*)
PS1=$'\e]0;$GENERATED_WINDOW_TITLE\a'"$PS1"
esac
| How can a bash script detect support for window titling escape characters? |
1,463,773,129,000 |
I have a PS1 set up in my .zshrc which includes multiple ANSI escape sequences. An equivalent definition works nicely in Bash, but in Zsh (v5.8.1) it seemingly causes the shell to calculate the width of the prompt incorrectly. As a consequence, when entering longer commands, the command line suddenly vanishes, and I am typing blind. And in cases it leads to the cursor being placed on the next line after the prompt, even though the PS1 does not include a line break.
I am already using the \001…\002 escape sequences that are used by readline to adjust the length calculation around each ANSI escape sequence. Unfortunately this seems to be insufficient for Zsh (in the example below they don’t seem to have any effect, but in my real, more complex PS1, they seem to improve the situation somewhat, at least).
Here’s an example to demonstrate the issue (the comment underlines the ANSI escape sequence parts that I believe need to be bracketed by \001…\002):
PS1=$'\001\e[38;2;1;1;1m\e[48;5;250m\002}\001\e[38;5;250m\e[48;2;1;1;1m\002} '
# \001--------------------------\002 \001--------------------------\002
With this prompt, and using an 80 column terminal, after I type 25 characters, the entire command including prompt vanishes. Using backspace does not make the characters reappear, but instead makes the cursor go to the previous line.
Here’s an Asciinema recording of this behaviour.
For this demo I have disabled all other customisation of my shell.
(In reality I am using the nf-pl-left_hard_divider, U+E0B0, from NerdFont instead of the }s, but this does not impact the issue.)
What am I doing wrong? How are ANSI escape sequences supposed to be used inside a Zsh prompt definition?
|
In zsh, you use %{ and %} prompt escapes for that. From info zsh 'prompt expansion':
%{...%}
Include a string as a literal escape sequence. The string within
the braces should not change the cursor position. Brace pairs can
nest.
A positive numeric argument between the % and the { is treated as
described for %G below.
So, that would be PS1='%{\e[...m%}' except that you don't need to hardcode escape sequences in zsh, as it has its own prompt expansion operators for most visual effects.
For instance to set the Foreground colour to RGB #010101 and Background colour 250 from the 256 colour palette, you'd do:
PS1='%F{#010101}%K{250}Text%f%k%# '
You can use print -P which also does prompt expansion, to see what corresponding escape sequences are being generated:
$ print -rP '%F{#010101}%K{250}' | sed -n l
\033[38;2;1;1;1m\033[48;5;250m$
There's even a zsh/nearcolor module to give you the nearest colour for those #RRGGBB specifications on terminals that don't support truecolor escape sequences (but support the 256 colour palette):
$ zmodload zsh/nearcolor
$ print -rP '%F{#010101}%K{250}' | sed -n l
\033[38;5;16m\033[48;5;250m$
| ANSI escape sequences in PS1 cause incorrect length calculation [duplicate] |
1,463,773,129,000 |
I am learning ANSI escape codes with xterm, and according to this there is the following escape code:
CSI Ps SP A - Shift right Ps columns(s) (default = 1) (SR), ECMA-48.
Could anyone explain what is SP? Is it SPACE? I am asking as I tried it with space with the following command:
printf "\033[20 A"
But it didn't show any changes in the Ubuntu terminal.
|
Yes, it is space. If you look at ECMA-48 standard, page 68, point 8.3.35, you have:
SR: CSI Pn 02/00 04/01
It uses the old ASCII (and other encodings) notation, but 02/00 is 0x20, so the equivalent of SP if you are using ASCII encoding, and 04/01 is 0x41 (A if you are using ASCII encoding).
I think that function works only in some modes, but I'm not sure, and nothing in the standard or on the xterm site (that you linked) gives us hints.
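To convince yourself that SP really is the space byte (0x20), dump the sequence with od (a sketch):

```shell
# CSI Ps SP A with Ps=20 is the six bytes ESC [ 2 0 SP A.
printf '\033[20 A' | od -An -tx1
# prints (hex): 1b 5b 32 30 20 41
```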
| What does SP mean in `CSI Ps SP A` xterm escape command? |
1,463,773,129,000 |
I want to do something like this.
$ touch a\ b
$ cmd=cat\ a\ b
$ echo $cmd
cat a b
$cmd
cat: a: No such file or directory
cat: b: No such file or directory
There are problems with the spaces in the file a b. So I tried this.
$ cmd=cat\ a\\\ b
$ echo $cmd
cat a\ b
$ $cmd
cat: a\': No such file or directory
cat: b: No such file or directory
Which also fails. How do I make it so that when I run $cmd I cat a b?
|
set -- cat "a b"
"$@"
This would run cat with the single argument a b. The expansion of "$@" (including the double quotes!) would be a list of individually quoted positional parameters. The positional parameters are set with set.
It's uncommon to include the command name in the list itself though. It would be more common to see
set -- "a b"
cat "$@"
or, if you want to keep the actual command name variable,
cmd=cat
set -- "a b"
"$cmd" "$@"
If you, in-between setting the filename operands and running the command, want to add more filenames or options to the command, simply modify $@:
cmd=cat
set -- "a b"
# later (adding more files to the end):
set -- "$@" "more files" "some glob here-"[0-9]
# later (adding an option at the front):
set -- -v "$@"
"$cmd" "$@"
Note that it becomes difficult to add to the front of $@ if the command name is already there. In that case you would have to temporary shift the command name off of $@ into a separate variable, add the thing you want to add, and then add the command name again:
cmd=$1
shift
set -- "$cmd" -v "$@"
We usually want to use set -- rather than just set to set the positional parameters. The -- stops the parser of command line options from detecting command line options past that point in the command line argument list. Note that without --, it would be impossible to add the string -v at the front of $@ with set as set -v does something quite different.
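As a quick runnable check of this pattern (throwaway example values):

```shell
# Two positional parameters, one containing spaces.
set -- "a b" "c d e"
printf 'argc=%s\n' "$#"      # argc=2
for arg; do
    printf '<%s>\n' "$arg"   # <a b> then <c d e>
done
```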
Related:
How can we run a command stored in a variable?
| Escaping spaces in dash [duplicate] |
1,463,773,129,000 |
I'm writing a wrapper around ack to search for code locally with some additional lines of context piped to a pager.
Here's the wrapper script ackc. Between the different examples, I'll be varying what gets passed to ack as the --pager.
#!/bin/sh
ack -C 20 -i \
--pager=most \
--heading \
--break \
--color \
--show-types \
"$@"
With less (without the -R) as the pager, almost all of the escape sequences are rendered using caret notation (I'm not sure if that's the right name). ^[ is the exception: it is rendered as ESC with inverted background colors (colors not reproduced here).
Here's a sample of the output (produced by ackc with --pager=less and environment variables such as LESS, LESSPIPE etc cleared)
ESC[1;32m.local/lib/python2.7/site-packages/markupsafe/_speedups.cESC[0m
...
ESC[1;33m19ESC[0m:#define PY_SSIZE_T_MAX ESC[30;43mINTESC[0m_MAXESC[0mESC[K
ESC[1;33m20ESC[0m:#define PY_SSIZE_T_MIN ESC[30;43mINTESC[0m_MINESC[0mESC[K
The important escape sequence here is the ^[[K sequence at the end of each line containing a highlighted item. It is handled appropriately by less -R.
.local/lib/python2.7/site-packages/markupsafe/_speedups.c
...
19:#define PY_SSIZE_T_MAX INT_MAX
20:#define PY_SSIZE_T_MIN INT_MIN
most, however, does not seem to handle it very well.
.local/lib/python2.7/site-packages/markupsafe/_speedups.c
1-/**
...
19:#define PY_SSIZE_T_MAX INT_MAX^[[K
20:#define PY_SSIZE_T_MIN INT_MIN^[[K
It passes through the ^[[K sequence as-is.
This sequence is CSI (n) K -- EL -- Erase in Line. When given no argument, it erases to the end of the line. Presumably this is needed to clear stray bits of background color if the matched term appears at the end of the line.
Is there a reason why most doesn't understand this sequence? Can I configure it to process it correctly?
|
most's behavior is hard-coded. The source-code has several chunks like this for parsing after an escape character is received:
if ((ch == 033) && (Most_V_Opt == 0))
{
while ((ch == 033)
&& (0 == most_parse_color_escape (&b, e, NULL))
&& (b < e))
ch = *b++;
}
Basically it says if it finds an escape character (033) and the -V option isn't set, then look for ANSI color escape sequences.
All of the clearing operations begin with an escape character as well, so most will not do what's asked.
By the way, I see that Davis made a change a couple of days ago as a workaround. Ultimately that will be in a packaged version...
Author: John E. Davis 2018-07-11 06:26:02
Committer: John E. Davis 2018-07-11 06:26:02
Parent: 97befd7b984520e80475bb1cb857b35650755a15 (pre5.1-20: Added support for Home/End keys)
Branches: master, remotes/origin/master
Follows:
Precedes:
pre5.1-21: Added a work-around for programs that try colorize the output using the clear-to-end-of-line escape sequence (ESC[K) without regard for the value of isatty(fileno(sdout)).
+21. src/line.c: Added a work-around for programs that try colorize the
+ output using the clear-to-end-of-line escape sequence (ESC[K)
+ without regard for the value of isatty(fileno(sdout)). Most will
+ ignore ESC[K unless invoked with -v.
| ANSI escape sequence ^[[K processed by less -R but not most |
1,463,773,129,000 |
How can I create an ssh session with ssh foo@bar that has no color formatting? The only requirement is that the setting must be declared on the ssh command line; it can't be set in a config file.
As an aside, the reason is that I want to capture the ssh shell output in a Scala program (see the git repos for more detail if you are interested). So the color codes, such as [03m (and maybe others), should be removed. I can't expect the remote host to have any setting that disables color.
UPDATE
about JSch SSH library, there is some setting to explain:
setPtyType("dumb"): the dumb parameter sets up the session with a dumb terminal emulation (as opposed to linux, vt100, etc.). A dumb terminal emulation produces no color codes in the result. echo $TERM prints dumb after setting this property.
setPty(bool enable): the default is false, but if I explicitly set setPty(false) the result breaks. Unfortunately I haven't found an API to read the pty values at runtime, and the library lacks documentation even though it is the de-facto standard SSH library for Java.
Why do I want to use ssh inside an ssh session?
This feature is needed because some key files may only exist on the remote host. So I need to log in to the remote host first and then log in to another host from there.
Logging in to a terminal and using ssh to connect to another host is a normal operation.
What does the Scala library output look like when it contains color codes?
steps:
use JSch create a ssh session connection.
use ls
connect to new shell
use cd /var on the new shell
use ls on the new shell.
//use `JSch` create a ssh session connection.
val shell = new Shell(Auth("hostname", "username", 22, "pwd"))
//use `ls`
shell.exc("ls")
//connect to new shell
val newShell = shell.newShell(Auth("xxx.xxx.xxx.xxx", "root", 22, "pwd")).right.get
//use `cd /var` on the new shell
newShell.exc("cd /var")
//use `ls` on the new shell.
newShell.exc("ls")
newShell.exit()
shell.disconnect()
The command shell.exc("ls") outputs perfectly because I pass the dumb property to JSch.setPtyType(): no color codes are printed.
The problem with the color codes occurs at newShell.exc("ls"), which prints:
[0m[01;34mbackups[0m [01;34mcache[0m [01;34mlib[0m [01;34mlocal[0m [01;36mlock[0m [01;34mlog[0m [01;34mmail[0m [01;34mopt[0m [01;36mrun[0m [01;34mspool[0m [30;42mtmp[0m
Obviously, it contains color codes. val newShell is created by a bare ssh -t -o StrictHostKeyChecking=no foo@bar command, so it differs from the JSch path.
Currently, I just want to get rid of the color codes and keep a response that contains only plain text:
backups cache lib local lock log mail opt run spool tmp
I hope this makes the question clear. Thanks again.
UPDATE again
Sorry for my two mistakes.
TERM=dumb isn't needed at all, because JSch already sets dumb and the inner ssh picks it up.
setPty(true) (or the -T parameter) does have an effect; my earlier test failed because of another problem: the MAGIC_SPLIT constants should start/end with \n rather than \r\n in this mode.
What changes should I make?
Use setPty(true) for the whole message protocol (both JSch and ssh with the -T parameter) rather than dumb, because it gives a cleaner response.
What problem is left?
Some remote hosts respond with the error message stdin: is not a tty after a successful login; that matters because echo $? will then fail. But that is already off-topic.
About this topic
Back to the question (how to use an ssh session with no color formatting?): the answer is to use the -T parameter for ssh.
And finally, thanks a lot for @sourcejedi's patience.
|
You just do it.
Local programs whose output is piped to another program, are expected to detect that they are not connected to any terminal, so they can't use any color codes (which vary between terminals, hence the TERM environment variable).
And when the local program ssh is piped to another program, and you do not pass the options -tt, it suppresses allocation of a "pseudo-terminal" and uses pipes instead. See also man ssh.
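A minimal sketch of that detection, using the shell's own [ -t 1 ] test for "is stdout a terminal" (the shell-level counterpart of the isatty(fileno(stdout)) check that colorizing programs perform):

```shell
# Programs typically decide whether to emit color codes like this:
describe_stdout() {
  if [ -t 1 ]; then
    echo "terminal: emit color codes"
  else
    echo "pipe: plain text only"
  fi
}

describe_stdout | cat   # stdout is a pipe here: "pipe: plain text only"
```

The same function run directly at an interactive prompt would report a terminal; piping it through cat is enough to flip the answer, which is exactly why captured ssh output normally arrives colorless.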
If your code was allocating a "pseudo-terminal" instead of using a pipe to capture the output, you would notice that fact. In most contexts the code required is more obscure and longer than if you used a pipe; most times you don't need (or, as you say, want) the extra features of a PTY.
EXCEPT I think your question is wrong (in the UPDATE part)
Suppose you're actually not running ssh, and instead running sshpass -p '${auth.password}' ssh...
sshpass, as per its man page, is running ssh "in a dedicated TTY, fooling it into thinking it is getting the password from an interactive user" (yep, it's another PTY).
In that case, you would need to disable the normal terminal behaviour again for the inner SSH connection. I.e. using ssh -T (not ssh -t as in one of the code commits!).
I think prefixing with TERM=dumb does also achieve a very similar effect. It's a bit of a work-around. ssh -T avoids the need to mess around with TERM. But I tested TERM=dumb sshpass sh -c 'echo $TERM' on my computer, it seems to pass through OK, and maybe it feels simpler to you.
Next question: why is your testing telling you this is necessary, when your code already sets TERM=dumb before you launch the (inner) ssh command? You'd expect the ssh to have inherited TERM=dumb already. Look at the code for Jsch. I think your setPtyType("dumb") won't have any effect, because you don't have any call to setPty(true).
My understanding is contradicted by your assertion that setPty(false) "breaks", but you don't say how.
| How to use ssh session with no color formatting |
1,463,773,129,000 |
I use the command find . -maxdepth 1 -not -type d
which generates output like ./filename.1.out
I pipe the find command output to awk. The goal is to split on either the literal ./ or .. I have it working using:
find . -maxdepth 1 -not -type d | gawk 'BEGIN { FS = "(\\./)|(\\.)" } ; { print NF }'
In fact it works if I drop the first backslash in the first set of paren. Ex:
find . -maxdepth 1 -not -type d | gawk 'BEGIN { FS = "(\./)|(\\.)" } ; { print NF }'
What I don't understand - and my question is why does it not work if I use:
find . -maxdepth 1 -not -type d | gawk 'BEGIN { FS = "(\./)|(\.)" } ; { print NF }'
By "not work" I mean NF returns with a number as if the second paren was a regex . character (to match any type of character). Maybe I'm answering my own question... but as I look at the commands/behavior it would appear that the initial backslash is being ignored. In fact, there was a warning escape sequence message saying that \. was being treated as plain '.'. But I didn't really understand what it was doing until I began printing NF.
And indeed... the awk doc for escape sequences (https://www.gnu.org/software/gawk/manual/html_node/Escape-Sequences.html#Escape-Sequences) say:
The backslash character itself is another character that cannot be included normally; you must write \\ to put one backslash in the string or regexp.
So if I wanted to write a regex to match a dollar sign then I would need FS="\\$"?
The post was originally to ask why it was happening. Then I believe I may have pieced things together. If I am wrong then please set me straight.
|
The FS value is scanned twice: the first time as a string value and the second time as an ERE (see Lexical Conventions).
And also, POSIX did not specify the behavior of \c when c is not one of ", /, \ddd with d is one of octal digits, \, a, b, f, n, r, t, v. So you don't know whether string \c will be passed as \c or c to ERE.
gawk, nawk, and Brian Kernighan's own version give you c, while mawk gives you \c:
$ for AWK in gawk mawk nawk bk-awk; do
printf '<%s>\n' "$AWK"
echo | "$AWK" -F '\.' '{print FS}'
done
<gawk>
gawk: warning: escape sequence `\.' treated as plain `.'
.
<mawk>
\.
<nawk>
.
<bk-awk>
.
Because \\ is always recognized as \, you are safe with \\c:
$ for AWK in gawk mawk nawk bk-awk; do
printf '<%s>\n' "$AWK"; echo | "$AWK" -F '\\.' '{print FS}'
done
<gawk>
\.
<mawk>
\.
<nawk>
\.
<bk-awk>
\.
The string value of \\c will be \c, so using it as an ERE gives you the desired result.
| awk FS with back slashes |
1,463,773,129,000 |
I'm trying to put a bunch of images together into a pdf. I ran gm convert *.jpg out.pdf and it worked, but the images were not in the right order.
I found that ls -v orders them correctly so then I tried gm convert `ls -v *.jpg` out.pdf, but that failed because the files have spaces in the names.
So I checked the ls options and tried gm convert `ls -Qv *.jpg` out.pdf and gm convert `ls -bv *.jpg` out.pdf but both failed because, as far as I can tell, the file names got escaped twice (using \" in the first case, and \\ before space in the 2nd case).
Why is this happening, and how can I get the file names escaped correctly, i.e. only once? I'm using zsh if that's relevant.
|
You can't use escaping this way. You can use quotes or backslashes for something that understands them like the shell parsing.
There's nothing to escape the word splitting done upon variable expansion or command substitution (which is different from the shell parsing (tokenising) of a command line).
What you can do is either change the internal field delimiter $IFS to split on newline characters only (but note that except with zsh, you also need to disable filename generation (though not for the expansion of *.jpg obviously) which is the other thing done upon unquoted command substitution, unless you can guarantee that no filename contains globbing characters):
set -f # disable filename generation
IFS='
' # set IFS to newline character
IFS=$'\n' # alternative for ksh/zsh/bash
gm convert $(set +f; ls -v ./*.jpg) out.pdf # you also need ./ in case file
# names start with -
(that assumes none of the file names contain newline characters)
set -f in zsh (unless in sh/ksh emulation) disables reading rc files (think zsh -f, csh -f), which has no effect once the shell is already started so is harmless.
With zsh, you can shorten that to:
gm convert ${(f)"$(ls -v ./*.jpg)"} out.pdf
(f) short for (ps:\n:) is to split on newline characters.
Alternatively, you could use ls quoting to build a shell command line (shell code in shell syntax if you like) to pass to eval which would let the shell evaluate it and run it:
With ksh93, zsh or bash:
eval "images=($(ls --quoting-style=shell -v ./*.jpg))"
gm convert "${images[@]}" out.pdf
But with zsh, you could simply do:
gm convert ./*.jpg(n) out.pdf
(n) is a globbing qualifier that tells the shell to sort the generated file names numerically.
| How to prevent double escaping? |
1,463,773,129,000 |
I'm using urxvt version 9.22 under Ubuntu 20.04.1.
Even though I use -title MYTITLE on the urxvt command line, and even though I set the following Xresource ...
URxvt.insecure: false
... the title in the titlebar still gets overwritten if any command that is run within urxvt sends out the appropriate title-change escape sequence.
Is there any way in urxvt to specify that -title specified on the command line will never be able to be overwritten via any escape sequence?
Thank you in advance for any thoughts and ideas.
|
The resource insecure and the corresponding command line parameter -insecure only affect the sequences which produce output, like the unofficial DSR 7 extension:
case 7: /* unofficial extension */
if (option (Opt_insecure))
tt_printf ("%-.250s\012", rs[Rs_display_name]);
break;
From the source code:
case XTerm_title:
set_title (str);
break;
and the definition of set_title:
void
rxvt_term::set_title (const char *str)
{
set_mbstring_property (XA_WM_NAME, str);
#if ENABLE_EWMH
set_utf8_property (xa[XA_NET_WM_NAME], str);
#endif
}
it can be seen that the option(Opt_insecure) value isn't checked at all by URxvt when setting the title (and icon, etc).
The answer seems to be that the behavior described in this question could only be achieved by modifying the source code of URxvt to take the option(Opt_insecure) into account when setting the title.
| With urxvt: unable to prevent title in title bar from being overwritten |
1,463,773,129,000 |
In the documentation for the Zsh Line Editor, there is a section that says:
For either in-string or out-string, the following escape sequences are recognised:
\a          bell character
\b          backspace
\e, \E      escape
\f          form feed
\n          linefeed (newline)
\r          carriage return
\t          horizontal tab
\v          vertical tab
\NNN        character code in octal
\xNN        character code in hexadecimal
\uNNNN      unicode character code in hexadecimal
\UNNNNNNNN  unicode character code in hexadecimal
\M[-]X      character with meta bit set
\C[-]X      control character
^X          control character
In all other cases, ‘\’ escapes the following character. Delete is written as ‘^?’. Note that ‘\M^?’ and ‘^\M?’ are not the same...
How should those last two sequences be interpreted? My guess is:
\M^? - delete with the meta bit set?
^\M? - control + question mark with the meta bit set
Is this correct?
|
^? is the byte 127 = 0x7f, which is commonly sent by the Backspace key (unless it's set to send ^H and the Delete key is set to ^?).
\M^? or \M-^? is the same but with the upper bit set, i.e. 255 = 0xff. On modern systems, non-ASCII characters are encoded in UTF-8. On some ancient systems, or on modern systems with some backward compatibility settings designed for ASCII-only input, typing an ASCII character while holding Meta sends the corresponding byte with the upper bit set. If your terminal does that and it sends ^? for Ctrl+?, you should be able to input this byte with Meta+Ctrl+?.
% bindkey '^\M?' wibble
% bindkey | grep wibble
"\M-^_" wibble
^\M? is parsed as controlifying \M?, which is metaifying ?, i.e. setting the upper bit (bit 7) in ?. ? is 0x3f = 0b00111111 so \M? is the byte 0xbf = 0b10111111. Controlifying apparently sets bits 5 and 6 to 0 for every character except ?, for which it changes the value to 0x7f. Thus ^\M? ends up being 0x9f = 0b10011111, which is normally written \M^_ (set the upper bit of ^_). That's not useful behavior, it's just an edge case in the implementation.
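The byte values in that walkthrough can be double-checked with shell arithmetic (this is only a verification of the numbers, not how zsh implements it):

```shell
# Metafying sets bit 7: '?' is 0x3f, so \M? is 0xbf.
printf '%02x\n' $(( 0x3f | 0x80 ))            # bf

# Controlifying clears bits 5 and 6 (mask 0x60): 0xbf becomes 0x9f.
printf '%02x\n' $(( (0x3f | 0x80) & ~0x60 ))  # 9f
```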
| What do these strings, '\M^?' and '^\M?', represent in zsh/ZLE? |
1,463,773,129,000 |
Here is a command that works perfectly for me on the command-line:
find . -type f -exec grep -Hin --include=*.h --include=*.c 'somestring' {} \;
When I run the above command substituting the search path . with any path, the command still shows me only the list of files with .c or .h extension.
Now, I want to write a simple bash script with the same command as the value of a variable, just so that I could execute the script with minor modifications to do a similar search, rather than having to type the command all over again. But that is where I run into the escape rules nightmare (or lack of proper understanding of it!).
I wrote a script as shown below:
#!/bin/bash
path="/home/vinod"
string="somestring"
command="find ${path} -type f -exec grep -Hin --include=*.h --include=*.c '${string}' {} \;"
echo $command
$command
When I run the above script, I get the command echoed two times instead of once, as shown below
find . -type f -exec grep -Hin --include=*.h --include=*.c 'somestring' {} \;
find . -type f -exec grep -Hin --include=*.h --include=*.c 'somestring' {} \;
and the following run-time error:
find: missing argument to -exec
As you can see from the echo, the command is exactly the same as when I ran it on the command-line, for which I got the expected result.
Any thoughts on what could be wrong with my script?
TIA
|
Don't use a variable to store a command . Instead, use variables to store data and use functions to store (define) commands.
We can create a command that takes two parameters: a starting search path and a search string.
You could trivially modify it to take just a search pattern if your starting search path was also fixed. In fact, let's turn the order of the parameters around and say that if the search path is omitted it defaults to your $HOME directory:
#!/bin/sh
# Search in *.c and *.h files for a matching pattern
#
pattern=$1
path=${2:-$HOME}
find "${path:-.}" -type f \( -name '*.c' -o -name '*.h' \) -exec grep -Hin -- "$pattern" {} +
Save this as the file chfind, make it executable (chmod a+rx chfind), and put it somewhere in your $PATH. You can now use it just like any other utility:
chfind 'main'
chfind 'main' /some/other/tree/of/files
Because we didn't use grep -F the search string is actually a Regular Expression rather than a plain string, so searching for a declaration such as FILE *fp will not work.
Finally, if you want path names to be relative to your search path, you could change directory to the $path and then search from there:
cd "$path" &&
find -- * -type f \( -name '*.c' -o -name '*.h' \) -exec grep -Hin -- "$pattern" {} +
It's possibly worth explaining how this could be achieved with a function rather than in-line in the script. I've addressed your underlying requirement ("I want to write a simple bash script […] so that I could execute the script […] to do a similar search, rather than having to type the command all over again"). But if you want to use a function, it's almost exactly the same:
#!/bin/bash
# Search in *.c and *.h files for a matching pattern
#
chFind() {
local pattern=$1 path=$2
find "${path:-.}" -type f \( -name '*.c' -o -name '*.h' \) -exec grep -Hin -- "$pattern" {} +
}
chFind "$1" "${2:-$HOME}"
| How can I make this command work in bash script? [duplicate] |
1,463,773,129,000 |
I am trying to remove duplicates based on multiple columns from a pipe delimited file using this How to remove duplicates based on multiple dynamic columns
But I found there are pipes within the values in double quotes like below
3|XX|""|2022-04-05T21:39:22.899Z|2022-04-05T21:37:59Z|X7
3|XX|"2025035|6|15|0|0|15|39"|2022-04-05T21:39:22.899Z|2022-04-05T21:37:59Z|X7
These two rows are duplicates when I check on the last column, position 6, and on position 2, but due to the pipes in position 3 it isn't working. How can I escape the pipes inside double quotes in the code below?
$4='2,6'
awk -v c="$4" -F'|' 'BEGIN{split(c,k,",")} {key=""; for (i in k) key=key FS $(k[i])} !seen[key]++'
TIA
|
With GNU awk for FPAT:
$ awk -v c='2,6' -v FPAT='([^|]*)|("[^"]*")' 'BEGIN{split(c,k,",")} {key=""; for (i in k) key=key RS $(k[i])} !seen[key]++' file
3|XX|""|2022-04-05T21:39:22.899Z|2022-04-05T21:37:59Z|X7
If you could have nested double quotes like "foo""bar" then change the FPAT assignment to FPAT='[^|]*|("([^"]|"")*")'
See whats-the-most-robust-way-to-efficiently-parse-csv-using-awk for more information.
| How to remove duplicates from pipe delimited file using awk with Pipe in Values? |
1,463,773,129,000 |
On wiki I found the following code for bright colors for SGR (select graphic rendition) function
FG BG
90 100 Bright Black
91 101 Bright Red
92 102 Bright Green
93 103 Bright Yellow
94 104 Bright Blue
95 105 Bright Magenta
96 106 Bright Cyan
97 107 Bright White
On wiki it is said "Later terminals added the ability to directly specify the "bright" colors with 90–97 and 100–107." However, I can't find these codes in ECMA-48/5th. There are only parameter values from 0 to 65. Could anyone explain these codes on wiki and how to make color bright according to ISO/IEC 6429:1992?
|
ECMA-48 doesn't define "bright colors". That came about due to PC-displays. It's an FAQ.
ECMA-48 defines colors with codes 0-7, both foreground and background. Text (ECMA-48) can be displayed with bold. PC-displays would not show bold text (equating bold to bright is reversing cause/effect), but used brightness for that feature. Since the normal (non-bold) yellow came out as brown, etc., in xterm and other terminals (such as Linux console), colors 8-15 were a desirable feature (in xterm, boldColors resource).
boldColors (class ColorMode)
Specifies whether to combine bold attribute with colors like
the IBM PC, i.e., map colors 0 through 7 to colors 8 through
15. These normally are the brighter versions of the first 8
colors, hence bold. The default is "true"
Some applications referred to that as bright colors (which is unnecessarily restrictive). You'd have to go back a while to see which term came first. In xterm, it was initially referred to just as "16-colors". In the aixterm manpage, neither bold or bright is used:
30..37 foreground colors—Xh, H
40..47 background colors—Xh, H
90..97 foreground colors—Xh, H
100..107 background colors—Xh, H
(the Xh and H refer to types of terminals).
Linux console, by the way, has "recently" (in the past 2-3 years) added the aixterm codes 90-107 for "bright" colors.
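For what it's worth, the two styles can be compared by emitting the raw sequences yourself (bold plus a standard color per ECMA-48 versus the aixterm-style 90–107 codes); whether the results look the same depends on the terminal:

```shell
# ECMA-48: bold attribute (SGR 1) combined with standard red (SGR 31).
printf '\033[1;31mbold red\033[0m\n'

# aixterm extension: "bright" red directly via SGR 91 (not in ECMA-48).
printf '\033[91mbright red\033[0m\n'
```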
| How to make color bright with SGR function according to ISO/IEC 6429:1992? |
1,463,773,129,000 |
I want to enclose Fortran comments with two escape commands (ESC+E and ESC+F).
This implies detecting the comment that begins with ! until end of line, and prefixing it with ESC+E and suffixing it with ESC+F.
First attempt
$ echo "test line ! Enclose this in ESC commands" | sed 's/\(!.*\)/\033E\1\033F/'
test line 033E! Enclose this in ESC commands033F
The ESC character itself is not generated, instead I get 033.
Second attempt
$ echo "test line ! Enclose this in ESC commands" | sed $'s/\(!.*\)/\033E\1\033F/'
sh: Syntax error: Bad escape sequence
System details
Operating system: FreeBSD 12.
Shell: sh.
Sed: sed.
|
Failed attempts
There are two levels to understand here. First the shell processes the input,
which is then passed onto Sed.
sed 's/\(!.*\)/\033E\1\033F/'
Single quotes preserve the literal meaning of all the characters
inside, so in this first attempt Sed gets all the characters
between the quotes.
However, it fails because Sed does not understand \033 as the ASCII
octal 033 (ESC). You may have assumed that,
but the Sed manual says nothing about it.
sed $'s/\(!.*\)/\033E\1\033F/'
The $'...' construct is ANSI C quoting. This is a good idea
because the shell then transforms $'\033' into the ESC character.
However, FreeBSD's Sh manual
contains a list of valid backslash sequences and then clearly says
Any other string starting with a backslash is an error.
Does it list \( or \)? No, thus the error message. And \1, that is
supposed to go to Sed, would also be interpreted as the ASCII octal 001
(SOH), which is definitely not what you want.
Solutions
Note that for options 1 and 2 below, \033 can also be written simply as \e.
Only ANSI quote the \033 escape sequences, leaving the rest inside normal
quoting:
sed 's/\(!.*\)/'$'\033''E\1'$'\033''F/'
^^^^^^^^^^^^ ^^^^^ ^^^^
^^^^^^^ ^^^^^^^
No capture group is needed to capture the whole matched string. That is & by
default.
sed $'s/!.*/\033E&\033F/'
(POSIX compliant) Use Printf to generate ESC. Choose one of
esc=$(printf '\033'); sed "s/!.*/${esc}E&${esc}F/"
sed "s/!.*/$(printf '\033')E&$(printf '\033')F/"
(POSIX compliant) Awk.
awk '{sub(/!.*/, "\033E&\033F"); print}'
| How to insert ESC control characters in a file in FreeBSD? |
1,463,773,129,000 |
I'm working on code for a serial terminal and I'm implementing the ANSI escape codes for moving around the cursor, clearing the screen, etc, and I am curious how to know which to use since there doesn't seem to be a clear stopping point for the codes.
I'm using https://www2.ccs.neu.edu/research/gpc/VonaUtils/vona/terminal/vtansi.htm as a reference
For example, if I receive the code ESC[, I start reading characters, but if I get the value 75='K', that could be ESC[K = Erase End of Line, or a 75 as a count for a code like ESC[{COUNT=75}C for move cursor 75 columns right.
What if I was receiving the code to erase the line followed by a printed A? As far as I know the code for that and the cursor 75 cols right would receive the exact same sequence.
I'm probably missing something obvious but could someone please give me a hint? Thank you
|
For "ANSI" (actually ECMA-48), the characters which begin the control sequence, determine the set of final characters. It's documented near the beginning of ECMA-48 (section 5.4 is particularly pertinent, though you may need an ASCII chart to understand its terminology).
The parameter 75 in a control sequence would be the characters 75, rather than a character whose value happened to be 75. There's no confusion between the two.
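As a rough illustration of that distinction, here is a toy parser for the bytes following ESC[ (simplified: real ECMA-48 also allows other parameter bytes in 0x3A–0x3F and intermediate bytes in 0x20–0x2F before the final byte):

```shell
# Toy parser for the body of a CSI sequence (the bytes after "ESC[").
# Digits and ';' are parameter bytes; the first byte outside that
# range is the final byte that selects the function.
parse_csi() {
  params='' final='' rest=$1
  while [ -n "$rest" ]; do
    c=${rest%"${rest#?}"}   # first character of $rest
    rest=${rest#?}
    case $c in
      [0-9\;]) params="$params$c" ;;
      *)       final=$c; break ;;
    esac
  done
  echo "params=$params final=$final"
}

parse_csi '75C'   # params=75 final=C  (cursor right 75 columns)
parse_csi 'K'     # params= final=K   (erase to end of line)
```

So the digits 75 can never be mistaken for the final byte K: they occupy disjoint character ranges, and the sequence only ends when a final-byte character arrives.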
The link you cited was for a document written by someone who was unfamiliar with the standard. It's mentioned in the ncurses FAQ How do I get color with VT100?.
| How to know the end of an ANSI control code? |
1,463,773,129,000 |
The following command
tmux new -A -s $(date +%Y%m%d%H%M%S)
works and starts tmux with a session, named after current datetime (as expected).
But if I put the same in ssh config
RemoteCommand tmux new -A -s $(date +%Y%m%d%H%M%S)
it says
percent_expand: unknown key %Y
why and how to fix?
Apparently, ssh tries to expand percent sign. How to disable/escape this expansion?
|
Use %% where you want a literal %. This is extremely common: in most syntaxes that have a single escape character, doubling that character yields the literal character. For example \\ to match a literal backslash in a regular expression, \\ to get a literal backslash character in a shell unquoted or double-quoted word, %% to get a literal percent sign in printf output, etc.
RemoteCommand tmux new -A -s $(date +%Y%m%d%H%M%S)
The summary table you found didn't bother to list %%, but the OpenSSH manual does.
If you wanted a literal percent sign in the output of date (for whatever reason) you'd double the % for date: date +%Y%m%d%%%H%M%S puts a percent sign between the date and the time. In an SSH remote command, you'd need to double once for date and once for SSH, so 4 % would stand for one: RemoteCommand tmux new -A -s $(date +%Y%m%d%%%%%H%M%S).
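The doubling rule is easy to check with date and printf themselves:

```shell
date +%%            # prints a literal %
printf '%d%%\n' 50  # prints 50%
```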
| RemoteCommand with percent signs doesn't work |
1,463,773,129,000 |
I am trying to write a script which has to execute a command containing single quotes in it. This is the command I am trying to execute in the script:
srt-live-transmit udp://224.0.0.0:1234 'srt://@1111?passphrase=thisisatest&latency=500' -v
And this is my BASH shell script's command:
srt-live-transmit $MC srt://${SRT_IP}:${SRT_PORT}?${LATENCY}&${PASS} -v
As you can see the SRT path is in single quotes in order for the command to accept my both parameters: passphrase and latency. I have tried to escape the single quote with \', '\'', '"'"', $\' but the command is either not executed or the SRT path is without single quotes when I grep for the process in the process list.
|
If I understand correctly, and your variables contain what I guess they contain (next time, please show us what the variables' values are and how you assign them), then all you need is quoting:
srt-live-transmit "$MC" "srt://${SRT_IP}:${SRT_PORT}?${LATENCY}&${PASS}" -v
You can't really escape single quotes in a single-quoted string, but you can just use double quotes, which allow variables to be expanded, instead.
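For example, with some made-up placeholder values for the variables:

```shell
SRT_IP=192.0.2.10 SRT_PORT=1111
LATENCY='latency=500' PASS='passphrase=thisisatest'

url="srt://${SRT_IP}:${SRT_PORT}?${LATENCY}&${PASS}"
printf '%s\n' "$url"
# srt://192.0.2.10:1111?latency=500&passphrase=thisisatest
```

In the unquoted version, the literal & on the command line is parsed as the shell's background operator, so everything after it runs as a separate command; inside double quotes, & (and ?) are ordinary characters, which is why quoting rather than escaping is the fix.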
| Escaping single quotes in a bash script command |
1,463,773,129,000 |
I have a command that sets an environment variable from a command like this:
BLACKLIST=$(python tools.py gen-blacklist)
Which results in a string that contains dots . and asterisks *, like this:
LISTEN,UNLISTEN,NOTIFY,SHOW,REFRESH,pg_notify,.*remove,.*delete,.*update,.*create,.*insert
I want to be able to escape all the dots and asterisks in the string, this is what I have tried:
TEMP=$(python tools.py gen-blacklist) && BLACKLIST=$(echo ${TEMP/.\*/\\.\\*}) && echo $BLACKLIST
but it only replaces the first occurrence.
LISTEN,UNLISTEN,NOTIFY,SHOW,REFRESH,pg_notify,\.\*remove,.*delete,.*update,.*create,.*insert
How do I escape all occurrences of * and .?
|
Smells like an xy problem, but regardless...
From parameter expansion, this is what the manual has to say on ${parameter/pattern/string}-style expansions -
If pattern begins with ‘/’, all matches of pattern are replaced with
string. Normally only the first match is replaced
Therefore ${TEMP/.\*/\\.\\*} needs to change to ${TEMP//.\*/\\.\\*} (notice the additional / immediately after TEMP).
This yields
LISTEN,UNLISTEN,NOTIFY,SHOW,REFRESH,pg_notify,\.\*remove,\.\*delete,\.\*update,\.\*create,\.\*insert
| Escape dots and asterisks in command line variable output string |
1,463,773,129,000 |
Is there any way to cat a file without interpreting the double backslash as an escape sequence?
In this example a tex file is created:
cat <<EOF > file.tex
\\documentclass[varwidth=true,border=5pt]{standalone}
\\usepackage[utf8]{inputenc}
\\usepackage{amsmath}
\\begin{document}
$1
\\end{document}
EOF
How can I write this so that the backslash doesn't have to be written twice each time, but $1 is still expanded with its normal value (which might contain backslashes too)?
|
No, you are out of luck. The manual states:
and \ must be used to quote the characters \, $, and `
There is a workaround, use several here-docs:
cat <<\EOF > file.tex
\documentclass[varwidth=true,border=5pt]{standalone}
\usepackage[utf8]{inputenc}
\usepackage{amsmath}
\begin{document}
EOF
cat <<EOF >> file.tex
$1
EOF
cat <<\EOF >> file.tex
\end{document}
EOF
Or better, once a variable contains a backlash, it is not changed on expansion:
doc1='\documentclass[varwidth=true,border=5pt]{standalone}
\usepackage[utf8]{inputenc}
\usepackage{amsmath}
\begin{document}
'
doc2="$1"
doc3='\end{document}
'
cat <<EOF > file.tex
$doc1
$doc2
$doc3
EOF
Which is a convoluted way of writing:
doc='\documentclass[varwidth=true,border=5pt]{standalone}
\usepackage[utf8]{inputenc}
\usepackage{amsmath}
\begin{document}
'"$1"'
\end{document}
'
printf '%s' "$doc" > file.tex
This also work with some other examples:
$ doc='\[\begin{bmatrix} t_{11} & t_{12} & t_{13} & t_{14} \\ t_{21} & t_{22} & t_{23} & t_{24} \\ t_{31} & t_{32} & t_{33} & t_{34} \end{bmatrix}\]'
$ printf '%s\n' "$doc"
\[\begin{bmatrix} t_{11} & t_{12} & t_{13} & t_{14} \\ t_{21} & t_{22} & t_{23} & t_{24} \\ t_{31} & t_{32} & t_{33} & t_{34} \end{bmatrix}\]
And also, just to show that variables are expanded only once:
$ cat <<EOF
$doc
EOF
\[\begin{bmatrix} t_{11} & t_{12} & t_{13} & t_{14} \\ t_{21} & t_{22} & t_{23} & t_{24} \\ t_{31} & t_{32} & t_{33} & t_{34} \end{bmatrix}\]
| Here-document without interpreting escape sequences, but with interpolation |
1,463,773,129,000 |
Bash uses GNU Readline. Readline provides a collection of keyboard shortcuts. However, there are some shortcuts that work on bash and that are not documented in Readline reference. Some examples are:
C-h - Same as Backspace
C-m - Same as Enter (CR I guess)
So why do these shortcuts work? I guess that these may have something to do with ASCII but I am not sure which component provides interpretation of these control sequences as the behavior I have indicated.
Is it the Readline library? Or is it bash itself? Is it my terminal emulator? Is it the kernel? Etc...
What component makes these control sequences behave this way?
Edit: My .inputrc file:
# To the extent possible under law, the author(s) have dedicated all
# copyright and related and neighboring rights to this software to the
# public domain worldwide. This software is distributed without any warranty.
# You should have received a copy of the CC0 Public Domain Dedication along
# with this software.
# If not, see <http://creativecommons.org/publicdomain/zero/1.0/>.
# base-files version 4.2-4
# ~/.inputrc: readline initialization file.
# The latest version as installed by the Cygwin Setup program can
# always be found at /etc/defaults/etc/skel/.inputrc
# Modifying /etc/skel/.inputrc directly will prevent
# setup from updating it.
# The copy in your home directory (~/.inputrc) is yours, please
# feel free to customise it to create a shell
# environment to your liking. If you feel a change
# would be benifitial to all, please feel free to send
# a patch to the cygwin mailing list.
# the following line is actually
# equivalent to "\C-?": delete-char
"\e[3~": delete-char
# VT
"\e[1~": beginning-of-line
"\e[4~": end-of-line
# kvt
"\e[H": beginning-of-line
"\e[F": end-of-line
# rxvt and konsole (i.e. the KDE-app...)
"\e[7~": beginning-of-line
"\e[8~": end-of-line
# VT220
"\eOH": beginning-of-line
"\eOF": end-of-line
# Allow 8-bit input/output
#set meta-flag on
#set convert-meta off
#set input-meta on
#set output-meta on
#$if Bash
# Don't ring bell on completion
#set bell-style none
# or, don't beep at me - show me
#set bell-style visible
# Filename completion/expansion
#set completion-ignore-case on
#set show-all-if-ambiguous on
# Expand homedir name
#set expand-tilde on
# Append "/" to all dirnames
#set mark-directories on
#set mark-symlinked-directories on
# Match all files
#set match-hidden-files on
# 'Magic Space'
# Insert a space character then performs
# a history expansion in the line
#Space: magic-space
#$endif
|
The bindings (whether they appear in the manual or not) appear when you type
bind -p
For instance (partial listing):
"\C-g": abort
"\C-x\C-g": abort
"\e\C-g": abort
"\C-j": accept-line
"\C-m": accept-line
# alias-expand-line (not bound)
# arrow-key-prefix (not bound)
# backward-byte (not bound)
"\C-b": backward-char
"\eOD": backward-char
"\e[D": backward-char
"\C-h": backward-delete-char
"\e[3;5~": backward-delete-char
"\C-?": backward-delete-char
"\C-x\C-?": backward-kill-line
"\e\C-h": backward-kill-word
"\e\C-?": backward-kill-word
"\eb": backward-word
"\e<": beginning-of-history
The manual documents the -p option:
The bind -p command displays Readline function names and bindings in a format that can put directly into an initialization file. See Bash Builtins.
The bindings (reading the source code) depend on the keymap. The ones I quoted are from the emacs keymap, which is initialized from a built-in table before scripts are applied. There is a corresponding file with tables for the vi keymap.
All of that is part of Readline (which is bundled with bash). When bash starts up, it defines the bindings using these tables. Depending on the other files it reads from /etc/inputrc, ~/.inputrc it may add to, modify or remove some of these built-in bindings.
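To check what a given control character does in your own shell, you can ask bash itself. bind needs line editing, so this sketch forces an interactive instance (assuming GNU bash; the job-control warnings go to stderr):

```shell
# Ask bash which Readline functions C-h and C-m are bound to in the
# current (emacs, by default) keymap.
bash -i -c 'bind -p' 2>/dev/null | grep -F -e '"\C-h"' -e '"\C-m"'
```

On a default emacs keymap this shows the backward-delete-char and accept-line bindings, matching the Backspace and Enter behavior described in the question.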
| Keyboard shortcuts C-h, C-m in bash [duplicate] |
1,463,773,129,000 |
I'd like to know what's the difference between these two commands:
echo ` echo `date` `
echo ` echo \`date\` `
I know that \ is used to escape characters, but I cannot understand it in this particular context. Why aren't we using
echo \` echo \`date\` \`
instead, if we are supposed to escape ` character?
|
You can use the other form of command substitution, $(cmd), which can be nested. Alternatively, you can capture the inner command's output into a variable and use it inside:
echo $(echo `date`)
echo $(echo $(date))
x=`date` echo `echo $x`
Without the escape quote \`, you will have
echo $(echo )date$( )
If you escape the outer backquotes as well, the shell treats them as literal characters and splits the line into these arguments:
echo \` echo \`date\` \`
argv[0]="echo", argv[1]="`", argv[2]="echo", argv[3]="`date`", argv[4]="`"
I leave the other examples to figure out for yourself.
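A small check that the two nesting styles really are equivalent:

```shell
# Both spellings nest one command substitution inside another; the
# results are identical, but $() needs no escaping.
a=$(echo $(date +%Y))
b=`echo \`date +%Y\``

[ "$a" = "$b" ] && echo "same result: $a"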
| Backquotes interpretation |
1,463,773,129,000 |
I have a program that executes some commands in a terminal (for example apt-get update) and stores the output.
How can I reinterpret the output, so I can remove all information that is not needed ?
For example, the raw output is (this is just a fraction of the output):
Reading package lists... 0%
Reading package lists... 0%
Reading package lists... 1%
Reading package lists... 6%
Reading package lists... 6%
Reading package lists... 26%
Reading package lists... 32%
Reading package lists... 32%
Reading package lists... 39%
Reading package lists... 39%
Reading package lists... 53%
Reading package lists... 65%
Reading package lists... 65%
Reading package lists... 68%
Reading package lists... 68%
Reading package lists... 83%
Reading package lists... 84%
Reading package lists... 84%
Reading package lists... 86%
Reading package lists... 86%
Reading package lists... 88%
Reading package lists... 88%
Reading package lists... 90%
Reading package lists... 90%
Reading package lists... 92%
Reading package lists... 92%
Reading package lists... 93%
Reading package lists... 93%
Reading package lists... 94%
Reading package lists... 94%
Reading package lists... 95%
Reading package lists... 95%
Reading package lists... 96%
Reading package lists... 96%
Reading package lists... 97%
Reading package lists... 97%
Reading package lists... 98%
Reading package lists... 98%
Reading package lists... 99%
Reading package lists... 99%
Reading package lists... 99%
Reading package lists... 99%
Reading package lists... Done
and it should be interpreted like when you execute the normal command in bash, being the final output:
Reading package lists... Done
|
Use col -b (on Linux it's part of the util-linux package; in base system elsewhere).
col filters out reverse (and half-reverse) line feeds so the output
is in the correct order, with only forward and half-forward line
feeds.
| Reinterpret terminal output |
1,415,183,595,000 |
I'm trying to create "/tmp/pwet/foo bar" with this:
DIR="/tmp/pwet/foo bar"
ARGS="-p ${DIR}"
mkdir $ARGS
For integration reasons, I have to keep the ARGS building in 2 steps: ARG = some_options + $DIR
This code create 2 dirs: /tmp/pwet/foo and ./bar. this is not what I want.
I've tried a lot of escaping things, without results. I run the script with bash. What's the good solution?
|
The problem is not with the contents of the variable (and this is why you can try with as many escape characters as you want), but with the fact that bash replaces $ARGS with the variable contents literally, so the space within it will be an actual space in the command.
Just use mkdir "$ARGS" instead (i.e., enclose the variable reference in "s)
UPDATE: As discussed in the comments, in order to have the -p and other options as not being part of the directory name, define 2 variables instead:
OPTS="-p -whatever" and
ARGS="/tmp/pwet/foo bar"
then later issue mkdir like this:
mkdir $OPTS "$ARGS"
Alternativelly, if you want full control over this using escape characters plus the possibility of having multiple directories, some with spaces, some without, options, all mixed together you can put the entire command in a variable and execute it with bash -c.
Check the example below as a starting point you can play with:
MKDIRCMD="mkdir -p \"A B\" \"C D\""
echo $MKDIRCMD # Notice the \ characters are NOT part of the string
bash -c "$MKDIRCMD"
| Shell string escape for command arguments [duplicate] |
1,415,183,595,000 |
I have a string that contains newline characters. I want to escape all newlines in that string by replacing all newline characters with a string of two characters: "\n". How can I do this in POSIX sh?
Here's the goal:
$ printf 'a\nb\nc\nd' | escape_newlines | od -a
0000000 a \ n b \ n c \ n d
141 134 156 142 134 156 143 134 156 144
0000012
How do I define escape_newlines?
Methods I have tried:
tr — Problem: not able to convert single character into multiple characters.
awk 'BEGIN{ORS="\\n"} {print}' — Problem: always inserts the two-character string "\n" at the end of the string even if the string does not end with a newline character. Example:
$ printf 'hi\n' | awk 'BEGIN{ORS="\\n"} {print}' | od -ab
0000000 h i \ n
150 151 134 156
0000004
$ printf 'hi' | awk 'BEGIN{ORS="\\n"} {print}' | od -ab
0000000 h i \ n
150 151 134 156
0000004
sed -e ':a' -e 'N' -e '$!ba' -e 's/\n/\\n/g' — Problem: if there is a newline character at the end of the string, it will not be converted. Example:
$ printf 'h\ni' | sed -e ':a' -e 'N' -e '$!ba' -e 's/\n/\\n/g' | od -ab
0000000 h \ n i
150 134 156 151
0000004
$ printf 'h\ni\n' | sed -e ':a' -e 'N' -e '$!ba' -e 's/\n/\\n/g' | od -ab
0000000 h \ n i nl
150 134 156 151 012
0000005
|
Try awk with:
string='x
y
'
new_string=$(
LC_ALL=C awk -- '
BEGIN {
gsub("\n", "\\n", ARGV[1])
printf "%s", ARGV[1]
}' "$string"
)
In any case, note that command substitution removes all trailing newline characters. OK here as the output of awk doesn't contain any, but that means we could also have used print instead of printf "%s".
With sed:
new_string=$(
printf '%s\n' "$string" |
LC_ALL=C sed '
:1
$ ! {
N
b1
}
s/\n/\\n/g'
)
Note that per POSIX, using N on the last line is meant to discard the pattern space and exit. GNU sed only does it when $POSIXLY_CORRECT is in the environment, but still exits upon N called on the last line (but still prints the pattern space).
We use LC_ALL=C to avoid potential issues with decoding the string in the charmap of the user's locale.
sed is a text utility, so it expects text input and produces text output. Something that is not empty and doesn't end in a newline character is not text. Here we add one newline to the input, and rely on command substitution to remove the one sed adds on output.
Also note that if the input has lines with a length in bytes larger than LINE_MAX (which can be as low as 1024), that makes it non-text as well, and the behaviour is unspecified. IIRC, the pattern space is not required to be able to hold more than 10 x LINE_MAX as well.
The awk approach will have some limits as well starting with ARG_MAX which on systems will be lower than 10 x LINE_MAX. That limit applies to the sed one as well with shells where printf is not builtin (such as ksh88 or pdksh-based ones).
There's no limit to the size of a shell variable, though if it's exported to the environment, it will run against the ARG_MAX limit for all the external commands that are executed.
To process a stream, you'd need something like:
... | (cat; echo) | LC_ALL=C awk '
{printf "%s", sep $0; sep = "\\n"}'
Though beware that output is not text so cannot be processed by a POSIX text utility.
| How to convert all newlines to "\n" in POSIX sh strings |
1,415,183,595,000 |
How can i get this feature to work?
Pressing Esc while taking inputs from the user will exit the script
read -r -p "Enter the filenames: " -a arr
if press Esc; then
read $round
mkdir $round
fi
for filenames in "${arr[@]}"; do
if [[ -e "${filenames}" ]]; then
echo "${filenames} file exists (no override)"
else
cp -n ~/Documents/library/normal.cpp "${filenames}"
fi
done
How can i detect Esc key in this script?
PS: Saw many resources
https://www.linuxquestions.org/questions/linux-newbie-8/bash-esc-key-in-a-case-statement-759927/
they use another variable like $key or read -n1 $key just one character input
but here what will i do if I've a string or an array?
|
Here's how I tweaked my script. Hope it helps.
This should work in bash:
#!/bin/bash
# Bind the Escape key to run "escape_function" when pressed.
bind_escape () { bind -x '"\e": escape_function' 2> /dev/null; }
# Unbind the Escape key.
unbind_escape () { bind -r "\e" 2> /dev/null; }
escape_function () {
unbind_escape
echo "escape key pressed"
# command/s to be executed when the Escape key is pressed
exit
}
bind_escape
# Use read -e for this to work.
read -e -r -p "Enter the filenames: " -a arr
unbind_escape
# Commands to be executed when Enter is pressed.
for filenames in "${arr[@]}"; do
if [[ -e "${filenames}" ]]; then
echo "${filenames} file exists (no override)"
else
cp -n ~/Documents/library/normal.cpp "${filenames}"
fi
done
| Bash - If press Esc when taking user string-input from "read" command, stop and then do other action |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.