| date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,328,417,304,000 |
I am trying to configure my bash ~/.inputrc to these settings
(Note: ←, → mean the left and right arrow keys)
Ctrl + ← - should jump back a word
Ctrl + → - should jump forward a word
Currently I have this in my ~/.inputrc and it doesn't work. Ctrl + arrow produces nothing.
"\eC-5C":forward-word
"\eC-5D":backward-word
I'm sure my escape sequence is wrong.
What are the correct escape sequences for the Ctrl + arrow combinations?
terminal: tmux inside gnome-terminal
|
Gnome-terminal (more properly VTE) imitates some version of xterm's escape sequences. How closely it does this depends on the version of VTE.
The relevant xterm documentation is in the PC-Style Function Keys section of XTerm Control Sequences.
What you are looking for is a string like \e[1;5D (for control left-arrow), where the 5 denotes the control modifier.
In ncurses, you can see these strings using infocmp -x, as the values for kUP5, kDN5, kLFT5 and kRIT5. For example:
kDN5=\E[1;5B,
kLFT5=\E[1;5D,
kRIT5=\E[1;5C,
kUP5=\E[1;5A,
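Putting this together, the bindings from the question could be written like this (a sketch; these are the xterm-style sequences, and since the asker runs tmux, tmux may also need set-window-option -g xterm-keys on to pass them through unchanged):

```
# ~/.inputrc - Ctrl+Right / Ctrl+Left word movement
"\e[1;5C": forward-word
"\e[1;5D": backward-word
```

Reload with bind -f ~/.inputrc or start a new shell to test.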
| What is the gnome-terminal ANSI escape sequence for "CTRL + arrow/s"? |
1,328,417,304,000 |
Any idea what could be causing this? Without grep, the only things displayed there are the ISO codes and empty space.
Software used
Command: ./trans --id --input /path/to/txt | grep ISO | grep [a-z]
root@box /test # alias grep
alias grep='grep --color=auto'
root@box /test # type grep
grep is aliased to `grep --color=auto'
Normal output:
|
The screenshot appears to show mangled ANSI colour codes, which control text rendering. Bold/bright text is produced with the sequence ␛[1m, which is usually interpreted by your terminal and not displayed on the screen directly: it just makes the next bit of text bright. The screenshot of the ungrepped output shows that colour difference between the labels and values on each line, so the original output is using them.
It appears that that sequence has been broken by your final grep, which matched the "m" in the code (since it's a letter [a-z]) and tried to highlight it in red itself. That left a partial escape sequence behind, which your terminal couldn't process.
The escape character ␛ is U+001B, which is the hexadecimal number that's rendered in the unknown-character boxes. What's displayed is the escape (the box), a [, a 1, a red m followed by the expected matching text "eng", and the same happening at the end with "22" (the numeric code for "normal colour & intensity").
The broken output is really:
␛[1␛[31mmeng␛[22m␛[22␛[31mm␛[22m
where ␛[31m makes text red and ␛[22m turns it back to white, both inserted by grep around the m characters into the original text. The original was just:
␛[1meng␛[22m
which is just bright "eng" and then a switch back to normal text.
You could check this by changing your final grep into grep --color=always and piping into hexdump, which will show all the unprintable characters and the ones interpreted by your terminal.
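A minimal reproduction of that check (a sketch assuming a GNU-style grep; od -c renders the escape character as 033, so you can see the codes grep inserts around each match):

```shell
# Send bold "eng" through a colourizing grep and dump the raw bytes:
# grep wraps every [a-z] match, including the "m" of the original
# bold code, in its own 033 [ ... m highlighting sequences.
printf '\033[1meng\033[22m\n' | grep --color=always '[a-z]' | od -c
```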
You can deal with this a few ways. One is to use grep without your alias for the moment:
./trans --id --input /path/to/txt | grep ISO | \grep '[a-z]'
The backslash temporarily skips the alias and runs grep directly.
Another is to strip out the ANSI codes from the original command, for which there are some suggestions in this question:
./trans --id --input /path/to/txt | perl -pe 's/\e\[[\d;]*m//g' | grep ISO | grep '[a-z]'
Yet another option is to add an extraneous pipe on the end:
./trans --id --input /path/to/txt | grep ISO | grep '[a-z]' | cat
Because the final grep's output isn't directly to the TTY, but to cat via a pipe, it won't insert the coloured highlighting.
Perhaps the best option is to get Translate Shell to stop using terminal control sequences in its own output in the first place when that output is not to a terminal. That would properly involve a bug report from you to its author(s) and a code fix to Translate Shell's ansi() function, but one can bodge it somewhat:
TERM=dumb ./trans --id --input /path/to/txt | grep ISO | grep '[a-z]'
This passes the dumb terminal type in Translate Shell's environment, which it at least recognizes as not having ECMA-48 colour support. (Sadly, Translate Shell does not use terminfo, and merely hardwires in its own code the terminal types that it understands and the control sequences that it uses.)
| Weird symbols on screen when using grep? |
1,328,417,304,000 |
I've got a script that scp's a file from remote host back to local. Sometimes the file names contain spaces. scp does not like spaces in its file names. For some reason my attempts at handling the spaces have not resulted in the correct scp path.
Code:
PATH=/var/root/Documents/MyFile OG-v1.2.3.pkg
scp $PATH [email protected]:/Users/Me/Desktop
Results in
Cannot find directory: var/root/Documents/MyFile
Cannot find directory: OG-v1.2.3.pkg
Enclosing PATH in quotes "$PATH" gives the same error.
Swapping the spaces for escaped spaces also is not working, although as far as I can tell it should:
ESC_PATH=${PATH/' '/'\ '}
although printing the escaped path shows that the edit worked:
echo $ESC_PATH
> /var/root/Documents/MyFile\ OG-v1.2.3.pkg
|
You should quote both the declaration and the usage:
path="/var/root/Documents/MyFile OG-v1.2.3.pkg"
scp "$path" [email protected]:/Users/Me/Desktop
If you do not quote the first, $path will contain just the first part. If you do not quote the second, scp will treat each space-separated part as an argument.
(I've changed $PATH to $path because $PATH is the special variable holding the command search path; overwriting it breaks command lookup, so you must not use it for general purposes.)
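A quick illustration of why both quotes matter, using a hypothetical path with a space:

```shell
path="/tmp/My File.pkg"
printf '<%s>\n' $path      # unquoted: word-split into two arguments
printf '<%s>\n' "$path"    # quoted: passed as one argument
# prints:
# </tmp/My>
# <File.pkg>
# </tmp/My File.pkg>
```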
| Trouble in script with spaces in filename |
1,328,417,304,000 |
I have defined the color red using tput
red=$(tput setaf 1)
to colorize warnings in my program. This works fine:
printf '%sfail\n' "$red"
# prints 'fail' in red
But on one occasion I would like to print out the escape sequence as-is, something like:
\E[31mfail
How would I do this? I know printf has a %q flag, but it escapes other stuff I don't want escaped.
|
Sounds like you want the opposite of printing them literally: you want those escape characters converted to a printable, descriptive form like \E, \033, ^[, etc.
If it's just the ESC (0x1b) character you want to convert to \E, then with ksh93, zsh or bash (typically, the same ones that also support that non-standard %q), you can do:
printf '%s\n' "${red//$'\e'/\\E}"
Or pipe to sed $'s/\e/\\\\E/g'
For a more generic approach at converting non-graphical characters, you can use:
$ printf %s "$red" | od -A n -vt c # POSIX
033 [ 3 1 m
$ printf %s "$red" | sed -n l # POSIX
\033[31m$
$ printf '%s\n' "${(qqqq)red}" # zsh
$'\033[31m'
$ printf '%s\n' "$red" | cat -vt # some cat implementations
^[[31m
$ printf %s "$red" | uconv -x ':: [:Cc:]; ::Hex;' # ICU tools
\u001B[31m
$ printf %s "$red" | uconv -x ':: [:Cc:]; ::Name;' # ICU tools
\N{<control-001B>}[31m
| How to print control characters with escape sequences? |
1,328,417,304,000 |
Is there a pastebin service that supports colors? I understand some of them have syntax highlighting, but I'd like to do arbitrary coloring, preferably using the terminal escape sequences.
For example, I'd like to do:
grep --color=force foo /etc/motd | pastebinit
Does anyone know such a cool site?
|
Termbin.com supports what you need.
$ grep --color=force foo /etc/motd | nc termbin.com 9999
http://termbin.com/xxxx
$ curl http://termbin.com/xxxx
You'll get exactly what you sent.
The service is running on free and open-source software called Fiche, so you can also install your own.
| is there a pastebin service that supports terminal escape sequences? |
1,328,417,304,000 |
I have this Bash script
for i in 1 2 3
do
for j in 4 5 6
do
echo "hello_$i_$j"
done
done
but it prints
hello_4
hello_5
hello_6
three times, whereas (you may guess) I want hello_1_4, hello_1_5, etc.
Escaping only the underscore or only the dollar sign does not work. Any ideas?
Thanks!
|
Add the line set -u to the top of your script, then rerun it and see what you get.
The error says that the variable i_ is unbound, meaning it is not defined anywhere. Now why is bash talking about this variable i_? You did not define it anywhere. Look closely at your echo statement, "hello_$i_$j": the underscore after the $i is being read as part of the variable name, giving i_, since _ is a valid variable-name character.
So to prevent bash from doing this, enclose the variable name in braces {}, like this: echo "hello_${i}_${j}". The braces stop the variable name from spilling over and absorbing the _. Note: the braces on variable $j are optional, as the closing " serves to delimit it.
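With the braces added, the original loop prints what was expected:

```shell
for i in 1 2 3; do
  for j in 4 5 6; do
    echo "hello_${i}_${j}"
  done
done
# prints hello_1_4, hello_1_5, ... through hello_3_6
```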
| Escaping _$ (underscore and dollar sign) [duplicate] |
1,328,417,304,000 |
I'd like to use grep with a PCRE expression that contains the < character. Bash thinks I want to redirect, but I don't want to. How can I escape <?
|
I was able to do it with a backslash:
25 % grep \< xmospos.c
#include <stdio.h>
#include <stdlib.h>
#include <getopt.h>
#include <X11/Xlib.h>
A quoted less-than (grep '<') also works; a quoted, backslashed less-than (grep '\<') gives goofy answers, because GNU grep interprets \< as a word-boundary anchor rather than a literal <.
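The backslash simply stops the shell from treating < as a redirection operator, so a literal < reaches grep as its pattern. A small check:

```shell
# \< passes a literal < to grep; no shell redirection takes place
printf 'a < b\nno match here\n' | grep \<
# prints: a < b
```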
| How to escape < or > in a parameter in shell? |
1,328,417,304,000 |
On my Windows 7 machine, I'm going to the install path of the PuTTY application and running:
plink <hostname>
to connect to my remote Linux host.
I see some unrecognized characters in the output as:
-bash-3.2$ ls -lrt
←[00mtotal 96
drwx------ 5 lg262728 lg262728 4096 Jun 10 15:32 ←[00;34mmyScripts←[00m
drwx------ 2 lg262728 lg262728 4096 Jun 12 13:19 ←[00;34mmyLangs←[00m
drwxr-xr-x 4 lg262728 lg262728 4096 Jul 1 07:43 ←[00;34mmyWorkSpace←[00m
←[m-bash-3.2$
What is wrong here? Is it something about encoding?
|
As michas has explained, those are terminal escape sequences. How they are interpreted is up to the terminal. You can do as michas has suggested and call ls like \ls, which will call the ls executable in $PATH, instead of the common shell alias ls --color=auto. To remove that shell alias you could do:
unalias ls
You can also add the option...
ls ${opts} --color=never
...at any time to turn it off. Another way to disable the color sequences for instance is to do:
ls ${opts} | cat
This works, because in the --color=auto mode ls checks its output to determine if it is a tty device and if so, it injects the terminal escapes to colorize its output. When it is not a terminal device - like when its output goes to a pipe, as in the example above - ls prints no escape sequences. This is the standard behavior of most applications that can colorize their output.
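The tty check that ls and similar tools perform boils down to the isatty() library call; in the shell, the same test is [ -t 1 ]:

```shell
# [ -t 1 ] succeeds only when file descriptor 1 (stdout) is a terminal
if [ -t 1 ]; then
    echo 'stdout is a tty: colour would be on'
else
    echo 'stdout is piped: colour would be off'
fi
```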
More interesting, though, is the API that most ls implementations provide to control this behavior - which is what prompted me to write this answer.
ls determines which parts of its output to colorize, based on the values in the $LS_COLORS environment variable. The dircolors application is an interface for handling this. For instance, on my machine:
dircolors -p
...
# Below are the color init strings for the basic file types. A color init
# string consists of one or more of the following numeric codes:
# Attribute codes:
# 00=none 01=bold 04=underscore 05=blink 07=reverse 08=concealed
# Text color codes:
# 30=black 31=red 32=green 33=yellow 34=blue 35=magenta 36=cyan 37=white
# Background color codes:
# 40=black 41=red 42=green 43=yellow 44=blue 45=magenta 46=cyan 47=white
#NORMAL 00 # no color code at all
#FILE 00 # regular file: use no color at all
RESET 0 # reset to "normal" color
DIR 01;34 # directory
LINK 01;36 # symbolic link. (If you set this to 'target' instead of a
# numerical value, the color is as for the file pointed to.)
... and so on. When compared to...
printf %s "$LS_COLORS"
rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:\
bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:\
ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:...
...we can begin to get an idea of what ls is doing. What is specifically not shown in either of these though, is the compiled in values for...
lc=\e[:rc=m:ec=:
Each of these handles what goes to the left side of a terminal escape code, the right side of a terminal escape code, and the end of a terminal escape sequence. As you can see in the dircolors output, my fi=: is not set as the default - because typically ls does not colorize regular files.
But if we take all of this together and add a little, we can do things like...
mkdir dir ; touch file1 file2
LS_COLORS=\
'lc=\nLEFT_SIDE_ESCAPE_SEQUENCE\n:'\
'rc=\nRIGHT_SIDE_ESCAPE_SEQUENCE\n:'\
'ec=\nEND_OF_ESCAPE_SEQUENCE:'\
'fi=REGULAR_FILE_ESCAPE_CODE:'\
'di=DIRECTORY_ESCAPE_CODE:'\
ls -l --color=always | cat -A
total 0$
drwxr-xr-x 1 mikeserv mikeserv 0 Jul 10 01:05 $
END_OF_ESCAPE_SEQUENCE$
LEFT_SIDE_ESCAPE_SEQUENCE$
DIRECTORY_ESCAPE_CODE$
RIGHT_SIDE_ESCAPE_SEQUENCE$
dir$
END_OF_ESCAPE_SEQUENCE/$
-rw-r--r-- 1 mikeserv mikeserv 0 Jul 10 01:08 $
LEFT_SIDE_ESCAPE_SEQUENCE$
REGULAR_FILE_ESCAPE_CODE$
RIGHT_SIDE_ESCAPE_SEQUENCE$
file1$
END_OF_ESCAPE_SEQUENCE$
-rw-r--r-- 1 mikeserv mikeserv 0 Jul 10 01:08 $
LEFT_SIDE_ESCAPE_SEQUENCE$
REGULAR_FILE_ESCAPE_CODE$
RIGHT_SIDE_ESCAPE_SEQUENCE$
file2$
END_OF_ESCAPE_SEQUENCE$
$
LEFT_SIDE_ESCAPE_SEQUENCE$
$
RIGHT_SIDE_ESCAPE_SEQUENCE$
ls should print the ec escape once at the start of its output and the lc and rc escapes once at the very end. Every other time they occur is immediately preceding or following a file name. The ec sequence only occurs if it is set - the default behavior is to use the reset or rs sequence instead, in combination with lc and rc. Your output shows a configuration something like:
`lc=\033[:rc=m:rs=0...`
...which is typical, though ec grants you more control. For instance, if you ever wanted a \0 (null) delimited ls, it can be accomplished as simply as:
LS_COLORS='lc=\0:rc=:ec=\0\0\0:fi=:di=:' ls -l --color=always | cat -A
total 0$
drwxr-xr-x 1 mikeserv mikeserv 0 Jul 10 01:05 ^@^@^@^@dir^@^@^@/$
-rw-r--r-- 1 mikeserv mikeserv 0 Jul 10 01:08 ^@file1^@^@^@$
-rw-r--r-- 1 mikeserv mikeserv 0 Jul 10 01:08 ^@file2^@^@^@$
^@%
Again you can see that we get one extra ec just before the first escape sequence is printed and one just at the end. Aside from those two, \0 null bytes only occur immediately preceding or following a filename. You can even see that the extra / slash used to indicate a directory falls outside the nulls enclosing it.
This works fine for file names containing new lines as well:
touch 'new
line
file'
LS_COLORS='lc=\0:rc=:ec=\0\0\0:fi=:di=:' \
ls -l --color=always | cat -A
total 0$
drwxr-xr-x 1 mikeserv mikeserv 0 Jul 10 01:05 ^@^@^@^@dir^@^@^@/$
-rw-r--r-- 1 mikeserv mikeserv 0 Jul 10 01:08 ^@file1^@^@^@$
-rw-r--r-- 1 mikeserv mikeserv 0 Jul 10 01:08 ^@file2^@^@^@$
-rw-r--r-- 1 mikeserv mikeserv 0 Jul 10 01:43 ^@new$
line$
file^@^@^@$
^@%
If we take all of this together we can surmise that, in your case, there are three primary problems, and if any one of them is addressed it will likely obviate the others.
In the first place, either your terminal device simply does not interpret terminal color escapes correctly or plink is in some way rendering them unreadable for your terminal. I'm not sure which is true, but the plink docs have this to say about it:
The output sent by the server will be written straight to your command prompt window, which will most likely not interpret terminal control codes in the way the server expects it to. So if you run any full-screen applications, for example, you can expect to see strange characters appearing in your window. Interactive connections like this are not the main point of Plink.
In order to connect with a different protocol, you can give the command line options -ssh, -telnet, -rlogin or -raw. To make an SSH connection, for example:
Z:\sysosd>plink -ssh login.example.com
login as:
If you have already set up a PuTTY saved session, then instead of supplying a host name, you can give the saved session name. This allows you to use public-key authentication, specify a user name, and use most of the other features of PuTTY:
Z:\sysosd>plink my-ssh-session
Sent username "fred"
Authenticating with public key "fred@winbox"
Last login: Thu Dec 6 19:25:33 2001 from :0.0
fred@flunky:~$
If either of those options does not work for you or is otherwise not feasible, then you can address this by running a check against plink in your .${shell}rc file that is sourced at login. You might thereby enable the ls alias only if your login device is not a plink connection. Alternatively, you could remove it altogether, I suppose, but doing so will render ls colorless for any login session. You might also add an unalias ls or even alias ls='ls --color=never' command to plink's login command and thereby only remove the alias when logged in with plink.
The last problem is that ls - regardless of its aliased command-line options - is provided an environment specifying terminal escapes in $LS_COLORS. Similar to the latter case of problem 2, you could simply set $LS_COLORS to a null value when logging in with plink. In that way, there will be no color codes to render and ls --color=auto will make no difference at all.
| What is the problem with the output of plink? |
1,328,417,304,000 |
Suppose for the sake of argument my password below is abc123@
I need to authenticate my linux machine through a corporate proxy to get patches and updates... normally I'd use this:
export HTTP_PROXY='http://<Americas\Username>:<Password>@proxy.foo.com'
export http_proxy='http://<Americas\Username>:<Password>@proxy.foo.com'
However, when I substitute a real password ending with @ and then run aptitude update, I get...
[mpenning@netwiki ~]$ sudo -E aptitude update
Err http://mirror.anl.gov squeeze Release.gpg
Could not resolve '@proxy.foo.com'
Err http://mirror.anl.gov/debian/ squeeze/main Translation-en
Could not resolve '@proxy.foo.com'
I have tried escaping the password with \@, escaping both with \@\@, double characters (@@), and nothing seems to get this to proxy correctly; I never had a problem until I changed my password recently.
What is the right way to escape my password in bash?
|
You could try URL-encoding your password. @ should be replaced by %40.
Tackling Special Characters in Proxy Passwords on Linux indicates this should work, but looking around other people seem not to get that to work (and I have no way of testing this).
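A sketch of the encoding, using the question's example password abc123@ (the sed call is a POSIX-safe way to do the substitution; the Americas\Username part is the question's placeholder, with %5C standing in for the backslash between domain and username):

```shell
pass='abc123@'                               # example password from the question
enc=$(printf %s "$pass" | sed 's/@/%40/g')   # percent-encode the @
printf '%s\n' "$enc"                         # prints: abc123%40
export http_proxy="http://Americas%5CUsername:${enc}@proxy.foo.com"
```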
| export HTTP_PROXY and special characters in passwd |
1,328,417,304,000 |
Recently, I have been playing around a lot with color in the terminal and, therefore, with escape sequences, too. I've read the relevant parts of the Bash manpage along with numerous helpful pages on the Net.
I've got most of what I want working; a nice color Bash prompt, for example. That said, I am still somewhat confused over when I should use (or need to use) the "non-printing escape sequence" characters. Those would be \[ and \].
If I don't use them in PS1 when defining my prompt then my prompt most definitely does not display properly. If I do use them, everything is fine. Good.
But, outside of PS1, they don't seem to operate the same way. For example, to make scripts more readable I have defined a variable $RGB_PURPLE which is set via a simple function c8_rgb(). The end result is that the variable contains the value \[\e[01;38;05;129m\] which turns on a bold purple foreground color.
When I use this variable in PS1, it does what I expect. If I use it via printf or echo -e it "half" works. The command printf "${RGB_PURPLE}TEST${COLOR_CLR}\n" (where COLOR_CLR is the escape sequence to reset text properties) results in the following display: \[\]TEST\[\] where everything except the first \[ and final \] are displayed in purple.
Why the difference? Why are these brackets printed instead of being processed by the terminal? I would have expected them to be treated the same when printed as part of the prompt as when printed by other means. I don't understand the change.
It seems, empirically, that these characters must be used inside the prompt definition, while they shouldn't be used in pretty much every other case. This makes it difficult to use a common function, like my c8_rgb() function mentioned above, to handle escape sequence generation and output since the function cannot know whether its result will be in a prompt config or someplace else.
And a quick related question: are echo -e and printf essentially the same with regards to outputting escape sequences? I typically use printf, but is there any reason to favor one over the other?
Can anyone explain this apparent subtle difference? Are there any other oddities I should be aware of when using escape sequences (usually just for color) in the terminal? Thanks!
|
The "non-printing escape sequence" is needed when using non-printing characters in $PS1 because bash needs to know the position of the cursor so that the screen can be updated correctly when you edit the command line. Bash does that by counting the number of characters in the $PS1 prompt and then that's the column number the cursor is in.
However, if you place non-printing sequences in $PS1, then that count is wrong and the line can be messed up if you edit the command line. Hence the introduction of the \[ and \] markers to indicate that the enclosed bytes should not be counted.
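Regarding the shared-function problem from the question: one workaround is to let the function always emit the \[ \] markers and strip them when printing outside PS1. A sketch (the sed expression deletes the literal \[ and \] pairs):

```shell
# Colour value with prompt markers, as a c8_rgb()-style helper might produce:
RGB_PURPLE='\[\e[01;38;05;129m\]'
# Remove the zero-width markers for use with printf/echo:
plain=$(printf %s "$RGB_PURPLE" | sed 's/\\\[//g; s/\\\]//g')
printf '%s\n' "$plain"    # prints: \e[01;38;05;129m
```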
| Non-printing escape sequence: when? |
1,328,417,304,000 |
I executed the following command
# top > /home/user/top_output.txt
Nothing happened for a while and then I pressed Ctrl+C. When I checked the file that was created, it had the contents in it. So I ran the cat command on it and it gave me this output.
But when i tried the same thing with the less command i got this.
According to this post, the job of cat, less or more is just to display the contents of the file, not to translate the encoding. Can someone please tell me what's happening here?
P.S: I'm currently using Fedora 19
|
The escape sequences ESC [ ... m are called ANSI Escape Sequences. top sends them to your terminal to make it format output in color, bold, inverted text and so on. You never see these characters when running top but you see the resulting format. You could think of it as looking at a webpage in a browser - you don't see the <html>... formatting the content.
When dumping the output of top into a file, you are saving the non-printable escape sequences with everything else. Think of it as saving view source in your browser.
The default for less is to escape terminal control characters, displaying them in a printable form.
The default for cat is to pass them through to your terminal which interprets them and makes it look "normal".
Try less -r /home/user/top_output.txt
$ man less ...
-r or --raw-control-chars
Causes "raw" control characters to be displayed.
The default is to display control characters using the caret
notation; for example, a control-A (octal 001) is displayed as
"^A". Warning: when the -r option is used, less cannot keep
track of the actual appearance of the screen (since this depends
on how the screen responds to each type of control
character). Thus, various display problems may result, such
as long lines being split in the wrong place.
Compare to cat -v /home/user/top_output.txt which will escape non-printable characters.
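For example, the caret notation turns the escape character into a visible ^[:

```shell
printf '\033[1mbold\033[0m\n' | cat -v
# prints: ^[[1mbold^[[0m
```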
| Cat and Less give different output |
1,328,417,304,000 |
Nano calls it ^L, but of course, typing something like
$ grep -v "^\^L" file
doesn't work. Its unicode codepoint is 000C. How can I match it in a regular expression?
|
That seems to be the good old form feed character, described in man ascii as:
Oct Dec Hex Char
------------------------------------------
014 12 0C FF '\f' (form feed)
(Not mentioned there, but ^L's code is the same 12.)
Then in bash any of these should work:
grep -v $'\f' file
grep -v $'\cL' file
grep -v $'\x0C' file
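In shells without $'...' (it is a ksh/bash/zsh extension), printf can supply the character instead; either way, lines containing a form feed are filtered out:

```shell
# One line with an embedded form feed, one without:
printf 'page1\fpage2\nplain\n' | grep -v "$(printf '\f')"
# prints: plain
```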
| How can I match the page break character in a regular expression? |
1,328,417,304,000 |
I've been looking into escape sequences lately, and I'm surprised by what they can do. You can even move an xterm X11 window with them (try printf '\e[3;0;0t'), wow!
The most common way to know what features a terminal supports seems to be using a database. That's what ncurses does, and it's used by 99.9% of applications that rely on escape sequences.
Ncurses reads the terminfo database for your shell's TERM environment variable in order to decide which features are supported by your console.
You can change the TERM environment variable of your shell, and if you do, most applications could start using fewer features or misbehaving (try running nano or vim after setting TERM="").
I've seen that some escape codes cause the terminal to report stuff. For instance <ESC>[6n causes the terminal to report the cursor position. (printf '\e[6n')
Why don't we use similar report mechanisms to let the console report which features it supports?
Instead of coupling the features with the value of TERM, each console could advertise its own features, making the whole thing more precise and reliable. Why isn't this a thing?
Edit: something that I should have asked before... I'd like to create a new escape sequence, to hack konsole and gnome-terminal in order to support it and to use it in some scripts.
I'd like to be able to query the console in order to know whether the one I'm running supports this feature - what's the suggested way to do that?
|
It's not as simple as you might suppose. xterm (like the DEC VTxxx terminals starting with VT100) has a number of reports for various features (refer to XTerm Control Sequences). The most generally useful is that which tells what type of terminal it is:
CSI Ps c Send Device Attributes (Primary DA).
Not all terminals have that type of response (Sun hardware console has/had none).
But there are more features than reports (for instance, how to tell whether a terminal is really interpreting UTF-8: the accepted route for that is via the locale environment variables, so no need has been established for another control sequence/response).
In practice, while there are a few applications that pay attention to reports (such as vim, checking the actual values of function keys, the number of colors using DCS + p Pt ST, and even the cursor appearance using DCS $ q Pt ST), the process is unreliable because some developers find it simpler to return a given report-response than to implement the feature. If you read through the source code for various programs, you'll find interesting quirks where someone has customized a response to make it look like some version of xterm.
| Terminal escape sequences: why don't terminals report what features they support, instead of relying on terminfo? |
1,465,563,411,000 |
When I used shell in a box and when I call less command (echo foo | less) in ajax response there was this code:
\u001B[?1049h\u001B[?1h\u001B=\rfoo\r\n\u001B[7m(END)\u001B[27m\u001B[K
what does \u001B[?1049h and \u001B[?1h escape sequences do, also what is \u001B=? Are they documented somewhere?
|
\u001B is an unnecessarily verbose representation of the ASCII escape character, which seems to have been introduced for ECMAScript6. POSIX would use octal \033, and some other tools allow hexadecimal \x1b. The upper/lower case of the number is irrelevant.
The \u001B[?1049h (and \u001B[?1049l) are escape sequences which tell xterm to optionally switch to and from the alternate screen.
The question mark shows that it is "private use" (a category set aside for implementation-specific features in the standard). About a third of the private-use modes listed in XTerm Control Sequences correspond to one of DEC's (those have a mnemonic such as DECCKM in their descriptions). The others are either original to xterm, or adapted from other terminals, as noted.
The reason for this escape sequence is to provide a terminfo-based way to let users decide whether programs can use the alternate screen. According to the xterm manual:
titeInhibit (class TiteInhibit)
Specifies whether or not xterm should remove ti and te termcap
entries (used to switch between alternate screens on startup of
many screen-oriented programs) from the TERMCAP string. If
set, xterm also ignores the escape sequence to switch to the
alternate screen. Xterm supports terminfo in a different way,
supporting composite control sequences (also known as private
modes) 1047, 1048 and 1049 which have the same effect as the
original 47 control sequence. The default for this resource is
"false".
The 1049 code (introduced in 1998) is recognized by most terminal emulators which claim to be xterm-compatible, but most do not make the feature optional. So they don't really implement the feature.
On the other hand, \u001B[?1h did not originate with xterm, but (like \u001B=) is from DEC VT100s, used for switching the terminal to use application mode for cursor keys (DECCKM) and the numeric keypad (DECKPAM). These are used by programs such as less when initializing the terminal because terminal descriptions use application (or normal) mode escape sequences for special keys to match the initialization strings given in these terminal descriptions.
Further reading:
Why doesn't the screen clear when running vi? (xterm FAQ)
Why can't I use the cursor keys in (whatever) shell? (xterm FAQ)
My cursor keys do not work (ncurses FAQ)
XTerm Control Sequences
| What does [?1049h and [?1h ANSI escape sequences do? |
1,465,563,411,000 |
In bash, I press ctrl+v to start verbatim insert. In the verbatim mode, I press the Esc key and bash shows ^[. I redirect it to file esc.
Also in the verbatim mode, I press ctrl key with [ key, and bash shows ^[. I redirect it to file ctrl.
Next, I compare the two files, and they are the same!
$ echo '^[' > esc
$ echo '^[' > ctrl
$ diff esc ctrl
$
Why do Ctrl+[ and Esc produce the same content?
Is ^[ here the C0 and C1 control codes? If so, the wiki article says ^[ is Escape, so why is ctrl+[ also Escape?
The root problem is that I want to check and create a key binding.
(zsh)$ bindkey -L
...
bindkey "^['" quote-line
...
So do I need to type ESC+' or ctrl+[+'?
|
This looks to follow the same logic as Ctrl-A, or ^A being character code 1, and ^@ being used to represent the NUL byte. Here, the ^ is a common way of representing Ctrl with another key.
Namely, entering Ctrl-foo gives the character code of foo with bit 6 cleared, reducing the character code by 64. So, A is character code 65, and ^A is character code 1; @ is 64, and ^@ is 0, NUL; and also [ is 91, and ^[ is 27, ESC. It's just that for ESC you also have a separate key, but you do have the enter and tab keys too, which also produce control characters, so it's not that out of the ordinary.
Of course, how Ctrl-something works on modern systems probably depends on other things too, like how your keymaps and key bindings are set up. Also don't ask me how that works for character codes < 64, e.g. ^1. With the terminal I tried that on, Ctrl-space gave the NUL byte.
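The bit-clearing arithmetic above can be checked from the shell, since printf '%d' "'X" prints the character code of X (a POSIX printf feature):

```shell
for c in A @ '['; do
  code=$(printf '%d' "'$c")
  printf '%s is %d, so Ctrl-%s is %d\n' "$c" "$code" "$c" "$((code - 64))"
done
# prints:
# A is 65, so Ctrl-A is 1
# @ is 64, so Ctrl-@ is 0
# [ is 91, so Ctrl-[ is 27
```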
| Why do `ctrl+[` and `ESC` both produce `^[`? [duplicate] |
1,465,563,411,000 |
I'd like to print a man-style usage message to describe a shell function, like this output of man find:
NAME
find - search for files in a directory hierarchy
SYNOPSIS
find [-H] [-L] [-P] [-D debugopts] [-Olevel] [starting-point...] [expression]
DESCRIPTION
This manual page documents the GNU version of find. GNU find searches the directory tree rooted at each
given starting-point by evaluating the given expression from left to right, according to the rules of
precedence (see section OPERATORS), until the outcome is known (the left hand side is false for and opera‐
tions, true for or), at which point find moves on to the next file name. If no starting-point is speci‐
fied, `.' is assumed.
OPTIONS
I am running into an error message caused by the ` character.
Following simple script shows the error:
~$ cat <<EOF
`.'
EOF
bash: bad substitution: no closing "`" in `.'
I thought heredoc was a cool way to echo strings by pasting them without having to escape their content, such as quotes, etc...
I assume I was wrong :/
Can someone explain this behavior please? Can heredoc accept ` character?
Edit 2: I accepted the answer suggesting a quoted here-document (<<'END_HELP'), but I finally won't use it for this kind of complete manual output, as kusalananda suggests.
Edit 1: (For future readers) the limitation of a quoted here-document is that it prevents using tput in the here-document.
To do so, I did the following:
unquoted here-document, for tput commands to be executed
prevent the "bad substitution" error by escaping the backtick instead
use tput within the here-document
Example:
normal=$( tput sgr0 ) ;
bold=$(tput bold) ;
cat <<END_HELP # here-document not quoted
${bold}NAME${normal}
find - search for files in a directory hierarchy
${bold}SYNOPSIS${normal}
find [-H] [-L] [-P] [-D debugopts] [-Olevel] [starting-point...] [expression]
${bold}DESCRIPTION${normal}
This manual page documents the GNU version of find. GNU find searches the directory tree rooted at each
given starting-point by evaluating the given expression from left to right, according to the rules of
precedence (see section OPERATORS), until the outcome is known (the left hand side is false for and opera‐
tions, true for or), at which point find moves on to the next file name. If no starting-point is speci‐
fied, \`.' is assumed.
END_HELP
unset normal ;
unset bold ;
Here, note the escaped backtick that was source of error:
\`.'
|
The backtick introduces a command substitution. Since the here-document is not quoted, this will be interpreted by the shell. The shell complains since the command substitution has no ending backtick.
To quote a here-document, use
cat <<'END_HELP'
something something help
END_HELP
or
cat <<\END_HELP
something something help
END_HELP
Regarding your comments on the resolution of this issue:
Utilities seldom output a complete manual by themselves but may offer a synopsis or basic usage information. This is seldom, if ever, colorized (since its output may not be directed to a terminal or pager like less). The real manual is often typeset using groff or a dedicated man-page formatter like mandoc and is handled completely separately from the code.
| Bad substitution: no closing "`" in a heredoc / EOF |
1,465,563,411,000 |
I'm colorizing the header of a table formatted with column -ts $'\t'
Works well without color codes, but when I add color codes to the first line column doesn't properly align the output.
Without colored output it works as expected:
printf "1\t2\t3\nasdasdasdasdasdasdasd\tqwe\tqweqwe\n" | column -ts $'\t'
But when adding color on the first line column doesn't align the text of the colored row:
printf "\e[7m1\t2\t3\e[0m\nasdasdasdasdasdasdasd\tqwe\tqweqwe\n" | column -ts $'\t'
Observed this behaviour both on Ubuntu Linux and Mac OS X.
|
I imagine that column doesn't know that \e[7m is a VT100 escape sequence that takes no space in the output. It seems to assume that character codes 0 to 037 (octal) take no space. You can get what you want by putting the initial escape sequence on a line of its own, then removing that newline from the output:
printf '\e[7m\n1\t2\t3\e[0m\nasdasdasdasdasdasdasd\tqwe\tqweqwe\n' |
column -ts $'\t' |
sed '1{N;s/\n//}'
| Issue with column command and color escape codes |
1,465,563,411,000 |
My prompt string is printed using this statement,
printf '\033]0;%s@%s:%s\007' user host /home/user
Why does it need an escape character (\033) and a bell character (007)?
When I ran the same command manually, it printed nothing.
When I removed the escape characters and gave the command as,
printf '%s@%s:%s' user host /home/user
it prints,
user@home:/home/user
which is easier to understand.
So, how does the escape characters, \033 and 007 get converted to a shell prompt string?
|
Only \033 is an escape character, and it initiates the escape sequence up to and including the ;: \033]0;. This sequence starts a string that sets the title in the titlebar of the terminal, and that string is terminated by the \007 special character.
See man console_codes:
It accepts ESC ] (OSC) for the setting of certain resources. In addi‐
tion to the ECMA-48 string terminator (ST), xterm(1) accepts a BEL to
terminate an OSC string. These are a few of the OSC control sequences
recognized by xterm(1):
ESC ] 0 ; txt ST Set icon name and window title to txt.
That you don't see any changes is probably because your prompt sets the title to the default title string on returning to the prompt. Try:
PROMPT_COMMAND= ; printf '\033]0;Hello World!\007'
| Bell and escape character in prompt string |
1,465,563,411,000 |
So in bash,
When I do
echo \*
*
This seems right, as * is escaped and taken literally.
But I can't understand that,
when I do
echo \\*
\*
I thought the first backslash escaped the second one, thus two backslashes "\\" would give me one literal "\", with * following and carrying its special meaning.
I was expecting:
echo \\*
\file1 file2 file3
ANSWER SUMMARY:
Since \\ is taken as a literal \, echo \\* behaves just as echo a* would: a* globs for any file name that starts with a literal "a", and \* looks for names starting with a literal "\".
Follow up question,
If I want to print out exactly like
\file1 file2 file3
What command should I use?
e.g. like the following but I want no space
echo \\ *
\ file1 file2 file3
|
If you don't have a file in the current directory whose name starts with a backslash, this is expected behaviour. Bash expands * to match any existing file names, but:
If the pattern does not match any existing filenames or pathnames, the pattern string shall be left unchanged.
Because there was no filename starting with \, the pattern was left as-is and echo is given the argument \*. This behaviour is often confusing, and some other shells, such as zsh, do not have it. You can change it in Bash using shopt -s failglob, which will then give an error as zsh does and help you diagnose the problem instead of misbehaving.
The * and ? pattern characters can appear anywhere in the word, and characters before and after are matched literally. That is why echo \\* and echo \\ * give such different output: the first matches anything that starts with \ (and fails) and the second outputs a \, and then all filenames.
The most straightforward way of getting the output you want safely is probably to use printf:
printf '\\'; printf "%s " *
echo * is unsafe in the case of unusual filenames with - or \ in them in any case.
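The behaviour can be reproduced in a scratch directory (my own demonstration; the directory and file names are made up, and the default bash globbing options are assumed):

```shell
# Work in a temporary directory so the globs behave predictably
cd "$(mktemp -d)" || exit 1
touch file1 file2

echo \\*                            # no name starts with \ : pattern kept -> \*
echo \\ *                           # a literal \ then all names -> \ file1 file2
printf '\\'; printf '%s ' *; echo   # no space after the \ -> \file1 file2
```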
| echo \\* - bash backslash escape behavior, is it evaluated backwards? |
1,465,563,411,000 |
My understanding is that terminals often use ANSI control-codes to represent non-alphanumeric character sequences. For example, when editing .inputrc for Bash in Linux, it's easy to find code sequences that look as follows:
"\e[A": history-search-backward
"\e[B": history-search-forward
"\e[C": forward-char
"\e[D": backward-char
"\e[1~": beginning-of-line
"\e[4~": end-of-line
"\e[3~": delete-char
"\e[2~": quoted-insert
"\e[5C": forward-word
"\e[5D": backward-word
The commands above define key bindings for the Bash commands history-search-backward, etc..
Now, in bash, I can use read to see how characters typed on my keyboard are mapped to ANSI control codes. For example, if I run read and then enter Ctrl-P, I get: ^P. Similarly, if I enter Alt-W, I get: ^[W.
My question is: Is there a program, tool or website that does the opposite? I.e. a tool that outputs or shows the sequence of keyboard keys that I need to type on my keyboard to obtain a given ANSI control-code sequence. For example, entering ^[W should output: Alt-W
Thanks!
|
infocmp can help. It writes escape as \E rather than \e or ^[.
For example, to find \e[A, which is your history-search-backward:
$ infocmp -1x | grep -F '=\E[A,'
cuu1=\E[A,
$ man 5 terminfo | grep ' cuu1 '
cursor_up cuu1 up up one line
Which tells you to press cursor up, a.k.a. up arrow.
Note that you will need the -x flag (shown above) to display some combinations, e.g. Ctrl+←.
These extended keys are not part of the standard, so they aren't listed in the terminfo man page, but they are documented in the terminfo file.
Also note that the control sequences vary depending on which terminal you use.
You can get information about a different terminal by using infocmp -1x <terminal>, e.g. infocmp -1x rxvt, infocmp -1x putty, etc.
Once you figure out which one terminfo thinks you have, things will be easier if you set your TERM variable to match.
| Reverse control-code look up for terminals |
1,465,563,411,000 |
I always thought that bash treats backslashes the same when using without or with double quotes, but I was wrong:
[user@linux ~]$ echo "foo \ "
foo \
[user@linux ~]$ echo foo \ # Space after \
foo
So I thought backslashes are always printed, when using double quotes, but:
[user@linux ~]$ echo "foo \" "
foo "
[user@linux ~]$ echo "foo \\ "
foo \
Why is the backslash in the first code line shown?
|
Section 3.1.2.3 Double Quotes of the GNU Bash manual says:
The backslash retains its special meaning only when followed by one of
the following characters: ‘$’, ‘`’, ‘"’, ‘\’, or
newline. Within double quotes, backslashes that are followed by one
of these characters are removed. Backslashes preceding characters
without a special meaning are left unmodified. A double quote may be
quoted within double quotes by preceding it with a backslash. If
enabled, history expansion will be performed unless an ‘!’ appearing
in double quotes is escaped using a backslash. The backslash preceding
the ‘!’ is not removed.
Thus \ in double quotes is treated differently both from \ in single quotes and \ outside quotes. It is treated literally except when it is in a position to cause a character to be treated literally that could otherwise have special meaning in double quotes.
Note that sequences like \', \?, and \* are treated literally and the backslash is not removed, because ', ? and * already have no special meaning when enclosed in double quotes.
| Why is a single backslash shown when using quotes |
1,465,563,411,000 |
Is it possible to escape all meta characters of a string inside a variable before passing it to grep? I know a similar question has been asked before on SE
(here) and also a good explanation here, but I was just curious whether it is possible with a basic/extended POSIX regex pattern instead of a Perl pattern (currently I'm reading Perl regex syntax to understand it first instead of jumping into the solution)
Why this requirement: (Meta, not required for answer)
I was trying to write a small script for splitting large files, where I split files into file_name.ext.000, file_name.ext.001, etc., which works fine. Now I don't want to split files which are already split (i.e. whose names have a 3-character extension consisting only of digits, and whose sizes sum up to the original file size). If I use a plain shell expansion like file_name.ext.* it also matches files named file_name.ext.ext2, hence the total size mismatches and a split occurs even though there's no need to resplit. So I want to check only for files named file_name.ext.### where ### are digits. My current expression to find the file size of these parts looks like this:
FILE_SIZE_EXISTING=$( (find "$DESTINATION" -type f -regextype posix-extended -regex "^$DESTINATION/$FILE_BASENAME(\.[[:digit:]]{3})?$" -print0 | xargs -0 stat --printf="%s\\n" 2>/dev/null || echo 0) | paste -sd+ | bc)
This works for simple file names. However, it does not work for fancy names, e.g. ones containing [ ], etc. Is there a workaround? I'm new to shell scripting and hence don't know Perl much.
|
How to quote special characters (portably)
The following snippet adds a backslash before each character that's special in extended regular expressions, using sed to replace any occurence of one of the characters ][()\.^$?*+ by a backslash followed by that character:
raw_string='test[string]\.wibble'
quoted_string=$(printf %s "$raw_string" | sed 's/[][()\.^$?*+]/\\&/g')
This will remove trailing newlines in $raw_string; if that's a problem, ensure that the string doesn't end with a newline by adding an inert character at the end, then strip off that character.
quoted_string=$(printf %sa "$raw_string" | sed 's/[][()\.^$?*+]/\\&/g')
quoted_string=${quoted_string%?}
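As a quick sanity check (my own addition, repeating the computation above): the escaped string should match the raw string exactly when used as an extended regular expression.

```shell
raw_string='test[string]\.wibble'
quoted_string=$(printf %s "$raw_string" | sed 's/[][()\.^$?*+]/\\&/g')

# The escaped pattern matches the raw string, and exactly (grep -x)
printf '%s\n' "$raw_string" | grep -Eqx -- "$quoted_string" && echo match
```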
How to quote special characters (in bash or zsh)
Bash and zsh have a pattern replacement feature, which can be faster if the string is not very long. It's cumbersome here because the replacement must be a string, so each character needs to be replaced separately. Note that you must escape the backslashes first.
quoted_string=${raw_string//\\/\\\\}
for c in \[ \] \( \) \. \^ \$ \? \* \+; do
quoted_string=${quoted_string//"$c"/"\\$c"}
done
How to quote special characters (in ksh93)
Ksh's string replacement construct is more powerful than the watered-down version in bash and zsh. It supports references to groups in the pattern.
quoted_string=${raw_string//@([][()\.^$?*+])/\\\1}
What you actually want
You don't need find here: shell patterns are sufficient to match files ending with three digits. If no part file exists, the glob pattern is left unexpanded. There's also a simpler way of adding the file sizes: rather than use stat (which exists on many unix variants but has a different syntax on each) and do complex pipelining to sum the values, you can call wc -c (on regular files, on most systems, wc will look at the file size and not bother to open the file and read the bytes).
set -- "$DESTINATION/$FILE_BASENAME".[0-9][0-9][0-9]
case $1 in
*\]) # The glob was left intact, so no part exists
do_split …;;
*) # The glob was expanded, so at least one part exists
FILE_SIZE_EXISTING=$(wc -c "$@" | sed -n '$s/[^0-9]//gp')
if [ "$FILE_SIZE_EXISTING" -ne "$(wc -c <"$DESTINATION/$FILE_BASENAME")" ]; then
do_split …
fi ;;
esac
Note that your test on the total size is not very reliable: if the file has changed but remained the same size, you'll end up with stale parts. That's ok if the files never change and the only risk is that parts may be truncated or missing.
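A stripped-down illustration of the "glob left unexpanded" test (my own sketch; the file names are made up):

```shell
cd "$(mktemp -d)" || exit 1
touch base.csv base.csv.000 base.csv.001

set -- base.csv.[0-9][0-9][0-9]
case $1 in
    *\]) echo "no parts yet" ;;           # glob kept verbatim, nothing matched
    *)   echo "$# existing part(s)" ;;    # glob expanded to the part files
esac
```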
| Escaping of meta characters in basic/extended posix regex strings in grep |
1,465,563,411,000 |
So I'm writing a terminal emulation (I know, I should just compile putty, etc.) and am at the stage of plodding through vttest to make sure it's right. I'm basing it on the VT102 for now but will add later terminal features such as color when the basics are working right.
The command set is mostly ANSI. DEC had their own command set but supported the ANSI commands from about 1973. Those ANSI standards are apparently not available now but the ECMA equivalents are; I have them (ECMA-48 seems most relevant) but it does not answer this question as far as I can see. Most ANSI command sequences start with ESC. Many commands start with the control sequence introducer, shown here as CSI and represented in the data as 0x1b 0x5b (ESC [), or 0x9b if 8-bit communication was possible. Then follows a sequence identifying the command. Some commands affect cursor position, some the screen, some provoke a response to the host, and so on.
Some terminal commands include a numeric argument. Example CSI 10 ; 5 H means make the cursor position row 10, column 5. When the numeric argument is missing there is a default value to use: CSI 10 ; H means make the cursor position row 10, column 1 because 1 is the default value when an argument is not given.
I have the vt102 manual from vt100.net (great resource) and about a dozen pages giving partial information on these command sequences. Apparently the complete gospel DEC terminal spec never made it out of DEC.
What is clear is that CSI C is move cursor right and that the default value is 1.
What isn't clear is what is the meaning of CSI 0 C.
Why have a zero there? It would seem to make the command do nothing. If it means "use default value" then it could have been sent as 1 instead, but an even shorter string would have no argument at all and rely on the default value being interpreted as 1 anyway. These actual physical VT terminals were often used at 300 baud and below, so the one character did matter!
I'm not so advanced with vttest that I can just try it both ways and see which makes everything perfect but I'm far enough that little questions like this are starting to matter.
|
I got in touch with Thomas Dickey (invisible-island.net) who maintains xterm and vttest - he explained that CSI 0 C is the same as CSI 1 C or CSI C in xterm.
For anyone looking for more information on terminal programming I highly recommend checking out the xterm source he hosts - specifically the ctlseqs.txt inside xterm, which looks very much like the one true terminal control sequences reference I've been searching for.
| DEC ANSI command sequence questions; cursor movement |
1,465,563,411,000 |
I am confused about how Ctrl-key combinations work in terminal. In bash man page, there are various combinations such as:
C-e - go to end of the line
C-f - go one character forward
etc.
But then there are some undocumented shortcuts such as:
C-j (OR C-m) for return key.
C-h for backspace
C-i for tab
etc.
Were these shortcuts simply left undocumented? Or, because
C-j is LF
C-m is CR
C-i is Tab
in ASCII, is this somehow a "default" behavior? In other words, is the behavior for C-j, C-m and C-i not implemented by bash but by something else?
Another question is, when I press C-v and left arrow key, ^[[D appears on screen. I.e, ESC-[-d. But when I press ESC-[-d, the cursor does not move left. What is the reason for this?
EDIT:
Initially, I had thought that when I press C-j, the keyboard directly sends 00001010 to the kernel. But then I decided that this is not the case, because using programs such as xev or evtest, I have observed that key presses for Control and j appear as different events. So when I press C-j, the keyboard does not send 00001010, but probably multiple bytes. Where, then, is the conversion of these multiple bytes to 00001010 done?
|
The behavior of C-m, C-i, etc. is implemented by bash, but the fact that they're the same thing as Return, Tab, etc. is due to the behavior of the terminal. All terminals behave like this because all terminals have always behaved like this and it's what applications expect. The interface between a terminal and an application is based on characters (in fact, bytes), not on keys, so keys that don't send printable characters and key combinations have to be encoded somehow. See How do keyboard input and text output work? for more on this topic. See also https://emacs.stackexchange.com/questions/1020/problems-with-keybindings-when-using-terminal
TAB is the tab character in ASCII, and that's the same thing as the Ctrl+I character in ASCII. Likewise for the other keys. Terminals send that character both when the user presses Tab and when the user presses Ctrl+I. Ditto for RET (CR) and C-m, for LFD and C-j (which most keyboards don't have), and for ESC and C-[. There's also BackSpace which sends either C-h or C-?, that's an issue of its own.
The configuration of the terminal (stty settings) can additionally come into play, and this affects some of bash's settings (e.g. after stty erase @, bash will treat pressing @ as BackSpace), but not C-m and C-j to submit the current line.
^[[D is Esc [ D, but with a capital D. If you press Esc [ D, bash sees the Left key, due to the declaration of cursor key escape sequences in the termcap or terminfo database. There's no default binding for Esc [ d (it isn't an escape sequence that common terminals send).
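The byte-level identity described above (Tab is ^I, Enter/CR is ^M, LF is ^J) is easy to see with od (my own demonstration):

```shell
# Once these characters reach an application, they are just single bytes:
printf '\t' | od -An -td1    # byte value 9  (Tab = ^I)
printf '\r' | od -An -td1    # byte value 13 (CR  = ^M)
printf '\n' | od -An -td1    # byte value 10 (LF  = ^J)
```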
| Question about behavior of control key shortcuts |
1,465,563,411,000 |
In my directory I have two files with space, foo bar and another file. I also have two files without space, file1 and file2.
The following script works:
for f in foo\ bar another\ file; do file "$f"; done
This script also works:
for f in 'foo bar' 'another file'; do file "$f"; done
But the following script doesn't work:
files="foo\ bar another\ file"
for f in $files; do file "$f"; done
Not even this script works:
files="'foo bar' 'another file'"
for f in $files; do file "$f"; done
But, if the files do not contain space, the script works:
files="file1 file2"
for f in $files; do file "$f"; done
Thanks!
Edit
Code snippet of my script:
while getopts "i:a:c:d:f:g:h" arg; do
case $arg in
i) files=$OPTARG;;
# ...
esac
done
for f in $files; do file "$f"; done
With files without spaces, my script works. But I would like to run the script, passing files with spaces as arguments, in one of these ways:
./script.sh -i "foo\ bar another\ file"
./script.sh -i foo\ bar another\ file
./script.sh -i "'foo bar' 'another file'"
./script.sh -i 'foo bar' 'another file'
|
For your command line parsing, arrange with the pathname operands to always be the last ones on the command line:
./myscript -a -b -c -- 'foo bar' 'another file' file[12]
The parsing of the options would look something like
a_opt=false b_opt=false c_opt=false
while getopts abc opt; do
case $opt in
a) a_opt=true ;;
b) b_opt=true ;;
c) c_opt=true ;;
*) echo error >&2; exit 1
esac
done
shift "$(( OPTIND - 1 ))"
for pathname do
# process pathname operand "$pathname" here
done
The shift will make sure to shift off the handled options so that the pathname operands are the only things left in the list of positional parameters.
If that's not possible, allow the -i option to be specified multiple times and collect the given arguments in an array each time you come across it in the loop:
pathnames=() a_opt=false b_opt=false c_opt=false
while getopts abci: opt; do
case $opt in
a) a_opt=true ;;
b) b_opt=true ;;
c) c_opt=true ;;
i) pathnames+=( "$OPTARG" ) ;;
*) echo error >&2; exit 1
esac
done
shift "$(( OPTIND - 1 ))"
for pathname in "${pathnames[@]}"; do
# process pathname argument "$pathname" here
done
This would be called as
./myscript -a -b -c -i 'foo bar' -i 'another file' -i file1 -i file2
| Bash for loop with string var containing spaces |
1,465,563,411,000 |
I thought ls colors are defined as
<file_type>=[<bg(40-47)>];<font_spec(0:5)>;<font_color(30-37)>
but, I recently found this where there are more colors and colors are specified as, e.g.:
.tar 00;38;5;61
for a 256-color terminal.
What does this definition mean?
|
The argument to a LS_COLORS directive is a string that is written to the terminal as part of an escape sequence. When displaying a file name, ls writes \e[, then the string associated with the file type, then m, then the file name, then \e[0m (where \e represents an escape character). This is the escape sequence that tells xterm and compatible terminals (which is most of them nowadays) to change colors and other text attributes (CSI Pm m in the documentation). ls doesn't care what the sequence of characters means or how many semicolons it contains.
Old terminals only supported 8 foreground colors, designated by the numbers 30 through 37. Terminals that support more colors use the escape sequence \e[38;5;Psm where Ps is a color number, or \e[38;2;Pr;Pg;Pbm where Pr, Pg, Pb are RGB values. These can be combined with other attributes, e.g. \e[38;5;61;1m or \e[38;2;95;95;175;1m for bold slate blue text.
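Putting this together, the sequence ls emits for an entry such as .tar 00;38;5;61 can be sketched as follows (my own illustration; the file name is made up):

```shell
attrs='00;38;5;61'     # the value from LS_COLORS for *.tar
name='archive.tar'
# ESC [ <attrs> m <name> ESC [ 0 m  -- \033 is the escape character
printf '\033[%sm%s\033[0m\n' "$attrs" "$name"
```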
| LS_COLORS for 256-color terminal |
1,465,563,411,000 |
I have input coming from pipe which contains special escape characters. For illustration, the stream looks like this:
printf "foo.bar\033[00m" > file
cat file
How can I remove the trailing .bar\033[00m?
I have tried the following, but that does not work:
cat file | sed 's/\.bar\033[00m//'
|
If your file contains control characters such as
printf "foo.bar\033[00m" > file
then to remove the specific, single occurrence of .bar\033[00m write the following:
sed $'s/\.bar\033\[00m//'
To remove all kinds of escape sequences in the entire file:
sed $'s/\033\[[0-9;]*m//g'
The dollar-before-single-quoted-string ($'some text') instructs the shell to apply ANSI C quoting to the string's content, like printf does.
This is required to produce the "escape" ASCII character (0x1B/033/...).
The character can also be produced via keyboard shortcuts (no $' necessary):
sed 's/\.barCtrl-vESC\[00m//'
After hitting Ctrl-vESC you should see ^[ on the screen, but not literal ^ and [ (two characters), but one single control character.
Original answer
If you want in the output just foo then
printf '%s' 'foo.bar\033[00m' | sed 's/\.bar\\033\[00m//'
Notice that both \ and [ have to be escaped by another \. Additionally I've added '%s' to printf to print the input characters as a literal string, otherwise \033[ could be interpreted as an escape code followed by an ANSI colour.
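If $'…' isn't available (it's a bash/ksh/zsh feature), the escape byte can be built with printf instead. This is my own portable variant of the same substitution, not from the answer:

```shell
ESC=$(printf '\033')      # one literal escape byte
# Strip all SGR escape sequences (ESC [ ... m) from the input
printf 'foo.bar\033[00m\n' | sed "s/${ESC}\\[[0-9;]*m//g"   # -> foo.bar
```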
| Remove escape characters using sed |
1,465,563,411,000 |
ESC sends \x1b. That's 1 byte: the actual escape character.
Page Up sends \x1b[H. That's 3 bytes.
F2 sends \x1b[OQ. That's 4 bytes.
F5 sends \x1b[15~. That's 5 bytes.
What's the maximum length for one of these? Is this documented somewhere?
|
There is no predefined limit for the length of control sequences. OP gives as examples some strings sent by special keys, which are documented in XTerm Control Sequences.
xterm starts with a list of possible key codes, and may add codes for modifiers as outlined in the Alt and Meta keys section. There is no table of lengths. One complication in doing that is that there are several resource settings which work together to make a few thousand possible keyboard arrangements. Rather than describe all of those, the xterm terminal description is presented as a set of terminfo building blocks (the names with "+"), including user-defined capabilities for modified keys (e.g., control, shift, etc).
The terminfo for xterm page lists those (generated by a script).
The building-blocks are limited in size, to fit within the 4096-byte limit on compiled terminfo assumed by most implementations.
The ncurses terminal database lists a subset of those building-blocks.
It also documents the user-defined capabilities used by the xterm entries, noting that there are many more keys possible than are documented.
Some other terminals implement the xterm scheme, but only for particular combinations. So those would be easier to enumerate. They are in a sense "predefined".
However, special keys are not the only type of control sequence. Each of these terminals using the ECMA-48 format accepts control sequences sent from the host. Generally speaking, they accept numeric or string parameters:
xterm ignores numbers larger than 65535, so you could take that as a limit on the number of digits (but terminal-dependent).
A control sequence could include an arbitrary number of numeric parameters. For example, xterm accepts a control sequence which changes each of the 256 colors in the 256-color palette. That's a few kilobytes (which could be estimated for an upper bound). The control sequence parser does not need more than a few numbers at any point.
string parameters (such as setting the title on a window) do not have a predefined limit on their length.
Again, other terminal emulators may use their own limits on the length of control sequences that they accept.
| What's the maximum length for a multibyte escape sequence? |
1,465,563,411,000 |
I am writing a script which executes a number of programs with arguments. To simplify it as much as possible, consider the following example:
programs=( "ls" "echo" )
declare -A parameters
parameters["ls"]="-l /tmp/foo"
parameters["echo"]="Hello"
for program in "${programs[@]}"
do
$program ${parameters[$program]}
done
This works fine as long as there are no spaces in the arguments. Of course, escaping strings has been discussed before, so I tried all of the different ways I could find here on StackExchange and some other pages. However, I could not find a solution which works in my case which uses associative arrays. For example, when I want to get a listing of "/tmp/foo bar", neither this
parameters["ls"]="-l /tmp/foo bar"
nor this
parameters["ls"]="-l \"/tmp/foo bar\""
nor this
parameters["ls"]="-l /tmp/foo\ bar"
works as expected. Similarly, putting quotes around the program call in the loop
$program "${parameters[$program]}"
does not work either, since the two parameters (-l and the path) are interpreted as one parameter.
I found the closest thing to a solution in this post which proposes using multiple separate variables, one for each argument, for the program call. This, however, is not possible in my use case which requires a variable number of arguments for each program - thus my use of the associative array.
Is there any other way to escape the path name?
Best regards
Andreas
|
First, create an array with the parameters.
Then, store the array's string value (found with declare -p) in parameters, and recover and use it as an actual array later on, like in:
#!/bin/bash
programs=( "ls" "echo" )
declare -A parameters
arrayTmp=("-l" "/tmp/foo bar")
parameters["ls"]="`declare -p arrayTmp |sed -r "s,[^=]*='(.*)'$,\1,"`"
parameters["echo"]="Hello"
for program in "${programs[@]}";do
echo "PROGRAM: $program"
declare -a arrayTmp="${parameters[$program]}"
$program "${arrayTmp[@]}"
arrayTmp=()
done
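An alternative sketch for newer bash (≥ 4.3), my own addition rather than part of the answer: keep one real array per program and dereference it with a nameref. The variable names like args_ls are assumptions, and this only works when the program name is valid inside a variable name:

```shell
# One argument array per program (hypothetical names)
declare -a args_ls=(-l /tmp)
declare -a args_echo=(Hello world)

for program in ls echo; do
    declare -n argv="args_$program"   # nameref to that program's array
    "$program" "${argv[@]}"           # arguments keep their word boundaries
    unset -n argv
done
```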
| Escaping strings in associative arrays (bash) |
1,465,563,411,000 |
I have been trying to:
cp file.csv file.$(date +%D).csv
But it fails because the resulting filename is file.03/27/19.csv, where the slashes act as directory separators.
And I have been trying again to:
cp file.csv file.$(printf "%q" $(date +%D)).csv
But it still fails.
|
You can't have / (byte 0x2F on ASCII-based systems) in a file name, period.
You can use characters that look like / such as ∕ (U+2215 division slash) or ⁄ (U+2044 fraction slash, though found in fewer of the charsets used in current locales), so you could do (provided the U+2215 character exists in the locale's charset, which includes GBK, BIG5, UTF-8, GB18030):
cp file.csv "file.$(date +%D | sed 's|/|∕|g').csv"
Or with some shells (zsh, bash at least):
cp file.csv "file.$(date +%D | sed $'s|/|\u2215|g').csv"
(here using sed instead of tr as some tr implementations including GNU tr still don't support multi-byte characters).
But you may run into problems like the file name being rendered differently in locales using a different charset from the one that was in use at the time you created the file (and of course the confusion of users when they see what looks like a slash in a file name).
My advice would be to use the standard non-ambiguous (for most people outside the US, 03/12/18 would be interpreted as the 3rd of December 2018 for instance) YYYY-mm-dd format instead (which also helps wrt sorting):
cp file.csv "file.$(date +%Y-%m-%d).csv"
Which with many date implementations you can shorten to:
cp file.csv "file.$(date +%F).csv"
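A quick check (my own addition) that the recommended format never reintroduces a slash into the name:

```shell
name="file.$(date +%F).csv"     # e.g. file.2019-03-27.csv
case $name in
    */*) echo "unexpected slash in $name" ;;
    *)   echo "safe name: $name" ;;
esac
```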
| Escape a mm/dd/YY backup date in a file name |
1,465,563,411,000 |
I have a one-liner that I want to call using an alias.
while printf '%s ' "$(df -P / | awk 'NR==2 { print $(NF-1) }')"; do sleep 30; done
I tried to escape ' like \' but it didn't work.
What is the correct syntax to use the above one-liner in an alias?
|
alias my_du=$'while printf \'%s \' "$(df -P / | awk \'NR==2 { print $(NF-1) }\')"; do sleep 3; done'
You can check the result with
alias my_du
If the $() is quoted with " instead of ' or \, then it is substituted when the alias is defined, and the result, rather than the intended program call, becomes part of the alias definition.
| How to use ' in alias? |
1,465,563,411,000 |
In sh (not Bash), how would we abort execution of a command when the prompt is in > mode?
For example, when entering a string with quotes only at the beginning, it makes the prompt look like >, without the ability to quit it normally, unless hitting Ctrl + D. Example:
root@MyPC:~# echo "hello
> I am
> (How do I exit of this mode?)
In case that I don't know which character delimits the string (" or ' or simply escaping a newline/spaces with backslash), is there a way to let Bash know that I want to abort the execution of the current command?
|
^C aka Ctrl+C will abort what you're doing and get you back to a normal prompt.
| Exit of "> " mode in Unix shell |
1,465,563,411,000 |
Let's say we have a generic keyboard with some unknown keys which may send escape sequences to terminal.
The keyboard is connected to an xterm terminal emulator running on a generic BSD/Linux.
To create correct mapping for the unknown keys, we must first know what escape sequences they send to the xterm.
But how to know what escape sequences the keys send?
|
Your keyboard is not connected to xterm. It's connected to your PC. A kernel driver knows how to decode the key press and release events sent by the keyboard and makes them available to applications via a generic API on a special device file.
An X server is such an application that uses that API.
It translates those key presses and releases into X "KeyPress" and "KeyRelease" events which carry with them the information of the key pressed as both a keycode and a keysym. That's another API.
xterm is an X application. It connects to an X server and tells it: I'm interested in all KeyPress and KeyRelease events. When it has the focus and when the KeyPress and KeyRelease events are not hijacked by your Window Manager or other applications that register for some KeyPress events globally, xterm will receive the KeyPress and KeyRelease events.
xterm translates a keysym in a KeyPress event into a sequence of characters it sends to the master side of a pseudo-terminal driver. Applications running in your xterm will eventually read from the slave side of that pseudo-terminal driver the characters sent by xterm, but potentially altered by the pseudo-terminal driver (for instance, under some conditions, 0xd characters are translated to 0xa ones, 0x3 would cause a SIGINT to be sent...).
With those clarifications out of the way. To know which keycode or keysym is sent by the X server upon a given key press, you can use xev.
To know which sequence of characters (if any) is sent by xterm, you need to tell the pseudo-terminal driver not to mingle with them first (stty raw) and then you can use cat -vt or sed -n l or od to see them:
{
stty raw min 1 time 20 -echo
dd count=1 2> /dev/null | od -vAn -tx1
stty sane
}
(above adding a min 1 time 20 and using dd so it exits after one keypress as you wouldn't be able to exit with Ctrl-C otherwise).
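The od part of the pipeline can also be tried without a live keypress, by feeding it a known sequence; for example the bytes xterm typically sends for Ctrl+Left (the exact sequence is terminal-dependent):

```shell
# Simulated keypress: the ESC [ 1 ; 5 D sequence many xterm-like
# terminals send for Ctrl+Left, dumped the same way as above
printf '\033[1;5D' | od -vAn -tx1
# ->  1b 5b 31 3b 35 44
```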
| How to find out the escape sequence my keyboards sends to terminal? |
1,465,563,411,000 |
How can a command-line argument containing a dot (.) be passed? Are there any escape sequences for capturing characters like dot?
The following invocation of a bash-script from the shell does not work:
# ./deploy.sh input.txt
./deploy.sh: line 9: input.txt: syntax error in expression (error token is ".txt")
I have tried the following:
backslash
quote
double quotes
./deploy.sh input (this works)
EDIT
Take this use-case:
I have 3 files: server.jar client.jar gui.jar
I need to scp them from a source to a dest
source dir: login1@host1:/home/xyz/deploy/
dest dir: login2@host2: /data/apps/env/software/binary/
Solution:
Read artifacts to be copied into an array from the command-line
create dest path and source path strings by using the correct directory prefixes
use a for loop to scp each artifact (having figured out the paths)
Here's the simple script which is doing 1 (read artifacts into an array):
#!/bin/bash
declare -a artifacts
for i
do
artifacts[i]=$i
echo ${artifacts[i]}
done
Execution1
-bash-3.00$ ./simple.sh arg1 arg2 arg3
arg1
arg2
arg3
Execution2
-bash-3.00$ ./simple.sh arg1.txt arg2.txt arg3.txt
./simple.sh: line 7: arg1.txt: syntax error in expression (error token is ".txt")
|
You need to use declare -A instead of declare -a. You are clearly using associative arrays with arbitrary string arguments as indices, but declare -a is only for integer indexed arrays. arg1.txt does not evaluate to a valid integer, hence your error.
Edit
You seem to be using bash version 3. Unfortunately, associative arrays are not available until version 4. I recommend you post a sanitized version of your original deploy.sh script with sensitive personal information removed so you can get ideas from other people about alternative approaches.
Edit 2
Just to summarize a bit of exchange in the chat:
The easiest way to do some action over all the arguments is to just iterate over them with a for loop:
for arg; do
scp login1@host1:"$arg" login2@host2:/dest/
done
Remember to double-quote all instances of "$arg".
You do not need to put the arguments in an array yourself, as they already exist in the array $@, which is what for uses by default when you don't give an explicit in list....
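A runnable sketch of that loop, with the scp replaced by printf so it can be exercised locally (the host and path details from the question would go where the comment indicates):

```shell
# copy_all - act on every command-line argument, dots and all.
# In the real script the printf line would be the scp command, e.g.
#   scp "login1@host1:/home/xyz/deploy/$arg" login2@host2:/dest/
copy_all() {
    for arg; do
        printf 'would copy: %s\n' "$arg"
    done
}

copy_all server.jar client.jar arg1.txt
# -> would copy: server.jar
#    would copy: client.jar
#    would copy: arg1.txt
```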
| Passing a bash command-line argument containing a dot |
1,465,563,411,000 |
I am trying to write a program that runs a console program like gcc and displays its output in color. I used openpty instead of pipe to pretend to be a character device and now get ANSI escape codes that carry color information. I tried some programs and they sometimes give me the code CSI [ 49 m. Both wikipedia and the xterm escape code documentation (search for Ps = 4 9) agree that code CSI [ 49 m means that I should be using the default background color.
However, debian's xterm and zsh as well as ubuntu's linux console disagree.
printf '\033[\061mTest\n\033[\060m' executed in a console like xterm should be printing "Test" with the default background color (\033 is escape and escape + [ make a CSI (Control Sequence Introducer) and \061 is octal which is 49 decimal), but it actually prints "Test" in bold (and the \060 at the end seems to mean "not bold anymore" but is documented neither on wikipedia nor in the xterm color code documentation). All consoles mentioned above agree on this.
There is a list of color codes for various consoles and standards, but none of them list CSI 49 m to mean "bold".
Where does this inconsistency come from? Where can I find a list of color codes that correspond to what any of xterm, zsh or the linux console are using?
|
\61 is the octal code for the 1 character in ASCII, so \e[\61m or \33[\61m or \33\133\61\155 or \33[1m is <ESC>[1m.
That's CSI 1 m. See Wikipedia or the xterm documentation.
$ printf '\e[\61m' | od -An -vto1 -tc
033 133 061 155
033 [ 1 m
$ tput bold | od -An -vto1 -tc
033 133 061 155
033 [ 1 m
For the default background colour, you'd need \e[49m, not \e[\61m. Those 1 and 49 numbers are meant to be expressed in their decimal string representation, not be the byte value.
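The difference shows up clearly in a byte dump: \061 is the single byte 0x31 (the digit 1), whereas the default-background code needs the two ASCII digit characters 4 and 9:

```shell
# \061 is one byte, the digit "1"  ->  ESC [ 1 m  (bold)
printf '\033[\061m' | od -An -vto1 -tc
#  033 133 061 155
#  033   [   1   m

# "49" must be two digit characters  ->  ESC [ 4 9 m  (default background)
printf '\033[49m' | od -An -vto1 -tc
#  033 133 064 071 155
#  033   [   4   9   m
```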
| Where is the character escape sequence `\033[\061m` documented to mean bold? |
1,465,563,411,000 |
The methods I found break things further down the line by also affecting linebreaks.
For example...
$ message="First Line\nSecond Line";
$ echo "${message^^}"
FIRST LINE\NSECOND LINE
Is there an elegant way to convert a string to uppercase, but leaving escaped characters alone, to get the following output instead?
FIRST LINE\nSECOND LINE
I could just do something convoluted like changing "\n" to 0001 or something along those lines, apply the conversion and then return 0001 to "\n". But maybe there is a better way.
|
echo "$message" | sed -e 's/^[[:lower:]]/\u&/' -e 's/\([^\]\)\([[:lower:]]\)/\1\u\2/g' \
-e 's/\([^\]\)\([[:lower:]]\)/\1\u\2/g'
-e 's/^[[:lower:]]/\u&/'
If the first character in the string
(or, more generally, the first character on a line)
is a lower-case letter, capitalize it.
Because the first character on a line can’t be escaped.
Duh.
That’s a no-brainer.
-e 's/\([^\]\)\([[:lower:]]\)/\1\u\2/g'
Look at the line two characters at a time.
If a lower-case letter is preceded by something other than a backslash,
leave the preceding character alone, and capitalize the lower-case letter.
You might think that this would be enough to process the entire line.
Unfortunately, since it processes the line two characters at a time,
it gets only every other letter:
$ echo "first line\nsecond line" | sed -e 's/\([^\]\)\([[:lower:]]\)/\1\u\2/g'
fIrSt LiNe\nSeCoNd LiNe
so,
-e 's/\([^\]\)\([[:lower:]]\)/\1\u\2/g'
Do the exact same thing a second time.
This will pick up the letters that were skipped on the first pass.
Alternative version:
echo "$message" | sed -e 's/^[[:lower:]]/\u&/' \
-e ': loop; s/\([^\]\)\([[:lower:]]\)/\1\u\2/g; t loop'
Basically the same as the first version,
but, instead of repeating the second s command,
it iterates it with a loop.
Unfortunately, this will not work correctly for double backslashes:
foo\\bar will become FOO\\bAR, even though the b should be capitalized,
since the \\ is an escaped backslash,
and so should not cause the b to be escaped.
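Putting the three expressions together on the question's sample string gives the desired result (GNU sed only, since \u is a GNU extension):

```shell
# Requires GNU sed: \u (uppercase the next character) is a GNU extension
printf '%s\n' 'first line\nsecond line' |
  sed -e 's/^[[:lower:]]/\u&/' \
      -e 's/\([^\]\)\([[:lower:]]\)/\1\u\2/g' \
      -e 's/\([^\]\)\([[:lower:]]\)/\1\u\2/g'
# -> FIRST LINE\nSECOND LINE
```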
| Convert to uppercase, except for escaped characters |
1,465,563,411,000 |
I have a pattern variable with below value:
\"something//\\anotherthing'
and a file with below contents:
\"something//\\anotherthing'
\"something//\\anotherthing
\"something/\anotherthing'
\"something\anotherthing'
\\"something\/\/\\\\anotherthing'
When I compare a line read from the file against the pattern in the environment with == operator, I get the expected output:
patt="$pattern" awk '{print $0, ENVIRON["patt"], ($0 == ENVIRON["patt"]?"YES":"NO") }' OFS="\t" file
\"something//\\anotherthing' \"something//\\anotherthing' YES
\"something//\\anotherthing \"something//\\anotherthing' NO
\"something/\anotherthing' \"something//\\anotherthing' NO
\"something\anotherthing' \"something//\\anotherthing' NO
\\"something\/\/\\\\anotherthing' \"something//\\anotherthing' NO
But when I do the same with the ~ operator, the tests never match.
(I expected YES on the first line, as above):
patt="$pattern" awk '{print $0, ENVIRON["patt"], ($0 ~ ENVIRON["patt"]?"YES":"NO") }' OFS="\t" file
\"something//\\anotherthing' \"something//\\anotherthing' NO
\"something//\\anotherthing \"something//\\anotherthing' NO
\"something/\anotherthing' \"something//\\anotherthing' NO
\"something\anotherthing' \"something//\\anotherthing' NO
\\"something\/\/\\\\anotherthing' \"something//\\anotherthing' NO
To fix the issue with ~ comparison I need to double escape the escapes:
patt="${pattern//\\/\\\\}" awk '{print $0, ENVIRON["patt"], ($0 ~ ENVIRON["patt"]?"YES":"NO") }' OFS="\t" file
\"something//\\anotherthing' \\"something//\\\\anotherthing' YES
\"something//\\anotherthing \\"something//\\\\anotherthing' NO
\"something/\anotherthing' \\"something//\\\\anotherthing' NO
\"something\anotherthing' \\"something//\\\\anotherthing' NO
\\"something\/\/\\\\anotherthing' \\"something//\\\\anotherthing' NO
Note the double escapes in result of printing ENVIRON["patt"] in second column.
Question:
Where does escape sequence in awk happening when using tilde ~ comparison operator? on $0 (or $1, $2, ...) or in ENVIRON["variable"]?
|
The ~ operator does pattern matching, treating the right hand operand as an (extended) regular expression, and the left hand one as a string. POSIX says:
A regular expression can be matched against a specific field or string
by using one of the two regular expression matching operators, '~' and
"!~". These operators shall interpret their right-hand operand as a
regular expression and their left-hand operand as a string.
So ENVIRON["patt"] is treated as a regular expression, and needs to have all characters that are special in EREs to be escaped, if you don't want them to be have their regular ERE meanings.
Note that it's not about using $0 or ENVIRON["name"], but the left and right sides of the tilde. This would take the input lines (in $0) as the regular expression to match against:
str=foobar awk 'ENVIRON["str"] ~ $0 {
printf "pattern /%s/ matches string \"%s\"\n", $0, ENVIRON["str"] }'
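If a literal substring match is all that is needed, awk's index() function sidesteps the escaping problem entirely, since it never interprets its argument as a regular expression (a sketch with the question's pattern):

```shell
# index() does a plain string search, so the pattern needs no
# regex escaping at all
patt='\"something//\\anotherthing'\'
printf '%s\n' "$patt" 'no match here' |
  patt="$patt" awk '{ print (index($0, ENVIRON["patt"]) > 0 ? "YES" : "NO") }'
# -> YES
#    NO
```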
| Where are escape sequences needed when using tilde ~ operator in awk? |
1,465,563,411,000 |
This works (puts the date and time in the clipboard) in just iTerm:
printf "\e]1337;Copy=:$(date | base64)\a"; echo $(pbpaste)
This works in tmux running locally (using the DCS passthrough):
printf "\ePtmux;\e\e]1337;Copy=:$(date | base64)\a\e\\"; echo $(pbpaste)
This works in tmux running remotely:
printf "\ePtmux;\e\e]1337;Copy=:$(date | base64)\a\e\\"; echo $(ssh -p 2222 -qt localhost pbpaste)
My only problem is running tmux remotely under a local tmux:
printf "\ePtmux;\e\ePtmux;\e\e]1337;Copy=:$(date | base64)\a\e\\\e\\"; echo $(ssh -p 2222 -qt localhost pbpaste)
I think the problem is the inner \e\\ is being interpreted as the outer \e\\.
Is there some way to escape the inner \e\\ so it makes it the outer tmux properly?
|
You need to double every \e for each tmux, including the \e in the terminating \e\\, so:
printf "\ePtmux;\e\ePtmux;\e\e]1337;Copy=:$(date | base64)\a\e\e\\\e\\"
Alternatively if you configure tmux to use OSC 52 and then turn it on in iTerm2 ("Applications in terminal may access clipboard" from a quick search) it will pass through each tmux (creating a paste buffer in each) to the host clipboard. For tmux you will need something like:
set -as terminal-overrides ',tmux*:Ms=\\E]52;%p1%s;%p2%s\\007'
set -as terminal-overrides ',screen*:Ms=\\E]52;%p1%s;%p2%s\\007'
set -s set-clipboard on
Then you can do this in the innermost tmux:
printf "\033]52;$(date)\007"
Of course this will mean anything you copy in tmux will also go into the host clipboard which you may not want.
| How can I send an escape sequence from a nested tmux session to iTerm2? |
1,465,563,411,000 |
When you run cal on Linux, the output for the current month will reverse video highlight the current day. When I send that output to hexdump -c, I get some interesting results:
0000000 N o v e m b e r 2 0 1 6
0000010 \n S u M o T u
0000020 W e T h F r S a \n
0000030 1 2 _ \b _ \b 3
0000040 4 5 \n 6 7
0000050 8 9 1 0 1 1 1 2 \n
0000060 1 3 1 4 1 5 1 6 1 7 1
0000070 8 1 9 \n 2 0 2 1 2 2
0000080 2 3 2 4 2 5 2 6 \n 2 7
0000090 2 8 2 9 3 0
00000a0 \n
00000b0 \n
00000bc
As you can see, there is an invisible sequence being printed of _\b _\b before the '3' that is highlighted for today. _ being underscore (5F in ascii hex) and \b being Ctrl-H or 08 in ASCII hex. What is this? I know there are a lot of obscure terminal codes, but I would expect it to use something more standard like \e[7m. What is even stranger is that I can't reproduce the same behavior of cal by printing out the same characters using standard printf functions like one of these commands:
/usr/bin/printf "1 2 _\b _\b3 4 5\n"
/usr/bin/printf "1 2 _^H _^H3 4 5\n"
Where ^H is made by pressing Ctrl-V Ctrl-H. But neither of these produce the same inverse video output that cal does. I even tried writing a little C program to do it too. I've also tried with echo -e. The interesting thing is that while it doesn't reverse the video in the terminal, if I pipe the output from less -R, it changes its color to yellow and underlines it. On other terminals I tried it just underlines it. It almost seems like overstriking, but if I use a character other than _ it doesn't work, which makes me think that _\b is a single code sequence. And how does the video for that character then get inversed?
Any insight into this?
The man page says that the output of cal is supposed to be a bit for bit compatible version to the original Unix cal command. So I can only assume this is some ancient code.
|
Ctrl-H is backspace, it moves the cursor one step to the left. Sending an underscore, a backspace, and some other character was the way to underline something on a hard-copy ("paper") terminal back in the good old days. This was used to highlight the current day in the output of cal.
My cal program, when run in konsole does not output this sequence. If I run script -c cal and examine the resulting typescript file, I can see the cal program uses the escape sequence <esc>[7m to switch to inverse mode video.
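Incidentally, such overstrike sequences can be removed before further text processing by deleting every underscore-plus-backspace pair (classic tools such as col -b or ul also understand them); a small sketch:

```shell
# Strip the "_<backspace>" pairs old-style cal used for underlining
bs=$(printf '\b')                       # a literal backspace character
printf '1 2 _\b_\b3 4 5\n' | sed "s/_${bs}//g"
# -> 1 2 3 4 5
```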
| Why does 'cal' use weird 08 / ^H / \b terminal code for highlighting and how does that work? |
1,465,563,411,000 |
I am working with control sequences for text formatting and have stumbled upon some unexpected behavior. I have some text to output, in the middle of this text there is some smaller text block highlighted with a colored background. The original text may be pretty long, it may take a few lines, so the colored text block in the middle may appear on the multiple lines - starts on the one line and finishes on the other. Everything seems to work fine until it reaches the bottom of the terminal window, a lot of whitespace becomes colored:
Here is a script to reproduce:
# color.sh
echo -e 'default \x1B[43m color \n color \x1B[49m default';
As you can see, I've added a newline character \n in the text block, just to reproduce, but it my situation, when a text is pretty long and it does not fit into a single line, colored block gets split into multiple line, and as a result I have the same colored whitespace after the text.
# color-long.sh
echo -e 'default default default default default default default default default \x1B[43m color color \x1B[49m default';
I get this on Ubuntu 14.04, but was able to reproduce the same behavior on Yosemite 10.10.
I wonder what is the reason of this behavior and how it can be fixed without using different utilities for output (instead of echo), but maybe by using the same control sequences. I have control over the text, but not over the output process.
I've already tried to wrap sequences like \[\x1B[43m\], \001\x1B[43m\002, but it does not give any result, just adds extra [] to the text or outputs some unrecognized symbols.
|
This has nothing to do with bash; it is purely an effect of the terminal's behavior, specifically scrolling. When you reach the bottom of the screen and start to type on the next line, the terminal creates a new blank line by pushing everything up one line. (In older terminals this destroys the top line. In newer terminals the top line is just pushed into the scrollback buffer.) Now here is the hard question: what color is the new line? The foreground color is not an issue, because you cannot see it, but the background color is debatable. (Back in the day people argued over whether it should be black, gray or white; on the actual hardware, black, green, bright green, or the same in amber.) There were experiments with just using black, but there were complaints. What was settled on is that the new line (or the cleared area, in the case of a screen clear) gets the current background color as its background. So this behavior is by design, because it does the right thing in most cases.
So what do you want to do to get the behavior you want? When you change back to the default background (or at the end of the line), send a clear to end of line, which will set the background color for the rest of the line.
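One workaround sketch for the original script: never let a newline go out (or the terminal scroll) while the non-default background is active; reset before each \n and switch the color back on afterwards. An explicit clear-to-end-of-line (\033[K) after the reset achieves a similar effect for the remainder of the line:

```shell
# color-fixed.sh: reset the background before the newline and restore
# it after, so any line the terminal creates by scrolling is filled
# with the default background
printf 'default \033[43m color \033[49m\n\033[43m color \033[49m default\n'
```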
| Background color whitespace when end of the terminal reached |
1,465,563,411,000 |
My coworker has the following in the ~/.bash_profile of many of our servers:
echo -e "\033]50;SetProfile=Production\a"
The text doesn't seem to matter, since this also works:
echo -e "\033]50;ANY_TEXT\a"
But no text doesn't work; the \a is also required.
This causes his terminal in OSX to change profiles (different colours, etc.); but in my xterm, it changes the font to huge; which I can't seem to reset.
I have tried to reset this with:
Setting VT fonts with shift+right click
Do "soft reset" and "full reset" with shift+middle click
Sending of various escape codes & commands:
$ echo -e "\033c" # Reset terminal, no effect
$ echo -e "\033[0;m" # Reset attributes, no effect
$ tput sgr0 # No effect
$ tput reset # No effect
My questions:
Why does this work on xterm & what exactly does it do? Code 50 is listed as "Reserved"?
How do I reset this?
Screenshot:
|
Looking at the list of xterm escape codes reveals that (esc)]50;name(bel) sets the xterm's font to the font name, or to an entry in the font menu if the first character of name is a #.
The simplest way to reset it is to use the xterm's font menu (Ctrl + right mouse click) and select an entry other than Default. Alternatively, you can find out which font the xterm uses on startup, and set that with the escape sequence.
In the font menu you'll also find an option Allow Font Ops; if you uncheck that, you cannot any more change the font using escape sequences.
| Escape code 50 in xterm |
1,465,563,411,000 |
I know that Konsole supports an escape code like "\<Esc>]50;CursorShape=0\x7" that will change the cursor shape. I was wondering if any other terminal emulators support changing the cursor shape via an escape code?
|
The Linux VGA/framebuffer consoles use the Esc[?...c code.
xterm supports the VT520 sequence DECSCUSR Esc[... q since patch 282.
| Escape Code to Change Cursor Shape |
1,465,563,411,000 |
On RHEL6 and CentOS 6, /etc/bashrc sets PROMPT_COMMAND here:
case $TERM in
xterm*)
if [ -e /etc/sysconfig/bash-prompt-xterm ]; then
PROMPT_COMMAND=/etc/sysconfig/bash-prompt-xterm
else
PROMPT_COMMAND='echo -ne "\033]0;${USER}@${HOSTNAME%%.*}:${PWD/#$HOME/~}"; echo -ne "\007"'
fi
;;
screen)
if [ -e /etc/sysconfig/bash-prompt-screen ]; then
PROMPT_COMMAND=/etc/sysconfig/bash-prompt-screen
else
PROMPT_COMMAND='echo -ne "\033_${USER}@${HOSTNAME%%.*}:${PWD/#$HOME/~}"; echo -ne "\033\\"'
fi
All of these options, as far as I know, are printed invisibly. What is the use of this?
I know that PROMPT_COMMAND is to be executed before display the prompt (PS1 usually). I do not understand why echoing something that is not visible is of any use.
|
\033 is the octal code for the Esc (Escape) character, which is a good hint that the echoed strings in your PROMPT_COMMAND are terminal control sequences. Both sequences in your examples look like they set the terminal title to user@host:pwd.
The first case, xterm* sets the window name and icon title. For a detailed explanation, look at the list of xterm control sequences and scroll down until you find OSC P s; P t; ST under Operating System Controls (OSC is ESC ] and ST is ESC \).
The second case is for the screen terminal emulator, and in the list of screen control sequences, it explains that ESC _ sets screen's hardstatus (simply put, that's the title of the screen window).
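For setting such a title by hand, the xterm variant can be wrapped in a small helper (ESC ] 0 ; text BEL sets both the icon name and the window title):

```shell
# set_title TEXT... - set the xterm window and icon title
set_title() {
    printf '\033]0;%s\007' "$*"
}

# e.g. set_title "$USER@$HOSTNAME:$PWD"
```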
| In Bash, why is PROMPT_COMMAND set to something invisible? |
1,465,563,411,000 |
user prompt:
user - / :
up/down keys to locate historical commands.
user - / : historical command
clear the "historical command" :
user - / :hist
No matter how many times I hit BackSpace.. unable to delete "hist" part.
technically it is not even there. but on the screen it simply wont clear out until I hit "enter"
user - / :hist
user - / :
what causes it ?
The problem is probably due to the colors I used here. but how can It be corrected ?
PS1="\[\033[38;5;190m\]\u - \W : \e[m"
|
Your final ANSI escape sequence isn't wrapped. The color-setting sequence at the start is enclosed in bash's non-printing markers \[...\], but the reset code at the end is not (and it is clearer written explicitly as \e[0m), so bash miscounts the printable width of the prompt and mis-redraws the line. Change your PS1 to:
PS1="\[\033[38;5;190m\]\u - \W : \[\e[0m\]"
Or, to keep things consistent:
PS1="\[\033[38;5;190m\]\u - \W : \[\033[0m\]"
| bashrc PS1 : user prompt won't clear entire text [duplicate] |
1,465,563,411,000 |
$ ag findVersions( src/java/com/google
-bash: syntax error near unexpected token `('
I tried quoting it, escaping it, both were not suitable.
$ ag findVersions\( src/java/com/google
ERR: pcre_compile failed at position 13. Error: missing )
$ ack findVersions\( src/java/com/twitter
Invalid regex 'findVersions(':
Unmatched ( in regex; marked by <-- HERE in m/findVersions( <-- HERE / at /opt/bin/ack line 2989.
|
Since the first argument is a regex, you'll have to both: escape regex special characters, and protect it from the shell
ag 'findVersions\(' src/java/com/google
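Alternatively, these tools have a fixed-string mode that disables regex interpretation altogether: grep -F, and (in the versions I have seen; check your tool's --help) -Q/--literal in ack and ag. With grep, for example:

```shell
# -F makes grep treat the pattern as a literal string, so "(" needs
# no regex escaping; shell quoting is still required
printf '%s\n' 'int findVersions(Foo f)' 'unrelated line' |
  grep -F 'findVersions('
# -> int findVersions(Foo f)
```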
| How use '(' token when using it for searching with ag / awk? |
1,465,563,411,000 |
Compare the following two commands:
mysqldump database_name --hex-blob -uuser_name -p | tee database_name_tee.sql
mysqldump database_name --hex-blob -uuser_name -p > database_name_out.sql
If I run the first, on completion I see the following on my terminal:
$ 62;c62;c62;c62;c
Where does this come from? Does it suggest that something has gone wrong somewhere in the process? Are these control characters which are being output for some reason?
U+0C62 is Telugu Vowel Sign Vocalic L, which I’m pretty sure is not part of my data, so I don’t think this is Unicode. Anyway, the sequence seems to be not c62 but 62;c. This could be a control character of some kind. And whatever is causing it is included in the output file. If I later cat either database_name_tee.sql or database_name_out.sql, I again see this sequence once the cat is complete.
tail database.sql -n200 does not produce this output; -n300 produces just $ 62;c62;c; and -n400 produces $ 62;c62;c62;c62;c. So whatever is causing this is distributed throughout the file.
Mucking around with head and tail, I found one of the culprits: a single line which, when saved to a separate file and printed with cat, produces $ 62;c62;c. My problem is that this single line is 1043108 bytes.
(The generated SQL file is perfectly fine, and runs without errors. I don’t think that this has anything to do with MySQL per se.)
I’m running the initial mysqldump on a CentOS server, and am seeing the same effects from cat on both the server itself and my Ubuntu desktop, so this seems to be a general Bash thing.
od -c problem_line produces 65174 lines of output, so I cut it down to a smaller section which demonstrates the same output (also available as a plain hexdump).
|
There are no escape characters in the octal dump (those would be 033).
There are a few 8-bit control codes (generally not implemented by most terminals other than xterm). The octal 232 is hex 0x9a, and (referring to XTerm Control Sequences):
ESC Z
Return Terminal ID (DECID is 0x9a). Obsolete form of CSI c (DA).
...
CSI Ps c Send Device Attributes (Primary DA).
Ps = 0 or omitted -> request attributes from terminal. The
response depends on the decTerminalID resource setting.
...
-> CSI ? 6 c ("VT102")
The characters come from a response by the terminal to the DECID control character. The details of the response depend on the terminal emulator (which was not mentioned in the question).
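To confirm it really is the 0x9a (DECID) byte scattered through a dump, its occurrences can be counted without involving the terminal at all:

```shell
# Count 0x9a (octal 232) bytes; each one provokes one "62;c"-style
# reply from the terminal when the file is displayed
printf 'INSERT INTO t VALUES (1);\232more\232\n' > /tmp/decid_demo.$$
tr -cd '\232' < /tmp/decid_demo.$$ | wc -c
# -> 2
rm -f /tmp/decid_demo.$$
```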
| What does it mean when characters appear on the prompt after an operation? |
1,465,563,411,000 |
I found that by printing \r from a script the cursor moves to the beginning of the line.
What character will move the cursor up one line?
Where is a list of these special characters?
EDIT: Working in OSX. Also, \a rings the bell.
|
If your terminal emulator supports ANSI escape sequences, you can move the cursor up by running this:
echo -n -e '\033[2A'
or
ruby -e 'print "\033[2A"'
This will move the cursor up 2 lines. It works in gnome-terminal and xterm and many others.
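The pattern generalizes: CSI n A, B, C and D move the cursor up, down, right and left by n positions. A few helpers, assuming an ANSI/VT100-compatible terminal:

```shell
# Cursor-movement helpers built on CSI sequences
cursor_up()    { printf '\033[%dA' "${1:-1}"; }
cursor_down()  { printf '\033[%dB' "${1:-1}"; }
cursor_right() { printf '\033[%dC' "${1:-1}"; }
cursor_left()  { printf '\033[%dD' "${1:-1}"; }

# cursor_up 2 emits the same bytes as echo -n -e '\033[2A' above
```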
| What are the special characters to print from a script to move the cursor? |
1,465,563,411,000 |
I understand that any character is comprised of one or more byte/s.
If I am not mistaken, at least in *nix operating systems, a character will generally (or totally?) be comprised of only one byte.
What is the difference between a byte and a character (at least *nixwise)?
|
A byte is by convention and POSIX definition eight bits. A bit is a binary digit (i. e. the fundamental 1 or 0 that is at the base of nearly all digital computing).
A character is often one byte and in some contexts (e. g. ASCII) can be defined to be one byte in length. However, Unicode and UTF-8 and UTF-16 define expanded character sets wherein a single character (or glyph) can be defined by data payloads longer than one byte in length.
The single character:
Q̴̢̪̘̳̣̞̩̪̑̍̉̆̉͛̑̂̕͝
is a single character, but it is composed in Unicode by applying multiple accents (or diacritics) to the base glyph, the simple Q. This encoding is many more bytes than one in length: Putting solely that character into a file and displaying the contents with hexdump rather than cat on my locale yields:
$ hexdump -C demo
00000000 51 cc b4 cc 91 cc 8d cc 89 cc 86 cc 89 cd 9d cd |Q...............|
00000010 9b cc 91 cc 95 cc 82 cc aa cc 98 cc b3 cc a3 cc |................|
00000020 a2 cc 9e cc a9 cc aa 0a |........|
00000028
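The byte/character distinction can also be seen with wc: wc -c counts bytes, while wc -m counts characters according to the current locale:

```shell
# "é" is two bytes in UTF-8 (0xC3 0xA9) but a single character
printf 'é' | wc -c     # -> 2 (bytes)
# In a UTF-8 locale, `printf 'é' | wc -m` reports 1 (one character);
# in the C locale, wc -m falls back to counting bytes.
```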
| What is the difference between a byte and a character (at least *nixwise)? |
1,465,563,411,000 |
I'm running iwlist wlo1 scan | grep ESSID inside a script.
It displays French characters in the following format
\xC3\x89 for É and \xC3\xA9 for é.
I'm not sure what this format is called. I tried using an answer for converting unicode echo -ne '\xC3\xA9' | iconv -f utf-16be but it converted to 쎩.
What is the official name for this format and how can I convert it in bash?
|
Hexadecimal numeric constants are usually represented with a 0x prefix. Character and string constants may express character codes in hexadecimal with the prefix \x followed by two hex digits.
echo -ne '\xC3\x89' should give you É.
-e - enable interpretation of backslash escapes(including \xHH - byte with hexadecimal value HH (1 to 2 digits))
To deal with better portability use printf function:
printf "%b" '\xC3\x89'
É
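As a worked example on an ESSID-like string (the network name here is invented), the whole escaped string decodes in one pass. Note that \xHH support in %b is an extension provided by bash's builtin printf and GNU printf, not plain POSIX:

```shell
# Decode every \xHH escape in the string at once
printf '%b\n' 'ESSID:"R\xC3\xA9seau priv\xC3\xA9"'
# -> ESSID:"Réseau privé"
```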
| Bash convert \xC3\x89 to É? |
1,465,563,411,000 |
This works fine:
sed -i 's# @driver.find_element(:xpath, "//a\[contains(@href,##' temp_spec.rb
against a source of
@driver.find_element(:xpath, "//a[contains(@href,'change_district')]").click
I am just left with:
'change_district')]").click`
but when I try to add a single quote at the end it fails:
sed -i 's# @driver.find_element(:xpath, "//a\[contains(@href,\'##' temp_spec.rb
syntax error near unexpected token `('
I am using \' to escape the single quote.
Note that I am using a # as a delimiter instead of the normal / delimiter as there are /'s in the text.
|
The strings inside single quotes are used verbatim by the shell (and hence cannot contain other single quote, since that would be treated as the closing one of a pair). That said, you have several options:
end the single-quoted string, add escaped (either by backslash or double quotes) single quote, and immediately start the following part of your string:
$ echo 'foo'\''bar' 'foo'"'"'bar'
foo'bar foo'bar
use double quotes for your string:
$ echo "foo'bar"
foo'bar
this of course means that you have to be careful about what gets expanded inside the double quotes by your shell;
if you are using utilities that understand \xHH or \0nnn escaping (GNU sed does), you can use one of those (ASCII single quote is at code point 39 (0x27), hence e.g. \x27).
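Applied to a sed command like the question's (shortened here so the quoting stays visible), the first technique looks like this:

```shell
# 's#c'\''d#C_D#'  ->  sed receives the script  s#c'd#C_D#
printf "abc'def\n" | sed 's#c'\''d#C_D#'
# -> abC_Def
```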
| How to escape a single quote? [duplicate] |
1,465,563,411,000 |
I have found this answer providing how to manipulate the current xterm window's dimensions, ie:
echo -ne "\e[8;30;30t"
How can I modify this to maximize the window (xterm's alt + enter shortcut)?
Also, where do I find more info on these xterm command line modifiers?
UPDATE:
See multiple solutions below for both maximize and full screen (without title and borders)
|
The commands are
echo -ne '\e[9;1t'
to maximize and
echo -ne '\e[9;0t'
to restore the original size. It's described in the
xterm control sequences documentation.
| Maximize xterm via bash script |
1,479,710,941,000 |
I've checked out a lot of links on how to grep individual escape characters or literal strings, but I just cannot get them to combine to find the background-red ANSI escape sequence ^[41m, even typing the ^[ as both Ctrl+V+Ctrl+[ and the two literal characters ^+[ and using both the -E and -F flags.
The raw bytes I am trying to find, given by hexdump are:
1b 5b 33 37 6d 1b 5b 34 31 6d 30 2e 30 30 25
Where this corresponds to WHITE FOREGROUND RED BACKGROUND 0.00%. I'm producing these codes with Python's colorama package and Fore.WHITE+Back.RED, just in case anyone is curious.
So, what is the secret I am missing?
|
but I just cannot get them to combine to find the background-red ANSI
escape sequence ^[41m
If you open this file in vim, you will see that the sequence is not ^[41m but ^[[41m, and that ^[ is skipped over as a single unit when you navigate with the arrow keys:
1b is Escape, represented by the single escape character ^[, which can be entered with Ctrl+V followed by Esc. ^[ looks like 2 characters but it's not; it's a single one:
xb@dnxb:~/Downloads/grep$ ascii 1b
ASCII 1/11 is decimal 027, hex 1b, octal 033, bits 00011011: called ^[, ESC
Official name: Escape
xb@dnxb:~/Downloads/grep$
Do this (use Ctrl+V followed by Esc to create ^[, then type \[41m as ordinary characters):
xb@dnxb:~/Downloads/grep$ hexdump -C /tmp/2
00000000 1b 5b 33 37 6d 1b 5b 34 31 6d 30 2e 30 30 25 0a |.[37m.[41m0.00%.|
00000010
xb@dnxb:~/Downloads/grep$ \grep '^[\[41m' /tmp/2
0.00%
xb@dnxb:~/Downloads/grep$ \grep '^[\[41m' /tmp/2 | hexdump -C
00000000 1b 5b 33 37 6d 1b 5b 34 31 6d 30 2e 30 30 25 0a |.[37m.[41m0.00%.|
00000010
xb@dnxb:~/Downloads/grep$
Ensure you run grep with a \ prefix to bypass the --color=auto alias, which would otherwise alter the output:
[Alternative]:
\grep -P '\e\[41m' (Credit: OP's comment)
\grep '^[\[41m', where ^[ is created with Ctrl+V followed by Ctrl+[. This trick is useful when a key does not send the expected byte; for example, when Backspace is not 0x08, Ctrl+V followed by Ctrl+H (^H, ASCII 08) still produces it.
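As a quick sanity check, the whole round trip can be reproduced without a file, assuming a GNU grep built with PCRE support for -P:

```shell
# Generate the WHITE-foreground/RED-background prefix from the question
# and count the lines matching the raw ESC [ 4 1 m bytes.
printf '\033[37m\033[41m0.00%%\n' | \grep -cP '\x1b\[41m'
```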
| grep for an ANSI escape code |
1,479,710,941,000 |
I am trying to make a script that takes some atom feeds and posts them to slack through the Slack API via curl. What I have now works for simple texts, but some of them have double quotes or & characters in them and that seems to annoy slack API as I get an invalid payload error. Here's my script:
#!/bin/bash
rsstail -i 3 -u "http://MY_FEED_URL" -n 0 | while read line;
do
# This is just a sample text, it should be ${line}
data='Something "&" and something do " "';
payload="payload={\"channel\": \"#my_channel\", \"username\": \"Bot\", \"text\": \"${data}\", \"icon_emoji\": \":ghost:\"}";
echo ${payload};
curl \
-H "Accept: application/json" \
-X POST \
-d '${payload}' \
https://hooks.slack.com/services/xxxx
done
The output of the "echo" is:
payload={"channel": "#my_channel", "username": "Bot", "text": "Something "&" and something do " "", "icon_emoji": ":ghost:"}
I am not advanced in bash scripting and I need a little bit of help. What am I doing wrong ?
Thanks!
|
You need to url-encode the data, and the easiest way to do this is to get curl to do it for you by replacing the -d option with --data-urlencode.
Also, you need to use double quotes or the shell will not expand the variable, so we have
curl \
-H "Accept: application/json" \
-X POST \
--data-urlencode "${payload}" \
https://hooks.slack.com/services/xxxx
You can also simplify setting payload= by having part of the string inside single quotes, and part within double quotes, provided there is no space between
the parts, as in '...'"..."'...'. So, we get
payload='payload={"channel": "#my_channel", "username": "Bot", "text": "'"${data}"'", "icon_emoji": ":ghost:"}'
You probably need to escape the double quotes in data, perhaps with \, as they will be inside "" in a json string.
data='Something \"&\" and something do \" \"'
If you have read your data from the input into variable line, you can do this replacement with bash:
line=${line//\"/\\\"}
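Putting the quoting pieces together (leaving out the curl call, which needs a real hook URL), a sketch of just the variable handling looks like this, using the sample data from the question:

```shell
# Escape embedded double quotes with bash's pattern substitution,
# then build the JSON payload around the result.
line='Something "&" and something do " "'
line=${line//\"/\\\"}
payload='payload={"channel": "#my_channel", "username": "Bot", "text": "'"${line}"'", "icon_emoji": ":ghost:"}'
printf '%s\n' "$payload"
```

The resulting $payload is then passed to curl with --data-urlencode "$payload" as shown above.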
| Escape strings to be posted via curl |
1,479,710,941,000 |
My username is user1 and password is mypass8@
I wish to pass this username and password to curl command.
How do I escape the @ in the password when passing it to the curl command?
curl -k -X POST https://user1:mypass8@@myshop.com/job/build
I get the error message:
MESSAGE:Invalid password/token for user: user1
I also tried the following, but all fail, stating the password is not right; the @ character is not considered part of the password by the curl command.
curl -k -X POST https://'user1:mypass8@'@myshop.com/job/build
curl -k -X POST https://"user1:mypass8@"@myshop.com/job/build
curl -k -X POST https://user1:"mypass8@"@myshop.com/job/build
curl -k -X POST https://user1:'mypass8@'@myshop.com/job/build
Update: the login I am trying is for the Jenkins admin console.
Can you please suggest ?
|
I fixed the issue using a base64 conversion...
echo -n '[USERNAME]:[PASSWORD]' | base64
This will generate a token to pass to curl as a header in this way:
curl -H 'Authorization: Basic [add value obtained from the previous cmd]'
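For the credentials in the question, the token can be generated like this; printf rather than echo -n sidesteps shells where echo handles -n differently:

```shell
# Build the Basic-auth token; 'user1:mypass8@' comes from the question.
token=$(printf '%s' 'user1:mypass8@' | base64)
printf '%s\n' "$token"   # dXNlcjE6bXlwYXNzOEA=
```

The request then becomes curl -k -X POST -H "Authorization: Basic dXNlcjE6bXlwYXNzOEA=" https://myshop.com/job/build, with no @ left in the URL at all.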
| How to escape '@' in curl command password field |
1,479,710,941,000 |
I am playing around with shell scripts that use ANSI codes and found that for one reason or another different escape codes are supported depending on your terminal/OS.
In some cases I get a dump of unparsed gunk unexpectedly, which I'm assuming means my terminal (on Mac OS) doesn't support that escape code used, despite having read in a number of places that these mean the same thing:
27 = 033 = 0x1b = ^[ = \e
In searching I found this question about detecting slash-escaped support.
The selected answer sniffs the $TERM value to detect support:
case $TERM in
(|color(|?))(([Ekx]|dt|(ai|n)x)term|rxvt|screen*)*)
PS1=$'\e\]0;$GENERATED_WINDOW_TITLE\a'"$PS1"
esac
But I wonder how reliable that is.
Is there a standard way to check for escape code support (primarily for Bash), or is that script pretty much the run of the mill?
Alternatively, what escape code can I use to 'guarantee' the most
wide-spread support?
What about echo expansion -e?
What are general best practices in terms of portability/availability/distribution for scripts that use or reference control codes?
This is a nice read too for anyone else looking for info.
|
Have you got specific operations in mind?
Here is an example of using standout mode, which on many terminals will give a strong visible result:
tput smso; echo hello, world; tput rmso
The tput tool uses the value of the $TERM environment variable to determine which escape sequences to output - if any. For example
TERM=xterm
( tput smso; echo hello, world; tput rmso ) | hexdump -C
00000000 1b 5b 37 6d 68 65 6c 6c 6f 2c 20 77 6f 72 6c 64 |.[7mhello, world|
00000010 0a 1b 5b 32 37 6d |..[27m|
TERM=dumb
( tput smso; echo hello, world; tput rmso ) | hexdump -C
00000000 68 65 6c 6c 6f 2c 20 77 6f 72 6c 64 0a |hello, world.|
Interesting characteristic pairs can be found in man 5 terminfo, some of which are as follows:
Standout: smso and rmso
Underline: smul and rmul
Blink (yes!): blink
Doublewide: swidm and rwidm
Reverse: rev
Cancel all: sgr0
If you want to write bold text, but only if the terminal understands it, then a snippet like this will work
boldOn=$(tput smso)
boldOff=$(tput rmso)
# ...
printf "%s%s%s\n" "$boldOn" 'This message will be in bold, when available' "$boldOff"
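A common guard, sketched here, is to ask the terminal how many colors it supports and fall back to plain text otherwise; the 8-color threshold is an assumption, not a standard:

```shell
# Emit color only when tput exists and reports at least 8 colors;
# otherwise degrade gracefully to uncolored output.
ncolors=$( (tput colors) 2>/dev/null || echo 0 )
if [ "${ncolors:-0}" -ge 8 ]; then
    red=$(tput setaf 1) reset=$(tput sgr0)
else
    red='' reset=''
fi
printf '%serror:%s something went wrong\n' "$red" "$reset"
```

This way the same script works unchanged under TERM=dumb, in cron jobs, and in pipelines.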
| Programmatically detect the ANSI escape code supported by terminal |
1,479,710,941,000 |
Help me decipher the escape sequences created by the ncurses library and captured by strace. I am exploring how ncurses interacts with the terminal and want to understand its "handshake protocol". I have found some descriptions already, but didn't understand all of them, like "Set cursor key to cursor".
echo $TERM prints xterm-256color.
Original
write(1, "\33[?1049h\33[22;0;0t\33[1;39r\33(B\33[m\33[4l\33[?7h\33[H\33[2J", 46) = 46
write(1, "Hello World !!!", 15) = 15
write(1, "\33[39;1H\33[?1049l\33[23;0;0t\r\33[?1l\33>", 32) = 32
My assumptions
write(1, "
\33[?1049h # go to alternate screen
\33[22;0;0t
\33[1;39r
\33(B # Set United States G0 character set
\33[m # Turn off character attributes
\33[4l
\33[?7h # Set auto-wrap mode
\33[H # Move cursor to upper left corner
\33[2J # Clear entire screen
", 46) = 46
write(1, "Hello World !!!", 15) = 15
write(1, "
\33[39;1H
\33[?1049l # Go back to the initial screen
\33[23;0;0t\r
\33[?1l # Set cursor key to cursor
\33>
", 32) = 32
The testing program source
#include <curses.h>

int main()
{
napms(25000); /* This pause is needed to catch the process by strace*/
initscr(); /* Start curses mode */
printw("Hello World !!!"); /* Print Hello World */
refresh(); /* Print it on to the real screen */
endwin(); /* End curses mode */
return 0;
}
|
For XTerm and anything that claims compatibility with it, you'll want this:
https://invisible-island.net/xterm/ctlseqs/ctlseqs.html
You'll also need the manual for the VT100 terminal, which XTerm emulates and expands on:
https://vt100.net/docs/vt100-ug/contents.html
The Linux console_codes(4) man page describes the control codes used by the Linux console, which is also a superset of VT100 and the man page sometimes has more verbose descriptions than the other sources above:
http://man7.org/linux/man-pages/man4/console_codes.4.html
The unknown codes in your example:
\33[22;0;0t
Here, the first part \33[ (or ESC [ ) is known as CSI or Control Sequence Introducer.
The CSI <number> ; <number> ; <number> t is a window manipulation sequence.
Control sequences ending with t always take three numeric parameters, but won't always use all of them. The first parameter is 22, and the second is 0, so this code tells the terminal emulator to save the current window and icon titles so that they can be restored later.
\33[1;39r
This is CSI <number> ; <number> r. The meaning is "set scrolling region". Setting this to something smaller than the size of the current window would efficiently allow keeping something static like a menu line at the top of a TUI display, a status line at the bottom, or both while displaying a lot of text within the scrolling region.
\33[4l
This is CSI <one or more numbers> l. The meaning is "reset mode". Value 4 resets (disables) "insertion-replace mode", or in plain terms, tells that anything printed to the screen should simply overwrite what was there before.
\33[39;1H
This is CSI <number> ; <number> H. This moves the cursor to the 39th line, 1st column.
\33[23;0;0t
This is another window manipulation sequence. This restores the previously-saved window and icon titles. Obviously your test program did not change the titles at all, but these sequences are part of the standard initialization/exit procedures done by initscr() and endwin() respectively.
\33[?1l # Set cursor key to cursor
This sets the cursor keys of the VT100 keypad to the regular "cursor key mode". There was also another mode, intended to allow these keys to be used for application-specific purposes, like an extra set of function keys. The VT100 terminal produced different output for these keys according to the mode settings; this just ensures that if the application switched the cursor keys to a non-default mode, they will be returned to default mode before the program exits.
\33>
This is just ESC >. This is similar to the previous code, but for the numeric keypad.
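For reference, the scrolling-region sequence decoded above can be reproduced from the shell; this sketch just emits the bytes (a VT100-compatible terminal would act on them):

```shell
printf '\033[2;23r'   # CSI 2;23 r - restrict scrolling to lines 2-23
printf '\033[r'       # CSI r with no parameters - full-screen region
```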
| The deciphering of ncurses escape sequences |
1,479,710,941,000 |
I'm trying to match against some UTF-8 characters.
The problem is grep doesn't translate \x byte escapes, so
this fails:
echo -e '\xd8\xaa' | grep -P '\xd8\xaa'
while this succeeds:
echo -e '\xd8\xaa' | grep -P $(printf '\xd8\xaa')
Can grep understand byte escapes directly without using printf? How?
|
This fails:
$ echo -e '\xd8\xaa' | grep -P '\xd8\xaa' | hexdump
This succeeds:
$ echo -e '\xd8\xaa' | grep -P $'\xd8\xaa' | hexdump
0000000 aad8 000a
0000003
Documentation
From man bash:
Words of the form $'string' are treated specially. The word expands to string, with backslash-escaped characters replaced as
specified by the
ANSI C standard. Backslash escape sequences, if present, are decoded as follows:
\a alert (bell)
\b backspace
\e
\E an escape character
\f form feed
\n new line
\r carriage return
\t horizontal tab
\v vertical tab
\\ backslash
\' single quote
\" double quote
\? question mark
\nnn the eight-bit character whose value is the octal value nnn (one to three digits)
\xHH the eight-bit character whose value is the hexadecimal value HH (one or two hex digits)
\uHHHH the Unicode (ISO/IEC 10646) character whose value is the hexadecimal value HHHH (one to four hex digits)
\UHHHHHHHH
the Unicode (ISO/IEC 10646) character whose value is the hexadecimal value HHHHHHHH (one to eight hex digits)
\cx a control-x character
The expanded result is single-quoted, as if the dollar sign had not been present.
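So the whole test can be done in one line, since $'...' is expanded by bash itself before grep ever runs:

```shell
# Both the data and the pattern contain the same raw bytes 0xd8 0xaa,
# so grep counts one matching line.
printf '\xd8\xaa\n' | grep -c $'\xd8\xaa'
```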
| Making grep understand byte escapes |
1,479,710,941,000 |
I'm reading a book on shell programming and learning that following commands are equivalent, which beep on my Mac but don't make any sound on Ubuntu:
$ echo $'\a'
$ echo -e "\a"
$
however in both cases, terminal prompts a blank line. My questions are:
What's $'\a' here? Parameter expansion, command substitution or something else?
Why does echo print an empty line, as if the parameter is undefined, like for instance in the case of
$ echo $NONSENSE, which also prints an empty line? Thanks!
|
don't make any sound on Ubuntu:
perhaps because your particular terminal emulator is configured to avoid sounds, or because the pcspkr kernel module is unloaded, etc... You could use another terminal emulator (e.g. the old xterm) which should beep.
What's $'\a' here?
Read the chapter on shell expansion of the Bash manual. It is called ANSI-C quoting (as commented by South Parker).
Why echo prints empty line
the echo command (read more in echo(1)...) is often a bash shell builtin, so (without any -n) it prints its expanded arguments (here the bell character) followed by a newline. But your terminal emulator doesn't ring the audible bell (and the bell character is not displayed, since it is a control character)
BTW, Apple is rumored to dislike the GPLv3+ license, so you might upgrade your bash to a recent version (e.g. 4.4 in august 2017) on your Apple computer.
You might read the tty demystified for an historical approach to terminal emulators on Unix. See also pty(7).
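You can confirm that $'\a' really is the single BEL byte (0x07) even when nothing audible happens:

```shell
# od shows the byte that the terminal would interpret as the bell.
printf '%s' $'\a' | od -An -tx1
```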
| Mechanism of BELL character '\a'? |
1,479,710,941,000 |
As others have pointed out, color codes in PS1 should be bracketed by \[ and \] to avoid them taking up horizontal space. I've added the necessary code to .bashrc:
highlight()
{
if [ -x /usr/bin/tput ]
then
printf '\['
tput bold
printf '\]'
printf '\['
tput setaf $1
printf '\]'
fi
shift
printf -- "$@"
if [ -x /usr/bin/tput ]
then
printf '\['
tput sgr0
printf '\]'
fi
}
highlight_error()
{
highlight 1 "$@"
}
The last function is used in PS1 in both normal and escaped command substitutions to be able to change the string based on the result of the previous command:
# Exit code
PS1="\$(exit_code=\${?#0}
highlight_error \"\${exit_code}\${exit_code:+ }\")"
...
if [ "$USER" == 'root' ]
then
PS1="${PS1}$(highlight_error '\u')"
else
PS1="${PS1}\u"
fi
The issue is then that the escaped brackets are output as literals, so my prompt looks like this after running a command which doesn't exist:
\[\]\[\]127 \[\]user@machine:/path
$
Wrapping the escaped highlight_error in printf %b didn't help. How can I fix the output so that I can use the functions for both normal and escaped command substitutions?
|
It seems that any escape sequence actually in PS1 literally needs to be wrapped in \[ and \], but if you call a function or command that produces output, it does not need to be wrapped.
So why not just move the
"\$(exit_code=\${?#0}
highlight_error \"\${exit_code}\${exit_code:+ }\")"
stuff inside a function, e.g.
print_error_if_error()
{
exit_code=$?
if [ $exit_code -ne 0 ]; then
highlight_error "$exit_code "
fi
}
and then I think you can remove all the \[ and \] stuff...
highlight()
{
if [ -x /usr/bin/tput ]
then
tput bold
tput setaf $1
fi
shift
printf -- "$@"
if [ -x /usr/bin/tput ]
then
tput sgr0
fi
}
highlight_error()
{
highlight 1 "$@"
}
PS1='$(print_error_if_error)'
# ...
if [ "$USER" = 'root' ]
then
PS1="${PS1}$(highlight_error '\u')"
else
PS1="${PS1}\u"
fi
| Re-escape brackets in PS1 |
1,479,710,941,000 |
I am using iTerm with tmux, but I see this without tmux. If I hit SHIFT+SPACE I get the escape sequence ^[[32;2u. Is there a way to suppress or disable that escape sequence? I am not seeing it in my preferences as a predefined escape sequence
|
The solution depends on the application that enables the xterm modifyOtherKeys feature. iTerm2 recently adapted/imitated/whatever this from xterm.
The xterm manual in turn mentions a page in the FAQ XTerm – “Other” Modified Keys which gives information on the feature, and in that page's Other Programs section points to an iTerm2 discussion where Nachman added the CSI u feature in January 2019. That gives the same information as modifyOtherKeys, using a slightly different format (see formatOtherKeys, which dates from 2008).
Here's a screenshot showing the preference:
The "?" help points to iTerm2's website, which is a bit lacking in depth (ymmv). My intention when I developed the feature was that applications would enable it temporarily, rather than making the terminal turn it on indefinitely.
| SHIFT+SPACE on terminal sends escape sequence |
1,479,710,941,000 |
I am attempting to write a simple bash parser. I am following the steps in this wiki. One of the assumptions I make is that I can do a pass over the entire input string and remove all single and double quotes if I escape the characters correctly. When run in bash, the two strings should yield the same output.
This section of my parser takes any given string and removes single and double quotes from it (except quotes that are escaped and interpreted as literals). Both strings should still yield the same result when executed in bash.
My parser converts the original to My Parse like below. However, the original works, but My Parse does not.
# Original
$ node -p "console.log($(echo \"hello world\"))"
hello world
# My Parse: Escape everything within double quotes except command substitution
v v
$ node -p \c\o\n\s\o\l\e\.\l\o\g\($(echo \"hello world\")\)
[eval]:1
console.log("hello
^^^^^^
SyntaxError: Invalid or unexpected token
I have several ideas about why my parsing is wrong.
I am not understanding some fundamental aspect of how command substitution inside double quotes works. My understanding is command substitution occurs first, then quotes are processed.
I am not understanding some fundamental aspect of how command substitution is actually output. My understanding is $(echo \"hello world\") should yield the single string "hello world", not the two words "hello and world"
There is some special-ness with the echo command (potentially because it is variadic). I am actually getting lucky that this works in the original scenario, but actually, changing the command inside the command substitution could break it...
There is a problem with my node / javascript problem. This is pretty simple js, so I don't think this is it...
One last interesting thing: It works when I wrap the command substitution in double quotes. Maybe to ask this whole question differently, how could I write the same input as below without the double quotes (excluding the escaped ones).
# Escape everything but keep command substitution in double quotes
v v
$ node -p \c\o\n\s\o\l\e\.\l\o\g\("$(echo \"hello world\")"\)
hello world
Note: This question is somewhat a follow up to this question about escaping double quotes
|
This is Word Splitting in action. Before we start, peruse the Shell Expansions paying attention to the order in which they are performed.
Looking at node -p "console.log($(echo \"hello world\"))"
brace expansion? no
tilde expansion? no
parameter expansion? none here
command substitution? yes, leaving you with
node -p "console.log("hello world")"
arithmetic expansion? no
process substitution? no
word splitting? the argument to -p is in quotes, so no.
filename expansion? no
quote removal is done
bash spawns node passing it 2 arguments, -p and console.log("hello world")
Now look at node -p console.log($(echo \"hello world\"))
after command substitution, we have node -p console.log("hello world")
when we get to word splitting, the argument to -p has no quotes to protect it. For the current command, bash has 4 tokens:
node -p console.log("hello world")
^^^^ ^^ ^^^^^^^^^^^^^^^^^^ ^^^^^^^
bash spawns node, passing it 3 arguments: -p, console.log("hello, and world") -- console.log("hello is clearly a javascript syntax error, and you see what happens.
Lots more detail at Security implications of forgetting to quote a variable in bash/POSIX shells
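The splitting is easy to make visible by printing each resulting word in brackets; this sketch uses set -- to load the words into the positional parameters:

```shell
# Unquoted command substitution: the output is split on whitespace
# before the command runs, yielding two words here.
set -- $(echo \"hello world\")
printf '[%s]' "$@"; echo    # ["hello][world"]
```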
| Escaping double quotes inside command substitution |
1,479,710,941,000 |
I'm having a problem with a script. It is meant to change a value in a file called %DIR% so that it becomes a path name. The problem is the slashes in the directory name upset sed, so I get weird errors. I need to convert the slashes in the path name into escaped slashes.
So /var/www would become \/var\/www
But I don't know how to do this.
Currently the script runs sed with this:
sed -i "s/%DIR%/$directory/g" "$config"
|
Since you say you are using Bash, you can use Parameter Expansion to insert the slashes:
$ directory=/var/www
$ echo "${directory//\//\\/}"
\/var\/www
This breaks up as
substitute directory
replacing every (//)
slash (\/)
with (/)
backslash+slash (\\/).
Putting this into your sed command gives:
sed -i "s/%DIR%/${directory//\//\\/}/g" "$config"
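End to end, the fix looks like this; a sketch using a temporary file, and -i without a backup suffix assumes GNU sed:

```shell
config=$(mktemp)
echo 'root = %DIR%' > "$config"
directory=/var/www
sed -i "s/%DIR%/${directory//\//\\/}/g" "$config"
cat "$config"     # root = /var/www
rm -f "$config"
```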
| Bash converting path names for sed so they escape [duplicate] |
1,479,710,941,000 |
I'm having trouble understanding what I need to escape when using sh -c.
Let's say I want to run the for loop for i in {1..4}; do echo $i; done. By itself, this works fine.
If I pass it to eval, I need to escape the $: eval "for i in {1..4}; do echo \$i; done", but I cannot make it work for sh -c "[...]":
$ sh -c "for i in {1..4}; do echo $i; done"
4
$ sh -c "for i in {1..4}; do echo \$i; done"
{1..4}
$ sh -c "for i in \{1..4\}; do echo \$i; done"
{1..4}
$ sh -c "for i in \{1..4\}\; do echo \$i\; done"
sh: 1: Syntax error: end of file unexpected
Where can I find more information about this?
|
The usual wisdom is to define the script (after the -c) inside single quotes. The other part you need to use is a shell where the {1..4} construct is valid:
$ bash -c 'for i in {1..4}; do echo $i; done' # also work with ksh and zsh
One alternative to get it working with dash (your sh) is to do the expansion in the shell you are using interactively (I am assuming that you use bash or zsh as your interactive shell):
$ dash -c 'for i do echo $i; done' mysh {1..4}
1
2
3
4
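If you'd rather keep sh (dash) and generate the list inside the child shell, seq works too; note that seq is widespread but not required by POSIX:

```shell
# The $(seq ...) expansion happens inside sh itself, so no brace
# expansion support is needed.
sh -c 'for i in $(seq 1 4); do echo "$i"; done'
```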
| How to run a loop inside sh -c |
1,479,710,941,000 |
I have a json string inside json. This got encoded multiple times and I ended up with many escape backslashes: \\\".
The much shortened string looks like,
'[{"testId" : "12345", "message": "\\\"the status is pass\\\" comment \\\\\"this is some weird encoding\\\\\""}]'
I am trying to grep and get the number of occurrences of the pattern \\\" but not \\\\\".
How can I do it?
Any shell/python solution is good to go. In python, using the search string
search_string = r"""\\\\\""",
throws unexpected EOF error.
|
To look for \\\" anywhere on a line:
grep -F '\\\"'
That is, use -F for a fixed string search as opposed to a regular expression match (where backslash is special). And use strong quotes ('...') inside which backslash is not special.
Without -F, you'd need to double the backslashes:
grep '\\\\\\"'
Or use:
grep '\\\{3\}"'
grep -E '\\{3}"'
grep -E '[\]{3}"'
Within double quotes, you'd need another level of backslashes and also escape the " with backslash:
# 1
# 1234567890123
grep "\\\\\\\\\\\\\""
backslash is another shell quoting operator. So you can also quote those backslash and " characters with backslash:
\g\r\e\p \\\\\\\\\\\\\"
I've even quoted the characters of grep above though that's not necessary (as none of g, r, e, p are special to the shell (except in the Bourne shell if they appear in $IFS). The only character I've not quoted is the space character, as we do need its special meaning in the shell: separate arguments.
To look for \\\" provided it's not preceded by another backslash
grep -e '^\\\\\\"' -e '[^\]\\\\\\"'
That is, look for \\\" at the beginning of the line, or following a character other than backslash.
That time, we have to use a regular expression, a fixed-string search won't do.
grep returns the lines that match any of those expressions. You can also write it with one expression per line:
grep '^\\\\\\"
[^\]\\\\\\"'
Or with only one expression:
grep '^\(.*[^\]\)\{0,1\}\\\{3\}"' # BRE
grep -E '^(.*[^\])?\\{3}"' # ERE equivalent
grep -E '(^|[^\])\\{3}"'
With GNU grep built with PCRE support, you can use a look-behind negative assertion:
grep -P '(?<!\\)\\{3}"'
Get a match count
To get a count of the lines that match the pattern (that is, that have one or more occurrences of \\\"), you'd add the -c option to grep. If however you want the number of occurrences, you can use the GNU specific -o option (though now also supported by a few other implementations) to print all the matches one per line, and then pipe to wc -l to get a line-count:
grep -Po '(?<!\\)\\{3}"' | wc -l
Or standardly/POSIXly, use awk instead:
awk '{n+=gsub(/(^|[^\\])\\{3}"/,"")};END{print 0+n}'
(awk's gsub() substitutes and returns the number of substitutions).
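For the original counting question, the look-behind pattern combines with -o like this; a sketch assuming GNU grep with PCRE support:

```shell
# Two of the three \\\" sequences below are not preceded by another
# backslash, so the count should be 2.
printf '%s\n' 'a\\\"b \\\\\"c \\\"d' | \grep -oP '(?<!\\)\\{3}"' | wc -l
```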
| Match pattern \\\" using grep |
1,479,710,941,000 |
I want to cat all files to one new file in a bash script.
For example there are three files in my dir:
- file_a.txt
- file b.txt
- file(c).txt
When I write the following, it works without problems:
cat "file"*".txt" >> out_file.bak
No I want to make it more flexible/clean by using a variable:
input_files="file"*".txt"
cat $input_files >> out_file.bak
Unfortunately this doesn't work. The question is why? (When I echo input_files and run the command in terminal everything is fine. So why doesn't it work in the bash script?)
|
It depends when you want to do the expansion:
When you define the variable:
$ files=$(echo a*)
$ echo $files
a1 a2 a3
$ echo "$files"
a1 a2 a3
When you access the variable:
$ files=a*
$ echo "$files"
a*
$ echo $files
a1 a2 a3
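If the goal is robustness with arbitrary file names (spaces, parentheses), a bash array is the safer container than a scalar variable; this demo sets up a scratch directory so it is self-contained:

```shell
cd "$(mktemp -d)"                 # throwaway directory for the demo
echo A > file_a.txt
echo B > 'file b.txt'
input_files=( file*.txt )         # glob expands at assignment time
cat "${input_files[@]}" >> out_file.bak
wc -l < out_file.bak              # 2: both files were concatenated
```

Quoting "${input_files[@]}" keeps each filename as a single word, which a plain $input_files cannot do.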
| Use asterisk in variables |
1,479,710,941,000 |
I wanted to find out what my terminal sends for Ctrl+Backspace and Alt+Backspace. The standard way to do this is to run cat on the terminal and type the keys, but with sequences like these the results are tricky.
I am guessing that Alt+Backspace is sending \x1b\x7f (that is, escape then backspace), but if I run cat and type Ctrl+V then Alt+Backspace, or just Alt+Backspace, the escape gets "typed" and is immediately removed by the backspace, so it looks like nothing is happening. I only noticed this because my terminal rendered a single frame where the ^[ escape was visible.
So far I am not sure how to work out what Ctrl+Backspace is sending. It's not Ctrl+W (even though both delete a word at the bash prompt), because under cat it does nothing while Ctrl+W deletes a word!
|
Simply use this command:
showkey -a
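If showkey isn't available (it generally needs a Linux virtual console), a similar effect comes from dumping the bytes with od instead of a bare cat, which lets the terminal interpret them; here the likely Alt+Backspace bytes are simulated with printf:

```shell
# \033 is ESC and \177 is DEL - the pair many terminals send for
# Alt+Backspace.
printf '\033\177' | od -An -c
```

Run interactively, `od -An -c` (end input with Ctrl+D) shows each keypress as octal instead of letting the backspace erase the escape from the screen.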
| How can I find out what the escape codes my terminal are sending for certain special ones that cat will not show? |
1,479,710,941,000 |
I am processing a huge stream of CSV data coming from a source which includes special characters, such as the following:
`÷ Þ Ÿ ³ Ù ÷`
Here is an example row from a data set which includes these characters:
'÷ÞW' , 'ŸŸŸŸŸŸŸ', '³ŸŸÙ÷'
Here is another example taken from a different data set:
WCP16,2013-06-04 20:06:24,2013-06-04,CPU,PrimeNumberGenerationTest,PASS,USA,HF0SXV1,,,N,9999
WCP06,2013-06-04 20:06:24,2013-06-04,CPU,RegisterTest,PASS,USA,HF0SXV1,,,N,9999
WCD42,2013-06-04 20:06:24,2013-06-04,DVDMINUSRW,MainICTest,PASS,USA,HF0SXV1,,,N,9999
WCP09,2013-06-05 01:52:53,2013-06-05,CPU,SSE3Test,PASS,,?÷ÞQ»,,,N,9999
WCP10,2013-06-05 01:52:53,2013-06-05,CPU,SSE4_1Test,PASS,,?÷ÞQ»,,,N,9999
If I knew what type of characters to expect then I could handle that in Informatica when I read the file.
But in my situation I am not sure what type of data I will get on any given day, and as a result my jobs are failing. So I need a way to remove all special characters from the data.
|
I'm not sure exactly what you mean by "special characters", so I'm going to assume that you want to get rid of non-ASCII characters. There are a few different tools that might work for you. The first few that come to mind for me are:
iconv (internationalization conversion)
tr (translate)
sed (stream editor)
iconv (internationalization conversion)
Here is a solution using iconv:
iconv -c -f utf-8 -t ascii input_file.csv
The -f flag (from) specifies an input format, the -t flag (to) specifies an output format, and the -c flag tells iconv to discard characters that cannot be converted to the target. This writes the results to standard output (i.e. to your console). If you want to write the results to a new file you would do something like this instead:
iconv -c -f utf-8 -t ascii input_file.csv -o output_file.csv
Then, if you want, you can replace the original file with the new file:
mv -i output_file.csv input_file.csv
Here is how iconv handles your first example string:
$ echo "'÷ÞW' , 'ŸŸŸŸŸŸŸ', '³ŸŸÙ÷'" | iconv -c -f utf8 -t ascii
'W' , '', ''
tr (translate)
Here is a solution using the tr (translate) command:
cat input_file.csv | tr -cd '\000-\177'
The \000-\177 pattern specifies the numerical range 0-127 using octal notation. This is the range of values for ASCII characters. The -c flag tells tr to match values in the complement of this range (i.e. to match non-ASCII characters) and the -d flag tells tr to perform deletion (instead of translation).
To write the results to a file you would use output redirection:
cat input_file.csv | tr -cd '\000-\177' > output_file.csv
Here is how tr handles your first example string:
$ echo "'÷ÞW' , 'ŸŸŸŸŸŸŸ', '³ŸŸÙ÷'" | tr -cd '\000-\177'
'W' , '', ''
sed (stream editor)
Here is a solution using sed:
sed 's/[\d128-\d255]//g' input_file.csv
The s prefix tells sed to perform substitution, the g suffix tells sed to match patterns globally (by default only the first occurrence is matched), the pattern [\d128-\d255] tells sed to match characters with decimal values in the range 128-255 (i.e. non-ASCII characters), and the empty string between the second and third forward-slashes tells sed to replace matched patterns with the empty string (i.e. to remove them).
Unlike many other programs, sed has an option to update the file in-place (instead of manually writing to a different file and then replacing the original):
sed -i 's/[\d128-\d255]//g' input_file.csv
Here is how sed handles your first example string:
$ echo "'÷ÞW' , 'ŸŸŸŸŸŸŸ', '³ŸŸÙ÷'" | sed 's/[\d128-\d255]//g'
'W' , '', ''
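One caveat worth knowing: in a UTF-8 locale some tr implementations complain about byte ranges, so forcing the C locale makes the byte-wise behaviour explicit:

```shell
# 0xc3 0xb7 is the UTF-8 encoding of the division sign; both bytes are
# outside 0-127, so the whole character disappears.
printf 'a\303\267b\n' | LC_ALL=C tr -cd '\000-\177'
```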
| Remove all type of special characters in unix .csv file |
1,479,710,941,000 |
If I do git status --short, git lists files that are not tracked with two red question marks in front:
I'm trying to store this in a variable and print it with color later. Here's my bash script:
#!/bin/bash
status=$(git status --short)
echo -e "$status"
I thought the -e flag would cause bash to color the output, but it isn't working:
How can I do this?
Edit: the possible duplicate is asking how escape characters, specifically ANSI color control sequences, work. I think I understand how they work. My question is how to preserve those in the script output.
|
Most programs that produce color will, by default, only produce it when the output is to a terminal, not a pipe or file. Generally, this is a good thing. Often, however, there is an override switch. For example, for ls, one can use --color=always and, as a result, color can be saved in shell variables.
grep also supports the --color=always option.
For git, the corresponding option is its color.ui configuration setting:
git -c color.ui=always status
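A quick sketch in a throwaway repository shows the escape codes surviving the command substitution (the repository setup is only for the demo):

```shell
cd "$(mktemp -d)" && git init -q . && touch newfile
status=$(git -c color.status=always status --short)
printf '%s\n' "$status"    # the ?? marker keeps its color codes
```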
| How do I store colored text in a variable and print it with color later? |
1,479,710,941,000 |
I have this:
I read the down arrow key
abc@abc-ubuntu:~/bashpratice$ read -n 3 key
^[[Babc@abc-ubuntu:~/bashpratice$
I am able to grep for it
abc@abc-ubuntu:~/bashpratice$ echo $key | grep '\['
[B
abc@abc-ubuntu:~/bashpratice$ echo $key | grep '\[B'
[B
But echoing the key just prints spaces
abc@abc-ubuntu:~/bashpratice$ echo $key
abc@abc-ubuntu:~/bashpratice$
Why does echoing the key just give spaces?
|
What happens when downarrow is typed in a terminal
As reported by xxd -p, when typing ↓ + return :
xxd -p
^[[B
1b5b420a
The downarrow key leads to a sequence of 3 characters:
the first is \x1b (a.k.a. escape, see man ascii), echoed on the terminal as ^[,
second is \x5b, that is [,
third is \x42, that is B.
The last character, \x0a is just the newline character.
So, downarrow is echoed on the terminal as ^[[B. In reality, this corresponds to the 1b5b42 hex sequence, which is the one actually sent to the reading process.
About your experiments
Your key variable contains the 1b5b42 hex sequence. Check it with
echo -n "$key" | xxd -p
1b5b42
Of course, grep will be able to catch the 5b42 hex sequence (that is [B)¹.
However, when you send something to the terminal, the escape character \x1b is interpreted as the beginning of some special escape sequence. For example \x1b[31m is a sequence that is recognized by most terminals and means "use red foreground color". Check it yourself:
echo -e 'hello \x1b[31myou'
The sequence will change the current color, but it will not print anything.
You can also check this:
echo -e 'hello \x1b[Byou'
and you'll see that the special sequence \x1b[B is interpreted by the terminal as "move the cursor down by one".
That's why your echo $key won't show something directly visible on the terminal, except for some blank lines.
—
1. I'm not sure why grep happens to print just [B, I have some different result on my setup.
| Trying to print up down arrow keys |
1,479,710,941,000 |
When I do a search for bindkey in the zsh plugins directory, looking for key conflicts, I get responses from both the .zsh script files and .md files, and some of the zsh readme files use a double quote in the bindkey statement.
How would I do a search for bindkeys using both ' and " for quoting? For instance if search for usage of Ctrl-R the first command using double quotes for the matching string produces the README.md of zsh-navigation-tools and single quotes produces the bindkey command for both vi-mode and zsh-navigation-tools
grep -r -i 'bindkey "^r' ~/.oh-my-zsh/plugins
output:
zsh-navigation-tools/README.md: bindkey "^R" znt-history-widget
grep -r -i "bindkey '^r" ~/.oh-my-zsh/plugins
output:
vi-mode/vi-mode.plugin.zsh:bindkey '^r' history-incremental-search-backward
zsh-navigation-tools/zsh-navigation-tools.plugin.zsh:bindkey '^R' znt-history-widget
How can I create the command that will output all 3?
Does grep have the option of specifying an alternate quoting character that will allow both ' and " as literals?
|
Of course, you need to escape the ^ character:
grep -r 'bindkey "\^r' dir
Then, you could use the "Extended Regex" Alternate character '|':
grep -E 'bindkey "\^r''|'"bindkey '\^r" dir
Which could be reduced to:
grep -E 'bindkey ("|'"')"'\^r' dir [1]
Or, if using bash, ksh or zsh, use the $' quoting (both ' and " could be escaped):
grep -E $'bindkey (\"|\')\^r' dir
And, finally, realize that there are two r: r and R:
grep -rE $'bindkey (\"|\')\^(r|R)' dir
Or use i (but that will change other characters also):
grep -riE $'bindkey (\"|\')\^r' dir
Of course, this is also a perl regex (GNU grep):
grep -rP $'bindkey (\"|\')\^(r|R)' dir
[1]It may be difficult to understand the quoting.
But it is simply a concatenation of three quoted parts.
A string quoted with single quotes, followed by a string quoted with double quotes, followed by a third string quoted again with single quotes. The easiest way to see the effect is to echo it. The shell will remove one quoting level and the string that the command actually receives becomes clear:
$ echo grep -E 'bindkey ("|' "')" '\^r' dir
grep -E bindkey ("| ') \^r dir
Maybe it would be easier to see with:
$ echo grep -E 'a'"b"'c' dir
grep -E abc dir
The same could be written in one pair of single quotes.
Remember that single quotes can not be included inside single quotes:
$ echo grep -E 'bindkey ("|'\'')\^r' dir
grep -E bindkey ("|')\^r dir
Or inside double quotes (a double quote could be escaped inside double quotes). With the added risk that some other characters ($, `, \, * and @) have an special meaning inside double quotes (not inside single quotes):
$ echo grep -E "bindkey (\"|')\^r" dir
grep -E bindkey ("|')\^r dir
An alternative is to use a character list […] with \' and \":
$ grep -E 'bindkey '[\"\']'\^r' dir
That is still a three part string, but the middle is not quoted (and no spaces):
'bindkey ' [\"\'] '\^r'
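An end-to-end check of the three-part concatenation against some made-up sample lines (the bindkey lines here are illustrative, not from any real plugin):

```shell
# grep -c counts the matching lines; the pattern is the concatenated
# 'bindkey ("|')\^(r|R)' built from three quoted parts
printf 'bindkey '\''^r'\'' widget-a\nbindkey "^R" widget-b\nbindkey '\''^a'\'' other\n' |
  grep -cE 'bindkey ("|'"')"'\^(r|R)'
# prints 2 (the ^r and ^R bindings, not ^a)
```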
| How can search for both single quotes and double quotes in a grep search? |
1,479,710,941,000 |
Errata:
Similar questions about this have been asked but after searching this for a few days there appears to be no answer to this specific scenario.
Description of the problem:
The second line in the following bash script triggers the error:
#!/bin/bash
sessionuser=$( ps -o user= -p $$ | awk '{print $1}' )
print $sessionuser
Here is the error message:
Unescaped left brace in regex is deprecated, passed through in regex; marked by <-- HERE in m/%{ <-- HERE (.*?)}/ at /usr/bin/print line 528.
Things I have tried:
I have tried every combination of single quotes, back angled single quotes, double quotes, and spacing I could think of both inside and outside the $() command output capture method.
I have tried using $( exec ... ) where ... is the command being attempted here.
I have read up on bash, and searched these forums and many others and nothing seems illuminate why this error message is happening or how to work around it.
If the suggestion given in the error message is followed like this:
sessionuser=$( ps -o user= -p 1000 | awk '\{print $1}' )
It results in the following error message combined with the previous one:
awk: cmd. line:1: \{print $1}
awk: cmd. line:1: ^ backslash not last character on line
Unescaped left brace in regex is deprecated, passed through in regex; marked by <-- HERE in m/%{ <-- HERE (.*?)}/ at /usr/bin/print line 528.
The message refers to line 528 in /usr/bin/print. Here is that line:
$comm =~ s!%{(.*?)}!$_="'$ENV{$1}'";s/\`//g;s/\'\'//g;$_!ge;
Rational for my bash script:
The string $USER can be rewritten and is therefore not necessarily reliable. The command "whoami" will return different results depending on whether or not privileges have been elevated for the current user.
As such there is a need for reliably attaining the current session user's name for portability of scripting, and that is because I am probably not going to keep the same user name forever and would like my scripts to continue working regardless of who I have logged in as.
All of that is because user files are being backed up that have huge directory structures and many files. Every once in a while a file with root ownership and permissions will end up in that backup stack for that user.
There are lots of reasons why this happens and sometimes its just because that user backed up a wallpaper or a theme they like from the system directory structure, or sometimes its because a project was compiled by that user and some of its directories or files needed to be set to root ownership and permissions for it to function in some way, and other times it may be due to some other strange unaccounted for thing.
I understand that rsync might be able to handle this problem, but I'd like to understand how to tackle the "Unescaped left brace" in a Bash script problem first.
I can study rsync on my own, but after trying for a few days this bash script doesn't appear to have a solution that is easy to discover or illuminate through either online searches or reading the manuals.
[UPDATE 01]:
Some information was missing from my original post so I'm adding it here.
Here are the relevant system specs:
OS: Xubuntu 16.04 x86_64
Bash: GNU bash, version 4.3.46(1)-release (x86_64-pc-linux-gnu)
Source and Rational for the commands I'm using:
3rd reply down in the following thread:
https://stackoverflow.com/questions/19306771/get-current-users-username-in-bash
Print vs. Printf
I posted this question using "print" instead of "printf" because the source I copied it from used the "print" syntax. After using "printf" I get the same error message with an added error message as output:
Unescaped left brace in regex is deprecated, passed through in regex; marked by <-- HERE in m/%{ <-- HERE (.*?)}/ at /usr/bin/print line 528.
Error: no such file "sessions_username_here"
Where "sessions_username_here" is a replacement of the actual sessions user name for the purpose of keeping the discussion generalized to whatever username could or might be used.
[UPDATE FINAL]
The chosen solution offered by Stéphane Chazelas clarified all the issues my script was having in a single post. I was mistakenly assuming the 2nd line of the script was at fault since the output was complaining about braces. To be clear, it was the 3rd line that was triggering the warning (see Chazelas's post for why and how), and that is probably why everyone was suggesting printf instead of print. I just needed to be pointed at the 3rd line of the script in order to make sense of those suggestions.
Things that didn't work as suggested:
sessionuser=$(logname)
Resulting error message:
logname: no login name
...so maybe that suggestion isn't quite as reliable as it might seem on the surface.
If user privileges are elevated which is sometimes the case when running scripts then:
id -un
would output
root
and not the current session's user name. This would probably be a simple matter of making sure the script drops out of root privileges before execution which could solve this issue but that is beyond the scope of this thread.
Things that did or could work as suggested:
After I figure out how to verify my script is running in a POSIX environment and somehow de-elevate root privileges, I could indeed use "id -un" to acquire the current session's username, but those verifications and de-escalations are beyond the scope of this thread's question.
For now "without" POSIX verification, privilege testing, and de-escalation the script does what was originally intended to do without error. Here is what that script looks like now:
#!/bin/bash
sessionuser=$( ps -o user= -p $$ | awk '{printf $1}' )
printf '%s\n' "$sessionuser"
Note: The above script, if run with elevated privileges, still outputs "root" instead of the current session's username even though the privilege-escalated command:
sudo ps -o user= -p $$ | awk '{printf $1}'
will output the current session's username and not "root", so even though the scope of this thread is answered, I am back to square one with this script.
Thanks again to xtrmz, icarus, and especially Stéphane Chazelas, who somehow was able to catch my misunderstanding of the issue. I'm really impressed with everyone here. Thanks for the help! :)
|
It's the third line (print $sessionuser) that causes that error, not the second.
print is a builtin command to output text in ksh and zsh, but not bash. In bash, you need to use printf or echo instead.
Also note that in bash (contrary to zsh, but like ksh), you need to quote your variables.
So zsh's:
print $sessionuser
(though I suspect you meant:
print -r -- $sessionuser
If the intent was to write to stdout the content of that variable followed by a newline) would be in bash:
printf '%s\n' "$sessionuser"
(also works in zsh/ksh).
Some systems also have a print executable command in the file system that is used to send something to a printer, and that's the one you're actually calling here. Proof that it is rarely used is that your implementation (same as mine, as part of Debian's mime-support package) has not been updated after perl's upgrade to work around the fact that perl now warns you about those improper uses of { in regular expressions and nobody noticed.
{ is a regexp operator (for things like x{min,max}). Here in %{(.*?)}, that (.*?) is not a min,max, still perl is lenient about that and treats those { literally instead of failing with a regexp parsing error. It used to be silent about that, but it now reports a warning to tell you you probably have a problem in your (here print's) code: either you intended to use the { operator, but then you have a mistake within. Or you didn't and then you need to escape those {.
BTW, you can simply use:
sessionuser=$(logname)
to get the name of the user that started the login session that script is part of. That uses the getlogin() standard POSIX function. On GNU systems, that queries utmp and generally only works for tty login sessions (as long as something like login or the terminal emulator registers the tty with utmp).
Or:
sessionuser=$(id -un)
To get the name of one user that has the same uid as the effective user id of the process running id (same as the one running that script).
It's equivalent to your ps -p "$$" approach because the shell invocation that would execute id would be the same as the one that expands $$ and apart from zsh (via assignment to the EUID/UID/USERNAME special variables), shells can't change their uids without executing a different command (and of course, of all commands, id would not be setuid).
Both id and logname are standard (POSIX) commands (note that on Solaris, for id like for many other commands you'd need to make sure you place yourself in a POSIX environment to make sure you call the id command in /usr/xpg4/bin and not the ancient one in /bin. The only purpose of using ps in the answer you linked to is to work around that limitation of /bin/id on Solaris).
If you want to know the user that called sudo, it's via the $SUDO_USER environment variable. That's a username derived by sudo from the real user id of the process that executed sudo. sudo later changes that real user id to that of the target user (root by default) so that $SUDO_USER variable is the only way to know which it was.
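Putting those two pieces together, a hedged sketch (the variable name is illustrative): prefer $SUDO_USER when the script was run via sudo, and fall back to id -un otherwise.

```shell
# If run under sudo, $SUDO_USER names the invoking user; otherwise fall
# back to the effective user reported by id -un
sessionuser=${SUDO_USER:-$(id -un)}
printf '%s\n' "$sessionuser"
```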
Note that when you do:
sudo ps -fp "$$"
That $$ is expanded by the shell that invokes sudo to the pid of the process that executed that shell, not the pid of sudo or ps, so it will give not give you root here.
sudo sh -c 'ps -fp "$$"'
Would give you the process that executed that sh (running as root) which is now either still running sh or possibly ps for sh invocations that don't fork an extra process for the last command.
That would be the same for a script that does that same ps -p "$$" and that you run as sudo that-script.
Note that in any case, neither bash nor sudo are POSIX commands. And there are many systems where neither are found.
| Saving command output to a variable in bash results in "Unescaped left brace in regex is deprecated" |
1,479,710,941,000 |
I am trying to print a sorted list of all zsh options, with set options colored green and unset options colored red. I cannot get sort to work properly on the colored lines, though. The command below prints all the red options followed by all the green options:
print -lP "%F{green}"${^$(setopt)} "%F{red}"${^$(unsetopt)} | sort
I figured this was because print -P is expanding the format strings such that each option line starts with ^[[31m for red and ^[[32m for green. Looking at the sort manpage, I saw two options that might help:
-i, --ignore-nonprinting
consider only printable characters
-k, --key=POS1[,POS2]
start a key at POS1 (origin 1), end it at POS2 (default end of line)
So I tried:
print -lP "%F{green}"${^$(setopt)} "%F{red}"${^$(unsetopt)} | sort -i
and
print -lP "%F{green}"${^$(setopt)} "%F{red}"${^$(unsetopt)} | sort --key=<N>
Where I tried setting <N> to many different numbers. In all cases, I got the same results (all red options before all green). How can I solve this?
|
The -k option of sort takes two numerical arguments: field and character. You want to sort on the 6th character of the first field. It is the 6th character because %F{green} is replaced by the 5-character sequence ESC[32m. So this should work:
print -lP "%F{green}"${^$(setopt)} "%F{red}"${^$(unsetopt)} | sort -k 1.6
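A self-contained illustration of why position 6 works (sample words are made up; both color prefixes are 5-character ESC[3Xm sequences that the key specification skips):

```shell
# 'beta' is green, 'alpha' is red; a plain sort would group by the color
# prefix, but -k 1.6 starts comparing after the 5-character escape sequence
printf '\033[32mbeta\n\033[31malpha\n' | sort -k 1.6
# alpha (red) now sorts before beta (green)
```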
| Sorting strings with ANSI escape codes |
1,479,710,941,000 |
I don't use bash often but I recall in the past to pass a tab or newline on the command line I would have to escape the character using the special $ character before a single quoted string. Like $'\t', $'\n', etc. I had read about quotes and escaping in the bash manual.
What I want to know is when it's appropriate to use an ANSI C style escape. For example I was working with a regex in grep and it appeared I needed an ANSI C style escape for a newline. Then I switched to perl and it seemed I didn't.
Take for example my recent question on stackoverflow about a perl regex that didn't work. Here's the regex I was using:
echo -e -n "ab\r\ncd" | perl -w -e $'binmode STDIN;undef $/;$_ = <>;if(/ab\r\ncd/){print "test"}'
It turns out that is actually incorrect because I gave the string ANSI C style escape by using $. I just don't understand when I'm supposed to prepend the dollar sign and when I'm not.
|
You use $'...' when you want escape sequences to be interpreted by the shell.
$ echo 'a\nb'
a\nb
$ echo $'a\nb'
a
b
In perl, the -e option gets a string. If you use $'...', the escape sequences in the string are interpreted before being passed to perl. In your case, the \r was gone and never passed to perl.
With $'...':
$ perl -MO=Deparse -we $'binmode STDIN;undef $/;$_ = <>;if(/ab\r\ncd/){print "test"}'
BEGIN { $^W = 1; }
binmode STDIN;
undef $/;
$_ = <ARGV>;
if (/ab\ncd/) {
print 'test';
}
-e syntax OK
Without it:
$ perl -MO=Deparse -we 'binmode STDIN;undef $/;$_ = <>;if(/ab\r\ncd/){print "test"}'
BEGIN { $^W = 1; }
binmode STDIN;
undef $/;
$_ = <ARGV>;
if (/ab\r\ncd/) {
print 'test';
}
-e syntax OK
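A case where the shell-side interpretation is exactly what you want: grep has no \t escape of its own, so you hand it a real tab via $'\t' (bash/ksh/zsh ANSI-C quoting; the sample lines are made up).

```shell
# $'\t' expands to a literal tab before grep ever sees it
printf 'a\tb\nc d\n' | grep $'\t'
# prints only the tab-containing line
```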
| When to use bash ANSI C style escape, e.g. $'\n' |
1,479,710,941,000 |
It is bound to menu-complete in GNU readline.
$ bind -p|grep menu
"\e[Z": menu-complete
# menu-complete-backward (not bound)
# old-menu-complete (not bound)
I think it's Meta-something.
|
Look in the terminfo database for your terminal for the key that sends this escape sequence. The infocmp command dumps the terminfo entry for the current terminal.
$ infocmp | grep -oE ' k[[:alpha:]]+=\\E\[Z,'
kcbt=\E[Z,
The terminfo man page explains what cbt is the abbreviation of. (It also gives an example which corresponds to most terminals out there.)
$ man 5 terminfo | grep -w kcbt
key_btab kcbt kB back-tab key
kbs=^H, kcbt=\E[Z, kcub1=\E[D, kcud1=\E[B,
So you have it: \e[Z is backtab, i.e. Shift+Tab (on most terminals).
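You can confirm the raw bytes of that sequence without pressing any key at all:

```shell
# ESC [ Z is the three bytes 1b 5b 5a
printf '\033[Z' | od -An -tx1
# shows: 1b 5b 5a
```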
| How do I generate the sequence "\e[Z" in a terminal? |
1,479,710,941,000 |
So here's my deal: working in BASH, I have already built out a function which works just fine that accepts an array or any number of parameters, and spits out an interactive menu, navigable by arrows up or down, and concluding with the user hitting enter, having highlighted the menu item they desire (the output of which is either the index or the menu item's value, depending on how the menu is initiated):
That's all working fine; I render the menu, then respond to the events parsed from the user's input to the invisible prompt of a read command (auto-triggered after the collection of 3 characters):
read -s -n 3 key 2>/dev/null >&2
The output, having been fed into a $key variable is then run through a case statement evaluating against the predicted acceptable inputs:
\033[A (up)
\033[B (down)
"" (enter)
which in turn fire the behaviors desired.
However, it then dawned of me that with the introduction of 7+ menu items (we may presuppose it shall not exceed 10) it'd be nice to let the user tap the numeric entry of the desired menu item, which would highlight the item in question without submitting it.
My problem is this:
I got that working just fine too, BUT the user, having typed the numeric key desired (5 in the case of this example) is then obliged to hit the enter key for the read statement to trigger its effect, courtesy of my -n 3 modifier flags on my read. This is counter to the usability model already established, unless, of course, they hit their numeric selection thrice, thereby triggering the 3-char minimum requirement (which is equally counterintuitive).
The issue is that \033[A is treated as 3 characters, thereby necessitating the -n 3.
0-9 are treated as SINGLE characters (meaning if I change that to a -n 1, THEY behave as expected, but now the arrow keys fail, collecting only the escape character).
So, I guess what I'm wondering is: is there a way to listen for a -n 1 {OR} 3 (whichever comes first)? I cannot seem to send a \n or \r or similar, as until the read has resolved, they have no effect (meaning I have found no means to simply leave the -n 3 while running a parallel process to check if the entered value is a 0-9 should it prove a single character).
I'm NOT MARRIED to this approach. I'm fine with using awk or sed, or even expect (though that last one I'm confused about still). I don't care if it's a read that does the collecting.
Edit:
SOLUTION
read -n1 c
case "$c" in
(1) echo One. ;;
(2) echo Two. ;;
($'\033')
read -t.001 -n2 r
case "$r" in
('[A') echo Up. ;;
('[B') echo Down. ;;
esac
esac
Status: Resolved
@choroba to the rescue!
Solution Explanation
I'll do my best to paraphrase:
His solution involved nesting the two read statements (I'd been trying them sequentially). It was this, coupled with the -t.001 (thereby setting a near-instant timeout on the second read), that enabled the carryover read.
My problem was that the escape keys I'd been monitoring were 3 characters in length (hence my setting the -n3 flag). It wasn't until afterwards that it occurred to me that accepting certain single-character inputs would be advantageous, too.
His solution was to suggest a ($'\033') case:
Basically:
'Upon reading the escape character...' (the ($'\033') case)
Create another read (this time awaiting TWO characters), and set it to time out after a millisecond.
Since the behavior of read apparently is to "spill over" the remaining input into the next read statement, said statement started its countdown-to-timeout with the sought value having already been seeded. Since that met the defined requirement flags for the read, it became a simple matter of testing the second set of characters for the case result (and since the initializing function is still getting the response it's expecting, albeit from a different statement, the program carries on as though it had gotten the results I'd been trying to puzzle my way to in the first place).
|
You can read for -n 1, and read the following two if the first one is \033 and react accordingly. Otherwise, handle the number directly.
#!/bin/bash
read -n1 c
case "$c" in
(1) echo One. ;;
(2) echo Two. ;;
($'\033')
read -t.001 -n2 r
case "$r" in
('[A') echo Up. ;;
('[B') echo Down. ;;
esac
esac
| BASH question: using read, can I capture a single char OR arrow key (on keyup) |
1,479,710,941,000 |
Apologize if you feel this question very basic. Anyways, I am typing
sed '/[iI]t/ a\\
Found it!' data but it says the error event not found.
I tried escaping that ! with backslash \! but it doesn't work.
I don't understand when backslash \ can escape a character and use it literally then why doesn't it work in sed?
|
Which OS did you try it on? On HP-UX 8.11 csh there are 2 ways to cancel the special meaning of the exclamation mark for history substitutions (see History substitutions in man csh).
Put a space after ! (a couple of other characters work too)
sed '/[iI]t/ a\\
Found it! ' data
Escape it via \!
sed '/[iI]t/ a\\
Found it\!' data
This actually also works with double quotes:
sed "/[iI]t/ a\\
Found it\!" data
| How to use ! in the sed command? |
1,479,710,941,000 |
From a pattern such as
[string 1]{string 2}
I want to extract string 2, the string between the last pair of matching curly braces -- that is delete [string 1] and the open { and close }. My attempt below breaks when there is a additional [, ] pairs in either string 1 or string 2.
Desired Output:
The desired output from the script below begins with foo and ends with a digit:
foo bar 1
foo bar 2
foo[3]{xyz} bar 3
foo $sq[3]{xyz}$ bar 4
foo $sq[3]{xyz}$ bar 5
foo $sq[3]{xyz}$ bar 6
foo $sq[3]{xyz}$ bar 7
foo $sq[3]{xyz}$ bar 8'
foo $sq[abc]{xyz}$ bar 9'
foo $sq[abc]{xyz}$ bar 10'
Assumptions:
Parameter to RemoveInitialSquareBraces always begins with a [ and ends with a }.
The opening [ for string 1 will have a matching ] at the point where the opening { begins for string 2.
Platform:
MacOS 10.9.5
Script
#!/bin/bash
function RemoveInitialSquareBraces {
#EXTRACTED_TEXT="$(\
# echo "$1" \
# | sed 's/^\[.*\]//' \
# | sed 's/{//' \
# | sed 's/}$//' \
# )"
EXTRACTED_TEXT="$(\
echo "$1" \
| sed 's/.*[^0-9]\]{\(.*\)}/\1/' \
)"
echo "${EXTRACTED_TEXT}"
}
RemoveInitialSquareBraces '[]{foo bar 1}'
RemoveInitialSquareBraces '[abc]{foo bar 2}'
RemoveInitialSquareBraces '[]{foo[3]{xyz} bar 3}'
RemoveInitialSquareBraces '[]{foo $sq[3]{xyz}$ bar 4}'
RemoveInitialSquareBraces '[goo{w}]{foo $sq[3]{xyz}$ bar 5}'
RemoveInitialSquareBraces '[goo[3]{w}]{foo $sq[3]{xyz}$ bar 6}'
RemoveInitialSquareBraces '[goo[3]{w} hoo[3]{5}]{foo $sq[3]{xyz}$ bar 7}'
RemoveInitialSquareBraces '[goo[3]{w} hoo[3]{5}]{foo $sq[3]{xyz}$ bar 8}'
RemoveInitialSquareBraces '[goo[3]{w} hoo[xyz]{5}]{foo $sq[abc]{xyz}$ bar 9}'
RemoveInitialSquareBraces '[goo[3]{w} hoo[xyz]{uvw}]{foo $sq[abc]{xyz}$ bar 10}'
exit 0
|
For the input examples above, the script can be:
sed s/[^\"\']*[^0-9]\]{\(.*\)}/\1/ <<\END
"[]{foo bar 1}"
"[abc]{foo bar 2}"
"[]{foo[3]{xyz} bar 3}"
"[]{foo $sq[3]{xyz}$ bar 4}"
"[goo{w}]{foo $sq[3]{xyz}$ bar 5}"
"[goo[3]{w}]{foo $sq[3]{xyz}$ bar 6}"
"[goo[3]{w} hoo[3]{5}]{foo $sq[3]{xyz}$ bar 7}"
END
produces
"foo bar 1"
"foo bar 2"
"foo[3]{xyz} bar 3"
"foo $sq[3]{xyz}$ bar 4"
"foo $sq[3]{xyz}$ bar 5"
"foo $sq[3]{xyz}$ bar 6"
"foo $sq[3]{xyz}$ bar 7"
Another thing: your function can be simplified:
function RemoveInitialSquareBraces {
printf '%s\n' "$@" |
sed ...
}
so it will accept multiple arguments.
Update: for more general case you can do the task in two steps:
sed -e "
s/\[.*\[.*\][^[]*\]/[]/ #remove square brackets inside square brackets
s/\[[^]]*\]{\(.*\)\}/\1/ #lazily strip square brackets and curly brackets
"
Addition: you can use perl-grep(GNU grep with perl extention):
grep -Po '\[([^][]*\[\w+\][^][]*)*\]{\K.*(?=})'
or sed with same regexp:
sed 's/\[\([^][]*\(\[\w\+\][^][]*\)*\)*\]{\(.*\)}/\3/'
| sed to match pattern between matching curly braces |
1,479,710,941,000 |
I have a log file and I'm looking to sort according to response time, 4th field:
GET /api/user/john 200 0.194 ms - 7307
But it contains color tags, here's the output from vi:
^[[0mGET /api/user/john ^[[32m200 ^[[0m0.194 ms - 7307^[[0m
Is there a simple way to do this?
|
Extract the field you want to sort on (typically with cut, sed or awk), and strip off its formatting escape sequences. You can find scripts for the second part in Removing control chars (including console codes / colours) from script output. I'll use uncolor below to stand for one of these scripts.
Collate the result with the original (paste). Use a separator character that doesn't appear in the data to sort.
Sort.
Remove the sort key.
For example, if your fields are tab-separated:
<input-file.txt cut -f 4 | uncolor |
paste - input-file.txt |
sort |
cut -f 2-
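For the uncolor step, a minimal stand-in is a sed substitution that deletes SGR sequences (this assumes GNU sed, which accepts \x1b, and assumes only plain color/attribute escapes appear in the data):

```shell
# Strip ANSI SGR sequences such as ESC[32m and ESC[0m
uncolor() { sed 's/\x1b\[[0-9;]*m//g'; }
printf '\033[32m200\033[0m 0.194\n' | uncolor
# -> 200 0.194
```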
| How to sort a column that also has color data? |
1,479,710,941,000 |
This script outputs 5 lines with the third one being underlined:
#!/usr/bin/env bash
set -eu
bold=$(tput bold)
reset=$(tput sgr0)
underline=$(tput smul)
echo 'line 1
line 2
line 3
line 4
line 5' | awk -v bold="$bold" -v reset="$reset" -v underline="$underline" '
NR == 3 {print underline $0 reset}
NR != 3 {print $0}
'
If I don't reset (in the script) at the end of the third line, all the following lines are underlined, including the commands I type next (in the shell). Until I run reset. With less (./my-script.sh | less -R) not only is reset (in the script) not needed (the third line gets underlined), but it also produces an extra symbol in tmux (^O, TERM=screen-256color):
line 1
line 2
line 3^O
line 4
line 5
But no symbol in plain console (TERM=xterm-256color).
What exactly and why that happens? Is there a way to make the script work in all these cases?
$ ./my-script.sh
$ ./my-script.sh | grep line --color=never
$ ./my-script.sh | less -R
E.g., to make the following script work better.
|
less sends its own "reset" at the end of the line, which happens to be derived from the terminfo sgr0 by (ncurses) eliminating the ^O (reset alternate character set) because less is using the termcap interface. The termcap capability me which corresponds to terminfo sgr0 conventionally doesn't modify the alternate character set state, as noted in the manual page curs_termcap(3x):
Note that termcap has nothing analogous to terminfo's sgr string. One
consequence of this is that termcap applications assume me (terminfo
sgr0) does not reset the alternate character set. This implementation
checks for, and modifies the data shown to the termcap interface to
accommodate termcap's limitation in this respect.
Perhaps less is doing that reset to recover from unexpected escape sequences: the -R option is only designed to handle ANSI colors (and similarly-formatted escapes such as bold, underline, blink, standout). The source-code doesn't mention that, but the A_NORMAL assignment tells less to later emit the reset:
/*
* Add a newline if necessary,
* and append a '\0' to the end of the line.
* We output a newline if we're not at the right edge of the screen,
* or if the terminal doesn't auto wrap,
* or if this is really the end of the line AND the terminal ignores
* a newline at the right edge.
* (In the last case we don't want to output a newline if the terminal
* doesn't ignore it since that would produce an extra blank line.
* But we do want to output a newline if the terminal ignores it in case
* the next line is blank. In that case the single newline output for
* that blank line would be ignored!)
*/
if (column < sc_width || !auto_wrap || (endline && ignaw) || ctldisp == OPT_ON)
{
linebuf[curr] = '\n';
attr[curr] = AT_NORMAL;
curr++;
}
As an alternative to sgr0 (which resets all video attributes, and is only partly understood by less), you could do
reset=$(tput rmul)
and (for many terminals/many systems, including TERM=screen-256color) reset just the underline. However, that does not affect bold, nor is there a conventional terminfo/termcap capability to reset bold. But screen implements the corresponding ECMA-48 sequence which does this (SGR 22 versus the 24 used in rmul), so you could hardcode that case.
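Hardcoding the ECMA-48 sequences directly is a sketch like this (assumes an ECMA-48 terminal such as xterm or screen; SGR 4/24 are the underline on/off pair):

```shell
# SGR 4 starts underline, SGR 24 ends it; neither touches bold, and both
# are ANSI-style sequences that less -R passes through unchanged
smul=$(printf '\033[4m')
rmul=$(printf '\033[24m')
printf '%sline 3%s\n' "$smul" "$rmul"
```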
| Why do I not need to reset text attributes with less? |
1,479,710,941,000 |
I'm connected to Ubuntu server that is a member of a corporate Active Directory domain via likewise-open. The users are in the form mydomain\myuser. I can connect to it via ssh, escaping the \ with another \:
ssh mydomain\\myuser@myserver
I've generated a pair of keys via ssh-keygen, then tried to copy it to the remote machine. The user I'm using in the local machine is not the same I want to copy the key into, so I issued the command:
ssh-copy-id mydomain\\myuser@myserver
and the output:
password:
password:
password:
mydomainyuser@myserver's password:
Received disconnect from myserver: 2: Too many authentication failures for myuserydomain
prompting me for the local user's password three times, and then the \\ actually worked as a single \ escaping the first m in mydomain. Am I getting this straight? And how can I escape that \ and correctly copy key to the remote machine?
EDIT: also ssh-copy-id myuser@mydomain@myserver turned out to be a valid syntax. However, the question was about escaping the \.
|
The reason this is happening is because you are escaping it for the shell, but ssh-copy-id is also attempting to interpret it. This should work:
ssh-copy-id 'mydomain\\myuser@myserver'
| how to escape "\"in ssh-copy-id? |
1,479,710,941,000 |
I know I can move the cursor with the escape codes, and I can also print at the cursor. What I'd like to know is if it is possible to pull the character under the cursor.
I tried searching for such a code, but failed. So I assume it is not possible, but I'd like to ask if there is some way I failed to find.
|
This is not possible unless you modify your terminal emulator to make it possible; use the curses library or equivalent, which keeps track of what it has drawn.
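A nearby escape that does exist is DSR (ESC[6n), which makes the terminal report the cursor position on stdin — but there is no standard sequence reporting the character under it. Only the query bytes are shown here, since the reply needs an interactive terminal:

```shell
# Send the cursor-position query; an interactive terminal would reply
# with ESC[row;colR (position only, never the glyph)
printf '\033[6n' | od -An -tx1
# bytes: 1b 5b 36 6e
```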
| Is it possible to get a character at terminal cursor using ANSI escape codes? |
1,479,710,941,000 |
According to Bash tips: Colors and formatting (ANSI/VT100 Control sequences) I attempted to activate the blink code in my program, but maybe the blink code has been eliminated. Is that true?
If it is not true, please help me to use the blink code.
|
The blink feature depends upon the terminal (or terminal emulator). Most terminals you will use accept the control sequences documented in ECMA-48, e.g., VT100-compatible. The control sequence may
cause blinking on a given terminal, or
show as a particular color, or
be simply ignored by a given terminal
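To see which of those your terminal does, you can send ECMA-48 SGR 5 directly (purely illustrative; whether anything blinks depends entirely on the terminal):

```shell
# SGR 5 = blink on, SGR 0 = reset all attributes
printf '\033[5mblink?\033[0m\n'
```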
Applications usually use a terminal description (terminfo or termcap). If the terminal description does not tell how to blink, then the application will not know either.
If your computer has infocmp (for terminfo), that will show the capabilities listed in the terminal description. bash only looks for blink — using the termcap name, since it is a termcap application. More generally, terminfo can also describe how to blink using sgr (which is not available in termcap descriptions).
For example, this is a terminfo description of vt100:
> infocmp vt100
# Reconstructed via infocmp from file: /usr/local/ncurses/share/terminfo/v/vt100
vt100|vt100-am|dec vt100 (w/advanced video),
am, mc5i, msgr, xenl, xon,
cols#80, it#8, lines#24, vt#3,
acsc=``aaffggjjkkllmmnnooppqqrrssttuuvvwwxxyyzz{{||}}~~,
bel=^G, blink=\E[5m$<2>, bold=\E[1m$<2>,
clear=\E[H\E[J$<50>, cr=^M, csr=\E[%i%p1%d;%p2%dr,
cub=\E[%p1%dD, cub1=^H, cud=\E[%p1%dB, cud1=^J,
cuf=\E[%p1%dC, cuf1=\E[C$<2>,
cup=\E[%i%p1%d;%p2%dH$<5>, cuu=\E[%p1%dA,
cuu1=\E[A$<2>, ed=\E[J$<50>, el=\E[K$<3>, el1=\E[1K$<3>,
enacs=\E(B\E)0, home=\E[H, ht=^I, hts=\EH, ind=^J, ka1=\EOq,
ka3=\EOs, kb2=\EOr, kbs=^H, kc1=\EOp, kc3=\EOn, kcub1=\EOD,
kcud1=\EOB, kcuf1=\EOC, kcuu1=\EOA, kent=\EOM, kf0=\EOy,
kf1=\EOP, kf10=\EOx, kf2=\EOQ, kf3=\EOR, kf4=\EOS, kf5=\EOt,
kf6=\EOu, kf7=\EOv, kf8=\EOl, kf9=\EOw, lf1=pf1, lf2=pf2,
lf3=pf3, lf4=pf4, mc0=\E[0i, mc4=\E[4i, mc5=\E[5i, rc=\E8,
rev=\E[7m$<2>, ri=\EM$<5>, rmacs=^O, rmam=\E[?7l,
rmkx=\E[?1l\E>, rmso=\E[m$<2>, rmul=\E[m$<2>,
rs2=\E>\E[?3l\E[?4l\E[?5l\E[?7h\E[?8h, sc=\E7,
sgr=\E[0%?%p1%p6%|%t;1%;%?%p2%t;4%;%?%p1%p3%|%t;7%;%?%p4%t;5%;m%?%p9%t\016%e\017%;$<2>,
sgr0=\E[m\017$<2>, smacs=^N, smam=\E[?7h, smkx=\E[?1h\E=,
smso=\E[7m$<2>, smul=\E[4m$<2>, tbc=\E[3g,
The corresponding termcap is
> infocmp -Cr vt100
# Reconstructed via infocmp from file: /usr/local/ncurses/share/terminfo/v/vt100
vt100|vt100-am|dec vt100 (w/advanced video):\
:5i:am:bs:ms:xn:xo:\
:co#80:it#8:li#24:vt#3:\
:@8=\EOM:DO=\E[%dB:K1=\EOq:K2=\EOr:K3=\EOs:K4=\EOp:K5=\EOn:\
:LE=\E[%dD:RA=\E[?7l:RI=\E[%dC:SA=\E[?7h:UP=\E[%dA:\
:ac=``aaffggjjkkllmmnnooppqqrrssttuuvvwwxxyyzz{{||}}~~:\
:ae=^O:as=^N:bl=^G:cb=\E[1K:cd=\E[J:ce=\E[K:cl=\E[H\E[J:\
:cm=\E[%i%d;%dH:cr=^M:cs=\E[%i%d;%dr:ct=\E[3g:do=^J:\
:eA=\E(B\E)0:ho=\E[H:k0=\EOy:k1=\EOP:k2=\EOQ:k3=\EOR:\
:k4=\EOS:k5=\EOt:k6=\EOu:k7=\EOv:k8=\EOl:k9=\EOw:k;=\EOx:\
:kb=^H:kd=\EOB:ke=\E[?1l\E>:kl=\EOD:kr=\EOC:ks=\E[?1h\E=:\
:ku=\EOA:l1=pf1:l2=pf2:l3=pf3:l4=pf4:le=^H:mb=\E[5m:\
:md=\E[1m:me=\E[0m:mr=\E[7m:nd=\E[C:pf=\E[4i:po=\E[5i:\
:ps=\E[0i:rc=\E8:rs=\E>\E[?3l\E[?4l\E[?5l\E[?7h\E[?8h:\
:..sa=\E[0%?%p1%p6%|%t;1%;%?%p2%t;4%;%?%p1%p3%|%t;7%;%?%p4%t;5%;m%?%p9%t\016%e\017%;$<2>:\
:sc=\E7:se=\E[m:sf=^J:so=\E[7m:sr=\EM:st=\EH:ta=^I:ue=\E[m:\
:up=\E[A:us=\E[4m:
(The termcap name for blink is mb, which you can see in the description).
So... if you are not seeing blinking text, that could be (a) the terminal itself or (b) the terminal description.
Further reading:
infocmp - compare or print out terminfo descriptions
terminfo - terminal capability data base
Standard ECMA-48:
Control Functions for Coded Character Sets
| blink code(escape code) has been removed? |
1,479,710,941,000 |
I am not able to handle special characters.
I have the following Perl script:
while(@mapping_array[$i])
{
chomp(@mapping_array[$i]);
my @core= split ( / / , $mapping_array[$i]) ;
@core[0] =~ tr/ //ds ; ## Deleting blank spaces
@core[1] =~ tr/ //ds ;
system("perl -pi -e 's/@core[0]/@core[1]/' $testproc ");
print "@core[0] \n";
print "@core[1] \n";
$i++;
}
The issue is that my @core[0] variable could be a simple string like abc or a more complex one like TEST[1]. My script works as expected for abc, replacing it with the value of @core[1], but it fails if my @core[0] is TEST[1].
Using ? instead of / in the substitution operator doesn't help. How can I do this correctly?
|
Sounds like you're looking for quotemeta. As explained in perldoc -f quotemeta:
quotemeta EXPR
Returns the value of EXPR with all the ASCII non-"word" characters
backslashed. (That is, all ASCII characters not matching
"/[A-Za-z_0-9]/" will be preceded by a backslash in the returned
string, regardless of any locale settings.) This is the internal
function implementing the "\Q" escape in double-quoted strings.
So, your script would be (note that array elements should be specified as $foo[N], not @foo[N]):
chomp(@mapping_array);
while($mapping_array[$i])
{
my @core= split ( / / , $mapping_array[$i]) ;
$core[0] =~ tr/ //ds ; ## // Deleting blank spaces
$core[1] =~ tr/ //ds ; # / fix SO highlighting
my($k,$l)=(quotemeta($core[0]),quotemeta($core[1]));
system("perl -pi -e 's/$k/$l/' $testproc ");
print "$core[0] \n$core[1] \n";
$i++;
}
| Executing a 'perl command' from shell and executing the same command from perl script using system command |
1,479,710,941,000 |
I was running a pipe command with one section being the following:
sort -t $'\t' -T . -k1,1g
When I was monitoring htop I saw this instead:
What is the reason behind this? Does this mean my command is wrong or is there something wrong with htop?
|
There’s nothing wrong with your command, htop replaces control characters with question marks:
(((unsigned char)data_c[j]) >= 32 ? ((unsigned char)data_c[j]) : '?')
(characters with values less than 32 are control characters).
| Why does `htop` display `$'\t'` as `?` in `sort` command? |
1,479,710,941,000 |
I'm using the following script, but getting unexpected output.
perl -pi -e '/DB_CHARSET/ and $_.="define('SOMETHING'/, true);\n"' file.txt
This command adds define(SOMETHING, true);
Since the text starts with ' then have " inside, how do I escape 'SOMETHING so that I end up with define('SOMETHING', true)?
I've tried usual \ and it did not help.
|
You could use e.g. \x27 in the Perl string (the character code for ' in hex):
$ perl -e 'print "foo\x27bar"' -l
foo'bar
or handle the quoting in the shell so as to give Perl a raw ':
$ perl -e 'print "foo'\''bar"' -l
foo'bar
(First ' ends the quoted string, \' inserts the quote, the third ' starts a new quoted string.)
| How to escape ' inside " in perl? |
1,479,710,941,000 |
I need to understand this code snippet that I found in .profile file
echo -en "\e[32;44m $(hostname) \e[m";echo -e "\e[m"
|
This snippet is used to print out the hostname of the system with a blue background and a green font.
To color your shell, you use special color escape sequences.
\e[ starts the color scheme, 32; will set the foreground color to green, 44 will set the background color to blue and m will end it.
$(command) creates a new shell, executes command and returns the result (not the return value).
hostname returns the hostname of the current system.
\e[m will reset the coloring of the output.
From the echo manpage:
-n do not output the trailing newline
-e enable interpretation of backslash escapes
IMHO your snippet could be simplified to
echo -e "\e[32;44m $(hostname) \e[m";
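As a hedge against echo -e not behaving the same everywhere, the same output can be produced with printf, which interprets the octal escape \033 (ESC) per POSIX:

```shell
# Green foreground (32) on blue background (44), then reset with \033[m.
printf '\033[32;44m %s \033[m\n' "$(hostname)"
```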
See the chapter 6.1 Colours of the BASH Prompt HOWTO for more details.
| Unix Shell and colours |
1,448,891,626,000 |
I have the following in my .bashrc file I use for a log:
function log(){
RED="\e[0;31m"
RESET="\e[0m"
echo -e "${RED}$(date)" "${RESET}$*" >> "$HOME"/mylog.txt
}
But when I do something with an apostrophe in it, it comes up with some sort of prompt and does not log it properly.
How do I escape all the text being input into the file?
Example:
$ log this is a testing's post
> hello
>
> ^C
$
Thanks.
|
The problem you have has nothing to do with echo -e or your log() function. The problem is with apostrophes:
log this is a testing's post
The shell (bash, in your case) has special meanings for certain characters. Apostrophes (single quotes) are used to quote entire strings, and prevent most other kinds of interpolation. bash expects them to come in pairs, which is why you get the extra prompt lines until you type the second one. If you want a literal single quote in your string, you need to tell bash, by escaping it via \', like so:
log this is a testing\'s post
Again, log is beside the point. You can try this out with plain old echo if you like:
echo this is a testing\'s post
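Alternatively, double-quote the whole message: inside double quotes an apostrophe has no special meaning, so nothing needs escaping. A minimal sketch (the throwaway log path is a stand-in for your real one):

```shell
#!/usr/bin/env bash
# Stand-in log function writing to a temporary file.
logfile=$(mktemp)
log() {
    printf '%s %s\n' "$(date)" "$*" >> "$logfile"
}
log "this is a testing's post"   # double quotes keep the apostrophe literal
tail -n 1 "$logfile"
```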
See Which characters need to be escaped in bash for more info.
| Escape characters from echo -e |
1,448,891,626,000 |
I regularly use the expression line=${line//[$'\r\n']}. But what does [$'\r\n'] mean?
I know it removes the '\r\n' characters, but how does it do this? Does it remove only instances of both characters together, or does it also match just one of them?
I do not understand this syntax.
If you can, please give me a link to the manual. I cannot find the answer to this question.
|
From the Bash manual:
${parameter/pattern/string}
The pattern is expanded to produce a pattern just as in filename
expansion. Parameter is expanded and the longest match of pattern
against its value is replaced with string. The match is performed
according to the rules described below (see Pattern Matching). If
pattern begins with /, all matches of pattern are replaced with string. ... If string is null, matches of pattern are deleted and the / following pattern may be omitted.
You have ${line//[$'\r\n']}, where:
the parameter is line,
the pattern is /[$'\r\n'] (note: begins with /, so all matches of pattern are replaced), and
the string is null, so the / after pattern is omitted, and matches are deleted.
Following the rules for Pattern Matching:
[…]
Matches any one of the enclosed characters.
$'...' tells bash to interpret certain escape sequences (here, \r for carriage return and \n for line feed) and replace them with actual characters represented by the escape sequences.
So this substitution matches all instances of either carriage return (CR, \r) or line feed (LF, \n), and deletes them.
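A short demonstration in bash (the sample value is made up):

```shell
#!/usr/bin/env bash
line=$'foo\r\nbar\r'          # CR and LF embedded via ANSI-C quoting
stripped=${line//[$'\r\n']}   # delete every CR and every LF
printf '%s\n' "$stripped"     # prints: foobar
```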
| What does `[$'\r\n']` mean? |
1,448,891,626,000 |
My desired outcome is the following: to recursively search a directory looking for a given string in all found files. The following command is my usual port of call:
find ./ | xargs grep -ns 'foobar'
However, when foobar has quotes the command fails and gives me a > prompt in the shell. The specific command that's causing the problem is as follows:
find ./ | xargs grep -ns 'add_action(\'save_post\','
I've tried to escape the quotes with backslashes but to no avail. What's the correct way to do this?
|
Single quotes are terminated by single quotes; all other characters in between are preserved exactly as is, including backslashes. Thus there is no way to embed a single quote between single quotes. (But you can end the single quotes, escape a single quote, and start a new set of single quotes, as in 'Single quotes aren'\''t ever really embedded in single quotes.')
Suggestion: Avoid find+xargs when grep -r pattern . can recursively grep on the current directory.
The below commands have equivalent behavior:
grep -rns "add_action('save_post'," .
grep -rns 'add_action('\'save_post\', .
The last command is interpreted as:
'add_action(' -> add_action(
\' -> '
save_post -> save_post
\' -> '
, -> ,
Concatenating these parts, the grep command receives the argument add_action('save_post',.
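In bash (and ksh/zsh), ANSI-C quoting is a third option worth knowing: inside $'...' the sequence \' is interpreted as a literal apostrophe, so the pattern stays in one piece. A sketch against a throwaway file:

```shell
# $'...' interprets \' as a literal single quote (bash/ksh/zsh).
dir=$(mktemp -d)
printf "add_action('save_post', 'cb');\n" > "$dir/f.php"
grep -rns $'add_action(\'save_post\',' "$dir"
```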
| How can I pass strings with single quotes to grep? |
1,448,891,626,000 |
I'm seeing this in my .bashrc file:
PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\
[\033[01;34m\]\w\[\033[00m\]\$ '
and I have absolutely no idea what all those escapes codes mean.
|
There are three kinds of escape codes in there: bash parameter expansion, bash prompt expansion, and terminal escape codes.
${debian_chroot:+($debian_chroot)} means “if $debian_chroot is set and non-empty, then ($debian_chroot), else nothing”. (See /etc/bash.bashrc for how debian_chroot is defined. As the name indicates this is a Debian thing.)
The backslash escapes are prompt escapes. \u is replaced by the user name, \h is replace by the machine name, and so on (see the manual for a list). Parts within \[…\] are terminal escapes; the brackets tell bash that these parts don't take any room on the screen (this lets bash calculate the width of the prompt). \033 is the ESC character (character number 033 octal, i.e. 27 decimal, sometimes written \e or ^[); it introduces terminal escape sequences.
ESC [ codes m (written CSI Pm m in the xterm control sequences list) changes the color or appearance of the following text. For example the code 1 switches to bold, 32 switches the foreground color to green, 0 switches to default attributes.
| Understanding escape codes |
1,448,891,626,000 |
I just struggled (again) over this:
# only in bash
NORM=$'\033[0m'
NORM=$'\e[0m'
# only in dash
NORM='\033[0m'
# only in bash and busybox
NORM=$(echo -en '\033[0m')
The goal is to include special characters in the string, not only for output using echo but also for piping into a cli tool etc.
In the specific use case above, using $(tput ...) is probably the best way, but I'm asking for a general escaping solution with minimal requirements on external tools and maximum compatibility.
Normally, I help myself using conditions like [ -z "$BASH_VERSION" ] but
I didn't find an easy way to detect busybox yet
a normal variable assignment in 5 lines (using if/else/fi) looks like overkill
I prefer simple solutions
|
What you want is "$(printf ...)". Stephane has already written an excellent expose of printf vs echo, more of an article than a mere answer, so I won't repeat the whole thing here. Keynotes pertinent to the current question are:
Stick to POSIX features and it is very portable, and
It is frequently a shell builtin, in which case you have no external calls or dependencies.
I will also add that it took me quite a while (okay, just a few weeks) to get around to switching from echo, because I was used to echo and thought printf would be complicated. (What are all those % signs about, huh?) As it turns out, printf is actually extremely simple and I don't bother with echo anymore for anything but fixed text with a newline at the end.
Printf Made Easy
There are vast numbers of options for printf. You can print numbers to specific decimal places of accuracy. You can print multiple fields, each with a specified fixed width (or a minimum width, or a maximum width). You can cause a shell string variable which contains the character sequences \t or \n to be printed with those character sequences interpreted as tabs and newlines.
You can do all these things, and you should know they are possible so you can look them up when you need them, but in the majority of cases the following will be all you need to know:
printf takes as its first argument a string called "format". The format string can specify how further arguments are to be handled (i.e. how they will be formatted). Further arguments, if not referenced at all* within the format argument, are ignored.
Since alphanumeric characters (and others) can be embedded in the format argument and will print as-is, it may look like printf is doing the same thing as echo -n but that for some unknown reason it's ignoring all but the first argument. That's really not the case.
For example, try printf some test text. In this example some is actually taken as the format, and since it doesn't contain anything special and doesn't tell printf what to do with the rest of the arguments, they are ignored and all you get printed is some.
% followed by a specific letter needs to be used within the format string (the first argument to printf) to specify what type of data the subsequent arguments contain. %s means "string" and is what you will use most often.
\n or \t within the format translate to newlines and tab characters respectively.
That's really all you need to use printf productively. See the following code block for some very simple illustrative examples.
$ var1="First"
$ var2="Second"
$ var3="Third"
$ printf "$var1" "$var2" "$var3" # WRONG
First$ # Only the first arg is printed, without a trailing newline
$
$ printf '%s\n' "$var1" # %s means that the next arg will be formatted as a literal string with any special characters printed exactly as-is.
First
$
$ printf '%s' "$var1" "$var2" "$var3" # When more args are included than the format calls for, the format string is reused. This example is equivalent to using '%s%s%s' as the format.
FirstSecondThird$ # All three args were printed; no trailing newline.
$
$ printf '%s\t%s\t%s\n' "$var1" "$var2" "$var3"
First Second Third # Tab separation with trailing newline. This format is very explicit about what to do with three arguments. Now see what happens if four are used:
$ var4="Fourth"
$ printf '%s\t%s\t%s\n' "$var1" "$var2" "$var3" "$var4"
First Second Third # The specified format is reused after the three expected args,
Fourth # so this line has two trailing tabs.
$
$ printf '%s\n' "$var1" "$var2" "$var3" # This format reuse can be used to advantage in printing a list, for example.
First
Second
Third
$
$ printf '%s\t' "$var1" "$var2" "$var3" ; printf '\n' # Here is a dual command that could have args added without changing the format string...
First Second Third
$ printf '%s\t' "$var1" "$var2" "$var3" "$var4" ; printf '\n'
First Second Third Fourth # ...as you can see here.
$ # It does print a trailing tab before the newline, however.
* Of course, if you include a single argument format specifier sequence such as %s, your whole format string is reused as many times as needed to handle all arguments provided. See examples.
| Shell-independent escaping for strings in scripts |
1,448,891,626,000 |
How do I print ↑ and ↓ in Unix?
I have a script written in bash and I am printing some numbers with whom I want to print arrow keys to show increase/decrease. How do I literally print these arrow keys (let's say I assign them to some variable and print these along with my numbers) with my numbers using echo or printf in my shell script?
Using the link below, I tried:
echo -e '\x1b\x5b\x35\x7e'
but it does not print anything. I can't find how to print arrow keys in this link.
|
With bash 4.2 and above (or ksh, mksh, pdksh, lksh, zsh), you can use ANSI C Quoting:
echo $'\U2191'
or setting locale with unicode characters:
LC_ALL=C.UTF-8 printf '\U2191\n'
or you can use perl instead:
perl -CS -le 'print "\x{2191}"'
| How to Print/echo arrow character (not key) in unix - bash, version 3.2.52 |
1,448,891,626,000 |
I have a list of files generated using find that I want to feed (pipe) to cp. My problem is that the files have spaces and apostrophes in them, leading cp to repeatedly complain that it "cannot stat". Is there an elegant solution to this problem?
|
Make sure you have -print0 in the find command.
find . -print0 | xargs -0 cmd ...
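For the cp case from the question, GNU cp's -t flag takes the target directory first, so xargs can append the NUL-separated file list at the end. A sketch using throwaway directories (-t is a GNU extension):

```shell
# NUL separation lets names with spaces and apostrophes survive intact.
src=$(mktemp -d)
dst=$(mktemp -d)
touch "$src/plain.txt" "$src/with space.txt" "$src/it's here.txt"
find "$src" -type f -name '*.txt' -print0 | xargs -0 cp -t "$dst" --
ls "$dst"
```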
| How to copy a piped list of files contained spaces and apostrophes? |
1,448,891,626,000 |
I can change my cursor shape like this:
printf "\x1b[\x31 q" # block :)
printf "\x1b[\x35 q" # ibeam :)
But when I assign that to a variable with %s and expand using printf, it doesn't work:
CURSOR="\x1b[\x3%s q"
printf ${CURSOR} 1 # block :(
printf ${CURSOR} 5 # ibeam :(
How do I fix this?
|
\x31 is the character 1 and \x35 is 5, so in this case you don't have to use escape sequences.
CURSOR="\x1b[%s q"
printf "${CURSOR}" 1
printf "${CURSOR}" 5
| Using printf with escape sequences? |
1,448,891,626,000 |
When I execute the following command:
#!/bin/bash
while IFS= read -r -d '' file; do
files+=$file
done < <(find -type f -name '*.c' -print0)
echo "${files[@]}"
I do not get the same result as this one:
#!/bin/bash
find_args="-type f '*.c' -print0"
while IFS= read -r -d '' file; do
files+=$file
done < <(find $find_args)
echo "${files[@]}"
How can I fix the second scenario to be equivalent to the first one?
My understanding is that, because there are single quotes in the double quotes, the single quotes get escaped, which produces a bad expansion that looks something like that:
find -type f -name ''\''*.c'\'' -print0
|
(Note, you have a typo. You left off the -name flag in the second example.)
One approach is to put the args in an array and pass the array appropriately to find...
#!/bin/bash
find_args=(-type f -name '*.c' -print0)
while IFS= read -r -d '' file; do
files+=$file
done < <(find "${find_args[@]}")
echo "${files[@]}"
The format ${foo[@]} expands to all of the elements of the array, each an individual word (rather than expanding to a single string). This is closer in intent to the original script.
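To convince yourself the array hands find each argument as a separate word (glob pattern intact, no word splitting), print the elements one per line:

```shell
#!/usr/bin/env bash
find_args=(-type f -name '*.c' -print0)
printf '<%s>\n' "${find_args[@]}"   # one <element> per line; '*.c' stays one word
```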
| Bash: escaped quotes in subshell [duplicate] |
1,448,891,626,000 |
Example:
When I run
echo -e "\[\033[;33m\][\t \u (\#th) | \w]$\[\033[0m\]"
the printed response is
\[\][ \u (\#th) | \w]$\[\]
where everything after the first \[ and before the last \] is an orangey-brown.
However when I set the command prompt to
\[\033[;33m\][\t \u (\#th) | \w]$\[\033[0m\]
the command prompt is printed as
[21:55:17 {username} (89th) | {current directory path}]$
were the whole command prompt is the orangy-brown.
In conclusion, my question is: Can I have a command prompt design printed (with echo, cat, less, etc.) as if it were the command prompt?
|
In Bash 4.4+, you can use ${var@P}, similarly to how ${var@Q} produces the contents of var quoted, see 3.5.3 Shell Parameter Expansion in the manual, bottom of page.
$ printf "%s\n" "${var}"
\[\033[00;32m\]\w\[\033[00m\]\$
$ printf "%s\n" "${var@P}" |od -tx1z
0000000 01 1b 5b 30 30 3b 33 32 6d 02 2f 74 6d 70 01 1b >..[00;32m./tmp..<
0000020 5b 30 30 6d 02 24 0a >[00m.$.<
0000027
or if you run the latter without od, you should see the current path in green.
Note that it prints \001 and \002 for \[ and \]. Those probably aren't too useful for you. We could use a temporary variable and the string replace expansion to get rid of them:
$ printf -v pst "%s\n" "${var@P}"
$ pst=${pst//[$'\001\002']}
$ printf "%s\n" "$pst" |od -tx1z
0000000 1b 5b 30 30 3b 33 32 6d 2f 74 6d 70 1b 5b 30 30 >.[00;32m/tmp.[00<
0000020 6d 24 0a 0a >m$..<
0000024
In Zsh, there's print -P:
$ print -P "%F{green}%d%F{normal}%#" |od -tx1z
0000000 1b 5b 33 32 6d 2f 74 6d 70 2f 61 61 61 1b 5b 33 >.[32m/tmp/aaa.[3<
0000020 39 6d 25 0a >9m%.<
and the parameter expansion flag %, so ${(%)var} would be similar to Bash's ${var@P} above, and you could use either print -v othervar -P "..." or othervar=${(%)var} to put the resulting string in othervar.
Note that for things like Bash's \u and \w, you could just use $USER and $PWD instead, but for something like \# or \j that might not be so easy.
| Is it possible to use the escape codes used in shell prompts elsewhere, such as with echo? |
1,448,891,626,000 |
Perl has a function called metaquote() to escape all special characters for a regular expression.
Is there an equivalent technique for egrep?
Example: If I am searching for the string abc.def.ghi, I need to remember to escape the dots manually, e.g., abc\.def\.ghi
I assume egrep does not have a built-in mode/feature to do this, but I am open to "one-liners" in Perl/sed/awk to simulate metaquote() for egrep. Also, Perl's metaquote() might work in trivial cases, but the regular expression syntax is different between egrep and perl.
|
Use the -F option to make grep treat the pattern as a fixed string:
grep -F 'abc.def.ghi' <file
And also note that you don't need to invoke egrep.
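If you really do need a regex (say, to anchor the match or combine it with other patterns), you can approximate Perl's quotemeta() with sed by backslashing the ERE metacharacters. A rough sketch (the helper name is made up):

```shell
# Backslash the ERE metacharacters ] [ \ . | $ ( ) { } ? + * ^
# so a literal string can be embedded in a grep -E pattern.
escape_ere() {
    printf '%s\n' "$1" | sed 's/[][\.|$(){}?+*^]/\\&/g'
}
escape_ere 'abc.def.ghi'    # prints: abc\.def\.ghi
printf 'abc.def.ghi\nabcxdefyghi\n' | grep -E "^$(escape_ere 'abc.def.ghi')\$"
```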
| How to escape metacharacters for egrep like metaquote from Perl? |
1,448,891,626,000 |
When running an interactive shell application, how can I send it a key (or key combination) which would normally be intercepted by GNOME Terminal? In this particular instance it's the F10 key which is intercepted.
Bonus points for a general-purpose solution which would work for things like Alt, PgUp and Alt-Tab (might be useful for a shell script to configure shortcut commands) or in other terminals as well.
|
You can try disabling the GNOME shortcuts in Edit -> Keyboard Shortcuts, so the window won't eat up the function keys.
There seems to be a known gnome-terminal bug relating to this.
Alternatively if this doesn't work, you will have to use another terminal that explicitly sends function keys as control codes to the terminal. rxvt is one I can recommend, or xterm.
| How to bypass GNOME Terminal when sending keyboard input? |
1,448,891,626,000 |
I'm trying to write a simple alternative script for uploading files to the transfer.sh service. One of the examples on the website mentions a way of uploading multiple files in a single "session":
$ curl -i -F filedata=@/tmp/hello.txt \
-F filedata=@/tmp/hello2.txt https://transfer.sh/
I'm trying to make a function that would take any number of arguments (files) and pass them to cURL in such fashion. The function is as follows:
transfer() {
build() {
for file in $@
do
printf "-F filedata=@%q " $file
done
}
curl --progress-bar -i \
$(build $@) https://transfer.sh | grep https
}
The function works as expected as long as no spaces are in the filenames.
The output of printf "-F filedata=@%q " "test 1.txt" is -F filedata=@test\ 1.txt, so I expected the special characters to be escaped correctly. However, when the function is called with a filename that includes spaces:
$ transfer test\ 1.txt
cURL does not seem to interpret the escapes and reports an error:
curl: (26) couldn't open file "test\"
I also tried quoting parts of the command, e.g. printf "-F filedata=@\"%s\" " "test 1.txt", which produces -F filedata=@"test 1.txt".
In such case cURL returns the same error: curl: (26) couldn't open file ""test". It seems that it does not care about quotes at all.
I'm not really sure what causes such behaviour. How could I fix the function to also work with filenames that include whitespace?
Edit/Solution
It was possible to solve the issue by using an array, as explained by @meuh. A solution that works in both bash and zsh is:
transfer () {
if [[ "$#" -eq 0 ]]; then
echo "No arguments specified."
return 1
fi
local -a args
args=()
for file in "$@"
do
args+=("-F filedata=@$file")
done
curl --progress-bar -i "${args[@]}" https://transfer.sh | grep https
}
The output in both zsh and bash is:
$ ls
test space.txt test'special.txt
$ transfer test\ space.txt test\'special.txt
######################################################################## 100.0%
https://transfer.sh/z5R7y/test-space.txt
https://transfer.sh/z5R7y/testspecial.txt
$ transfer *
######################################################################## 100.0%
https://transfer.sh/4GDQC/test-space.txt
https://transfer.sh/4GDQC/testspecial.txt
It might be a good idea to pipe the output of the function to the clipboard with xsel --clipboard or xclip on Linux and pbcopy on OS X.
The solution provided by @Jay also works perfectly well:
transfer() {
printf -- "-F filedata=@%s\0" "$@" \
| xargs -0 sh -c \
'curl --progress-bar -i "$@" https://transfer.sh | grep -i https' zerop
}
|
One way to avoid having bash word-splitting is to use an array to carry each argument without any need for escaping:
push(){ args[${#args[*]}]="$1"; }
build() {
args=()
for file
do push "-F"
push "filedata=@$file"
done
}
build "$@"
curl --progress-bar -i "${args[@]}" https://transfer.sh | grep https
The build function creates an array args and the push function adds a new value to the end of the array. The curl simply uses the array.
The first part can be simplified, as push can also be written simply as args+=("$1"), so we can remove it and change build to
build() {
args=()
for file
do args+=("-F" "filedata=@$file")
done
}
| posting data using cURL in a script |
1,448,891,626,000 |
On CentOS 7 I am using cloud-init to spin off a droplet using the DigitalOcean API which requires YAML formatting.
I got most parts working fine, but I'm struggling with escaping the commands run at 'runcmd' below:
#!/bin/sh
set -e # Stop on first error
curl -X POST https://api.digitalocean.com/v2/droplets \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer '$api_key'' \
-d '{
"name":"'$droplet_name'",
"region":"'$region'",
"size":"'$size'",
"image":"'$image'",
"ssh_keys":'$root_ssh_pub_key',
"backups":false,
"ipv6":false,
"private_networking":false,
"user_data":"
#cloud-config
users:
- name: SomeUser
groups: wheel
shell: /bin/bash
ssh-authorized-keys:
- ssh-dss AAAABBBBCCCCDDDDD...
runcmd:
- sed -i -e '$aAllowUsers SomeUser' /etc/ssh/sshd_config
- sed -i -e '/PermitRootLogin/s/^.*$/PermitRootLogin no/' /etc/ssh/sshd_config
- service sshd restart
"}'
The errors I receive are:
curl: (6) Could not resolve host: no
curl: (3) [globbing] unmatched close brace/bracket in column 63
|
You are using single quotes in the -d command-line option both for delimiting strings separated by $variable interpolations and for quoting the argument of the second sed command.
For $aAllowUsers SomeUser this might be what you want, but for /PermitRootLogin/s/^.*$/PermitRootLogin no/ this is probably not what you want /bin/sh to expand. Putting backslashes in there should help:
- sed -i -e \'/PermitRootLogin/s/^.*$/PermitRootLogin no/\' /etc/ssh/sshd_config
YAML should not have any problems with that list item having single quotes in the middle of the scalar value.
| How to escape in YAML the correct way? |