| date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,588,335,338,000 |
On a Debian 9/stretch machine, I can switch from the GUI to the command line by pressing Ctrl+Alt+F4. After doing this... how do you switch back to the GUI?
I haven't found any key commands or terminal commands to achieve this goal, so I am stuck in the command line permanently after doing so. The only way I have found to return to the GUI is to reboot the machine using sudo reboot.
Is there any way in Debian to switch back to the GUI from the command line without rebooting?
None of the answers I have found elsewhere seem to work for returning to the GUI from the command line after pressing Ctrl+Alt+F1.
|
Usually you can switch back to the GUI with Ctrl+Alt+F7 (the GUI traditionally runs on tty7; on newer systems the display manager may run on tty1 or tty2 instead). From the command line itself, sudo chvt 7 switches to virtual terminal 7.
| How to switch back to GUI from command line on Debian 9/stretch |
1,588,335,338,000 |
I am trying to zip a directory, but I have a list of specific files that I need to ignore. This list is generated with a script and is quite long so when I pass them to the zip command I get an error saying that the command line is too long.
I basically need the functionality asked in this question - Argument list too long when zipping large list of certain files in a folder - but for the -x option to ignore the files instead of add them.
These files are in various subdirectories and don't follow a specific naming convention so there is no clear pattern to ignore them without specifying them all individually.
Is there any way to do this?
|
You can (as suggested in the manpage) put your list into a file, rather than using positional parameters for the list:
-x files
--exclude files
Explicitly exclude the specified files, as in:
zip -r foo foo -x \*.o
which will include the contents of foo in foo.zip while excluding all the files that end in .o. The backslash avoids the shell filename substitution, so that the name matching is performed by zip at all directory levels.
Also possible:
zip -r foo foo -x@exclude.lst
which will include the contents of foo in foo.zip while excluding all the files that match the patterns in the file
exclude.lst.
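As the question involves a generated list, here is a minimal sketch of the workflow (the file names and the *.o pattern are assumptions for illustration); the zip invocation itself is left as a comment so the sketch stands alone:

```shell
# Build a tiny sample tree and generate the exclusion list
mkdir -p proj/sub
printf 'keep\n' > proj/keep.txt
printf 'skip\n' > proj/sub/skip.o
find proj -name '*.o' > exclude.lst
cat exclude.lst                     # prints: proj/sub/skip.o
# Archive while reading exclusions from the file, keeping the
# command line short no matter how long the list grows:
# zip -r foo.zip proj -x@exclude.lst
```

Note that zip treats each line of the list as a pattern, so literal file names work as well.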
| zip - ignore a large list of specific files without overrunning the command line |
1,588,335,338,000 |
I have a program that outputs data in the format
date, time,field1,field2,field3,fieldn
12/20/14,08:01:53,318.622,0.93712,21.6867,1.1089
The file has many columns, which all need to stay the same.
The date format is US (MM/DD/YY), however I need non-US (DD/MM/YY), i.e.
date, time,field1,field2,field3,fieldn,....
20/12/14,08:01:53,318.622,0.93712,21.6867,1.1089,....
What's the easiest way to achieve this? I did a search and there are some examples but not quite my situation. Also looked into the date command, and awk and sed but I don't know enough to make a command. For example this answer using awk looked good, but doesn't do anything on my file.
I'm on a Mac, so I would need commands that work with the versions macOS provides.
|
sed -E 's,^([0-9]+)/([0-9]+),\2/\1,'
Explanation
sed -E: use sed with extended regular expressions so we don't have to escape the (), etc.
s/foo/bar/: this is the general sed syntax to search for foo and replace it with bar. Here, I've used , instead of /, because there are /s in the expression, and this simplifies it (again avoiding confusing escaping). Hence, s,foo,bar,.
^([0-9]+)/([0-9]+): search for the beginning of the line ^, followed by one or more digits [0-9]+, and put that in a capturing group (). This is the month. This is followed by / and again one or more digits in another capturing group (i.e. the day).
\2/\1: replace this with the second capturing group (the day) followed by /, followed by the first capturing group (the month).
Usage
You can pipe your file into this command, i.e. <your_command_here>| sed…, or you can run it directly on your file, i.e. sed… file.txt. This will output directly to the command line. To write to a new file append > output.txt to the command.
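As a quick end-to-end sketch (the sample data mirrors the question; lines that don't start with digits, such as a header, are left untouched):

```shell
printf '%s\n' 'date, time,field1' '12/20/14,08:01:53,318.622' > sample.csv
sed -E 's,^([0-9]+)/([0-9]+),\2/\1,' sample.csv
# prints:
# date, time,field1
# 20/12/14,08:01:53,318.622
```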
| Changing the date format within a CSV file |
1,588,335,338,000 |
I am trying to replace text using an extended regular expression with sed on macOS 10.14.3 (18D109). If I do not use an extended regular expression, the in-place flag works; otherwise it does not update the file. However, without the -i flag it prints the correct result to the console. Why does this happen, and how can I fix it?
$ echo "foo" > foo.txt
$ sed -i -E 's/fo{1,}/123123/g' ./foo.txt
Nothing happens.
$ sed -E 's/fo{1,}/123123/g' ./foo.txt
123123
|
When using sed to edit a document in-place (with a sed implementation that supports this), there will not be any output in the console. The file will instead be transformed in accordance with the editing script.
$ echo "foo" >foo.txt
$ sed -i -E 's/fo{1,}/123123/g' ./foo.txt
$ cat foo.txt
123123
On FreeBSD and macOS, the -i flag of the provided sed implementation has different semantics from GNU sed's: it takes the following argument as a backup suffix, so your command created a file called foo.txt-E as a backup of the original (and the -E option therefore did not take effect). To use -i with no backup suffix, do this:
sed -i '' -E ...
Example on FreeBSD/macOS:
$ echo "foo" >foo.txt
$ sed -i '' -E 's/fo{1,}/123123/g' ./foo.txt
$ cat foo.txt
123123
Related:
How can I achieve portability with sed -i (in-place editing)?
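If you need one command line that behaves the same with GNU and BSD/macOS sed, a sketch that sidesteps -i entirely is to write the result to a temporary file and move it over the original:

```shell
printf 'foo\n' > foo.txt
tmp=$(mktemp) &&
  sed -E 's/fo{1,}/123123/g' foo.txt > "$tmp" &&
  mv "$tmp" foo.txt
cat foo.txt    # prints: 123123
```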
| sed inline and extended regex does not work together |
1,588,335,338,000 |
I was reading the man page of find and I found myself confused by the following commands — specifically, by the difference between each command and its counterpart.
What is the difference between the following two commands:
find -execdir command "{}" \;
find -execdir "command {}" \;
Cause of confusion: I thought quoting instructs the shell to treat the quoted parts as a single chunk. Hence, when I saw the second one, I thought it would fail because there is no command named command <file-name>.
What is the difference between the following two:
find -execdir bash -c "command" "{}" \;
find -execdir bash -c "command {}" \;
Cause of confusion: enclosing the command along with the curly braces in the second version should, to my understanding, cause it to be passed as a whole to bash, and find should not replace the braces with the corresponding file names.
What is the difference between the following two:
find -execdir bash -c "something \"$@\"" {} \;
find -execdir bash -c 'something "$@"' bash {} \;
To my understanding, both are identical. How is passing the braces to the shell as a separate argument (the second version) any safer than the first?
UPDATE
Just noticed that the first version of the commands in question #3 is not working! I tried the following (none of them works):
find -execdir bash -c 'something \"$@\"' {} \; # changed external quotes to single ones.
find -execdir bash -c "something $@" {} \; # removed the inner quotes.
find -execdir bash -c 'something $@' {} \; # same as above but with outer single quotes instead.
What am I missing here? I believe I should be able to leave the curly braces outside the quotes enclosing the command?
|
This is (as you've noticed) rather complicated; I'll try to explain it. It's helpful to think in terms of the parsing/processing sequence the command(s) go through, and watch what happens at each step. In general, the process looks something like this:
The shell parses the command line, breaking it into tokens ("words"), substituting variable references etc, and removing quotes and escapes (after their effect has been applied). It then (usually) runs the first "word" as the command name ("find" in these cases), and passes the rest of the words to it as arguments.
find searches for files, and runs the stuff between its "-execdir" and ";" as commands. Note that it replaces "{}" with the matched filename, but does no other parsing -- it just runs the first arg after "-execdir" as the command name, and passes the following arguments to that as its arguments.
In the case where that command happens to be bash and it gets passed the -c option, it parses the argument right after -c as a command string (sort of like a miniature shell script), and the rest of its arguments as arguments to that mini-script.
Ok, a couple of other notes before I dive into this: I'm using BSD find, which requires that the directory to search be explicitly specified, so I'll be using find . -execdir ... instead of just find -execdir .... I'm in a directory that contains the files "foo.txt" and "z@$%^;*;echo wheee.jpg" (to illustrate the risks of using bash -c wrong). Finally, I have a short script called pargs in my binaries directory that prints its arguments (or complains if it didn't get any).
Question one:
Now let's try out at the two commands in your first question:
$ find . -execdir pargs "{}" \;
pargs got 1 argument(s): '.'
pargs got 1 argument(s): 'foo.txt'
pargs got 1 argument(s): 'z@$%^;*;echo wheee.jpg'
$ find . -execdir "pargs {}" \;
find: pargs .: No such file or directory
find: pargs foo.txt: No such file or directory
find: pargs z@$%^;*;echo wheee.jpg: No such file or directory
This matches your expectation: the first worked (and BTW you could've left off the double-quotes around {}), and the second failed because the space and filename were treated as part of the command name, rather than as an argument to it.
BTW, it's also possible to use -exec[dir] ... + instead of -exec[dir] \; -- this tells find to run the command as few times as possible, and pass a bunch of filenames at once:
$ find . -execdir pargs {} +
pargs got 3 argument(s): '.' 'foo.txt' 'z@$%^;*;echo wheee.jpg'
Question two:
This time I'll take the options one at a time:
$ find . -execdir bash -c "pargs" "{}" \;
pargs didn't get any arguments
pargs didn't get any arguments
pargs didn't get any arguments
"Huh", you say? What's going on here is that bash is getting run with an argument list like "-c", "pargs", "foo.txt". The -c option tells bash to run its next argument ("pargs") like a miniature shell script, something like this:
#!/bin/bash
pargs
...and sort-of passes that "mini-script" the argument "foo.txt" (more on this later). But that mini-script doesn't do anything with its argument(s) -- specifically, it doesn't pass them on to the pargs command, so pargs never sees anything. (I'll get to the proper way to do this in the third question.) Now, let's try the second alternate of the second question:
$ find . -execdir bash -c "pargs {}" \;
pargs got 1 argument(s): '.'
pargs got 1 argument(s): 'foo.txt'
pargs got 1 argument(s): 'z@$%^'
bash: foo.txt: command not found
wheee.jpg
Now things are sort of working, but only sort of. bash gets run with the arguments "-c" and "pargs " + the filename, which works as expected for "." and "foo.txt", but when you pass bash the arguments "-c" and "pargs z@$%^;*;echo wheee.jpg", it's now running the equivalent of this as its mini-script:
#!/bin/bash
pargs z@$%^;*;echo wheee.jpg
So bash will split that into three commands separated by semicolons:
"pargs z@$%^" (which you see the effect of)
"*", which expands to the words "foo.txt" and "z@$%^;*;echo wheee.jpg", and hence tries to run foo.txt as a command and pass it the other filename as an argument. There's no command by that name, so it gives an appropriate error.
"echo wheee.jpg", which is a perfectly reasonable command, and as you can see it prints "wheee.jpg" to the terminal.
So it worked for a file with a plain name, but when it ran into a filename that contained shell syntax, it started trying to execute parts of the filename. That's why this way of doing things is not considered safe.
Question three:
Again, I'll look at the options one at a time:
$ find . -execdir bash -c "pargs \"$@\"" {} \;
pargs got 1 argument(s): ''
pargs got 1 argument(s): ''
pargs got 1 argument(s): ''
$
Again, I hear you say "Huh????" The big problem here is that $@ is not escaped or in single-quotes, so it gets expanded by the current shell context before it's passed to find. I'll use pargs to show what find is actually getting as arguments here:
$ pargs . -execdir bash -c "pargs \"$@\"" {} \;
pargs got 7 argument(s): '.' '-execdir' 'bash' '-c' 'pargs ""' '{}' ';'
Note that the $@ just vanished, because I was running this in an interactive shell that hadn't received any arguments (or set them with the set command). Thus, we're running this mini-script:
#!/bin/bash
pargs ""
...which explains why pargs was getting a single empty argument.
If this were in a script that had received arguments, things would be even more confusing. Escaping (or single-quoting) the $ solves this, but still doesn't quite work:
$ find . -execdir bash -c 'pargs "$@"' {} \;
pargs didn't get any arguments
pargs didn't get any arguments
pargs didn't get any arguments
The problem here is that bash is treating the next argument after the mini-script as the name of the mini-script (which is available to the mini-script as $0, but is not included in $@), not as a regular argument (i.e. $1). Here's a regular script to demo this:
$ cat argdemo.sh
#!/bin/bash
echo "My name is $0; I received these arguments: $@"
$ ./argdemo.sh foo bar baz
My name is ./argdemo.sh; I received these arguments: foo bar baz
Now try this with a similar bash -c mini-script:
$ bash -c 'echo "My name is $0; I received these arguments: $@"' foo bar baz
My name is foo; I received these arguments: bar baz
The standard way to solve this is to add a dummy script-name argument (like "bash"), so that the actual arguments show up in the usual way:
$ bash -c 'echo "My name is $0; I received these arguments: $@"' mini-script foo bar baz
My name is mini-script; I received these arguments: foo bar baz
This is exactly what your second option does, passing "bash" as the script name and the found filename as $1:
$ find . -execdir bash -c 'pargs "$@"' bash {} \;
pargs got 1 argument(s): '.'
pargs got 1 argument(s): 'foo.txt'
pargs got 1 argument(s): 'z@$%^;*;echo wheee.jpg'
Which finally works -- for real, even on weird filenames. That's why this (or the first option in your first question) is considered a good way to use find -exec[dir]. You can also use this with the -exec[dir] ... + method:
$ find . -execdir bash -c 'pargs "$@"' bash {} +
pargs got 3 argument(s): '.' 'foo.txt' 'z@$%^;*;echo wheee.jpg'
| Commands Differences Using Quotations (Find) |
1,588,335,338,000 |
The ss command (from the iproute2 set of tools which comes as a newer alternative to netstat) has in its --help the following options
-0, --packet display PACKET sockets
-t, --tcp display only TCP sockets
-S, --sctp display only SCTP sockets
-u, --udp display only UDP sockets
-d, --dccp display only DCCP sockets
-w, --raw display only RAW sockets
-x, --unix display only Unix domain sockets
What exactly is the distinction made here between RAW and UNIX domain sockets?
And what actually are the PACKET sockets?
|
A raw socket is a network socket (AF_INET or AF_INET6 usually). It can be used to create raw IP packets, which is useful for troubleshooting or for implementing your own TCP stack without using SOCK_STREAM:
Raw sockets allow new IPv4 protocols to be implemented in user space. A raw socket receives or sends the raw datagram not including link level headers. [raw(7)]
Tools like nmap use raw sockets in order to stop the TCP handshake after the initial SYN, SYN-ACK, so the TCP connection is never completely established. As a network socket, it uses sockaddr_in for addresses.
However, the creation of raw sockets is usually restricted. Only privileged processes can create them.
A unix socket on the other hand is not a network socket (AF_UNIX). It's a local socket:
The AF_UNIX (also known as AF_LOCAL) socket family is used to communicate between processes on the same machine efficiently. [unix(7)]
It uses another address structure (sockaddr_un). It's a common way to implement two-way communication on a single system for inter-process communication without going through the network layer.
And packet sockets are raw packets at the driver level:
Packet sockets are used to receive or send raw packets at the device driver (OSI Layer 2) level. They allow the user to implement protocol modules in user space on top of the physical layer. [packet(7)]
The other sockets act on the network layer (OSI Layer 3) or higher. At that point, you're talking directly to your network interface's driver.
For more information see socket(2), ip(7), packet(7), raw(7), socket(7) and unix(7).
| ss command: difference between raw and unix sockets |
1,588,335,338,000 |
How can I configure bash/zsh to show a small key icon when the prompt asks for a password, like the macOS Terminal does?
Is this even possible?
|
Your shell cannot help you because it isn't even active at this point. It's just sitting in the background waiting for the command to terminate. The shell runs the sudo command, after that sudo interacts with the terminal. (Suggested background reading: What is the exact difference between a 'terminal', a 'shell', a 'tty' and a 'console'?)
It may be possible for your terminal emulator to do what the macOS Terminal does, since that one certainly has this feature. I'm not aware of other terminal emulators with it; you may want to file a feature request with the developer of your favorite terminal emulator.
| Show a small key icon when the prompt asks for a password |
1,588,335,338,000 |
When you do something like this:
echo 'Hello World'
Or like this:
x=12345
echo "x is: $x"
In the first example, does the echo command receive 'Hello World', or does it receive Hello World?
And in the second example, does the echo command receive "x is: $x", or does it receive x is: 12345?
So basically my question is: Are the single quotes and the double quotes handled by bash or by echo?
|
Quotes, variable expansion, wildcards, and everything else mentioned in the bash manual section on expansion is handled by bash.
For example, when you run the bash command echo "x is: $x", bash parses this to find out that it needs to run the command echo with one argument which is x is: 12345. Given echo "x is" "$x", bash runs echo with two arguments: x is and 12345 — this is why the two spaces inside the quotes are preserved (they're part of the argument which echo prints unmodified), but the two spaces outside the quotes are not (for the shell, two spaces to separate arguments are as good as a single one, and echo always prints a single space in between).
The echo command (or any other command) has no way to know which shell command produced the argument x is: 12345, or indeed whether a shell was involved at all. Here are a few sample commands that produce this argument:
echo "x is: 12345"
echo 'x is: 12345'
echo x\ is:\ 12345
echo "x is: $x"
echo 'x is:'\ "$x"
echo "$(echo "x ")is: $x"
# Assume there is no other file whose name begins with x in the current directory
touch "x is: $x"; echo x*
Or it could be exec "echo", "x is: 12345" in Perl, or execlp("echo", "echo", "x is: 12345") in C, etc.
This holds for every command. And the same principle holds for other shells, although they each have a slightly (or significantly) different set of expansions.
On the other hand, options are handled by the command. For example, ls -l -t and ls -l "-t" in bash (or any similar shell) both run ls in exactly the same way, with the two arguments -l and -t. This is why no form of shell quoting will help you if you want to run ls to display information about a file called -t. You can do that with ls -l -- -t, i.e. by passing -- as an argument. This is a convention followed by most commands: nothing after -- is parsed as an option. Once again, this is a feature of the command; for the shell, a leading dash is nothing special.
One thing that can get tricky is backslashes. In bash and other shells, a backslash outside of any quotes means that the next character loses its special meaning. For example, echo \"x\ is:\ 12345\" prints "x is: 12345", because each of the characters preceded by a backslash loses its special meaning (quote syntax for ", word separator for spaces). But some commands also interpret backslashes. When they do, you need to ensure that the backslash gets to them. For example, in bash, the echo command prints backslashes literally by default, but if you pass the option -e or run shopt -s xpg_echo first, then echo has its own escape sequences (this is for bash; the echo command in different shells and on different systems has different rules about whether it treats backslashes specially). For example, echo 'a\nb' prints a\nb while echo -e 'a\nb' prints a-newline-b because \n means “newline” to echo -e. On the other hand, echo a\nb prints anb since \n is expanded by the shell and there \ means “quote the next character”.
Also, be careful when multiple shells are involved, e.g. with SSH. When you run ssh in a shell, the local shell parses the command as usual. The SSH client joins its non-option arguments with spaces in between, and sends the resulting string to the SSH server. The SSH server passes this string to the remote shell. This means that there are two full rounds of shell expansion between what you type in a terminal or in a shell script, and what gets executed on the remote machine.
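Those two rounds of expansion can be simulated locally with a nested shell (an analogy only, no actual ssh involved):

```shell
x='$(echo expanded)'
# One round: the local shell substitutes $x; the resulting $(…) text
# is not re-scanned, so it stays literal.
echo "$x"         # prints: $(echo expanded)
# Two rounds (as with ssh): the local shell substitutes $x, then the
# inner shell parses the result and performs the command substitution.
sh -c "echo $x"   # prints: expanded
```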
| Are the single quotes and the double quotes handled by "bash" or by "echo"? |
1,588,335,338,000 |
When uploading a file as a form field in curl (for example, curl -F 'file=@path/to/file' https://example.org/upload), curl sometimes sets the MIME type differently than what is returned by other utilities determining MIME type.
For example, on .bmp bitmap files, file -i path/to/file.bmp says it's image/x-ms-bmp, but curl sets the MIME type to application/octet-stream unless I explicitly override it.
However, it works properly for some file types, such as .png and .jpg.
I would like to know how it determines the MIME type and under what conditions it will work as expected.
|
Some source-code spelunking shows that, for Content-Type, curl does simple file-extension matching in the (oddly named) ContentTypeForFilename function, otherwise defaulting to HTTPPOST_CONTENTTYPE_DEFAULT, which is application/octet-stream:
https://github.com/curl/curl/blob/ee56fdb6910f6bf215eecede9e2e9bfc83cb5f29/lib/formdata.c#L166
static const char *ContentTypeForFilename(const char *filename,
const char *prevtype)
{
const char *contenttype = NULL;
unsigned int i;
/*
* No type was specified, we scan through a few well-known
* extensions and pick the first we match!
*/
struct ContentType {
const char *extension;
const char *type;
};
static const struct ContentType ctts[]={
{".gif", "image/gif"},
{".jpg", "image/jpeg"},
{".jpeg", "image/jpeg"},
{".txt", "text/plain"},
{".html", "text/html"},
{".xml", "application/xml"}
};
if(prevtype)
/* default to the previously set/used! */
contenttype = prevtype;
else
contenttype = HTTPPOST_CONTENTTYPE_DEFAULT;
if(filename) { /* in case a NULL was passed in */
for(i = 0; i<sizeof(ctts)/sizeof(ctts[0]); i++) {
if(strlen(filename) >= strlen(ctts[i].extension)) {
if(strcasecompare(filename +
strlen(filename) - strlen(ctts[i].extension),
ctts[i].extension)) {
contenttype = ctts[i].type;
break;
}
}
}
}
/* we have a contenttype by now */
return contenttype;
}
(Though I suppose the source could be modified to do file(1) type magic checks in the future, maybe...)
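The logic in that C function boils down to something like the following shell sketch (the table mirrors the source above; unlike curl's strcasecompare, this simplified version is case-sensitive):

```shell
content_type_for_filename() {
  case "$1" in
    *.gif)        echo image/gif ;;
    *.jpg|*.jpeg) echo image/jpeg ;;
    *.txt)        echo text/plain ;;
    *.html)       echo text/html ;;
    *.xml)        echo application/xml ;;
    *)            echo application/octet-stream ;;  # the fallback
  esac
}
content_type_for_filename photo.jpeg   # prints: image/jpeg
content_type_for_filename image.bmp    # prints: application/octet-stream
```

In practice you can override the guess per form field, e.g. curl -F 'file=@path/to/file.bmp;type=image/x-ms-bmp' https://example.org/upload.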
| How does `curl` on the command line determine the MIME type of a file being uploaded? |
1,588,335,338,000 |
When typing my password in a console I sometimes notice I did not write it properly. In these cases, I normally use Ctrl + u to reset and start from scratch.
However, there are cases in which I know I typed everything fine but the first letter. Is there a way to go to the beginning of the word and replace it?
The underlying problem is that I don't know which editor is being run when you type a password. I don't know if I can use vim expressions and/or the buttons Home in my keyboard.
I do see that Backspace works, but Home does not, nor do the arrow keys (←). Using Ctrl-a as indicated in Shell: how to go to the beginning of line when you are inside a screen? does not work either.
I am on GNU bash, version 4.3.11.
|
Control-a is a readline keybinding that bash uses by default. Other programs such as sudo or su apparently don't use readline (one might look into their source code to learn how they handle input). But you can always simulate readline with a program called rlwrap. For example:
$ rlwrap sudo echo hi
Password: ********
Now asterisks are shown, so you know where the cursor is, and you can press Control-a to go to the beginning of the line.
| How to go to the beginning of the password when typing it? |
1,588,335,338,000 |
I would like to get the sha1 checksums of all files inside a simple tar archive as a list.
This should be done on a busybox machine where only a minimal tar binary is available; see http://linux.die.net/man/1/busybox for the available commands.
It should also work without using disk space to unpack the big tar file — something with piping, calculating the sha1 on the fly, and directing the output to /dev/null.
This would make it possible to verify backups without copying the file over the network or extracting it which is both resource consuming.
This is basically the same question as How to create sha1 checksums of files inside a tar archive without using much disk space which has a great answer, but I realized only later that the busybox tar binary is a minimal version which does not have the --to-command=sha1sum option.
|
Here are some major problems with this solution:
tar tf test.tar|while read file;do echo $file $(tar xOf test.tar $file|sha1sum);done
1- Busybox tar cannot display filenames that contain newlines unambiguously.
2- The shell's "read" does not handle backslashes properly ("\" characters are eaten, or "\n" is replaced by a newline character).
3- Unquoted shell variables collapse repeated space characters.
I cannot fix problem 1.
Anyway, I can fix 2 and 3.
Create this shell script, "tarsha1.sh" (don't forget chmod 755 tarsha1.sh):
#!/bin/sh
tarname="$1"
shift
for filename in "$@"
do tar xOf "$tarname" "$filename" | sha1sum | head -c -3
printf '%s\n' "$filename"
done
Then use this command:
tar tf test.tar | tr '\n' '\0' | xargs -0 -r ./tarsha1.sh test.tar
With that you should be able to handle filenames with any characters but new lines ("\n").
Note: the "-0" option for xargs must be enabled in busybox's compilation options.
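A quick end-to-end check of the approach (a sketch; it assumes tar and sha1sum exist, and note that the archive is re-read once per member, so it is slow for large archives):

```shell
mkdir -p demo
printf 'hello' > demo/a.txt           # sha1("hello") is well known
tar cf demo.tar demo
tar tf demo.tar | while IFS= read -r f; do
  printf '%s  %s\n' "$(tar xOf demo.tar "$f" | sha1sum | cut -d' ' -f1)" "$f"
done
# the demo/a.txt line shows aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d
```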
| How to create sha1 checksums of files inside a tar archive on busybox without using much disk space |
1,588,335,338,000 |
The Angular 2 style conventions say that lazy-loaded folders should start with a plus sign (+).
This works fine when doing cd +directory/, but becomes problematic when using commands on files inside those directories from outside. vim +folder/file.ts does not work. Doing git rm --cached **/*.js* in the base directory will ignore all files inside directories with +. Quoting the command arguments did not work.
I could just ditch the concept of +directory, but I appreciate conventions.
How to use commands on files inside directories starting with a special character?
|
A lot of commands (head/tail, sort, sh, vim...) treat arguments that start with + specially, so it's not a good idea to use that as the first character of a file name. Same goes for - which is even more commonly used as option leader character.
Like for -, to avoid that + being treated specially, you could use a ./ prefix. ./+foo is another path to +foo that doesn't start with +. That ./ trick also helps for other situations where arguments are treated differently based on their content. For instance, it helps with awk for file names that contain = (compare awk 1 a=b with awk 1 ./a=b) or for filenames with : for ImageMagick commands.
For some commands that recognise +x as an option, using -- to mark the end of options may also help. Generally, it works in fewer situations than the ./ prefix though.
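The awk case mentioned above, spelled out (the file name a=b is deliberately awkward):

```shell
printf 'hello\n' > 'a=b'
awk 1 a=b </dev/null   # a=b is parsed as a variable assignment: no output
awk 1 ./a=b            # ./a=b is unambiguously a file name: prints hello
```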
| How to use commands on files inside directories starting with a special character? |
1,588,335,338,000 |
I have a command that builds things and then presents an interactive prompt (e.g. type R to restart, Q to quit, ...).
I would like to use that command but stop it once it reaches the prompt. Is there a way to either pass the "Q" input when calling my command, or to kill it once it reaches the prompt?
|
Given that your script is reading input "normally" via read, you can provide it input ahead of time with another program like echo or printf via a pipe:
echo Q | your-program-here
A more complex example could be:
(echo 1; echo thing2; echo yes; echo Q) | your-program-here
And even more complex scripting of automatic input can be done with programs like expect.
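To see the mechanics with a stand-in for the interactive program (fake_tool and its commands are hypothetical, standing in for the real build tool):

```shell
fake_tool() {
  # Reads prompt commands from stdin, like the real program would
  while IFS= read -r cmd; do
    case "$cmd" in
      R) echo "restarting" ;;
      Q) echo "quitting"; return 0 ;;
    esac
  done
}
echo Q | fake_tool            # prints: quitting
printf 'R\nQ\n' | fake_tool   # prints: restarting, then quitting
```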
| Start command and provide prompt parameter |
1,588,335,338,000 |
Why is xdotool not clicking when restoring the mouse position?
xdotool mousemove --sync 4000 1000 click 1 mousemove restore
If I don't restore the position, it works, example:
xdotool mousemove --sync 4000 1000 click 1
EDIT1: What I've tried
eval "$(xdotool getmouselocation --shell)"
xdotool mousemove --sync 4000 1000
xdotool click 1
xdotool mousemove --screen $SCREEN $X $Y
To my surprise, it also does not click.
|
Your application may need to wait until it has focus before it accepts button events. If possible, use windowactivate to focus the window first; if not, add a short sleep (say, sleep .2) after the mousemove and before the click.
| Why xdotool is not clicking when restoring position? |
1,588,335,338,000 |
You can use xargs to discover the limits on the command line you're using:
$ xargs --show-limits
Your environment variables take up 1901 bytes
POSIX upper limit on argument length (this system): 2093203
POSIX smallest allowable upper limit on argument length (all systems): 4096
Maximum length of command we could actually use: 2091302
Size of command buffer we are actually using: 131072
However, I don't understand the difference between Maximum length of command we could actually use and Size of command buffer we are actually using. What do both of these limits mean, and which one is the actual limit we're facing on the length of the commandline?
|
It's what it says on the tin: “Maximum length of command we could actually use” is the maximum possible command line length, given the limit on the platform where xargs is running and the space taken up by the environment. This value only depends on the platform configuration and the environment. “Size of command buffer we are actually using” is the size that this invocation of xargs is using. It can't be more than the maximum, but it can be less. By default, xargs doesn't use the maximum, but a “sensible” default determined at compile time and capped by platform limits, normally 128 kB. The actually-using size can be changed with the -s option.
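Both numbers are easy to poke at (a sketch; getconf ARG_MAX reports the platform ceiling, and the POSIX -s option caps the buffer for a single run):

```shell
getconf ARG_MAX                        # the platform's upper limit, in bytes
printf 'a\nb\n' | xargs -s 4096 echo   # cap this run's buffer at 4096 bytes
# both arguments still fit in one invocation, so this prints: a b
```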
| What is the meaning of xargs show limits output |
1,588,335,338,000 |
If you want to execute one command and then another one after the first one finishes, you can execute
command1 &
which prints the PID of the process executing command1.
You can then think of what you want to do after command1 has finished and execute:
wait [PID printed by the previous command] && command2
However, this only works in the same terminal window and gets really, really messy if command1 prints output. If you open up a new terminal window and try to wait, you're shown something like this:
$ wait 10668
bash: wait: pid 10668 is not a child of this shell
Is there a terminal emulator that supports waiting for a program without having to type the next command into the output of the currently executing command, and without throwing the output of the first command away (like piping it to /dev/null)?
It doesn't have to work via wait or something similar. Right-clicking and choosing "execute after current command returned" would be perfectly fine.
I don't mean simply concatenating commands but being able to run a command and then decide on what to run right after that one finished.
|
Often, when you have started a long-running command and want to prepare the next command to run afterwards, you can use shell job control to achieve it. E.g., you have started command1 (without an &), and then you want to run command2, so you suspend the currently running command by typing Ctrl-Z. You now have the shell prompt again, so you type
fg; command2
and the fg will resume your first command, and when it finishes the shell will start your second command.
If you did start command1 with &, it is in the background. You can bring it to the foreground with fg, then suspend it and so on as before.
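Within one shell, the special parameter $! saves you from copying the PID by hand; this is only a sketch of the same wait-based idea and, as discussed above, it does not cross terminal boundaries:

```shell
sleep 1 &            # stand-in for a long-running command1
pid=$!               # PID of the most recent background job of this shell
wait "$pid" && echo "command1 done; starting command2"
```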
| Wait for program in a clean way (e.g. in a different terminal window) |
1,588,335,338,000 |
The Unix package datamash supports the application of several summarizing operations to groups of input lines. For example1, here datamash is used to compute the sums of column 2 for each value in column 1:
$ cat example.csv
1,10
1,5
2,9
2,11
$ datamash -t, -g 1 sum 2 < example.csv
1,15
2,20
Although datamash supports a wide range of functions besides sum (including mean, stddev, median, iqr, min, max, etc.), it is not extensible, AFAICT. IOW, datamash does not support any mechanism for the user to supply his/her own summarizing function.
My question here boils down to: how can this group-wise application of commands be implemented generically on zsh2?
Below is an attempt to specify the question more precisely. (Hopefully this attempt at precision won't render the question incomprehensible.)
First, suppose that foo stands for a (possibly composite) command that emits to stdout lines with the following structure:
i separator payloadij
...where i, the "group index", is some integer, separator is some constant separator sequence (e.g. ,, or $'\t'), and payloadij is some arbitrary text (including the terminating newline). Moreover, assume that the group index i ranges from 1 to N, and that the lines in this output are sorted according to the group index.
For every integer 1 ≤ k ≤ N, let the "k-th group" refer to the content consisting of the segments payloadkj of all the lines (in foo's output) where the group index is k.
Next, suppose that bar stands for a (possibly composite) command that reads lines from stdin and emits a single line to stdout.
Now, let resultk denote the output of applying bar to the k-th group, and let X<bar> stand for some shell construct that invokes bar.
I'm basically looking for a construct X<bar> such that the pipeline
foo | X<bar>
emits to stdout lines of the form
i separator resulti
EDIT:
Supposing that separator is just ,, then the following seems to do what I want
TMPFILE=$( mktemp )
SEPARATOR=,
LASTGROUPID=
foo | (cat; echo) | while IFS= read -r LINE
do
GROUPID=${LINE%%$SEPARATOR*}
if [[ $GROUPID != $LASTGROUPID ]]
then
if [[ -n $LASTGROUPID ]]
then
echo -n "$LASTGROUPID$SEPARATOR"
cat "$TMPFILE" | bar
fi
LASTGROUPID=$GROUPID
: > "$TMPFILE"
fi
PAYLOAD=${LINE#*$SEPARATOR}
printf '%s\n' "$PAYLOAD" >> "$TMPFILE"
done
rm "$TMPFILE"
Basically, this uses $TMPFILE to collect the lines of each group. (I'd prefer to avoid the temporary file, but I don't know how to do it.)
Now I need to figure out a way to implement this as a function that can take the expression denoted by bar as an argument, and use it robustly in the construct given above.
1This example is adapted from one given in the datamash man page.
2Although I am primarily interested in zsh, I have a secondary interest in the bash case as well.
|
Doesn't sound to me like a job for a shell. I'd do it in perl/python/ruby... though here awk may be enough:
$ cat sum
paste -sd + - | bc
$ sort -t , -k 1,1 input | awk -F, -v cmd=./sum '
function out() {printf "%s,", l;close(cmd)}
NR>1 && $1 != l {out()}
{print $2 | cmd; l=$1}
END {if (NR) out()}'
1,15
2,20
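The cmd fed to that awk driver can be any line-oriented summarizer, which is what makes the approach extensible. Below, the hypothetical ./sum helper is swapped for a "maximum" built from sort and head; an fflush() is added so the group label and the summarizer's output stay ordered even when stdout is a pipe:

```shell
printf '1,10\n1,5\n2,9\n2,11\n' |
  sort -t, -k1,1 |
  awk -F, -v cmd='sort -nr | head -n 1' '
    function out() { printf "%s,", l; fflush(); close(cmd) }
    NR > 1 && $1 != l { out() }
    { print $2 | cmd; l = $1 }   # feed the current payload to the summarizer
    END { if (NR) out() }'
```

which prints 1,10 and 2,11.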
| On applying commands to groups of lines from stdin |
1,588,335,338,000 |
I'd like to know if I can track commands entered by user in a bash shell, in real time.
What I'm trying to do is something similar to thefuck, but I need to prompt the user as and when he enters new commands into the shell.
Is there any way I could write a hook into bash that lets me wrap my code around it?
Alternatively: is there a way to pull updated bash history? AFAIK bash writes to the history file only when the shell exits, unless you run the history command in the same terminal.
|
Put export PROMPT_COMMAND='history -a' to /etc/profile or other profile file. This causes the history -a command to execute before every command prompt display. history -a flushes history to .bash_history immediately.
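If PROMPT_COMMAND may already hold something, append to it rather than overwrite it; bash simply runs the whole string before each prompt:

```shell
# Flush new history entries before every prompt, keeping any
# previously installed hook:
export PROMPT_COMMAND="history -a${PROMPT_COMMAND:+; $PROMPT_COMMAND}"
```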
| Is it possible to track bash commands in real time? |
1,588,335,338,000 |
Our hosting providers have installed 3 versions of PHP on our Linux box, and when I SSH into it the command php points to /usr/bin/php, which is version 5.2; the command php-5.4 points to /usr/bin/php-5.4, which is version 5.4 of course.
This isn't a problem when I just need to run a single script that needs a newer version of php, I can just specify php-5.4, however when I try to run the Laravel installer or try to install Laravel using Composer it is throwing errors that are caused by php 5.2 being used.
Is there a way to change where the php keyword points? Or do I need to remove bin/php and rename bin/php-5.4?
|
You can attack this in a variety of ways.
Method #1 - alias
You can make an alias, php=php-5.4, and then attempt to run your script. Assuming that it relies on the current shell's ability to locate how to run things, it should pick up the alias for php instead of the php that's located under /usr/bin.
Method #2 - $PATH
You can override the precedence of where shells locate executables by manipulating the $PATH environment variable. Simply add the location of some other directory to the front of the $PATH.
export PATH=/path/to/newdir:$PATH
Now put a shell script or link in this directory named php. Here's the script:
#!/bin/bash
exec php-5.4 "$@"
Here's the link:
$ cd /path/to/newdir
$ ln -s /usr/bin/php-5.4 php
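A quick way to convince yourself of the precedence rule, using a throwaway directory and env (all names here are illustrative, not your real PHP):

```shell
dir=$(mktemp -d)
printf '#!/bin/sh\necho "wrapper speaking"\n' > "$dir/php"
chmod +x "$dir/php"
# env looks the command up using the modified PATH:
env PATH="$dir:$PATH" php
```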
| PHP CLI and Bash - change behaviour of PHP keyword |
1,588,335,338,000 |
When I install Linux Mint (MATE, Qiana) I like to make the MATE panel wider (or maybe "taller").
By default it is about 20 pixels. I make it, for example, 45 pixels. I can easily set this by right-clicking the panel.
Now I want to make a file with all the preferences I apply when installing Linux. It will contain commands for the terminal (command line). And I need help with the mate-panel settings.
I found that I can set size of mate-panel in dconf editor:
org - mate - panel - toplevels - bottom
size 45
Question: How to make the same in command line?
NB! This "size" is not a simple value. Perhaps it is part of a list of values. In dconf, under "org - mate - panel - general", I can see that the value toplevel-id-list equals 'bottom'. Its description is:
"A list of panel IDs. Each ID identifies an individual toplevel panel. The settings for each of these panels are stored in /apps/panel/toplevels/$(id)".
So must I edit this list?
|
I don't have a MATE environment to test on but in general, this type of thing can be set using gsettings. Try this:
gsettings set org.mate.panel.toplevel:/org/mate/panel/toplevels/bottom/ size 45
That should set the value you want. For more details, see http://wiki.mate-desktop.org/docs:gsettings.
| How to set size of mate-panel via command line (not via dconf)? |
1,588,335,338,000 |
I use the following command to parse a log file for a particular string, then search backwards in the log to find the data I really want. The problem, as in the example below, is that I am only going back 50 lines. It is unknown whether the text I am looking for will be 5 lines back, 200 lines back, or more. Is there a way to search through a log file for a particular string and, when that string is found, search backwards through the log for a second string, despite not knowing how far back the second string is located? Also, there could be multiple occurrences of the first string, and for every instance of it I want to search backwards to collect the data from the second string.
grep -B50 "Server returned HTTP response code: 500 for URL:" LCSoap_8.log | \
tac | grep -P -o '(?<=qualified-src-dn=).*(?=src-dn)'
|
tac LCSoap_8.log | sed -n '
/Server returned HTTP response code: 500 for URL:/,/qualified-src-dn=.*src-dn/!d
s/.*qualified-src-dn=\(.*\)src-dn.*/\1/p'
or to reuse your grep:
tac LCSoap_8.log | sed '
/Server returned HTTP response code: 500 for URL:/,/qualified-src-dn=.*src-dn/!d' |
grep -Po '(?<=qualified-src-dn=).*(?=src-dn)'
sed '/A/,/B/!d' deletes every line except (!) those from A to the next B after that.
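The range logic can be rehearsed on a toy log (the field names mimic the real one):

```shell
printf '%s\n' \
  'qualified-src-dn=cn=alice,src-dn rest' \
  'noise line' \
  'Server returned HTTP response code: 500 for URL: /x' \
  'qualified-src-dn=cn=bob,src-dn rest' \
  'Server returned HTTP response code: 500 for URL: /y' > toy.log

tac toy.log | sed -n '
  /Server returned HTTP response code: 500 for URL:/,/qualified-src-dn=.*src-dn/!d
  s/.*qualified-src-dn=\(.*\)src-dn.*/\1/p'
```

which prints cn=bob, and then cn=alice, (one extraction per 500 error, nearest preceding match first).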
| How parse a log file for a string, and when found search backwards for another string |
1,588,335,338,000 |
Let's have a look at the followings:
radu@Radu:~$ mkdir test
radu@Radu:~$ cd test
radu@Radu:~/test$ rmdir ~/test
radu@Radu:~/test$ man ls
man: can't change directory to '': No such file or directory
Normally, I would say that the last line of the terminal output above is an error. But how can I understand it? And why does this appear only in the case of the man command (as far as I know; even pwd or ls has no problem)?
Furthermore, let's see again:
radu@Radu:~/test$ man ls
man: can't change directory to '': No such file or directory
radu@Radu:~/test$ echo $?
0
What? It was a success (see the output of man man |& grep -A 1 '^EXIT STATUS$')?
Another version of man
When attempted with another version of man the same thing works.
$ mkdir mantst
$ cd mantst/
$ man ls <--- works
$ rmdir ../mantst/
$ man ls <--- works
$ man --version
man 2.6.3
|
The difference between man and other commands like ls is that the latter (those not complaining about a non-existent directory) don't try to explicitly change into it; they simply stay where they already are. man starts there too, but additionally tries to change into the directory explicitly.
UNIX directories (like files) aren't deleted immediately when you call unlink(2) or rmdir(2) on them; only their directory entry in the parent directory is removed. The directory/file stays around as long as there are processes referencing it. As soon as the last reference is gone, the kernel actually frees the blocks belonging to it.
That is why there is no error when you call ls in a directory that no longer exists: your shell is still there (it references the directory as its current working directory), and ls, started from there, just inherits this property. But since man explicitly tries to chdir(2) there, i.e. to a directory entry that doesn't exist anymore, it bails out.
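The effect is easy to reproduce; a process keeps its working directory alive after rmdir, and only an explicit chdir(2) back into it fails:

```shell
dir=$(mktemp -d)
cd "$dir"
rmdir "$dir"
ls                                       # works: the inode still exists
cd "$dir" 2>/dev/null || echo "explicit chdir fails, as man(1) does"
```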
| Strange error (?) when I run `man` command from a folder that no longer exists |
1,588,335,338,000 |
I'm having difficulty converting some zenity based script to use whiptail instead.
The working script looks something like this:
#!/bin/bash
xfreerdp /v:farm.company.com \
/d:company.com \
/u:$(zenity \
--entry \
--title="Username" \
--text="Enter your Username")
I am trying to convert this to use whiptail instead, but keep getting a blank screen.
This is what I have so far:
#!/bin/bash
xfreerdp /v:farm.company.com \
/d:company.com \
/u:$(whiptail \
--inputbox "Username" 10 30)
What am I doing wrong?
|
The reason that you do not see the input box is that whiptail writes its display to stdout, which you are capturing. The result of the input is written to stderr, which you are not capturing. To make this work, you need the command substitution to capture stderr, but not stdout. You can do this with redirection:
#!/bin/bash
xfreerdp /v:farm.company.com \
/d:company.com \
/u:$(whiptail \
--inputbox "Username" 10 30 3>&1 1>&2 2>&3)
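The 3>&1 1>&2 2>&3 tail swaps stdout and stderr: fd 3 temporarily saves the capture pipe, the dialog (stdout) reaches the terminal, and the typed result (stderr) lands in the substitution. A tiny stand-in (not whiptail itself) makes the mechanics visible:

```shell
# Mimic whiptail: draw the UI on stdout, report the result on stderr.
fake_dialog() { echo '[dialog drawn here]'; echo 'typed-answer' 1>&2; }

answer=$(fake_dialog 3>&1 1>&2 2>&3)
echo "captured: $answer"
```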
| Change script to use whiptail instead of zenity |
1,588,335,338,000 |
How can I put the output of head -15 textfile.txt into a variable $X to use it in an if command like this:
if $X = 'disabled' then
?
|
x="$(head -15 testfile.txt)"
if [ "$x" = disabled ]
then
echo "We are disabled"
fi
Generally, any time that you want to capture the output from a command into a shell variable, use the form: variable="$(command args ...)". The variable= part is assignment. The $(...) part is command substitution.
Also note that the shell does not do if statements in the form if $X = 'disabled'. The shell expects that a command follows the if and the shell will evaluate the return code of that command. In the case above, I run the test command which can be written as either test ... or [ ... ].
Many consider it best practices to use lower-case variables for shell scripts. This is because system defined variables are upper case and you don't want to accidentally overwrite one. On the other hand, there is no system variable called X so it is not a real problem here.
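One detail worth knowing: $(...) strips trailing newlines, which is why the one-line file compares equal to the bare word. A self-contained check:

```shell
printf 'disabled\n' > testfile.txt
x="$(head -15 testfile.txt)"          # trailing newline stripped by $(...)
if [ "$x" = disabled ]; then echo "We are disabled"; fi
```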
| setting output of a command to a variable [duplicate] |
1,588,335,338,000 |
I'm parsing data in the following format:
prop1=value1:prop2=value2:prop3=value3+prop1=value4:prop2=value5
parts of the string are delimited by +
properties can appear in any order
the desired output is the value of prop2 from the string part where prop1 has a particular value (input)
Can I achieve this through standard unix command-line tools, or do I have to write a small C program?
Edit - for the line shown, this is the desired functionality:
input: value1 -> output: value2
input: value4 -> output: value5
|
Based on devnull's answer I put together this:
echo "$LINE" | tr '+' '\n' | grep "prop1=$VALUE" | tr ':' '\n' | grep "prop2=" | cut -d= -f2
I'm still open to any better answers.
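For the record, here is a single-process sketch with awk (the lookup function name is mine, not a standard tool): split the line on + into records and each record on : into fields, then print prop2 wherever prop1 matches:

```shell
line='prop1=value1:prop2=value2:prop3=value3+prop1=value4:prop2=value5'

lookup() {
  printf '%s\n' "$line" | awk -v want="$1" -F: -v RS='+' '
    {
      gsub(/\n/, "")          # drop the newline glued to the last record
      p1 = ""; p2 = ""
      for (i = 1; i <= NF; i++) {
        split($i, kv, "=")
        if (kv[1] == "prop1") p1 = kv[2]
        if (kv[1] == "prop2") p2 = kv[2]
      }
      if (p1 == want) print p2
    }'
}

lookup value1    # value2
lookup value4    # value5
```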
| Printing specific section of a line when a trigger value is present |
1,588,335,338,000 |
I want to receive a live TCP stream and make it readable by other processes at the same time, without saving it.
For example, 111.222.233.244:1234 streams actual time. Server supports only one connection.
TTY1:
$ nc 111.222.233.244 1234 | (do something here) /tmp/tcpstream &
$ sleep 5 # stream is received even if there is no process that reads it
$ cat /tmp/tcpstream # it can also be like `(some command) | cat -`
17:00:06
17:00:07
17:00:08
17:00:09
17:00:10
17:00:11
TTY2 (second cat executed 9 seconds after executing nc):
$ cat /tmp/tcpstream
17:00:09
17:00:10
17:00:11
|
Perhaps the tcpclone tool can help you. It listens for incoming connections on a certain port, and any data read from standard input is forwarded to those connections.
Your example should then become something like this:
$ nc 111.222.233.244 1234 | ./tcpclone 5555 &
$ nc 127.0.0.1 5555
| How do I share stdout between multiple processes? |
1,588,335,338,000 |
I'd like to get a custom output from the tree command, but unlike this question, I don't have a fixed format. I'd like to be able to give the command the format in an argument (for instance perhaps -f=y, -f=yaml,-f=xml,-f=~/myformat.fmt).
Obviously this is a huge undertaking, but I feel it would be a good way to get to explore how some of the linux commands work under the hood, along with stretching my programming skills.
Where should I start if I want to edit (and I presume compile etc) 'native' Linux commands? Are they baked in?
|
On Debian, Ubuntu, Mint and other distributions using Dpkg and APT to manipulate packages:
dpkg -S /path/to/file looks for the installed package containing the specified file, e.g.
dpkg -S /usr/bin/tree
dpkg -S $(which tree)
apt-file search /path/to/file looks for the package in the distribution containing the specified file, e.g.
apt-file search /usr/bin/tree
A few commands are built-in, i.e. baked into your shell. Their source code is part of the shell. Use type to find whether a command is built in.
$ type cd
cd is a shell builtin
$ type tree
tree is /usr/bin/tree
The command tree is in the package called tree. You can download and unpack the source code for this package with
apt-get source tree
dpkg-source -x tree_*.dsc
In this case, modifying the source code is not the easiest way. It may be a worthwhile exercise if you want to do some C programming. To achieve the objective, using a higher-level language such as Perl, Python or Ruby will be less work.
| Edit tree to output in custom format? |
1,588,335,338,000 |
I have a file with some data in this form:
Prefix text: First Name, Second Name, Third--
The prefix differs by line. The number of names varies from one to several. The suffix (-- in the example) is optional and non-alphabetic. I need to expand the comma-separated list of names into multiple lines (easy: s/,/\n/g), but in such a way that prefix and suffix (if present) surround each of the new entries:
Prefix text: First name--
Prefix text: Second name--
Prefix text: Third--
Instead of banging out a too-long python script, I thought it'd be more fun to ask if someone here can think of the perfect one-liner. Any ideas?
|
perl -lne 'if(/^(.*?: )(.*?)(\W*)$/){print"$1$_$3"for split/, /,$2}'
| Expanding comma-separated list into separate lines |
1,588,335,338,000 |
I got a VPS with a user dedicated to storing my files.
As I am aware of the current situation with the NSA and many other governments, and my VPS is hosted in a doubtful country, I would like to ensure my privacy (not that I have any top-secret files, or anything similar).
I am thinking of making the home folder of my user a truecrypt file, which would be mounted after a successful user login.
Is it possible?
And can the SSH login also prompt for the password to mount the file after login and before shell access? Something like: "insert user@host password:" and then "insert home folder password:" before shell access.
If wrong it would start the working directory somewhere else, like on fs root.
Can it be done?
|
So, since you seem Ok with the idea, for any searchers:
Ecryptfs and its associated PAM facilities do more or less what you want. The filesystem stores an encrypted key which the PAM module locks and unlocks as appropriate. This key is used to read and write files on a fuse filesystem that is mounted on top of the real filesystem over the user's home directory. Anyone else just sees the encrypted key and a bunch of encrypted files with obfuscated names (ie, even if you name your file "super secret stuff", without the user's password somebody else only sees "x18vb45" or something like that).
There's a bit of memory and processor overhead, and someone who can read arbitrary memory locations can get more while the user is logged in, but that's true for any file encryption.
| Mount home partition on user login |
1,588,335,338,000 |
I'm a Vim user, so I'm learning Emacs commands for use in the Bash CLI. I know that on my systems I can use Vim-keybindings in Bash but I'd like to learn how to use it effectively in its default configuration.
First order of business: Moving around. Let us assume that I've got the following on the CLI:
$ mv some_long_file_name.txt some_long_file_name_with_slight_change.txt
I'm at the end of the line and I'd like to go back 4 words. In Emacs one would Ctrl+U, 4, Alt+B however on the Bash CLI Ctrl+U erases to the beginning of the line. How to perform the motion, then?
May I add an auxiliary question? Assume that I have successfully completed a move command and I'd like to do it again. How might I do that? So if I contorted my fingers over 5 keypresses to move back 4 words, how might I move back another 4 words?
Note that I am interested in learning how to use the default configuration. I don't want to configure Zsh, Vim keybindings, or .bash_profile.
|
Alt-4 Alt-B (like in ksh and zsh, and tcsh where they all most probably copied it from). That assumes the terminal sends the sequence of characters ESC, 4, ESC, b upon those key presses. The same combination works in Emacs too, by the way.
I'm not aware that there's any way to repeat a motion (other than the search ones).
Using Ctrl-R, Ctrl-S (except that for the latter, in bash you may need to disable the terminal's XON/XOFF flow control first, e.g. with stty -ixon) might be a more effective way to move the cursor around.
| Navigating the CLI: Go back N words |
1,588,335,338,000 |
I am trying to figure out how to create a zip file for each subfolder containing only files that match my criteria.
For instance I have:
Folder1
Folder2
Folder3
Each folder contains the same set of files but the filenames in each are slightly different, but the extensions are always the same. I would like to zip the .shp, .shx, .qpj, .prj and .dbf in each folder. Each folder would be its own zip file. I would rather not store the actual folder name other than as the name of the zip file.
I have tried:
find . -type d | xargs -I {} zip -r {}.zip {}
This creates each zip file but would zip every file not just the ones with the extensions I would like, it also stores the folder name in the zip.
find . -type d | xargs -I {} zip -r {}.zip {}'/'*.shp {}'/'*.shx {}'/'*.dbf {}'/'*.prj {}'/'*.qpj
The above does nothing other than gives errors that there is nothing to do.
Hopefully my poor attempts give a better idea of what I'm trying to do.
Any help appreciated.
|
If all the weirdness in your directory names is that they have spaces, this should do:
shopt -s nullglob
for dir in */;do
dir="${dir%/}"
zip "$dir".zip "$dir"/*.{shp,shx,qpj,prj,dbf}
done
| Create Zip for each subfolder but containing only matched files |
1,588,335,338,000 |
I have a run.sh in a directory on Ubuntu Linux 12.04 LTS. I've been changing the PATH variable so that it can "see" binaries elsewhere in the directory structure. But I am still getting a "command not found" even if I specify the full path. I have only basic working knowledge of Linux. What is going on? Why can't it see run.sh?
memsql@memsql-virtual-machine:~/voltdb/doc/tutorials/helloworld$ sudo /home/memsql/voltdb/doc/tutorials/helloworld/run.sh
sudo: /home/memsql/voltdb/doc/tutorials/helloworld/run.sh: command not found
memsql@memsql-virtual-machine:~/voltdb/doc/tutorials/helloworld$ ls
Client.class deployment.xml Insert.class log run.sh Select.java
Client.java helloworld.sql Insert.java README Select.class statement-plans
memsql@memsql-virtual-machine:~/voltdb/doc/tutorials/helloworld$ pwd
/home/memsql/voltdb/doc/tutorials/helloworld
|
You should make it executable with chmod a+x run.sh and then try again.
This will make the file executable.
| why can't linux see my run.sh command? |
1,375,797,819,000 |
When I execute a command in Ubuntu, which results in a listing, I get results without the field names. Example is ls -l or ps l.
I am not very experienced and always need to go digging through man pages and online documentation. And the names are quite cryptic already.
Is there a way to turn on field name listing globally i.e. for all commands?
Note: actually ps l shows field names, while ls -l does not. It is true that the second case is very trivial. However, the question stands: is there a way to override this behaviour?
|
As @StephaneChazelas stated, this isn't possible. Your only other options are to modify the source (don't do this) and/or to develop some wrapper scripts and aliases for yourself to assist.
There is this technique for preserving the columns of ps in output that you're going to pipe to sort.
sort but keep header line at the top?
I would take this as an opportunity to hone your alias/scripting skills by putting together the pieces that you need. Much of using Unix/Linux is in tricking out your environment so that things are more accessible to your work habits and style.
| How to set what field names are displayed in listings? |
1,375,797,819,000 |
One can look up Unicode characters with regular expressions. On Jan Goyvaerts' website I found a RegExp whose meaning I don't understand:
\p{Zs} or \p{Space_Separator}: a whitespace character that is
invisible, but does take up space
So I wonder if I got this right: a Whitespace Character
is the 'empty' space between two words, columns, lines, whatever
it's 'invisible' in so far as it contains nothing than the blank paper ⁄ screen
it 'takes up space' in so far as the place taken through it can't be occupied through a letter, symbol, anything comparable
According to this I came to the following questions:
are there 'visible whitespace characters'
could a whitespace character 'take no space up'
This would be quite the opposite of what is defined. Both make perfect sense, but then both could describe the same thing, depending on the point of view: an empty space is visible through the absence of anything displayed there except the blank paper/screen, but then it is invisible as there is nothing to see. At this point I sense a border with philosophy: how does one measure the amount of nothingness other than through its counterpart?
|
Some classic ASCII invisible whitespace characters are:
Tab : \t
New line: \n
Carriage return : \r
Form feed : \f
Vertical tab: \v
All of these are treated as characters by the computer and displayed as whitespace to a human.
Other invisible characters include
Audible bell : \a
Backspace : \b
As well as the long list in the wikipedia article given by frostschutz.
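You can make these visible with od, which renders each invisible character as its escape (each is a single byte here):

```shell
printf 'a\tb\vc\n' | od -c
```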
| what is "an invisible whitespace character that takes up space" |
1,375,797,819,000 |
I suspect there’s some terminology for this question that I’m not aware of. It’s hard to check if a question has already been answered if one doesn’t know the proper vocabulary. So, sorry if this is a repeat question…
I’ve slowly become comfy with bash over the past 2 years or so. (I use the Homebrew package manager for OS X to install the latest version whenever the package is updated. At the time of this writing, that’s version 4.2.37.)
The thing is, I’m one of those people who likes to have certain pieces of information on the screen at all times. For many hackers, this simply means customizing PS1. And I’ve done that.
But it’s not enough. I’d like to have some more information displayed at all times, but I don’t want my prompt to grow into an unruly monster.
less provides an option to always display the status of a currently displayed manpage. One such option is the position of the current document. So, if (for example) you’re currently viewing the document 73% of the way through, it will display 73% at the bottom of the screen.
Is there a way to “glue” some information like CWD, IP address, etc., into a “status bar” in bash?
|
I always use bash within tmux (it was screen until recently). tmux/screen allows you to set these up; read the tmux/screen manual on how to do so. I find it tiring to use bash without tmux/screen.
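For example, a line like the following in ~/.tmux.conf pins the current pane's directory and the clock to the status bar (status-right and the #{...} formats are real tmux features; the exact string is only an illustration):

```shell
set -g status-right '#{pane_current_path} | %H:%M '
```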
| How to put “glue” CWD (etc.) to part of the screen instead of putting into PS1? |
1,375,797,819,000 |
I'm using Arch Linux and urxvt as my terminal emulator. When I scroll up/down, text lines get rendered so slowly I can count them from top to bottom (heh, 1st line gets rendered... ohh, 2nd one! ...). It takes a full second for new content to render in the terminal. The worst is reading man pages, looking at Git's log, etc. I have no conf for urxvt, so it can't be misconfigured. The strangest thing is that on two other computers, also running Arch Linux and urxvt with almost the same conf, everything works. Also, it can't be a performance problem; the hardware is quite new.
Any clue why's that? I even don't know where to start searching for problem!
|
The problem was that my shortcut for opening urxvt called the terminal with additional arguments (“-lsp 2 -bc”). Removing the arguments solved the problem.
| Urxvt draws lines slowly |
1,375,797,819,000 |
Possible Duplicate:
what does the @ mean in ls -l?
What does the @ sign mean in the following "ls" output?
-rw-r--r--@ 1 root wheel 489 Jan 4 13:14 boot.plist
|
the @ indicates that the file has extended attributes. The -@ option (combined with -l) will display the extended attribute names and sizes; similarly, -e displays ACLs, which are marked with a + suffix.
| What does @ sign mean in 'ls' output on Mac OSX Lion terminal? [duplicate] |
1,375,797,819,000 |
How could I change the lock option for the xscreensaver from the command line?
Been looking around and couldn't find anything about it.
xscreensaver-command -lock will lock it right away, which is not what I'm looking for.
I'm using Fedora 14.
|
I haven't been able to find an actual command to change the lock feature, but in the configuration file .xscreensaver, located in the home folder, I've found the lock setting: lock: False
In order to modify its value, I can change the value in the config file by using the command:
sed -i 's/\(lock:\t\t\).*/\1False/' ~/.xscreensaver
False can be replaced with True based on the requirements.
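The substitution can be rehearsed on a toy file first (the two tabs match how xscreensaver writes the file; use True or False as needed):

```shell
printf 'lock:\t\tFalse\n' > xs.conf          # stand-in for ~/.xscreensaver
sed -i 's/\(lock:\t\t\).*/\1True/' xs.conf
cat xs.conf                                   # now reads: lock:<tab><tab>True
```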
| Change xscreensaver lock option command |
1,375,797,819,000 |
I can't seem to find any documentation on this.
This forum post shows someone trying to change the voice used by festival, outside of the festival interpreter, using a command-line flag.
festival --\(voice_kal_diphone\) --tts "Langalist.txt"
It doesn't work. As a solution, the OP's program's configuration file ends up being edited. Everyone here also seems to use that method to select voices. But surely, if from within the program's scheme interpreter the expression
luisetta@riverbrain:~$ festival
Festival Speech Synthesis System 2.1:release November 2010
Copyright (C) University of Edinburgh, 1996-2010. All rights reserved.
clunits: Copyright (C) University of Edinburgh and CMU 1997-2010
hts_engine:
The HMM-based speech synthesis system (HTS)
hts_engine API version 1.04 (http://hts-engine.sourceforge.net/)
Copyright (C) 2001-2010 Nagoya Institute of Technology
2001-2008 Tokyo Institute of Technology
All rights reserved.
For details type `(festival_warranty)'
festival> (voice_name_here)
from the list of voices returned by typing
festival> (voice.list)
works, then there must be a way to get the program to interpret its own scheme expressions via the command line too, right?
|
If you just want to select a voice before doing TTS, you can use text2wave
echo 'hello world' | text2wave -eval '(voice_kal_diphone)' > hello.wav
text2wave is a Festival script itself, so you could fairly easily customize it.
You can do similar with the Festival command line:
festival '(voice_ked_diphone)' '(SayText "hello world")' '(exit)'
but that unfortunately does not work along with the --tts option.
| How to make festival evaulate its own scheme expressions from the command line, so as to change voices as needed? |
1,375,797,819,000 |
I'm using the command sudo nethogs to watch network traffic. I have a problem in that the name of the program is too long (it doesn't fit in the PROGRAM column).
I haven't found any nethogs configuration switch dealing with this issue. Do you know how to see the whole name of the process?
|
I don't think nethogs offers a feature like that, but you can use the process id it shows in the first column to look up the information.
cat /proc/$PID/cmdline
or
ps -p $PID -o 'args='
should both work on Linux, for example.
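Both forms can be tried on a PID you already know, e.g. the current shell ($$); /proc/PID/cmdline is NUL-separated, so tr makes it readable:

```shell
tr '\0' ' ' < "/proc/$$/cmdline"; echo       # NULs turned into spaces
ps -p "$$" -o args=                          # same information via ps
```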
| Whole program name not visible in nethogs |
1,375,797,819,000 |
When I ssh as root to a remote machine, the command output looks like this:
root@Machine:/current/path#:
However, if it's a non-root user, all I see is:
$
How can I get the same behavior as for the root user? Why is it different?
|
The "command output" that you referred to is called "the prompt". At the end of the prompt there is usually a character (in bash usually # or $) to indicate the end of the prompt and the start of user input. It is different so that you know whether you are the root user or not. Generally, when you see a # at the end of your prompt you know that you are root and should be careful with your commands.
Controlling the prompt depends on the shell that you use. For bash you use the environment variable PS1 to do so. For example if you run:
export PS1='\u@\h \w \$ '
Your next prompt will change to something like:
phunehehe@workstation ~/Desktop $
Refer to the PROMPTING section in man bash for the format of the PS1 variable. A point of interest to your question:
\$ if the effective UID is 0, a #, otherwise a $
To change the prompt permanently, put the export line in /etc/profile (system-wise), or ~/.profile (user-wise), or something equivalent.
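Since bash 4.4 you can preview a prompt string without installing it, via the @P parameter transformation; note how the trailing \$ resolves according to the rule quoted above:

```shell
# Expand prompt escapes in a string (bash >= 4.4):
bash -c 'p="\u@\h \w \\$ "; printf "%s\n" "${p@P}"'
```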
| ssh behavior for root and non-root user |
1,375,797,819,000 |
How do I reimage OpenWRT in such a way that all my settings will be lost? I've been having some issues, and I want to ensure that it's not a lingering setting; I want this to be a fresh install.
|
OpenWRT versions from Kamikaze onwards (which is basically Kamikaze and Backfire, but not White Russian) do not use NVRAM to store settings or configuration. It is all stored in the filesystem, either in the base squashfs image or the overlayed jffs image. This means you should be able to re-flash the image and get back to "factory defaults".
The way to flash an OpenWRT image is described at http://wiki.openwrt.org/doc/howto/installing . Once you have OpenWRT installed the first time, the easiest way to reflash is to use the "via the OpenWrt command line" method. Pay attention to the differences between .trx images and .bin images. .trx images are "raw" generic openwrt images used by the command line installation method. .bin images have vendor-specific headers, so you need to have the appropriate image for your router.
There are some settings stored in NVRAM that are used by the bootloader but I don't think they should persist once the Linux image has booted. Possibly MAC addresses may persist, but can be overridden in the filesystem configuration anyway.
Whatever you do, do not indiscriminately wipe the NVRAM. You will almost certainly brick the device, and it may stay bricked unless you can find on the net the appropriate settings to restore manually for your device.
| How do I reimage OpenWRT? |
1,375,797,819,000 |
As the title says, I'm looking for a monolingual french dictionary that I can query on the commandline, ideally offline. For example to have a low latency way to check the gender of nouns.
I looked at the freedict language packs that are available for dictd via apt-get (on Ubuntu), but they all seem to be bilingual?
Does anyone have a suggestion?
|
Most repositories carry sdcv, the console version of the StarDict program. Freely available dictionaries in StarDict format can be used for offline lookups.
Use html2text to correct the dictionary's HTML output for reading at the console.
Set up an alias for easy lookup.
sdcvfr() {
sdcv --data-dir="/path/to/french_file" --non-interactive "$1" | html2text --ignore-emphasis
}
alias sdf='sdcvfr'
Now, just type: sdf word
| Monolingual french offline commandline dictionary? |
1,375,797,819,000 |
In my machine I, have multiple xdg-desktop-portal
$ ls -la /usr/share/xdg-desktop-portal/portals
.rw-r--r-- 100 root 23 Mar 14:48 gnome-keyring.portal
.rw-r--r-- 99 root 20 Mar 02:25 gnome-shell.portal
.rw-r--r-- 548 root 18 Oct 2022 gnome.portal
.rw-r--r-- 495 root 29 Nov 2022 gtk.portal
What is the command to switch to a different xdg-desktop-portal?
|
The xdg-desktop-portal is an interface that allows applications to communicate with the desktop environment; it is not something that you switch between with a single command.
An XDG Desktop Portal (later called XDP) is a program that lets other applications communicate swiftly with the compositor through D-Bus. It's used for stuff like e.g. opening file pickers, screen sharing.
The different portal implementations you see in the /usr/share/xdg-desktop-portal/portals directory are different portal backends provided by different desktop environments. Each portal backend serves as a bridge between the application and the corresponding desktop environment.
To use a specific xdg-desktop-portal implementation, you would typically need to use a desktop environment that provides that implementation. The desktop environment you are currently using determines which portal backend is used.
If you have multiple desktop environments installed on your machine, you can switch between them by logging out and choosing a different desktop environment at the login screen. Each desktop environment will come with its own default xdg-desktop-portal implementation.
| How to switch to a different xdg-desktop-portal? |
1,375,797,819,000 |
Assume I have many *.txt files on directory texts with the below contents.
Lorem ipsum dolor sit amet, consectetuer adipiscing elit.
Aliquam tincidunt mauris eu risus.
Vestibulum auctor dapibus neque.
And I want to replace them with the following contents recursively.
Vestibulum commodo felis quis tortor.
Ut aliquam sollicitudin leo.
Cras iaculis ultricies nulla.
Donec quis dui at dolor tempor interdum.
As this is a quite large replacement. Typing each one of them can be time consuming.
Hence, I think it would be better if there is an option like this.
Copy and Paste the original texts into file original.txt and the required replacements into another file update.txt.
And then execute a command to find all the *.txt files in the directory texts that consist of the content in original.txt and replace them with the contents of update.txt.
Similar to simple replacements like:
find texts -name "*.txt" -exec sed -i 's/original/update/g' {} \;
I think this way there will be no mistakes from manual typing, and less time will be consumed.
But I don't know what command I should use to achieve this. Is it possible?
However, first of all I must be able to verify the availability and number of occurrences of the original text.
Similar to simple checks like:
cd texts
grep -r --color=always "original" | wc -l
Thanks.
|
I'd use perl instead of sed (or awk):
find texts/ -name '*.txt' \
-exec perl -0777 -p -i.bak -e '
BEGIN {
$search = q{Lorem ipsum dolor sit amet, consectetuer adipiscing elit.
Aliquam tincidunt mauris eu risus.
Vestibulum auctor dapibus neque};
$replace = q{Vestibulum commodo felis quis tortor.
Ut aliquam sollicitudin leo.
Cras iaculis ultricies nulla.
Donec quis dui at dolor tempor interdum.};
};
s/\Q$search\E/$replace/mg' {} +
-0777 tells perl to "slurp" in the entire file at once and process it as one long string
-p makes perl behave similarly to sed (and the counterpart -n option makes it work like sed -n).
-i.bak does an "in-place" edit of the file, saving the original with a .bak extension. Again, similar to sed -i.
If you don't want the backup copies, use just -i instead of -i.bak.
\Q in a perl regex tells perl to treat the following pattern (until it sees a \E) as literal string even if it contains regex special characters.
From man perlre:
\Q quote (disable) pattern metacharacters until \E
\E end either case modification or quoted section
q{} uses the perl q quoting operator that works exactly the same as single-quotes. It's particularly useful in a one-liner where the perl script is already in single-quotes (which can't be backslash-escaped because escape codes are ignored inside single quotes). See man perlop and search for "Quote and Quote-like Operators". See also perldoc -f q (and compare with perldoc -f qq, the double-quote operator).
BTW, I recommend testing just the perl portion of this on a single file and examine the output to make sure it's going to do what I want (i.e. without find and especially without -i.bak).
| How to Replace Multiple Lines using Files on Termux |
1,375,797,819,000 |
Using tools like curl or wget it's easy to "get" the response of an HTTP GET request, but both tools aren't installed by default on OpenBSD, and writing a portable shell script, it cannot be assumed that they are installed on ones another machine.
I want a "secure" way to get the server response (for example for wikipedia.org) onto my terminal using tools which are installed by default. Secure means the response should not be plaintext but encrypted with current standards like HTTP/2 and TLS 1.3/TLS 1.2 (if supported by the server, of course) on the way to my machine.
|
You don't specify if you want the headers, the response code or specifics about the TLS protocol.
As already answered, you can use ftp. The -d switch on ftp gives you quite a lot of information at the HTTP(S) level:
$ ftp -d -o /dev/null https://en.wikipedia.org
host en.wikipedia.org, port https, path , save as /dev/null, auth none.
Trying 91.198.174.192...
Requesting https://en.wikipedia.org
GET / HTTP/1.1
Connection: close
Host: en.wikipedia.org
User-Agent: OpenBSD ftp
received 'HTTP/1.1 301 Moved Permanently'
received 'Date: Thu, 03 Mar 2022 10:42:56 GMT'
received 'Server: mw1324.eqiad.wmnet'
received 'X-Content-Type-Options: nosniff'
received 'P3p: CP="See https://en.wikipedia.org/wiki/Special:CentralAutoLogin/P3P for more info."'
received 'Vary: Accept-Encoding,X-Forwarded-Proto,Cookie,Authorization'
received 'Cache-Control: s-maxage=1200, must-revalidate, max-age=0'
received 'Last-Modified: Thu, 03 Mar 2022 10:42:56 GMT'
received 'Location: https://en.wikipedia.org/wiki/Main_Page'
Redirected to https://en.wikipedia.org/wiki/Main_Page
host en.wikipedia.org, port https, path wiki/Main_Page, save as /dev/null, auth none.
Trying 91.198.174.192...
Requesting https://en.wikipedia.org/wiki/Main_Page
GET /wiki/Main_Page HTTP/1.1
Connection: close
Host: en.wikipedia.org
User-Agent: OpenBSD ftp
received 'HTTP/1.1 200 OK'
received 'Date: Thu, 03 Mar 2022 07:48:57 GMT'
received 'Server: mw1393.eqiad.wmnet'
received 'X-Content-Type-Options: nosniff'
received 'P3p: CP="See https://en.wikipedia.org/wiki/Special:CentralAutoLogin/P3P for more info."'
received 'Content-Language: en'
received 'Vary: Accept-Encoding,Cookie,Authorization'
received 'Last-Modified: Thu, 03 Mar 2022 07:48:56 GMT'
received 'Content-Type: text/html; charset=UTF-8'
received 'Age: 11005'
received 'X-Cache: cp3052 hit, cp3058 hit/120231'
received 'X-Cache-Status: hit-front'
received 'Server-Timing: cache;desc="hit-front", host;desc="cp3058"'
received 'Strict-Transport-Security: max-age=106384710; includeSubDomains; preload'
received 'Report-To: { "group": "wm_nel", "max_age": 86400, "endpoints": [{ "url": "https://intake-logging.wikimedia.org/v1/events?stream=w3c.reportingapi.network_error&schema_uri=/w3c/reportingapi/network_error/1.0.0" }] }'
received 'NEL: { "report_to": "wm_nel", "max_age": 86400, "failure_fraction": 0.05, "success_fraction": 0.0}'
received 'Permissions-Policy: interest-cohort=()'
received 'Set-Cookie: WMF-Last-Access=03-Mar-2022;Path=/;HttpOnly;secure;Expires=Mon, 04 Apr 2022 00:00:00 GMT'
received 'Set-Cookie: WMF-Last-Access-Global=03-Mar-2022;Path=/;Domain=.wikipedia.org;HttpOnly;secure;Expires=Mon, 04 Apr 2022 00:00:00 GMT'
received 'X-Client-IP: 148.69.164.57'
received 'Cache-Control: private, s-maxage=0, max-age=0, must-revalidate'
received 'Set-Cookie: GeoIP=PT:06:Coimbra:40.21:-8.42:v4; Path=/; secure; Domain=.wikipedia.org'
received 'Accept-Ranges: bytes'
received 'Content-Length: 84542'
received 'Connection: close'
100% |*******************************************************************************************************************************************************| 84542 00:00
84542 bytes received in 0.22 seconds (368.47 KB/s)
For more specific information about TLS, I'd use openssl, which is also on the base system:
$ openssl s_client -connect en.wikipedia.org:443 < /dev/null
(...)
New, TLSv1/SSLv3, Cipher is TLS_AES_256_GCM_SHA384
Server public key is 256 bit
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1.3
Cipher : TLS_AES_256_GCM_SHA384
Session-ID:
Session-ID-ctx:
Master-Key:
Start Time: 1646305125
Timeout : 7200 (sec)
Verify return code: 0 (ok)
---
DONE
| How to get HTTPS response from a Website using OpenBSD base tools? |
1,375,797,819,000 |
I made a silly mistake by using the wrong output filename when resuming a ddrescue. This is what happened:
ddrescue -b 2048 -d -v /dev/sr1 IDTa.img IDTa.ddrescue.log
Then the computer crashed and I mistakenly resumed with:
ddrescue -b 2048 -d -v /dev/sr1 IDTa.iso IDTa.ddrescue.log
I gather that both image files will start off all zeroed, so I guess that if I were to boolean OR both files together then the result would be what ddrescue would have output if I had not made the mistake?
The files are not continuations of one another (like How can I merge two ddrescue images?) since I had already run ddrescue -n previously, which completed successfully. i.e. IDTa.img contains most of the data, IDTa.iso contains scattered blocks from all over the image (and those blocks would be zero in IDTa.img).
Is there a simple CLI way to do this? I could prob do this in C, but I'm very rusty! Also might be a nice first exercise in Python, which I've never got round to learning! Nevertheless, don't particularly want to reinvent the wheel if something out there already exists. Not too fussed about performance.
Update: (apologies if this is the wrong place to put a reply to an answer. The 'comment' option seems to allow too few characters, so I'm replying here!)
I have also tried ddrescue with '--fill-mode=?' as a solution to the above, but it did not work. This is what I did:
ddrescue --generate-mode -b 2048 -v /dev/sr1 IDTa.img IDTa.img.log
cp IDTa.img IDTa.img.backup
ddrescue '--fill-mode=?' -b 2048 -v IDTa.iso IDTa.img IDTa.img.log
To check, I looked for the first position that IDTa.iso has data:
hexdump -C IDTa.iso |less
the output was:
00000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
001da800 00 00 01 ba 21 00 79 f3 09 80 10 69 00 00 01 e0 |....!.y....i....|
...
001db000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
...
I looked up 001da800 in IDTa.img:
hexdump -C IDTa.img |less
/001da800
Output:
001da800 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
001db000 00 00 01 ba 21 00 7b 00 bf 80 10 69 00 00 01 e0 |....!.{....i....|
...
So, the data at position 001da800 has not copied over from file IDTa.iso to IDTa.img?
Checking IDTa.img.log:
# Mapfile. Created by GNU ddrescue version 1.22
# Command line: ddrescue --fill-mode=? -b 2048 -v IDTa.iso IDTa.img IDTa.img.log
# Start time: 2021-06-28 13:52:39
# Current time: 2021-06-28 13:52:46
# Finished
# current_pos current_status current_pass
0x299F2000 + 1
# pos size status
0x00000000 0x00008000 ?
0x00008000 0x001D2800 +
0x001DA800 0x00000800 ?
0x001DB000 0x00049000 +
...
and a reality check:
diff -q IDTa.img IDTa.img.backup
returns no difference.
Update 2:
@Kamil edited the solution (see below) by dropping the --fill-mode=? argument. Appears to work!
|
I think this can be done with ddrescue itself. You need --generate-mode.
When ddrescue is invoked with the option --generate-mode it operates in "generate mode", which is different from the default "rescue mode". That is, in "generate mode" ddrescue does not rescue anything. It only tries to generate a mapfile for later use.
[…]
ddrescue can in some cases generate an approximate mapfile, from infile and the (partial) copy in outfile, that is almost as good as an exact mapfile. It makes this by simply assuming that sectors containing all zeros were not rescued.
[…]
ddrescue --generate-mode infile outfile mapfile
(source)
Make copies of the two images, just in case. If your filesystem supports CoW-copy then use cp --reflink=always for each image to make copies virtually instantly.
You need to make sure the two images are of equal size. If one of them is smaller then it should be enlarged, i.e. zeros (possibly sparse zeros) should be appended. This code will do this automatically (truncate is required):
( f1=IDTa.img
f2=IDTa.iso
s1="$(wc -c <"$f1")"
s2="$(wc -c <"$f2")"
if [ "$s2" -gt "$s1" ]; then
truncate -s "$s2" "$f1"
else
truncate -s "$s1" "$f2"
fi
)
(I used a subshell so variables die with it and the main shell is unaffected.)
Now let the tool analyze your first image and find out which sectors were probably not rescued:
ddrescue --generate-mode -b 2048 -v /dev/sr1 IDTa.img new_mapfile
Note new_mapfile here is a new file, not your IDTa.ddrescue.log. Do not touch IDTa.ddrescue.log.
After new_mapfile is generated, lines in it should show status + or ?, depending on if the corresponding fragment was considered "rescued" or "non-tried".
Now you want to fill the allegedly "non-tried" block of IDTa.img with data from IDTa.iso. The next command will modify IDTa.img.
Rescue the allegedly "non-tried" block of IDTa.img by reading data from IDTa.iso:
ddrescue -b 2048 -v IDTa.iso IDTa.img new_mapfile
Now the modified IDTa.img along with the untouched IDTa.ddrescue.log should be as good as if you didn't make the mistake.
Notes:
It can have happened some sectors containing all zeros were actually rescued. --generate-mode will classify them as ?. They will be filled with data taken from IDTa.iso "in vain". This doesn't matter for the ultimate result because they are all zeros in this other file as well.
The result should be the same if you interchange IDTa.iso and IDTa.img in the entire procedure (but keep in mind if you do this then the result will be in IDTa.iso). So there's a choice. With --generate-mode I would use the file from which I expect less sectors containing all zeros because this should minimize the amount of work for the last command.
The method works for regular files IDTa.iso and IDTa.img. If instead any of them you had a block device, its "random" content from before your work with ddrescue would interfere and spoil the result (so there's no point in solving a potential problem with different sizes in the first place, where truncate doesn't help).
I tested the procedure after replicating your mistake while trying to rescue a flakey device.
| Merge two binary image files by boolean OR (ddrescue output filename mistake) |
1,375,797,819,000 |
I am a long time user of FileZilla. Now for want of efficiency, I am switching to command line sftp from Linux desktop to a Linux server.
The sftp put command works perfectly fine for uploads. However, unlike FileZilla, there is no prompt of confirmation for overwriting an existing file on the server. I certainly fear any accidental overwrites. Is there a way to make sftp ask for confirmation before overwriting?
|
No, the put command in sftp is not able to provide an interactive prompt to you for confirming the overwriting of an existing file. It assumes that you know what you are doing.
If you want to make sure that you upload files without overwriting existing files, use the sftp command mkdir to make a directory on the remote host and cd into it before uploading your files in that new and empty directory.
For example,
uploaddir=$( date +upload_%F ) # i.e. something like "upload_2020-05-18"
sftp remote <<END_SFTP
cd some/remote/path
mkdir $uploaddir
cd $uploaddir
put myfile
END_SFTP
The mkdir command would fail if there already is a directory with the same name as the one you're trying to create. When sftp is running a non-interactive batch script, as above, the script would terminate at that point.
| sftp put: how to prevent accidental overwriting of files |
1,375,797,819,000 |
I would like to export all certificates in a certificate chain to separate .crt files with a single command. How can I do that?
To provide some background information:
I would like to use the openssl command-line utility: (openssl s_client -showcerts -connect <host>:<port> & sleep 4)
the above command may print more than one certificate, that is, it may print more than one string with the following pattern: -----BEGIN CERTIFICATE----- X.509 certificate encoded in base64 -----END CERTIFICATE-----. For example:
-----BEGIN CERTIFICATE-----
MIIFNzCCAx+gAwIBAgITUwAAAAJpqCKn3YTQ6gAAAAAAAjANBgkqhkiG9w0BAQsF...
-----END CERTIFICATE-----
the contents of the .crt files should be exactly the printed base64 encoded certificates, including tags.
|
It turns out awk can be used to solve the problem:
(openssl s_client -showcerts -connect <host>:<port> & sleep 4) | awk '/-----BEGIN CERTIFICATE-----/,/-----END CERTIFICATE-----/{if(/-----BEGIN CERTIFICATE-----/){a++}; out="/tmp/<host>"a".crt"; print > out}'
Replace <host> and <port> with actual values. The sleep command is there to limit the timeout of the openssl command.
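A variant worth considering: redirecting /dev/null into s_client makes it exit right after the handshake, removing the need for the fixed sleep. In this sketch the awk splitter is wrapped in a helper function (split_certs is my own name) so it can be fed from any source, and the certN.crt output names are my own choice:

```shell
# Split concatenated PEM certificates read from stdin
# into cert1.crt, cert2.crt, ... in the current directory.
split_certs() {
    awk '/-----BEGIN CERTIFICATE-----/,/-----END CERTIFICATE-----/ {
        if (/-----BEGIN CERTIFICATE-----/) a++
        print > ("cert" a ".crt")
    }'
}
# Typical use (replace <host> and <port> with actual values):
# openssl s_client -showcerts -connect <host>:<port> </dev/null 2>/dev/null | split_certs
```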
| How to export all certificates in a certificate chain to separate .crt files with a single command |
1,375,797,819,000 |
Is there an elegant, high-performance one-liner way to remove multiple complete strings from an input?
I process large text files, e.g., 1 million lines in inputfile, and 100k matching strings in hitfile. I have a perl script which loads the hitfile into a hash, and then checks all 'words' in each line of an inputfile, but for my workflow I'd prefer a simple command to my script.
The functionality I seek is equivalent to this:
perl -pe 's/\b(string1|string2|string3)\b//g'
or this method of nested sed's:
sed -e "$(sed 's:.*:s/&//ig:' hitfile)" inputfile
or looping in the shell:
while read w; do sed -i "s/$w//ig" inputfile ; done < hitfile
But those are way too expensive. This slightly more-efficient method works (How to delete all occurrences of a list of words from a text file?) but it's still very slow:
perl -Mopen=locale -Mutf8 -lpe '
BEGIN{open(A,"hitfile"); chomp(@k = <A>)}
for $w (@k){s/(^|[ ,.—_;-])\Q$w\E([ ,.—_;-]|$)/$1$2/ig}' inputfile
But are there any other tricks to do this more concisely? Some other unix command or method I'm overlooking? I don't need regex, I only need to compare pure/exact strings against a hash (for speed). i.e. "pine" should not match "pineapple", but it should match "(pine)".
For example, one idea I had was to expand the words in a file into separate lines
Before:
Hello, world!
After:
¶
Hello
,
world
!
And then process with grep -vf, and then re-build/join the lines.
Any other ideas that would run fast and easy?
|
How big is your hitfile exactly? Could you show some actual examples of what you're trying to do? Since you haven't provided more details on your input data, this is just one idea to try out and benchmark against your real data.
Perl regexes are capable of becoming pretty big, and a single regex would allow you to modify the input file in a single pass. Here, I'm using /usr/share/dict/words as an example for building a huge regex, mine has ~99k lines and is ~1MB big.
use warnings;
use strict;
use open qw/:std :encoding(UTF-8)/;
my ($big_regex) = do {
open my $wfh, '<', '/usr/share/dict/words' or die $!;
chomp( my @words = <$wfh> );
map { qr/\b(?:$_)\b/ } join '|', map {quotemeta}
sort { length $b <=> length $a or $a cmp $b } @words };
while (<>) {
s/$big_regex//g;
print;
}
I don't need regex, I only need to compare pure/exact strings against a hash (for speed). i.e. "pine" should not match "pineapple", but it should match "(pine)".
If "pine" should not match "pineapple", you need to check the characters before and after the occurrence of "pine" in the input as well. While certainly possible with fixed string methods, it sounds like the regex concept of word boundaries (\b) is what you're after.
Is there an elegant, high-performance one-liner way ... for my workflow I'd prefer a simple command to my script.
I'm not sure I agree with this sentiment. What's wrong with perl script.pl? You can use it with shell redirections/pipes just like a one-liner. Putting code into a script will unclutter your command line, and allow you to do complex things without trying to jam it all into a one-liner. Plus, short does not necessarily mean fast.
Another reason you might want to use a script is if you have multiple input files. With the code I showed above, building the regex is fairly expensive, so calling the script multiple times will be expensive - processing multiple files in a single script will eliminate that overhead. I love the UNIX principle, but for big data, calling multiple processes (sometimes many times over) and piping data between them is not always the most efficient method, and streamlining it all in a single program can help.
Update: As per the comments, enough rope to shoot yourself in the foot 😉 Code that does the same as above in a one-liner:
perl -CDS -ple 'BEGIN{local$/;($r)=map{qr/\b(?:$_)\b/}join"|",map{quotemeta}sort{length$b<=>length$a}split/\n/,<>}s/$r//g' /usr/share/dict/words input.txt
| Remove multiple strings from file on command line, high performance [closed] |
1,375,797,819,000 |
I have a computer with Ubuntu and a graphical desktop installed, on which I often run OpenGL applications just to capture the screen and make videos. I only care about the generated video, but to create the OpenGL context I need to open a window, so I have a program that I can run from the terminal that opens the window, renders stuff with OpenGL and sends the pixel data to an ffmpeg process to make the video, and it works. Now I want to run this video generator remotely via ssh, but when I run the program remotely, window creation fails. I suppose this has something to do with X assuming I want to get some graphical output on the machine I'm connecting from, or something like that; I don't know much about this. I just want it to make the video without seeing the window; in theory it should be able to open the window on the remote machine as it always does when I run the script locally. Should I set some environment variable like DISPLAY to make this work remotely?
|
You need to set the DISPLAY variable to the one where the GUI session (X, Wayland or Mir) is running on the host.
You can use the who command to see which display your GUI session is running on (assuming you're already logged in on the remote host's GUI in another session).
Another solution would be to use VNC or SPICE to connect to the remote host for the full desktop.
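In practice that usually means setting DISPLAY in the remote shell before launching the program. This is a sketch: ":0" is an assumption (check the output of who for the actual display), and the program name is a placeholder:

```shell
# On the remote host (over ssh), point the app at the host's own X display.
# ":0" is the common default; `who` shows which display is really in use.
export DISPLAY=:0
# then run the renderer, e.g.:
# ./your_opengl_capture_program
```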
| Run X application remotely, run GUI on remote host [closed] |
1,375,797,819,000 |
I want to write a PROMPT_COMMAND that's responsive to whatever was provided to the command prompt immediately previous. For example, to switch between an expansive, informative prompt or simple, compact prompt, like so:
mikemol@serenity ~ $ echo hi
hi
$ echo ho
ho
$ echo hum
hum
$
mikemol@serenity ~ $
Values only appear to get added to the shell's history if they're not empty, so I can't simply test if the most recent history entry is blank. Running set | grep some_command after running some_command gives me nothing, so it doesn't appear there's an environment variable containing that information.
I principally use bash, but would be curious about POSIX-compatible solutions and other shells.
|
I ultimately didn't need PROMPT_COMMAND at all. Thanks to Christopher for pointing me in the right direction.
Instead, consider this file, ps1.prompt:
${__cmdnbary[\#]+$(
echo '\u@\h: \w' # Your fancy prompt goes here, with all the usual special characters available.
) }${__cmdnbary[\#]=}\$
I can then feed this into my PS1:
PS1=$(cat ps1.prompt)
(You don't have to do it this way, but I found it convenient for illustration and editing.)
And so we see:
mikemol@zoe:~$ echo hi
hi
mikemol@zoe:~$ echo ho
ho
mikemol@zoe:~$ echo hum
hum
mikemol@zoe:~$
mikemol@zoe:~$ PS1=$(cat ps1.prompt)
$
mikemol@zoe: ~ $ echo hi
hi
$ echo ho
ho
$ echo hum
hum
$
mikemol@zoe: ~ $
We're using the array hack demonstrated here, but instead of using bash's ${parameter:-word} parameter substitution, we use ${parameter+word} so we trigger only on there having been no previous command run.
This requires some explanation, as we're forced to use a double-negative in our logic.
How ${__cmdnbary[\#]-word}${__cmdnbary[\#]=} works
In the original array hack demonstration, the construct ${__cmdnbary[\#]-word}${__cmdnbary[\#]=} was used. (I've replaced $? with word for clarity). If you're not particularly familiar with parameter expansion and arrays (I wasn't), it's not at all clear what's happening.
First, understand \#
Per the manual:
\# the command number of this command
...
The command number is the position in the sequence of commands executed during the current shell session.
This means \# will only change if and only if a command is executed. If the user enters a blank line at the prompt, no command is executed, and so \# won't change.
The setting of an empty string in ${__cmdnbary[#]=}
${__cmdnbary[\#]=} uses paramater expansion. Going back to the manual:
${parameter:=word}
Assign Default Values. If parameter is unset or null, the expansion of word is assigned to parameter. The value of parameter is then substituted.
So, if __cmdnbary[\#] is unset or null, this construct will assign an empty string (word is an empty string in our case) and the whole construct will be replaced in our output with that same empty string.
__cmdnbary[\#] will always be unset or null the first time we see it, since # is monotonic--it always increments or stays the same. (That is, until it loops, likely around 2^31 or 2^63, but there are other problems we'll have long before we get there. There's a reason I describe the solution as a bit of a hack.)
The conditional in ${__cmdnbary[\#]-word}
${__cmdnbary[\#]-word} is another parameter expansion. From the manual:
${parameter:-word}
Use Default Values. If parameter is unset or null, the expansion of word is substituted. Otherwise, the value of parameter is substituted.
So, if the array entry at \# is unset or null, word gets used in its place. Since we don't try to assign to __cmdnbary[\#] (using the ${parameter:=word} substitution) until after we check it, the first time we check it for a given \# should find that position in the array unset.
bash uses sparse arrays
One point of clarification for those accustomed to C-style arrays. bash actually uses sparse arrays; until you assign something to a position in an array, that position is unset. An empty string is not the same thing as "null or unset".
Why we use ${__cmdnbary[#]+word}${__cmdnbary[#]=} instead
${__cmdnbary[\#]+word}${__cmdnbary[\#]=} and ${__cmdnbary[\#]-word}${__cmdnbary[\#]=} look very similar. The *only* thing we change between the two constructs can be found in the first portion: we use ${parameter:+word} instead of ${parameter:-word}.
Remember that with ${parameter:-word}, word gets presented only if parameter is null or unset--in our case, only if we haven't set the position in the array yet, which we won't have done if and only if \# has incremented, which will only happen if we've just executed a command.
That means that, with ${parameter:-word}, we'd present word right after a command has been executed, which is precisely the opposite of what we want to do. So, we use ${parameter:+word} instead.
${parameter:+word}
Use Alternate Value. If parameter is null or unset, nothing is substituted, otherwise the expansion of word is substituted.
Which is (unfortunately), more double-negative logic to understand, but there you are.
The prompt itself
We've explained the switching mechanism, but what about the prompt itself?
Here, I use $( ... ) to contain the meat of the prompt. Primarily for my own benefit for readability; you don't have to do it that way. You can replace $( ... ) with whatever you might normally stuff in a variable assignment.
Why is it a hack?
Remember how we're adding entries to a sparse array? We're not removing those entries, and so the array will forever grow until the shell session is exited; the shell is leaking through PS1. And so far as I know, there's no way to unset a variable or array position in the prompt. You could try in $(), but you'll find it won't work; changes made to the variable namespace inside a subshell won't be applied to the space the subshell was forked from.
You might try using mktemp early in your .bashrc, before the PS1 assignment, and stuff information in the resulting file; then you could compare your current \# against what you've stored in there, but now you've made your prompt dependent on disk I/O, which is a good way to lock yourself out in emergency situations.
| How to detect if a command was provided to the shell prompt |
1,375,797,819,000 |
How can I run a Scheme expression from the command-line using neither a script saved in a file, nor starting the interactive shell?
The equivalent in Python would be: python -c "print 1+1". scheme (+ 1 1) just starts the interactive shell and shows the result inside it.
|
I installed guile and was able to have it execute code four ways:
1
$ guile <<< "(+ 1 1)"
GNU Guile 2.0.9
Copyright (C) 1995-2013 Free Software Foundation, Inc.
Guile comes with ABSOLUTELY NO WARRANTY; for details type `,show w'.
This program is free software, and you are welcome to redistribute it
under certain conditions; type `,show c' for details.
Enter `,help' for help.
$1 = 2
$
2
$ echo "(+ 1 1)" | guile
GNU Guile 2.0.9
Copyright (C) 1995-2013 Free Software Foundation, Inc.
Guile comes with ABSOLUTELY NO WARRANTY; for details type `,show w'.
This program is free software, and you are welcome to redistribute it
under certain conditions; type `,show c' for details.
Enter `,help' for help.
$1 = 2
scheme@(guile-user)>
$
3
$ echo "(+ 1 1)" > guile.script
$ guile < guile.script
GNU Guile 2.0.9
Copyright (C) 1995-2013 Free Software Foundation, Inc.
Guile comes with ABSOLUTELY NO WARRANTY; for details type `,show w'.
This program is free software, and you are welcome to redistribute it
under certain conditions; type `,show c' for details.
Enter `,help' for help.
$1 = 2
$
4
Thanks to GAD3R for this one:
$ guile -c "(display (+ 1 1)) (newline)"
2
$
In all cases, I'm returned to my original shell prompt (indicated by the bare $ lines).
| Run Scheme one-liner from the command-line |
1,375,797,819,000 |
Having installed the AWS CLI with pip install --user awscli what's the syntaxt to invoke awscli? It's listed here:
thufir@doge:~$
thufir@doge:~$ pip list
DEPRECATION: The default format will switch to columns in the future. You can use --format=(legacy|columns) (or define a format=(legacy|columns) in your pip.conf under the [list] section) to disable this warning.
adium-theme-ubuntu (0.3.4)
attrs (15.2.0)
awscli (1.11.35)
beautifulsoup4 (4.4.1)
botocore (1.4.92)
chardet (2.3.0)
colorama (0.3.7)
cryptography (1.2.3)
dnspython (1.12.0)
docutils (0.13.1)
enum34 (1.1.2)
futures (3.0.5)
gmpy2 (2.0.7)
greenlet (0.4.9)
html5lib (0.999)
idna (2.0)
iotop (0.6)
ipaddress (1.0.16)
jmespath (0.9.0)
lxml (3.5.0)
nglister (0.0.0)
PAM (0.4.2)
pip (9.0.1)
pyasn1 (0.1.9)
pyasn1-modules (0.0.7)
pyOpenSSL (0.15.1)
pyserial (3.0.1)
python-application (2.0.2)
python-dateutil (2.6.0)
python-eventlib (0.2.2)
python-gnutls (3.0.0)
python-msrplib (0.19.0)
python-otr (1.2.0)
python-sipsimple (3.1.0)
python-xcaplib (1.2.0)
PyYAML (3.12)
rsa (3.4.2)
s3transfer (0.1.10)
service-identity (16.0.0)
setuptools (32.3.1)
sipclients (3.0.0)
six (1.10.0)
Twisted (16.0.0)
unity-lens-photos (1.0)
wheel (0.29.0)
zope.interface (4.1.3)
thufir@doge:~$
per cdunklau on #python IRC:
thufir@doge:~$
thufir@doge:~$ ll .local/bin/aws*
-rwxrwxr-x 1 thufir thufir 814 Jan 2 00:06 .local/bin/aws*
-rwxrwxr-x 1 thufir thufir 204 Jan 2 00:06 .local/bin/aws_bash_completer*
-rwxrwxr-x 1 thufir thufir 1432 Jan 2 00:06 .local/bin/aws.cmd*
-rwxrwxr-x 1 thufir thufir 1135 Jan 2 00:06 .local/bin/aws_completer*
-rwxrwxr-x 1 thufir thufir 1915 Jan 2 00:06 .local/bin/aws_zsh_completer.sh*
thufir@doge:~$
Looking at the path:
thufir@doge:~$
thufir@doge:~$ echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
thufir@doge:~$
I need to add ~/.local/bin/ to the path?
see also:
https://askubuntu.com/q/802544/45156
|
solution:
thufir@doge:~$
thufir@doge:~$ tail .bashrc -n 1
export PATH="/home/thufir/.local/bin/:$PATH"
thufir@doge:~$
thufir@doge:~$ aws configure
AWS Access Key ID [None]:
Would've been much easier to just install with sudo.
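If .bashrc gets sourced more than once, an idempotent variant avoids stacking duplicate copies of the directory onto $PATH:

```shell
# Add ~/.local/bin to PATH only if it is not already there
case ":$PATH:" in
    *":$HOME/.local/bin:"*) ;;                 # already present: nothing to do
    *) export PATH="$HOME/.local/bin:$PATH" ;; # prepend once
esac
```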
| how to invoke awscli from pip install? |
1,375,797,819,000 |
So I have kinda just resigned to using nano for this, but I though I would put it out on Unix.Linux to A) Challenge somebody and B) learn how/if It can be done.
I want to prepend a link to an rsa file (command="/sbin/shutdown -h now").
Most of the things I found when google "cat prepend to file" make it so it would end up like this .
command="/sbin/shutdown -h now"
ssh-rsa MyRSsAkEyasetcetc
What I need is :
command="/sbin/shutdown -h now" ssh-rsa MySRasKeytsadnasdnasd
Aka all one line, prepend to first line.
|
This is a simple sed command:
sed 's!^!command="/sbin/shutdown -h now" !'
If the public key is in a file then you can use the -i flag to edit the file in place:
$ cat key.pub
ssh-rsa MySRasKeytsadnasdnasd
$ sed -i 's!^!command="/sbin/shutdown -h now" !' key.pub
$ cat key.pub
command="/sbin/shutdown -h now" ssh-rsa MySRasKeytsadnasdnasd
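If you'd rather avoid sed entirely, plain shell parameter handling works too, assuming the key file holds a single line (the file name and key below are the ones from the question):

```shell
# Demo setup: a one-line public key file, as in the question
echo 'ssh-rsa MySRasKeytsadnasdnasd' > key.pub

# Read the key, then rewrite the file with the forced-command
# option prepended on the same line
key=$(cat key.pub)
printf 'command="/sbin/shutdown -h now" %s\n' "$key" > key.pub

cat key.pub
```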
| Cat prepend to first line, NOT new line |
1,375,797,819,000 |
I need to copy around 40.0000 files into date structured folders.
example file:
/var/public/voicelogging/quality_monitoring/20151209/bbbbbb_I_20151209-185841_xxxxxx_12434_89343.WAV
Is one of the many files I need to copy to /home/username/logging/
The file name has 2 variables in it that I need to use:
bbbbbb_I_20151209-185841_xxxxxx_12434_89343.WAV
20151209 is of course the date
12434 is the id of the user who made the file.
What I need is a script/one liner that can search in a dir for the user id.
Then create a dir with the user id in /home/username/logging.
After it created the folder it needs to create a dir for every date it can find.
And place every file in to the right userid/date directory.
example of result dir.
/home/username/logging/12434/20151209/bbbbbb_I_20151209-185841_xxxxxx_12434_89343.WAV
I have build a one-liner for making the date dir's , but I still need to make the user id dir myself.
find /var/public/voicelogging/quality_monitoring/ -type f -name "*12434*" | sed -r 's/^.{65}//' | cut -c1-8 | xargs -I {} mkdir {} /home/username/logging/12434
How can I copy the right file to the right place?
|
One way with find and install:
find /var/public/voicelogging/quality_monitoring -name \*.WAV -exec sh -c '
bn=${0##*/}; x=${bn%%-*}; dt=${x##*_}; y=${bn%_*}; id=${y##*_}
install -D "$0" "/home/username/logging/${id}/${dt}/${bn}"' {} \;
this uses parameter expansion to extract the date: ${dt} and the user id: ${id} from the filename and then uses install to copy each file to the corresponding userID/date directory (this is because I'm lazy) - without install replace the last line with:
dest=/home/username/logging/${id}/${dt}; mkdir -p "${dest}" && cp "$0" "${dest}"' {} \;
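Before turning 40,000 files loose on either variant, the two expansions can be sanity-checked against a single filename from the question:

```shell
bn='bbbbbb_I_20151209-185841_xxxxxx_12434_89343.WAV'

x=${bn%%-*}     # strip from the first '-': bbbbbb_I_20151209
dt=${x##*_}     # strip up to the last '_': 20151209
y=${bn%_*}      # strip the last '_...':   bbbbbb_I_20151209-185841_xxxxxx_12434
id=${y##*_}     # strip up to the last '_': 12434

echo "date=$dt id=$id"
```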
If you prefer to loop over those "date" directories and then loop over the .WAV files in each dir:
for d in /var/public/voicelogging/quality_monitoring/*; do
dt=${d##*/}
for f in $d/*.WAV; do
bn=${f##*/}; y=${bn%_*}; id=${y##*_}
dest=/home/username/logging/${id}/${dt}
mkdir -p "${dest}" && cp "${f}" "${dest}"
done
done
If you have zsh it's easier and shorter with zmv (also because zsh is smarter and you can nest variable expansions e.g. ${${file%_*}##*_} would be enough to extract the user ID):
dtcp () {
mkdir -p $3 && cp $1 $2 $3
}
autoload zmv
zmv -n -p dtcp '/var/public/voicelogging/quality_monitoring/(*)/(*).WAV' \
'/home/username/logging/${${2%_*}##*_}/$1'
The (*)s create back references that can be used in the second parameter as $1, $2 etc.
Here zmv with -p executes the function dtcp instead of mv. The function creates the directory and then copies the file to the newly created directory. The arguments (not to be mistaken for the back references above) are:
$1 : --
which means end of options
$2 : /var/public/voicelogging/quality_monitoring/(*)/(*).WAV'
that is the file that has to be copied and
$3 : /home/username/logging/${${2%_*}##*_}/$1
which is the destination
Note that -n stands for dry-run; remove it to actually run the command.
| copy huge number of files into date structured directory order |
1,375,797,819,000 |
How do I get multi-word autocompletion with rlwrap for tclsh?
Example: I type file <space> then pressing <tab> <tab> I only want to see the sub-commands to file, such as exists isdirectory or isfile.
I tried adding file\ isfile (i.e. escaping the space) to the completion file, but this did not help. It just caused isfile to appear as another autocomplete command.
I guess I could accomplish multi-word autocompletion with an rlwrap filter, but there were no obvious examples in /usr/share/rlwarp/filters/ for me to hook on.
|
With help from the excellent answer by thrig, I cooked up the following multi-word completion filter for tclsh. The script below should be stored in tclsh_filter and executed with rlwrap -z tclsh_filter tclsh. Remember to chmod +x tclsh_filter.
#!/usr/bin/env perl
use strict;
use warnings;
use lib $ENV{RLWRAP_FILTERDIR};
use RlwrapFilter;
my @tcl_cmds = qw/encoding if pid tcl_endOfWord eof incr pkg::create tcl_findLibrary after error info pkg_mkIndex tcl_startOfNextWord append eval interp proc tcl_startOfPreviousWord array exec join puts tcl_wordBreakAfter auto_execok exit lappend pwd tcl_wordBreakBefore auto_import expr lindex re_syntax tcltest auto_load fblocked linsert read tclvars auto_mkindex fconfigure list regexp tell auto_mkindex_old fcopy llength registry time auto_qualify file load regsub trace auto_reset fileevent lrange rename unknown bgerror filename lreplace return unset binary flush lsearch scan update break for lset seek uplevel catch foreach lsort set upvar cd format memory socket variable clock gets msgcat source vwait close glob namespace split while concat global open string continue history package subst dde http parray switch/;
# Below is copy paste from tcl.tk web pages.
my $tcl_txt = <<END;
file atime name ?time?
file attributes name
file attributes name ?option?
file attributes name ?option value option value...?
file channels ?pattern?
file copy ?-force? ?--? source target
file copy ?-force? ?--? source ?source ...? targetDir
file delete ?-force? ?--? pathname ?pathname ... ?
file dirname name
file executable name
file exists name
file extension name
file isdirectory name
file isfile name
file join name ?name ...?
file link ?-linktype? linkName ?target?
file lstat name varName
file mkdir dir ?dir ...?
file mtime name ?time?
file nativename name
file normalize name
file owned name
file pathtype name
file readable name
file readlink name
file rename ?-force? ?--? source target
file rename ?-force? ?--? source ?source ...? targetDir
file rootname name
file separator ?name?
file size name
file split name
file stat name varName
file system name
file tail name
file type name
file volumes
file writable name
string compare ?-nocase? ?-length int? string1 string2
string equal ?-nocase? ?-length int? string1 string2
string first needleString haystackString ?startIndex?
string index string charIndex
string is alnum ?-strict? ?-failindex varname? string
string is alpha
string is ascii
string is boolean
string is control
string is digit
string is double
string is false
string is graph
string is integer
string is lower
string is print
string is punct
string is space
string is true
string is upper
string is wordchar
string is xdigit
string last needleString haystackString ?lastIndex?
string length string
string map ?-nocase? mapping string
string match ?-nocase? pattern string
string range string first last
string repeat string count
string replace string first last ?newstring?
string tolower string ?first? ?last?
string totitle string ?first? ?last?
string toupper string ?first? ?last?
string trim string ?chars?
string trimleft string ?chars?
string trimright string ?chars?
lsort -ascii
lsort -dictionary
lsort -integer
lsort -real
lsort -command command
lsort -increasing
lsort -decreasing
lsort -index index
lsort -unique
regexp -about
regexp -expanded
regexp -indices
regexp -line
regexp -linestop
regexp -lineanchor
regexp -nocase
regexp -all
regexp -inline
regexp -start index
regexp --
END
my @multi;
foreach my $line (split /\n/, $tcl_txt) {
$line =~ s/\?//g;
$line =~ s/ - -/ --/g;
$line =~ s/ \.\.\.//g;
$line =~ s/\s{2,}/ /g;
$line =~ s/\s+$//;
push @multi, $line;
if ($line =~ /^(.*\s)(-\w+)\s(-\w+)(.*)$/) {
push @multi, "$1$3 $2$4";
}
}
my $filter = RlwrapFilter->new;
$filter->completion_handler(\&completion);
$filter->run;
sub completion {
my ($input, $prefix, @completions) = @_;
$input =~ s/\s+/ /g;
# Support completion on composite expressions. Hacky, limited syntax support.
$input =~ s/^[^[]+\[//;
$input =~ s/^.*;\s*//;
# If last complete words were options, remove these so we can restart option
# matching.
$input =~ s/(?:\s-\w+)+\s((?:-\w+)?)$/ $1/;
my $word_cnt = () = $input =~ m/\b\s+/g;
if ($word_cnt == 0) {
@completions = grep /^\Q$input\E/, @tcl_cmds;
} else {
my @mmatch = grep /^\Q$input\E/, @multi;
@completions = map {my @F = split /\s/, $_;
$F[$word_cnt]} @mmatch;
# rlwrap seem to have a 'feature' where words beginning with '-' are
# prepended with '-', forcing us to remove the dash. Downside is that
# will list the options without '-'.
@completions = map {s/^-//; $_} @completions;
}
return @completions;
}
| rlwrap: tclsh multi-word autocompletion |
1,375,797,819,000 |
I have one dvb-t and one dvb-s card in my system which are in
/dev/dvb/adapter0
and
/dev/dvb/adapter1
Is there a way to find out which card is currently working? and which one isn't?
|
You can use lsof to see which processes are using a file. In your case:
$ lsof /dev/dvb/adapter0
$ lsof /dev/dvb/adapter1
Each call will give you a list of the processes having requested a handler (file descriptor) to your device. If nothing is printed, you can conclude that your device is not currently in use. Here's an example with /dev/urandom, used by Conky, Chromium and Thunderbird on my system:
$ lsof /dev/urandom
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
conky ---- ---- 3r CHR 1,9 0t0 1034 /dev/urandom
chromium- ---- ---- 55r CHR 1,9 0t0 1034 /dev/urandom
thunderbi ---- ---- 24r CHR 1,9 0t0 1034 /dev/urandom
r (in the FD field) means that the processes have requested read access: the number just before that is their descriptor number. CHR and 1,9 are the device type and major/minor numbers. 0t0 is the current file offset for that descriptor, and 1034 is the device file's inode number (on my system).
For more information about this output, see lsof(8). By the way, lsof returns a different status code depending on whether it found processes for a file or not. This means you can use it quite simply in a shell script:
#!/bin/bash
if lsof /dev/dvb/adapter0 > /dev/null 2>&1; then
echo "Adapter 0 is in use."
elif lsof /dev/dvb/adapter1 > /dev/null 2>&1; then
echo "Adapter 1 is in use."
else
echo "No adapter is in use."
fi
| Which device is working |
1,375,797,819,000 |
Am trying to enable proxy arp for some of the interfaces, with the normal interface name eth0, eth1, etc
[[email protected]]# sysctl net.ipv4.conf.eth0.proxy_arp
0
But for interface names such as "eth1.11, eth2.1" its giving the below error.
I tried different quoting formats ("", '', etc.), but no help.
[[email protected]]# sysctl net.ipv4.conf.eth2.1.proxy_arp
error: "net.ipv4.conf.eth2\.1.proxy_arp" is an unknown key
can anyone please point out the correct way to do this ?
|
Finally found the way to do this; here it goes.
It turns out that the `.` inside the interface name has to be written as `/` (sysctl swaps the two characters when a key contains a slash), and that made it work:
sysctl net.ipv4.conf.eth2/1.proxy_arp
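The slash trick mirrors how the /proc/sys tree works: when a sysctl key contains a `/`, sysctl swaps `.` and `/` when building the path, so the VLAN interface keeps its literal dot under /proc/sys/net/ipv4/conf/. A quick, root-free way to see the mapping (writing the actual value still needs root and an existing eth2.1):

```shell
# Swap '.' and '/' to turn the slashed sysctl key into its /proc/sys path
key='net.ipv4.conf.eth2/1.proxy_arp'
path="/proc/sys/$(printf '%s' "$key" | tr './' '/.')"
echo "$path"
# -> /proc/sys/net/ipv4/conf/eth2.1/proxy_arp
# (as root, on a machine that has the VLAN interface: echo 1 > "$path")
```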
| Enabling proxy_arp for interface eth2.1 |
1,375,797,819,000 |
OS: AIX 7100-04-1216
So basically I'm trying to write a for loop that sees which volumegroups I have on my system, which filesystems reside in those volume groups and what the size of each of those filesystems is.
I have following code:
for LINE in `lsvg` ; do
echo "Volume Group: "${LINE}
for LINE2 in `lsvgfs ${LINE}` ; do
echo "`lsvgfs ${LINE}` \n"
df -g ${LINE2}
done
done
The output of lsvg
rootvg
nimvg
The output of lsvgfs (for rootvg)
/
/usr
The output of lsvgfs (for nimvg)
/export/nim/lpp_source
/export/nim/spot
The output of df -g (for / in volumegroup rootvg)
Filesystem GB blocks Free %Used Iused %Iused Mounted on
/dev/hd4 5.25 2.86 46% 9957 2% /
The output of df -g (for /usr in volumegroup rootvg)
Filesystem GB blocks Free %Used Iused %Iused Mounted on
/dev/hd2 2.00 0.17 92% 43194 50% /usr
The output of df -g (for /export/nim/lpp_source in volumegroup nimvg)
Filesystem GB blocks Free %Used Iused %Iused Mounted on
/dev/fslv02 10.00 8.24 18% 597 1% /export/nim/lpp_source
The output of df -g (for /export/nim/spot in volumegroup nimvg)
Filesystem GB blocks Free %Used Iused %Iused Mounted on
/dev/hd4 5.25 2.86 46% 9957 2% /
What it should do:
Volume Group: rootvg
File System: /
/dev/hd4 5.25 2.86 46% 9957 2% /
File System: /usr
/dev/hd2 2.00 0.17 92% 43194 50% /usr
File System: /var
Volume Group: nimvg
File system: /export/nim/lpp_source
/dev/fslv02 10.00 8.24 18% 597 1% /export/nim/lpp_source
File system: /export/nim/spot
/dev/hd4 5.25 2.86 46% 9957 2% /
What I get:
Volume Group: rootvg
/
/usr
Filesystem GB blocks Free %Used Iused %Iused Mounted on
/dev/hd4 5.25 2.86 46% 9957 2% /
/
/usr
Filesystem GB blocks Free %Used Iused %Iused Mounted on
/dev/hd2 2.00 0.17 92% 43194 50% /usr
Volume Group: nimvg
/export/nim/lpp_source
/export/nim/spot
Filesystem GB blocks Free %Used Iused %Iused Mounted on
/dev/fslv02 10.00 8.24 18% 597 1% /export/nim/lpp_source
/export/nim/lpp_source
/export/nim/spot
Filesystem GB blocks Free %Used Iused %Iused Mounted on
/dev/hd4 5.25 2.86 46% 9957 2% /
|
First, a note on style. Using a for loop to iterate over lines of output is rarely a good idea. The for loop splits its input on whitespace, so if you have more than a single word per line it will treat each word as a separate iteration instead of keeping lines intact. You should use while read instead, since it handles whitespace more gracefully. Also, it is generally preferred to use $(command) instead of `command`.
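A quick throwaway demonstration of that splitting difference (the two-word entries here are hypothetical, since real lsvg output happens to be one word per line):

```shell
printf 'data vg\nbackup vg\n' > /tmp/vgs.txt

echo 'for splits on any whitespace:'
for LINE in $(cat /tmp/vgs.txt); do echo "[$LINE]"; done

echo 'while read keeps each line whole:'
while read -r LINE; do echo "[$LINE]"; done < /tmp/vgs.txt
```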
That is probably not your problem here though. The main issue is that you are echoing the results of lsvgfs ${LINE} for each line of results of lsvgfs ${LINE}:
for LINE2 in `lsvgfs ${LINE}` ; do
echo "`lsvgfs ${LINE}` \n"
done
So, of course, you're getting these lines twice:
/
/usr
Try this instead:
lsvg | while read LINE; do
echo "Volume Group: ${LINE}"
lsvgfs "${LINE}" | while read LINE2; do
echo "File System: ${LINE2}"
df -g ${LINE2}
done
echo ""
done
| Run command based on output of another command |
1,393,375,604,000 |
This is my first question on this site, so I'll get straight to the point.
I'm a fan of command-line tools and text-based applications. I use tmux together with qtile, a minimalist tiling WM, and I'm not about to change this environment. I'm a developer; I mainly use Python and Perl.
My first question is about mutt, a great mail client. I use the sidebar to display my mailboxes, and IMAP with Google accounts. Here is my configuration:
account-hook . 'unset preconnect imap_user imap_authenticators'
#First account
account-hook 'imaps://[email protected]@imap.gmail.com:993/' \
' set imap_user = "[email protected]" \
imap_pass = "password" '
folder-hook 'imaps://[email protected]@imap.gmail.com:993/INBOX' \
' set imap_user = "[email protected]" \
imap_pass = "1password" \
smtp_url = "smtp://[email protected]@smtp.gmail.com:587/" \
smtp_pass = "password" \
from = "[email protected]" \
realname = "Natal Ngétal" \
folder = "imaps://[email protected]@imap.gmail.com:993" \
spoolfile = "+INBOX" \
postponed="+[Gmail]/Drafts" \
mail_check=60 \
imap_keepalive=300 '
#Second account
account-hook 'imaps://[email protected]@imap.gmail.com:993/' \
' set imap_user = "[email protected]" \
imap_pass = "password" '
folder-hook 'imaps://[email protected]@imap.gmail.com:993/INBOX' \
' set imap_user = "[email protected]" \
imap_pass = "password" \
smtp_url = "smtp://[email protected]@smtp.gmail.com:587/" \
smtp_pass = "password" \
from = "[email protected]" \
realname = "Natal Ngétal" \
folder = "imaps://[email protected]@imap.gmail.com:993" \
spoolfile = "+INBOX" \
postponed="+[Gmail]/Drafts" \
mail_check=60 \
imap_keepalive=300 '
mailboxes + 'imaps://[email protected]@imap.gmail.com:993/INBOX' \
+ 'imaps://[email protected]@imap.gmail.com:993/INBOX' \
+ 'imaps://[email protected]@imap.gmail.com:993/[Gmail]/Messages envoyés' \
+ 'imaps://[email protected]@imap.gmail.com:993/[Gmail]/Messages envoyés' \
+ 'imaps://[email protected]@imap.gmail.com:993/[Gmail]/Spam' \
+ 'imaps://[email protected]@imap.gmail.com:993/[Gmail]/Spam' \
+ 'imaps://[email protected]@imap.gmail.com:993/Divers' \
+ 'imaps://[email protected]@imap.gmail.com:993/Divers' \
+ 'imaps://[email protected]@imap.gmail.com:993/[Gmail]/Tous les messages' \
+ 'imaps://[email protected]@imap.gmail.com:993/[Gmail]/Tous les messages'
# Where to put the stuff
set header_cache = "~/.mutt/cache/headers"
set message_cachedir = "~/.mutt/cache/bodies"
set certificate_file = "~/.mutt/certificates"
set mail_check = 30
set move = no
set imap_keepalive = 900
set editor = "vim"
set date_format = "%D %R"
set index_format = "[%Z] %D %-20.20F %s"
set sort = threads # like gmail
set sort_aux = reverse-last-date-received # like gmail
set uncollapse_jump # don't collapse on an unread message
set sort_re # thread based on regex
set reply_regexp = "^(([Rr][Ee]?(\[[0-9]+\])?: *)?(\[[^]]+\] *)?)*"
bind index gg first-entry
bind index G last-entry
bind index R group-reply
bind index <tab> sync-mailbox
bind index <space> collapse-thread
# Ctrl-R to mark all as read
macro index \Cr "T~U<enter><tag-prefix><clear-flag>N<untag-pattern>.<enter>" "mark all as read"
# Saner copy/move dialogs
macro index C "<copy-message>?<toggle-mailboxes>" "copy a message to a mailbox"
macro index M "<save-message>?<toggle-mailboxes>" "move a message to a mailbox"
bind index \CP sidebar-prev
bind index \CN sidebar-next
bind index \CO sidebar-open
bind pager \CP sidebar-prev
bind pager \CN sidebar-next
bind pager \CO sidebar-open
set pager_index_lines = 10 # number of index lines to show
set pager_context = 3 # number of context lines to show
set pager_stop # don't go to next message automatically
set menu_scroll # scroll in menus
set tilde # show tildes like in vim
unset markers # no ugly plus signs
bind pager k previous-line
bind pager j next-line
bind pager gg top
bind pager G bottom
bind pager R group-reply
set quote_regexp = "^( {0,4}[>|:#%]| {0,4}[a-z0-9]+[>|]+)+"
auto_view text/html # view html automatically
alternative_order text/plain text/enriched text/html
set sidebar_delim = '│'
set sidebar_visible = yes
set sidebar_width = 24
set status_chars = " *%A"
set status_format = "───[ Folder: %f ]───[%r%m messages%?n? (%n new)?%?d? (%d to delete)?%?t? (%t tagged)? ]───%>─%?p?( %p postponed )?"
set beep_new # bell on new mails
unset mark_old # read/new is good enough for me
color normal white black
color attachment brightyellow black
color hdrdefault cyan black
color indicator black cyan
color markers brightred black
color quoted green black
color signature cyan black
color status brightgreen blue
color tilde blue black
color tree red black
color index red black ~D
color index magenta black ~T
set signature="~/.signature"
So it works well: I can see both of my inboxes and when there are new messages in them. But when I open mutt, it first opens a local mailbox, which I don't understand, and to see the new messages in the other inboxes I have to move into each of them first. Maybe that's normal, but how do I ask mutt to open the domain.com inbox first, for example, rather than a local one that does not exist?
|
It's a little hard to see what you are trying to do, but are you by any chance looking for the $spoolfile variable/configuration setting in a global context?
I'm not sure how it interacts with Mutt's IMAP support, but it allows you to set the folder which will be opened by default when Mutt is started.
It looks like you set it in the account folder-hooks, but you'd need to set it outside of those in order for it to apply before the folder-hook folder is entered.
Try adding the following to the end of your ~/.muttrc, and see if it helps:
set spoolfile="imaps://[email protected]@imap.gmail.com:993/INBOX"
| Mutt imap multiple account |
1,393,375,604,000 |
I want to quickly compare files in two different directories to see if the files are the same (same content). I want to see the results in Kompare (I'm on KDE - Kubuntu 12.04).
Here's my diff command:
diff -EwbBsy /directory/one /directory/two
(That command would suit me even better if it ignored any files in /directory/one that are not already present in /directory/two, but I couldn't figure out how to achieve that.)
To use Kompare, I do this:
diff -EwbBsy /directory/one /directory/two | kompare -o -
However, that gives the following error:
Error: Could not parse diff output.
I also tried:
diff -Ewbus /directory/one /directory/two | kompare -o -
and just
diff /directory/one /directory/two | kompare -o -
and a few other variations without success.
What am I doing wrong? Thanks.
|
It doesn't seem to be able to handle the -y switch which does the side-by-side style of diff, but you can use the unified diff (-u). You can't mix these 2 styles so it's either -y or -u. So doing this worked for me:
$ diff -EwbBsu /directory/one /directory/two | kompare -o -
This will not show the entire files with the matches, just the lines that differ, with 3 lines of context by default. If you want more context you can use the capital -U form, which takes the number of context lines as an argument (-U 10, for example):
$ diff -EwbBsU 10 /directory/one /directory/two | kompare -o -
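To see the effect of the context size on a throwaway example (files made up for the demo, nothing to do with your directories):

```shell
seq 1 20 > /tmp/before.txt
sed 's/^10$/ten/' /tmp/before.txt > /tmp/after.txt

# default unified context: 3 lines on each side of the change
diff -u /tmp/before.txt /tmp/after.txt | wc -l
# with -U 1, only one context line on each side, so shorter output
diff -U 1 /tmp/before.txt /tmp/after.txt | wc -l
```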
| How to pipe diff into Kompare? |
1,393,375,604,000 |
I need to average upload and download speed using dstat -n. How can I collect the received and sent data sizes that dstat -n prints, so that I can sum them up and find the average upload and download speed over some period of time?
|
As no one answered, I figured it out myself.
Here is how to do it. Say we want the average over 2 minutes (120 seconds). First write the output to a file named stat.txt, refreshing every second for 120 samples:
dstat -n 1 120 >> stat.txt
Add the columns of stat.txt
awk -F" " '{t1=t1+$1;t2=t2+$2}END{t1=t1/120;t2=t2/120;print t1"\t"t2}' stat.txt
Remove stat.txt
rm stat.txt
We can make a script too from these commands.
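The hard-coded 120 can be avoided by letting awk count the lines itself via NR. Here it is run against two fake samples instead of real dstat output (with real output you would also want to strip dstat's header lines first, e.g. with --noheaders if your dstat supports it):

```shell
# Two fake samples standing in for dstat's recv/send columns
printf '100 200\n300 400\n' > stat.txt

# NR = number of records read, so no hard-coded sample count
awk '{t1 += $1; t2 += $2} END {print t1/NR "\t" t2/NR}' stat.txt

rm stat.txt
```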
| Averaging output of dstat |
1,393,375,604,000 |
I am trying to run oprofile on my ubuntu host but cannot find the vmlinux file. The set up sfor oprofile needs this file:
As given here : http://oprofile.sourceforge.net/doc/overview.html#getting-started
opcontrol --vmlinux=/boot/vmlinux-`uname -r`
What should I do so that I can profile the ubuntu kernel.
I am using 2.6.32-34-generic-pae (uname -r)
|
Under Ubuntu & variants, it's named vmlinuz. So your command line for oprofile becomes:
opcontrol --vmlinux=/boot/vmlinuz-`uname -r`
| Trying to run oprofile on ubuntu kernel but cannot find vmlinux file |
1,393,375,604,000 |
I'm trying to automate creating vhosts on my computer. This is merely a learning experience for bash scripting. I'm currently novice. I'm trying to learn more about awk, sed.
Anyways, this is my conf file. What would be the most efficient way to find and replace from command line? I'll eventually replace some forms with tokens, like {DOMAIN} and {PATH}
NameVirtualHost commerce.l:*
<Directory "/home/chris/workspace/dev.commerce/html">
Options Indexes Includes execCGI
AllowOverride All
Order Allow,Deny
Allow From All
</Directory>
<VirtualHost commerce.l>
DocumentRoot /home/chris/workspace/dev.commerce/html
ServerName commerce.l
ErrorLog logs/commerce.error
<IfModule mod_rewrite.c>
<Directory "/home/chris/workspace/dev.commerce/html">
RewriteEngine on
# needed by Drupal 7 for "clean URLs"
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_URI} !=/favicon.ico
RewriteRule ^ index.php [L]
</Directory>
</IfModule>
</VirtualHost>
P.S. I'm on Ubuntu 11.04 natty
|
This looks like a job for a here-document: include the template in your script, and use $variable_name when you want to substitute variables, or $(shell-command) to substitute the output of any shell command.
The here-document begins on the line after the marker <<EOF (you can replace EOF by any word) and ends on a line containing exactly EOF (no indentation allowed). Inside the template, the same characters are special as inside double quotes: "$`\ (note the backquote, which needs to be protected \`).
DOMAIN=commerce.l
DOCROOT=/home/chris/workspace/dev.commerce/html   # don't call this PATH: reassigning PATH breaks command lookup
cat >>/etc/apache/sites-available/$DOMAIN <<EOF
NameVirtualHost $DOMAIN:*
<Directory "$DOCROOT">
…
</VirtualHost>
EOF
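For the {DOMAIN}/{PATH} token approach you mention, sed works too: keep the template verbatim in a file and substitute on the way out. A minimal sketch (the template and values below are made up):

```shell
# A made-up template file using the {DOMAIN}/{PATH} tokens
cat > vhost.tpl <<'EOF'
<VirtualHost {DOMAIN}>
    DocumentRoot {PATH}
    ServerName {DOMAIN}
</VirtualHost>
EOF

domain=commerce.l
docroot=/home/chris/workspace/dev.commerce/html

# '|' as the sed delimiter, so the slashes in the path need no escaping
sed -e "s|{DOMAIN}|$domain|g" -e "s|{PATH}|$docroot|g" vhost.tpl
```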
| Templating values for a bash script for apache conf files |
1,393,375,604,000 |
udisksctl is my tool of choice when dealing with file system images (recent example, but I've been doing this all over the place).
The dance typically looks like
fallocate -l ${img_size} filesystem.img
mkfs.${fs} filesystem.img
# Set up loop device as regular user
loopback=$(udisksctl loop-setup -b "${img_file}" | sed 's/.* \([^ ]*\)\.$/\1/')
# Mount as regular user
mounted=$(udisksctl mount -b "${loopback}" | sed 's/^.* \([^ ]*\)$/\1/')
# Do the testing/benchmarking/file management
# e.g.:
cp -- "${files[@]}" "${mounted}"
Quite frankly, I have a bad feeling about the way I parse the output of udisksctl; these are clearly human-aimed strings:
Mapped file filesystem.img as /dev/loop0.
Mounted /dev/loop1 at /run/media/marcus/E954-81FB
And I don't think anyone considers their actual format "API". So, my scripts might break in the future! (Not to mention the nasal demons I invite if the image file name contains line breaks.)
udisksctl doesn't seem to have a "porcelain" output option or similar. Is there an existing method that does udisksctl's job of loopback mounting with user privilege through udisks2, with a proper, unambiguous output?
|
I feel your pain... I also love the sudo-less power of udisksctl, I use UDisks2 in several snippets and projects, and I also hate how "machine-unfriendly" its output is.
That said, one approach I'm leaning towards to is not to parse the output of udisksctl mount,loop-*,..., use it for actions only, and leave parsing to udisksctl info, udevadm or even lsblk -lnpb (which can even have JSON output if you're willing to use jq!).
If you do parse udisksctl, at least prepend it with LC_ALL=C to avoid localized messages by using the fixed C "virtual" locale, so you at least guarantee the strings matched won't change per user environment.
Examples:
Listing the (writable) partitions with a known filesystem of a drive device (if any):
lsblk "$device" -lnpb --output NAME,RO,FSTYPE | awk '$2 == 0 && $3 {print $1}'
Getting the mountpoint of one such partition above after mounting:
LC_ALL=C udisksctl info -b "$partition" | grep -Po '^ *MountPoints: *\K.*'
No more grepping human-intended messages!
(or, better yet, make your voice heard in this still open 2017 request for a script-friendly interface)
| udisksctl: get loop device and mount point without resorting to parsing localized output? |
1,393,375,604,000 |
When in the graphical environment, seahorse unlocks my ssh key (locked with a passphrase) so I can ssh to another host without entering a passphrase.
But when on the command line, I am still asked for such a passphrase.
Is there a way to have ssh-agent unlock my key on login the way seahorse does? Also, what is the proper way to start ssh-agent on login?
|
sudo apt install keychain
and add
if [ -z "$TMUX" ] ; then
keychain -q ~/.ssh/id_rsa;
fi
. ~/.keychain/$(hostname)-sh 2> /dev/null
to ~/.bashrc
https://linux.die.net/man/1/keychain
| Unlock ssh key on login |
1,393,375,604,000 |
I'm trying to batch convert some Excel schedules to CSV format
from the command line with the following LibreOffice command:
libreoffice --convert-to csv *.xlsx
I'm getting:
error
xsltParseStylesheetFile : cannot parse
I/O warning : failed to load external entity ""
error
xsltParseStylesheetFile : cannot parse
convert file.xlsx -> file.csv using filter : Text - txt - csv (StarCalc)
I'm actually getting a csv file like this, but I also get a GUI-message saying "the maximum number of columns per sheet was exceeded."
Also I am getting lots of ���
for unsupported German characters ä ü ö.
Does anyone know how to fix those errors?
Update:
running: libreoffice --convert-to "txt:Text (encoded):UTF8" File.xlsx
I get:
Warning loading document File.xlsx:
The data could not be loaded completely because the maximum number
of columns per sheet was exceeded.
OK
[manually transcribed from this image]
and on the command-line:
convert /home/user/File.xlsx -> /home/user/File.txt using filter : Text (encoded):UTF8
Error: Please verify input parameters... (SfxBaseModel::impl_store <file:///home/user/File.txt> failed: 0xc10(Error Area:Io Class:Write Code:16) /build/libreoffice-RsXkGA/libreoffice-7.0.1~rc1/sfx2/source/doc/sfxbasemodel.cxx:3153 /build/libreoffice-RsXkGA/libreoffice-7.0.1~rc1/sfx2/source/doc/sfxbasemodel.cxx:1735)
Might that be because of the Excel xlsx format?
|
Are you obliged to use libreoffice?
A great command-line tool to convert xlsx to csv is xlsx2csv
Description: convert xlsx files to csv format xlsx files are zip
archives where spreadsheet data is stored. In order to process a
file, various bits inside the archive need to be located. This
utility uses the Expat SAX parser to collect the strings into a
simple dictionary, keyed by their relative position in the XML file.
This makes it possible to process files of any size.
Installation on a Debian-System:
sudo apt install xlsx2csv
Simple usage:
xlsx2csv infile.xlsx outfile.csv
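Since each workbook is converted independently, batch conversion is a simple loop. A sketch with made-up file names; the `xlsx2csv` call itself is commented out so the sketch runs even where the tool is not installed:

```shell
# Derive the .csv output name from each .xlsx input name
for f in report.xlsx data.xlsx; do
    out="${f%.xlsx}.csv"          # strip the .xlsx suffix, append .csv
    # xlsx2csv "$f" "$out"        # the actual conversion
    printf '%s -> %s\n' "$f" "$out"
done
```

With real files, point the loop at `./*.xlsx` and uncomment the conversion line.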
| LibreOffice - convert Calc/Excel schedule to CSV from command line |
1,393,375,604,000 |
I am trying to install a Let's Encrypt certificate on a Oracle Linux Server 7.6. Since the server does not have a public IP, I had to validate via DNS.I followed the instructions here https://github.com/joohoi/acme-dns-certbot-joohoi and the validation worked and I got the certificate. How do I now install the certificate?
I followed instructions online, moved the certificate to /etc/ssl/certs and deleted the old certificate. After restarting the machine, however, the website does not work and I get an error that the site cannot be reached.
I can interact with the server only via SSH.
|
I believe this should be comparable to CentOS 7.6. The path /etc/ssl/certs is simply a symbolic link to /etc/pki/tls/certs/. The certificate comes in two parts. The first, which you have already mentioned, is the *.crt file containing the public key; it goes into /etc/pki/tls/certs/ (in my case certificate.crt). The other part is the private key, which goes into /etc/pki/tls/private/ and usually has a *.key extension (in my case private.key).
In case you are using the Apache web server, here is a working example of my redmine.conf; it should be enough to guide you through:
<VirtualHost *:80>
RewriteEngine On
RewriteCond %{HTTPS} !=on
RewriteRule ^/?(.*) https://%{SERVER_NAME}/$1 [R,L]
</VirtualHost>
<VirtualHost *:443>
ServerName www.example.com
ServerAlias 192.0.2.37
SSLEngine on
SSLCertificateFile /etc/pki/tls/certs/certificate.crt
SSLCertificateKeyFile /etc/pki/tls/private/private.key
SSLCertificateChainFile /etc/pki/tls/certs/ca_bundle.crt
DocumentRoot /var/www/html/redmine/public
<Directory /var/www/html/redmine/public>
Allow from all
Options -MultiViews
Require all granted
</Directory>
</VirtualHost>
I almost forgot to mention something which might solve your problem: you need to make sure that you have firewall rules in place, and permanent ones, as follows:
firewall-cmd --permanent --add-service=http --add-service=https --zone=public
firewall-cmd --reload
Also, make sure you have SELinux disabled in case you have not adjusted its rules for your web service.
| Install Let's Encrypt SSL certificate on Oracle Linux Server |
1,393,375,604,000 |
I use Debian 5. I was building GN. I followed the instruction provided here.
I was executing these commands:
git clone https://gn.googlesource.com/gn
cd gn
python build/gen.py
ninja -C out
While executing ninja -C out/ I receive this message:
ninja: Entering directory `out/'
[1/238] CXX tools/gn/input_file.o
FAILED: tools/gn/input_file.o
clang++ -MMD -MF tools/gn/input_file.o.d -I/home/us/WebRTCBuild/gn -I/home/us/WebRTCBuild/gn/out -DNDEBUG -O3 -fdata-sections -ffunction-sections -D_FILE_OFFSET_BITS=64 -D__STDC_CONSTANT_MACROS -D__STDC_FORMAT_MACROS -pthread -pipe -fno-exceptions -fno-rtti -fdiagnostics-color -std=c++14 -Wno-c++11-narrowing -c /home/us/WebRTCBuild/gn/tools/gn/input_file.cc -o tools/gn/input_file.o
/bin/sh: clang++: command not found
[2/238] CXX base/callback_internal.o
FAILED: base/callback_internal.o
clang++ -MMD -MF base/callback_internal.o.d -I/home/us/WebRTCBuild/gn -I/home/us/WebRTCBuild/gn/out -DNDEBUG -O3 -fdata-sections -ffunction-sections -D_FILE_OFFSET_BITS=64 -D__STDC_CONSTANT_MACROS -D__STDC_FORMAT_MACROS -pthread -pipe -fno-exceptions -fno-rtti -fdiagnostics-color -std=c++14 -Wno-c++11-narrowing -c /home/us/WebRTCBuild/gn/base/callback_internal.cc -o
base/callback_internal.o
/bin/sh: clang++: command not found
ninja: build stopped: subcommand failed.
As far as I understand, the problem is shown in this message:
/bin/sh: clang++: command not found
I have already installed LLVM, but it didn't work.
I also read that it may be caused by the absence of g++, but g++ is installed.
Result of executing echo $PATH:
/usr/local/bin:/usr/bin:/bin:/usr/games:/opt/gcc49/bin
|
I solved this problem by avoiding the clang compiler. I noticed that build/gen.py has an option that makes it possible to choose the compiler; by default it's clang. So in build/gen.py I changed the part shown below.
def WriteGNNinja(path, platform, host, options):
if platform.is_msvc():
cc = os.environ.get('CC', 'cl.exe')
cxx = os.environ.get('CXX', 'cl.exe')
ld = os.environ.get('LD', 'link.exe')
ar = os.environ.get('AR', 'lib.exe')
elif platform.is_aix():
cc = os.environ.get('CC', 'gcc')
cxx = os.environ.get('CXX', 'g++')
ld = os.environ.get('LD', 'g++')
ar = os.environ.get('AR', 'ar -X64')
else:
cc = os.environ.get('CC', 'clang')
cxx = os.environ.get('CXX', 'clang++')
ld = cxx
ar = os.environ.get('AR', 'ar')
I changed these lines:
cc = os.environ.get('CC', 'clang')
cxx = os.environ.get('CXX', 'clang++')
ld = cxx
ar = os.environ.get('AR', 'ar')
to this:
cc = os.environ.get('CC', 'gcc')
cxx = os.environ.get('CXX', 'g++')
    ld = cxx
    ar = os.environ.get('AR', 'ar')
Now I receive errors when executing ninja -C out, but they are related to compiling the code itself. The /bin/sh: clang++: command not found problem is solved.
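Note that since gen.py reads the compiler names with os.environ.get, the same switch can likely be made without editing the file at all, by setting CC/CXX/AR in the environment for that one command (untested on this exact codebase):

```shell
# Per-command environment assignments override the clang defaults that
# gen.py falls back to. The gen.py/ninja line is shown as a comment since
# it needs the gn checkout:
#   CC=gcc CXX=g++ AR=ar python build/gen.py && ninja -C out
# Demonstration that such assignments reach the child process:
CXX=g++ sh -c 'echo "CXX is $CXX"'
```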
| /bin/sh: clang++: command not found |
1,393,375,604,000 |
I have files
..
00016_0912RP10R6_RampMotorway9_0912RP10R6_13.646852_100.687103.jpg
00017_0912RP10R6_RampMotorway9_0912RP10R6_13.646956_100.686897.jpg
00018_0912RP10R6_RampMotorway9_0912RP10R6_13.647067_100.686684.jpg
...
I would like to have
00016.jpg
00017.jpg
00018.jpg
What is the best Linux command to loop through subfolders and rename them?
|
Using find:
find . -type f -name '*_*.jpg' -exec sh -c '
for pathname do
newname=${pathname##*/}
newname="${pathname%/*}/${newname%%_*}.jpg"
printf "Would move %s to %s\n" "$pathname" "$newname"
# mv -i "$pathname" "$newname"
done' sh {} +
This would find the pathnames all regular files in or below the current directory whose names match the given pattern. For batches of these pathnames, a short shell script is executed that loops over the given pathnames and renames the files (the actual renaming is commented out for safety).
Given a pathname like ./a/b/c/foo0_some_other_bits.jpg, the shell script would transform this into ./a/b/c/foo0.jpg by first deleting the directories (producing foo0_some_other_bits.jpg) and then deleting everything after the first _ character and adding the directories back again. The directory bit of the pathname is deleted and replaced just in case it also happens to contain one or several _ characters. This is done using standard parameter expansions.
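The two expansions can be checked in isolation; this small sketch reproduces the example pathname from the paragraph above:

```shell
pathname='./a/b/c/foo0_some_other_bits.jpg'
base=${pathname##*/}                      # strip directories: foo0_some_other_bits.jpg
newname="${pathname%/*}/${base%%_*}.jpg"  # keep dirs, cut at first "_": ./a/b/c/foo0.jpg
echo "$newname"                           # prints ./a/b/c/foo0.jpg
```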
Using the globstar shell option in bash:
shopt -s globstar # use "set -o globstar" in ksh93, or remove completely in zsh
for pathname in ./**/*.jpg; do
[ -f "$pathname" ] || continue
newname=${pathname##*/}
newname="${pathname%/*}/${newname%%_*}.jpg"
printf "Would move %s to %s\n" "$pathname" "$newname"
# mv -i "$pathname" "$newname"
done
This is exactly equivalent to the above find command, with the only difference that it wouldn't find hidden names (add shopt -s dotglob for that).
| Batch rename file by substring the filename |
1,393,375,604,000 |
How do you start a specific web application created by GNOME Web via the command line?
Usage example: The user creates a web application and wants to add that specific web application as a startup program (without opening GNOME Web)
|
I found the solution to my problem, but it's not very elegant. If you find a better and simpler way to do this, please share.
Here it is:
Go to:
~/.local/share/applications
Open the file with your text editor. Its name will be:
epiphany-yourAppName-RandomAlphanumericalCharacters.desktop
Copy the text following "Exec=" (on line 3). It should look like:
epiphany --application-mode --profile="/home/yourusername/.config/epiphany/app-epiphany-yourAppName-randomAlphanumericalCharacters" https://example.com/
Use this command to start the web application.
| Start a GNOME Web (Epiphany) "web application" via command line |
1,393,375,604,000 |
I have a script that I call like so: gitploy up -t 2.0.0 test_repo. I pull out the "action" up right away, then I need to be able to get the test_repo before I process the options. I don't want to lose that argument's place in line, if that makes sense; I don't want to shift it away, just get it and let it be. Basically I want to get that test_repo spot before I do
while getopts "zhvdurqw:c:i:e:o:b:t:f:p:g" opt; do
case $opt in
#flag----------------------
h)
usage;
exit 0
;;
#callback------------------
c) queue_callback "$OPTARG"
shift $((OPTIND-1)); OPTIND=1
;;
### so one and so forth
section of the script. So basically I can do something like this
# i would get the first argument after the options here first so "test_repo"
# would be a $@ or $* or something?
root_arg="test_repo"
while getopts "zhvdurqw:c:i:e:o:b:t:f:p:g" opt; do
case $opt in
#flag----------------------
h)
usage;
exit 0
;;
#callback------------------
c) queue_callback "$OPTARG"
echo "$root_arg was here"
shift $((OPTIND-1)); OPTIND=1
;;
### so one and so forth
In the broader scope of this, I guess the question is: "How do I get an argument by its position relative to the options?"
revisal of question:
The hope is to say something like,
positon of getopts output in command in var
call the position of getopts output +1
test_repo argument is located +1 after getopts output
do normal processing
I'm trying to not move the command for backward compatibility reasons to start with, but not really limited to that. I figured I could just write a little shim here since I know the pattern is already set to gitploy <__action> [__options] <__alias> [__remote_url].
I guess I could make everything an option and deprecate the other arguments. Not sure that is a bad way to do this but it would seem that I would have to scan in an order to look for the <__alias> (or the test_repo as presented in the example) and look for it as an option such as -a test_repo.
Even if it is not the right way to do this in the end, I would like to have an answer on whether you can "read the cursor" here and determine the argument value, or if it is an impossible thing to do.
result of answer below
while getopts "zhvdurqw:c:i:e:o:b:t:f:p:g" opt; do
case "$opt" in
esac
done
index="$((OPTIND))"
GD_REPO="${!index}";
OPTIND=1
This is what I ended up doing. It seems like a dirty trick to fast-forward and then rewind, but it works. If there are better ideas, I would still love to learn them.
|
Your requirements are logically contradictory. Given input like gitploy up -t 2.0.0 test_repo, you need to parse the options, and in particular notice the presence of the -t option and the fact that it takes one argument, in order to identify that the first non-option argument is test_repo.
So first parse the options normally. Then you can know what the first operand is. At that point, process the options. Store the information you need about the options in variables.
action="$1"
shift
unset t_arg
while getopts "zhvdurqw:c:i:e:o:b:t:f:p:g" opt; do
case "$opt" in
c) c_arg="$OPTARG";;
…
esac
done
shift "$((OPTIND-1))"
root_arg="$1"
if [ -n "${c_arg+1}" ]; then
    queue_callback "$c_arg"
echo "$root_arg was here"
fi
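A self-contained sketch of the same pattern, with a reduced option string and hypothetical arguments, shows the operand being picked up after the options are consumed:

```shell
parse() {
    OPTIND=1                      # reset so the function can be called repeatedly
    action="$1"; shift            # first argument is the action
    unset t_arg c_arg
    while getopts "t:c:" opt; do
        case "$opt" in
            t) t_arg="$OPTARG";;
            c) c_arg="$OPTARG";;
        esac
    done
    shift "$((OPTIND-1))"         # discard the parsed options
    root_arg="$1"                 # first operand after the options
    printf '%s %s %s\n' "$action" "${t_arg-}" "$root_arg"
}
parse up -t 2.0.0 test_repo       # prints: up 2.0.0 test_repo
```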
| get first CLI argument after the options in shell scipt |
1,393,375,604,000 |
I am an absolute beginner at UNIX scripting (and have searched here for something explaining how to do this in a simple way to no avail). I am trying to pipe the contents of a claimName.txt file
find . -name 'claimName.txt' -exec cat {} \;
into a Node.js script that takes this value after a -c flag
npm run import -- -r ./ -c {claimName.txt contents go here}
What's the simplest way to pipe the claimName.txt contents to follow the -c flag?
|
You can use this without a pipe:
npm run import -- -r ./ -c "$(find . -name 'claimName.txt' -exec cat {} \; -quit)"
| pipe the output of cat into a node script |
1,393,375,604,000 |
I've been trying to do the following for a while, but without success.
The data I received has comma-separated values in separate columns. The first value in column 6 before the comma is always related to the first value in column 7 before the comma. I want to extract the data and put it into a table in the right order, as such:
Input Data:
1 2 3 4 5 A1,A2 B1,B2
1 7 3 3 5 C1,C2,C3 D1,D2,D3
1 2 R 4 b E1,E2,E3,E4 G1,G2,G3,G4
Output Data:
1 2 3 4 5 A1 B1
1 2 3 4 5 A2 B2
1 7 3 3 5 C1 D1
1 7 3 3 5 C2 D2
1 7 3 3 5 C3 D3
1 2 R 4 b E1 G1
1 2 R 4 b E2 G2
1 2 R 4 b E3 G3
1 2 R 4 b E4 G4
I understand I need to split them by \t before placing them into an array of sorts, but I'm utterly new to this, and the data I received is huge.
|
With awk:
awk '{split($6,a,","); split($7,b,","); for(i in a){print $1,$2,$3,$4,$5,a[i],b[i]}}' file
awk reads the input space or tab delimited, default: [\t ]+.
split($6,a,",") split the 6th field $6 separated by comma , and store the output in an array called a.
split($7,b,",") split the 7th field $7 separated by comma , and store the output in an array called b.
for(i in a) now loop through the a array...
print ...,a[i],b[i] ... and print the values $1 to $5 and the two array values a[i] and b[i] by their indexes i.
The output:
1 2 3 4 5 A1 B1
1 2 3 4 5 A2 B2
1 7 3 3 5 C1 D1
1 7 3 3 5 C2 D2
1 7 3 3 5 C3 D3
1 2 R 4 b E1 G1
1 2 R 4 b E2 G2
1 2 R 4 b E3 G3
1 2 R 4 b E4 G4
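One caveat: the order in which for(i in a) visits indexes is unspecified in awk, even though many implementations happen to visit split() results in numeric order. A variant that is guaranteed to keep the rows ordered uses the element count that split returns (shown here with two sample rows fed through a here-document):

```shell
awk '{
    n = split($6, a, ",")          # n = number of comma-separated values
    split($7, b, ",")
    for (i = 1; i <= n; i++)       # ordered C-style loop instead of "for (i in a)"
        print $1, $2, $3, $4, $5, a[i], b[i]
}' <<'EOF'
1 2 3 4 5 A1,A2 B1,B2
1 7 3 3 5 C1,C2,C3 D1,D2,D3
EOF
```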
| Processing table with comma separated values in different columns |
1,393,375,604,000 |
Is there any technical merit/necessity to numerous *nix commands (mkdir, mkfifo, mknod) having a -m (--mode) option?
I ask this because near as I can tell, umask (both the shell command and the syscall) provides everything you need to control a file's permissions:
For example, I can do this:
mkdir -m 700 "$my_dir"
..but I can just as easily do:
old_umask=`umask` \
&& umask 0077 \
&& mkdir "$my_dir"
umask "$old_umask"
To be clear, I can see that the former is much more user-friendly and convenient (especially for command-line useage), but I don't really see a technical advantage of the former over the latter.
Note also that I understand the merits of this flexibility at the underlying syscall level: if I want to call open or sem_open or what have you with at most 600 permissions, it makes sense that I would just pass S_IRUSR | S_IWUSR to the open syscall, never bother with the umask syscall, saving a syscall round-trip (or two, if I want to then reset the umask since the umask call modifies the current umask) and my code is simpler/cleaner. This does not apply in the command line example because the -m/--mode option of such a command will have to call umask to zero out the umask of that command's process anyway, to ensure the mode/permission bits that it's supposed to set on the new file/whatever are set. (E.g. if my shell's umask is 022, then mkdir -m 777 /tmp/foo can only work as expected if it's first calling umask internally to zero out the umask it inherited from the shell.)
So what I want to make sure I didn't miss in my considering of the problem is, is there something you could not accomplish with just the umask command, without relying on the -m/--mode options of the mk* commands?
|
There are things you can't do with umask alone:
create a regular file with permissions above 0666.
create a directory with permissions above 0777.
So you do need chmod or --mode as well. If, for security reasons, you never want to create an object with temporarily higher rights than intended, chmod without umask isn't enough either. In some corner cases you even have to use both, resulting in the rather ugly sequence umask / mkdir / chmod / umask. (Example: create a group temp directory (01770).)
So --mode can be replaced with chmod and umask, but not with only one of them.
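A sketch of that ugly-but-safe sequence for the group temp directory example (the target path here is made up with mktemp):

```shell
dir=$(mktemp -d)/shared        # hypothetical target directory
old_umask=$(umask)
umask 0007                     # the directory is created no wider than 0770
mkdir "$dir"
chmod 1770 "$dir"              # umask cannot add the sticky bit; chmod can
umask "$old_umask"
```

At no point is the directory more permissive than the intended 01770.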
| umask vs -m, --mode command options |
1,424,365,645,000 |
There are many Clipboard Manager for Unix-based Operating System but is there a way to actually know which one is being used?
I am on Fedora 20 under Gnome 3.10.1 and I know that I'm using GPaste 3.10.
But I would like to know if there is a command line which would output GPaste 3.10 (except gpaste --version obviously).
|
After doing an extensive search I wasn't able to find a method for doing this. It seems there is no way to find out which downstream tool is collecting the clipboard contents in order to provide a "management" facility around them.
| Knowing default clipboard manager |
1,424,365,645,000 |
I'd like to try to stress-test a PHP script I've written, which accesses the filesystem, to see how it copes with load and parallel access.
I'd like to run this script x times in y different processes parallelly.
Is there a tool for this?
|
xargs allows easy parallel processing. Here is an example (which assumes that your version of xargs supports the -0 switch, which is not a POSIX requirement; if portability is an issue, feed newline-delimited input and drop the -0).
maxruns=2000
instances=50
printf '%s\0' $(seq 1 "$maxruns") | xargs -0 -I, -P "$instances" <program>
seq generates the numbers from 1 to 2000 and printf outputs them delimited by NUL characters (note that {1..$maxruns} would not work, since brace expansion happens before variable expansion). This is piped to xargs. The -0 option notifies xargs that the values are delimited by NUL characters instead of whitespace. The -I switch replaces the following character (a comma, can be any character sequence) in the command with the input value, and implies that one input item is consumed per invocation. Since the input values are the numbers, which we don't need, and there is no comma in the command line, the input is simply discarded. -P 50 runs no more than 50 instances of <program> at a time.
| How can I stress test a command line tool? |
1,424,365,645,000 |
I am looking for a substitution of the W3C RDF Validator, as it is broken, in addition I want something a bit more automated, such as a command line tool.
I have been using xmllint for checking XML files in the past. Are there any command line tools similar to that?
|
You can use rapper tool for validation or http://jena.sourceforge.net/Eyeball/
| Is there a command line tool for validating RDF files? |
1,424,365,645,000 |
I use RHEL 6 as my regular operating system, and for one of my user accounts I made one of the desktop panels auto-hide and the other a fixed panel. I expected the hidden panel to appear above the fixed panel, but was surprised to see both panels clash in the same space (bottom). This has caused me to not get the session at all, as the clash between the panels does not stop. However, if I could somehow use the terminal from another user account to change the properties of one of the desktop panels, it would solve the problem.
|
Do the following:
vim /home/<username>/.gconf/apps/panel/toplevels/bottom_panel/%gconf.xml
or
vim /home/<username>/.gconf/apps/panel/toplevels/top_panel/%gconf.xml
If you renamed your panel, change top_panel or bottom_panel accordingly.
Look for the orientation section:
<entry name="orientation" mtime="1356417211" type="string">
<stringvalue>bottom</stringvalue>
</entry>
Change bottom to top, left or right.
| How to change the properties of a desktop panel from the command line? |
1,424,365,645,000 |
I used to use Up/Down to move through the history of commands.
Then, a few days later it changed to Ctrl-p/Ctrl-n.
Now this also doesn't work to move through the history of commands entered.
How can I view all these settings or change them?
I tried to see the terminal setting by giving the command stty but it was of no help.
I searched through Google and found something called bindkey. I hope I am moving in the right direction.
I am not the root user, anyway I would like to know more about this even if could do nothing about it.
KORN SHELL
**OS Info :**
rcihp145 :/home/msingh2> uname -a
HP-UX rcihp145 B.11.23 U 9000/800 3683851961 unlimited-user license
|
You are using ksh (the Korn shell). This shell is fairly primitive in terms of command line capabilities, but do check the "key bindings" or "line editing" section of its manual page to see what your version of ksh can do.
History navigation with Ctrl+P and Ctrl+N works in all ksh versions that I know of. They might be disabled in a configuration file; look in ~/.kshrc to see what has been configured.
There are shells with better and more configurable line editing capabilities: zsh, and the more popular but less powerful bash. bindkey is a zsh command, and bind is its bash equivalent.
| Moving through history of commands on command line? |
1,424,365,645,000 |
I have about 15 instances of screen running on my linux server. They are each running processes I need to monitor. I had to close terminal (hence the reason I launched screen).
Is there a way to reopen all 15 instances of Screen in different tabs without having to open a new tab, login to the server, print all the available screens to resume, and then type in the id for each screen session?
|
This python script just did the job for me. I made three screen sessions and this fires up three xterms with the sessions reattached in each. It's a bit ugly but it works.
#! /usr/bin/env python
import os

if __name__ == '__main__':
    tempfile = '/tmp/screenList'
    # capture the ids of all detached screen sessions
    os.system('screen -ls | grep Det | cut -d . -f 1 > ' + tempfile)
    f = open(tempfile, 'r')
    screenIds = f.readlines()
    f.close()
    screenIds = [x.strip() for x in screenIds]
    for eachId in screenIds:
        cmdLine = 'xterm -e screen -r ' + eachId + ' &'
        os.system(cmdLine)
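The same thing can be sketched directly in the shell, without the temporary file (xterm is an assumption here; substitute your terminal emulator):

```shell
# Reattach every detached screen session in its own xterm window.
# The first awk field of each "screen -ls" session line is the session id.
screen -ls 2>/dev/null | awk '/Detached/ {print $1}' | while read -r id; do
    xterm -e screen -r "$id" &
done
```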
| How to resume multiple instances of Screen from command line with minimal steps? |
1,424,365,645,000 |
There was such a tool, but I cannot remember its name. I needed to configure the precedence of addresses via /etc/gai.conf. I finally managed to find the error, but for the future: what's the name of the tool which displays the addresses of a hostname the way getaddrinfo(3) returns them?
|
I know there is a tool resolveip for this that comes with MySQL. It should also be dead-simple to write something with e.g. Python or Perl...
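On glibc systems there is also getent, whose "ahosts" database goes through getaddrinfo(3) and therefore reflects /etc/gai.conf ordering:

```shell
# Print the addresses for a hostname in the order getaddrinfo(3) returns them
getent ahosts localhost
```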
| Testing precedence of resolving addresses from commandline |
1,424,365,645,000 |
I recently installed Fedora 36. I have a script that plays certain sound files.
The script was used under Ubuntu 20.04 before, showing the expected behaviour.
Inside the script, I use the following command:
paplay --volume=65536 -d alsa_output.pci-0000_33_00.6.HiFi__hw_Generic_1__sink ~/soundfiles/notification.wav
On Ubuntu, this led to the notification being played on maximum volume due to the --volume=65536 setting, but since I switched to Fedora, this setting is no longer having any effect. No matter what value I give (even lower ones), the notification sound will always play with the current default system volume.
I tried with canberra-gtk-play, but that shows the same behaviour: no matter if I try canberra-gtk-play -f ~/soundfiles/notification.wav --volume=5 or canberra-gtk-play -f ~/soundfiles/notification.wav --volume=10, the sound will always play on the default system volume level.
Anybody got any idea why that might be?
|
I had the same issue but found this thread, and switched to pw-play. I realized something like this snippet works as expected:
pw-play --volume=0.5 ~/soundfiles/notification.wav
| paplay: --volume option does not take effect |
1,424,365,645,000 |
I'm using neomutt (an updated fork of mutt) as my CLI MUA (read: mail reading software in the terminal) and have all my messages synced offline using isync/mbsync and stored in the maildir-format on my Debian Stable system.
Sometimes I want to reply to a message and attach another email (e.g. as a reference). This can be easily done when using the maildir-storage format since all messages are separate files; I just need to attach the file in my local folder. The problem is that I have difficulties finding the email files.
Obviously I can search through all of my messages (e.g. by using mu, which is my mail indexer) and then attach it, but this is tedious. It would be a lot easier to just display the path and filename somewhere when I read an email, optimally in my pager within neomutt.
But despite looking for a solution, I wasn't able to find that. Any ideas or workarounds?
|
I just stumbled onto a solution: before sending the email, instead of pressing a to attach a file you can use A to select a mail folder, then using t to tag one (or multiple) messages and then attach the selected messages using Return.
| Display name and/or path of currently viewed email in mutt/neomutt |
1,424,365,645,000 |
I'm trying to setup mailx to use my Gmail account. I've found a configuration that can send mail successfully but it requires me to store my email password in a configuration file in my home directory. I would like to be prompted for the password every time rather than storing it.
I've tried leaving out the smtp-auth-password field where the password is present but the program does not prompt for a password and instead gives me this error: User and password are necessary for SMTP authentication.
Is there any way configure mailx such that my email password is used in a secure manner?
Here is my mailx config file:
account gmail {
set folder=imap://(removed)@imap.gmail.com
set smtp-use-starttls
set ssl-verify=ignore
set smtp=smtp://smtp.gmail.com:587
set smtp-auth=login
set from=(removed)@gmail.com
set smtp-auth-user=(removed)
set smtp-auth-password=(removed)
set nss-config-dir=~/.certs
}
|
Which version of mailx are you using?
heirloom-mailx 12.5 on Ubuntu 14.04 prompts me for the password every time if there's no smtp-auth-password setting in ~/.mailrc. This feature was added in 12.0 in March 2006 according to ChangeLog.
| Using mailx without storing a password |
1,424,365,645,000 |
I have an input RTSP stream that I want to apply the "cartoon" gradient filter to before streaming on http. I've managed to stream and apply the filter to the local playback, but the http stream does not have the filtering applied.
cvlc -vvv input_stream rtsp://10.217.12.20:554/axis-media/media.amp?videocodec=h264 --video-filter "gradient{type=1}" --sout '#duplicate{dst=http{vfilter=gradient{type=1},mux=ffmpeg{mux=flv},dst=:8080/coffeecam},dst=display}'
|
I managed to get this to work:
cvlc -vvv --daemon --pidfile ./coffee_stream.pid rtsp://10.217.112.30:554/axis-media/media.amp?videocodec=h264 --sout="#transcode{vfilter=gradient{type=1},vcodec=theo,acodec=vorb,vb=800,ab=128}:standard{access=http{mime=video/ogg},mux=ogg,dst=:8091}"
| How do I re-stream a filtered video stream using VLC? |
1,353,826,112,000 |
Is there a lightweight virtualized Linux or other unix environment that I can run on the iPad? Like VirtualBox for iPad. I only really need a minimal system — something along the lines of Microcore Linux, so no X server or anything like that. Just a console with a reasonable C compiler; if gcc is not available, tcc (Tiny C Compiler) or something like it would be fine.
I'd really like it to be virtualized so that I don't inadvertently mess up my iPad by playing with the linuxbox inside.
|
While this might be feasible, it is very unlikely to happen even on a jailbroken iPad, and extremely unlikely on a non-jailbroken device.
Get a Linux VPS or a system you can SSH into, and install iSSH on your iPad; it's as close as you can get to Linux-on-iPad.
| Virtual unix command-line environment on the iPad |
1,353,826,112,000 |
I was wondering how to restrict access to a specific drive in Unix on the Mac. I was thinking to do this in Terminal where I create a file like this mkfile 6k secure_access. And where secure_access will be on the external drive and will only allow a specific user to be allowed to access the drive, thereby preventing other users from accessing it.
|
You can create an encrypted disk image on your external drive, which will require a password to mount. As long as you don't give out the password, other users will not be able to mount the image. See: http://support.apple.com/kb/ht1578 for details.
Basically, you use Disk Utility located in the Applications -> Utilities folder to create a new disk image, give it a name, a size, and then in the Encryption dialog, select an encryption type.
Disk Utility will then prompt you for a password which you must use to mount the disk image.
| Restricting access to files on an external drive |
1,353,826,112,000 |
Please do not ask why, but is it possible to do it?
P.S. I know it's not a good thing; let's just say someone from top management who is computer illiterate wants some sort of control over the server.
|
Don't do that... you can either give them root's password or you could execute sudo passwd root (this assumes that sudo is set to use the users password or no password, and that passwd is a command that sudo has authorized to be run by that user).
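If the requirement really cannot be avoided, the narrowest sudo approach is a sudoers rule restricted to that single command. A hypothetical entry (the username is made up; always edit with visudo):

```
alice ALL = (root) /usr/bin/passwd root
```

Even so, be aware that anyone who can set root's password effectively is root, so this still grants full control of the machine.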
| How to give a normal user permission to change root password |
1,353,826,112,000 |
Let's say I have a command accepting a single argument which is a file path:
mycommand myfile.txt
Now I want to execute this command over multiple files in parallel, more specifically, file matching pattern myfile*.
Is there an easy way to achieve this?
|
With GNU xargs and a shell with support for process substitution
xargs -r -0 -P4 -n1 -a <(printf '%s\0' myfile*) mycommand
Would run up to 4 mycommands in parallel.
If mycommand doesn't use its stdin, you can also do:
printf '%s\0' myfile* | xargs -r -0 -P4 -n1 mycommand
Which would also work with the xargs of modern BSDs.
For a recursive search for myfile* files, replace the printf command with:
find . -name 'myfile*' -type f -print0
(-type f is for regular-files only. For a glob-equivalent, you need zsh and its printf '%s\0' myfile*(.)).
| Execute command on multiple files matching a pattern in parallel |
1,353,826,112,000 |
I have 2 files:
$ cat file1
jim.smith
john.doe
bill.johnson
alex.smith
$ cat file2
"1/26/2017 8:02:01 PM",Valid customer,jim.smith,NY,1485457321
"1/30/2017 11:09:36 AM",New customer,tim.jones,CO,1485770976
"1/30/2017 11:14:03 AM",New customer,john.doe,CA,1485771243
"1/30/2017 11:13:53 AM",New customer,bill.smith,CA,1485771233
I want from file2 all the names that do not exist in file1.
The following does not work:
$ cut -d, -f 3 file2 | sed 's/"//g' | grep -v file1
jim.smith
tim.jones
john.doe
bill.smith
Why the pipe to grep -v does not work in this case?
|
This is virtually the last step in my answer to your earlier question.
Your solution works, if you add -f in front of file1 in the grep:
$ cut -d, -f3 file2 | grep -v -f file1
tim.jones
bill.smith
With the -f, grep will look in file1 for the patterns. Without it, it will simply use file1 as the literal pattern.
You might also want to use -F since otherwise, the dot in the pattern will be interpreted as "any character". And while you're at it, put -x in there as well to make grep perform the match across the whole line (will be useful if you have a joe.smith that shouldn't match joe.smiths):
$ cut -d, -f3 file2 | grep -v -F -x -f file1
This requires, obviously, that there are no trailing spaces at the end of the lines in file1 (which there seems to be in the text in the question).
Note that the sed is not needed since the output of the cut doesn't contain any ". Also, if you had needed to remove all ", then tr -d '"' would have been a better tool.
| Piping sed to grep does not seem to work as expected |
1,353,826,112,000 |
I'm trying to use find to return all file names that have a specific directory in their path, but don't have another specific directory anywhere in the file path. Something like:
myRegex='<regex>'
targetDir='<source directory>'
find "$targetDir" -regex "$myRegex" -print
I know I might also be able to do this by piping one find command into another, but I would like to know how to do this with a single regular expression.
For example, I want every file that has the directory "good" in it's path, but doesn't have the directory "bad" anywhere in its path no matter the combination. Some examples:
/good/file_I_want.txt #Captured
/good/bad/file_I_dont_want.txt #Not captured
/dir1/good/file_I_want.txt #Captured
/dir2/good/bad/file_I_dont_want.txt #Not captured
/dir1/good/dir2/file_I_want.txt #Captured
/dir1/good/dir2/bad/file_I_dont_want.txt #Not captured
/bad/dir1/good/file_I_dont_want.txt #Not captured
Keep in mind some file names might contain "good" or "bad", but I only want to account for directory names.
/good/bad.txt #Captured
/bad/good.txt #Not captured
My research suggests I should use a Negative Lookahead and a Negative Lookbehind. However, nothing I have tried has worked so far. Some help would be appreciated. Thanks.
|
As Inian said, you don't need -regex (which is non standard, and the syntax varies greatly between the implementations that do support -regex¹).
You can use -path for that, but you can also tell find not to enter directories called bad, which would be more efficient than discovering every file in them for later filtering them out with -path:
LC_ALL=C find . -name bad -prune -o -path '*/good/*.txt' -type f -print
(LC_ALL=C so find's * wildcard doesn't choke on filenames with sequence of bytes not forming valid characters in the locale).
Or for more than one folder name:
LC_ALL=C find . '(' -name bad -o -name worse ')' -prune -o \
'(' -path '*/good/*' -o -path '*/better/*' ')' -name '*.txt' -type f -print
With zsh, you can also do:
set -o extendedglob # best in ~/.zshrc
print -rC1 -- (^bad/)#*.txt~^*/good/*(ND.)
print -rC1 -- (^(bad|worse)/)#*.txt~^*/(good|better)/*(ND.)
Or for the lists in arrays:
good=(good better best)
bad=(bad worse worst)
print -rC1 -- (^(${(~j[|])bad})/)#*.txt~^*/(${(~j[|])good})/*(ND.)
To not descend into dirs called bad, or (less efficient like with -path '*/good/*' ! -path '*/bad/*'):
print -rC1 -- **/*.txt~*/bad/*~^*/good/*(ND.)
In zsh -o extendedglob, ~ is the except (and-not) globbing operator while ^ is the negation operator and # is 0-or-more-of-the-preceding-thing like regexp *. ${(~j[|])array} joins the elements of the array with |, with that | being treated as a glob operator instead of a literal | with ~.
In zsh, you'd be able to use PCRE matching after set -o rematchpcre:
set -o rematchpcre
regex='^(?!.*/bad/).*/good/.*\.txt\Z'
print -rC1 -- **/*(ND.e['[[ $REPLY =~ $regex ]]'])
But that evaluation of shell code for every file (including those in bad directories) is likely to make it a lot slower than other solutions.
Also beware that PCRE (contrary to zsh globs) would choke on sequences of bytes that don't form valid characters in the locale, and doesn't support multi-byte charsets other than UTF-8. Fixing the locale to C like for find above would address both for this particular pattern.
If you'd rather [[ =~ ]] only does extended regexp matching like in bash, you can also instead just load the pcre module (zmodload zsh/pcre) and use [[ -pcre-match ]] instead of [[ =~ ]] to do PCRE matching.
Or you could do the filtering with grep -zP (assuming GNU grep or compatible):
regex='^(?!.*/bad/).*/good/.*\.txt\Z'
find . -type f -print0 |
LC_ALL=C grep -zPe "$regex" |
tr '\0' '\n'
(though find still discovers all files in all bad directories).
Replace tr '\0' '\n' with xargs -r0 cmd if you need to do anything with those files (other than printing them one per line).
¹ In any case, I don't know any find implementation that supports perl-like or vim-like regular expressions which you'd need for look-around operators.
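For completeness, the less efficient -path-only variant mentioned above (filter with -path '*/good/*' ! -path '*/bad/*', without pruning) can be sketched against a throwaway tree in a hypothetical scratch directory:

```shell
# Build a small throwaway tree matching the question's examples
tmp=$(mktemp -d)
mkdir -p "$tmp/good/bad" "$tmp/bad/dir1/good"
touch "$tmp/good/want.txt" \
      "$tmp/good/bad/unwanted.txt" \
      "$tmp/bad/dir1/good/unwanted.txt"

# Must be under a good/ directory and not under a bad/ directory
find "$tmp" -type f -name '*.txt' -path '*/good/*' ! -path '*/bad/*'
# → only $tmp/good/want.txt

rm -rf "$tmp"
```

Unlike the -prune version, this still descends into every bad directory before discarding its files.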
| Find: Use regex to get all files with specific directory name in path, but without another specific directory name in path |
1,353,826,112,000 |
I wonder what is a simpler way to do this:
awk 'NR > 1 {print $1"\t"$2"\t"$3"\t"$4"\t"$5"\t"$6"\t"$7"\t"$8"\t"$9$10$11$12$13$14$15$16}' file.in > file.out
which is, simply speaking, "concatenate columns 9 to 16 by removing the tabs in between".
Merged columns 9-16 become "Notes", so they may include whitespace.
As of today there are 16 columns, but this may evolve to more or fewer if required. Eventually column 9 (the concatenation of 9-16) becomes the "notes" field.
Cheers,
Xi
|
paste <(cut -f 1-8 file) <(cut -f9- file | tr -d '\t')
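A quick sketch of the command in action on a hypothetical two-note sample (process substitution requires bash or zsh; columns 9-10 stand in for 9-16):

```shell
# Hypothetical 10-column tab-separated sample file
printf 'a\tb\tc\td\te\tf\tg\th\tnote1\tnote2\n' > file

# Columns 1-8 pass through untouched; columns 9+ are re-joined
# with their internal tabs deleted, then pasted back on as one field
paste <(cut -f 1-8 file) <(cut -f9- file | tr -d '\t')
# → columns 1-8 unchanged, then "note1note2" as a single ninth column
```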
| awk / sed / etc. concatenating colums in one file |
1,353,826,112,000 |
I was reading a book published in 2018 titled "Linux Basics for Hackers: Getting Started with Networking, Scripting, and Security in Kali" from no starch press.
And this was written there that you can move up as many levels as you want using the corresponding number of double dots separated by spaces:
You would use .. to move up one level
You would use .. .. for two levels
You would use .. .. .. to move up three levels, and so on.
So, for example, to move up two levels, enter cd followed by two sets of double dots with a space in between.
This is the page from the book:
Did that ever work? It does not work in 2020.
|
This is an error in the book which the publisher addresses in the "Updates" section on the book's "homepage" (https://nostarch.com/linuxbasicsforhackers#updates):
Updates
Page 7
The following text regarding moving up through directory levels is incorrect:
You would use .. to move up one level.
You would use .. .. to move up two levels.
You would use .. .. .. to move up three levels, and so on.
This text should read:
You would use .. to move up one level.
You would use ../.. to move up two levels.
You would use ../../.. to move up three levels, and so on.
The errata does not mention the example that you also quote, which shows cd .. .., but this is obviously also wrong.
Some shells support a cd command with two arguments, where the second argument replaces whatever matches the first argument in the pathname of the current working directory, and the resulting pathname is changed into. But the pathname of current directory, as found by pwd and in $PWD, would not contain .., and even if it did, the cd .. .. command would not change directory at all (given the semantics that I just described).
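A minimal demonstration that the slash-separated form climbs two levels in one command (assuming a throwaway scratch tree; the space-separated form from the book is shown only as a comment, since in most shells it is an error):

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/a/b/c"
cd "$tmp/a/b/c"

cd ../..    # one command, slash-separated: up two levels
pwd         # → $tmp/a

# cd .. ..  # by contrast: "too many arguments" in bash, or a failed
#           # old/new substitution in shells with two-argument cd

cd / && rm -rf "$tmp"
```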
| Has "cd .. .. .." ever worked for going up 3 directories? |
1,353,826,112,000 |
The actual data is:
Dolibarr techpubl http://techpublications.org/erp
tekstilworks.com WordPress tekstilw
wbq.dandydesigns.co WordPress cbeqte
WordPress cbeqte http://wbq.dandydesigns.co
WordPress cbeqte http://qbd.dandydesigns.co
WordPress cbeqte http://uqdq.dandydesigns.co
dandydesigns.co WordPress cbeqte
stunlockers.info WordPress nmmuop
What I want to get:
tekstilworks.com WordPress tekstilw
wbq.dandydesigns.co WordPress cbeqte
dandydesigns.co WordPress cbeqte
stunlockers.info WordPress nmmuop
|
Using awk:
awk '$1 ~ /\./' input-file-here
The period in the awk expression has to be escaped with a backslash so that it's not treated as a regular expression syntax.
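Fed a few of the question's lines, the filter keeps only records whose first field contains a literal dot — note that a URL in a later field does not cause a match:

```shell
printf '%s\n' \
  'Dolibarr techpubl http://techpublications.org/erp' \
  'tekstilworks.com WordPress tekstilw' \
  'WordPress cbeqte http://wbq.dandydesigns.co' |
awk '$1 ~ /\./'
# → tekstilworks.com WordPress tekstilw
```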
| Extract a line if the first field contains a dot |
1,353,826,112,000 |
So I have this command:
ps ax | grep apache | awk '{ print "cat /proc/"$1"/status | grep State" }'
which outputs something like
cat /proc/9989/status | grep State
cat /proc/9992/status | grep State
cat /proc/9993/status | grep State
cat /proc/9994/status | grep State
But I'd love to go one step further and execute those lines. So I am missing something after the awk command to run its output, something like | exec or similar.
Is this possible?
|
In addition to piping the commands to a shell as icarus correctly said, awk can execute a shell command itself:
ps ax | grep apache | awk '{ system("cat /proc/"$1"/status | grep State") }'
But you don't need cat because grep can read a file as RomanPerekhrest quietly showed:
ps ax | grep apache | awk '{ system("grep State /proc/"$1"/status") }'
And you don't need the second grep because awk can read a file and match a regexp:
ps ax | grep apache | awk '{F="/proc/"$1"/status";
while((getline <F)>0) if(/State/) print; close(F)}'
# if maxfiles is enough you don't need the close(F)
nor the first one:
ps ax | awk '/apache/{F="/proc/"$1"/status";
while((getline <F)>0) if(/State/) print; close(F)}'
# ditto
But you don't really need to look at /proc because ps already outputs the state, albeit abbreviated:
ps ax | awk '/apache/{print $3}'
# or print $1,$3 to include PID like Kusalananda's ps-based answer
# or better check $5~/apache/ to avoid including processes that
# aren't _running_ apache but have apache in their _arguments_
| command | grep | awk | ..... how to execute |
1,353,826,112,000 |
I'm trying to return a boolean if a curl response has a status of 200.
curl https://www.example.com -I | grep ' 200 ' ? echo '1' : echo '0';
This however brings back:
grep: ?: No such file or directory
grep: echo: No such file or directory
grep: 1: No such file or directory
grep: :: No such file or directory
grep: echo: No such file or directory
grep: 0: No such file or directory
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0
So I'm thinking (and from reading other threads) that grep doesn't support the ternary operator?
Is there a simple one liner to do this without creating a shell script? (also my grep ' 200 ' is loose but I figure I can make that more specific later)
|
Like a quick study of the grep manual page should reveal, it allows for options, a search pattern (multiple patterns if you specify multiple -e options), and an optional list of file names, falling back to reading standard input like many other Unix filters if no file names are specified. There is nothing about a ternary operator.
If it were supported, it would arguably need to be a feature of the shell, but no shell I know of supports anything like this. It would also clash with the syntax for ? as a wildcard metacharacter which (in isolation) matches any single-character file name.
The usual shorthand idiom in Bourne-compatible shells looks like
curl https://www.example.com/ -I | grep -q ' 200 ' && echo 1 || echo 0
This could accidentally match on a "200" somewhere else in the output. For improved accuracy, try curl https://www.example.com/ -I | head -1 | grep ' 200 ' or perhaps curl https://www.example.com/ -I | awk 'NR==1 { print ($2=="200" ? "1" : "0"); exit }' where the precision of the matching operation is significantly improved.
As an afterthought, if you want to print "0" in the case of invalid URLs and other unexpected errors too, you could set the default output to 0 and change it to 1 when you see 200, then print whatever it ended up as at the end.
curl https://www.example.com/ -I |
awk 'BEGIN { s=0 } NR==1 { s=($2=="200"); exit } END { print s }'
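An alternative sketch: curl's -w '%{http_code}' option prints the numeric status directly, so you can compare it instead of grepping headers. check_status is a hypothetical helper name; the curl line itself is left as a comment since it needs network access:

```shell
# Print 1 if the given status code is exactly 200, else 0
check_status() {
    [ "$1" = 200 ] && echo 1 || echo 0
}

# Real use (needs network):
#   code=$(curl -s -o /dev/null -w '%{http_code}' https://www.example.com/)
#   check_status "$code"

check_status 200   # → 1
check_status 404   # → 0
```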
| Grep Ternary Curl Reponse [duplicate] |
1,353,826,112,000 |
Here's a layup for someone...
I'm having to run a command repeatedly:
$ wp input csv MyCSV01.csv directory_name
$ wp input csv MyCSV02.csv directory_name
$ wp input csv MyCSV03.csv directory_name
The only change is the filename is incrementing. How can I run all these back to back?
Perhaps find all the files that start with MyCSV* and then run them in order? And/or specify a range of the files to run MyCSV03.csv through MyCSV05.csv?
Ideally, the solution is short enough for the command line, but it could be a script.
|
for i in {01..20}; do #replace with your own range
echo \
wp input csv "MyCSV$i.csv" directory_name
done
Comment out the echo line if it gives you the results you want.
zsh, which you tagged your question with, has a shorter form:
for i (MyCSV{01..20}.csv) wp input csv $i directory_name
Or you could use its zargs function:
autoload zargs # best in ~/.zshrc
zargs -i MyCSV{01..20} -- wp input csv {} directory_name
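The {01..20} ranges above rely on zero-padded brace expansion (bash 4+ or zsh); a quick sanity check of the names it generates:

```shell
# Zero-padded brace expansion produces the file names in order
echo MyCSV{01..03}.csv
# → MyCSV01.csv MyCSV02.csv MyCSV03.csv
```

In a plain POSIX sh the braces are left literal, so run this under bash or zsh.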
| Repeating command with different filenames |
1,353,826,112,000 |
Is there a way in Linux to make the sudo command remember the password the user entered for the first of the lines?
For example, for a list of commands that the user has to enter, some of them requiring a sudo prefix, how can one make sure that if the user copy+pastes the instructions into a terminal all in one go, they are only asked for the password once?
Example:
mkdir ~/acpiinfo ; cd ~/acpiinfo
sudo acpidump > acpidump.txt
# enter password
sudo acpixtract acpidump.txt
ls *.dat | while read i; do iasl -d "${i}"; done
pid=`sudo dmidecode -s system-product-name`
vid=`sudo dmidecode -s system-version`
name=$pid.$vid
mkdir "${name}" && cp *.dsl "${name}"/
tar czf "${name}.tar.gz" "${name}"/ && ls -l "$( pwd )/${name}".tar.gz
|
Double sudo is not necessary:
sudo sh -c "apt-get update && apt-get -y dist-upgrade && apt-get -y autoremove && apt-get autoclean"
This works fine even if one command can take very long.
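Two related sketches for the original acpidump list: sudo -v (a real sudo flag) validates and refreshes the cached credential timestamp up front, so the later sudo lines reuse it; alternatively, group the privileged commands under a single sudo sh -c. The grouping is demonstrated here with echo so the sketch runs unprivileged:

```shell
# Ask for the password once up front; later sudo calls reuse the cache
# (within sudo's timestamp_timeout):
#   sudo -v
#   sudo acpidump > acpidump.txt
#   sudo acpixtract acpidump.txt

# Or run the privileged part as one sh -c invocation —
# shown with echo stand-ins so it runs without sudo:
sh -c 'echo step1 && echo step2'
# → step1
# → step2
```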
| sudo to remember password for list of commands? |