1,304,454,439,000
I typically have gnome-terminal open with ~8 tabs, using 2 consecutive tabs for the same task (one has emacs, the other is used for git checkins, unittest runs and so on). When changing tasks, I need to move to a new directory - in both tabs. How can I change the working directory of the second tab to that of the first tab, with as few steps as possible? Preferably keyboard only.
Here is a work-around: on one tab, record the CWD into a temp file; on the other tab, cd to the just-saved dir. I would put these two aliases in my .bashrc or .bash_profile:

alias ds='pwd > /tmp/cwd'
alias dr='cd "$(</tmp/cwd)"'

The ds (dir save) command marks the CWD, and the dr (dir recall) command cds to it. You can do something similar for C-shell.
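A runnable sketch of the same idea, written as shell functions so it also works non-interactively (aliases are only expanded in interactive shells, and cat replaces the bash-only $(<file) here); the /tmp/demo_* paths are invented for the demo:

```shell
# "dir save": record the current directory in the scratch file
ds() { pwd > /tmp/cwd; }
# "dir recall": cd to whatever was last saved
dr() { cd "$(cat /tmp/cwd)"; }

# Demo: save in one place (tab 1), recall from another (tab 2)
mkdir -p /tmp/demo_a /tmp/demo_b
cd /tmp/demo_a && ds
cd /tmp/demo_b && dr
pwd    # -> /tmp/demo_a
```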
Change working directory of 2 terminals at once
1,304,454,439,000
Suppose I have the directories:

/foo/
/A/B/C/foo/D/E/
/F/foo/G/H/foo/I/

How can I get a result that lists all the directories whose basename exactly matches a given string (for example foo here)?

/foo/
/A/B/C/foo/
/F/foo/
/F/foo/G/H/foo/
You can use find: $ find / -type d -name foo -print
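For instance, recreating the question's tree under a scratch root (an assumption for the demo; substitute / or your real starting directory for $root):

```shell
# Build the example layout in a temporary directory
root=$(mktemp -d)
mkdir -p "$root/foo" "$root/A/B/C/foo/D/E" "$root/F/foo/G/H/foo/I"

# -type d -name foo matches directories whose basename is exactly "foo",
# wherever they sit in the path; prints the four foo directories.
find "$root" -type d -name foo
```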
What is the best way to find directories that exactly match a string irrespective of the path?
1,304,454,439,000
As a Linux user, I spend a lot of time mucking about in bash. I've always been a bit frustrated by how poor the readability is. Today this annoyed me enough to spur me into action. My immediate problem was solved by this snippet, which changes the directory colour to magenta. (Annoyingly, this caused me to lose all the other ls colours.) So I started wondering, what other tweaks exist to make bash a bit more readable?
Maybe Solarized? http://ethanschoonover.com/solarized There are a number of projects that do similar terminal colorization schemes with an eye to usability.
How can I make bash more readable? [closed]
1,304,454,439,000
I'm seeing this in my .bashrc file:

PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ '

and I have absolutely no idea what all those escape codes mean.
There are three kinds of escape codes in there: bash parameter expansion, bash prompt expansion, and terminal escape codes. ${debian_chroot:+($debian_chroot)} means “if $debian_chroot is set and non-empty, then ($debian_chroot), else nothing”. (See /etc/bash.bashrc for how debian_chroot is defined. As the name indicates this is a Debian thing.) The backslash escapes are prompt escapes. \u is replaced by the user name, \h is replaced by the machine name, and so on (see the manual for a list). Parts within \[…\] are terminal escapes; the brackets tell bash that these parts don't take any room on the screen (this lets bash calculate the width of the prompt). \033 is the ESC character (character number 033 octal, i.e. 27 decimal, sometimes written \e or ^[); it introduces terminal escape sequences. ESC [ codes m (written CSI Pm m in the xterm control sequences list) changes the color or appearance of the following text. For example the code 1 switches to bold, 32 switches the foreground color to green, 0 switches to default attributes.
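As a quick check that those codes do what the answer describes, you can print the same sequences directly; outside a prompt no \[…\] wrapping is needed, since nothing has to measure the prompt width (the sample text is invented):

```shell
# ESC [ 01;32 m = bold green, ESC [ 00 m = reset, ESC [ 01;34 m = bold blue,
# mirroring the user@host and \w parts of the PS1 in the question.
printf '\033[01;32muser@host\033[00m:\033[01;34m~/src\033[00m$ \n'
```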
Understanding escape codes
1,304,454,439,000
g++ -Wall -I/usr/local/include/thrift *.cpp -lthrift -o something This is from the Apache Thrift website. Also is the -I/usr supposed to be -I /usr?
Here is a breakdown of the command. First the original command, for reference:

g++ -Wall -I/usr/local/include/thrift *.cpp -lthrift -o something

Now, for the breakdown.

g++
This is the actual command, g++. It is the program being executed. Here is what it is, from the man page: "gcc - GNU project C and C++ compiler". It takes C or C++ code and turns it into a program, basically.

-Wall
This makes it display all warnings when compiling ("Warn All").

-I/usr/local/include/thrift
This tells g++ to use /usr/local/include/thrift as a directory to get header files from. As for the question about whether to put a space after the -I: you can do it either way. The way the options are parsed (options are the things after - signs; -Wall and -I are options) allows you to put a space or not. It depends on your personal preference.

*.cpp
This passes every .cpp file in the current directory to the g++ command.

-lthrift
This can also be -l thrift. It tells g++ to link against the thrift library.

-o something
This tells it that when everything is compiled, the executable should be placed in the file something.
What does this Linux command do?
1,304,454,439,000
I have a file like this:

x = {
    y = {
        z = {
            block = {
                line1
                line2
                line3
            }
        }
    }
}
x2 = {
    y2 = {
        block = {
            line4
            line5
        }
    }
}
xyz
block = {
    line6
}

and so on. I need to reverse lines within block but everything else should stay in order, like this:

x = {
    y = {
        z = {
            block = {
                line3
                line2
                line1
            }
        }
    }
}
x2 = {
    y2 = {
        block = {
            line5
            line4
        }
    }
}
xyz
block = {
    line6
}

How do I do that with sed or awk? I managed to get the blocks reversed but I can't fit them back into place:

sed -n '/block = {.$/,/}$/p' inputfile | tac
If you are okay with Perl: perl -ne 'if($f && /}/){$f=0; print @blk}; $f ? unshift(@blk, $_) : print; if(/block = {/){$f=1; @blk=()}' ip.txt if(/block = {/){$f=1; @blk=()} initializes @blk array and sets the flag $f if input line contains block = { $f ? unshift(@blk, $_) : print if the flag is active, insert input line at the front of the @blk array, otherwise print the input line if($f && /}/){$f=0; print @blk} if the flag is active and input line contains }, then unset the flag and print the contents of @blk array With awk: awk 'f && /}/{f=0; for(i=c; i>=1; i--) print blk[i]} {if(f) blk[++c] = $0; else print} /block = {/{f=1; c=0}' ip.txt
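To sanity-check the awk version, you can feed it a small sample (the file path and contents here are invented, pared down from the question):

```shell
# One block nested inside one outer brace pair
printf 'x = {\n block = {\n  line1\n  line2\n  line3\n }\n}\n' > /tmp/blocks.txt

# Same awk as in the answer: buffer lines while inside a block,
# dump them in reverse when the closing } arrives.
awk 'f && /}/{f=0; for(i=c; i>=1; i--) print blk[i]}
     {if(f) blk[++c] = $0; else print}
     /block = {/{f=1; c=0}' /tmp/blocks.txt
```

The output keeps the braces in place and prints line3, line2, line1 inside the block.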
How to reverse lines inside repeated blocks in a text file (using sed/awk?)
1,304,454,439,000
I just learned about brace expansion and hoped I could make use of it to launch the same C++ program with different command line arguments. My code is run like this from the terminal:

mpirun -n 1 main.exe 1 10 0.1 1 5

The numbers after main.exe are the input arguments of my program. I would like to do something like this:

mpirun -n 1 main.exe 1 10 {0.1,0.2} 1 5

where I expect the code to be run twice, once with 0.1 and once with 0.2 as the third argument. Why does it not work and how can I fix it?
No, that won't work. Brace expansions are expanded when you run the command, but only the word containing the braces is affected; the expansion doesn't cause multiple commands to run. In your case, this:

mpirun -n 1 main.exe 1 10 {0.1,0.2} 1 5

will simply become this:

mpirun -n 1 main.exe 1 10 0.1 0.2 1 5

If you want to run it twice, with different values, you could do something like this instead:

for i in {0.1,0.2}; do
    mpirun -n 1 main.exe 1 10 "$i" 1 5
done

And that will run mpirun -n 1 main.exe 1 10 0.1 1 5 followed by mpirun -n 1 main.exe 1 10 0.2 1 5. Of course, in this specific case, the brace expansion is needlessly complicated and you should just do:

for i in 0.1 0.2; do
    mpirun -n 1 main.exe 1 10 "$i" 1 5
done

A useful trick for understanding this sort of thing is set -x (assuming you're using bash), which will show you what command is actually being executed:

$ set -x
$ mpirun -n 1 main.exe 1 10 {0.1,0.2} 1 5
+ mpirun -n 1 main.exe 1 10 0.1 0.2 1 5
[. . .]

You can turn it off again with set +x, but using this lets you see exactly how a complex command is expanded by the shell and check what is really being executed.
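With echo standing in for mpirun, so the expansion itself is visible without running anything (assumes bash or another shell with brace expansion):

```shell
# Brace expansion happens inside one command line:
echo main.exe 1 10 {0.1,0.2} 1 5
# -> main.exe 1 10 0.1 0.2 1 5        (one command, both values inline)

# A loop actually runs the command once per value:
for i in 0.1 0.2; do
    echo main.exe 1 10 "$i" 1 5
done
# -> main.exe 1 10 0.1 1 5
# -> main.exe 1 10 0.2 1 5
```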
Brace expansion to run program multiple times with different arguments
1,304,454,439,000
When I update my OS, the resolution suddenly drops to 1024 x 768. I have tried everything to change it back to 1920 x 1080, but nothing helps. Maybe someone knows how I can solve this problem?

In the terminal, screenfetch reports (ASCII logo omitted):

OS: Ubuntu 18.04 bionic
Kernel: x86_64 Linux 5.3.0-28-generic
Uptime: 47m
Packages: 2000
Shell: bash 4.4.20
Resolution: 1024x768
DE: GNOME
WM: GNOME Shell
WM Theme: Adwaita
GTK Theme: Ambiance [GTK2/3]
Icon Theme: ubuntu-mono-dark
Font: Ubuntu 11
CPU: Intel Core i7-7700 @ 8x 4.2GHz [36.0°C]
GPU: EFI
RAM: 1793MiB / 15977MiB

In the terminal, xrandr prints:

xrandr: Failed to get size of gamma for output default
Screen 0: minimum 1024 x 768, current 1024 x 768, maximum 1024 x 768
default connected primary 1024x768+0+0 0mm x 0mm
   1024x768   76.00*
   1920x1200_60.00 (0x2c1) 193.250MHz -HSync +VSync
        h: width 1920 start 2056 end 2256 total 2592 skew 0 clock 74.56KHz
        v: height 1200 start 1203 end 1209 total 1245 clock 59.88Hz
   1920x1080_60.00 (0x2c2) 193.250MHz -HSync +VSync
        h: width 1920 start 2056 end 2256 total 2592 skew 0 clock 74.56KHz
        v: height 1200 start 1203 end 1209 total 1245 clock 59.88Hz
   1920x1080R (0x2c3) 138.500MHz +HSync -VSync
        h: width 1920 start 1968 end 2000 total 2080 skew 0 clock 66.59KHz
        v: height 1080 start 1083 end 1088 total 1111 clock 59.93Hz

Running xrandr --addmode HDMI1 1920_1080_60.00 fails with:

xrandr: Failed to get size of gamma for output default
xrandr: cannot find output "HDMI1"

My /etc/default/grub (via sudo vi /etc/default/grub) contains:

GRUB_DEFAULT=0
GRUB_TIMEOUT_STYLE=hidden
GRUB_TIMEOUT=10
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
GRUB_CMDLINE_LINUX=""

In the image below I show that I use nvidia-driver-440.
I had the same issue with the same configuration. I went to the NVIDIA site (https://www.nvidia.in/Download/index.aspx?lang=en-in), downloaded the latest driver and installed it manually, and after a reboot my issue was solved.
Suddenly my screen resolution changed and I can't reset it
1,574,366,215,000
I have a script myscript.sh:

#!/bin/sh
echo $1 $2

which is used something like:

./myscript.sh foo bar

which gives me an output of:

foo bar

But how can I make this script accept custom command options? For example, ./myscript.sh -a foo corresponding to $1, and ./myscript.sh -b bar corresponding to $2.
Since you are listing bash as the shell used, type:

$ help getopts

which will print something like:

getopts: getopts optstring name [arg]
    Parse option arguments.

    Getopts is used by shell procedures to parse positional parameters as options.

    OPTSTRING contains the option letters to be recognized; if a letter is followed by a colon, the option is expected to have an argument, which should be separated from it by white space.

    Each time it is invoked, getopts will place the next option in the shell variable $name, initializing name if it does not exist, and the index of the next argument to be processed into the shell variable OPTIND. OPTIND is initialized to 1 each time the shell or a shell script is invoked. When an option requires an argument, getopts places that argument into the shell variable OPTARG.

    Getopts normally parses the positional parameters ($0 - $9), but if more arguments are given, they are parsed instead.

Related:
- Getopts tutorial
- getopts
- How to pass command line options
- How to use getopts in bash
- Getopt, getopts or manual parsing?
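A minimal getopts sketch using the asker's -a and -b options (the parse function name and the OPTIND reset are my additions; the reset matters if you parse more than once in the same shell):

```shell
parse() {
    OPTIND=1          # getopts resumes from OPTIND, so reset per call
    a_val= b_val=
    # "a:b:" means: accept -a and -b, each requiring an argument (OPTARG)
    while getopts 'a:b:' opt; do
        case $opt in
            a) a_val=$OPTARG ;;
            b) b_val=$OPTARG ;;
            *) return 1 ;;
        esac
    done
    echo "a=$a_val b=$b_val"
}

parse -a foo            # -> a=foo b=
parse -b bar            # -> a= b=bar
parse -a foo -b bar     # -> a=foo b=bar
```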
How to include custom command options in a bash script with $1 and $2?
1,574,366,215,000
I am documenting some commands for future usage; some of them are too long, and I want to document them over multiple lines for readability, and then just copy and paste them for usage. For example:

Raw:

openssl pkcs12 -export -in intermediate/certs/lala-lira.cert.pem -inkey intermediate/private/lala-lira.key.pem -out intermediate/private/lala-lira.pfx

Presentational:

openssl pkcs12 -export
    -in intermediate/certs/lala-lira.cert.pem
    -inkey intermediate/private/lala-lira.key.pem
    -out intermediate/private/lala-lira.pfx

The problem is that if I copy and paste the presentational form, each line will be interpreted as an individual and independent command.
End every line but the last with a backslash. To use your command as an example:

openssl pkcs12 -export \
    -in intermediate/certs/lala-lira.cert.pem \
    -inkey intermediate/private/lala-lira.key.pem \
    -out intermediate/private/lala-lira.pfx

What you are doing here is escaping the end-of-line, causing the shell to treat it as non-delimiting whitespace. Since the escape marker only has an effect upon the next character, the next character must be the end-of-line. (That means no trailing spaces allowed; beware!)
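A tiny demonstration of the same mechanism with echo: the escaped newlines leave one single command, so the pasted multi-line form behaves exactly like the one-line form:

```shell
# Three physical lines, one logical command
echo one \
    two \
    three
# -> one two three
```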
How to present a command in multiple lines for instant copy/paste usage?
1,574,366,215,000
I checked the od man page, but it does not explain this. What do the numbers on the left column of the od output mean?
This is actually mentioned in the info page for od (available by running info od or by visiting https://www.gnu.org/software/coreutils/manual/html_node/od-invocation.html#od-invocation, which is also linked to at the end of the man page), albeit not in very much depth:

Each line of output consists of the offset in the input, followed by groups of data from the file. By default, od prints the offset in octal, and each group of file data is a C short int's worth of input printed as a single octal number.

So, in your output, the numbers shown are octal 0000000, 0000020 and 0000030, which are decimal 0, 16 and 24. Note that the n of the word written is the 17th character (byte, here) of the file, therefore it can be found by beginning to read at an offset of 16, and the final newline is the 24th, so the next (empty) line of output starts with an offset of 24.
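You can verify the 16-bytes-per-line offsets with an input of known length (the 18-byte string here is invented for the demo):

```shell
# 18 bytes: 16 fill the first output line, so "q" and "r" start the
# second line at octal offset 0000020 (decimal 16), and the trailing
# offset-only line reads 0000022 (decimal 18, the file size).
printf 'abcdefghijklmnopqr' | od -c
```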
What are those numbers on the left of od output?
1,574,366,215,000
In *nix, if I don't have a mouse, nor am I running a GUI, what can I do to copy from what is on the screen? Take this for example: what if I want to copy things from "Entering /mnt/..." to the last "}"? Thanks for the answer "Read a character from an x-y coordinate on the screen", but something unique on my Chromebook is that I only have /dev/tty and /dev/tty8, and I don't have /dev/vcsN. What should I do?
In such circumstances, script is very handy: it runs a shell, recording all the output. In your example, before entering the chroot you'd run script temp_file.txt and then sudo enter-chroot etc. On exit from the chroot, you'd exit again to exit script, and you'd find the text you wanted (along with everything else you did) in temp_file.txt. Another possibility is to run your session within screen; that allows both saving the current "window" (in screen parlance) to a file (Ctrl+a followed by h by default; this dumps the contents of the screen to a file named hardcopy.n where n is a counter) and copying and pasting between windows (Ctrl+a followed by Esc by default will enter scrollback/copy mode; see the documentation for details).
How to copy from CLI without a GUI or a mouse
1,574,366,215,000
Here is my Port.sh file:

echo 'Give me a maximum of 5 seconds to run please.'
lsof -i | grep Xvnc | grep ESTABLISHED | grep $USER
lsof -i | grep $USER | grep Xvnc | grep -o -P '(?<=:).*(?=->)'
echo 'Brought to you by a cloudy and unlucky day.'
echo 'Press enter to exit'

I would like to make the terminal wait for the user to press enter after the last echo before closing. All of the users on my Ubuntu desktop environment have the terminal set to close after the script is done. Can anyone tell me what I can do to make the terminal wait for the user to press enter? I have searched Google and nothing relevant came up.
Add at the end of your script: read junk See Bash Manual for more info.
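To see read pausing the script, you can simulate the Enter keypress by piping a newline in (interactively, read simply blocks until the user presses Enter; the messages here are invented):

```shell
# The inner script only reaches "bye" after read consumes a newline
printf '\n' | sh -c 'echo "Press enter to exit"; read junk; echo "bye"'
# -> Press enter to exit
# -> bye
```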
How to Prompt the user to press Enter to Exit in terminal so that the terminal doesn't close automatically?
1,574,366,215,000
When I erroneously type 'ñ' (expecting to type some command) and then remove it and type the correct letter, the shell returns the command with a special character � attached; obviously the shell doesn't recognize the command and I must re-type it, being careful not to type the 'ñ' character again. E.g.:

Wrong typing: @tachomi:~$ ñs
Correct typing: @tachomi:~$ ls
Output: �ls: command not found

Why is this happening, since I removed the wrong character? How can I solve this? What I think is that characters like ñ, ' etc. are not compatible with the shell, and for this reason the "memory" keeps something it doesn't recognize, but I want to be sure why this is happening. I'm using the bash shell.
Thanks to all the help, I could find out how to fix this. The main issue is the UTF-8 encoding; the server didn't have it configured, as said in the comments. Quoting the comments:

[@Rmano]: In UTF-8, ñ is a two-bytes char
[@jimmij]: backspace character for some reason deletes only one of them
[@aecolley]: Try setting the environment variable LANG to C.UTF-8

This is fixed as follows. Find your current LANG:

$ locale -v | grep 'LANG='
LANG=en_US

Change it for the current session:

$ export LANG=en_US.UTF-8

or change it permanently:

$ sudo vim /etc/default/locale

editing it to read LANG="en_US.UTF-8". Then restart your terminal session.
BASH: �ls: command not found when typing 'ñ' by mistake
1,574,366,215,000
I have a folder with a lot of files. I want to copy all files whose names begin with one of these names (separated by spaces):

abc abd aer ab-x ate

to another folder. How can I do that?
With csh, tcsh, ksh93, bash, fish or zsh -o cshnullglob, you can use brace expansion and globbing to do that (-- is not needed for these filenames, but I assume they are just examples): cp -- {abc,abd,aer,ab-x,ate}* dest/ If you'd rather not use brace expansion, you can use a for loop (here POSIX/Bourne style syntax): for include in abc abd aer ab-x ate; do cp -- "$include"* dest/ done If you have a very large amount of files, this might be slow due to the invocation of cp once per include. Another way to do this would be to populate an array, and go from there (here ksh93, zsh or recent bash syntax): files=() includes=(abc abd aer ab-x ate) for include in "${includes[@]}"; do files+=( "$include"* ) done cp -- "${files[@]}" dest/
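A runnable sketch of the loop variant on invented file names (the scratch src/dest directories and the numeric suffixes are assumptions for the demo):

```shell
src=$(mktemp -d); dest=$(mktemp -d)
cd "$src"
touch abc1 abd2 aer3 ab-x4 ate5 other6   # other6 matches no prefix

# One cp per prefix, glob expands to every matching file
for include in abc abd aer ab-x ate; do
    cp -- "$include"* "$dest"/
done

ls "$dest"   # the five prefixed files; other6 is left behind
```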
copying files with particular names to another folder
1,574,366,215,000
Is it possible to run the passwd command with an option to show the newly entered passwords? By default it doesn't show what I type, and I don't want this.

[dave@hal9000 ~]$ passwd
Changing password for user dave.
Changing password for dave.
(current) UNIX password:
New password: bowman
Retype new password: bowman
passwd: all authentication tokens updated successfully.
If you really want to go this path and there's no passwd parameter, you can use this Expect script:

#!/usr/bin/env expect -f
set old_timeout $timeout
set timeout -1
stty -echo
send_user "Current password: "
expect_user -re "(.*)\n"
set old_password $expect_out(1,string)
stty echo
send_user "\nNew password: "
expect_user -re "(.*)\n"
set new_password $expect_out(1,string)
set timeout $old_timeout
spawn passwd
expect "password:"
send "$old_password\r"
expect "password:"
send "$new_password\r"
expect "password:"
send "$new_password\r"
expect eof

How it works:

[dave@hal9000 ~]$ ./passwd.tcl
Current password:
New password: bowman
spawn passwd
Changing password for user dave.
Changing password for dave.
(current) UNIX password:
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

This shell script might also work (tested on Fedora 20 with bash-4.2.47-2 and passwd-0.79-2):

#!/bin/sh
stty -echo
echo -n "Current password: "
read old_password
stty echo
echo
echo -n "New password: "
read new_password
passwd << EOF
$old_password
$new_password
$new_password
EOF

How it works:

[dave@hal9000 ~]$ ./passwd.sh
Current password:
New password: bowman
Changing password for user dave.
Changing password for dave.
(current) UNIX password:
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
show entered new password in unix "passwd" command
1,574,366,215,000
I have a problem renaming a file; I'm not even sure which special characters are in its name. I'm using CentOS 6 64-bit. When I ls the file:

Giko Suzo San?e - Ep1.avi

but when I view it in FTP:

Giko Suzo San’e - Ep1.avi

When I try to mv it:

[root@server ]# mv 'Giko Suzo San?e - Ep1.avi' 'Giko Suzo Sane - Ep1.avi'
mv: cannot stat `Giko Suzo San?e - Ep1.avi': No such file or directory

I also tried renaming it in FTP using FlashFXP. I get:

[L] 550 Giko Suzo San’e - Ep1.avi: No such file or directory
Rename Failure!

How can I rename it given this problem?
You could use \ before the character ? so it is considered a normal character in the name of the file and not a special character to be interpreted. The command would then be:

mv Giko\ Suzo\ San\?e\ -\ Ep1.avi 'Giko Suzo Sane - Ep1.avi'

EDIT: following the discussion in comments, this line did the trick:

mv Giko\ Suzo\ Sa*\ -\ Ep1.avi 'Giko Suzo Sane - Ep1.avi'
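The glob variant from the EDIT can be reproduced like this (the scratch directory and the assumption that the troublesome character is the multibyte ’ are mine):

```shell
dir=$(mktemp -d); cd "$dir"
touch 'Giko Suzo San’e - Ep1.avi'

# The * matches whatever bytes sit between "Sa" and " - Ep1.avi",
# so the troublesome character never has to be typed at all.
mv Giko\ Suzo\ Sa*\ -\ Ep1.avi 'Giko Suzo Sane - Ep1.avi'
ls
# -> Giko Suzo Sane - Ep1.avi
```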
How to rename filename with ' or ? in file name?
1,574,366,215,000
Sometimes the output of a command includes other commands, and I'd like to start such a command from the output without using a mouse. For example, when a command is not installed there is a message with a line for installing it:

$ htop
The program 'htop' is currently not installed. You can install it by typing:
sudo apt-get install htop

So I'd like to type a command that will run the command from the last line of the output of htop. How can it be done?

Edit: I'll try to show what I mean. There are two lines in the "output" of the command htop (actually, it's an error message). The second line of this message is the command sudo apt-get install htop. I'd like to extract the second line from the output and run it as a command itself. The following is rubbish, but it shows what I mean:

htop | tail -1 | xargs start_command
The right thing to do here is to set up bash to prompt for installation, as explained in SamK's answer. I'll answer strictly from a shell usage perspective. First, the text you're trying to grab is on the command's standard error, but a pipe redirects the standard output, so you need to redirect stderr to stdout. htop 2>&1 | tail -1 To use the output of a command as part of a command line, use command substitution. $(htop 2>&1 | tail -1) The result of the command substitution is split into words and each word is interpreted as a wildcard pattern. Here this happens to do the right thing: this is a command line with words separated by spaces, and there are no wildcard characters. To evaluate a string as a shell command, use eval. To treat the result of the command as a string rather than a list of wildcard patterns, put it in double quotes. eval "$(htop 2>&1 | tail -1)" Of course, before evaluating a shell command like that, make sure it's really what you want to execute.
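Here is the pipeline end to end, with a fake command standing in for htop (the hint function and its message are invented; a real not-installed message will differ, and you should inspect it before eval'ing it):

```shell
# Like the real case, the hint goes to stderr, and the last line
# happens to be a runnable command.
hint() {
    echo "The program 'htop' is currently not installed." >&2
    echo "echo installing htop" >&2
}

# 2>&1 moves stderr into the pipe, tail -1 keeps the last line,
# eval runs that line as a shell command.
eval "$(hint 2>&1 | tail -1)"
# -> installing htop
```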
How to start line with command from output of another command
1,574,366,215,000
Let's say I have the following:

Folder A
    C1
    file a
    A2
Folder B
    C1
    B2
    file b
    B3

What I want is to merge these two folders, which would give me:

Folder C
    C1
    A2
    B2
    B3

Notice I didn't include file a and file b. I only want to merge the directory structure. Each directory contains different files and I don't want them to be added to the merged directory. As a consequence, mv Folder\ A/* Folder\ B is not adequate. Do you see a way to do this?
Well, you could have find exec mkdir:

cd /A/
find -type d -exec mkdir -p /C/{} \;

Or, if the structure is flat as shown in your example, without find:

cd /A/
for dir in */
do
    mkdir -p /C/"$dir"
done

and in both cases the same again for cd /B/.
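A runnable sketch of the find variant, with scratch paths standing in for /A, /B and /C (find . is used instead of a bare find for portability):

```shell
# Recreate the question's layout under a temporary root
root=$(mktemp -d)
mkdir -p "$root/A/C1" "$root/A/A2" "$root/B/C1" "$root/B/B2" "$root/B/B3"
touch "$root/A/C1/file a" "$root/B/B2/file b"

# -type d limits the copy to the directory skeleton; files stay behind
cd "$root/A" && find . -type d -exec mkdir -p "$root/C"/{} \;
cd "$root/B" && find . -type d -exec mkdir -p "$root/C"/{} \;

ls "$root/C"   # A2 B2 B3 C1, and no files anywhere under C
```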
Merging two directory without copying the files
1,574,366,215,000
I have a directory that has 800 files like this:

file1.png
file1@2x.png
file2.png
file2@2x.png
file3.png
file3@2x.png
... etc

I need to create zip files like this:

file1.zip (containing file1.png and file1@2x.png)
file2.zip (containing file2.png and file2@2x.png)
file3.zip (containing file3.png and file3@2x.png)

Is there a magic command I can use to do that?
If there are always exactly two, it's simple:

for f in file+([0-9]).png; do zip "${f%png}zip" "$f" "${f/./@2x.}"; done

Note that the above will work as is from the command line. If you intend to use it in a script, put shopt -s extglob somewhere before that line. (extglob is enabled by default only in interactive shells.) In old bash not handling extended patterns this ugly workaround will work, but better change the logic as suggested in another answer by Leonid:

for f in file[0-9].png file*[0-9][0-9].png; do zip "${f%png}zip" "$f" "${f/./@2x.}"; done
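The two parameter expansions carry all the pairing logic; with echo in place of zip (so nothing needs to be installed) you can see the names they produce (bash syntax):

```shell
f=file1.png
echo "${f%png}zip"      # strip trailing "png", append "zip": file1.zip
echo "${f/./@2x.}"      # replace the first "." with "@2x.": file1@2x.png
```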
Zipping files two by two
1,574,366,215,000
I just need to get how much bandwidth is used over 3 or 4 days. Do you have any application in the terminal to do it? I'd prefer if it didn't use SNMP. I found iptraf, wireshark and cacti, but they were not what I am looking for. Of course I need to save my results; this is for a single computer, not a network. It's very important that I can see the total size of inbound and outbound traffic. What solutions are there for me?
You know you already have that with ifconfig, right? ifconfig keeps counters of your incoming and outgoing bandwidth on each interface by default. Usually you can't reset the counters except by rebooting (with a few exceptions). From the console you can easily leave a cron job running every three days and saving results to a file for later checking. Something like this:

date >> ~/bw.log && ifconfig eth0 | grep byte >> ~/bw.log

will produce this kind of output per run in the file bw.log in the user's home:

Thu Oct 18 03:44:05 UTC 2012
RX bytes:414910161 (395.6 MiB)  TX bytes:68632105 (65.4 MiB)

My two cents...
Bandwidth monitoring in Linux
1,574,366,215,000
Currently, I have a plain text file, A, such as:

lowest priority
very high significance
outstanding
very novel

In this file, every line contains a sentence. I want to separate this file into multiple files, each composed of a single line of the original file A. For instance, with respect to the example file A, I want to generate four files:

A1, which has the single line lowest priority
A2, which has the single line very high significance
A3, which has the single line outstanding
A4, which has the single line very novel

How do I do that under Linux?
You can easily do it using split command. E.g.: split -l1 -d -a 3 A A Check man split for details.
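Run on the question's four-line file in a scratch directory, this looks as follows (-d numeric suffixes is a GNU coreutils extension; note the output files are named A000…A003 rather than A1…A4):

```shell
cd "$(mktemp -d)"
printf 'lowest priority\nvery high significance\noutstanding\nvery novel\n' > A

# -l1: one input line per output file; -d: numeric suffixes; -a 3: 3 digits
split -l1 -d -a 3 A A

cat A001
# -> very high significance
```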
Separating a single file into multiple files according to line separation
1,574,366,215,000
How can I align data into pretty columns relative to a given word? For example, I have the output of the ip route command:

default via 172.20.99.254 dev eth0
87.33.17.71 dev tun0 scope link
89.223.15.12 via 172.20.99.254 dev eth0 src 172.20.99.74
172.20.9.0/24 dev eth0 proto kernel scope link src 172.20.99.74
65.46.5.89 dev tun0 scope link
192.168.11.0/24 dev tun0 scope link
45.211.111.7 dev tun0 scope link

and I would like to align it by the word dev, so that the column containing the word dev is aligned:

default via 172.20.99.254       dev eth0
87.33.17.71                     dev tun0 scope link
89.223.15.12 via 172.20.99.254  dev eth0 src 172.20.99.74
172.20.9.0/24                   dev eth0 proto kernel scope link src 172.20.99.74
65.46.5.89                      dev tun0 scope link
192.168.11.0/24                 dev tun0 scope link
45.211.111.7                    dev tun0 scope link

I cannot just naively replace the first space character with a tab, because sometimes I need 1 tab and other times I need 3 tabs.
I would use a character that I am sure doesn't exist in the file as a dummy separator for the column command, like:

$ sed 's/dev/@dev/' file | column -ts@
default via 172.20.99.254       dev eth0
87.33.17.71                     dev tun0 scope link
89.223.15.12 via 172.20.99.254  dev eth0 src 172.20.99.74
172.20.9.0/24                   dev eth0 proto kernel scope link src 172.20.99.74
65.46.5.89                      dev tun0 scope link
192.168.11.0/24                 dev tun0 scope link
45.211.111.7                    dev tun0 scope link

You can also use -o (specify the output separator) together with column; try -o "" or -o " " to see the behaviour. Also, the above sed substitutes only the first occurrence, and simple matching was used; you may need stricter matching for other cases, like matching the surrounding whitespace.
align data by word into column
1,574,366,215,000
I would like to create a command which fetches matching previous command, something like this: match-latest "ssh root@150" but which then exits and populates the command prompt with the most recent match, e.g.: owilliams@OWILLIAMS010451 ~/go/ % ssh [email protected] and leaves it for the user to edit (if desired) and press enter themself. Is this possible? How would I do it? I am using zsh 5.8 on a MacBook Pro OSX
zsh offers a plethora of ways to retrieve command lines from history. Here's just a selection of these.

Keybindings

- Press ↑ to step through previous command lines.
- Press Alt+. to step through the last word of each previous command line.
- Press Ctrl+R ("reverse") and start typing to search through previous command lines.
- Type a word at the start of a new command line and press Alt+P ("previous") to step through previous lines starting with that word.

See Zsh Line Editor: History Control in the Zsh manual for more options.

History expansion

- Type !! anywhere on your current command line, then press Alt+Space to replace it with the previous command line.
- Type !<word> and press Alt+Space to replace it with the most recent command line starting with <word>.
- Type !?<word> and press Alt+Space to replace it with the most recent command line containing <word>.

See History Expansion in the Zsh manual for more options.

fc builtin command

- Type fc ("fix command") on a new command line and press Enter to open the previous command line in an external editor (configured by setting $EDITOR), after which Zsh runs the modified command line.
- Use fc <word> for the most recent line starting with <word>.
- Pass the -s ("silently") flag to execute the line immediately, without first editing it.

See Shell Builtin Commands: fc in the Zsh manual for more options.
zsh: How to retrieve the previous command line?
1,574,366,215,000
I'm trying to run a script that takes a -t argument. This argument stands for text, and the value -- in theory -- is allowed to be multiline. On the command line, I assume a Here Document would work, but I don't like typing out long things on the command line. In addition, I want this file to persist so I can pass it again later. I'm not sure how to do this; if I cat foo | xargs echo, it prints as one line. This fixes that: cat foo | xargs -d='' echo, but it makes me think there are things I don't understand that will change the whitespace or general structure of the document depending on its contents. How do I pass a multiline file as an argument without having to worry about special chars or changing its format?
xargs -d='' (where -d is an extension of the GNU implementation of xargs) is the same as xargs -d= as '' is just quoting syntax in the shell language. That tells xargs to use = as the delimiter. So for instance echo foo=bar | xargs -d= cmd would call cmd with foo and bar<newline> as arguments. With xargs -d '' or xargs --delimiter= (which can be abbreviated to xargs --del= and even xargs --d= in current versions of xargs as there's currently no other long-option that starts with d), you'd get a syntax error. You could use xargs -d '\0', that would be the same as the more portable (though still not standard) xargs -0, which uses the NUL character as delimiter. NUL characters are not meant to appear in text files and anyway can't be passed in an argument to a non-builtin¹ command as the arguments are passed as NUL-delimited strings to the execve() system call. So: xargs -I'<TEXT>' -0a file cmd -t '<TEXT>' other args (-a being another GNU xargs extension), would pass the exact content of file as argument to cmd -t². But if file contains a foobar line for instance, that would pass the whole line including the line delimiter ("foobar\n"). Alternatively, you could do (in POSIX-like shells): cmd -t "$(cat file)" other args Command substitution does strip all the trailing newline characters from the output of the inside command, so would likely be preferable. If the output contains NUL characters, some shells, such as bash remove them (use "$(tr -d '\0' < file)" instead to get that behaviour in any shell). Note that the double quotes around it are important. Without them, the expansion would be subject to split+glob (split only in zsh) resulting in several arguments if the file contained characters of $IFS (and newline is in the default value of $IFS) or wildcard characters. 
In ksh, zsh or bash, you can also use "$(<file)" instead of "$(cat file)" which optimises out the execution of cat by reading the files by themselves (in bash, that's still done in a child process though). In zsh, you can also use the $mapfile special associative array in the zsh/mapfile module: zmodload zsh/mapfile cmd -t "$mapfile[path/to/file]" other args That's passing the contents as-is, including NULs (which would cause execve() to truncate the arg after the first NUL) and trailing newline characters. In the rc shell or derivatives, you can do: cmd -t ``(){cat file} other args Where ``(sep)cmd is a variant of command substitution (which is `cmd there) where you specify the separator, here none. There's no stripping of trailing newlines in that shell so the whole file contents will be passed as-is. In any case, note that on most systems, there's a limit on the total size of arguments³ to a command (though on recent versions of Linux, that limit can be raised by changing the stack size limit), and on Linux on the size of a single argument (128KiB max). Now, to pass a multiline string literally without having to worry about special characters, you can do: cmd -t 'the multiline here where the only character you have to worry about in Bourne-like shells is single quote which you have to enter as '\'', that is leave the quotes, enter it with other quoting operators (here \) and resume the quotes' In the rc shell (where '...' is the only form of quote) or zsh when the rcquotes option is enabled, single quotes can be entered inside single quotes as '', for example: cmd -t 'It''s simpler like that'. See How to use a special character as a normal one in Unix shells? for the details about quoting in various shells. 
Or you can use a here-document, either stored in a variable: multi=$(cat << 'EOF') multiline string here, only worry would be about an EOF line by itself though also note that all trailing newlines, so that includes all trailing empty lines are removed, including these: EOF ) cmd -t "$multi" # note the quotes again Or directly as: cmd -t "$(cat << 'EOF' multi line here EOF )" Note the quotes around EOF in those. Without them, parameter expansions (like $var), command substitutions (like $(cmd) or `cmd`) and arithmetic expansions ($((...))) are still performed. In the mksh shell, you can use: cmd -t "$(<< 'EOF' multi line EOF )" Where the cat and fork are optimised out, making it essentially a multiline form of quotes. ¹ It can't be passed in the arguments of builtin commands or functions, and can't even be stored in variables with most shells as well, zsh being the only exception that I know. ² as long as the file is not empty. If the file is empty, with -I, the command is not run at all, you'd need the file to contain one NUL character for the command to be called with one empty argument. ³ technically, the limit (in the execve() system call again, so does not apply to shell builtins / functions) is on the cumulated size of arguments and environment and generally also takes into account the size of the pointers to each argument and envvar string, so it's generally difficult to predict in advance whether a particular set of arguments will break the limit.
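As a quick sanity check, the trailing-newline stripping performed by command substitution (mentioned above) can be demonstrated in any POSIX shell with a throwaway temp file:

```shell
dir=$(mktemp -d)
printf 'line1\nline2\n\n\n' > "$dir/file"   # file ends in three newlines
content=$(cat "$dir/file")                  # $(...) strips ALL trailing newlines
# only the embedded newline between line1 and line2 survives:
printf '%s\n' "$content" | wc -l
```

This is why `cmd -t "$(cat file)"` passes a cleanly-terminated string even when the file ends with extra blank lines.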
How do I pass the contents of a multiline file as an argument?
1,574,366,215,000
My man page for 'whatis' does not match others I have found online. Namely, no options are available to use with it. /home/User$ whatis -d ls whatis: -d: unknown option uname -srv Darwin 16.7.0 Darwin Kernel Version 16.7.0: Sun Jun 2 20:26:31 PDT 2019; root:xnu-3789.73.50~1/RELEASE_X86_64 My first thought is that I could update bash, but it's not a builtin command so I don't know if that would work. Only been working with CLI for a few days now, and unsure how to even troubleshoot. I am also worried troubleshooting could lead to bugging my computer somehow.
You appear to be on a Mac. Some of the online pages will be for other UNIX-like systems. Many of them will be Linux-centric (more specifically, GNU-centric) without necessarily realising so. The definitive solution for any given command is to use your installed reference documentation. For example, man whatis to see the page for your own installed version of whatis. This isn't about updating commands to get extra flags and options; it's that there are different implementations of what appear on the surface to be the same commands. As a Mac user you can get GNU versions of many of the standard commands through an add-on package system called homebrew. I don't use it myself but you can find out more about it at https://brew.sh/
What do I need to do to update my commands?
1,574,366,215,000
PROBLEM: I installed sl but when I type sl on the command line I get this: bash: sl: command not found (root@host)-(03:55:38)-(/home/user) $apt install sl Reading package lists... Done Building dependency tree Reading state information... Done sl is already the newest version (3.03-17+b2). 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. Sl is a program that can display animations aimed to correct you if you type 'sl' by mistake. SL stands for Steam Locomotive. package on Debian packages Installation instructions on cyberciti.biz/ Excerpt: Install sl software to get Steam Locomotive ( train in shell ) Type the following apt-get command/apt command on a Debian / Ubuntu Linux: $ sudo apt-get install sl Usage Okay, just mistyped ls command as sl: $ sl (root@host)-(03:57:47)-(/home/user) $cat /etc/os-release PRETTY_NAME="Debian GNU/Linux 9 (stretch)" NAME="Debian GNU/Linux" VERSION_ID="9" VERSION="9 (stretch)" ID=debian HOME_URL="https://www.debian.org/" SUPPORT_URL="https://www.debian.org/support" BUG_REPORT_URL="https://bugs.debian.org/" (root@host)-(04:04:01)-(/home/user) $bash -version GNU bash, version 4.4.12(1)-release (x86_64-pc-linux-gnu) Copyright (C) 2016 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> This is free software; you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. QUESTION: What is happening here, where is the locomotive, is there something I need to configure.......?
If you're running as root (I'm guessing you might be since you ran apt directly), PATH by default will exclude /usr/local/games and /usr/games due to a conditional in /etc/profile: if [ "`id -u`" -eq 0 ]; then PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" else PATH="/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games" fi export PATH sl happens to be in /usr/games.
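So the fix is either to call the game by full path (/usr/games/sl) or to add /usr/games to root's PATH for the session (PATH="$PATH:/usr/games"). The PATH mechanics can be demonstrated with a stand-in directory and a fake sl, since the real game may not be installed where this runs:

```shell
dir=$(mktemp -d)
printf '#!/bin/sh\necho choo-choo\n' > "$dir/sl"   # stand-in for /usr/games/sl
chmod +x "$dir/sl"
PATH="$dir:$PATH"   # prepended so the stand-in wins; for the real fix, append /usr/games
sl
```

Once the directory containing the binary is on PATH, the bare command name works again.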
Locomotive doesn't work on a debian system: sl ls
1,574,366,215,000
How do I efficiently combine multiple text files and remove duplicate lines in the final file in Ubuntu? I have these files: file1.txt contains alpha beta gamma delta file2.txt contains beta gamma delta epsilon file3.txt contains delta epsilon zeta eta I would like the final.txt file to contain: alpha beta gamma delta epsilon zeta eta I would appreciate the help.
If you want to print only the first instance of each line without sorting: $ awk '!seen[$0]++' file1.txt file2.txt file3.txt alpha beta gamma delta epsilon zeta eta
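Recreating the example files from the question and running the command shows the first-occurrence behaviour end to end:

```shell
d=$(mktemp -d)
cd "$d"
printf '%s\n' alpha beta gamma delta   > file1.txt
printf '%s\n' beta gamma delta epsilon > file2.txt
printf '%s\n' delta epsilon zeta eta   > file3.txt
# keep only the first occurrence of each line, preserving input order
awk '!seen[$0]++' file1.txt file2.txt file3.txt > final.txt
cat final.txt
```

If the original order does not matter, sort -u file1.txt file2.txt file3.txt > final.txt does the same deduplication, but sorted.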
Combine text files and delete duplicate lines
1,574,366,215,000
When using find, how can I drop the original filename extension (i.e. .pdf) from the second pair of -exec braces ({})? For example: find ~/Documents -regex 'LOGIC.*\.pdf' -exec pdf2svg {} {}.svg \; Input filename: ~/Documents/LOGIC-P_OR_Q.pdf Output filename: ~/Documents/LOGIC-P_OR_Q.pdf.svg Desired filename: ~/Documents/LOGIC-P_OR_Q.svg
You can use an "in-line" shell script, and parameter expansion: -exec sh -c 'pdf2svg "$1" "${1%.pdf}.svg"' sh {} \; or (more efficiently, if your find supports it) -exec sh -c 'for f; do pdf2svg "$f" "${f%.pdf}.svg"; done' sh {} +
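The key piece is the ${1%.pdf} parameter expansion, which strips the shortest matching suffix; it is easy to check in isolation:

```shell
# ${var%pattern} removes the shortest match of pattern from the end of $var
f=/home/user/Documents/LOGIC-P_OR_Q.pdf
echo "${f%.pdf}.svg"   # -> /home/user/Documents/LOGIC-P_OR_Q.svg
```

Because the expansion happens inside the inline sh -c script, find never has to manipulate the name itself.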
Dropping filename extensions with find -exec
1,574,366,215,000
I need to be able to do this via command line in one step: lab-1:/etc/scripts# sqlite3 test.db SQLite version 3.8.10.2 2015-05-20 18:17:19 Enter ".help" for usage hints. sqlite> .mode csv ; sqlite> .import /tmp/test.csv users sqlite> select * from users; John,Doe,au,0,"",1,5555,91647fs59222,audio sqlite> .quit I've tried the following: lab-1:/etc/scripts# sqlite3 test.db ".mode csv ; .import /tmp/deleteme.csv users" and lab-1:/etc/scripts# sqlite3 test.db ".mode csv .import /tmp/deleteme.csv users" I don't get errors but I also don't end up with any data in the users table. Any tips would be appreciated.
SQLite meta commands are not terminated by ; but by newline. Therefore, you will have to provide the commands in some other way so that newlines are inserted in the correct places. Here are a few examples, of which I would probably use the first one (because it's readable). Use a here-document: sqlite3 test.db <<'END_COMMANDS' .mode csv .import /tmp/deleteme.csv users END_COMMANDS Format the commands with printf: printf '.mode csv\n.import /tmp/deleteme.csv users\n' | sqlite3 test.db or printf '%s\n' '.mode csv' '.import /tmp/deleteme.csv users' | sqlite3 test.db Use a here-string with C-escapes (in shells that support it): sqlite3 test.db <<<$'.mode csv\n.import /tmp/deleteme.csv users\n'
sqlite3 command line - how to set mode and import in one step
1,574,366,215,000
I am looking for one or possibly more commands, or a combination of commands, to get my PC to use as much resources as possible. I want to check how my computer behaves when subject to the maximum amount of data it can handle. I've tried by running multiple programs such as browsers, graphic and system tools one by one and all together, I've also tried to download big files to monitor its network and storage performance but every time depending on what program(s) I run, I get different results in terms of RAM, CPU, I/O. Often my whole system crashes and I have to reboot or struggle to close some programs. To monitor, I'm using different commands such as iostat, iotop, htop, vmstat, lsof, iftop, iptraf but also some other little programs. (I'm using ubuntu) I would very much appreciate an answer that could list some way to exploit GPU, CPU and RAM and a way to write a file (even with zeros) in the quickest way possible to see how fast my computer can produce output and write it on the HDD.
You could probably use stress: stress: tool to impose load on and stress test systems If you want to stress memory you could use: stress --vm 2 --vm-bytes 512M --timeout 10s to spawn 2 VM workers, each using 512MB of RAM, for 10 seconds. If you want to stress the CPU use: stress --cpu ## -t 10s with ## equal to your number of cores to simulate 100% CPU usage on all cores at the same time for 10 seconds. And if you want to simulate IO use the option: stress --io 4 -t 10s It will add threads that make sync calls to your disk, but you could also write to your disk with this option: stress --hdd 4 --hdd-bytes 256M This would create 4 threads, each writing 256 MB of data to your disk; this could of course be adjusted to simulate either lots of small file writes or huge file writes. This will need some adaptation from you if you want to stress things one by one or all together, or for a longer time, like this: stress --hdd 4 --hdd-bytes 256M --io 4 --cpu ## --vm 2 --vm-bytes 512M -t 60s As for the GPU you could use glmark2, which should be available in Ubuntu. It's a basic GPU benchmark; you could run it forever to simulate a GPU load: glmark2 --run-forever
Is there a command or a series of commands to make the computer use as many resources as possible?
1,574,366,215,000
How can I execute each command prefixed with another one? Example: when I run nmap -p 80 host I want it to run proxychains nmap -p 80 host even when I do not add proxychains intentionally. In other words: can I alias all commands at once with proxychains prepended? Bonus if this is something I can switch on/off.
Would it work to just run a full shell under proxychains? Assuming it can deal with processes started by the shell properly. You could do that with just $ proxychains bash and exit the shell at will. But if you really want to, you can abuse the DEBUG trap (with extdebug set) to mangle the commands the shell runs. This would run every command with time: $ shopt -s extdebug $ d() { eval "time $BASH_COMMAND"; return 1; } $ trap d DEBUG $ sleep 2 real 0m2.010s user 0m0.000s sys 0m0.000s $ trap - DEBUG # turn it off, this still prints the 'time' output But the tricky part here is that it will also affect builtins, like trap or shopt themselves, so you'd probably want to add some exceptions for those... Also, stuff like cd somedir would turn into proxychains cd somedir, which probably will not work. This would also affect everything started from within functions etc. Maybe it's better to just have a function use proxychains only for those commands known to need it.
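If proxychains is only needed for a handful of known commands, a lighter-weight alternative is a wrapper function you can define and remove at will. The sketch below uses echo as a stand-in so it runs even where proxychains is not installed; for real use, replace the echo line with command proxychains nmap "$@":

```shell
nmap() {
    # stand-in: just print what would run; real use: command proxychains nmap "$@"
    echo proxychains nmap "$@"
}
out=$(nmap -p 80 host)
echo "$out"
unset -f nmap   # switch the wrapper off again
```

The command builtin inside the function body prevents infinite recursion back into the wrapper.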
add a command to each command (e.g. proxychains each command by default)
1,574,366,215,000
I'm writing a wrapper Bash script for ARSS to make it easier to use. The program converts images to sounds and vice-versa but it only accepts 24-bit BMP images, which I was only able to produce using GIMP so far. I'm looking for a way to convert any given image to a suitable BMP file so ARSS can process it. I tried ImageMagick's convert, but I wasn't able to get the 24-bit color depth. Here's my script: #!/bin/bash # where is ARSS binary? ARSS="/unfa/Applications/ARSS/arss-0.2.3-linux-binary/arss" convert "$1" -depth 24 "$1.bmp" $ARSS --quiet "$1.bmp" "$1.wav" --sample-rate 48000 --format-param 32 --sine --min-freq 20 --max-freq 20000 --pps 250 Here's the output: $ ./warss.sh 01.png The Analysis & Resynthesis Sound Spectrograph 0.2.3 Input file : 01.png.bmp Output file : 01.png.wav Wrong BMP format, BMP images must be in 24-bit colour As you can see I tried using convert "$1" -depth 24 "$1.bmp" to get a 24-bit BMP image, but that doesn't work as I expected. For reference, I get a proper file while exporting with GIMP: And ARSS processes such a BMP file fine. I cannot use that from the commandline however, and using GIMP's GUI every time defies the purpose of what I'm trying to achieve. I saw there's a way to use GIMP in headless mode by feeding it commands, but I don't know if I even need that. Maybe there's just something simple I don't know?
According to an ImageMagick forum post, using -type truecolor may be the correct way to force the image to 24 bit: convert "$1" -type truecolor "$1.bmp"
How to convert an image to a 24-bit BMP in commandline?
1,574,366,215,000
I want to delete all the text after the second underscore (including the underscore itself), but not on every line. Each of the target lines begins with a pattern (>gi_). EXAMPLE. Input >gi_12_pork_cat ACGT >gi_34_pink_blue CGTA Output >gi_12 ACGT >gi_34 CGTA
$ awk -F_ 'BEGIN {OFS="_"} /^>gi/ {print $1,$2} ! /^>gi/ {print}' input >gi_12 ACGT >gi_34 CGTA
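An equivalent sed approach captures everything up to the second underscore on the >gi lines and drops the rest, leaving other lines alone:

```shell
# the BRE group \(>gi_[^_]*\) grabs ">gi_" plus the field before the 2nd underscore
out=$(printf '%s\n' '>gi_12_pork_cat' ACGT '>gi_34_pink_blue' CGTA |
      sed 's/^\(>gi_[^_]*\)_.*/\1/')
printf '%s\n' "$out"
```

Lines that do not match the pattern (the sequence lines) pass through unchanged.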
Delete everything after second underscore
1,574,366,215,000
I've got a command foo that outputs a list of filenames: $ foo file1.a file2.b file3.a And a command bar that accepts the names of .a files as arguments and does some processing: $ bar file1.a file3.a Great success! $ bar file2.b FAILURE I'd like to combine the two with a pipe like foo | xargs bar, but I need to filter out all filenames that don't end in .a. How can I do this? Ideally I want something simple I can stick between the two commands in a pipe, like foo | filter-lines ".a" | xargs bar.
You can use grep to grab all files within foo that end with .a. foo | grep "\.a$" | xargs -d'\n' -r bar
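The filter is easy to verify on its own with a fake file list:

```shell
# keep only names ending in .a; the \. escapes the dot, $ anchors at end of line
out=$(printf '%s\n' file1.a file2.b file3.a | grep '\.a$')
printf '%s\n' "$out"
```

The -d'\n' on xargs (a GNU extension) matters once filenames contain spaces; without it, xargs would split arguments on any whitespace, not just on line boundaries.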
How to filter lines passed through a pipe?
1,574,366,215,000
Trying to run the command mv foo&foo.jpg images/ but I get command not found, and if I try to rename the file it won't let me.
Use single quotes. For example: mv 'foo&foo.jpg' images/ Unless you quote or escape the & symbol, it's interpreted as a special token by the shell.
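A quick reproduction in a scratch directory (using the file name from the question):

```shell
d=$(mktemp -d)
cd "$d"
touch 'foo&foo.jpg'   # quoted, so & is not treated as "run in background"
mkdir images
mv 'foo&foo.jpg' images/
ls images
```

Escaping with a backslash works too: mv foo\&foo.jpg images/.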
cannot move file because of & in name
1,574,366,215,000
It would certainly be possible to whip together something in Python to query a URL to see when it was last modified, using the HTTP headers, but I wondered if there is an existing tool that can do that for me? I'd imagine something like: % checkurl http://unix.stackexchange.com/questions/247445/ Fri Dec 4 16:59:28 EST 2015 or maybe: % checkurl "+%Y%m%d" http://unix.stackexchange.com/questions/247445/ 20151204 as a bell and/or whistle. I don't think that wget or curl have what I need, but I wouldn't be surprised to be proven wrong. Is there anything like this out there?
This seems to fit your requirements (updated to use '\r\n' as record separator for response data):

#!/bin/sh
get_url_date() {
    curl --silent --head "${1:?URL ARG REQUIRED}" |
        awk -v RS='\r\n' '
            /Last-Modified:/ {
                gsub("^[^ ]*: *", "")
                print
                exit
            }
        '
}

unset date_format
case $1 in
    (+*)
        date_format="$1"
        shift
        ;;
esac

url_date="$(get_url_date "${1:?URL ARG REQUIRED}")"

if [ -z "$url_date" ]
then
    exit 1
fi

if [ "$date_format" != "" ]
then
    date "$date_format" -d"$url_date"
else
    echo "$url_date"
fi
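The header-parsing step can be exercised offline with a canned response (no network needed); here the carriage returns are stripped with tr for portability, which serves the same purpose as the RS='\r\n' trick:

```shell
hdr='HTTP/1.1 200 OK
Last-Modified: Fri, 04 Dec 2015 21:59:28 GMT
Content-Type: text/html'
got=$(printf '%s\n' "$hdr" | tr -d '\r' |
      awk '/^Last-Modified:/ { sub("^[^ ]*: *", ""); print; exit }')
echo "$got"
```

GNU date can then convert that string, e.g. date -d "$got" '+%Y%m%d' for the second output format in the question.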
Command line tool to check when a URL was updated?
1,574,366,215,000
I'm trying to create some files that have different names and different extensions. Let's say I want to create for example 3 files with the following names and extensions: File001.000 File002.001 File003.002 Or with alphabetical extensions: File001.A File002.B File003.C Also it would be better if I could create files with random names and extensions. Filexct.cbb Filedht.ryt Filefht.vbf
The simplest way I could find is: touch $(paste -d '.' <(printf "%s\n" File{001..005}) \ <(printf "%s\n" {000..004})) This will create File001.000 File002.001 File003.002 File004.003 File005.004 To understand how this works, have a look at what each command prints: $ printf "%s\n" File{001..005} File001 File002 File003 File004 File005 $ printf "%s\n" {000..004} 000 001 002 003 004 $ paste -d '.' <(printf "%s\n" File{001..005}) \ > <(printf "%s\n" {000..004}) File001.000 File002.001 File003.002 File004.003 File005.004 So, all together, they expand to touch File001.000 File002.001 File003.002 File004.003 File005.004 Creating 5 files with random names is much easier: $ for i in {1..5}; do mktemp File$i.XXX; done File1.4Jt File2.dEo File3.nhR File4.nAC File5.Fd8 To create 5 files with random 5 alphabetical character names and random extensions, you can use this: $ for i in {1..5}; do mktemp $(head -c 100 /dev/urandom | tr -dc 'a-z' | fold -w 5 | head -n 1).XXX done jhuxe.77b cwvre.0BZ rpxpp.ug1 htzkq.f9W bpgor.Bak Finally, to create 5 files with random names and no extensions, use $ for i in {1..5}; do mktemp -p ./ XXXXX; done ./90tp0 ./Hhn4U ./dlgr9 ./iVcn4 ./WsJIx
How to create multiple files/directories with randomized names?
1,574,366,215,000
Given a shell script file that begins as #!/bin/bash # (bash script here), and has been chmod +xed, is there any difference in running ./script and bash script from the command-line?
It depends on your $PATH. ./script will run /bin/bash script. bash script will use whatever bash comes first in your path, which isn't necessarily /bin/bash, and could be a different version of Bash.
Is there a difference between ./script and bash script? [duplicate]
1,574,366,215,000
I have numerous HTML files all nested inside different folders contained in a single overall folder. In each of these HTML files I need to replace /contact/index.html With /contact/index.php Is there an easy way of doing this from the command line?
Yup, if you have GNU find and GNU sed, try this in the parent folder: find . -type f \( -iname "*.htm" -o -iname "*.html" \) -exec sed -i.bak 's#/contact/index\.html#/contact/index.php#g' '{}' + This will find all files whose name ends in .html or .HTML or .htm or .HTM (or .HtM...) and run this sed command on them: sed -i.bak 's#/contact/index\.html#/contact/index.php#g' This will make the substitution you want and create a backup of the original foo.htm called foo.htm.bak. If you don't want the backups, just remove .bak. DETAILS: The find command, obviously, finds files or folders. Its syntax can be quite complex and is explained in detail in its man page some of which is reproduced below: The general format is find [where] [what]. In the example I have given above, the where is . which means the current directory. The what is all files that have a html or similar extension, so I use iname which is: -iname pattern Like -name, but the match is case insensitive. For example, the patterns `fo*' and `F??' match the file names `Foo', `FOO', `foo', `fOo', etc. However, I want it to match both html and htm so I use the -o flag which means: expr1 -o expr2 Or; expr2 is not evaluated if expr1 is true. Such constructs need to be grouped together which is done by parentheses ( ) which, however, need to be escaped from the shell so we use \( and \). The magic happens in the -exec part: -exec command ; Execute command; true if 0 status is returned. All following arguments to find are taken to be arguments to the command until an argument consisting of `;' is encountered. The string `{}' is replaced by the current file name being processed everywhere it occurs in the arguments to the command, not just in arguments where it is alone, as in some versions of find. [...] The specified command is run once for each matched file. The command is executed in the starting directory. 
There are unavoidable security problems surrounding use of the -exec action; you should use the -execdir option instead. In other words, given a command like -exec ls {}, find will find all files matching the conditions you have set and iterate through them, replacing {} with the current file name and executing the command given. I am also using + instead of \; to end the exec call because that will cause find to try and run as few commands as possible; this is just a minor optimization unless you have thousands of files, when it could be important: -exec command {} + This variant of the -exec action runs the specified command on the selected files, but the command line is built by appending each selected file name at the end; the total number of invocations of the command will be much less than the number of matched files. The command line is built in much the same way that xargs builds its command lines. Only one instance of `{}' is allowed within the command. The command is executed in the starting directory. Finally, sed is a command line text stream editor; it will apply the command you give it to each line of a file. In this case, the command is a substitution, the basic format is: s#pattern#replacement#flags The delimiters (#) can be any special character and are traditionally / but I chose # because otherwise I would have had to escape the /. Note that ChrisDown in his answer chose to use |. This is simply a personal choice and the two are equivalent.
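A quick end-to-end check in a scratch directory (assumes GNU sed for the attached -i.bak suffix):

```shell
d=$(mktemp -d)
cd "$d"
echo '<a href="/contact/index.html">Contact</a>' > page.htm
find . -type f \( -iname '*.htm' -o -iname '*.html' \) \
    -exec sed -i.bak 's#/contact/index\.html#/contact/index.php#g' '{}' +
cat page.htm        # link now points at /contact/index.php
cat page.htm.bak    # backup keeps the original
```

Once satisfied, the .bak files can be cleaned up with find . -name '*.bak' -delete.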
Quickest way to find and replace a string in numerous HTML files
1,574,366,215,000
Is there something like a logical OR for the CLI? I want to achieve this mv -t newfolder *.(png|jpg) so that all jpg and png files are moved into newfolder. I know it could be done with mv -t newfolder *.png && mv -t newfolder *.jpg But surely there is some syntactic sugar for this?
In Bash: mv -t newfolder *.@(png|jpg) ?(pattern) Matches zero or one occurrence of the given patterns *(pattern) Matches zero or more occurrences of the given patterns +(pattern) Matches one or more occurrences of the given patterns @(pattern) Matches one of the given patterns !(pattern) Matches anything except one of the given patterns It requires extglob option: $ shopt extglob extglob on If it's off you can turn it on with $ shopt -s extglob
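A self-contained check (the inner command is run under bash with extglob enabled at startup via -O, and mv -t is a GNU coreutils option):

```shell
d=$(mktemp -d)
cd "$d"
touch a.png b.jpg c.txt
mkdir newfolder
# -O extglob turns the option on before the pattern is parsed
bash -O extglob -c 'mv -t newfolder *.@(png|jpg)'
ls newfolder   # a.png and b.jpg moved; c.txt stays behind
```

Note that enabling extglob with shopt -s in the same bash -c string would be too late, since the pattern is parsed before shopt runs; -O avoids that pitfall.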
Regex match in CLI
1,574,366,215,000
I do not hold a deep understanding of computer science concepts but would like to learn more about how the utility encfs works. I have a few questions regarding the concept of a filesystem in regards to encfs. It is said that encfs is a cryptographic filesystem wiki link. 1) To encrypt the files encfs is moving around blocks of the files to be encrypted, so am I correct to see this 'scrambled' version of the files as a new perspective which justifies the term of a new filesystem? 2) In the man pages of encfs in the section CAVEATS link to man of encfs online, it says that encfs is not a true file system. How should I understand this? Is that because some necessary feature common to all file systems is missing in encfs' file system? Or is it because of some other more substantial reason? 3) The man pages say that it creates a virtual encrypted file system. There are two questions here: what is it that makes it virtual? Is it that it is a file system within a file system? And as for 'encrypted': is it that there is no straightforward way to map the file blocks into a format to be read by other programs? 4) How does the command fusermount relate to encfs?
I think that behind your description, there is a misconception. The unencrypted data is not stored on the disk at any point. When you write to a file in the encfs filesystem, the write instruction goes to the encfs process; the encfs process encrypts the data (in memory) and writes the ciphertext to a file. The file names, as well as the file contents, are encrypted. Reading a file undergoes the opposite process: encfs reads the encrypted data from the disk file, decrypts it in memory and passes the plaintext to the requesting application. When you run the encfs command, it does not decrypt any data. It only uses the password that you supply to unlock the filesystem's secret key. (This is actually a decryption operation, cryptographically speaking, but a different type from what happens with the file data. I will not go into more details here.) 1) Encfs is not exactly “moving blocks around”; it is decoding blocks when it reads them. Encfs is a filesystem because it behaves like one: you can store files on it, when it's mounted. 2) Encfs is not a “true” filesystem because it doesn't work independently. Encfs only provides an encryption layer; it uses an underlying filesystem to actually store data and metadata (metadata is auxiliary information about files such as permissions and modification times). 3) Virtual filesystem is another way to say that encfs itself doesn't store any data, it needs an underlying filesystem (see (2) above) for that. Encrypted means just that: encfs stores the data that you put in it in an encrypted form, which cannot be decrypted without the key. Another program could read the data stored by encfs if and only if that other program had access to the key (which requires the password that the key is protected with). 4) The fusermount command sets up a FUSE mount point. You would not normally call it directly, because a FUSE filesystem is implemented by a user-mode process which you have to start anyway, and that process (e.g. 
encfs) will take care of setting up the mount point. Unmounting a FUSE filesystem, on the other hand, is a generic operation, you can always do it by calling fusermount -u.
How to understand the filesystem concepts used by encfs?
1,574,366,215,000
I am writing a script to check whether all the servers in my organization are functioning properly. One of those is Zimbra mail server. I am trying to send a mail through sendmail provided by zimbra package using the following command ssh Jarvice@someip echo "Hello" | /opt/zimbra/postfix-2.7.4.2z/sbin/sendmail [email protected] But I am unable to do so. I think I am making some mistake in the echo "Hello" | /opt/zimbra/postfix-2.7.4.2z/sbin/sendmail [email protected] part. But I don't know what it is. Can someone tell me the correct way to do so? Since I am doing this through a script, I have made sure that I can ssh without providing the password. I wish to fully automate the process.
Your local shell is dividing your command into ssh ... and /opt/zimbra ..., and then piping the two. You have to quote the argument to ssh, so your local shell won't try to interpret it and it will be sent to the remote computer in its entirety: ssh Jarvice@someip "echo \"Hello\" | /opt/zimbra/postfix-2.7.4.2z/sbin/sendmail [email protected]"
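The effect of the quoting is easy to see locally, with sh -c standing in for ssh host (both take a single command string and hand it to another shell to parse):

```shell
# the whole pipeline is one quoted argument, so the child shell runs the pipe,
# just as the remote shell would when the argument is passed to ssh
remote='echo "Hello" | tr a-z A-Z'
sh -c "$remote"
```

Without the quotes, the local shell would split at the |, running only echo remotely and piping ssh's output into a local tr, which is exactly the failure mode in the question.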
Execute command through SSH
1,574,366,215,000
I am logged into a linux server via ssh. From the bash shell, is there any standard way of bringing up some kind of text editing environment so I can create and edit a text file? I am aware that there exist apps like emacs and vi, but I don't know if they are appropriate for basic text file editing or if I should use something simpler, or if not, which one to use.
It's a well-known fact that vi has only two modes: it beeps and it spoils text (: So, if you're a newbie and know nothing about vi and emacs, the best choice for you will be something simple like nano. It has hints in the footer and it's easy to edit and save your edits. But in case you want to be a good administrator, you should learn vi or emacs, because they're great and powerful editors that can save a lot of time during text writing/editing. ps. A little vi hint: To exit from vi just type :q .
How to create and edit a text file from the bash shell
1,302,063,784,000
I need to be able to determine the size of a block special file. For example, given /dev/sda, I need a command that will provide the size of the device. (By size I mean capacity, since this is a storage device.) Rationale: I can store information in the device with: echo "12345" >/dev/sda # needs to be run as root (Don't run that command by the way... unless you don't care about your data.) However, I need to know how much data I can store on the device and I don't know how to do that.
I don't think there's a general, cross-platform answer. On Linux, the information is in /proc/partitions (in spite of the name, this contains all (most?) block devices, not just PC-style partitions). awk '$4 == "sda" {print $3}' /proc/partitions
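The awk step can be checked against a canned /proc/partitions snippet (fields: major, minor, #blocks, name; sizes are in 1 KiB blocks):

```shell
# canned example content, formatted like /proc/partitions on a typical system
snippet='major minor  #blocks  name

   8        0  976762584 sda
   8        1     524288 sda1'
printf '%s\n' "$snippet" | awk '$4 == "sda" {print $3}'
```

Alternatively on Linux, blockdev --getsize64 /dev/sda (run as root) reports the capacity in bytes directly, and lsblk -b lists byte sizes for all block devices.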
How to get size of a block special file? [duplicate]
1,302,063,784,000
I want to grant root access temporarily from ip 1.2.3.4, just for the current session (until next sshd or server restart) I could add this to sshd_config, and then remember to remove it: AllowUsers [email protected] but is there better way? Can I change the current settings of the currently running sshd daemon, without editing the config file ?
In FreeBSD, the rc system provides a mechanism for passing flags to the sshd daemon, namely to set the sshd_flags variable prior to starting/restarting sshd. The rc system will look for files named /etc/rc.conf.d/sshd/* and source all such files on any 'service sshd' invocation. That makes it fairly trivial to create a one-time unique filename in the correct directory, restart the service, delete that unique filename, and we're done. To my admittedly limited knowledge, Linux lacks a dedicated directory into which one can place any number of arbitrarily-named files for the purpose of configuring a specific daemon's run-time behavior. I tried to mimic FreeBSD's mechanism in (Ubuntu) Linux by modifying /etc/default/ssh to search for and source files from a specific location (/etc/default/ssh.tmp.*) but had no success. Generally on the Linux systems I have available at hand, it seems that /etc/default/ssh nominally has: # Default settings for openssh-server. This file is sourced by /bin/sh from # /etc/init.d/ssh. # Options to pass to sshd SSHD_OPTS= On some systems it seems the key variable name here is OPTIONS, so check your systemd service file for sshd to be certain. Given that Plan A didn't work, I lowered my expectations and went with Plan B, which is essentially the same as modifying your sshd_config except that since by default /etc/default/ssh has no content, it is perhaps a tad safer to append your desired runtime options there than it is to muck with your /etc/ssh/sshd_config file. Further, since the syntax of /etc/default/ssh is shell syntax, one can feel relatively safe that by appending an altered value of SSHD_OPTS any previous assignment of SSHD_OPTS will be overridden for the next invocation, and then will be restored once /etc/default/ssh.safety is renamed back to /etc/default/ssh. 
# cat << EOF > test.sh cp -p /etc/default/ssh /etc/default/ssh.safety printf 'SSHD_OPTS='\''-o "AllowUsers [email protected]"'\''\n' >> /etc/default/ssh service sshd restart mv /etc/default/ssh.safety /etc/default/ssh EOF So starting from: # service ssh status ● ssh.service - OpenBSD Secure Shell server Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset: enabled) Active: active (running) since Thu 2022-12-08 16:36:00 PST; 51s ago Process: 43364 ExecStartPre=/usr/sbin/sshd -t (code=exited, status=0/SUCCESS) Main PID: 43367 (sshd) Tasks: 1 (limit: 9830) CGroup: /system.slice/ssh.service └─43367 /usr/sbin/sshd -D ...snip... And: # cat /etc/default/ssh # Default settings for openssh-server. This file is sourced by /bin/sh from # /etc/init.d/ssh. # Options to pass to sshd SSHD_OPTS= Then with: # cat test.sh cp -p /etc/default/ssh /etc/default/ssh.safety printf 'SSHD_OPTS='\''-o "AllowUsers [email protected]"'\''\n' >> /etc/default/ssh service ssh restart mv /etc/default/ssh.safety /etc/default/ssh We can: # sh test.sh # service ssh status ● ssh.service - OpenBSD Secure Shell server Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset: enabled) Active: active (running) since Thu 2022-12-08 16:37:42 PST; 3min 2s ago Process: 54918 ExecStartPre=/usr/sbin/sshd -t (code=exited, status=0/SUCCESS) Main PID: 54923 (sshd) Tasks: 1 (limit: 9830) CGroup: /system.slice/ssh.service └─54923 /usr/sbin/sshd -D -o AllowUsers [email protected] ...snip... Notice here that sshd is running with our desired short-term command-line options! Also notice that /etc/default/ssh is back to its unaltered state: # cat /etc/default/ssh # Default settings for openssh-server. This file is sourced by /bin/sh from # /etc/init.d/ssh. 
# Options to pass to sshd SSHD_OPTS= Which means that the next time we: # service ssh restart We will see: # service ssh status ● ssh.service - OpenBSD Secure Shell server Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset: enabled) Active: active (running) since Thu 2022-12-08 16:43:56 PST; 7s ago Process: 44890 ExecStartPre=/usr/sbin/sshd -t (code=exited, status=0/SUCCESS) Main PID: 44896 (sshd) Tasks: 1 (limit: 9830) CGroup: /system.slice/ssh.service └─44896 /usr/sbin/sshd -D Back to the bare, stock sshd service. Again, this is essentially the same trick as modifying /etc/ssh/sshd_config except that if sshd_config gets mucked up or deleted (or even renamed), you may have a harder time recovering from that than from a similarly hosed /etc/default/ssh file. Since /etc/default/ssh is by default empty, you can simply rm it in the worst case to get sshd back to running the way it was before you started trying to play clever tricks.
sshd: add AllowUsers for current session, without editing sshd_config
1,302,063,784,000
I have a very similar problem to this question, but have no idea how to adapt the answer to my own issue. I have a tab-sep file with 2nd column containing comma-sep list, such as: TRINITY_DN1_c0_g1 DN1_c0_g1 GO:0000166,GO:0003674,GO:0005488,GO:0005515,GO:0005524,GO:0005575 TRINITY_DN1_c0_g3 DN1_c0_g3 GO:0005829,GO:0006457,GO:0006458,GO:0006950,GO:0008134 TRINITY_DN10_c0_g1 DN10_c0_g1 GO:0050896,GO:0051082,GO:0051084,GO:0051085 I want to get it to this: TRINITY_DN1_c0_g1 DN1_c0_g1 GO:0000166 TRINITY_DN1_c0_g1 DN1_c0_g1 GO:0003674 TRINITY_DN1_c0_g1 DN1_c0_g1 GO:0005488 TRINITY_DN1_c0_g1 DN1_c0_g1 GO:0005515 TRINITY_DN1_c0_g1 DN1_c0_g1 GO:0005524 TRINITY_DN1_c0_g1 DN1_c0_g1 GO:0005575 TRINITY_DN1_c0_g3 DN1_c0_g3 GO:0005829 TRINITY_DN1_c0_g3 DN1_c0_g3 GO:0006457 TRINITY_DN1_c0_g3 DN1_c0_g3 GO:0006458 TRINITY_DN1_c0_g3 DN1_c0_g3 GO:0006950 TRINITY_DN1_c0_g3 DN1_c0_g3 GO:0008134 TRINITY_DN10_c0_g1 DN10_c0_g1 GO:0050896 TRINITY_DN10_c0_g1 DN10_c0_g1 GO:0051082 TRINITY_DN10_c0_g1 DN10_c0_g1 GO:0051084 TRINITY_DN10_c0_g1 DN10_c0_g1 GO:0051085 There is a variable number of terms in the 3rd column. I need a separate line for each with it's associated 1st and 2nd column. If any help, the starting one liner from above questions is: perl -lne 'if(/^(.*?: )(.*?)(\W*)$/){print"$1$_$3"for split/, /,$2}' But I have no idea which bits needs to be changed to work for my issue! Many thanks in advance for help.
This awk command is quite readable: awk ' BEGIN {FS = "[,\t]"; OFS = "\t"} {for (i=3; i<=NF; i++) print $1, $2, $i} ' file In perl, this is perl -F'[,\t]' -lane 'print join "\t", @F[0,1], $F[$_] for 2..$#F' file # or perl -F'[,\t]' -slane 'print @F[0,1], $F[$_] for 2..$#F' -- -,=$'\t' file If you're not sure you have actual tab characters: awk: FS = ",|[[:blank:]]+" perl: -F',|\s+' And for fun, bash while IFS= read -r line; do prefix=${line%%GO:*} IFS=, read -ra gos <<< "${line#$prefix}" for go in "${gos[@]}"; do echo "$prefix$go"; done done < file This version doesn't care about spaces versus tabs, but it will be much slower than perl or awk.
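As a sanity check, the awk version can be tried on a throwaway copy of the data (the file names and the two sample rows here are made up for the demo):

```shell
tmp=$(mktemp -d)
# Two sample rows, tab-separated, with comma-separated GO terms in column 3.
printf 'TRINITY_DN1_c0_g1\tDN1_c0_g1\tGO:0000166,GO:0003674\n' >  "$tmp/sample.tsv"
printf 'TRINITY_DN10_c0_g1\tDN10_c0_g1\tGO:0050896\n'          >> "$tmp/sample.tsv"

# Split on tabs AND commas, then emit one line per GO term.
awk 'BEGIN {FS = "[,\t]"; OFS = "\t"}
     {for (i = 3; i <= NF; i++) print $1, $2, $i}' "$tmp/sample.tsv" > "$tmp/out.tsv"

cat "$tmp/out.tsv"
```

The first input row fans out into two output lines, the second into one, which is exactly the desired "one GO term per line" shape.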
Expanding comma separated list in a tab-delimited file into separate lines
1,302,063,784,000
Apologies for this newbie question but I can't find the answer anywhere. If I run the command systemctl in a SSH terminal accessing an Ubuntu VM in Azure, then it ends with lines 159-187/187 (END), it doesn't return the control and I don't know what key to press to let it continue and finish. I can press Ctrl-C to cancel, but that's probably not the correct way. ........... systemd-tmpfiles-clean.timer loaded active waiting Daily Cleanup of Temporary LOAD = Reflects whether the unit definition was properly loaded. ACTIVE = The high-level unit activation state, i.e. generalization of SUB. SUB = The low-level unit activation state, values depend on unit type. 179 loaded units listed. Pass --all to see loaded but inactive units, too. To show all installed unit files use 'systemctl list-unit-files'. lines 159-187/187 (END)
This isn't a general thing, it is specific to the systemctl command. Some commands, like systemctl, automatically open a pager (programs like less or more) for their output. That's why you can press space to load the next page of output. To exit the pager, just press q. You can also use Ctrl+C though, it doesn't really make much difference.
Return control after command ends in Ubuntu
1,302,063,784,000
I have custom script that takes: optional arguments in the short/long format one required command line argument the short/long command line options are for example: -r, --readonly -m, --mount for the one required arguments, these are actually specified in the script as case statements, ie foo and bar in this example: case $1 in foo ) : ;; bar ) : ;; How can I create zsh completion for my script, so that optional arguments are completed when argument starts with -, and required arguments are completed taken from my script case statement? UPDATE: this is what I have so far, inspired by the answer from @Marlon Richert. Suppose my custom script is called myscript.sh and the completion rule I have created is in /usr/share/zsh/functions/Completion/Unix/_myscript.sh: #compdef myscript.sh _myscript () { local -a args args+=( {-r,--readonly}'[description for "readonly"]' {-m,--mount}'[description for "mount"]' ) _arguments $args && return } _myscript "$@" My script myscript.sh itself is located in /usr/local/bin/myscript.sh. So now when i have the optional arguments -r and -m taken care of, I need to modify my completion rules, so that for the required command line argument of my script, the items from the case statement from /usr/local/bin/myscript.sh are offered as completions. Also, I am not sure if the syntax of the block starting on line 6 with args+=( in my completion script is correct. Where do I have to put the single quotes?
Let's suppose the function for which you want to define completions is called myfunc. First, let's set up the actual function: Put your function in a file called myfunc. Note: This does not end in .zsh or .sh. When you put a function in its own file, you do not need to add funcname() {…} boilerplate. Make sure that the dir containing the file myfunc is in your $fpath. For example, if the file myfunc is located in ~/Functions, add this to your ~/.zshrc file: fpath+=( ~/Functions ) Finally, autoload myfunc in your ~/.zshrc file: # We pass a couple of options that make the code # less likely to break: # -U suppresses alias expansion # -z marks the function for zsh-style autoloading == # `unsetopt KSH_AUTOLOAD` autoload -Uz myfunc You should now be able to use myfunc on the command line (but without any completions yet). Next, let's create the completion function: Create a file called _myfunc. Put into this file: #compdef myfunc # The line above means "This function generates # completions for myfunc." # The combination of that line, plus the file name # starting with an `_`, plus having this file's # parent dir in your `$fpath`, ensures this file # will be autoloaded when you call `compinit`. # `+X` makes sure `myfunc`'s definition will get # loaded immediately, even if you have not called # this function yet. autoload +X -Uz myfunc # Get the definition of `myfunc` in string form. local funcdef="$( type -f myfunc )" # Get the part that matches `case*esac`, then split # it on whitespace and put the resulting words in an # array. local -a words=( ${=funcdef[(r)case,(r)esac]} ) # Keep only the words that start with `(` and end # with `)`. # Even if you used the `case` syntax with only the # closing `)`s, `type -f` will show your cases with # both `(` and `)`. local -a required=( ${(M)words:#'('*')'} ) # `-s`: Allow options to `myfunc ` to be stacked, # that is, you are allowed to specify `myfunc -rm`. # If not, remove the `-s` option. 
# `*:`: Let this argument be completed in any # position. _arguments -s \ {-r,--readonly}'[description for "readonly"]' \ {-m,--mount}'[description for "mount"]' \ "*:required argument:( ${required//[()]/} )" Replace required argument with whatever you want to call your argument. Fill out descriptions for the options. Again, make sure the directory in which this file is located is in your $fpath. Make sure you do autoload -Uz compinit; compinit in your .zshrc file and make sure it runs after the dir above has been added to your $fpath. Restart your shell with exec zsh or close your terminal window and open a new one. You now should be able to get completion for myfunc. If readonly and mount are mutually exclusive, you'll need to rewrite the last line of the completion function as follows (note that each exclusion list must be quoted, or zsh will try to interpret the parentheses): _arguments \ '(-m --mount)'{-r,--readonly}'[description for "readonly"]' \ '(-r --readonly)'{-m,--mount}'[description for "mount"]' \ "*:required argument:( ${required//[()]/} )"
zsh completion for custom script: complete options from "case" statement
1,302,063,784,000
When I run the info command I get bash: info: command not found How can I solve this?
You should install the info package: sudo apt install info
Install "info" in Debian
1,302,063,784,000
I use this dd command for checking disk speed: dd if=/dev/zero of=/path/file bs=1G count=1 oflag=direct which gives back something like this: 1+0 records in 1+0 records out 1073741824 bytes (1,1 GB, 1,0 GiB) copied, 8,52315 s, 126 MB/s Now I would like to pipe this output, not the file dd is writing, but to a separate file. I tried adding >> /tmp/foo or | sudo tee /tmp/foo to the dd command, but that just creates an empty file.
To be able to insert dd in a pipeline before or after another command, its informational messages are written to standard error rather than to standard output. The OpenBSD manual for dd explicitly mentions this (the Ubuntu manual seems to omit this fact, but it is mentioned in the more complete info page): When finished, dd displays the number of complete and partial input and output blocks and truncated input records to the standard error output. To redirect standard error from a command, use 2>filename. To append the standard error stream to an already existing file without truncating it, use 2>>filename. For example: dd if=/dev/zero of=/path/file bs=1G count=1 oflag=direct 2>dd.txt Note that you mix appending output in the first of your examples (using >>) with truncating output in your second example (using tee). To append to a file with tee, use tee -a.
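A quick way to convince yourself that the summary goes to standard error is to capture the two streams into separate files; a tiny copy to /dev/null is enough (no oflag=direct or big file needed for the demo):

```shell
tmp=$(mktemp -d)
# One 512-byte block is enough to produce the status summary.
dd if=/dev/zero of=/dev/null bs=512 count=1 >"$tmp/out.log" 2>"$tmp/err.log"

# The "records in/out" lines landed in err.log; out.log stays empty.
cat "$tmp/err.log"
```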
How to write dd status/result message to a file?
1,302,063,784,000
The nmcli command below to connect to WiFi doesn't connect if the ssid name has more than one word: nmcli device wifi connect my homewifi password mypass NOTE: SSID name: my homewifi (bad, since it has 2 words) SSID name: my (good, since it only has 1 word) Connecting with a one-word ssid name works, but multiple words fail. Why?
If there are spaces in the command line, you should use quotes: nmcli device wifi connect "my homewifi" password mypass This will let the shell and nmcli know that this is to be considered as one word.
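The effect of the quotes is purely a shell-level one, so it can be demonstrated without touching nmcli at all. Here a small hypothetical stand-in function just counts the arguments it receives:

```shell
# Hypothetical stand-in for nmcli: report how many arguments arrived.
count_args() { echo "$#"; }

count_args device wifi connect my homewifi password mypass    # SSID split into two words
count_args device wifi connect "my homewifi" password mypass  # SSID stays one word
```

Without the quotes the command receives seven arguments (the SSID arrives as two separate words, so nmcli treats "homewifi" as a stray extra argument); with the quotes it receives six, and the SSID stays intact.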
nmcli command takes only first string of ssid? [closed]
1,302,063,784,000
I just ran a job that takes several hours, and I forgot to pipe that text into a text file. pseudocode: echo [previous text output] > OutputHistory.txt Additionally, I can't just "copy and paste" what was in my terminal because 1) the display omits important formatting characters like "\t", and 2) I may have closed the terminal window. Is this possible with any Unix commands?
This is impossible in general. Once an application has emitted some output, the only place where this output is stored is in the memory of the terminal. To give an extreme example, if this is the 1970s and the terminal is a hardcopy printer, the output isn't getting back into the computer without somebody typing it in. If the output is still in the scrolling buffer of your terminal emulator, you may be able to get it back, but how to do so depends on the terminal; there's no standard way. Tabs may or may not have been converted into spaces at that point. Whether formatting information (colors, bold, etc.) can be retrieved and in what format depends on the terminal. With most terminals, there's no easy or foolproof way to find where the command's output started and ended. If you plan in advance, you can record the command's output transparently with script. Running script mycommand.log mycommand may be different from mycommand 2>&1 | tee mycommand.log because with script, the command is still writing to a terminal. A compromise is to always run long-lived commands inside screen or tmux. Both have a way to dump the scrollback buffer to a file, and they have the added bonus that you can disconnect from the session without disrupting the program's execution and reconnect afterwards.
How would I get my terminal to regurgitate the previous output text from past commands? Is this even possible?
1,302,063,784,000
How do I double each line of input piped in? Example: echo "foobar" | MYSTERY_COMMAND foobar foobar
Just use sed p. echo foobar | sed p You don't need cat, either: sed p input.txt # or sed p input.txt > output.txt Explanation p is the sed command for "print." Print is also sed's default action. So when you tell sed explicitly to print, the result is that it prints every line twice. Let's say you wanted to only print lines that include the word "kumquat." You can use -n to suppress sed's default action of printing, and then tell it explicitly to print lines that match /kumquat/: sed -n /kumquat/p Or if you only want to print the 5th line and nothing else: sed -n 5p Or, if you want to print every line from the 15th line to the 27th line: sed -n 15,27p If you want to print every line except lines which contain "kumquat," you could do this by just deleting all the kumquat lines, and letting sed's default action of printing the line take place on non-deleted lines. You wouldn't need the -n flag or an explicit p command: sed /kumquat/d sed works on a simple pattern/action syntax. In the above examples, I've shown line-number-based patterns and regex-based patterns, and just two actions (print and delete). sed has a lot more power than that. I should really include the most common and useful sed command there is: sed s/apples/oranges/g This replaces every instance of "apples" with "oranges" and prints the result. (If you omit the global flag to the substitute command, only the first instance on every line will be changed.) Further reading (highly recommended): Sed - An Introduction and Tutorial
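The doubling behaviour is easy to verify on a two-line input:

```shell
# Default print + explicit p = every line appears twice.
doubled=$(printf 'foo\nbar\n' | sed p)
printf '%s\n' "$doubled"
```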
How do I double each line of piped output? [duplicate]
1,302,063,784,000
Is it possible to get an alternative to the dd command to burn an iso image, with some options: detect the burning device, burn an iso image and eject the CD, all from the command line?
In KDE the reference CD burning software is K3b, which is packaged in Debian as k3b. On the command-line you'd probably use cdrkit (the main package is called wodim).
How to burn an iso image from the command line
1,302,063,784,000
I want to create a file of fixed size (1G, 10G, 100G etc) with a single random word of length within the specified limit on every line. I basically want this to run a benchmark which will sort the entire file. So if I want a file of 1G and the word length limit is suppose 4, the sample of file would look like this: a bc def ghij Here the words' length will be within 1-4 and it won't exceed 4 and this file will eventually have the size of 1G NOTE: The word can be of a fixed size too. It won't be a problem. How will I be able to do this?
My understanding of the question is that, you need to create a large file, each line of this file is a random word within specified length. If you don't need the word to be a real word, but some random characters: < /dev/urandom tr -d -c '[:alpha:]'|head -c 1M|fold -w10 >result.txt This will create a file of size 1M and each line with 10 random characters.
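Scaled down to 1 KiB so it runs instantly, the same pipeline looks like this. One caveat worth noting: fold inserts newlines, so the final file ends up slightly larger than the head -c target; if the size limit must be exact, account for that newline overhead.

```shell
tmp=$(mktemp -d)
# 1024 random alphabetic characters, wrapped to at most 10 per line.
< /dev/urandom tr -dc '[:alpha:]' | head -c 1024 | fold -w 10 > "$tmp/result.txt"

wc -c < "$tmp/result.txt"   # a bit over 1024 because of fold's newlines
```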
Create a file of fixed size with specific contents
1,302,063,784,000
Attempt 1 xargs -I '{}' -0 -n 1 myprogram --arg '{}' --other-options But that does not preserve zero bytes. Also the program may run multiple times. But instead of failing in case of zero byte creeping into stdin, it runs program multiple times. Attempt 2 myprogram --arg "`cat`" --other-options But that does not preserve trailing whitespace. Attempt 3 bash -c 'read -r -d "" INPUT ; myprogram --arg "$INPUT" --other-options' Seems to mess with terminal, also fails to preserve trailing whitespace. How do I do it properly, reliably, readably, compatibly?
It is impossible to have NUL bytes in command line arguments, so the question is what do you want to happen in case there are NUL bytes in the standard input. As you've noted, your candidate solution #1 runs the command multiple times in this case. That's not ideal. But there is no ideal solution that lets you handle true binary input. As I see it, your only other reasonable options here are to: delete the NUL bytes and proceed insert tr -d '\0' | before xargs translate the NUL bytes to something else and proceed insert tr '\0' something-else | before xargs (if something-else is a single byte) abort and bail in case there are NUL bytes with bash or ksh93 (except if the input contains a single null byte at the end, in which case it is silently deleted): { read -r -d '' input; if [ "$(wc -c)" = 0 ]; then printf %s "$input" | xargs …; else echo 1>&2 "Null bytes detected, bailing out" exit 2 fi } with zsh (and not with other shells such as bash, ksh or dash): input=$(<&0) if [[ $input != *$'\0'* ]]; then printf %s "$input" | xargs …; else echo 1>&2 "Null bytes detected, bailing out" exit 2 fi Or use a temporary file. truncate the input after the first NUL byte insert tr '\0\n' '\n\0' | head -n 1 | tr '\0\n' '\n\0' before xargs (assuming your head is null-safe)
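Here is the first option (delete the NUL bytes) in miniature, with a hypothetical stand-in for myprogram that just reports the length of its single argument; the embedded space survives, the NUL is gone, and everything arrives as one argument:

```shell
# Hypothetical stand-in for myprogram: print the length of its first argument.
arg_len() { printf '%s\n' "${#1}"; }

# Input with an embedded NUL byte; strip it, then pass everything as ONE argument.
input=$(printf 'foo\0bar baz' | tr -d '\0')
arg_len "$input"
```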
How do I turn entire stdin into a command line argument verbatim?
1,302,063,784,000
I have a text file which contains around 9999999 lines. Here I'm pasting the few lines: 1874641047 Gazipur 1874646347 Jessore 1845105653 Chittagong 1845146123 Narayanganj 1845164162 Gazipur 1843908007 Jessore Here 1st column contains cell phone numbers & 2nd column contains regions. I wanted to write those data in a text files region wise, as like: Gazipur.txt: 1874641047 Gazipur 1845164162 Gazipur Jessore.txt: 1874646347 Jessore 1843908007 Jessore Chittagong.txt: 1845105653 Chittagong Narayanganj.txt: 1845146123 Narayanganj How can I do this in Linux terminal? Is there any way to do this like awk, comm, diff commands?
You can use awk: awk '{print > $2".txt"}' input-file It redirects the output to a filename made from the second field.
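Worked through on a three-line sample in a scratch directory. The redirection target is parenthesised here, which some non-GNU awks require for the concatenation:

```shell
tmp=$(mktemp -d)
cd "$tmp"
printf '1874641047 Gazipur\n1845105653 Chittagong\n1845164162 Gazipur\n' > input-file

# One output file per region; parentheses keep older awks happy.
awk '{print > ($2 ".txt")}' input-file

cat Gazipur.txt
```

After the run, Gazipur.txt holds the two Gazipur rows and Chittagong.txt holds the single Chittagong row.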
How can I write data separately in many text files which contain the same fields?
1,302,063,784,000
How can I list the contents of the current directory, or of any directory path, without using the ls command? Can we do it using the echo command?
printf '%s\n' * as a shell command will list the non-hidden files in the current directory, one per line. If there's no non-hidden file, it will display * alone except in those shells where that issue has been fixed (csh, tcsh, fish, zsh, bash -O failglob). echo * Will list the non-hidden files separated by space characters except (depending on the shell/echo implementation) when the first file name starts with - or file names contain backslash characters. It's important to note that it's the shell expanding that * into the list of files before passing it to the command. You can use any command here, like head -- * to display the first few lines (with those head implementations that accept several files), stat -- *... If you want to include hidden files: printf '%s\n' .* * (depending on the shell, that will also include . and ..). With zsh: printf '%s\n' *(D) Among the other applications (beside shell globs and ls) that can list the content of a directory, there's also find: find . ! -name . -prune (includes hidden files except . and ..). On Linux, lsattr (lists the Linux extended file attributes): lsattr lsattr -a # to include hidden files like with ls
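In a scratch directory the point is easy to see: printf itself never looks at the directory; the shell's glob expansion supplies the names, and dotfiles are skipped by default:

```shell
tmp=$(mktemp -d)
cd "$tmp"
touch alpha beta .hidden

# Only the non-hidden names show up, one per line, no ls involved.
listing=$(printf '%s\n' *)
printf '%s\n' "$listing"
```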
What is the alternative for ls command in linux? [duplicate]
1,302,063,784,000
When I use the code below in a terminal on Ubuntu, it works fine: rm !(*.sh) -rf But if I place the same line code in a shell script (clean.sh) and run shell script from terminal, it throws error as this clean.sh script: #!/bin/bash rm !(*.sh) -rf Error I get: ./clean.sh: line 2: syntax error near unexpected token `(' ./clean.sh: line 2: `rm !(*.sh) -rf' can you help?
Shell Script Approach By default, globs don't work in a BASH script (although you can turn them on with shopt). If the shell script ever gets run by a non-BASH interpreter, globs might not work at all. You can get the same effect using the find command, which is how I'd recommend doing it (because of how much more control you can have once your requirements grow). Try this on for size: find . -mindepth 1 -maxdepth 1 -not -iname "*.sh" -exec rm -rf {} \; Makefile Approach Another approach you could take: If you're doing something that wants to be cleaned afterwards, there's a good chance that a Makefile is the right tool for the job, rather than a bunch of clean.sh, build.sh, install.sh, etc. This is especially true if you want to make sure your recipes always happen in order, or if you don't always want to re-run the recipes that produce an output. A simple Makefile that did the same thing would look like this: (note that the whitespace before rm needs to be a tab because that's how make rolls) SOURCES := $(wildcard *.sh) CLEAN_FILES := $(filter-out $(SOURCES),$(wildcard *)) CLEAN_FILES := $(filter-out Makefile,$(CLEAN_FILES)) build: $(SOURCES) YOUR_RECIPE_HERE clean: rm -rf $(CLEAN_FILES)
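A portable sketch of the find approach, using ! instead of GNU's -not and -exec ... + to batch the removals, tried out on dummy file names:

```shell
tmp=$(mktemp -d)
cd "$tmp"
touch keep.sh also.sh junk.txt junk.log

# Remove everything in this directory except *.sh files.
find . -mindepth 1 -maxdepth 1 ! -iname '*.sh' -exec rm -rf {} +

ls
```

Only keep.sh and also.sh survive; the two junk files are gone.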
!(*.sh) works on the command line but not in a script
1,302,063,784,000
I know some methods for stopping a process. When I type: echo {1..999999} > filename.txt I can't stop it from running. I can stop other processes with Ctrl+C | Ctrl+D | Ctrl+\ etc. But none of them seem to be working with this command. Some people told me to simply close the terminal, or to open a new terminal and kill all terminal processes, but I don't want to do it this way. On a server I won't have the chance to open a new terminal, I think. Can I open a new terminal session in text-mode?
The reason why you cannot interrupt that with Ctrl-c etc... is that the shell isn't running any command at that point. It's busy expanding {1..999999} to compute what the command line arguments will eventually be once it gets to the point of running the command. While external commands respond to termination signals like SIGINT (which is emitted by default when you press Ctrl-c), shells themselves ignore them. If they didn't, then when you pressed Ctrl-c, in addition to killing whatever command happened to be running, you'd also kill the shell itself! (This is not quite true because of tty job control and foreground and background process groups, but close enough for the purpose of this explanation.) If you need to interrupt it, there is unfortunately nothing you can do but kill the shell itself. Killing the shell itself will cause your session to terminate. In that sense it's largely equivalent to closing the terminal window or terminating the SSH connection.
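If the goal was just to write the numbers to a file, a streaming generator such as seq (part of GNU coreutils) avoids the problem entirely: because the output is produced incrementally by an external command, Ctrl-C can interrupt it at any point, and no giant argument list is ever built in the shell.

```shell
# seq writes as it goes, so Ctrl-C works; no brace expansion in the shell.
seq 1 999999 > /dev/null

# Same numbers at a small scale, just to show the shape of the output:
joined=$(seq 1 5 | xargs)
echo "$joined"
```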
How do you stop a bash shell expansion?
1,302,063,784,000
Comparing grep '.*[s]' file with grep .*[s] file Why do you need quotation marks to let this work properly? In the second case, grep seems to be trying to inspect every file with a period.
Quotes (either single or double) around an argument inhibit glob expansion. Your first example passes a Regular Expression as an argument to grep. Your second example contains a glob pattern which the shell itself expands, passing filenames that fit that pattern as arguments to grep.
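This is easy to watch in a scratch directory: the same word, quoted and unquoted, reaches the command in two very different forms.

```shell
tmp=$(mktemp -d)
cd "$tmp"
touch notes.txt tasks.txt

unquoted=$(echo *.txt)    # shell expands the glob before echo runs
quoted=$(echo '*.txt')    # pattern is passed through literally

echo "$unquoted"
echo "$quoted"
```

The same thing happens with grep: unquoted, the shell may replace the pattern with matching filenames before grep ever sees it.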
Why does "grep '.*[s]' file" work and "grep .*[s] file" doesn't?
1,302,063,784,000
I'm tarring up /home, and piping it through bzip2. However, I've got lots of already-compressed files out there (.jpg, .mp4, .mkv, .webm, etc) which bzip2 shouldn't try to compress. Are there any CLI compressors out there that are smart enough (either via libmagic or the user enumerating extensions) not to try to back up un- or minimally-compressible files? A similar question was asked a few years ago, but don't know if there have been any updates since then. Can I command 7z to skip compression (but not inclusion) of specific files while compressing a directory with its subs?
The way you are doing this, compressing a .tar file, the answer is for sure no. Whatever you use for compressing the .tar file, it doesn't know about the contents of the file, it just sees a binary stream, and whether parts of that stream are incompressible, or minimally compressible, there is no way this is known. Don't be confused by the options for the tar command to do the compression: tar --create --xz --file some.tar file1 is just as "dumb" about the stream contents as doing tar --create file1 | xz > some.tar is. You can do multiple things: you switch to some container format other than .tar which allows you to compress on an individual basis, but this is unfavourable if you have lots of small files in one directory that have similar patterns (as they get compressed individually). The zip format is an example that would work. you compress the files, if appropriate, before putting them in the tar file. This can be done transparently with e.g. the python tarfile and bzip2 modules. This also has the disadvantages of point 1. And there is no straight extraction from the tar file, as some files will come out compressed that might not need decompression (as they already were compressed before backup). Use tar as is and live with the fact that this happens, and select a not-so-high compression level for gzip/bzip2/xz so that they will not try too hard to compress the stream, thereby not wasting time on trying to get another 0.5% compression which is not going to happen. You might want to look at the results of parallelizing xz compression (not specific to tar files), to see some results of trying to speed up xz, as published on my blog
Tell gzip/bzip2/7z/etc not to compress already-compressed files?
1,302,063,784,000
I have in a directory /home/files/ some files that all have the same extension .txt. For example 1.txt 2.txt 3.txt 4.txt 5.txt test.txt These files are all the same in data format but the data are different. All these files have at least the following line inside frames=[number here] This line appears multiple times in a file so i need to get the value of frames= that appears last in the file. And as i stated i want to get that line, from all the above files that match the extension .txt in a directory (no need for recursive). Is there any single bash command to do that? EDIT Expected output: 1.txt=5000 2.txt=123 3.txt=3452 4.txt=111 5.txt=958 test.txt=1231
With find + grep: find . -name '*.txt' -exec sh -c 'grep -HPo "frames=\K.*" "$0" | tail -n1' {} \; And with shell for loop + grep in similar fashion: for file in *.txt; do grep -HPo 'frames=\K.*' "$file" | tail -n1; done
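Since grep -P is a GNU extension, a sed-based variant does the same "last frames= value per file" job portably. Here it is run over a small fabricated sample (file names and values invented for the demo):

```shell
tmp=$(mktemp -d)
printf 'frames=100\nnoise\nframes=5000\n' > "$tmp/1.txt"
printf 'frames=7\nframes=123\n'           > "$tmp/2.txt"

# For each file: keep only what follows "frames=", then take the last hit.
result=$(for file in "$tmp"/*.txt; do
    printf '%s=%s\n' "$(basename "$file")" \
        "$(sed -n 's/.*frames=//p' "$file" | tail -n 1)"
done)
printf '%s\n' "$result"
```

This produces exactly the "filename=value" shape asked for in the question.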
Fastest way to read last line within pattern of multiple files with the same extension
1,302,063,784,000
I have a nice PS1 line in my .bash_profile, and I want to copy it to another machine. So I want to view it AND copy it to my clipboard. I can't figure out how to string the commands to do this together. I imagine what I need to do is grep for my PS1 line, pipe that to tee, then tee goes to stdout and also to pbcopy (binary to copy the line to my clipboard). So far I have: grep PS1 .bash_profile | tee [what do I put here?] | pbcopy and unfortunately I'm just confusing myself, can't figure out how to do this. How do I output to stdout AND to a binary at the same time?
... | tee /dev/tty | ... /dev/tty is the "file" that refers to your terminal.
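/dev/tty only exists when a terminal is attached, but the mechanics are the same with any second destination; here tee copies the stream into a regular file (standing in for the terminal) while the pipeline continues downstream:

```shell
tmp=$(mktemp -d)
# tee duplicates its input: one copy to the file, one copy onward in the pipe.
piped=$(printf 'PS1=demo\n' | tee "$tmp/copy.txt" | tr 'a-z' 'A-Z')

echo "$piped"
cat "$tmp/copy.txt"
```

Swap "$tmp/copy.txt" for /dev/tty in an interactive shell and the untouched copy appears on screen while the pipe feeds pbcopy.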
Redirect output to stdout and pipe to a binary
1,302,063,784,000
I want to display the number of lines, words and characters, in a file, in separate lines? I don't know anymore than using wc filename for this.
You can use tr: wc filename | tr ' ' '\n', or if you just want the numbers: wc filename | tr ' ' '\n' | head -3
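One caveat: when wc pads its columns with extra spaces, tr turns each padding space into an empty output line. An awk variant sidesteps that, because awk splits on runs of whitespace:

```shell
tmp=$(mktemp -d)
printf 'a b\nc d\n' > "$tmp/sample.txt"

# Lines, words, characters on three separate lines, padding-proof.
counts=$(wc "$tmp/sample.txt" | awk '{print $1; print $2; print $3}')
printf '%s\n' "$counts"
```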
How to display the number of lines, words and characters in separate lines?
1,302,063,784,000
I want to rename my file with its substring.Because unfortunately renamed all the files in my server. Now I want to remove suffix( .gz) of all the files including files in subdirectories also. Below is avaliable files with extra .gz. # pwd /usr/apache-tomcat-6.0.36/webapps/rdnqa/WEB-INF/classes # ls META-INF jbpm.hibernate.cfg.xml.gz jbpm.jpdl.cfg.xml.gz jpdl-4.0.xsd.gz config jbpm.hibernate.cfg.xml.mirror.gz jbpm.mail.templates.xml.gz jpdl-4.2.xsd.gz ecnet jbpm.hibernate.cfg.xml.staging.gz jbpm.repository.hbm.xml.gz jpdl-4.3.xsd.gz hibernate.queries.hbm.xml.gz jbpm.history.hbm.xml.gz jbpm.task.hbm.xml.gz jpdl-4.4.xsd.gz jbpm.businesscalendar.cfg.xml.gz jbpm.identity.cfg.xml.gz jbpm.task.lifecycle.xml.gz labels jbpm.cfg.xml.gz jbpm.identity.hbm.xml.gz jbpm.tx.hibernate.cfg.xml.gz log4j.properties.gz jbpm.console.cfg.xml.gz jbpm.jboss.idm.cfg.xml.gz jbpm.tx.jta.cfg.xml.gz nohup.out.gz jbpm.default.cfg.xml.gz jbpm.jbossremote.cfg.xml.gz jbpm.tx.spring.cfg.xml.gz jbpm.default.scriptmanager.xml.gz jbpm.jobexecutor.cfg.xml.gz jbpm.variable.types.xml.gz jbpm.execution.hbm.xml.gz jbpm.jpdl.bindings.xml.gz jbpm.wire.bindings.xml.gz # cd ecnet # ls core jms rd util # cd core # ls util # cd util # pwd /usr/apache-tomcat-6.0.36/webapps/rdnqa/WEB-INF/classes/ecnet/core/util # ls GridService.class.gz MDPDFXMLParser.class.gz PDFColumn.class.gz PDFXMLParser.class.gz GridService.java.gz MDPDFXMLParser.java.gz PDFColumn.java.gz PDFXMLParser.java.gz MDExcelXmlParser.class.gz MasterDetailsPrintWriter.class.gz PDFRow.class.gz RGBColor.class.gz MDExcelXmlParser.java.gz MasterDetailsPrintWriter.java.gz PDFRow.java.gz RGBColor.java.gz MDExcleWriter.class.gz PDFCell.class.gz PDFWriter.class.gz xml2excel MDExcleWriter.java.gz PDFCell.java.gz PDFWriter.java.gz #
You can use rename (it's designed for that). Just execute this command in the folder where the *.gz file are: rename -n 's/\.gz$//' *.gz This removed the .gz extension from all files that have a .gz extension. Output should look like this: hibernate.queries.hbm.xml.gz renamed as hibernate.queries.hbm.xml jbpm.businesscalendar.cfg.xml.gz renamed as jbpm.businesscalendar.cfg.xml jbpm.cfg.xml.gz renamed as jbpm.cfg.xml ... Note: If the output is as desired, execute the command without the -n switch. That switch causes rename not to act, just show what files would have been renamed.
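Note that rename(1) differs between distributions (the Perl version shown above versus util-linux's rename, which takes different arguments), so a plain shell loop is a dependable fallback. Shown here on dummy files borrowed from the question:

```shell
tmp=$(mktemp -d)
cd "$tmp"
touch jbpm.cfg.xml.gz log4j.properties.gz

# Strip the trailing .gz from every matching name in this directory.
for f in *.gz; do
    mv -- "$f" "${f%.gz}"
done

ls
```

To cover the subdirectories as well, the same idea can be driven by find, e.g. looping over the output of find . -name '*.gz'.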
How to rename file using substring of the same file name [duplicate]
1,302,063,784,000
I want to download the source files for a webpage which is a database search engine. Using curl I'm only able to download the main html page. I would also like to download all the javascript files, css files, and php files that are linked to the webpage and mentioned in the main html page. Is this possible to do using curl/wget or some other utility?
First of all, you should check with the website operator that this is an acceptable use of their service. After that, you can do something like this:

wget -pk example.com

-p gets the requisites to view the page (the JavaScript, CSS, etc.). -k converts the links on the page to those that can be used for local viewing. From man wget:

-p, --page-requisites
    This option causes Wget to download all the files that are necessary to properly display a given HTML page. This includes such things as inlined images, sounds, and referenced stylesheets.
[...]
-k, --convert-links
    After the download is complete, convert the links in the document to make them suitable for local viewing. This affects not only the visible hyperlinks, but any part of the document that links to external content, such as embedded images, links to style sheets, hyperlinks to non-HTML content, etc.
Download all source files for a webpage
1,302,063,784,000
I have a directory in Linux which has a list of log files, where log files get auto-generated when some job runs. Each log file name gets appended with a timestamp, like "JobName_TimeStamp". UPDATED:

job_2014-05-28_15:05:26.log
job_2014-05-28_15:06:58.log
job_2014-05-28_15:07:02.log
job_2014-05-28_15:07:57.log
job_2014-05-28_15:08:00.log
job_2014-05-28_15:08:01.log
job_2014-05-28_15:08:09.log
job_2014-05-28_15:08:10.log
job_2014-05-28_15:08:11.log
job_2014-05-28_15:08:12.log
job_2014-05-28_15:08:13.log
job_2014-05-28_15:08:14.log
job_2014-05-28_15:08:22.log
job1_2014-05-28_15:08:11.log
job1_2014-05-28_15:08:12.log
job1_2014-05-28_15:08:13.log
job1_2014-05-28_15:08:14.log
job1_2014-05-28_15:08:22.log

I want to run a Linux command to list all files greater than a particular timestamp. Example 1: I will pass two parameters. If the timestamp given is "2014-05-28_15:08:00" and the job name is "job", I should get the list:

job_2014-05-28_15:08:01.log
job_2014-05-28_15:08:09.log
job_2014-05-28_15:08:10.log
job_2014-05-28_15:08:11.log
job_2014-05-28_15:08:12.log
job_2014-05-28_15:08:13.log
job_2014-05-28_15:08:14.log
job_2014-05-28_15:08:22.log

Example 2: I will pass two parameters. If the timestamp given is "2014-05-28_15:08:11" and the job name is "job1", I should get the list:

job1_2014-05-28_15:08:12.log
job1_2014-05-28_15:08:13.log
job1_2014-05-28_15:08:14.log
job1_2014-05-28_15:08:22.log

Any solutions?
The below code worked:

for i in myjob*; do
    if [[ "myjob_2014-05-28_15:08:00.log" < "$i" ]]; then
        echo "$i"
    fi
done
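A parameterised sketch of the same idea, run against a throwaway directory. It leans on the fact that the YYYY-MM-DD_HH:MM:SS timestamp format sorts correctly as a plain string, and job/cutoff stand in for the two parameters you'd pass:

```shell
tmpdir=$(mktemp -d)
cd "$tmpdir"
touch job1_2014-05-28_15:08:10.log job1_2014-05-28_15:08:12.log \
      job1_2014-05-28_15:08:22.log job_2014-05-28_15:08:13.log

job=job1                        # the job-name parameter
cutoff=2014-05-28_15:08:11      # the timestamp parameter

# Collect files for this job whose name sorts after the cutoff name.
matches=$(for f in "${job}"_*.log; do
    [ "$f" \> "${job}_${cutoff}.log" ] && printf '%s\n' "$f"
done)
printf '%s\n' "$matches"
```

The glob "${job}"_*.log keeps files of other jobs (here job_...) out of the comparison.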
Listing files greater than particular timestamp in file name?
1,302,063,784,000
I am trying to use cpulimit for testing an app I'm developing under low resource conditions, and I need the process to start under the influence of cpulimit. It is not sufficient to start the program and later apply cpulimit. The example on the cpulimit page does not work for me. The example is this:

cpulimit --limit 40 /etc/rc.d/rc.boinc start

And I'm doing this:

cpulimit --limit 40 a.out start
This is unrelated to cpulimit. Running a.out directly on the command line wouldn't have worked either. When you execute a program without specifying any directory component, the program is looked up in the PATH. The current directory is normally not in the PATH, so you need to give an explicit directory indication:

cpulimit -l 40 -- ./a.out start

It's also generally good practice to end the options with --, so that cpulimit, or whatever command, won't misinterpret what comes after it as an option when it's actually part of a file name or an option to a different program.
How can I start a process using cpulimit?
1,302,063,784,000
I'm currently writing a shell script that separates values from their identifiers (retrieved from grep). For example, if I grep a certain file, I will retrieve the following information:

value1 = 1
value2 = 74
value3 = 27

I'm wondering what UNIX command I can use to take in the information and convert it to this format:

1 74 27
You can use awk like this:

grep "pattern" file.txt | awk '{printf "%s ", $3}'

Depending on what you do with grep, you should consider using awk for the grepping itself:

awk '/pattern/{printf "%s ", $3}' file.txt

Another way, taking advantage of bash word splitting:

echo $(awk '/pattern/{print $3}' file.txt)

Edit: A more fun way to join the values:

awk '/pattern/{print $3}' file.txt | paste -sd " " -
How to separate numerical values from identifiers
1,302,063,784,000
I have a large log file that contains numerous lines of the same entry; let's call it "repeat-info". As an example, here is what a portion of the log might look like:

[Timestamp] repeat-info
[Timestamp] repeat-info
[Timestamp] Log information 1
[Timestamp] Log information 2
[Timestamp] repeat-info
[Timestamp] Log information 3
[Timestamp] repeat-info

Is there a command that can output the information in the log file and exclude the repeated information? It becomes a hassle if I have to use more file.log and sift through all the repeating information to find what I want to look at. I am reading through the man pages for sed and awk, as I saw those appear in searches for my question, but I haven't found anything conclusive that would do what I need. I was searching through the older questions and found this question, which is related to mine. I was looking for a way to do this with a single command, or two piped together, without having to create a script. Any help is appreciated!
There are a few ways to do this. Best would be grep:

grep -v 'repeat-info' file.log

Other ways:

sed '/repeat-info/d' file.log
sed -n '/repeat-info/!p' file.log
awk '!/repeat-info/' file.log
Searching a file & excluding lines with a specified string
1,302,063,784,000
I saw this usage of redirection somewhere, and thought it was a typo:

grep root < /etc/passwd

But after I ran it, I saw that it gives the same output as grep root /etc/passwd:

$ grep root < /etc/passwd
root:x:0:0:root:/root:/bin/bash
$ grep root /etc/passwd
root:x:0:0:root:/root:/bin/bash

The same thing happens with

cat < /etc/passwd
cat /etc/passwd

However, redirection is ignored when used with ls: ls < /etc/passwd does not print the same output as ls /etc/passwd. What is happening?
Many utilities which work with files will accept stdin (standard input) as streamed input, or accept the file name as a parameter. Your < file examples redirect the contents of the file to the utility's standard input. The file was opened by the shell and passed on to your utility via stdin. On the other hand, with cat file, cat is handling the opening and reading of file, and no redirection is involved. ls never reads a file, therefore it does not take a file name as a parameter with a view to opening and reading the file (it accepts file-name masks). For ls, the redirection action is, in effect, ignored because nothing in the process reads the shell-opened file. To determine how any utility behaves, just type man utility-name in the terminal. man is a contraction of manual; e.g., man cat presents you with cat's manual.
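A throwaway reproduction of the two grep invocations makes the distinction concrete (the file contents here are a made-up stand-in for /etc/passwd):

```shell
tmpfile=$(mktemp)
printf 'root:x:0:0:root:/root:/bin/bash\nnobody:x:65534:65534::/:/bin/false\n' > "$tmpfile"

# The shell opens the file; grep reads it from stdin.
from_stdin=$(grep root < "$tmpfile")
# grep opens the file itself.
from_arg=$(grep root "$tmpfile")
printf '%s\n' "$from_stdin"
```

The visible result is identical in both cases; only the mechanics of who opens the file differ.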
Removing redirection operator does not change output. Why?
1,302,063,784,000
Example: When I run

echo -e "\[\033[;33m\][\t \u (\#th) | \w]$\[\033[0m\]"

the printed response is

\[\][ \u (\#th) | \w]$\[\]

where everything after the first \[ and before the last \] is an orangey-brown. However, when I set the command prompt to

\[\033[;33m\][\t \u (\#th) | \w]$\[\033[0m\]

the command prompt is printed as

[21:55:17 {username} (89th) | {current directory path}]$

where the whole command prompt is the orangey-brown. In conclusion, my question is: can I have a command prompt design printed (with echo, cat, less, etc.) as if it were the command prompt?
In Bash 4.4+, you can use ${var@P}, similarly to how ${var@Q} produces the contents of var quoted; see 3.5.3 Shell Parameter Expansion in the manual, bottom of page.

$ printf "%s\n" "${var}"
\[\033[00;32m\]\w\[\033[00m\]\$
$ printf "%s\n" "${var@P}" | od -tx1z
0000000 01 1b 5b 30 30 3b 33 32 6d 02 2f 74 6d 70 01 1b  >..[00;32m./tmp..<
0000020 5b 30 30 6d 02 24 0a                             >[00m.$.<
0000027

or if you run the latter without od, you should see the current path in green. Note that it prints \001 and \002 for \[ and \]. Those probably aren't too useful for you. We could use a temporary variable and the string replace expansion to get rid of them:

$ printf -v pst "%s\n" "${var@P}"
$ pst=${pst//[$'\001\002']}
$ printf "%s\n" "$pst" | od -tx1z
0000000 1b 5b 30 30 3b 33 32 6d 2f 74 6d 70 1b 5b 30 30  >.[00;32m/tmp.[00<
0000020 6d 24 0a 0a                                      >m$..<
0000024

In Zsh, there's print -P:

$ print -P "%F{green}%d%F{normal}%#" | od -tx1z
0000000 1b 5b 33 32 6d 2f 74 6d 70 2f 61 61 61 1b 5b 33  >.[32m/tmp/aaa.[3<
0000020 39 6d 25 0a                                      >9m%.<

and the parameter expansion flag %, so ${(%)var} would be similar to Bash's ${var@P} above, and you could use either print -v othervar -P "..." or othervar=${(%)var} to put the resulting string in othervar.

Note that for things like Bash's \u and \w, you could just use $USER and $PWD instead, but for something like \# or \j that might not be so easy.
Is it possible to use the escape codes used in shell prompts elsewhere, such as with echo?
1,302,063,784,000
I am trying to empty my syslog.1 file which was flooded with some messages and has the size of 77 GB. I did sudo truncate -s 0 /var/log/syslog.1 but the command is taking more than 2 hours to return. Is it safe to stop it by Ctrl-C or by the kill command? I am afraid that these methods may cause inconsistency in the file system. Is there a better way? The system is Ubuntu 16.04. The root partition where /var/log/syslog.1 sits is almost full due to the sudden increase in size of this file as well as /var/log/syslog and /var/log/kern.log. The latter files are still continuing to grow, but the command line is still responsive.
Interrupting a process will never cause the filesystem itself to become corrupted¹. The kernel ensures this. The worst that can happen is that the files are in an inconsistent state with respect to application invariants. For example, killing a file editor while it's saving the file may leave a half-written file, but won't damage other files. The truncate utility calls the ftruncate system call under the hood. This system call is atomic: it either happens, or doesn't. So if you kill the truncate process, the end result is that either the file is in its original state as if truncate had not been run, or the file is truncated. You can't end up with a shortened but not fully truncated file. Truncating a file doesn't overwrite the data on the disk. It only updates the list of blocks used by the file and the list of free blocks. It doesn't update the data blocks themselves: they'll be overwritten when they're reclaimed to store data later. Even with a large file, that wouldn't take long. And even overwriting 77 GB of data wouldn't take hours on any hardware where you're likely to have room for 77 GB of logs. So it's likely that something bad is happening. You may have hit a pathological case with bad performance on a full disk, but even so I would expect that to slow the system down for seconds, or minutes if something else is writing to the disk with high priority, but not hours. A more likely possibility is that there's something wrong with the disk, and that it's mostly a coincidence that it's revealed now. Check the kernel logs: if there's something wrong with the disk, you'll see messages about it. Also check the disk status with smartctl. By the way, syslog.1 is an archive file, so nothing else is going to write to it. You might as well remove it rather than make it empty. ¹ Unless there's a kernel or hardware bug, but that could happen regardless of whether you kill a process.
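To see how cheap truncation normally is, here is a small sketch; a 1 MiB throwaway file stands in for the 77 GB log, and truncate here is the GNU coreutils utility. Because only the block list is rewritten, not the data blocks, the file empties essentially instantly:

```shell
tmpfile=$(mktemp)
head -c 1048576 /dev/zero > "$tmpfile"   # fill with 1 MiB of zeroes
truncate -s 0 "$tmpfile"                 # atomic ftruncate under the hood
wc -c < "$tmpfile"                       # the file is now empty
```

If this takes more than a blink on real hardware, that in itself points at a disk or filesystem problem rather than at truncate.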
How to stop truncate command safely
1,302,063,784,000
I am using Linux Mint 19.3. I want to run two instances of youtube-dl. All traffic of the terminal 1 instance should go through the regular network. All traffic of the terminal 2 instance should go through the VPN. Is there any command that will enable terminal 2 to use the VPN connection for all future commands? I found Bind unix program to specific network interface, but I want to bind the VPN in a specific terminal for all future initiated programs, not for a particular program. What I want is: open a terminal and run youtube-dl to work with the regular connection; however, if I enable the VPN (through a command) and run youtube-dl, then youtube-dl will work through the VPN connection. So the possible end result is two instances of the same program, one through the regular network, another through the VPN, where the network selection is real-time and not predefined (through the routing table or anything like that). I found a tool, vpnns, but I'm not sure whether this is what I am looking for or whether there is a better solution.
You cannot "instruct terminal 2 to use a VPN connection for all future commands". However, you can make a new network namespace, run your VPN in that namespace or move the VPN network interface into that namespace (possibly after connecting it up in a suitable way to the main network namespace and/or your physical network interface), then start a new terminal window in this network namespace, and have all commands in it use the VPN connection (and only the VPN connection). There are plenty of tutorials on network namespaces. The main command you'll need is ip netns. You'll also need enough understanding of networking basics to connect up the network namespace. And yes, vpnns works in a very similar way.
Command that will Enable VPN Only in my current Terminal
1,302,063,784,000
I often find myself in a long command-line, where it can be a pain to navigate to a specific location in the middle by means of Left-Arrow, Right-Arrow, Alt+B, Alt+F etc. I know that using tmux, I can move to a particular <keyword> by means of a search. Since this is a very common operation, is something similar implemented in contemporary terminals like gnome-terminal? Bonus: it would be nice to get an answer for macOS's iTerm too.
Bash (and any other terminal application that uses the readline library) has search functionality. Command line edition is done by the shell, not by the terminal. (See What is the exact difference between a 'terminal', a 'shell', a 'tty' and a 'console'?). The main search commands are Ctrl+S and Ctrl+R, which search forward and backward respectively. These are incremental searches: after pressing Ctrl+S or Ctrl+R, type the text you want to search and you'll be brought to the next/previous occurrence of what you've typed so far. Press any key that doesn't insert a character and that isn't Backspace to end search mode. Note that the key will have its usual effect, in particular Enter runs the command immediately. Left/Right are usually the most convenient way. If you want to cancel the search, press Ctrl+G and you'll be returned to the command you were editing. These commands search the shell command history as well as the current command line. If you accidentally drift to a previous command line, Ctrl+G returns you to what you were typing originally. Bash also has commands to search a single character without entering a search mode: Ctrl+] forward, Ctrl+Alt+] or Alt+- Ctrl+] backward. Zsh has similar commands (and quite a few more). Its commands to search a single character quickly aren't bound to a convenient key by default (Ctrl+X Ctrl+F forward and none backward) unless you're in vi mode, but you can bind a key to them with bindkey.
Move the cursor directly to `<keyword>` in a long command line
1,302,063,784,000
When I run date on my Ubuntu under WSL, it prints: Wed May 15 19:33:37 STD 2019 Why does this have the string STD in it? I can't find a timezone online with the abbreviation STD (for example here) and googling 'date std' gave me unexpected results.
Thanks to Jesse_b for finding this Stack Overflow Q/A:

TL;DR: WSL dynamically generates a fake timezone info file at /usr/share/zoneinfo/Msft/localtime that is hard-linked to /etc/localtime. The Msft file uses the made-up names DST or STD, and they stand for no specific timezone.

What is actually going on is that WSL attempts to match your Windows timezone in Linux. This is a non-trivial mapping, as seen here (discussion). So rather than trying to hit a constantly moving target, what I believe WSL does is use the Windows API to get the Windows timezone info, and, based on that information, it dynamically generates a timezone info file. I believe that wslhost (specifically code in C:\Windows\System32\lxss\LxssManager.dll) does this inspection of your current timezone in Windows and writes to the /usr/share/zoneinfo/Msft/localtime file. This is why, when the timezone in Windows changes, you see the effect in an already-running WSL instantly. But since there is no perfect mapping from Windows timezones to Linux or POSIX timezones, wslhost probably just wings the name, and that's where the DST comes into play.

Update: Actually, I think it just says "DST" if you are in a timezone currently observing Daylight Saving Time, and "STD" for a non-DST timezone (Standard, I assume). So the answer to "What timezone does DST stand for?" is none, and any Linux program that attempts to match /etc/localtime to a timezone file in /usr/share/zoneinfo (via readlink or just searching) will only ever get "Msft/localtime" as an answer. A "technically accurate, but totally useless" answer.

The Windows 19H1 (1903) update, scheduled for final release in May 2019, resolves this issue, and WSL distributions will report a conventional time zone where possible. Until the final public release, the new version is already available through the Windows Insider programme, so if you have an active issue caused by the zone tag it is possible to work around it.
You don't need to do anything to activate it or have the new time zone applied.
Why does my date output have "STD" as the time zone?
1,302,063,784,000
I'm interested in wrapping a command such that it only runs at most once every X duration; essentially, the same functionality as the lodash throttle function. I'd basically like to be able to run this: throttle 60 -- check-something another-command throttle 60 -- check-something another-command throttle 60 -- check-something For each of those throttle commands, if it's been less than 60 seconds since check-something was run (successfully), the command is skipped. Does anything like this already exist? Is it easy to do with a shell script?
I'm not aware of anything off-the-shelf, but a wrapper function could do the job. I've implemented one in bash using an associative array: declare -A _throttled=() throttle() { if [ "$#" -lt 2 ] then printf '%s\n' "Usage: throttle timeout command [arg ... ]" >&2 return 1 fi local t=$1 shift if [ -n "${_throttled["$1"]}" ] then if [ "$(date +%s)" -ge "${_throttled["$1"]}" ] then "$@" && _throttled["$1"]=$((t + $(date +%s))) else : printf '%s\n' "Timeout for: $1 has not yet been reached" >&2 fi else "$@" && _throttled["$1"]=$((t + $(date +%s))) fi } The basic logic is: if the command has an entry in the _throttle array, check the current time against the array value; if the timeout has expired, run the command and -- if the command was successful -- set a new timeout value. If the timeout has not yet expired, (don't) print an informative message. If, on the other hand, the command does not (yet) have an entry in the array, run the command and -- if the command was successful -- set a new timeout value. The wrapper function doesn't distinguish commands based on any arguments, so throttle 30 ls is the same to it as throttle 30 ls /tmp. This is easily changed by replacing the array references and assignments of "$1" to "$@". Also note that I dropped the -- from your example syntax. Also note that this is limited to seconds-level resolution. If you have bash version 4.2 or later, you may save the call to the external date command by using a feature of the printf built-in instead: ... _throttled["$1"]=$((t + $(printf '%(%s)T\n' -1))) ... ... where we're asking for the time formatted in seconds (%s) explicitly of the current time (-1). Or in bash 5.0 or later: _throttled["$1"]=$((t + EPOCHSECONDS))
How to wrap a command such that its execution is throttled (that is, it executes at most once every X minutes)
1,302,063,784,000
I want to change the password for my wifi network foo. I've got a raspberry pi connected to foo. I only use SSH to talk with it. The pi is headless, and it isn't convenient to attach a monitor+keyboard or hardline. /etc/wpa_supplicant/wpa_supplicant.conf on the pi has lines like this: network={ ssid="foo" psk=deadbeefdeadbeefdeadbeefdeadbeefdeadbeef } Can I just duplicate that config with the new password? Like so: network={ ssid="foo" psk=deadbeefdeadbeefdeadbeefdeadbeefdeadbeef } network={ ssid="foo" psk=abcdabcdabcdabcdabcdabcdabcdabcdabcdabcd }
Yes, this works. The network manager will try connecting to each one.
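If both the old and new entries are in range and you want the daemon to prefer the new block deterministically, wpa_supplicant.conf also supports a priority field (higher values win). A sketch of the new entry, reusing the question's placeholder psk:

```
network={
    ssid="foo"
    psk=abcdabcdabcdabcdabcdabcdabcdabcdabcdabcd
    priority=2
}
```

Without priority, the selection order between two blocks with the same ssid is up to the daemon, which is usually fine since the failing block is simply skipped.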
changing wifi password over SSH on wifi-only headless server
1,302,063,784,000
I frequently end up grepping files with very long lines resulting in pages worth of output for one matching word. What's a good way to limit the output to only enough characters as the width of my terminal? I realize this means the matching word may not be present in the line. But I still want context, so filename only is not acceptable.
Consider this wrapping function, which passes any parameters to grep, then cuts the output to $COLUMNS (or 80, if COLUMNS isn't set): function grepcut() { grep "$@" | cut -c1-${COLUMNS:-80} } Use it like: $ grepcut sometext somefiles or $ set | grepcut LS_COLORS
How to make grep output fit screen's width of characters
1,302,063,784,000
I use the following code as part of a much larger script:

mysql -u root -p << MYSQL
create user '${DOMAIN}'@'localhost' identified by '${DOMAIN}';
create database ${DOMAIN};
GRANT ALL PRIVILEGES ON ${DOMAIN}.* TO ${domain}@localhost;
MYSQL

As you can see, it creates an authorized, all-privileged DB user and a DB instance with the same value (the password will also be the same value):

DB user ====> ${domain}.
DB user password ====> ${domain}.
DB instance ====> ${domain}.

This is problematic because I need the password to be different. Of course, I could change the password manually from ${domain} after the whole script has finished executing, but that's not what I want. What I want is to type/paste the password directly on execution, interactively. In other words, I want being prompted for the DB user's password to be an integral part of running the script. I've already tried the following code, which failed:

mysql -u root -p << MYSQL
create user '${DOMAIN}'@'localhost' identified by -p;
create database ${DOMAIN};
GRANT ALL PRIVILEGES ON ${DOMAIN}.* TO ${domain}@localhost;
MYSQL

What is the right way to insert a password interactively, by typing/pasting directly during script execution (instead of changing it manually after script execution)?
Just have the user store the variable beforehand with read:

echo "Please enter password for user ${domain}: "; read -s psw
mysql -u root -p << MYSQL
create user '${domain}'@'localhost' identified by '${psw}';
create database ${domain};
GRANT ALL PRIVILEGES ON ${domain}.* TO ${domain}@localhost;
MYSQL

Here, the read command reads user input and stores it in the $psw variable (the -s flag keeps the typed password off the screen). Note that just after entering the password value, you'll be prompted for the MySQL root password in order to connect to the MySQL database (the -p flag means an interactive password prompt).
Making mysql CLI ask me for a password interactively
1,302,063,784,000
There are some commands, like cd or ll, whose execution just "breaks" if I run them with sudo. What is a rule of thumb for knowing which commands will "break" this way when preceded by sudo? This would help me and other newcomers write more stable scripts.
Only external commands can be run by sudo. Sudo The sudo program forks (start) a new process to launch an external command with the effective privileges of the superuser (or another user if the -u option is used). That means that no commands that are internal to the shell can be specified; this includes shell keywords, builtins, aliases, and functions. The best way to find out if a command is available as an external command (and not internal to the shell) is to run type -a command_name which displays all locations containing the specified executable. Example 1: Shell builtin In this case, the cd command is only available as a shell builtin: $ type -a cd cd is a shell builtin It fails when you try to run it with sudo: $ sudo cd / sudo: cd: command not found Example 2: Alias In this case, the ls command is external – but an alias with the same name has also been created in the user’s shell. $ type -a ls ls is aliased to `ls -F --color' ls is /bin/ls If I was to run sudo ls, it would not be the alias that runs as the superuser; if I wanted the -F option, it would have to be explicitly included as an option, i.e., sudo ls -F. Example 3: Shell builtin and external command In this case, the pwd command is provided as both a shell builtin and an external command: $ type -a pwd pwd is a shell builtin pwd is /bin/pwd In this case, the external /bin/pwd command would run with sudo: $ sudo pwd /home/anthony Other examples of commands that are often provided as both shell builtins and external commands are kill, test ([) and echo. Run internal shell commands with sudo If you really want to run a shell builtin with superuser privileges, you’d have to launch a shell as the external command. 
E.g., the following command runs bash as the superuser with the cd builtin command provided as a command line option: $ sudo bash -c "cd /; ls" bin etc lib media mnt ntp.peers proc sbin share sys tmp var boot dev home lost+found misc net opt … … Note: Aliases can not be passed as commands to Bash using its -c option. Shell redirection Another issue to watch out for is that shell redirection takes place in the context of the current shell. If I try to run sudo /bin/echo abc > /test.file, it won’t work. I get -bash: /test.file: Permission denied. While the echo command runs with superuser privileges, it prints its output to my current (non-privileged) shell and, as a regular user, I don’t have permission to write to the / directory. One work-around for this is to use sudo to launch a new shell (similar to the above example): sudo bash -c "echo abc > /test.file" In this case, the ouptut redirection takes place in the context of the privileged shell (which does have permission to write to /). Another solution is to run the tee command as the superuser: echo abc | sudo tee /test.file
A rule of thumb to know which commands break in sudo?
1,473,661,284,000
I wish to compile the results of a find command where the find command returns the relative filepath of a java file located in subdirectories. E.G., I am in ./ and I want to run javac on a file, someFile.java, but I don't know at "command-type-time" what the relative path is. Running find . -name someFile.java returns the correct relative path (and only the correct one so long as someFile.java is uniquely named within subdirectories of .) I wish to compile THIS file. So I have attempted javac | find . -name someFile.java but I am not sure about why this is not working. Is this even possible?
You want to use the $() command substitution syntax, e.g.:

javac $(find ./someDir/anotherDir/ -name someFile.java)

which go will not help here; $(find ...) substitutes the path of the executable that find locates.
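One caveat with $(find ...): the substituted result undergoes word splitting, so a path containing spaces breaks. A hedged sketch of the more robust find -exec form, with printf standing in for javac so the sketch runs anywhere:

```shell
tmpdir=$(mktemp -d)
mkdir -p "$tmpdir/some dir"
touch "$tmpdir/some dir/someFile.java"

# -exec hands each found path to the command as a single argument,
# so the embedded space survives intact.
out=$(find "$tmpdir" -name someFile.java -exec sh -c 'printf "compiling %s\n" "$@"' sh {} +)
printf '%s\n' "$out"
```

In the real case, find "$tmpdir" ... -exec sh -c '...' would simply become find . -name someFile.java -exec javac {} +.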
Forward results of Find command to javac
1,473,661,284,000
For example, when using ack to search code in source files, the output is highlighted. But if you pipe the output into a local file, you lose the highlighting. Do we have a command line tool to preserve it? To understand what I mean:

$ git clone https://github.com/koehlma/jaspy
$ cd jaspy/
$ ack func ./*              # you see the highlighting for each match
$ ack func ./* > output.txt
$ cat output.txt            # you don't see the highlighting
ack does something similar to grep. When it writes its output to a terminal, it prints the results in color. If the output is redirected to a file, the matches do not get colorized. You can override these heuristics with the options --color and --nocolor. Check man 1 ack for more details.
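The forced-colour behaviour can be demonstrated with grep's equivalent --color=always flag (used here as a stand-in because grep is available everywhere, while ack may not be); ack --color behaves the same way, writing the ANSI escape sequences into the redirected file:

```shell
tmpdir=$(mktemp -d)
printf 'function main\nnothing here\n' > "$tmpdir/src.txt"

# --color=always keeps the SGR escape sequences even though stdout is a file.
grep --color=always func "$tmpdir/src.txt" > "$tmpdir/output.txt"
```

So for the question's case, ack --color func ./* > output.txt should keep the highlighting in output.txt; view the file with less -R (or plain cat in a terminal), which renders the colour codes.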
command line tool to preserve code highlighting in output file?
1,473,661,284,000
I want to cd to the directory of go's bin file: $ type go go is /c/tools/go/bin/go How can I quickly cd to this directory in bash ?
cd $(dirname $(which go))

which go will show the path of the executable. Then get the dirname of that path and cd to that.
How to quickly cd to a command's directory after used the 'which' or 'type' [duplicate]
1,473,661,284,000
We have a policy on our cluster that any files not modified or accessed within 30 days will be deleted. I have a project running and I want to keep all files until I finish it. Would it be possible to trick the system by doing something like:

find ./ -type f -exec touch {} +

I have tried this and it seems that the timestamp changes, but will that fool the system into thinking that the files have actually been modified?
Sure, this should work, but you can make sure by checking with the stat command:

sh-4.3$ stat test.csv
  File: 'test.csv'
  Size: 871         Blocks: 8          IO Block: 4096   regular file
Device: 20fd4bh/2161995d    Inode: 8389896     Links: 1
Access: (0644/-rw-r--r--)  Uid: ( 1000/      cg)   Gid: ( 1000/      cg)
Access: 2016-01-06 05:29:32.220197637 +0000
Modify: 2016-01-06 05:29:32.220197637 +0000
Change: 2016-01-06 05:29:32.220197637 +0000
 Birth: -
sh-4.3$
How to modify files on Unix to avoid file purging policy?
1,473,661,284,000
I am writing a script that requires me to provide a list of files in my directory whose filenames contain spaces, *, ?, $, %, etc. How can I do this? I have seen multiple posts but couldn't find anything that works for me. Is it possible to achieve this using grep?
grep -E should do what you need. Note that some characters need to be escaped with a backslash, so if you add more and the results aren't what you expected, escape it and try again. Note the | is an or... $ ls | grep -E '\s|\*|\?|\$|%' all this.txt all?this.txt all*this.txt all%this.txt all th*s.txt all$.txt
How to list filenames that contain spaces and special characters without using find
1,473,661,284,000
I am using rlwrap to colorize prompt in asterisk CLI: rlwrap -s 99999 -a -pRED /usr/sbin/asterisk -r I read in man rlwrap that I can also use rlwrap -z pipeto to pipe output through a colorizer. I have grc colorizer which works like this: cat foo | grcat <conf_file> The above examples colorizes foo using the rules from <conf_file>. How can I use rlwrap -z pipeto to colorize output from rlwrap through grcat ?
pipeto Unfortunately, rlwrap's built-in filter pipeto does not filter output in the way you desire. I find the documentation rather misleading, but what it does is if you run rlwrap -z pipeto some-shell, then, within the interaction: if you type commands without any pipe sign (|), those are passed verbatim to some-shell and then the output of that is simply printed; if you type command | filter, then command is passed to some-shell to interpret, and the output of that is piped through filter before being printed (where filter is any command you could run from the command line in your Unix shell). So the good news is you could get the behavior you are looking of, kindof, sortof, by running rlwrap -z pipeto asterisk and then remembering to append | grc to each command you want to pass to asterisk. But that would not be very convenient, would it? Hence outfilter. outfilter I suggest creating the following rlwrap filter script: #! /usr/bin/perl use lib ($ENV{RLWRAP_FILTERDIR} or "."); use RlwrapFilter; use strict; my $filter = new RlwrapFilter; my $name = $filter->name; my $filter_command = join ' ', @ARGV; $filter->help_text("Usage: rlwrap -z '$name <filter-command>' <command>\n" . "Filter <command> output through <filter-command>"); $filter->output_handler(sub {""}); $filter->prompt_handler(\&prompt); $filter->run; sub prompt { my $prompt = shift; my $output = $filter->cumulative_output; $output =~ s/\r//g; open (PIPE, "| $filter_command") or die "Failed to create pipe: $!"; print PIPE $output; close PIPE; return $prompt; } Save it as outfilter, make it executable, and then run rlwrap -z './outfilter <coloring-filter>' shell. I tried with: rlwrap -z './outfilter ccze -A' gosh which nicely colors Gauche's output. 
In your case, that would become something like:

rlwrap -z './outfilter grcat grcat-config' asterisk

If you like the filter and want to be able to run it without having to specify its path, you could move it alongside the builtin filters (on my system, that's in directory /usr/share/rlwrap/filters). Note that the filter as written is probably inefficient (it spawns a new copy of the coloring filter for each interaction with the command shell, because that's the shortest way I could find to have it flush its buffers) and fragile, but if the shell you are interacting with doesn't do any black magic of its own, it should work.
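Incidentally, the essence of what the prompt handler does — feed a chunk of accumulated output through an external filter command and collect the result — can be sketched in a few lines of Python (here tr stands in for grcat, since grc config files may not be installed):

```python
import subprocess

def pipe_through(text: str, command: list) -> str:
    """Feed text through an external filter command and return its output,
    which is essentially what the prompt handler above does with grcat."""
    result = subprocess.run(command, input=text, capture_output=True, text=True)
    return result.stdout
```

Spawning one filter process per call mirrors the (admittedly inefficient) approach of the Perl script.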
rlwrap -z pipeto: piping output through pagers
1,473,661,284,000
Regarding the question Simple command line HTTP server, I would like to know: what is the simplest method for creating an HTTP server listening on a specified port that will always cause a timeout (it just eats the request and never responds)? I am looking for a dead simple one-liner. It would be useful for testing clients.
You can use NetCat to just listen on a port and do nothing: nc -l $PORT
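If nc is unavailable, the same always-times-out behaviour can be sketched in Python with a plain socket server that accepts connections but never answers (the port number here is arbitrary):

```python
import socket

def blackhole_server(port):
    """Accept TCP connections but never read a request or send a reply,
    so every client eventually hits its own timeout -- like `nc -l`."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(5)
    held = []  # keep every accepted connection open forever
    while True:
        conn, _ = srv.accept()
        held.append(conn)
```

A client that connects and sends a request will simply block in its read until its own timeout fires.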
Simple http server from the command line that will always cause a timeout
1,473,661,284,000
What's the shortest command I could use to find out my WAN IP?
I found: $ curl ifconfig.me 73.4.164.110 So of course I made an alias $ alias myip='curl ifconfig.me' $ $ myip 73.4.164.110
What's the shortest way to find my WAN IP address at the command line? [duplicate]
1,473,661,284,000
I'm new to Linux and so far, I have come to understand that if we open a new process through a gnome-terminal, e.g. gedit & ("&" to run it in the background), then if we close the terminal, gedit exits as well. So, to avoid this, we disown the process from the parent terminal by giving the process id of gedit. But I've come across one peculiarity: if we open a gnome-terminal from within a gnome-terminal (parent) with gnome-terminal &, and then close the parent terminal, the child terminal doesn't close even if I haven't disowned it. Why is that? And if there is an exception, what is the reason for making it an exception, and where can I find the configuration file (if accessible) where this exception has been mentioned?
When you close a GNOME Terminal window, the shell process (or the process of whatever command you instructed Terminal to run) is sent the SIGHUP signal. A process can catch SIGHUP, which means a specified function gets called, and most shells do catch it. Bash will react to a SIGHUP by sending SIGHUP to each of its background processes (except those that have been disowned).

Looking at your example with gedit (with some help from pstree and strace): when the GNOME Terminal window is closed, gedit is sent SIGHUP by the shell, and since it doesn't catch SIGHUP (and doesn't ignore it), gedit will immediately exit.

─gnome-terminal(31486)─bash(31494)──gedit(31530)

[31486] getpgid(0x7b06)                 = 31494
[31486] kill(-31494, SIGHUP)            = 0
[31494] --- SIGHUP {si_signo=SIGHUP, si_code=SI_USER, si_pid=31486, si_uid=0} ---
[31494] kill(-31530, SIGHUP)            = 0
[31530] --- SIGHUP {si_signo=SIGHUP, si_code=SI_USER, si_pid=31494, si_uid=0} ---
[31530] +++ killed by SIGHUP +++
[31494] --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_KILLED, si_pid=31530, si_status=SIGHUP} ---
[31486] --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_KILLED, si_pid=31494, si_status=SIGHUP} ---

But when you type the gnome-terminal command and there is an existing GNOME Terminal window, by default GNOME Terminal will do something a bit unusual: it will call the factory org.gnome.Terminal.Factory via D-Bus, and immediately exit; there is no background job for the shell to see for more than a fraction of a second. As a result of that factory call, the new window you get after typing gnome-terminal is managed by a new thread of the same GNOME Terminal process that is managing your existing window. Your first shell is unaware of the process id of the second shell, and cannot automatically kill it.
─gnome-terminal(9063)─┬─bash(39548)
                      ├─bash(39651)
                      ├─{gnome-terminal}(9068)
                      └─{gnome-terminal}(9070)

On the other hand, if you type gnome-terminal --disable-factory &, it won't call the factory, and process-wise it will behave just like gedit did in your example.

─gnome-terminal(39817)─┬─bash(39825)──gnome-terminal(39867)─┬─bash(39874)
                       │                                    └─{gnome-terminal}(39868)
                       └─{gnome-terminal}(39819)

Closing the first terminal window will close both the first and the second terminal windows.
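The "most shells catch SIGHUP" point above can be demonstrated with a small, self-contained Python sketch: a forked child installs a SIGHUP handler (as bash does) and survives the signal that would otherwise kill it, reporting back through a pipe:

```python
import os
import signal
import time

def spawn_hup_catcher():
    """Fork a child that traps SIGHUP -- as a shell does -- and reports
    the caught signal back through a pipe instead of dying from it."""
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:  # child: install the handler, then wait to be signalled
        os.close(r)
        def on_hup(signum, frame):
            os.write(w, b"caught SIGHUP")
            os._exit(0)
        signal.signal(signal.SIGHUP, on_hup)
        while True:
            time.sleep(0.1)
    os.close(w)
    return pid, r

pid, r = spawn_hup_catcher()
time.sleep(0.3)                  # let the child install its handler
os.kill(pid, signal.SIGHUP)
msg = os.read(r, 64)
os.waitpid(pid, 0)
```

Without the signal.signal call, the default disposition for SIGHUP would terminate the child, just as it terminates gedit.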
closing parent process(terminal) doesn't close a specific child process
1,473,661,284,000
For example if I did a curl -F '[email protected]' https://clbin.com. How do I man curl easily?
In both bash and zsh (and (t)csh where that feature comes from), provided that history expansion is enabled: man !!:0 (admittedly, it's not really shorter than man curl).
How to see manpage of previous command?
1,473,661,284,000
I've got an Excel file, shown in the picture below, and available for download here. What I need is to extract the variables under Item (Column B) and the values in column G. As a start, I tried saving the Excel file as a comma-delimited .csv file, but when I check the number of rows in the Mac OS X Terminal, it tells me that the CSV file is just one row:

$ wc -l Layout.csv
       0 Layout.csv

Any idea why this might be the case? Here is the CSV file opened in a text editor, showing that it has multiple lines: You can download that file here.
After seeing your CSV output, the problem is clear: you told Excel to use CR line endings, probably because it informed you that they are "Macintosh" style. That is badly outdated information, not true for over a decade now. There are three main line ending styles:

LF: The style used by Unix and all its primary derivatives, including Mac OS X.

CR: The style chosen by "classic" Mac OS, abandoned by Apple in 2001 with the move to Mac OS X. Since classic Mac OS is the only popular OS to ever use this style, it is almost never seen any more in practice. The CSV file you have linked to is one of these rare examples.

CR+LF: The DOS/Windows style of line ending. Technically, this style is truer to the history of ASCII, and therefore "more correct," but it is uncommon to see outside of the Microsoft world.

The best way to fix this is to get Excel to use LF line endings, that being the native form for OS X, which will make wc and other command line Unix tools happy. But, that is outside the scope of this forum. (Try Super User if you really can't work it out on your own.) An on-topic Unix command line way to fix it is:

$ tr '\r' '\n' < Layout.csv > Layout-LF.csv

(This is one of those sorts of problems that has about as many different solutions as there are people offering them.)
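For what it's worth, the tr command above is just a one-byte substitution, which could equally be written in Python. Note that, like tr, this only suits CR-only files — on a CR+LF file it would double every line break:

```python
def cr_to_lf(data: bytes) -> bytes:
    """Equivalent of `tr '\\r' '\\n'`: turn classic-Mac CR line endings
    into Unix LF line endings, byte for byte."""
    return data.replace(b"\r", b"\n")
```

After the conversion, wc -l would count the file's rows as expected, since wc counts LF characters.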
How can I convert this excel file so that it is not only one row?
1,473,661,284,000
When writing a command on the bash command line, I can use CTRL+W to delete a word backwards, or ALT+D to delete a word forwards. The problem is that these two shortcuts are not exactly complementary: CTRL+W deletes everything up to a space, whereas ALT+D deletes only up to any non-alphabetic character (i.e. it stops at /). Is there a shortcut that acts as ALT+D, but backwards? So that when I am at the end of /etc/hostname and press the shortcut, I end up with /etc/.
bind -p gives you the current bindings. You'll find that Ctrl+W is bound to unix-word-rubout and Alt+D to kill-word:

"\C-w": unix-word-rubout
"\ed": kill-word

If you do a bind -p | grep kill-word, you'll find:

"\e\C-h": backward-kill-word
"\e\C-?": backward-kill-word

Some terminals send ^H upon Backspace and some others ^?, which is why there are two bindings. That means Alt+Backspace should be what kills a word backward, at least on those terminals where Alt+X sends the ESC character followed by X. There are some terminals however that send X with the 8th bit set (0xD8) upon Alt+X (though they are becoming rarer and rarer, as that doesn't make much sense in this new UTF-8 world). In those, you'll have to press Esc and then Backspace, or you can set convert-meta to on in the readline configuration (for instance with bind 'set convert-meta on'), but then you won't be able to input non-ascii characters.
bash command line editing (Emacs shortcuts)
1,473,661,284,000
I have an analog camera and I give the film to a lab where they scan it. I wanted to upload the scans to Flickr but want to change the info about the camera. Right now it's NORITSU KOKI QSS-32_33 and I wanted it to be Pentax K1000 (I don't want to clear the EXIF data). How can I do this from the command line?
The tool you're looking for is called exiftool. You can use it to read & write EXIF metadata that's attached to a single image or a whole directory's worth of files using its recursive switch (-r). To change the camera model you can use the -model=".." switch.

Example

Here's an image before the change.

$ exiftool ff42403138dd5fa56e38efdaab2ced1435d0e28c.jpg
ExifTool Version Number         : 9.27
File Name                       : ff42403138dd5fa56e38efdaab2ced1435d0e28c.jpg
Directory                       : .
File Size                       : 2.1 kB
File Modification Date/Time     : 2013:12:31 14:18:44-05:00
File Access Date/Time           : 2013:12:31 14:18:44-05:00
File Inode Change Date/Time     : 2013:12:31 14:18:44-05:00
File Permissions                : rw-------
File Type                       : JPEG
MIME Type                       : image/jpeg
JFIF Version                    : 1.01
Resolution Unit                 : None
X Resolution                    : 1
Y Resolution                    : 1
Comment                         : CREATOR: gd-jpeg v1.0 (using IJG JPEG v80), quality = 95.
Image Width                     : 50
Image Height                    : 50
Encoding Process                : Baseline DCT, Huffman coding
Bits Per Sample                 : 8
Color Components                : 3
Y Cb Cr Sub Sampling            : YCbCr4:2:0 (2 2)
Image Size                      : 50x50

To change the model of my camera:

$ exiftool -model="sam's camera" ff42403138dd5fa56e38efdaab2ced1435d0e28c.jpg

Now when we recheck the tags:

$ exiftool ff42403138dd5fa56e38efdaab2ced1435d0e28c.jpg
ExifTool Version Number         : 9.27
File Name                       : ff42403138dd5fa56e38efdaab2ced1435d0e28c.jpg
Directory                       : .
File Size                       : 2.3 kB
File Modification Date/Time     : 2013:12:31 14:19:14-05:00
File Access Date/Time           : 2013:12:31 14:19:14-05:00
File Inode Change Date/Time     : 2013:12:31 14:19:14-05:00
File Permissions                : rw-------
File Type                       : JPEG
MIME Type                       : image/jpeg
JFIF Version                    : 1.01
Exif Byte Order                 : Big-endian (Motorola, MM)
Camera Model Name               : sam's camera
X Resolution                    : 1
Y Resolution                    : 1
Resolution Unit                 : None
Y Cb Cr Positioning             : Centered
Comment                         : CREATOR: gd-jpeg v1.0 (using IJG JPEG v80), quality = 95.
Image Width                     : 50
Image Height                    : 50
Encoding Process                : Baseline DCT, Huffman coding
Bits Per Sample                 : 8
Color Components                : 3
Y Cb Cr Sub Sampling            : YCbCr4:2:0 (2 2)
Image Size                      : 50x50

There is another tool called exiv2 which does the same kinds of things as exiftool, in case you're interested.

References

exiv2 website
ExifTool website
How to change camera info in Exif using command line
1,473,661,284,000
I want to set an alias to start nedit together with the command line option -noautosave (due to text files up to 500MB). What seemed to be easy:

alias nn="nedit -noautosave $1 &"

just raises a "Permission denied" error for a different file, plus errors about an unexpected EOF while looking for a matching "'" and an unexpected end of file. One solution I found after a Google search would be to check the quotes, but I can't see any possible errors with them. I also tried declaring a function:

function nn () { nedit -noautosave $1 &}

which also failed with the same errors.
If you only want to run nedit with -noautosave when one of the files to be opened is larger than a given size, try this (I am using 100M but you can set your own size limit):

function nn() {
    big=0
    let big+=$(find "$@" -size +100M | wc -l)
    if [ $big -gt 0 ]; then
        nedit -noautosave -- "$@" &
    else
        nedit -- "$@" &
    fi
}

From man nedit:

--   Treats all subsequent arguments as file names, even if they start with a dash. This is so NEdit can access files that begin with the dash character.
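The find -size test in that function can also be expressed directly; here is a hypothetical Python equivalent of the same "is any file over the limit?" check (function name and default limit are my own choices):

```python
import os

def any_file_bigger(paths, limit=100 * 1024 * 1024):
    """Mirror `find "$@" -size +100M | wc -l`: report whether any of the
    given files is larger than the limit (100 MiB by default)."""
    return any(os.path.getsize(p) > limit for p in paths)
```

The shell function then reduces to: open with -noautosave if the check is true, normally otherwise.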
Bash aliases/functions and command line options
1,473,661,284,000
I am working in Linux and the C shell. Often I need to change some portion of what is already on the prompt line that I obtain by reverse-searching through the command history with CTRL+R. I do not want to use Backspace for doing this. I have looked at this question on superuser.com, which was specifically for bash. This answer says to use CTRL+U, which would clear out everything before the current cursor position. But this is clearing out everything for me, even though I have hit a few backspaces and am now in the middle of the text in the prompt. Is there a C-shell-specific way of achieving the same in Linux?
In tcsh (which I suppose is what you're calling "C-Shell", if you're not totally masochist) in emacs mode (usually the default), you can use Ctrl-W. That's the kill-region widget which deletes between the mark (set with Ctrl-Space but defaults to the beginning of the line) and the cursor. In that regard, its behaviour is closer to emacs' than with bash(readline)/ksh/zsh emacs mode, but departs from the terminal driver's embedded line editor (in canonical mode), where Ctrl-W deletes the previous word (werase, also in vi). That time it chose emacs over usual terminal behaviour, but not in the case of Ctrl-U, where in emacs it's the universal argument while it's the kill-line character in the terminal. If you'd rather have Ctrl-U delete to the beginning of the line, you can also do:

bindkey '^U' backward-kill-line

In vi mode, press Esc to go to command mode, and d0 to delete to the beginning of the line (or c0 if you want to change instead of deleting it).
How can I clear portion of what is typed into the prompt while working on Linux and using C-shell?
1,473,661,284,000
I have my current PS1 as follows. The $? output is really useful (second line).

export PS1="\
${PSOn_Blue}${PSBWhite}\t\
${PSColor_Off} \$?\
${PSColor_Off}${PSBGreen} \u\
${PSColor_Off}${PSWhite}@\
${PSColor_Off}${hostcolor}\h\
${PSColor_Off}:\
${PSBGreen}\w\
${PSColor_Off}\$\
"

It would be even nicer if the return code ($?) were red on non-zero output. How can I achieve this?
I use this:

BOLD_FORMAT="${BOLD_FORMAT-$(color_enabled && tput bold)}"
ERROR_FORMAT="${ERROR_FORMAT-$(color_enabled && tput setaf 1)}"
RESET_FORMAT="${RESET_FORMAT-$(color_enabled && tput sgr0)}"
PS1='$(exit_code=$?; [ $exit_code -eq 0 ] || printf %s $BOLD_FORMAT $ERROR_FORMAT $exit_code $RESET_FORMAT " ")'

Concatenate that with the rest of your $PS1, but make sure you still use the single quotes, otherwise it won't work, and you should be golden. If you want to display the exit code even if it's zero, simply remove the [ $exit_code -eq 0 ] || bit.
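Stripped of the prompt-quoting rules, the underlying logic is tiny. Here is a sketch of it, using the literal ANSI escapes that tput bold, tput setaf 1 and tput sgr0 would emit on a typical terminal (real tput output can differ per terminal type):

```python
def colorize_exit_code(code: int) -> str:
    """Mirror the $PS1 snippet above: print nothing for a zero exit
    status, otherwise the status wrapped in bold red ANSI escapes."""
    BOLD, RED, RESET = "\033[1m", "\033[31m", "\033[0m"
    if code == 0:
        return ""
    return f"{BOLD}{RED}{code}{RESET}"
```

The shell version computes this on every prompt because $PS1 re-runs the embedded $(...) each time it is displayed.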
Color PS1 based on previous command output [duplicate]
1,473,661,284,000
In the book I am reading, the output of the df command is shown like this:

Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda2             15115452   5012392   9949716  34% /
/dev/sda5             59631908  26545424  30008432  47% /home
/dev/sda1               147764     17370    122765  13% /boot
tmpfs                   256856         0    256856   0% /dev/shm

But when I run the same command (whilst passing the -h parameter) on my Ubuntu server (VirtualBox VM), the output is like this:

Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/a-root    4.2G  1.1G  3.0G  26% /
udev                  741M  4.0K  741M   1% /dev
tmpfs                 300M  268K  300M   1% /run
none                  5.0M     0  5.0M   0% /run/lock
none                  750M     0  750M   0% /run/shm
/dev/sda1             228M   27M  190M  13% /boot

What I want to know is: why is the /home directory missing? And what exactly is the criterion that the listed directories fulfill? (I mean, / is listed, but not /home. But /run is there, and also /run/lock and /run/shm. Why the bias?)
df shows you the utilization and free space on filesystems. Obviously, on your machine, /home is not a filesystem but a mere directory.
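That distinction is easy to check programmatically: df's rows correspond to mount points, and Python's os.path.ismount reports whether a given directory is one:

```python
import os

def would_df_list(path: str) -> bool:
    """A directory gets its own df line only if a filesystem is mounted
    on it; a plain subdirectory (like the asker's /home) does not."""
    return os.path.ismount(path)
```

On the asker's machine, would_df_list("/") is true but would_df_list("/home") is false, which is exactly why /home has no row of its own.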
'df' command doesn't list /home directory