I'm running CentOS 7.2 on my Cloud Account and following some tutorials to install Grafana. I've almost finished the install but got stuck when they asked me to execute this command, which is used to start the service on boot:

update-rc.d grafana-server defaults 95 10

Of course, I did it and an error came out: update-rc.d: command not found. I tried to install it with yum but that didn't work either. Is there any way to "execute" this command, or to convert it into a chkconfig command that I could execute right away without any trouble?
update-rc.d doesn't exist on Red Hat distros such as Fedora/CentOS/etc. The equivalent is chkconfig, e.g.:

chkconfig grafana-server on

By default, chkconfig assumes runlevels 2345. Any runlevels not specified as on will be marked as off (levels 016 by default), so you can specify runlevels explicitly:

chkconfig --level 345 grafana-server on
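For scripts that hard-code the Debian command, a small shim can translate the common invocations into their chkconfig equivalents. This is only a sketch: the function name and the mapping are my own, it only prints the equivalent command rather than running it, and it covers just the defaults/remove forms (the sequence numbers 95 10 are simply ignored, since chkconfig takes them from the init script's header).

```shell
# Hypothetical translator: print the chkconfig equivalent of a simple
# "update-rc.d NAME <action>" invocation (sketch only; extra args ignored).
update_rc_equiv() {
  local name=$1 action=$2
  case $action in
    defaults) printf 'chkconfig %s on\n' "$name" ;;
    remove)   printf 'chkconfig %s off\n' "$name" ;;
    *)        printf 'unsupported action: %s\n' "$action" >&2; return 1 ;;
  esac
}

update_rc_equiv grafana-server defaults   # → chkconfig grafana-server on
```

Piping the output to sh, or replacing printf with the real command, would turn the sketch into an actual shim.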
convert update-rc.d command into chkconfig
I have the following code:

#!/bin/bash
task=$1
xml=$(curl -sL "http://login:[email protected]/issues/$task.xml")
id=$(xmllint --xpath '//issue/id/text()' --format - <<<"$xml")
name=$(xmllint --xpath '//issue/subject/text()' --encode utf8 - <<<"$xml")
echo "task #$id - $name"

But when I run it I get encoded Cyrillic text like this:

task #10014 - &#x41B;&#x438;&#x447;&#x43D;&#x44B;&#x439; &#x43D;&#x43E;&#x43C;&#x435;&#x440; &#x43A;&#x43B;&#x438;&#x435;&#x43D;&#x442;&#x430;

Please help me fix it. I don't want this text encoded.
After a few minutes of fighting with xmllint, I usually give up and end up using xmlstarlet instead, which is usually more inclined to do what you expect it to do. Here:

xmlstarlet sel -t -v '//issue/subject' <<< "$xml"

(or <rant>give up altogether on XML and use a more sensible format</rant>).
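If installing xmlstarlet isn't an option, the hexadecimal character references themselves can be decoded in plain bash plus sed. This is a sketch under a couple of assumptions: the function name is mine, it only handles the &#xHHHH; form seen in the question, and it relies on bash (version 4.2 or later), whose printf understands \uHHHH escapes via %b.

```shell
# Sketch: decode &#xHHHH; numeric character references.
# Assumes bash >= 4.2 (printf %b expands \uHHHH escapes).
decode_entities() {
  local s
  s=$(printf '%s' "$1" | sed -E 's/&#x([0-9A-Fa-f]+);/\\u\1/g')
  printf '%b\n' "$s"
}

decode_entities '&#x48;&#x69;'   # → Hi
```

Piping the xmllint output through such a filter would give the literal Cyrillic text.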
Problem with encoding in shell script
I know how to play a tone for a specific amount of time using SoX:

play -n synth 5 sin 347

I know how to save a tone using SoX:

sox -n note.mp3 synth 5 sin 347

The question is: how can I save a longer tone (hours) without the sound actually playing, and without actually having to wait hours for the file to generate?
Answer: @derobert pointed out that the sox and play commands are part of the same package but do different things. The 3600 below is the time interval in seconds:

sox -n note.mp3 synth 3600 sin 347

The above command will generate an hour-long tone without playing it.

play -n note.mp3 synth 3600 sin 347

The above command will play the tone for an hour AND save it to note.mp3. Thanks to @derobert; I should have tried this first.
Generating a long sound file with SOX without actually playing the tone
I have set up a simple udev rule on my Raspberry Pi (Debian) to automatically mount a USB HDD. It just runs a script which mounts all devices in /etc/fstab, since it's the only one I have and am going to have there. I just need that, but I saw that some environment variables got passed to the script, and I tried to get it to print the label of the drive and the device node name, for example, just to experiment a bit. I got it to work, but now when I plug it in I get, for example:

pi@Gawain ~ $ Disk TOSHIBA_EXT (/dev/sda1) plugged in.Mounting...

And then on the next line I get no prompt. It's not that the script hasn't exited correctly or anything; it's waiting for input, and if I type anything like "pwd" it works, it's just that it shows no prompt. I'm really not concerned about this, since it's just a minor cosmetic thing and I'll probably leave the script to just mount the drive silently, but I feel curious about why it's behaving that way.

udev rule:

KERNEL=="sd*1", ACTION=="add", RUN+="/home/pi/scripts/mountUSB.sh"

mountUSB.sh:

#!/bin/bash
CONSOLE="/dev/$(who | awk '{print $2}')"
echo "Disk $ID_FS_LABEL ($DEVNAME) plugged in.Mounting..." > $CONSOLE
sudo mount -a
When you print straight to the terminal, your shell doesn't know about it, so it doesn't know to print its prompt again. You would get similar behavior running e.g. (sleep 1; echo foo) &. I would suggest either not printing from your udev rule (that seems like the more usual thing to do: be quiet unless something went wrong), or living with it, knowing that nothing is really broken here; the messages pushed straight to your terminal are parasitic as far as your shell is concerned.
Need to press enter to get prompt after executing udev script
I am writing a bash script to perform some analysis using the program ROOT. I want to run some initial commands to load the result of the analysis, then continue using ROOT interactively. The analysis part goes along well, but the problem is that after ROOT executes my initial command, it closes immediately. So far I have tried the here-document (<<EOF) construct to pass my initial command. I am a bit unfamiliar with shell scripting, so I would like your opinions on how to keep ROOT running after the execution of the script; that is, I would like to see the ROOT prompt instead of the system prompt.

./runReader.py SummerStd 140PU_NM1
root -l SummerStd_140PU_NM1_his.root << EOF
TBrowser a;
EOF

The above code executes the analysis and then runs ROOT; however, it immediately terminates and I have no time to inspect the TBrowser, since I get the system prompt instead of the ROOT prompt. I would like control to stay at ROOT's command prompt after the script sends the TBrowser command to the program, so I can enter additional commands by hand.
You could do:

expect -c 'spawn -noecho root -l SummerStd_140PU_NM1_his.root
send "TBrowser a;\r"
interact'
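An expect-free alternative worth trying is to feed ROOT an initial command and then hand your terminal's stdin over to it, by concatenating an echo with cat. Whether ROOT behaves well reading its commands from a pipe is an assumption here; the snippet below only demonstrates the plumbing, with cat standing in for the interactive program.

```shell
# { echo INIT; cat; } sends one command first, then forwards whatever
# arrives on stdin next. In real use this would be:
#   { echo 'TBrowser a;'; cat; } | root -l SummerStd_140PU_NM1_his.root
# Here a second 'cat' stands in for the interactive program:
out=$(printf 'typed later\n' | { echo 'TBrowser a;'; cat; })
printf '%s\n' "$out"
```

The inner cat keeps stdin open, so anything typed after the initial command still reaches the program.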
Pass a command to ROOT from a shell script and having it stay open
Possible Duplicate: Recursive rename files and directories

I have a large directory of music files that is often changing as files and directories come and go. My preference is to make sure that file and directory names don't have spaces in them, so I replace them all with underscores. I go into the main directory and run this command:

$ find -type d -exec rename 'y/\ /\_/' {} \;

The problem is that when there are subdirectories, this command seems to get lost and it returns errors. So if I have the following directory structure:

...

and if I run the command, I get the following errors:

$ find -type d -exec rename 'y/\ /\_/' {} \;
find: `./Test 02': No such file or directory
find: `./Test 01': No such file or directory
find: `./Test 03': No such file or directory

The result is that the subdirectories still have spaces in them. If I run the command again, I get these errors, even though it seems like maybe it renamed the directories in question:

$ find -type d -exec rename 'y/\ /\_/' {} \;
find: `./Test_01/Test A': No such file or directory
find: `./Test_01/Test C': No such file or directory
find: `./Test_01/Test B': No such file or directory

Finally, I run the command yet one more time, get no errors, and have all directories and subdirectories named the way I want. Obviously this requires running the command even more times when I have multiple levels of subdirectories, which can get tedious. How can I make it so that this command only has to be run once and renames all directories and subdirectories in one go? My ultimate aim is to include this in a Bash script so that I can run it along with other similar housekeeping commands, so I need it to not return errors or need more input from me. Also, I'm running Ubuntu 12.04, if that makes a difference.
Here's what happens: find finds the matching directory ./Test 02 and executes the rename command on it; rename renames Test 02 to Test_02. find then tries to descend into the directory Test 02, but it no longer exists.

The easiest way of solving this problem is to tell find to work the other way round: first look for matches inside the directory, then check whether the directory itself matches. This is what the -depth option does.

If you only add -depth, you'll run into another problem: when find reaches Test 01/Test A, it invokes rename 'y/\ /\_/' 'Test 01/Test A', which tries to rename that directory to Test_01/Test_A. This will fail, since there is no directory called Test_01. An easy fix is to use the -execdir option, which invokes rename inside the Test 01 directory and passes Test A as the argument. You can speed things up by passing multiple arguments to rename in one batch, by using + instead of ; to terminate the -execdir command:

find -depth -name '* *' -type d -execdir rename 'y/ /_/' {} +

Alternatively, use this short but cryptic zsh command:

autoload zmv
zmv -Qw '**/*(/D)' '$1${2// /_}'

The zmv function renames files according to patterns. The source pattern is **/*, which matches all files (*) in all subdirectories recursively (**/). The glob qualifiers (activated by the -Q option) indicate that only directories are matched (/) and that dot files are included (D). The -w option creates a backreference for each wildcard in the source pattern. The replacement starts with $1, which designates the match for the first wildcard (for **, this includes a final /), and is followed by ${2// /_}, which is $2 (what the * matched) modified to replace all space characters with _. Add the -v option to see what the command does; you'll notice that it traverses depth-first, like find -depth.
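If the Perl rename utility isn't available, the same depth-first idea works with plain find and mv: because -depth emits children before their parents, every path is still valid when its turn comes. A sketch (the function name is mine):

```shell
# Replace spaces with underscores in every name under $1.
# -depth lists children before parents, so parent paths stay
# valid while their contents are being renamed.
underscore_names() {
  find "$1" -depth -name '* *' -print0 |
  while IFS= read -r -d '' p; do
    mv -- "$p" "$(dirname "$p")/$(basename "$p" | tr ' ' _)"
  done
}
```

Running `underscore_names .` from the music directory would do the whole tree in one pass.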
How do I get this find and rename command to work with subdirectories? [duplicate]
I have to find a free (gratis) command-line solution to convert primarily Microsoft PowerPoint presentations into HTML files (one HTML file per slide) on Linux (Debian, openSUSE). It would be nice if the solution supported OpenOffice Impress presentations as well, but this is not necessary. What solutions are there for this?
You want to use ppthtml: http://www.ma.utexas.edu/restricted-resources/utma-doc/xlHtml/pptHtml.txt

For Debian-based distros: http://packages.debian.org/unstable/utils/ppthtml

The C source for the xlhtml package: http://prdownloads.sf.net/chicago/xlhtml-0.4.9.3.tgz

ppthtml is an executable installed through the same package.
How to convert a PowerPoint ppt file into HTML files?
I've got a bunch of URLs (more than 1,000) and I am wondering if there is any CLI script to validate that a URL matches the HTTP scheme?
You can use Perl's Regexp::Common::URI::http. From the CPAN documentation:

use Regexp::Common qw /URI/;

while (<>) {
    /$RE{URI}{HTTP}/ and print "Contains an HTTP URI.\n";
}
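If Perl modules aren't an option, a rough shell check with grep covers the common case. The regex below is my own simplification, not a full RFC 3986 validator, and the filename urls.txt is hypothetical.

```shell
# Sketch: accept only strings that look like http(s) URLs.
is_http_url() {
  printf '%s\n' "$1" | grep -Eq '^https?://[^[:space:]/]+(/[^[:space:]]*)?$'
}

# Validate a file of URLs, one per line (hypothetical urls.txt):
#   while IFS= read -r u; do is_http_url "$u" || echo "bad: $u"; done < urls.txt
```

For 1,000+ URLs this runs comfortably fast, since each check is a single grep over one line.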
Any CLI to validate URL? [closed]
I have this line that I execute from PHP:

sudo -u db2inst1 -s -- "/opt/ibm/db2/current/bin/db2 connect to PLC; /opt/ibm/db2/current/bin/db2 \"update EDU.contact set MOBILE_PHONE = '123'\""

It works fine with sudo version 1.7.2. Now I've got a new server with SUSE Linux Enterprise Server 11 (x86_64). There was no sudo, so I installed it from the repository (sudo version 1.6.9p17). But now the above syntax doesn't work. It throws:

bin/bash: /opt/ibm/db2/current/bin/db2 connect to PLC; /opt/ibm/db2/current/bin/db2 "update EDU.contact set MOBILE_PHONE = '123'": No such file or directory

Any idea how I can make this work? If I run

/opt/ibm/db2/current/bin/db2 connect to PLC; /opt/ibm/db2/current/bin/db2 "update EDU.contact set MOBILE_PHONE = '123'"

under the db2inst1 account, everything works just fine.
I'm really not quite sure why you're getting this error. I have a system with sudo 1.8.3 on it, and the documentation clearly says something like sudo -s "echo hi" should work, but it doesn't. The way I've always done this is to do the same thing -s [command] does, but manually:

sudo sh -c 'echo hi'

or in your case:

sudo -u db2inst1 sh -c "/opt/ibm/db2/current/bin/db2 connect to PLC; /opt/ibm/db2/current/bin/db2 \"update EDU.contact set MOBILE_PHONE = '123'\""

It's more compatible, as the -s argument hasn't always been around (and I unfortunately have some really old machines at work).

Edit: What's happening in the error you're getting is that it's looking for an executable which is literally named db2 "update EDU.contact set MOBILE_PHONE = '123'" in a directory called /opt/ibm/db2/current/bin/db2 connect to PLC; /opt/ibm/db2/current/bin (yes, it looks for db2 connect to PLC; as a directory). This obviously doesn't exist.
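One way to sidestep the nested-quoting headache entirely is to pass the SQL as a positional parameter to sh -c, so the inner string never has to be re-escaped. This is a sketch of the pattern, with echo standing in for the db2 binary; with sudo it would become something like sudo -u db2inst1 sh -c '.../db2 "$1"' sh "$sql".

```shell
# $1 inside the -c script receives the SQL untouched, quotes and all;
# only the outer assignment needs quoting.
sql="update EDU.contact set MOBILE_PHONE = '123'"
sh -c 'echo "running: $1"' sh "$sql"
```

The trailing `sh` fills $0 for the inner shell; the SQL arrives as $1 verbatim.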
How to run this in sudo?
I don't mind the idea of quotes when I load the Mint console; however, the OEM text leaves much to be desired. I'd like to update the quote text with inspirational or otherwise useful quotes. How would I go about doing that?
Editing Linux Mint fortunes! (Mint 13) has some good information on how to tweak what "fortunes" are displayed. Specifically, it appears they are stored in /usr/share/cowsay/cows (as plain text, preformatted) with a .cow extension. There's more information in the link.
Change (not remove) "fortunes" in Linux Mint console
I recently changed my MacBook's UI to show Hebrew, and at around the same time I began using the Terminal more often. The combination of the two has led me to wonder: are there Hebrew-language commands, or any other non-English-language command sets, available for the terminal on Unix, Linux, or Mac OS? (This question leads me to wonder whether an entirely separate shell implementation would be required for something like that, if it's even possible.)
This is a cool idea, but I don't think it exists. Alternatively, you could write your own wrappers (in Hebrew, in your case) either as executable code or as aliases in your ~/.bashrc. Something like:

alias [hebrew_for_add_a_user]='useradd'

I would personally opt for the alias implementation.
Are there foreign language Terminal command sets?
I know the commands to restart/stop/start, but when I try to pass options it doesn't seem to work! CentOS 6, MySQL 5.14.

service mysql restart
service httpd restart

Then I tried this:

/etc/init.d/mysql --general_log /my/log/path.log

That doesn't work either (the error message says it doesn't have access) =/

Update: apparently I can't run mysqld as root because of security issues.
To change options permanently, and in the sanctioned manner, edit the files in /etc/sysconfig that have the same name as the service. For example, consider httpd. On one system I have, there are several things you can set:

# Processing model
HTTPD=/usr/sbin/httpd.worker
# Additional options
OPTIONS=
# Set locale
HTTPD_LANG=C

(The actual file is much more verbose and explanatory than this.) There should be files in /etc/sysconfig for virtually every service.
If I'm logged in as root, how do I restart mysql or apache with options?
I'm using something called iscan to use my scanner. However, to use it I need to go through a GUI to get my scanned PDF. When I run iscan --help (because just iscan launches the GUI), it tells me that it is a GIMP plugin and requires GIMP to be launched:

iscan is a GIMP plug-in and must be run by GIMP to be used

Is there a way to use GIMP plugins like this from the command line? I would MUCH rather run a simple command to output whatever is in my scanner to a PDF somewhere.
Use scanimage from sane-backends.
Using iscan (GIMP plugin) from the command line
I found the following one-liner on commandlinefu.com:

cmdfu(){ curl "http://www.commandlinefu.com/commands/matching/$@/$(echo -n $@ | openssl base64)/plaintext"; }

I have not come across a similar use of the () in bash before. It seems that it creates the command cmdfu temporarily within the bash environment. Am I correct here? Or is it actually stored somewhere? Is there a name for this construct so I can read up on it?
This is a shell function. If you create it at the command line, it will exist only for the current shell invocation. You can add the definition to a shell startup file (.bashrc, .bash_profile, etc) and it will make it "permanent". If you search the bash(1) manpage for "Shell Function Definitions" you can see the syntax for defining shell functions. The section "FUNCTIONS" goes into more details of functions themselves.
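As a minimal illustration of the same syntax, without the curl dependency (the function name here is made up for the example):

```shell
# Same construct as cmdfu: NAME(){ BODY; } defines a shell function.
# "$*" joins all arguments into one string.
shout(){ printf '%s!\n' "$*"; }

shout hello world   # → hello world!
```

Defined at the prompt, shout behaves like any other command until the shell exits; placed in .bashrc, it is recreated in every new shell.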
What is this construct in bash? mycommand(){dosomething;}
Is there any tool to count the number of system calls issued, for example, in one second across the entire system (like a global strace)? (Something like what vmstat does for the number of interrupts or context switches per second.)
One possibility is to count system calls with perf. If you only want a global count, updated every second, run:

perf stat -e raw_syscalls:sys_enter -a -I 1000 sleep 5

This will show the global count of system calls, every second, for five seconds. The sleep 5 command determines how long the trace will last; the -I parameter determines how often the count will be output. perf can also count calls by type:

perf stat -e 'syscalls:sys_enter_*' -a -I 1000 sleep 5

or display a top-like view of all processes, by system call count, updated every two seconds:

perf top -e raw_syscalls:sys_enter -ns comm
Counting the number of issued syscalls
TL;DR: Q: How do I keep a counter in a find -exec loop?

My use case: I need to move a lot of directories which are scattered around the place, so I do:

find . -type d -name "prefix_*" \
  -exec sh -c '
    new_path="/new/path/$(basedir "$1")";
    [ -d "$new_path" ] || mv "$1" "$new_path";
  ' find_sh {} \;

(The real command is more complex, as I read some metadata for the constitution of /new/path. Anyway, I do not want to argue about the command itself; it's not part of the question, just the use case.)

It works just fine, but it takes quite a while and I want to keep track of the progress. So I added a counter writing to a file:

i=$(cat ~/find_increment || echo 0); echo $((i+1)) | tee ~/find_increment;

That also works just fine, but it feels like a really bad idea to have some 100,000 disk read and write operations. I thought about writing to a ramdisk instead of disk, but I don't have that option in the environment where I need to perform this task. Is there a better way to keep a counter between -exec runs?
Instead of using a pure find command, you could combine find with a while read loop or GNU parallel. Both are likely to be faster than find's -exec, since you don't start a new shell for every path found by find.

Solution using GNU parallel

GNU parallel has the following benefits compared to while read:

Easier to get right: no IFS= and -r needed.
A built-in job number variable, {#}. For more handy replacement strings, have a look at the tutorial.
Easy to parallelize if needed: remove the -j1 and you have as many workers as cores by default.

script='
echo Processing job number {#}
new_path="/new/path/$(basedir {})"
[ -d "$new_path" ] || mv {} "$new_path"
'
find … -print0 | parallel -0 -j1 "$script"

The {} is replaced by parallel with the correctly quoted entry read from stdin; do not quote {} again. parallel executes the script with the same shell from which you started it. If you started parallel in bash, you can use bash features in the script.

Solution using while read

find … -print0 | while IFS= read -r -d '' old_path; do
    echo Processing job number "$((++job))"
    new_path="/new/path/$(basedir "$old_path")"
    [ -d "$new_path" ] || mv "$old_path" "$new_path"
done
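One caveat with the while read form: when the counter lives on the reading side of a pipeline, it is incremented in a subshell and its final value is lost after the loop. Feeding the loop through process substitution (a bash feature) keeps the counter in the current shell. A self-contained sketch, counting entries in a throwaway directory instead of moving anything:

```shell
# The counter survives because the loop runs in the current shell,
# not in a pipeline subshell.
d=$(mktemp -d)
touch "$d/a" "$d/b" "$d/c"

job=0
while IFS= read -r -d '' path; do
  job=$((job + 1))
done < <(find "$d" -mindepth 1 -print0)

echo "processed $job paths"   # → processed 3 paths
rm -rf "$d"
```

With `find … | while …`, the same `echo` after the loop would report 0.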
find -exec and increment counter / progress
I want to extract frames as images from a video, and I want each image to be named InputFileName_number.bmp. How can I do this? I tried the following command:

ffmpeg -i clip.mp4 fr1/$filename%d.jpg -hide_banner

but it is not working as I want. I want to get, for example, clip_1.bmp, but what I get is 1.bmp. I am trying to use it with GNU parallel to extract images from multiple videos, and I am new to both, so I want some kind of dynamic file naming: input -> input_number.bmp.
$filename is handled as a shell variable. What about

ffmpeg -i clip.mp4 fr1/clip_%d.jpg -hide_banner

or

mp4filename=clip
ffmpeg -i ${mp4filename}.mp4 fr1/${mp4filename}_%d.jpg -hide_banner

? Update: for use with GNU parallel, you can use parallel's -i option:

-i  Normally the command is passed the argument at the end of its command line. With this option, any instances of "{}" in the command are replaced with the argument.

The resulting command line could be as simple as

parallel -i ffmpeg -i {} fr1/{}_%d.jpg -hide_banner -- *.mp4

if you can live with the extension in the output files. Be aware that you may not actually want to run this in parallel on a traditional hard disk, as the concurrent I/O will slow it down. Edit: fixed the variable reference, as pointed out by @DonHolgo.
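The missing piece for input -> input_number.bmp naming is stripping the extension, which shell parameter expansion handles without external tools:

```shell
# ${f%.*} drops the last dot-suffix, leaving the bare input name.
f="clip.mp4"
base=${f%.*}
printf '%s_%%d.bmp\n' "$base"   # the pattern ffmpeg would receive
printf '%s_1.bmp\n' "$base"     # → clip_1.bmp
```

If you are using GNU parallel specifically (rather than the -i variant quoted above), its built-in {.} replacement string does the same extension stripping: parallel 'ffmpeg -i {} fr1/{.}_%d.bmp' ::: *.mp4.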
How to include input file name in output file name in ffmpeg
I need the number of the previous week of the month. In the 2nd week of March, the previous week would be 1. In the 1st week of April, the previous week would be 5. My week starts on Monday.

WEEK=$(( 1 + $(date +%V) - $(date -d "$(date -d "-$(($(date +%d)-1)) days")" +%V) ))

How may I subtract one from this so that I get the previous week? (If the week equals 1 I have an exception, so I don't have to mind that.) It would also be nice if February, with only 4 weeks, were automatically recognized.
If I'm not mistaken, your expression (reorganized here a bit) finds the week number corresponding to the first day of the current month and the week number corresponding to today, and calculates the difference as the week of the month?

first=$(date -d "-$(($(date +%d)-1)) days")
weekofmon=$(( 1 + $(date +%V) - $(date -d "$first" +%V) ))

If that's right, and you want the week-of-month for last week, shouldn't it be enough to just replace "today" with "7 days ago" (in all places) to get the week-of-month corresponding to that date?

now="7 days ago"
first=$(date -d "$now - $(($(date +%d -d "$now" )-1)) days")
weekofmon=$(( 1 + 10#$(date +%V -d "$now") - 10#$(date -d "$first" +%V) ))

This relies on date being able to parse expressions like "7 days ago - 3 days". date +%V prints week numbers < 10 with a leading zero, which would cause them to be interpreted as octal numbers, breaking 08 and 09. Add the 10# to force bash to take the numbers as decimal.

A bit shorter way to get the first day of the same month:

first=$(date -d "$now" +"%Y-%m-01")

(different format, but date should be able to interpret it.)
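The octal pitfall behind the 10# prefix is easy to demonstrate in isolation:

```shell
# date +%V pads single-digit weeks with a leading zero; a bare
# $(( w + 1 )) would choke on 08/09 because a leading 0 marks the
# number as octal in bash arithmetic. 10# forces base 10.
w=09
echo $(( 10#$w + 1 ))   # → 10
```

Without the prefix, bash would report "value too great for base", since 9 is not a valid octal digit.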
Get previous week's number in bash
As you can see, I'm trying to run a Java program, but I need the command I can write in the terminal to run the code.
I imagine the easiest way would be to press the blue arrow at the top. If it has to be the command line, though, try

javac Main.java

to compile the source to bytecode, which will give you a Main.class file, then

java Main

to run it. The Oracle website has a quick guide here.
How to run a simple Java program using terminal command?
I am working on a big project using SVN, and many times I have to go see things in other people's branches and play with them. But I want to keep those changes local, never committing them. By mistake, though, it's very easy to type svn ci -m"blabla" and commit my changes in someone else's branch. The branches have an identifier to show who they belong to, so for instance we have:

project-aa
project-bb
etc...

and mine would be project-bb. Is there a way of making sure that I commit on the right branch? For instance, whenever I commit, if the text of the path does not contain "-bb", then ask me if I am sure. Is this possible?
I would use a shell function:

svn () {
    if [[ $1 == "ci" || $1 == "commit" ]] && [[ $PWD != *"-bb"* ]]; then
        echo "don't commit to someone else's branch" >&2
        return 1
    fi
    # now, do the actual svn command
    command svn "$@"    # quotes are crucial here
}
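The guard logic can also be pulled out into its own helper, which makes it easy to exercise without touching a real repository. The function name and the /work paths below are mine, purely for illustration:

```shell
# Return 1 when a commit is attempted from a path not containing "-bb".
allow_commit() {
  local path=$1 subcmd=$2
  if [[ $subcmd == ci || $subcmd == commit ]] && [[ $path != *"-bb"* ]]; then
    return 1
  fi
  return 0
}

allow_commit /work/project-aa ci || echo blocked    # → blocked
allow_commit /work/project-bb ci && echo allowed    # → allowed
```

Non-committing subcommands (status, diff, update) pass through unconditionally, matching the wrapper's behavior.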
Confirm "svn ci" command depending on my current location
The company just gave us all new virtual machines for development purposes. They run Arch Linux, which nobody here has experience with. The VM has no internet access, but we can side-load files by downloading them on the Windows host PC and placing them in a directory which is shared with the VM. How can I search for suitable packages from Windows, then install them from the command line in Arch Linux? I am specifically interested in Tux Commander and Search Monkey, but a generic answer will also help.
What a sad thing, having a VM without internet access :( I think you should talk to your boss and explain that without internet access you can't properly update your Linux distro, and this can lead to potential security issues. Anyway, you can browse the Arch Linux official package list here: https://www.archlinux.org/packages/

You can download a package by clicking on its name and then clicking "Download from mirror". Then install it with this command:

pacman -U yourpkgname.pkg.tar.xz
How to install from the command line in Arch Linux?
On page change, redraw, or Reload command, xpdf will reload the file it is currently displaying. Is it possible to cause xpdf to reload the file by sending a signal? Which signal? (I am basically looking for the functionality offered by xpdf -remote ServerName -reload, except I want to apply it to an xpdf that was not launched with the -remote option.)
I don't think you can use a signal. But Xpdf accepts synthetic events, so it's easy to programmatically type r into the window using xdotool(1). Unfortunately the Xpdf window does not identify itself by its PID, but the following seems to work:

xdotool search --onlyvisible --class Xpdf key r

If you know the name of the file that Xpdf is displaying, you can match the window's title:

xdotool search --name 'Xpdf: foo.pdf' key r

There's a small risk of a false positive with another window whose title just happens to contain that string. Other window-matching options may help pinpoint the right window.
Is it possible to send an xpdf process a signal that causes it to reload the file being displayed?
Part of my software issues various commands to open and view different file types. For instance, I use atril for PDFs and eom for PNGs. However, I have a slight problem with CSV files. I can open them with soffice --calc <filepath>, but each time it goes through the Import stage. Is there a way I can avoid this, to avoid the risk of users creating issues, given that the format is consistent and the only separator I need is the comma (,)? Thanks in advance.
A method to skip importing would be to convert the file to a format that can be read without importing; for instance:

soffice --headless --convert-to ods --outdir /tmp tblIssues.csv
soffice --view /tmp/tblIssues.ods
rm /tmp/tblIssues.ods

This converts the file tblIssues.csv to an ODS spreadsheet, saves it to /tmp, and opens it in LibreOffice; once finished, it removes the converted file (optional). The --view option opens the file as read-only and also hides the GUI elements needed for editing, making LibreOffice more practical as a viewer. You could also use other formats, such as PDF (--convert-to pdf), and then use another viewer like atril.

Note that I think the LibreOffice convert command may use the settings last used in the importer, so if it is set to use a delimiter other than , it may not work. Also, you can modify the commands to...

hide output: COMMAND > /dev/null 2>&1
detach from the terminal: COMMAND & disown
Open CSV File And Go Straight To Spreadsheet
Creating a git repository for testing:

~ $ mkdir somefolder
~ $ cd somefolder/
~/somefolder $ git init
Initialized empty Git repository in /home/user/somefolder/.git/
~/somefolder $ echo test > xyz
~/somefolder $ mkdir somefolder2
~/somefolder $ echo test2 > ./somefolder2/zzz
~/somefolder $ git add *
~/somefolder $ git commit -a -m .
[master (root-commit) 591fda9] .
 2 files changed, 2 insertions(+)
 create mode 100644 somefolder2/zzz
 create mode 100644 xyz

When turning the whole repository into a tar.gz, it results in a deterministic file. Example:

~/somefolder $ git archive \
> --format=tar \
> --prefix="test/" \
> HEAD \
> | gzip -n > "test.orig.tar.gz"
~/somefolder $ sha512sum "test.orig.tar.gz"
e34244aa7c02ba17a1d19c819d3a60c895b90c1898a0e1c6dfa9bd33c892757e08ec3b7205d734ffef82a93fb2726496fa16e7f6881c56986424ac4b10fc0045 test.orig.tar.gz

Again:

~/somefolder $ git archive \
> --format=tar \
> --prefix="test/" \
> HEAD \
> | gzip -n > "test.orig.tar.gz"
~/somefolder $ sha512sum "test.orig.tar.gz"
e34244aa7c02ba17a1d19c819d3a60c895b90c1898a0e1c6dfa9bd33c892757e08ec3b7205d734ffef82a93fb2726496fa16e7f6881c56986424ac4b10fc0045 test.orig.tar.gz

Works. But when changing a minor detail, only compressing a subfolder, it does not end up with a deterministic file. Example:

~/somefolder $ git archive \
> --format=tar \
> --prefix="test/" \
> HEAD:somefolder2 \
> | gzip -n > "test2.orig.tar.gz"
~/somefolder $ sha512sum "test2.orig.tar.gz"
b523e9e48dc860ae1a4d25872705aa9ba449b78b32a7b5aa9bf0ad3d7e1be282c697285499394b6db4fe1d4f48ba6922d6b809ea07b279cb685fb8580b6b5800 test2.orig.tar.gz

Again:

~/somefolder $ git archive \
> --format=tar \
> --prefix="test/" \
> HEAD:somefolder2 \
> | gzip -n > "test2.orig.tar.gz"
~/somefolder $ sha512sum "test2.orig.tar.gz"
06ebd4efca0576f5df50b0177d54971a0ffb6d10760e60b0a2b7585e9297eef56b161f50d19190cd3f590126a910c0201616bf082fe1d69a3788055c9ae8a1e4 test2.orig.tar.gz

No deterministic tar.gz this time, for some reason.
How to create a deterministic tar.gz using git-archive when just wanting to compress a single folder?
When you do a simple export with HEAD, an internal timestamp is initialized from the commit's timestamp. When you use more advanced filtering options, the timestamp is set to the current time. To change the behavior, you need to fork/patch git and change the second scenario, e.g. as a proof of concept:

diff --git a/archive.c b/archive.c
index 94a9981..0ab2264 100644
--- a/archive.c
+++ b/archive.c
@@ -368,7 +368,7 @@ static void parse_treeish_arg(const char **argv,
 		archive_time = commit->date;
 	} else {
 		commit_sha1 = NULL;
-		archive_time = time(NULL);
+		archive_time = 0;
 	}
 	tree = parse_tree_indirect(sha1);
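If patching git is too heavy, a workaround is to let tar itself normalize the metadata: export or copy the subfolder, then build the archive with GNU tar's determinism options and a fixed mtime. This sketch assumes GNU tar 1.28 or later (for --sort=name); the chosen date is arbitrary.

```shell
# Build the same archive twice with pinned metadata; the outputs match.
workdir=$(mktemp -d)
mkdir "$workdir/somefolder2"
echo test2 > "$workdir/somefolder2/zzz"
cd "$workdir"

make_archive() {
  tar --sort=name --owner=0 --group=0 --numeric-owner \
      --mtime='UTC 2020-01-01' -cf - somefolder2 | gzip -n > "$1"
}

make_archive test2a.tar.gz
make_archive test2b.tar.gz
cmp test2a.tar.gz test2b.tar.gz && echo identical   # → identical
```

Every source of nondeterminism (file order, owner, group, timestamps, gzip's embedded name and mtime) is pinned explicitly, so the bytes are reproducible regardless of when or where the archive is built.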
How to create a deterministic tar.gz using git-archive?
I want the output of:

diskutil list; diskutil info [multiple devices]

without having to do:

diskutil info disk0; diskutil info disk0s1; diskutil info disk1 ...etc

For example, with many builtin commands like touch, rm, mkdir, etc., I can use brace expansion to perform commands on multiple versions of a file:

mkdir njboot0{1,2,3}

Or, when searching, I can use special characters to specify matching specific patterns in a string, like:

find ~/ -type f \( -name *mp3 -or -name *mp4 \)

But:

diskutil info disk{0,0s1,1,1s1} #syntax error
diskutil info /dev/* #syntax error
(diskutil info disk$'')$'0|1|2' #syntax error
diskutil info disk\(0,s0\) #syntax error
etc...
diskutil info disk${disk}'0'$'s1' #okay, getting somewhere. this works at least.

How do I perform the diskutil command mentioned above on multiple disks correctly? Is there any general syntax I can follow for expansion/substitution? As requested, this is the output of diskutil list:

/dev/disk0
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *1.0 TB     disk0
   1:                        EFI EFI                     209.7 MB   disk0s1
   2:                  Apple_HFS Macintosh HD            999.7 GB   disk0s2
   3:                 Apple_Boot Recovery HD             650.0 MB   disk0s3
The brace expansion you're asking about will only expand for files/directories on disk that match the pattern you use. The other issue you'll run into is that diskutil may not be able to handle more than one argument at a time. To expand these you'd need a while or for loop, passing the results to diskutil as you iterate through the loop.

Example

$ for i in /dev/disk*; do diskutil info "$i"; done

As to your second question, there isn't really any method where brace expansion can help you in situations where there are no corresponding files/directories on disk.

Parsing diskutil

Given the output from this command, you'll have to resort to using a tool such as awk, sed, or grep to "parse" this output, so that you can get meaningful information about the disks for further queries, calling diskutil a second time.

Example

Here's a rough cut of parsing the output from diskutil using grep and sed. There are more efficient ways to do this, but this shows the general approach:

$ diskutil list | grep -E "[ ]+[0-9]+:.*disk[0-9]+" | sed 's/.*\(disk.*\)/\1/'
disk0
disk0s1
disk0s2
disk0s3

With this approach you can then modify the for loop above like so:

$ for i in $(diskutil list | grep -E "[ ]+[0-9]+:.*disk[0-9]+" | \
    sed 's/.*\(disk.*\)/\1/'); do diskutil info $i; done
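Since diskutil only exists on macOS, the parsing stage can be prototyped anywhere by feeding the sample output from the question through the filter. Anchoring on the numbered partition rows and the trailing identifier column avoids matching the /dev/disk0 header line:

```shell
# First grep keeps only the numbered partition rows;
# the second extracts the trailing diskNsM identifier.
sample='/dev/disk0
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *1.0 TB     disk0
   1:                        EFI EFI                     209.7 MB   disk0s1'

printf '%s\n' "$sample" | grep -E '^ *[0-9]+:' | grep -Eo 'disk[0-9]+(s[0-9]+)?$'
```

On macOS, replacing the printf with a live `diskutil list` should yield the identifier list ready for the for loop.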
How can I list the info for an array of /dev/disks using bash expansion or substitution?
1,455,402,031,000
What is the most portable or standard way to send an email from the console or a script on Linux, and possibly Unix?
To do this, you can use the mailx command. Below is a usage example:

mailx -v -s "Subject" \
  -S smtp-use-starttls \
  -S ssl-verify=ignore \
  -S smtp-auth=login \
  -S smtp=smtp://<server_name>:25 \
  -S from="[email protected]" \
  -S smtp-auth-user=<username> \
  -S smtp-auth-password=<password> \
  [email protected]

This example uses SSL (STARTTLS) and SMTP authentication.
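When a local MTA is already configured, the most portable form is plain POSIX mailx ("mail"): subject via -s, recipients as operands, body on stdin. The sketch below is a dry run (the recipient address is a made-up placeholder, and the echo prefix only prints the command instead of sending):

```shell
# Dry-run sketch of a portable send with POSIX mailx. Set dry_run to the
# empty string on a machine with a working MTA to really send; the address
# here is a hypothetical placeholder.
dry_run=echo                       # prefix command; empty disables the dry run
to_addr="user@example.invalid"     # hypothetical recipient
subject="Disk report"
body="All filesystems healthy."

# With dry_run=echo the piped body is discarded and the command is printed
printf '%s\n' "$body" | $dry_run mail -s "$subject" "$to_addr"
```

With the guard removed, the same line delivers the message through whatever MTA the system has configured.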
Standard and portable way to send email from console?
1,455,402,031,000
When writing in command line I have this aversion to scrolling my eyes down to the bottom of the page as I write commands. How do I keep the cursor/line at the top and allow the output to be displayed below it every time I write and execute a command? Has anyone ever tried to accomplish this?
You can do something like that by adding \[\e[f\e[K\] at the beginning of your prompt variable (PS1). But it doesn't take scrolling into account.

\[      start non-printing sequence
\e[f    ANSI escape sequence to move cursor to position 1;1
\e[K    ANSI escape sequence to erase from cursor to end of line
\]      end of non-printing sequence
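Wiring that into an existing bash prompt might look like the sketch below; the \[...\] guards tell readline the escape bytes are zero-width so line editing stays sane (the user@host:dir suffix is just one example prompt):

```shell
# Prepend the home-and-clear sequence to an ordinary bash prompt.
home_cursor='\[\e[f\e[K\]'          # jump to row 1, col 1 and clear that line
PS1="${home_cursor}"'\u@\h:\w\$ '   # example prompt body; use your own

# Inspect the stored string (bash interprets the \e escapes only at prompt time)
printf '%s\n' "$PS1"
```

Put the assignment in ~/.bashrc to make it stick across sessions.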
How to keep the terminal cursor fixed at the top?
1,455,402,031,000
I'm trying to write a script that switches focus to Emacs. This is what I have:

#!/bin/bash
wmctrl -a 'emacs@pat-ubuntu-desktop'

It works fine when there is only one Emacs window (or "frame," in Emacs parlance) open, but it doesn't do anything when multiple Emacs windows are open. The problem seems to be that the window titles change when a second window is opened. When there's a single window open, its name is emacs@pat-ubuntu-desktop:

➜  ~  wmctrl -l
0x05c000a3  0 pat-ubuntu-desktop emacs@pat-ubuntu-desktop

But when I open a second window, the window titles change:

➜  ~  wmctrl -l
0x05c000a3  0 pat-ubuntu-desktop *scratch*
0x05c00921  0 pat-ubuntu-desktop *scratch*

EDIT: The following issue was illusory, the result of my web browser having "emacs" in its title (because I was searching for information about my first problem).

Another issue (perhaps related, perhaps not) is that even when there is only a single Emacs window open, the command wmctrl -a 'emacs' doesn't work, but wmctrl -a 'emacs@' (or wmctrl -a 'emacs@pat-ubuntu-desktop') does. Why must the @ be included?
Matching on the window title isn't very reliable. For example, if you're viewing this question in your browser, then wmctrl -a 'emacs' might activate your browser. You can customize the frame title format with frame-title-format. I use (multiple-frames "%b" ("" invocation-name "@" system-name)). But I don't recommend relying on this in your script. You can tell wmctrl to look for a window by class with the option -x. That's both simple and reliable. wmctrl -x -a Emacs Alternatively, you can make Emacs do the job. This gives you a better chance of picking the “best” frame when there are multiple active frames.
Getting wmctrl to work with multiple Emacs windows
1,455,402,031,000
Say I want to modify the latter of some concatenated command line options, is it possible without killing the first command? Specifically I have compile and run scripts executed thusly: > compile ; run The compile is in progress (half way through two hour duration), but new information tells me I don't really want the "run" command to run anymore (it launches a lot of background processes I don't want to go clean up). Is there a way to accomplish this adjustment or should I just be smarter about how I string together commands in the future?
Something like compile && { test -f /path/to/dont_run || run; } should solve your problem. touch /path/to/dont_run would prevent run from being executed. You can make this more complicated (and more convenient) by e.g. defining a shell function cond_run_cmd which does some check like that, limited to its tty (so that you can have several in parallel) or whatever.
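The cond_run_cmd idea mentioned above could be sketched as follows; the function name, the flag directory, and the flag naming scheme are all made up for illustration. It runs its arguments unless a per-terminal abort file exists, so you can cancel a queued command from another shell on the same terminal:

```shell
# Run a command unless an abort flag for this terminal exists.
flag_dir=${TMPDIR:-/tmp}

cond_run_cmd() {
    # one abort flag per terminal, so parallel sessions don't collide
    tty_id=$(tty 2>/dev/null) || tty_id=notty
    tty_id=$(printf '%s' "$tty_id" | tr '/ ' '__')
    flag="$flag_dir/abort_$tty_id"
    if [ -e "$flag" ]; then
        rm -f "$flag"                        # consume the flag
        printf 'skipped: %s\n' "$*" >&2
    else
        "$@"
    fi
}

# Usage:  compile && cond_run_cmd run
# Cancel the pending run from any shell on the same terminal with:
#   touch "${TMPDIR:-/tmp}/abort_<tty-id>"
cond_run_cmd echo "demo run"
```

Since the flag is consumed when hit, the next cond_run_cmd on that terminal behaves normally again.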
Suspend and edit previous single line commands
1,455,402,031,000
Question: Is there an easy way to teach zsh to check the command line before executing it? I know that I can completely wrap a specific program with an extra script, but this is not what I want to do.

Example: Using tab completion, it could easily happen that I overwrite my input file by calling gcc wrongly, e.g.

gcc test.c -o test.c

instead of

gcc test.c -o test
You could redefine the accept-line zle widget to do all the checks you want, like:

accept-line() {
  if [[ $BUFFER =~ '^gcc.*-o\s*\S*\.c\b' ]]; then
    zle -M 'I will not do that!'
  else
    zle .$WIDGET "$@"
  fi
}
zle -N accept-line
zsh - check arguments of a command before executing it
1,455,402,031,000
I'm trying to figure out if there is a way to get the UNIX command tree to display only directories that match a specific pattern. % tree -d tstdir -P '*qm*' -L 1 tstdir |-- d1 |-- d2 |-- qm1 |-- qm2 `-- qm3 5 directories The man page shows this bit about the switch. -P pattern List only those files that match the wild-card pattern. Note: you must use the -a option to also consider those files beginning with a dot .' for matching. Valid wildcard operators are*' (any zero or more characters), ?' (any single character),[...]' (any single character listed between brackets (optional - (dash) for character range may be used: ex: [A-Z]), and [^...]' (any single character not listed in brackets) and|' sepa‐ rates alternate patterns. I'm assuming that the bit about ...List only those files... is the issue. Am I correct in my interpretation that this switch will only pattern match on files and NOT directories?
Someone on Stack Overflow pointed to this passage in tree's man page, which explains why the -P switch doesn't exclude entries that don't match the pattern:

BUGS
Tree does not prune "empty" directories when the -P and -I options are used. Tree prints directories as it comes to them, so cannot accumulate information on files and directories beneath the directory it is printing.

So it doesn't appear to be possible to get tree to filter its output using the -P switch.

EDIT #1

From a question I had posted on SO that got closed. Someone, @fhauri, posted the following information as alternative ways to accomplish what I was trying to do with the tree command. I'm adding them to my answer here for completeness.

The -d switch asks tree not to print files:

-d     List directories only.

So if you want to use tree anyway, you could:

tree tstdir -P '*qm*' -L 1 | grep -B1 -- '-- .*qm'
|-- id1
|   `-- aqm_P1800-id1.0200.bin
--
|-- id165
|   `-- aqm_P1800-id165.0200.bin
|-- id166
|   `-- aqm_P1800-id166.0200.bin
--
|-- id17
|   `-- aqm_P1800-id17.0200.bin
--
|-- id18
|   `-- aqm_P1800-id18.0200.bin
--
|-- id2
|   `-- aqm_P1800-id2.0200.bin

In any case, if you use -L 1:

-L level     Max display depth of the directory tree.

you could instead use (in bash) this syntax:

cd tstdir
echo */*qm*

or

printf "%s\n" */*qm*

and if only the directory is needed:

printf "%s\n" */*qm* | sed 's|/.*$||' | uniq

Finally, you could do this very quickly in pure bash:

declare -A array;for file in */*qm* ;do array[${file%/*}]='';done;echo "${!array[@]}"

This can be explained:

cd tstdir
declare -A array          # Declare associative array (named ``array'')
for file in */*qm* ;do    # For each *qm* in a subdirectory from there
    array[${file%/*}]=''  # Set an entry in array named as the directory, containing nothing
done
echo "${!array[@]}"       # print each entry in array

... if there is no file matching the pattern, the result would display *.
So to perfect the job, all that's left to do is:

resultList=("${!array[@]}")
[ -d "$resultList" ] || unset $resultList

(This would be a lot quicker than

declare -A array
for file in */*qm*; do
    [ "$file" == "*/*qm*" ] || array[${file%/*}]=''
done
echo "${!array[@]}"

)
Can the UNIX command tree display only directories matching a pattern?
1,455,402,031,000
I'm using the following script to copy multiple files into one folder: { echo $BASE1; echo $BASE2; echo $BASE3; } | parallel cp -a {} $DEST Is there any way to use only one echo $BASE with brace expansion? I mean something like this: { echo $BASE{1..3} } | parallel cp -a {} $DEST
You could use an array:

BASES[0]=...
BASES[1]=...
BASES[2]=...
# or
BASES+=(...)
# or
BASES=(foo bar baz)

echo "${BASES[@]}" | parallel cp -a {} $DEST

To make it safer (spaces and newlines in the variable in particular), something like this should work more reliably:

printf "%s\0" "${BASES[@]}" | parallel -0 cp -a {} "$DEST"

Note: arrays aren't in POSIX, this works with current versions of bash and ksh though.
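The NUL-delimited handoff can be demonstrated without parallel (which may not be installed) by substituting xargs -0, and with the positional parameters standing in for the array so the sketch also runs in plain POSIX sh:

```shell
# "$@" plays the role of "${BASES[@]}"; elements with spaces survive intact.
set -- "dir one" "dir two" "dir three"

# Each element arrives at the child command as exactly one argument
printf '%s\0' "$@" | xargs -0 -n1 printf '<%s>\n'
# prints:
#   <dir one>
#   <dir two>
#   <dir three>
```

Replacing the final printf with cp -a {} "$DEST" via parallel -0 (as in the answer) keeps the same safety guarantee.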
Copy multiple files to one dir with parallel
1,455,402,031,000
I am contemplating using mpg123 as an audiobook player. I can't find any other good audiobook players for Linux, and I think mpg123 may be my best option. My audiobooks are organized by directories and the track names are numbered (e.g., Track-01.mp3, Track-02.mp3, etc.). What I am seeking is a way to save the last location played (the track and the position within the track) when I stop listening, and then be able to start mpg123 at that place in the audiobook the next time I listen. It would be ideal to have this "last location" info saved in a text file in the directory. That way I could start each audiobook at the last location by using the text file stored in that audiobook's directory. A similar bookmark feature would be nice too. It would be almost the same implementation, it seems. The "last location" info could be saved in a text file named e.g. "last" and each bookmark could be saved in a text file named bookmark.N (where N simply increments). Is a trivial implementation possible, maybe as a simple bash script? I'm not a developer.
Thomas Orgis, a mpg123 developer and maintainer, just implemented this functionality in mpg123 (as a script called 'conplay') at my request. His description is: This little wrapper runs mpg123 on a given directory (hand in '.' for the current one), playing all *.mp[123] files therein in terminal control mode. The extra trick is that a playlist file (conplay.m3u) is read and updated (created) with the position you left playback at (via 'q' key), to return on next invokation. The name stands for CONtinued PLAYback. What did you think?;-) I think it is brilliant! It does exactly what I asked for in my question above. I've been using it all day and it works flawlessly. I could not be happier! You can get it from http://mpg123.org/snapshot Thanks Thomas!
command line audio with mpg123 - how to save position in audio and begin from that location next time?
1,455,402,031,000
I have an interactive terminal program, that accepts stdin (telnet for example). I want to send it some input before interacting with it, like this: echo "Hello" | telnet somewhere 123 But that only sends in Hello and kills telnet afterwards. How can I keep telnet alive and route input to it?
You can't change what STDIN of telnet is bound to after you start, but you can replace the simple echo with something that will perform more than one action - and let the second action be "copy user input to the target": { echo "hello"; cat; } | telnet somewhere 123 You can, naturally, replace cat with anything that will copy from the user and send to telnet. Keep in mind that this will still be different to just typing into the process. You have attached a pipe to STDIN, rather than a TTY/PTY, so telnet will, for example, be unable to hide a password you type in.
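The { scripted; interactive; } pattern can be tried anywhere by pointing it at a local command (tr here) instead of a network service; a printf stands in for the user's later typing:

```shell
# The grouped commands share one stdout pipe: echo sends the scripted part
# first, then cat forwards the group's own stdin (the outer printf).
printf 'typed later\n' | { { echo "hello"; cat; } | tr 'a-z' 'A-Z'; }
# prints:
#   HELLO
#   TYPED LATER
```

Swap the tr stage for telnet somewhere 123 and drop the outer printf, and cat forwards your live keyboard input after the scripted greeting.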
Sending some input into a process, then resuming input from command line
1,455,402,031,000
We use the Exceed tool to connect to our UNIX servers, but sometimes the command line behaves erratically. When I am typing a command, nothing happens -- nothing is displayed on the screen and I need to close the terminal and open a new one. Why does that happen? Is it related to stty sane? I have typed stty sane thinking that it is used when your command line starts behaving erratically; is that what it's for?
I'm not sure if it is what is happening in your case, but pressing Ctrl+S will freeze the tty, causing no updates to happen, though your commands are still going through. To unfreeze the tty, you need to hit Ctrl+Q. Again, I'm not totally sure this is what is happening in your case, but I do this by accident often enough, that it is possible it may affect others as well.
Keyboard input not displayed on the screen?
1,455,402,031,000
I have some HTTP requests in raw format, such as:

GET /docs/index.html HTTP/1.1
Host: www.nowhere123.com
Accept: image/gif, image/jpeg, */*
Accept-Language: en-us
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)
(blank line)

I have to test and debug them; for this reason I need an easy way to replay them on my computer. How can I make an HTTP request from raw data using curl, httpie, or another CLI HTTP client?
I suggest telnet:

telnet www.nowhere123.com 80

and you are talking raw to the server. Example:

telnet 127.0.0.1 80
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
GET /index.htm

<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>301 Moved Permanently</title>
</head><body>
<h1>Moved Permanently</h1>
<p>The document has moved <a href="https:///index.htm">here</a>.</p>
</body></html>
Connection closed by foreign host.
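For non-interactive replay, one wrinkle is that HTTP wants CRLF line endings and a blank line terminating the headers, so it helps to rebuild those explicitly before handing the bytes to the server. A sketch (request lines taken from the question; nc's flags vary between netcat implementations, so the send step is left commented):

```shell
# Convert LF-terminated lines on stdin to CRLF-terminated
crlf() {
    while IFS= read -r line; do printf '%s\r\n' "$line"; done
}

req_file=${TMPDIR:-/tmp}/request.bin
{
    crlf <<'EOF'
GET /docs/index.html HTTP/1.1
Host: www.nowhere123.com
EOF
    printf '\r\n'    # blank line terminating the header block
} > "$req_file"

# To actually send it (flag spellings differ between netcat variants):
#   nc www.nowhere123.com 80 < "$req_file"
od -c "$req_file" | head -n 3    # inspect that \r \n pairs made it in
```

The od check is just a sanity step; once the file looks right, the same bytes can be replayed against any host/port.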
How to make an HTTP request from raw data?
1,455,402,031,000
What kinds of files are okay to delete in the /usr directory without blowing up the server? Is it possible to increase the size of the /usr directory without blowing up the server?
The /usr directory usually contains the /usr/share/doc directory tree which contains just documentation files. That should be fairly safe to move elsewhere or outright delete in case of emergency. But you should use something like

du -kx /usr | sort -rn | less

to list the directories in the /usr filesystem in order of decreasing size, and so find out what is taking the most space. For example, here's the beginning of such a listing from my system:

14638572  /usr
7232184   /usr/share
6150780   /usr/lib
2195108   /usr/lib/x86_64-linux-gnu
2143388   /usr/share/doc
1213312   /usr/share/doc/texlive-doc
1123212   /usr/lib/debug
816908    /usr/share/doc/texlive-doc/latex
775616    /usr/share/locale
731592    /usr/bin
589800    /usr/lib/i386-linux-gnu
503216    /usr/lib/python2.7
442400    /usr/lib/python2.7/dist-packages
324104    /usr/share/fonts
315548    /usr/include
313148    /usr/share/doc/texlive-doc/generic
301928    /usr/share/texlive
300248    /usr/share/texlive/texmf-dist
292876    /usr/lib/libreoffice
...

Obviously /usr is at the top, since it contains everything else. /usr/share and /usr/lib both contain stuff that is important to various programs. However, if the use of /usr has suddenly grown, it might be useful to take a look inside those directories to see if there are any new files accidentally misplaced in there. But it does look like /usr/share/doc takes a significant chunk of my /usr, and in particular, /usr/share/doc/texlive-doc is a major disk hog. Based on this, I could go to the package manager, look at any *-doc packages, in particular any TeXLive documentation packages, and tell the package manager to uninstall them if they are not needed. (If this was a critical server and I needed some disk space in /usr ASAP, I might just delete /usr/share/doc/texlive-doc and rely on the package manager to reinstall its contents if necessary. But I would strongly prefer using the package manager instead of just deleting entire directories manually.)
/usr/lib/debug is also fairly big; perhaps I've forgotten to uninstall some debugging packages I no longer need?
My /usr directory is 100% full. Unable to download packages
1,455,402,031,000
I was skimming over the documentation of find to better utilize the command usage. I was reading the part that says GNU find will handle symbolic links in one of two ways; firstly, it can dereference the links for you - this means that if it comes across a symbolic link, it examines the file that the link points to, in order to see if it matches the criteria you have specified. Secondly, it can check the link itself in case you might be looking for the actual link. If the file that the symbolic link points to is also within the directory hierarchy you are searching with the find command, you may not see a great deal of difference between these two alternatives. By default, find examines symbolic links themselves when it finds them (and, if it later comes across the linked-to file, it will examine that, too). To my understanding, if I do something like: find -L -iname "*foo*" this will search the current directory recursively and when it encounters a symlink, it follows the link to the original file. If the original file has the name pattern *foo*, the former link is reported. However, this doesn't seem the case. I have main-file sl-file -> main-file Running the command above find -L -iname "*main*" reports ./main-file And I was expecting ./main-file # because it matches the criterion ./sl-file # because the file points to matches the criterion That being said, using another test like -type works as I am expecting. Say I have this: main-file dir/sl-file -> ../main-file Running this find dir -type f returns nothing. But this find -L dir -type f reports dir/sl-file. What gives? I have gone through this post that says a file name isn't a file property. This is something I can't really get my head around.
Gnu find documentation is not as strict in terminology as the POSIX one. The latter sheds light and I will refer to it. It doesn't define -iname so I will concentrate on -name. I assume -iname is designed to be like -name, only case-insensitive. Therefore I expect all properties of -name that have nothing to do with case to apply to -iname as well. These are relevant parts of the POSIX documentation: find [-H|-L] path... [operand_expression...] The find utility shall recursively descend the directory hierarchy from each file specified by path, […]. Each path operand shall be evaluated unaltered as it was provided, including all trailing <slash> characters; all pathnames for other files encountered in the hierarchy shall consist of the concatenation of the current path operand, a <slash> if the current path operand did not end in one, and the filename relative to the path operand. The relative portion shall contain no dot or dot-dot components, no trailing <slash> characters, and only single <slash> characters between pathname components. -name pattern The primary shall evaluate as true if the basename of the current pathname matches pattern using the pattern matching notation […] -print […] it shall cause the current pathname to be written to standard output. And definitions: Basename For pathnames containing at least one filename: the final, or only, filename in the pathname. […] Filename A sequence of bytes […] used to name a file. The bytes composing the name shall not contain the <NUL> or <slash> characters. […] A filename is sometimes referred to as a "pathname component". […] Pathname A string that is used to identify a file. […] It has optional beginning <slash> characters, followed by zero or more filenames separated by <slash> characters. So -name is interested in the final filename in the current pathname; and the current pathname is a string that is used to identify the current file. Used by whom? In this case by find. 
Conceptually a pathname may have nothing to do with names in the filesystem. If find uses some string to identify a file then the string is called "pathname" and -name uses it. Invoke find . -print or find -L . -print. You will see all pathnames used by this particular invocation of find. Their final filenames are what -name would test if you used -name. In your example with main-file and sl-file, the command is find -L -iname "*main*". There is implicit -print at the end, the output you observed is from -print. You expected: ./main-file # because it matches the criterion ./sl-file # because the file points to matches the criterion But if this was the case, it would mean -print gave you ./main-file and ./sl-file, so these are the exact pathnames, so main-file and sl-file are the respective basenames -name (or -iname) dealt with. This doesn't fit. Only one of these basenames matches the pattern you used (*main*). This is why you got only one result. Specifying -name "*main*" (or -iname "*main*") and expecting ./sl-file to appear is equivalent to expecting sl-file to match *main*. It would make some sense to expect ./main-file to appear twice. The premise would be the symlink causes find to change the second pathname from ./sl-file to ./main-file. Then both pathnames would match *main* and both would be printed as ./main-file. This doesn't happen. If you'd like this to happen, consider a symlink bar pointing to /etc/fstab and placed in /tmp/foo/. We're in the foo directory. What should find -L . print (besides .)? It seems you'd like this pathname to pass -name fstab test, so the basename must be fstab. On the other hand, according to the rules the pathname must begin with ./ (because . is the provided path) and shall contain no dot-dot components. There is no sane and meaningful pathname that can be used. Now what? Fortunately in such case the tool prints just ./bar. This is the pathname and it (as a string) carries no connection to fstab. 
Few examples that don't use symlinks but show how -name works: cd /etc && find . -name . 2>/dev/null It finds . despite the fact its "real" (specific, in-the-filesystem) name is etc. It doesn't find subdirectories despite the fact any directory can be . in some circumstances. cd /etc && find . -name etc 2>/dev/null It finds neither etc nor .. Create an empty FAT32 filesystem and mount it, cd to the mountpoint. The filesystem is case-insensitive and Linux knows it. Create a file named a in the filesystem. Experiment like this: $ find . . ./a In this case the tool must have obtained a from the filesystem at some point. $ find a a $ find A A In this case the tool uses a or A taken from its command line argument. The filesystem only confirms such file exists. The filesystem (and the OS) knows this particular file can be referred to as a or A. $ find a -name A $ find A -name a Nothing! This shows -name doesn't care what the filesystem knows about the file. Only the pathname used by find matters. Somewhat similarly in case of your example: -iname doesn't care what the filesystem knows about the symlink and its target. Only the pathname used by find matters. To clarify and explicitly state what happens, let us go back to your example with the following directory structure: . ├── main-file └── sl-file -> main-file find . -print or find -L . -print prints: . ./main-file ./sl-file These are the pathnames, i.e. strings find uses to identify the three files (directory is also a file). The string . comes from the command, the other two were build by examining . (now I mean the file, not the string), learning it is of the type directory, deciding if we should descend (in general think of -prune, -maxdepth if supported), listing its content: main-file, sl-file. Note the string /.sl-file is built before anything is done to the file identified by it. To do anything with the file, find needs the string. 
But -name or -print don't do anything to the file, they don't need its data or metadata. They work with the pathname, the string. When -name "*main*" is evaluated for any pathname, the corresponding file or the entire filesystem is completely irrelevant. The only relevant thing is the pathname which is a string; more specifically the last component of it, i.e. the basename, also a string. For any given pathname -name doesn't care if you used -L or if the file is a symlink in the first place, or where it points to, or if it's not broken. It works with the already known string. On the other hand tests like -type or -mtime need to query the filesystem about the file identified by the pathname. String is not enough for them. In case of a symlink -L decides if they query about the target of the symlink or about the symlink itself. Still, if there is -print involved then it will print the pathname, regardless of what was queried. In other words: without -L ./main-file string identifies ./main-file file of type f ./sl-file string identifies ./sl-file file of type l with -L ./main-file string identifies ./main-file file of type f ./sl-file string also identifies ./main-file file of type f Then you should mind which test or action works with pathnames (strings) and which works with files. -name and -print work with pathnames so find . -name "*main*" with or without -L will only print ./main-file -type works with files so find . -type f will print one pathname: ./main-file and find -L . -type f will print two pathnames: ./main-file ./sl-file
find and symbolic link
1,455,402,031,000
The following error occurs when using the command below. Guide me to overcome this issue.

rpasa-vd1-363: cd /home/rpasa/DDEMO
No more processes.
You have hit your per-user process limit. You will have to talk to your system administrator to find out what that limit is, or you can run the ps command yourself to see how many processes are running under your user account.
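Two quick checks you can run yourself (the /proc path is Linux-specific; in bash the same limit is also visible as ulimit -u, and in dash as ulimit -p):

```shell
# The per-user process ceiling as seen by this shell
grep 'Max processes' /proc/self/limits

# How many processes your account is running right now
ps -u "$(id -un)" -o pid= | wc -l
```

If the count is at or near the limit, a runaway script (often a fork bomb or a loop spawning background jobs) is the usual culprit; killing those processes frees slots immediately.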
Why the following error occurs in the terminal while using linux commands?
1,455,402,031,000
I'm looking at the mail command which fires off Heirloom Mail. My procmail failed and it has 55 messages in the queue. I need to forward them out to another email and then process them manually. I'm not sure how to get them from the Linux server out to my email though.
I discovered that I had to run set forward-as-attachment first; after that, I could forward the mail out.
Is there any way to forward mail from the command line in Linux
1,455,402,031,000
The following command takes about 10 minutes to output the result:

find . -name "muc*_*_20160920_*.unl*" | xargs zcat | awk -F "|" '{if($14=="20160920100643" && $22=="567094398953") print $0}'| head

How can I improve its performance?
As noted in the comments, zgrep is a better choice for this kind of task, combined with bash's globstar option, which lets the ** component match any path under the directory (excluding hidden ones):

shopt -s globstar
zgrep -m 10 '^\([^|]*|\)\{13\}20160920100643|\([^|]*|\)\{7\}567094398953' ./**/muc*_*_20160920_*.unl*
shopt -u globstar

(Note that ** must stand alone as a path component, as in ./**/muc*..., or bash treats it as a plain *.)
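The regex replaces awk's field tests: thirteen "anything up to a pipe" groups put the match at field 14, seven more reach field 22. A fabricated record (field values from the question, filler names elsewhere) shows it matching:

```shell
pattern='^\([^|]*|\)\{13\}20160920100643|\([^|]*|\)\{7\}567094398953'

# Build a 23-field pipe-delimited record with the target values in
# fields 14 and 22; all other field contents are made up.
sample=$(awk 'BEGIN {
    for (i = 1; i <= 13; i++) printf "f%d|", i     # fields 1-13
    printf "20160920100643|"                       # field 14
    for (i = 15; i <= 21; i++) printf "f%d|", i    # fields 15-21
    printf "567094398953|f23"                      # fields 22 and 23
}')

printf '%s\n' "$sample" | grep -c "$pattern"    # prints 1: the record matches
```

Since each [^|]* cannot cross a pipe, the groups consume exactly one field apiece, so the literals are anchored to fields 14 and 22 no matter what the other fields contain.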
How can I optimize this Unix command?
1,455,402,031,000
I have a file with a lot of spelling mistakes in it. I need to find all the mistakes and correct them. I used:

spell [filename]

It shows me the spelling mistakes:

thiis
iz
wurse

... and many others. How can I correct all the mistakes in one command?
I couldn't find a man entry for spell, or ispell, but I did find a ComputerHope entry, which I'll quote: spell is essentially a wrapper for the much more complex ispell utility. However, unlike ispell (or the very similar GNU program, aspell), spell does not make any spelling suggestions. It only reports which words were misspelled. In other words, you can't use spell to correct mistakes.
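To actually fix words you need a checker with a replace mode. GNU aspell is one common option (a guess at what's available on your system; adjust for your distribution): aspell check FILE walks through the file interactively offering corrections, while aspell list merely reports, like spell:

```shell
# Sample text with deliberate misspellings (from the question)
printf 'thiis iz wurse\n' > sample.txt

# Interactive fixing would be:  aspell check sample.txt
# Non-interactive report, similar to spell's output:
if command -v aspell >/dev/null 2>&1 \
   && aspell list < sample.txt > misspelled.txt 2>/dev/null; then
    cat misspelled.txt          # one misspelled word per line
else
    echo "aspell (or its dictionary) not installed"
fi
```

There is no safe fully-automatic "correct everything" command, since the right replacement for a misspelling usually needs a human decision; the interactive check mode is the closest practical equivalent.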
Spelling mistakes in the file?
1,455,402,031,000
Objective: I am trying to set up a 3G connection using NetworkManager 0.9.4 via command line. I have previously succeeded (see this question) by setting up the connection through the nm-applet (GUI in X) but now I need to replicate this on many machines and therefore want to do it via command line as part of an installation bash script that does this among other things. My approach: I have written a bash script that creates this connection file and places it in /etc/NetworkManager/system-connections/: [connection] id=viettel uuid=df62d4f8-0699-11e5-8996-ab1b9b4c6754 type=gsm autoconnect=false [ppp] lcp-echo-failure=5 lcp-echo-interval=30 [ipv4] method=auto [serial] baud=115200 [gsm] number=*99# password-flags=1 apn=e-connect The file looks exactly like the (working) file created by the GUI tool before. I added monitor-connection-files=yes to /etc/NetworkManager/NetworkManager.conf so NM would notice changed configuration files during runtime. Problem: However, when trying to establish a connection (sudo nmcli con up id viettel), I get this error: Error: Unknown connection: viettel. And indeed, when running sudo nmcli con list to see the connections that NM is aware of, I get an empty list: NAME UUID TYPE TIMESTAMP-REAL I saw that some people recommend running nmcli con reload to refresh this list but that command is not available in the latest stable NM package for my system (see below) and it shouldn't be needed with monitor-connection-files=yes anyway, as far as I understand. I tried restarting the NM service and rebooting, both without success. So it looks like NM is simply not looking for the connection file in the right place or has some sort of list of available connections that was not refreshed after the new file was added. My question: How do I make NM aware of the new connection file? Any other advice what to do next? System information: I am running this on a Raspberry Pi 2 with Raspbian Wheezy (all packages updated). 
NetworkManager is version 0.9.4 (I saw that there are newer versions available but apparently not released as a stable Debian package for the RPi). Thanks a lot for your help!
The NetworkManager.conf man page notes of the base config file plugin:

For security, it will ignore files that are readable or writeable by any user or group other than root.

In this case the result is "Unknown connection". chown your connection file to root and chmod it 0600 to match those created by NetworkManager.

More generally, connection files are very sensitive to spelling, and nmcli a) will ignore a connection entirely if anything is wrong and b) won't tell you about typos in its normal output. However, see /var/log/syslog (default, configurable) for NetworkManager messages. NetworkManager doesn't seem to notice permission or ownership changes on its own, so touch the file to have it re-scanned in those cases.

I had created a connection file with the incorrect line

key-mgmt=wpa2-psk

...that value should have been simply "wpa-psk" and only the log file had been telling me where the problem was:

Sep 17 12:26:05 ahost NetworkManager[2477]: keyfile: updating /etc/NetworkManager/system-connections/ATT2
Sep 17 12:26:05 ahost NetworkManager[2477]: keyfile: error: invalid or missing connection property 'key-mgmt'
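As commands, the fix looks like the sketch below. The real path (from the question) requires root, so those lines are shown commented; the same chmod pattern is then exercised on a throwaway file so it can be tried without touching /etc:

```shell
# On the real system, as root:
#   chown root:root /etc/NetworkManager/system-connections/viettel
#   chmod 0600      /etc/NetworkManager/system-connections/viettel
#   touch           /etc/NetworkManager/system-connections/viettel   # force re-scan

# Demonstrate the target permissions on a temporary file
demo=$(mktemp)
chmod 0600 "$demo"
ls -l "$demo" | cut -c1-10      # expect -rw-------
```

After fixing ownership and mode, nmcli con list (or nmcli con show on newer versions) should list the connection.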
Manual creation of NetworkManager connection file fails (Error: Unknown connection)
1,455,402,031,000
Knowing that "How to convert from text to .pdf" is already well answered here link and here link, I am looking for something more specific: Using Claws-Mail [website] and a Plug-In [RSSyl] to read RSS feeds I collected a lot of text files. These I want to convert into .pdf files. Problem: The files inside the folders are numbered [1, 2, …, 456]. Every feed has its own folder, but inside I have 'just' numbered files. Every file contains a header [followed by the message's content]: Date: Tue, 5 Feb 2013 19:59:53 GMT From: N/A Subject: Civilized Discourse Construction Kit X-RSSyl-URL: http://www.codinghorror.com/blog/2013/02/civilized-discourse-construction-kit.html Message-ID: <http://www.codinghorror.com/blog/2013/02/civilized-discourse-construction-kit.html> Content-Type: text/html; charset=UTF-8 <html><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> <base href="http://www.codinghorror.com/blog/2013/02/civilized-discourse-construction-kit.html"> </head><body> <p>URL: <a href="http://www.codinghorror.com/blog/2013/02/civilized-discourse-construction-kit.html">http://www.codinghorror.com/blog/2013/02/civilized-discourse-construction-kit.html</a></p> <br> <!-- RSSyl text start --> Question: A way to convert each file into a .pdf file and rename it, based upon the name given under Subject. Super-awesome would be converting and re-naming this way: "folder.name"_"date"_"file name" with each information taken from the header data. As there are a few hundred files, I am looking for a batch processing way. The files are html formatted, but without a .htm[l] suffix.
If you have a relatively simple file tree where you have only one level of directories, and where each directory contains a list of files but there are no sub directories, you should be able to do something like this (you can paste this directly into your terminal and hit Enter):

for dir in *; do                       ## For each directory
  if [ "$(ls -A "$dir")" ]; then       ## If the dir is not empty
    for file in "$dir"/*; do           ## For each file in $dir
      i=0;                             ## initialize a counter
      ## Get the subject
      sub=$(grep ^Subject: "$file" | cut -d ':' -f 2-);
      ## get the date, and format it to MMDDYY_Hour:Min:Sec
      date=$(date -d "$(grep ^Date: "$file" | cut -d ':' -f 2-)" +%m%d%y_%H:%M:%S);
      ## the pdf's name will be <directory's name> _ <date> _ <subject>
      name="$dir"_"$date"_"$sub";
      ## if a file of this name exists
      while [ -e "$dir/$name".pdf ]; do
        let i++;                       ## increment the counter
        name="$dir"_"$date"_"$sub"$i;  ## append it to the pdf's name
      done;
      wkhtmltopdf "$file" "$dir"/"$name".pdf;  ## convert html to pdf
    done
  fi
done

NOTES

This solution requires wkhtmltopdf: "Simple shell utility to convert html to pdf using the webkit rendering engine, and qt." On Debian based systems you can install it with sudo apt-get install wkhtmltopdf

It assumes there are no files in the top level directory and only desired html files in all sub directories.

It can deal with file and directory names that contain spaces, new lines and other unorthodox characters.

Given a file dir1/foo with the contents of the example you have posted, it will create a file called dir1/dir1_020513_20:59:53_Civilized Discourse Construction Kit10.pdf
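The naming step can be exercised on its own, without wkhtmltopdf installed. This sketch — make_name is a function name invented here — reproduces the "folder_date_subject" construction from the header fields; the sample header values come from the question:

```shell
# Build "<dir>_<MMDDYY_HH:MM:SS>_<subject>" from a message header file.
make_name() {   # make_name <dir> <headerfile>
    dir=$1 file=$2
    # everything after the first ':' of the Subject line, minus the space
    sub=$(grep '^Subject:' "$file" | cut -d ':' -f 2- | sed 's/^ //')
    # let GNU date re-parse the RFC-822-style Date header
    when=$(date -d "$(grep '^Date:' "$file" | cut -d ':' -f 2-)" +%m%d%y_%H:%M:%S)
    printf '%s_%s_%s\n' "$dir" "$when" "$sub"
}
```

Note that date -d converts to the local timezone, which is why the question's 19:59:53 GMT header can end up as 20:59:53 in the generated name.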
convert bulk of text files to pdf with naming based upon header file
1,455,402,031,000
I have a Debian Squeeze variant (MintPPC 9) installed on an old Mac Powerbook G4 (PowerPC CPU). I wish to boot it into a multiuser CLI login shell instead of automatically booting into the login screen for LXDE. I do, however, wish to keep GDM or whatever DM is used by LXDE since I also use it to switch between LXDE and Awesome WM. I wish to boot by default into a CLI login shell; I could then startx to start my GUI if I need it. I am aware that Ctrl-Alt-(F1-6) will open a separate tty instance with a login shell but it seems wasteful to have a GUI running even if asleep if I am working purely from the command line particularly considering the limited resources of my Powerbook G4. I now know how to do this on Ubuntu installs on other, Intel based machines, by configuring GRUB, however this machine uses Yaboot as the bootloader.
The extra options in grub on the kernel line are in fact kernel boot options passed across to the kernel when loaded. So if you are referring to appending text to the grub line in Ubuntu, then the same config should be able to be used for Mint and passed to the kernel by Yaboot. It looks like Yaboot supports an append= option:

append="root=/dev/sda4 ro quiet splash text"
Configuring Yaboot and Debian to Boot into a Command Line Login Shell?
1,455,402,031,000
With this plugin we can fuzzy search through candidates of apt packages. The output of running it is as below:

# apt zzuf zziplib-bin zytrax

The code in the link (I put the if [[ "${BASH_<...> fi part into a function my-fuzzy-test):

#!/usr/bin/env bash

function insert_stdin {
  # if this wouldn't be an external script
  # we could use 'print -z' in zsh to edit the line buffer
  stty -echo
  perl -e 'ioctl(STDIN, 0x5412, $_) for split "", join " ", @ARGV;' \
    "$@"
}

function my-fuzzy-test() {
  if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
    packages="$(apt list --verbose 2>/dev/null | \
      # remove "Listing..."
      tail --lines +2 | \
      # place the description on the same line
      # separate the description and the other information
      # with a ^
      sd $'\n {2}([^ ])' $'^$1' | \
      # place the package information and the package description
      # in a table with two columns
      column -t -s^ | \
      # select packages with fzf
      fzf --multi | \
      # remove everything except the package name
      cut --delimiter '/' --fields 1 | \
      # escape selected packages (to avoid unwanted code execution)
      # and remove line breaks
      xargs --max-args 1 --no-run-if-empty printf "%q ")"

    if [[ -n "${packages}" ]]; then
      insert_stdin "# apt ${@}" "${packages}"
    fi
  fi
}

I put the above code in ~/.zshrc and map my-fuzzy-test to a keybinding:

zle -N my-fuzzy-test
bindkey "^[k" my-fuzzy-test

When I press alt-k to trigger my-fuzzy-test, it shows nothing.
If I remove the line if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then and the fi at the end, then pressing the keybinding does trigger the function, but it can't display the expected output as above and echoes the error

stty: 'standard input': Inappropriate ioctl for device

I know we can populate candidates into the terminal's prompt, as in this code:

# CTRL-R - Paste the selected command from history into the command line
fzf-history-widget() {
  local selected num
  setopt localoptions noglobsubst noposixbuiltins pipefail no_aliases 2> /dev/null
  selected=( $(fc -rl 1 | perl -ne 'print if !$seen{(/^\s*[0-9]+\s+(.*)/, $1)}++' |
    FZF_DEFAULT_OPTS="--height ${FZF_TMUX_HEIGHT:-40%} $FZF_DEFAULT_OPTS -n2..,.. --tiebreak=index --bind=ctrl-r:toggle-sort $FZF_CTRL_R_OPTS --query=${(qqq)LBUFFER} +m" $(__fzfcmd)) )
  local ret=$?
  if [ -n "$selected" ]; then
    num=$selected[1]
    if [ -n "$num" ]; then
      zle vi-fetch-history -n $num
    fi
  fi
  zle reset-prompt
  return $ret
}
zle -N fzf-history-widget
bindkey '^R' fzf-history-widget

So how can I correctly populate the selected candidates to the terminal's prompt with my-fuzzy-test?
That Ctrl-R widget uses zle vi-fetch-history to insert history matches, which is not applicable to what you're doing here. What you need to do is insert the matches you've generated into the editing buffer, as is done by another widget in the same code over here: https://github.com/junegunn/fzf/blob/master/shell/key-bindings.zsh#L62 That widget inserts the matches echoed by the function __fsel into the right side of the left part of the buffer (meaning, it inserts them into the current position of the cursor and then, afterwards, the cursor will end up to the right of what's inserted): LBUFFER="${LBUFFER}$(__fsel)"
Populate selected candidates to terminal after fuzzy search with fzf via keybinding?
1,455,402,031,000
I'm currently trying to make a tar backup to a tape, with mbuffer acting as a buffer in between, e.g.:

tar --acls -c /var/test | mbuffer -m 8G -P 100% -R 100M | dd of=/dev/nst0 bs=256k

But the buffer still runs empty about 10-20 times. So I wanted to use a file based buffer with the following command:

tar --acls -c /var/test | mbuffer -m 60G -T /srv/testbuffer -P 100% -R 100M | dd of=/dev/nst0 bs=256k

My understanding was that the option -T would create a file at the given location and use it as a read/write buffer. But no file is created, and on one system mbuffer just hangs. Am I missing something?

Does mbuffer create the file in a special way? Does mbuffer create the file in another directory? Does mbuffer extend/use the swap in some way?

As far as I read in the source of mbuffer (settings.c), mbuffer allocates space of the size of the tmpfile via malloc(). Does that mean that I still need the same amount of RAM, or at least swap, as if I would use it without the -T parameter?
"no file is created"

mbuffer -m 60G -T /srv/testbuffer creates a file, opens it and unlinks it right after. If you check the file descriptors in /proc/<PID of mbuffer>/fd you will find one that points to

/srv/testbuffer (deleted)

This is quite a standard trick when it comes to temporary files. If mbuffer terminates for whatever reason (even if brutally killed), the filesystem will eventually free the space (in the worst case of power failure or kernel panic: after fsck in the future). There's no risk a totally unused, abandoned file will occupy your diskspace forever.

"on one system mbuffer just hangs"

The tool actually reserves diskspace like fallocate (not just creates a sparse file like truncate) before it starts its main job of buffering data. Allocating a large file may take time. This delay depends on the filesystem type, possible fragmentation etc.

"Does that mean that I still need the same amount of RAM, or at least swap, as if I would use it without the -T parameter?"

No. The file exists in the filesystem appropriate for the chosen path (/srv/testbuffer in your case), but the path no longer exists. (Note: a new file with the same path will be a separate file with a different inode number.)
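The create-open-unlink pattern is easy to observe in plain shell on Linux. In this sketch the name disappears immediately, yet /proc shows the descriptor still pointing at the (deleted) file until it is closed:

```shell
tmp=$(mktemp)
exec 3<>"$tmp"                      # open fd 3 read/write on the file
rm "$tmp"                           # unlink: the name is gone...
target=$(readlink /proc/$$/fd/3)    # ...e.g. "/tmp/tmp.Xy12 (deleted)"
echo "still usable" >&3             # ...but the storage still works
exec 3>&-                           # closing the fd frees the space
```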
How does `mbuffer -T` work?
1,455,402,031,000
I've followed the instructions provided on the wp-cli.org site but I can't seem to get the wp command to run for all users. I copied the file to /usr/local/bin and renamed it to wp per the instructions. And when I'm logged in as root I can run wp from anywhere and it works (although it gives me the "are you sure you want to run this as root?" warning). I was under the impression that moving an executable to /usr/local/bin would make that executable available to all users. However, when I switch to another user, I get a "command not found" error. How can I install wp-cli for all users? I want to make sure every user with SSH access can run wp-cli. Thanks in advance!
I determined that the issue was related to CageFS.

Solution

If the server environment is using CloudLinux with CageFS enabled, the CageFS definitions must be updated to make the wp command accessible to non-root users. I found instructions to do this here: https://docs.redy.host/knowledge-base/install-wp-cli-cpanel-cagefs/
How do I install WP-CLI for all users on centOS 7
1,461,781,712,000
I have Okular configured that when it's already running and I open a pdf from the command line, it should open the new file as a new tab in Okular. However, if this new file is in another directory than the other one, Okular fails to open the document and only displays an empty tab with the file name and the error message Could not open /path/to/bar.pdf Short example: okular foo.pdf & okular bar.pdf works the way I would expect it to. okular foo.pdf & cd .. okular foobar.pdf would only open foo.pdf correctly, but fail to display foobar.pdf. The Okular version I'm working with is 0.19.3. //Edit: Maybe I should mention: If I open the same combination of files from a file manager (in my case Dolphin), Okular behaves as expected. I only have issues using the command line. //Edit: I just tried the same thing on another computer using Okular 0.23.2. It worked fine, so I guess the bug has been taken care of already.
Workaround: Open additional pdf files using either absolute path names or path names relative to the initial pdf file. So for example

okular foo.pdf &
okular ../foobar.pdf

and

okular foo.pdf &
okular /the/complete/absolute/path/to/foobar.pdf

both work.

//Update: To automate the workaround, this function can be added to ~/.bashrc. It simply reads the file's absolute name and passes it to okular via stdin:

function okular {
    command readlink -f "$1" | xargs okular
}
Open multiple pdfs in Okular from command line
1,461,781,712,000
Given a list of file names, how can I sort it by file modification time? The resulting output needs to look exactly like the input with the exception that the data has been sorted accordingly. Here is a sample of the input. /jobs/crm/import/done/20140227-1359-0009.txt /jobs/bridge/open-workitem/done/20140227-1359-0009.txt /jobs/bridge/opened-workitem/done/20140227-1359-0009.txt /jobs/bridge/update-workitem/done/20140227-1401-0001.txt /jobs/bridge/update-workitem/done/20140227-1403-0001.txt /jobs/tfs/import/done/20140227-1401-0001.txt /jobs/tfs/import/done/20140227-1403-0001.txt /jobs/tfs/open-workitem/done/20140227-1359-0009.txt
Assuming your input is small, and the file names don't contain spaces or other weird characters, you can just use ls:

ls -dt $(cat files)

$(cat files) puts the contents of files on the command line, and splits them on whitespace to get a list of arguments. ls -t takes a list of files as its arguments, sorts them by mtime, and prints them. -d is needed so it lists directories using their name, rather than their contents.

If that's not sufficient, you can try the decorate/sort/undecorate pattern, e.g.

$ while IFS=$'\n' read file; do
    printf '%d %s\n' "$(stat -c %Y "$file")" "$file"
  done <files | sort -k1nr | cut -f 2- -d ' ' >files.sorted

where IFS=$'\n' read file; ... done <files sets file to each newline-delimited entry in files in turn, printf...stat... turns <filename> into <mtime> <filename>, sort -k1nr sorts lines based on the first field in reverse numeric order, then cut removes the <mtime>, leaving you with just <filename>s in sorted order.
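Here is a self-contained run of the decorate/sort/undecorate pipeline against files with known mtimes (file names and dates are invented for the demonstration; GNU touch -d and stat -c are assumed):

```shell
dir=$(mktemp -d)
touch -d '2014-02-27 13:59' "$dir/a.txt"   # oldest
touch -d '2014-02-27 14:01' "$dir/b.txt"
touch -d '2014-02-27 14:03' "$dir/c.txt"   # newest
printf '%s\n' "$dir/a.txt" "$dir/b.txt" "$dir/c.txt" > "$dir/files"

# decorate each name with its mtime, sort newest first, strip the mtime
while IFS= read -r file; do
    printf '%d %s\n' "$(stat -c %Y "$file")" "$file"
done < "$dir/files" | sort -k1nr | cut -f 2- -d ' ' > "$dir/files.sorted"
```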
How to sort a list of files by time, given only the filenames
1,461,781,712,000
I am planning to analyze audio via linux command line. There are a lot of analyzers out there which have a graphical interface, but since I want to make an automated input/output of this information I can't use a GUI. The optimal case would be to send an audiostream from an embedded device to powerful machine, which analyzes the audio and then generates a csv or db entries with the data and builds some graphs out of it for a website. If there so no such thing for audiostreams, an analyzer for audiofiles will also be a great step forward.
Command-line batch audio processing tools are sox (http://sox.sourceforge.net/) and ecasound (http://www.eca.cx/ecasound/). You may want to check man soxformat for the list of supported formats (i.e. streaming and file formats). It may also be beneficial to consider including ffmpeg (see http://ffmpeg.org/ffmpeg-all.html) in the toolchain for conversion from some of the exotic codecs into mainstream formats.
Is there a command line tool for analyzing audio frequency
1,461,781,712,000
I like the command line because it preserves the context of what I'm doing. But if I use a "rogue" mode program like vi or less, the whole screen gets taken over. Is a middle way possible, where the console mode program takes over only half the screen (above or below the shell part)? I'm borrowing the term "rogue" from Eric Raymond: Roguelike programs are designed to be run on a system console, an X terminal emulator, or a video display terminal. They use the full screen and support a visual interface style, but with character-cell display rather than graphics and a mouse. I already use tmux and GNU screen to split the terminal into panes, but I'm looking for a way to stay in one shell session.
I guess you could run your full-screen program in a tmux or Screen pane directly, without an additional shell session (the shell is just another program). Another way, which I prefer, is to use a tiling/stacking window manager like i3 and the terminal program urxvt. The latter has a very fast daemon/client structure, which allows opening new windows instantly, so you could run any program in a new window this way:

urxvtc -e <command> <args>

This needs to be in a script or a function, really. The new window will take one half, one third, or so on of the screen in the default tiling mode. Combined modes are also possible in these WMs.
Is there any way to have console (rogue) mode programs take over only part of the terminal screen?
1,461,781,712,000
I need to count the number of files under a folder and use the following command.

cd testfolder
bash-4.1$ ls | wc -l
6

In fact, there are only five files under this folder:

bash-4.1$ ls
total 44
-rw-r--r-- 1 comp 11595 Sep  4 22:51 30.xls.txt
-rw-r--r-- 1 comp 14492 Sep  4 22:51 A.pdf.txt
-rw-r--r-- 1 comp  8160 Sep  4 22:51 comparison.docx.txt
-rw-r--r-- 1 comp   903 Sep  4 22:51 Survey.pdf.txt
-rw-r--r-- 1 comp  1206 Sep  4 22:51 Steam Table.xls.txt

It looks like ls | wc -l even counts the total 44 line as a file, which is not correct.
wc is a char, word, and line counter, not a file counter. You, the programmer/script writer, are responsible for making it count what you want, and for adjusting the calculation accordingly. In your case, you could do something like:

echo $((`ls|wc -l`-1))

Finally, note that your ls is probably an alias, as it gives a long listing, which is not what the normal ls without arguments does. It may therefore be a good idea to refer to ls's full path (usually /bin/ls) to avoid confusion.
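Rather than correcting for the total line, counting can avoid parsing ls output entirely. A sketch using the shell's own globbing (count_entries is a name invented here; like plain ls it skips dot files, and it counts directories as well as regular files):

```shell
count_entries() {
    dir=$1
    set -- "$dir"/*           # expand the glob into $1, $2, ...
    # an unmatched glob stays literal; report 0 in that case
    [ -e "$1" ] || [ -L "$1" ] || { echo 0; return; }
    echo $#
}
```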
The result of "ls | wc -l" does not match the real number of files [duplicate]
1,461,781,712,000
In the following code, I have to poll $tmp_input to continue executing the code because wezterm cli send-text is asynchronous. This makes sure that $tmp_input is ready.

tmp_input=$(mktemp ./tmp_input.XXXXXX)

echo "read input; echo \$input > $tmp_input" | wezterm cli send-text --pane-id $bottom_pane_id --no-paste

while [ ! -s "$tmp_input" ]; do
    sleep 1
done

input_value=$(cat "$tmp_input")
rm "$tmp_input"

echo "Input was: $input_value" | wezterm cli send-text --pane-id $bottom_pane_id --no-paste

The code works, but I was wondering if there is another way of accomplishing the same result.
You could create a named pipe with mkfifo instead, and read that. Reads will block until something has written to the pipe, no manual polling required. Something like:

tmp_input=$(mktemp -d ./tmp_input.XXXXXX)
mkfifo "$tmp_input/fifo"

echo "read input; echo \$input > $tmp_input/fifo" | wezterm cli send-text --pane-id $bottom_pane_id --no-paste

input_value=$(cat "$tmp_input/fifo")
rm "$tmp_input/fifo"
rmdir "$tmp_input"

echo "Input was: $input_value" | wezterm cli send-text --pane-id $bottom_pane_id --no-paste

I switched to mktemp -d as a hopefully safer alternative to getting a name from mktemp and then using that name with mkfifo.
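The blocking behaviour can be demonstrated without wezterm at all: the reader below simply waits on the fifo until the (deliberately delayed, backgrounded) writer supplies a line — no polling loop involved:

```shell
dir=$(mktemp -d)
mkfifo "$dir/fifo"
{ sleep 1; echo "hello from the writer" > "$dir/fifo"; } &
value=$(cat "$dir/fifo")   # blocks about a second, then returns
wait                       # reap the background writer
rm -r "$dir"
```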
Alternatives to file polling?
1,461,781,712,000
A coworker recently asked "What is man?" After being informed that not all things accessible from the Bash CLI are commands, I was wary to call man a command.

man man just calls it an interface:

NAME
       man - an interface to the on-line reference manuals

man has an executable:

$ which man
/usr/bin/man
$ file /usr/bin/man
/usr/bin/man: ELF 64-bit LSB shared object

So is man a program, because it has an executable? What other nouns could man be? What noun would best describe it?

Really, I'm interested in the general case of how I could determine what an arbitrary thing on the CLI is; man is just an example. For that matter, what is the word for all things that one could use on the Bash CLI? A word that encompasses commands, aliases, system calls, etc.?
I have a small shellscript, that can help me identify a command: what kind of command it is and if installed via a program package, which package. Maybe use the name what-about, #!/bin/bash LANG=C inversvid="\0033[7m" resetvid="\0033[0m" if [ $# -ne 1 ] then echo "Usage: ${0##*/} <program-name>" echo "Will try to find corresponding package" echo "and tell what kind of program it is" exit 1 fi command="$1" str=;for ((i=1;i<=$(tput cols);i++)) do str="-$str";done tmp="$command" first=true curdir="$(pwd)" tmq=$(which "$command") tdr="${tmq%/*}" tex="${tmq##*/}" if test -d "$tdr"; then cd "$tdr"; fi #echo "cwd='$(pwd)' ################# d" while $first || [ "${tmp:0:1}" == "l" ] do first=false tmp=${tmp##*\ } tmq="$tmp" tmp=$(ls -l "$(which "$tmp")" 2>/dev/null) tdr="${tmq%/*}" tex="${tmq##*/}" if test -d "$tdr"; then cd "$tdr"; fi # echo "cwd='$(pwd)' ################# d" if [ "$tmp" == "" ] then tmp=$(ls -l "$tex" 2>/dev/null) tmp=${tmp##*\ } if [ "$tmp" == "" ] then echo "$command is not in PATH" # package=$(bash -ic "$command -v 2>&1") # echo "package=$package XXXXX 0" bash -ic "alias '$command' > /dev/null 2>&1" > /dev/null 2>&1 if [ $? -ne 0 ] then echo 'looking for package ...' 
package=$(bash -ic "$command -v 2>&1"| sed -e '0,/with:/d'| grep -v '^$') else echo 'alias, hence not looking for package' fi # echo "package=$package XXXXX 1" if [ "$package" != "" ] then echo "$str" echo "package: [to get command '$1']" echo -e "${inversvid}${package}${resetvid}" fi else echo "$tmp" fi else echo "$tmp" fi done tmp=${tmp##*\ } if [ "$tmp" != "" ] then echo "$str" program="$tex" program="$(pwd)/$tex" file "$program" if [ "$program" == "/usr/bin/snap" ] then echo "$str" echo "/usr/bin/snap run $command # run $command " sprog=$(find /snap/"$command" -type f -iname "$command" \ -exec file {} \; 2>/dev/null | sort | tail -n1) echo -e "${inversvid}file: $sprog$resetvid" echo "/usr/bin/snap list $command # list $command" slist="$(/usr/bin/snap list "$command")" echo -e "${inversvid}$slist$resetvid" else package=$(dpkg -S "$program") if [ "$package" == "" ] then package=$(dpkg -S "$tex" | grep -e " /bin/$tex$" -e " /sbin/$tex$") if [ "$package" != "" ] then ls -l /bin /sbin fi fi if [ "$package" != "" ] then echo "$str" echo " package: /path/program [for command '$1']" echo -e "${inversvid} $package ${resetvid}" fi fi fi echo "$str" #alias=$(grep "alias $command=" "$HOME/.bashrc") alias=$(bash -ic "alias '$command' 2>/dev/null"| grep "$command") if [ "$alias" != "" ] then echo "$alias" fi type=$(type "$command" 2>/dev/null) if [ "$type" != "" ] then echo "type: $type" elif [ "$alias" == "" ] then echo "type: $command: not found" fi cd "$curdir" Sometimes there are two alternatives, e.g. for echo, both a separate compiled program and shell built-in command. 
The shell built-in will get priority and be used unless you use the full path of the separate program,

$ what-about echo
-rwxr-xr-x 1 root root 35000 jan 18 2018 /bin/echo
----------------------------------------------------------------------------------
/bin/echo: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 3.2.0, BuildID[sha1]=057373f1356c861e0ec5b52c72804c86c6842cd5, stripped
----------------------------------------------------------------------------------
package: /path/program [for command 'echo']
coreutils: /bin/echo
----------------------------------------------------------------------------------
type: echo is a shell builtin

Sometimes a command is linked to a program that might be hidden, e.g. the version of rename that I use,

$ what-about rename
lrwxrwxrwx 1 root root 24 maj 12 2018 /usr/bin/rename -> /etc/alternatives/rename
lrwxrwxrwx 1 root root 20 maj 12 2018 /etc/alternatives/rename -> /usr/bin/file-rename
-rwxr-xr-x 1 root root 3085 feb 20 2018 /usr/bin/file-rename
----------------------------------------------------------------------------------
/usr/bin/file-rename: Perl script text executable
----------------------------------------------------------------------------------
package: /path/program [for command 'rename']
rename: /usr/bin/file-rename
----------------------------------------------------------------------------------
type: rename is /usr/bin/rename

I have an alias for rm in order to avoid mistakes, and the alias has priority over the program in PATH. You can prefix with backslash, \rm, to skip the alias and run the program directly. (Please remember that the alias applies only for the specific user, and not for sudo and other users, unless they have defined a similar alias.)
$ what-about rm
-rwxr-xr-x 1 root root 63704 jan 18 2018 /bin/rm
---------------------------------------------------------------------------
/bin/rm: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 3.2.0, BuildID[sha1]=864c9bbef111ce358b3452cf7ea457d292ba93f0, stripped
---------------------------------------------------------------------------
package: /path/program [for command 'rm']
coreutils: /bin/rm
---------------------------------------------------------------------------
alias rm='rm -i'
type: rm is /bin/rm
How to know what are commands, system calls, bash built-ins, etc?
1,461,781,712,000
I recently installed Debian for CLI purposes. I am looking to install CLI packages, and I want to know how to search for them (CLI packages such as nano).
To find CLI packages in Debian, you can look for packages tagged as interface::command-line, either using the tag search engine, or on your system by installing debtags and running debtags search interface::command-line Both approaches have options to refine the search. See the Debtags wiki page for more details. This does have limits: packages aren’t all tagged appropriately. You can also look for packages which depend on libncurses6: apt-rdepends -r libncurses6
How to search for Debian CLI packages?
1,461,781,712,000
I want to use grep to determine how many characters should be displayed before and after what is being searched. For example I want to filter 'example' from the 'this is an example' line. By using grep example the result would be all the line containing the string, when what I hope to obtain is just "an example", two characters before the word example.
There are grep options for this:

grep -Eo '...example' test.txt

where
-E   use extended regular expressions
-o   output only the matched string
...  means any 3 characters

Both options can be merged as -Eo. Enclose the pattern in quotes to avoid shell substitution.

As suggested, an alternative for 3 or fewer characters:

grep -Eo '.{,3}example' test.txt

where x{a,b} stands for between a and b occurrences of x:
.{,3}    up to 3 of any character
X{2,10}  from 2 to 10 consecutive X (upper case X)
[xy]{5}  exactly 5 of either x or y

A note about shell substitution: grep -Eo ???example file might not work as expected if a file like 123example is present in the dir where grep is being run.
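Both variants run on the question's own example line as follows (the {0,3} spelling with an explicit lower bound is used here, since the bare {,3} form is a GNU extension):

```shell
line='this is an example'
# exactly three characters of leading context
out1=$(printf '%s\n' "$line" | grep -Eo '...example')
# up to three characters of leading context
out2=$(printf '%s\n' "$line" | grep -Eo '.{0,3}example')
# both print "an example"
```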
How to find word with part of surrounding context using grep?
1,461,781,712,000
I have a bunch of text files in the following format:

Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam
nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam
erat, sed diam voluptua. - At vero eos et accu-
sam et justo duo dolores et ea rebum. - Stet clita kasd guber-
gren, no sea takimata sanctus est Lorem ipsum dolor sit amet.

How can I print this as continuous text on the command line, but with the syllable division at the line ends removed:

Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat, sed diam voluptua. - At vero eos et accusam et justo duo dolores et ea rebum. - Stet clita kasd gubergren, no sea takimata sanctus est Lorem ipsum dolor sit amet.

I could use tr '\n' ' ' to convert the new-lines into spaces. The problem is that tr can only replace one character, and I would need some command to remove the -\n in advance. How can I achieve this on the bash command-line?
Using awk:

awk -F'-$' '{ printf "%s", sep $1; sep=/-$/?"":OFS } END{ print "" }' infile

With -F'-$' we defined the field separator as a single hyphen at the end of the line. With this, taking the first field $1 always gives the line without that trailing hyphen for lines that have one, or the entire line for lines that don't.

Then we simply print it with a sep in between, but sep changes before reading the next line: to the empty string if the current line ended with a hyphen, otherwise to OFS (the Output Field Separator, a space character by default).

In the END{...} block we add a final newline character to make the output a POSIX text file; if you don't want that to be added, just remove that part.

Using sed, alternatively:

sed ':loop
/-$/N; s/-\n//; t loop
N; s/\n/ /; t loop' infile

:loop — if the line ends with a hyphen (tested with /-$/), read the Next line and replace the "hyphen + \newline" with an empty string. If the substitution was successful (tested with t), jump back to the label loop, process the next line, and skip the rest of the script. Otherwise, read the Next line and replace the embedded \newline between those two lines with a space character. If this substitution was also successful, jump back to the label loop and process the next line.
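The same idea can be spelled without the field-separator trick, which may be easier to verify; dehyphenate is a function name invented here, and the sample text in the test is taken from the question:

```shell
# Join hyphen-broken lines directly; put a space at ordinary breaks.
dehyphenate() {
    awk '{
        if (sub(/-$/, ""))        # trailing hyphen: join with no space
            out = out $0
        else                      # ordinary line end: join with a space
            out = out $0 " "
    } END { sub(/ $/, "", out); print out }' "$@"
}
```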
Bash - remove dashes and new-lines before replacing new-lines with spaces
1,461,781,712,000
Is there a pkill-like tool which does this: send signal to all matching processes wait N seconds until the processes have terminated if all processes have terminated, nice: exit if some processes have not terminated send SIGKILL For me it is important that the tool waits until the processes have really terminated. I know that it is quite easy to do this in my favourite scripting language, but in this case it would be very nice if I use a tool which already exists. This needs to run on SuSE-Linux and Ubuntu.
There is no “standard” command which provides the behaviour you’re after. However, on Debian and derivatives, you can use start-stop-daemon’s --stop action with the --retry option:

start-stop-daemon --stop --oknodo --retry 15 -n daemontokill

will send SIGTERM to all processes named daemontokill, wait up to 15s for them to stop, then send SIGKILL to all remaining processes (from the initial selection), and wait another 15s for them to die. It will exit with status 0 if there was nothing to kill or all the processes stopped, 2 if some processes were still around after the second timeout.

There are a number of options to match processes in various ways, see the documentation (linked above) for details. You can also provide a more detailed schedule with varying timeouts.

start-stop-daemon is part of the dpkg package so it’s always available on Debian systems (and derivatives). Some non-.deb distributions make the package available too; for example, openSUSE Leap 42 has it. It’s quite straightforward to build on other platforms:

git clone https://salsa.debian.org/dpkg-team/dpkg.git
cd dpkg
autoreconf -fi && ./configure && make

You’ll need autoconf, automake, libtool, gettext. Once the build is finished you’ll find start-stop-daemon in the utils directory.
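Where start-stop-daemon isn't available, the same TERM-wait-KILL schedule can be sketched in portable shell. kill_gently is a name invented here, and unlike start-stop-daemon it takes a single PID rather than a process name:

```shell
kill_gently() {   # kill_gently <pid> <timeout-seconds>
    pid=$1 timeout=$2
    kill -TERM "$pid" 2>/dev/null || return 0    # already gone
    i=0
    while [ "$i" -lt "$timeout" ]; do
        kill -0 "$pid" 2>/dev/null || return 0   # it exited in time
        sleep 1
        i=$((i + 1))
    done
    kill -KILL "$pid" 2>/dev/null                # still there: KILL it
}
```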
pkill which waits
1,461,781,712,000
I am trying to copy a file to 10 files. Say for example I have an e-mail message named test1.eml and I want 10 copies of that file. When I searched the internet, I came across this Stack Overflow thread https://stackoverflow.com/questions/9550540/linux-commands-to-copy-one-file-to-many-files and followed the eval command mentioned by one of the community members, 'knittl':

eval 'cp test1.eml 'test{2..10}.eml';'

The above mentioned command worked and it met my requirements. Are there any other alternative/more elegant commands to achieve this, since the person who mentioned the eval command said it's kind of a dirty hack?
I would do something like

for i in {2..10}; do cp test1.eml test$i.eml; done

Yet it is more or less the same thing.
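Another eval-free alternative is tee, which writes its standard input to every file named on its command line in one pass (shown here with three files for brevity; with bash brace expansion the same pattern extends to tee test{2..10}.eml < test1.eml > /dev/null):

```shell
dir=$(mktemp -d) && cd "$dir"
printf 'hello\n' > test1.eml
# one read of test1.eml, many identical copies written
tee test2.eml test3.eml < test1.eml > /dev/null
```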
Alternate solution for making multiple copies of a single file using command line?
1,461,781,712,000
I want to grep for a word in a file in the last n lines without using the pipe. grep <string> filename enables to search the filename for a string. But, I want to search for a string in the last N lines of the file. Any command to search for that without using the pipe?
If your shell supports it (zsh, bash, some implementations of ksh), you could utilise process substitution:

grep <pattern> <(tail -n5 yourfile.txt)

where -n5 means get the five last lines. Similarly,

grep <pattern> <(head -n5 yourfile.txt)

would search through the 5 first lines of yourfile.txt.

Explanation

Simply speaking, the substituted process pretends to be a file, which is what grep is expecting. One advantage with process substitution is that you can feed output from multiple commands as input for other commands, like diff in this example:

diff -y <(brew leaves) <(brew list)

This gets rid of the pipe (|) character, but each substitution is in fact creating a pipe.1

1 Note that with ksh93 on Linux at least, | does not use a pipe but a socket pair, while process substitution does use a pipe (as it's not possible to open a socket):

$ ksh93 -c 'readlink <(:)'
pipe:[620224]
$ ksh93 -c ': | readlink /proc/self/fd/0'
socket:[621301]
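Here is a self-contained check of the approach (file contents invented for the demonstration; the command is run through bash explicitly since <( ) is not plain-sh syntax):

```shell
f=$(mktemp)
printf 'alpha\nbeta\ngamma needle\ndelta\nneedle omega\n' > "$f"
# search only the last two lines: "gamma needle" must NOT match
match=$(bash -c 'grep needle <(tail -n 2 "$1")' _ "$f")
rm -f "$f"
```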
Grep for a string in file without using pipe
1,461,781,712,000
When I do ls the output is in columns, however I need the output to be in only one column, line by line, one entry per line. So the only way I could come up with is:

echo * | xargs -n1 echo

Is this the standard way to achieve it or is this bad style?
That's bad, for all the reasons plain xargs is bad, namely it breaks with filenames containing whitespace or backslashes:

$ touch "foo bar"
$ echo * | xargs -n1 echo
foo
bar

Besides, it runs a copy of (the external) echo command for every file. In most shells you could use

printf "%s\n" *

to get the listing. Or ls -1. However, the question is, what do you want to do with the list of files? Just look at them or use them in a script? For the latter, you're probably better off using

for f in * ; do something with "$f" ; done

or some variant of

find ... -exec somecmd {} +
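The contrast is easy to reproduce in a scratch directory: printf emits each glob result as its own argument, so a name containing a space survives intact where the xargs pipeline split it in two:

```shell
dir=$(mktemp -d) && cd "$dir"
touch "foo bar" baz
listing=$(printf '%s\n' *)   # one line per file, space preserved
```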
echo * | xargs -n1 echo , Is there a shorter, more elegant way of a line by line listing?
1,461,781,712,000
When I write the code the way below, I am able to run several commands after the else statement: if [ "$?" -eq 0 ] then echo "OK" else echo "NOK" exit 1 fi However, when I use another syntax I am unable to union 2 commands after the OR: [ "$?" -eq 0 ] && echo "OK" || (echo "NOK" >&2 ; exit 1) In my use-case I have a complex script based on "$?" == 0, so I'm looking for a way to abort (additionally to echoing the message) when it is not true.
The pair of ( ) spawns a subshell, defeating the goal of exiting the whole script with the exit command inside. Just replace the ( ) with { } (with slightly adjusted syntax, because { and } are not automatic delimiters but are treated more like commands: a space is needed after {, and the last command inside must end with a terminator such as ;): this will run the chain of commands inside in the same shell, thus exit will affect this shell. [ "$?" -eq 0 ] && echo "OK" || { echo "NOK" >&2; exit 1;} UPDATE: @D.BenKnoble commented that should echo fail, the behaviour won't be like the former if ...; then ... else ... fi construct. So the first echo's exit code has to be "escaped" with a noop : command (which, being built-in, can't fail). [ "$?" -eq 0 ] && { echo "OK"; :;} || { echo "NOK" >&2; exit 1;} references: POSIX: Grouping Commands The format for grouping commands is as follows: (compound-list) Execute compound-list in a subshell environment; see Shell Execution Environment. Variable assignments and built-in commands that affect the environment shall not remain in effect after the list finishes. [...] { compound-list;} Execute compound-list in the current process environment. The semicolon shown here is an example of a control operator delimiting the } reserved word. Other delimiters are possible, as shown in Shell Grammar; a <newline> is frequently used. dash manpage,bash manpage,...
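A minimal sketch of the difference, run as throwaway bash -c one-liners:

```shell
# exit inside ( ) only leaves the subshell, so "still here" is printed
# and the overall exit status is 0:
bash -c 'false || ( echo NOK >&2; exit 1 ); echo still here'

# exit inside { } ends the whole script, so "still here" never runs
# and the exit status is 1:
bash -c 'false || { echo NOK >&2; exit 1; }; echo still here'
```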
Union commands after the || (OR) operator
1,461,781,712,000
I was wondering, is there any way to kill a process that is running on a specific local IP address and port on Ubuntu 14.04? Preferably, this would be in one command, but if not, a bash script would be perfectly fine as well.
There are likely cleaner ways, but something along the lines of: netstat -lnp | grep 'tcp .*127.0.0.1:9984' | sed -e 's/.*LISTEN *//' -e 's#/.*##' | xargs kill
How can I kill a process running on a specific IP and port? [closed]
1,461,781,712,000
Resolution: the files were saved with CR rather than LF line breaks. Mosvy pointed this out, but only posted as a comment, rather than an answer, so I am unable to officially thank him for helping me to find the cause and solve the problem. Thanks mosvy, if you come back please post as an answer so I can give you a thumbs up. SED seems to have: sed '3,10d;/<ACROSS>/,$d' input.txt > output.txt (delete line 3-10, then delete from line containing "<ACROSS>" to end of file; then write out output.) Even when I try with only: sed '3,10d' input.txt > output.txt but for some reason neither seems to work on my Mac. Not sure what else to try. I am hoping there is something very similar with AWK. Update: when I enter: sed '3,10d' input.txt > output.txt it does not delete lines 3 - 10; it just spits back the entire file to output.txt; when I try: sed '/<ACROSS>/,$d' input.txt > output.txt output.txt is blank Also, I'm on 10.9.4 Update 2: Thank you to mosvy!! I wish I could upvote your comment. It was the problem solver. It turns out the file was saved with CR rather than LF line breaks When I converted it, that cured everything. Thanks to everyone who contributed.
The OP's problem was caused by the file using CR (\r / ascii 13) instead of LF (\n / ascii 10) as line terminators, which is what sed expects. Using CR was the convention in classic MacOS; as a non-Mac user, the only use of it I've met in the wild in the last two decades was in PDF files, where it greatly complicates any naive PDF parser written in perl (unlike RS in mawk and gawk, $/ in perl cannot be a regex). As to the question from the title, yes, awk supports range patterns, and you can freely mix regexps and line number predicates (or any expression) in them. For example: NR==1,/rex/ # all lines from the first up to (and including) # the one matching /rex/ /rex/,0 # from the line matching /rex/ up to the end-of-file. awk's ranges are different from sed's, because in awk the end predicate could also match the line which started the range. sed's behavior could be emulated with: s=/start/, !s && /last/ { s = 0; print } However, ranges in awk are still quite limited because they're not real expressions (they cannot be negated, made part of other expressions, used in if(...), etc). Also, there is no magic: if you want to express something like a range with "context" (eg. /start/-4,/end/+4) you'll have to roll your own circular buffer and extra logic.
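A quick sketch of both range forms against a throwaway five-line input:

```shell
# Lines 1 through the first match, inclusive:
printf '%s\n' one two rex three four | awk 'NR==1,/rex/'
# one
# two
# rex

# From the first match through end-of-file (the end condition 0 is
# never true, so the range never closes):
printf '%s\n' one two rex three four | awk '/rex/,0'
# rex
# three
# four
```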
Does AWK have similar ability as SED to find line ranges based on text in line rather than line number?
1,461,781,712,000
I've extracted strings I'm interested in from another file and now have a list like this: StringA StringB StringA StringA StringB StringC StringB How can I extract the number of occurrences each string has using common command-line tools? I would like to end up with a list like this: StringA 3 StringB 3 StringC 1
Use: sort file | uniq -c Looks simple?
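Note that uniq -c prints the count first, with leading blanks. If you want exactly the "string count" layout from the question, a small awk pass can swap the fields; a sketch using the sample data inline:

```shell
printf '%s\n' StringA StringB StringA StringA StringB StringC StringB \
    | sort | uniq -c | awk '{print $2, $1}'
# StringA 3
# StringB 3
# StringC 1
```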
Count number of string occurrences [duplicate]
1,461,781,712,000
For example, from this file: CREATE SYNONYM I801XS07 FOR I8010.I801XT07 * ERROR at line 1: ORA-00955: name is already used by an existing object CREATE SYNONYM I801XS07 FOR I8010.I801XT07 * ERROR at line 1: ORA-00955: name is already used by an existing object Table altered. Table altered. Table altered. Table altered. Table altered. Table altered. Table altered. Table altered. DROP INDEX I8011I01 * ERROR at line 1: ORA-01418: specified index does not exist Index created. I want a way to find ORA- and show the ORA- line and the previous 4 lines: CREATE SYNONYM I801XS07 FOR I8010.I801XT07 * ERROR at line 1: ORA-00955: name is already used by an existing object CREATE SYNONYM I801XS07 FOR I8010.I801XT07 * ERROR at line 1: ORA-00955: name is already used by an existing object DROP INDEX I8011I01 * ERROR at line 1: ORA-01418: specified index does not exist
Supposing you're on an elderly system, like HP-UX, that doesn't have GNU utilities, just the old, original BSD or AT&T "grep". You could do something like this: #!/bin/sh awk '/ORA-/ { print line1; print line2; print line3; print line4; print $0 }\ // {line1 = line2; line2 = line3; line3 = line4; line4 = $0}' $1 Yes, there's tons of edge conditions this doesn't get right, but whatta ya want for nothing? Also, given that you're working on some decroded, antiquated OS and hardware, you probably don't have the CPU horsepower for fancy error handling.
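If you do have GNU grep (or any grep supporting the -B context option), the whole exercise collapses to one flag. A sketch against a throwaway sample file:

```shell
# -B 4 prints the 4 lines of leading context before each match;
# separate match groups would be delimited with "--" lines.
printf '%s\n' one two three four 'ORA-00955: name is already used' > sample.log
grep -B 4 'ORA-' sample.log
```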
Show lines matching a pattern and the 4 lines before each
1,461,781,712,000
I'm trying to remove a file and I keep getting a message similar to: [1] 12345 and nothing happens. When I run a directory search (dir), the file remains, and I get a stopped message.
You are removing a file with an & character in the name, and the rm command is being put in the background. (For the record, the 1 is the job number, and the 12345 is the process ID) It is important to quote or escape any filenames that contain special characters. A good rule of thumb is: if you think something might be a special character, it can't hurt to quote. Just put 'single quotes' around the whole filename, unless it contains a single quote mark - then it gets more complicated. You could also (instead of quotes) put a backslash \ before every special character (including any backslashes the filename may contain) Though, if you tab-complete the shell will quote or escape anything that actually is a special character for you.
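For example, with a made-up filename containing &:

```shell
touch 'report & notes.txt'    # create a file with & in the name
rm 'report & notes.txt'       # quoted: rm sees one argument, file is removed
# rm report & notes.txt       # unquoted: runs "rm report" in the background,
#                             # then tries to run a command called "notes.txt"
```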
rm command returns a message [1] 12345 What does this mean?
1,461,781,712,000
I want to separate a long path to multiple lines, like this: cd foo1/foo2/foo3/foo4/bar to cd foo1\ foo2\ foo3\ foo4\ bar
You could do this with an array but the cd command looks a bit complicated: path=( foo1 foo2 foo3 foo4 bar ) cd "$(IFS=/; echo "${path[*]}")" Array literals allow for arbitrary whitespace.
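If you only want the visual line breaks, and not an array you can reuse, a plain backslash-newline continuation also works: the shell joins the pieces into one word, so the slashes have to be written out. A sketch with made-up directories:

```shell
mkdir -p foo1/foo2/foo3
cd foo1/\
foo2/\
foo3
pwd    # ...foo1/foo2/foo3
```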
How can I separate a long path into multiple lines?
1,461,781,712,000
I want to set a variable if my condition is true on my Ubuntu system. This proves that my if-statement is correct: $ (if [ 1 == 1 ]; then echo "hi there"; fi); hi there This proves that I can set variables: $ a=1 $ echo $a 1 This shows that setting a variable in the if-statement DOES NOT work: $ (if [ 1 == 1 ]; then a=2; fi); $ echo $a 1 Any ideas why? All my google research indicates that it should work like this...
The (...) part of your command is your problem. The parentheses create a separate subshell. The subshell will inherit the environment from its parent shell, but variables set inside it will not retain their new values once the subshell exits. This also goes for any other changes to the environment inside the subshell, including changing directories, setting shell options etc. Therefore, remove the subshell: if [ 1 = 1 ]; then a=2; fi echo "$a"
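A side-by-side sketch of the two behaviours:

```shell
a=1
( if [ 1 = 1 ]; then a=2; fi )   # runs in a subshell: assignment is lost
echo "$a"                        # still 1
if [ 1 = 1 ]; then a=2; fi       # runs in the current shell
echo "$a"                        # now 2
```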
Set variable conditionally on command line [duplicate]
1,461,781,712,000
I've gotten used to using grep for my command line searches and wanted to know how to successfully do a search using the result of another search. Here's my attempt, where I am looking for 'tool' within my result: grep tool | grep -rl embed= This returns some results and then the console hangs. Are there any simple/elegant solutions to this?
Pipelines run from left to right. More precisely, the processes run in parallel, but the data flows from left to right: the output of the command on the left becomes the input of the command on the right. Here the command on the left is grep tool. Since you're passing a single argument to grep, it's searching in its standard input. Since you haven't redirected the standard input, grep is reading from the terminal: it's waiting for you to type. To search in a file, use grep tool path/to/file | … To search in a directory recursively, use grep -r tool path/to/directory | … You can filter the results to list only the lines that contain embed=. Drop the -l and -r options, they make no sense when the input is coming from standard input. grep -r tool path/to/directory | grep 'embed=' This lists lines containing both tool and embed= (in either order). An alternative method with simpler plumbing would be to do a single search with an or pattern; this is always possible, but if the patterns can overlap (not the case here), the pattern can get very complicated. grep -E -r 'tool.*embed=|embed=.*tool' path/to/directory If you wanted to list files containing both tool and embed=, you'd need a different command structure, with the first grep listing file names (-l) and the second one receiving those file names as arguments, not as input. Converting standard input into command line arguments is what the xargs command is for. grep -lZ -r tool path/to/directory | xargs -0 grep -l 'embed='
Search within a search
1,461,781,712,000
# ldd /usr/bin/ffmpeg linux-vdso.so.1 => (0x00007ffffc1fe000) libavfilter.so.0 => not found libpostproc.so.51 => not found libswscale.so.0 => not found libavdevice.so.52 => not found libavformat.so.52 => not found libavcodec.so.52 => not found libavutil.so.49 => not found libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fdd18259000) libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fdd1803a000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fdd17c75000) /lib64/ld-linux-x86-64.so.2 (0x00007fdd18583000) I am trying to grep only the names to the left of the "=>" symbol. It works with echo easily: echo linux-vdso.so.1 | grep -oP "^[a-z0-9.]*" linux-vdso.so.1 But when I perform the same RegEx on the output of ldd it does not display anything: ldd /usr/bin/ffmpeg | grep -oP "^[a-z0-9.]*" So I thought, maybe I have to include some whitespace: ldd /usr/bin/ffmpeg | grep -oP "^([a-z0-9.]|\w)*" But this did not work either, and I am out of ideas...
The best solution for this is to use awk: $ ldd /usr/bin/ppdhtml | awk '/ => / { print $1 }' | head -n1 libcupsppdc.so.1 To do this using grep, you will need to use the lookahead and lookbehind features of PCRE: $ ldd /usr/bin/ppdhtml | grep -Po '(?<=\t).+(?= => )' | head -n1 libcupsppdc.so.1 The lookahead and lookbehind features affect that match, but are not included in the match. Also note that this would not work if ldd used a variable number of spaces instead of tabs at the start of the line. Lookbehinds can not have a variable length.
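The awk version can be checked without a real binary by feeding it a canned ldd-style line:

```shell
# A single library line as ldd prints it (leading tab, then "name => path"):
printf '\tlibm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fdd18259000)\n' \
    | awk '/ => / { print $1 }'
# libm.so.6
```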
How to grep the output of ldd properly?
1,304,454,439,000
I watched a video lecture today that introduced C and things like how to make a C program that will run in Linux. I followed the steps given and now I'm stuck with a bit of a problem. I created my C file (HelloWorld.c) and used the command gcc -o HelloWorld HelloWorld.c to compile the file, both of these steps were successful. Afterwards I checked to make sure that HelloWorld had been created by using the command ls, and it had been. However, when I use the command HelloWorld, which is supposed to run the program, I get an error that says HelloWorld: command not found. In the video lecture the professor did mention that this worked for 32-bit systems and I'm using a 64-bit system. Perhaps this could be the problem? EDIT: Also in the video lecture the professor mentioned that when I use the command ls I should see HelloWorld*. I see only HelloWorld (without the star).
You don't have the value of the PATH environment variable set to include whatever directory the HelloWorld executable file lives in. Supposing you have used cd to get to the directory, you can run HelloWorld with this command: ./HelloWorld Unix shells have a variable called PATH, which is a :-delimited list of directories in which to look when the user issues a command without a fully-qualified path name (/usr/bin/ls is fully qualified: it starts at / and ends at ls, but ls is not fully-qualified by itself). If you don't have an entry of . in PATH, you have to explicitly use ./ on the beginning of a command to get the file of that name in the current directory to execute.
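The same effect can be reproduced without a compiler: any executable in a directory that is not on PATH behaves this way. A sketch using a shell script named HelloWorld in a scratch directory:

```shell
cd "$(mktemp -d)"
printf '#!/bin/sh\necho Hello, world\n' > HelloWorld
chmod +x HelloWorld
./HelloWorld    # works: an explicit path bypasses the PATH search
# typing just "HelloWorld" gives "command not found" unless "." is in PATH
```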
Running C Programs on Linux
1,304,454,439,000
I have a while loop in my script which waits for the connection get online and then continues. #!/bin/sh while ! ping -c1 $1 &>/dev/null do echo "Ping Fail - `date`" done echo "Host Found - `date`" It takes 25 to 45 seconds for the connection to reconnect. I cannot let it wait for any longer than 50 seconds. What would be the best solution to limit the time while loop works?
Without a while loop: # -W 50 = timeout after 50 seconds # -c 1 = 1 packet to be sent response="$(ping -W 50 -c 1 "$1" | grep '1 packets transmitted, 1 received')" if [ "$response" = '' ] ; then echo no response after 50 seconds else echo connected fi (Note: = rather than == in the test keeps it portable under plain /bin/sh.)
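If you would rather keep the retry loop, coreutils' timeout can bound the loop as a whole: it kills the loop after the deadline and exits with status 124. A sketch, with the host name taken from $1 as in the original script:

```shell
# Bound the whole wait at 50 seconds instead of timing a single ping:
if timeout 50 sh -c "until ping -c1 \"$1\" >/dev/null 2>&1; do sleep 1; done"
then
    echo "Host Found - $(date)"
else
    echo "No response within 50 seconds"    # timeout exited with 124
fi
```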
Bash: Timer in while loop
1,304,454,439,000
On Linux, is there a way to sort ls output by users? What I try to achieve is something like this: user_a   file1 user_a   file2 user_b   another_file user_c   this_file user_c   that_file user_d   file3 I am aware that a listing like this would also contain file size, permissions etc. – my main concern is the sorting by users. Would be quite handy, wouldn't it So far I found ls -l | sort -k 3 to sort by column three which [if using ls -l] contains the file owner [thus sort -k 4 to sort by group]. BUT what if the file owner isn't in column three? Is there another way to achieve this, independent from the number of the column? Update: I forgot to mention that I work in BASH and try to stick to it for quite a while from here on so things don't become more complicated.
Determining which column the owner name is in from a single ls -l output is not possible without outside knowledge. You could try to match the entries in each column with the passwd file, but there is no guarantee that you would not be matching the group column or the filename column, both of which could also contain names you find in /etc/passwd. If you want to go with ls, you could run the program twice, once as ls -l and once as ls -g. The latter drops the owner, so by matching lines based on the other information you would be able to determine the owner name unambiguously. This is however not an exercise I would be happy to do in a bash shell script.
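If GNU stat is available, you can ask for the owner explicitly, which sidesteps the column-guessing problem entirely (a sketch: %U is the owner name, %n the file name):

```shell
# One "owner filename" line per file, sorted by owner:
stat -c '%U %n' -- * | sort -k1,1
```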
sort ls output by users
1,304,454,439,000
If I want to open all mp4 files in a directory, I can simply do something like totem *.mp4. But how can I open all mp4 and all flv files in that directory with one command. I.e. I want to do something like this totem (*.mp4 OR *.flv). What's the easiest way to do this? Perhaps it helps that I am using zsh.
Simply call totem *.mp4 *.flv Too easy, isn't it ;-)
Fastest way to open all files in a directory with multiple file extensions on commandline
1,304,454,439,000
I'm working with graphic design. I've downloaded many files (EPS files, PSD files, etc.) from various websites. Because they come from more than 10 different websites, I have ended up with many copies of the same file: same size, same content, but different file names (2 to 4 copies of each). Removing the duplicates by opening them manually one by one is very time consuming. I hope there is a way to rename all downloaded files so that identical files get identical names (I don't mind if the new name is not descriptive). For example, two copies of the same file, nice-sun.eps downloaded from site 1 and 678.eps downloaded from site 2, would end up with the same file name after renaming.
This command will rename all files to the md5sum of their content. That means files with the same content will get the same name. for f in *; do mv $f $(md5sum $f | cut -d " " -f 1); done You can replace md5sum with sha1sum in the command. For this demonstration I added -v to mv so we can see what is being renamed. $ echo 1 > a $ echo 2 > b $ echo 1 > c $ ls -1 a b c $ for f in *; do mv -v $f $(md5sum $f | cut -d " " -f 1); done `a' -> `b026324c6904b2a9cb4b88d6d61c81d1' `b' -> `26ab0db90d72e28ad0ba1e22ee510510' `c' -> `b026324c6904b2a9cb4b88d6d61c81d1' $ ls -1 26ab0db90d72e28ad0ba1e22ee510510 b026324c6904b2a9cb4b88d6d61c81d1 You can also safely run this command in a directory where some files already have hash names while others have not. $ echo 1 > d $ echo 2 > e $ ls -1 26ab0db90d72e28ad0ba1e22ee510510 b026324c6904b2a9cb4b88d6d61c81d1 d e $ for f in *; do mv -v $f $(md5sum $f | cut -d " " -f 1); done mv: `26ab0db90d72e28ad0ba1e22ee510510' and `26ab0db90d72e28ad0ba1e22ee510510' are the same file mv: `b026324c6904b2a9cb4b88d6d61c81d1' and `b026324c6904b2a9cb4b88d6d61c81d1' are the same file `d' -> `b026324c6904b2a9cb4b88d6d61c81d1' `e' -> `26ab0db90d72e28ad0ba1e22ee510510' $ ls -1 26ab0db90d72e28ad0ba1e22ee510510 b026324c6904b2a9cb4b88d6d61c81d1 Note that it will still calculate the hash of the files that are already hashed. So if the files are huge you might want to prevent the rehashing.
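One sketch for preventing the rehashing: skip any file whose name already looks like an md5 sum. This assumes the hash-named files are exactly 32 hex characters with no extension:

```shell
for f in *; do
    # skip names that are already exactly 32 hex characters
    if printf '%s' "$f" | grep -qxE '[0-9a-f]{32}'; then
        continue
    fi
    mv "$f" "$(md5sum "$f" | cut -d " " -f 1)"
done
```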
remove duplicates by renaming identical fles to same name
1,304,454,439,000
This is my current prompt definition: PS1=$'%F{063}%1~%f %(1v.%F{099}%1v %f.)%F{063}%%%f ' RPROMPT='$VIMODE %m' and I'm working on integrating this. Basically I'm starting to find it very unreadable. Is there any way that I can make it multiline in a way perhaps similar to what Perl can do with regex's (e.g., what /x mode does. Like m{ ... }x. the ... can be multiline in that)? Something like this: PS1=$' %F{063}%1 # format blue ~ # show current directory %f %(1v.%F{099}%1v %f.) # show git branch if git repo in purple %F{063} # format blue %# # % for user and # for root %f ' RPROMPT='$VIMODE %m'
With Zsh 4.3.11, you can use the Z parameter expansion flag to split a string value according to the normal shell parsing rules while discarding comments (C option to Z) and treating newlines as normal whitespace instead of replacing them with semicolons (n option to Z). You can then stitch the results back together (j::) and evaluate a level of quoting (Q) to let you quote whitespace and other problematic characters (like “bare” comment introducer characters): PS1=${(j::Q)${(Z:Cn:):-$' %F{063}%1 # format blue ~ # show current directory %f" " %(1v.%F{099}%1v %f.) # show git branch if git repo in purple %F{063} # format blue %# # % for user and # for root %f" " '}} Note: This parsing mode seems to know that it should parse the whole %(v…) expression as a single word, so we do not have to protect the space embedded in the conditional value. However, we do need to protect the “top level” spaces (the ones that happen to come after %f) since those will otherwise be taken as a normal word separator. The final unquoting pass will process any quoting mechanism (i.e. \, '', "", $''), so you can pick what you use to protect special characters (e.g. “top level” spaces or comment introducers intended for the final value). If you are not using 4.3.11, then you can use an array to let you intersperse comments with the string elements. You will probably have to use more quoting than with the Z parameter expansion flag, but the result may still be tolerable. ps1_arr=( %F{063}%1 # format blue \~ # show current directory %f' ' '%(1v.%F{099}%1v %f.)' # show git branch if git repo in purple %F{063} # format blue %\# # % for user and # for root %f' ' ) PS1=${(j::)ps1_arr} Some notes on the quoting: You can avoid quoting the ~ if you say %1~ instead of splitting it (it is %~ with an argument of 1, after all). I quoted the whole %(v…) word, but only the parentheses and the space need protection. You only need to quote the # in %# if you have EXTENDED_GLOB enabled.
The spaces that happen to come after %f need some kind of quoting. You can use a backslash, but it might look like a line continuation if you do not have “visible whitespace” in your editor.
Is there a way to make the prompt definition multiline?
1,304,454,439,000
I need to run a command that looks like this mycli --file test.zip --file another_test.zip How could I run that dynamically with all zip files in a directory? I'm sure I could pipe the files from a find command, but I don't know how to actually append them as arguments to another command and my bash-fu is not great
Using an array: unset -v args declare -a args for file in *.zip do args+=( --file "$file" ) done mycli "${args[@]}" Or, POSIXly: set -- for file in *.zip do set -- "$@" --file "$file" done mycli "$@" Or, assuming GNU tools: find . -maxdepth 1 -name '*.zip' -printf '--file\0%f\0' | xargs -0 -- mycli A relevant difference between the array-based approach and the xargs-based one: while the former may fail with an "Argument list too long" error (assuming mycli is not a builtin command), the latter will not, and will run mycli more than once instead. Note, however, that in this last case all but the last invocation's argument lists may end with --file (and the following one start with a file name). Depending on your use case you may be able to use a combination of xargs' options (e.g. -n and -x) to prevent this. Also, note that find will include hidden files in its output, while the array-based alternatives will not, unless the dotglob shell option is set in Bash or, in a POSIX shell, both the *.zip and .*.zip globbing expressions are used. For details and caveats on this: How do you move all files (including hidden) from one directory to another?.
Append all files in a directory as cli arguments
1,304,454,439,000
I have multiple files (100-1000) in the foo dir. I want to append each file's name to its own content. I think a for loop should handle appending a random string to each file: for f in *; do printf "%10s \n" $(shuf -i 50-100 -n 50 -r) >> $f; done How can I prepend the filename to all these shuffled numbers, directly concatenated? for f in *; do printf "%10s \n" $f $(shuf -i 50-100 -n 50 -r) >> $f; done result in file 5: 5 67 89 69 ... expected result: 567 589 569 ...
Ok, if I get it right, you want to prefix a fixed string to all numbers printed by shuf. If so, just add that string to the start of the printf format string: $ printf "x%10s\n" $(shuf -i 50-100 -n 3 -r) x 71 x 70 x 92 Change the %10s to %s get them back to back without the whitespace. Similarly, you can use the loop variable instead: $ for f in 1 2 3; do printf "$f%s\n" $(shuf -i 50-100 -n 2 -r); done 151 197 268 256 364 354 Add the >> "$f" to redirect to files. Note that since the fixed part is part of the format string here, any % signs and backslashes would be interpreted by printf.
How to print filename into itself
1,304,454,439,000
After installing microk8s (Micro Kubernetes) on my local machine, one of the commands I encountered was microk8s.enable dns which can also be run as microk8s enable dns. This doesn't seem to be a universal thing. git status is a valid command but git.status is not. How do Linux systems support such type of command structures? How can I incorporate this behavior in my Bash scripts?
You'll sometimes see programs (and scripts) inspect the name of the file that was used to invoke the program and condition behavior off of that. Consider for example this file and symbol link: $ ls -l -rwxr-xr-x ... foo lrwxr-xr-x ... foo.bar -> foo And the content of the script foo: #!/bin/bash readonly command="$(basename "${0}")" subcommand="$(echo "${command}" | cut -s -d. -f2)" if [[ "${subcommand}" == "" ]]; then subcommand="${1}" fi if [[ "${subcommand}" == "" ]]; then echo "Error: subcommand not specified" 1>&2 exit 1 fi echo "Running ${subcommand}" The script parses the command name looking for a subcommand (based on the dot notation in your question). With that, I can run ./foo.bar and get the same behavior as running ./foo bar: $ ./foo.bar Running bar $ ./foo bar Running bar To be clear, I don't know that that's what microk8s.enable is doing. You could do a ls -li $(which microk8s.enable) $(which microk8s) and compare the files. Is one a link to the other? If not, do they have the same inode number?
Does Bash support command.subcommand structure in addition to command subcommand structure? If yes, how to incorporate this in bash scripts?
1,304,454,439,000
Given a bash variable with the value 2019-08-15, is there some utility that can convert that date to the format August 15, 2019?
On Linux, or any system that uses GNU date: $ thedate=2019-08-15 $ date -d "$thedate" +'%B %e, %Y' August 15, 2019 On macOS, OpenBSD and FreeBSD, where GNU date is not available by default: $ thedate=2019-08-15 $ date -j -f '%Y-%m-%d' "$thedate" +'%B %e, %Y' August 15, 2019 The -j option disables setting the system clock, and the format string used with -f describes the input date format (should be a strptime(3) format string describing the format used by your variable's value). Then follows the value of your variable and the format that you want your output to be in (should be a strftime(3) format string). NetBSD users may use something similar to the above but without the -f input_fmt option, as their date implementation uses parsedate(3). Note also the -d option to specify the input date string: $ thedate=2019-08-15 $ date -j -d "$thedate" +'%B %e, %Y' August 15, 2019 See also the manual for date on your system.
How to convert 2019-08-15 date format to August 15, 2019 in the command line?
1,304,454,439,000
I see people running shell scripts by typing ./scriptname. Now this seems to be the default way, since I have seen it so often, however occasionally, but not rare, I see them type sh scriptname. Is it just a matter of preference or is there a more significant difference between ./ and sh ?
There are a few differences. ./scriptname requires that the file called scriptname be executable, and it uses the shell specified as its first line (in the “shebang”, e.g. #!/bin/sh), if any. sh scriptname works as long as the file called scriptname is readable, and it uses sh (whatever that is) regardless of what the script’s shebang specifies. With some shells, if scriptname doesn’t exist in the current directory, the directories specified in PATH will be searched, and the first scriptname found there (if any) will be read and interpreted instead. Put another way, sh scriptname will work without setup, but you might use the wrong shell, and you might run the wrong script. ./scriptname will try to run the correct script using the right shell (or at least, the shell specified by the script’s author, if any), but it might need some setup first (chmod a+x scriptname).
What is the difference between sh and ./ when invoking a shell script? [duplicate]
1,304,454,439,000
I have text files containing many lines, some of which start with ">" (it's a so-called *.fasta file, and the ">"s mark the beginning of a new information container): >header_name1 sequence_info >header_name2 sequence_info I want to add the name of the file these lines are located in to the header. For example, if the file is named "1_nc.fasta", all the lines inside the file starting with > should have the label "001" added: >001-header_name1 sequence_info >001-header_name2 sequence_info Someone nice provided me with this line: sed 's/^>/>001-/g' 1_nc.fasta>001_tagged.fasta Accordingly, all headers in 2_nc.fasta should start with "002-", 3_nc.fasta -> "003-", and so on. I know how to write parallel job scripts, but the jobs are done so quickly that I think a script that serially processes all files in a loop is much better. Unfortunately, I can't do this on my own. Added twist: 11_nc.fasta and 149_nc.fasta are not available. How can I loop that through all the 500 files in my directory?
This should do the trick. I break the filename at the underscore to get the numerical prefix, and then use a printf to zero-pad it out to a three digit string. for file in *.fasta; do prefix="$(printf "%03d" "${file%%_*}")" sed "s/^>/>$prefix-/" "$file" > "${prefix}_tagged.fasta" done
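It is worth sanity-checking the loop on a throwaway file before pointing it at the real 500 (the sample content here is made up):

```shell
cd "$(mktemp -d)"
printf '>header_name1\nsequence_info\n' > 1_nc.fasta
for file in *.fasta; do
    prefix="$(printf "%03d" "${file%%_*}")"
    sed "s/^>/>$prefix-/" "$file" > "${prefix}_tagged.fasta"
done
cat 001_tagged.fasta
# >001-header_name1
# sequence_info
```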
Wrapping a loop around a 'sed'-command processing many files in a single directory
1,304,454,439,000
I use the following command to convert my input file contents to lowercase tr A-Z a-z < input > output This command works fine. But when I try to store the output in input file itself, it is not working. The input file is empty after executing the command. Why? tr A-Z a-z < input > input
But when I try to store the output in input file itself, it is not working. The input file is empty after executing the command. Why? Because the > input causes the shell to truncate the file before the tr command is run. Incidentially, you can get around this with more advanced descriptor handling in Bash: exec 8<>input exec 9<>input tr '[A-Z]' '[a-z]' <&8 >&9 The exec #<>file opens a file into descriptor # in read-write mode without truncating.
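If the descriptor juggling feels fragile, the conventional alternative is a temporary file that only replaces the original when tr succeeds:

```shell
# Write to a temp file first; mv only runs if tr exited successfully,
# so a failure cannot leave the input truncated.
tr A-Z a-z < input > input.tmp && mv input.tmp input
```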
Convert file contents to lowercase and store result in same file [duplicate]
1,304,454,439,000
I'd like to schedule 8 curl commands over the next 12 hours. I was wondering if there's a way to do this with 8 single-line calls to at. Sorta like: $ at now + 1 min "curl -X POST 'http://localhost:5566/export/778'" or $ at now + 1 min -- curl -X POST 'http://localhost:5566/export/778' But neither of those work. I don't see anything in the man page about this. Barring that, is there a way to set the time for the next command while inside the at subshell?
A portable way is: $ echo "curl -X POST 'http://localhost:5566/export/778'" | at now + 1 min
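For the original goal of eight jobs spread across twelve hours, a loop can generate the one-liners 90 minutes apart. This sketch only prints the commands so you can review them before piping each into at for real (the URL is the one from the question):

```shell
# Print the eight at invocations, 90 minutes apart (90..720 min):
for i in 1 2 3 4 5 6 7 8; do
    echo "echo \"curl -X POST 'http://localhost:5566/export/778'\" | at now + $((i * 90)) min"
done
```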
Is it possible to specify an at command on a single line?
1,304,454,439,000
For example, a file file1.txt contains Hi how are you hello today is monday hello I am fine Hi how are you After the processing of file1.txt it should write to file2.txt and contents should be like this without repeating the same lines. Hi how are you hello today is monday I am fine What command can I use to do that in linux terminal?
This is an easy job for sort, use the unique (-u) option of sort: % sort -u file1.txt hello Hi how are you I am fine today is monday To save it in file2.txt: sort -u file1.txt >file2.txt If you want to preserve the initial order: % nl file1.txt | sort -uk2,2 | sort -k1,1n | cut -f2 Hi how are you hello today is monday I am fine
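A common one-pass alternative for the order-preserving case: awk prints only the first occurrence of each line, without the nl/sort round trip.

```shell
# seen[$0]++ is 0 (false) the first time a line appears, so the
# default print action fires once per distinct line.
printf '%s\n' 'Hi how are you' hello 'today is monday' hello \
    'I am fine' 'Hi how are you' | awk '!seen[$0]++'
# Hi how are you
# hello
# today is monday
# I am fine
```

To process the question's files, the same filter would be awk '!seen[$0]++' file1.txt > file2.txt.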
How to write contents of a file to new file removing repeated lines [duplicate]
I'm trying to configure my network interfaces and ran this command:

auto eth0

but it returns -bash: auto: command not found. It seems I've missed something. Any idea? What should I install?
auto eth0 is interfaces(5) syntax. It's a line you would add to /etc/network/interfaces, not a command to be run in a shell. Once you correctly configure the interface in /etc/network/interfaces, you can run the ifup/ifdown commands to apply them.
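For reference, a minimal (hypothetical) /etc/network/interfaces stanza using that syntax might look like this; substitute your interface name and addressing method:

```
# /etc/network/interfaces (fragment)
auto eth0
iface eth0 inet dhcp
```

After saving the file, `ifup eth0` brings the interface up with the new configuration.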
-bash: auto: command not found
I have a script which connects to a remote host via SSH, creates a temporary file and executes the following command:

Calling system(mysql --database=information_schema --host=localhost < /tmp/drush_1JAjtt)

Each time it creates a different file (pattern: drush_xxxxxx). I've tried a couple of times manually running the following on the remote host:

tail -f /tmp/drush_*

but my connection is too slow and most of the time I end up with the error:

tail: cannot open `/tmp/drush_*' for reading: No such file or directory

Is there any trick to access such a file straight after it is created, to show its content?
If the file only exists for a short amount of time, you can run the following command in a separate terminal before running the script:

while true; do cat /tmp/drush_* 2>/dev/null && break; done

Where /tmp/drush_* is your pattern. The advantage is that it's quick and you don't have to install any external tools (if you don't have e.g. admin/root permissions).

Please note that using the inotifywatch tool (from inotify-tools) won't work in this particular case, because the file is created after the watches have been placed and the change will not be detected. Read more: Why inotify doesn't print list of changed files?

But you can still use the inotifywait tool, which efficiently waits for changes to files using Linux's inotify interface. Here is a simple example:

inotifywait -m --format "%e %f" /tmp

And an example to show the content of newly created files in /tmp:

inotifywait -m --format "%f" /tmp | grep --line-buffered ^drush | xargs -L1 -I% cat /tmp/% 2> /dev/null

Add sudo before cat if necessary. Change /tmp and drush to your suitable values.
How to access temporary file straight after creation?
I often mistype a command. So I will type

sublimetext myfile.txt

instead of

git add myfile.txt

When I do this, I hit up to restore the last command. But after doing so, my cursor is at the end of the previously typed line. Is there a keyboard shortcut to jump back to the prompt?
On bash command line, I use ctrl+a to go to the beginning of command and ctrl+e to go to the end of command.
keyboard shortcut to jump cursor to prompt?
Where can I find more information about the command(?) print, since I don't receive a result when I input man print? For example, in zsh I can do the following:

$ print "Hello, world\!"
Hello, world!

I've seen print -P foo and print -n bar used, among other flags, and have no idea what they mean, nor do I know where to look for further information. So we have two questions really:

Where does print come from and where can I find documentation for it?
Where does one find documentation for similar items that are not to be found in the man pages?

NOTE: For clarification, I'm not trying to print a sheet of paper. I am also aware of printf, which allows for formatted output and has a man page.
In zsh, at the prompt, type print, and then Alt-H. If it gives you the man page for the print system command instead of the print builtin, you may want to follow the instructions given under Accessing On-Line Help at:

info zsh Utilities

For zsh documentation, I prefer to use info in general. The zsh documentation is properly indexed and it's very easy to find documentation using info. For instance, to find the documentation for print, type info zsh, and within info, type i to bring up the index prompt and type print (you can throw in a couple of Tabs to get a completion list). Or just run

info zsh print

to open the zsh info book and jump directly to the print index entry.
How can I `man print`?
I am running a program, fls (from the Sleuth Kit), with the -v option for verbose mode. However, it is taking too long; the program has been running since yesterday. I guess it will run faster without verbose mode, but I am not sure how long it will take to finish, or whether it is worth stopping it and rerunning without verbose mode. So I wonder: is it possible to turn off verbose mode in the middle of a run and then resume? Thanks!
As lynxlynxlynx points out, unless the program author makes provisions for it, you cannot change the verbosity while the program is running, but you can keep it from printing to a terminal in case that is a bottle neck. To do this, close the terminal after telling the shell not to send a SIGHUP. Most shells will send a SIGHUP to any jobs that are still running when you try to exit. You can tell the shell not to do this. There are various ways to do this; the most straightforward is probably with disown. If you haven't yet, suspend the job with ctrl+z, then make it run again in the background with bg, then run disown. The shell no longer tracks this process as a job, so it will not send a SIGHUP when exiting. If you have already put the program in the backgound, then if there are any other background jobs that were started after it, you'll need the jobspec of the program you're interested in to use as a parameter to pass to bg and disown.
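A minimal sketch of that sequence, started directly in the background (sleep stands in for the long-running program):

```shell
# Start a long-running job in the background
sleep 30 &
pid=$!

# Remove it from the shell's job table so the shell won't
# send it SIGHUP when the terminal is closed
disown "$pid"
```

If you know in advance that you'll want this, `nohup program &` achieves a similar effect by making the program ignore SIGHUP from the start.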
Is it possible to disable verbose in the middle of running?
I installed VLC through the terminal but it shows:

bash: /snap/bin/vlc: No such file or directory

I also tried:

which vlc

and it showed /usr/bin/vlc. When I try to run it through sudo su, it shows this error:

VLC is not supposed to be run as root. Sorry. If you need to use real-time priorities and/or privileged TCP ports you can use vlc-wrapper (make sure it is Set-UID root and cannot be run by non-trusted users first).

Any idea how I can fix this issue? I tried using the snap VLC package, which I installed using the terminal, but I couldn't navigate to my Downloads folder. I could only navigate in the "computer" folder, which consists of /bin, /usr, /var, etc. I was able to play the items of the folder I wanted by dragging and dropping. I'm also only able to open VLC through the terminal. Opening it through the start menu doesn't do anything. I'm using Zorin OS 16, which is based on Ubuntu 20.04, if I'm not wrong.
You should run

$ /usr/bin/vlc

As for why executing vlc looks for /snap/bin/vlc, I wouldn't know. If you had a snap for vlc installed, I guess it should have worked as well. Perhaps you have an alias set in your ~/.bashrc or elsewhere. If you find such an alias and remove it, you could probably start running vlc without the need to prepend the full path.

EDIT: To narrow down the issue, check whether you actually have any file or soft link /snap/bin/vlc:

$ type vlc
$ ls -al /snap/bin/vlc

Also, you could set up your own alias vlc=/usr/bin/vlc in ~/.bashrc. If that is read after the presumed other alias, you would be OK.
VLC doesn't open through terminal or GUI
I use grep to get the output of mysqladmin, as

sudo mysqladmin ext -i10 | grep 'buffer_pool_pages_flushed'

and the output is continuous (every 10 seconds), like this:

| Innodb_buffer_pool_pages_flushed | 265708726 |
| Innodb_buffer_pool_pages_flushed | 265735665 |
| Innodb_buffer_pool_pages_flushed | 265751712 |
| Innodb_buffer_pool_pages_flushed | 265754576 |
| Innodb_buffer_pool_pages_flushed | 265774380 |

How can I adjust the grep command to output the differences between consecutive numbers in the second column, like

26939 (265735665-265708726)
16047 (265751712-265735665)
2864 (265754576-265751712)
19804 (265774380-265754576)
Append:

| awk '{if(NR>1){print $4-last,"("$4"-"last")"} last=$4}'

Output:

26939 (265735665-265708726)
16047 (265751712-265735665)
2864 (265754576-265751712)
19804 (265774380-265754576)
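To see the arithmetic at work on a small, hypothetical sample of that output:

```shell
# Three sample status lines, piped through the same awk filter;
# $4 is the counter column in the pipe-delimited table layout
out=$(printf '%s\n' \
  '| Innodb_buffer_pool_pages_flushed | 265708726 |' \
  '| Innodb_buffer_pool_pages_flushed | 265735665 |' \
  '| Innodb_buffer_pool_pages_flushed | 265751712 |' |
  awk '{if(NR>1){print $4-last,"("$4"-"last")"} last=$4}')
printf '%s\n' "$out"
```

Each output line is the difference between a counter value and the one on the line before it.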
How to subtract number in previous line from current line's using grep?
I have a directory structure as follows:

dir
|___sub_dir1
|_____files_1
|___sub_dir2
|_____files_2
|___sub_dirN
|_____files_N

Each sub-directory may or may not have a file called xyz.json. I want to find the total count of xyz.json files in the directory dir. How can I do this?
You can use:

find path_to_dir -name xyz.json | wc -l
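A quick illustration on a throwaway tree (directory names are made up); adding -type f restricts the count to regular files, and counting with wc -l assumes no newlines in the path names:

```shell
# Build a small example tree
mkdir -p demo_dir/sub_dir1 demo_dir/sub_dir2 demo_dir/sub_dir3
touch demo_dir/sub_dir1/xyz.json demo_dir/sub_dir3/xyz.json

# Count the matching regular files
count=$(find demo_dir -name xyz.json -type f | wc -l)
echo "$count"
```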
get count of a specific file in several directories
I'm writing a script to configure new Debian installs. While finding the best solution for confirming that a user exists in the script, the best way I found gives me weird output.

PROBLEM: id -u $var and id -u $varsome give the same output even though var has a value (the username) and varsome has no value.

[19:49:24][username] ~ ~↓↓$↓↓ var=`whoami`
[19:53:38][username] ~ ~↓↓$↓↓ id -u $var
1000
[19:53:42][username] ~ ~↓↓$↓↓ echo $?
0
[19:53:49][username] ~ ~↓↓$↓↓ id -u $varsome
1000
[19:09:56][username] ~ ~↓↓$↓↓ echo $?
0
[20:10:18][username] ~ ~↓↓$↓↓ bash --version
GNU bash, version 4.4.12(1)-release (x86_64-pc-linux-gnu)
Copyright (C) 2016 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software; you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
[20:27:08][username] ~ ~↓↓$↓↓ cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
NAME="Debian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"

I got the command from this question on Stack Overflow: Check Whether a User Exists

QUESTIONS:

What is happening here?
Is there a better way to verify that a user exists in a script?
Pointers on the script well appreciated.
Since the variable expansion wasn't quoted, the empty word that results from $varsome being expanded is removed completely. Let's make a function that prints the number of arguments it gets, and compare the quoted and non-quoted cases:

$ args() { echo "got $# arguments"; }
$ var=""
$ args $var
got 0 arguments
$ args "$var"
got 1 arguments

The same happens in your case with id: id -u $var is exactly the same as just id -u when var is empty. Since id doesn't see a username, it by default prints the current user's information.

If you quote "$var", the result is different:

$ var=""
$ id -u "$var"
id: ‘’: no such user

With that fixed, you can use id to find out if a user exists. (We don't need the outputs here though, so redirect them away.)

check_user() {
    if id -u "$1" >/dev/null 2>&1; then
        echo "user '$1' exists"
    else
        echo "user '$1' does not exist"
    fi
}

check_user root
check_user asdfghjkl

That would print user 'root' exists and user 'asdfghjkl' does not exist.

This is a bit of the inverse of the usual problems that come from the unexpected word splitting of unquoted variables. But the basic issue is the same and is fixed by what half the answers here say: always quote the variable expansions (unless you know you want the unquoted behaviour). See:

When is double-quoting necessary?
Why does my shell script choke on whitespace or other special characters?
BashGuide on Word Splitting
id -u $var gives the same output if $var has a value or not
I'm trying to do a simple grep and grep -v so I will get the lines from a.txt that exist in b.txt and not in c.txt. Example of 3 files:

a.txt:
a
b
c
d
e

up.txt:
a.up
b.up
c.up

dw.txt:
a.dw
b.dw

Desired output:

c

I wrote the code below, but grep treats the $(sed ...) output one line at a time and not as a whole:

sed 's/.up//' /tmp/b.txt | grep -f /tmp/a.txt | grep -vf $(sed 's/.dw//' /tmp/c.txt)
Assuming the files are all sorted and that we're using a shell that understands process substitutions (like bash):

$ join -t . -v 1 -o 0 <( join -t . a.txt b.txt ) c.txt
c

or, for other shells,

$ join -t . a.txt b.txt | join -t . -v 1 -o 0 - c.txt
c

This uses join twice to perform relational joins between the files. The data is interpreted as dot-delimited fields (with -t .).

The join between a.txt and b.txt is straightforward and produces

a.up
b.up
c.up

These are all the lines from the two files whose first dot-delimited field occurs in both files. The output consists of the join field (a, b, c) followed by the other fields from both files (only b.txt has any further data).

The second join is a bit more special. With -v 1 we ask to see the entries in the first file (the intermediate result above) that can't be paired with any line in the second file, c.txt. Additionally, we only ask to see the join field itself (-o 0). Without the -o flag, we would get c.up as the result.

If the files are not sorted, then each occurrence of a filename file could be replaced by <( sort file ) in the command.
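For small, possibly unsorted files, a grep-based variant of the question's own approach also works: strip the suffixes with sed first, then match whole fixed lines (-Fx) so partial matches can't slip through. The file names and the .up/.dw suffixes are taken from the example data, and the process substitutions require bash:

```shell
# Recreate the example files
printf '%s\n' a b c d e > a.txt
printf '%s\n' a.up b.up c.up > up.txt
printf '%s\n' a.dw b.dw > dw.txt

# Keep lines of a.txt whose stem is in up.txt but not in dw.txt
result=$(grep -Fxf <(sed 's/\.up$//' up.txt) a.txt |
         grep -vFxf <(sed 's/\.dw$//' dw.txt))
echo "$result"
```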
grep lines that exists on one file but not in the other
This has long puzzled me and this seemed like the best venue to solicit the perspective of those with far more POSIX time-in-grade than I have. I consider the parsing of ls output in this manner to be crucial, and creating aliases to modify the ls command to default to it is always one of the first customizations I make to a new terminal profile. Is this just a nasty side-effect of too many formative years spent using the Windows Explorer? Is there a mindset for interpreting the default mixed output that I've never heard explained, and once I do will have an epiphany with instant comprehension of why only cretins want directories and files separated? I know this is trivial, but ls is such a touchstone for all command line activities that I feel as though I've missed something profound. Thank you in advance for your teleological tutelage.
This was discussed when the option was added to ls; Jim Meyering said:

Just one little question about this patch: are you sure not to add a short option for --group-directories-first?

For now, yes. It would take a strong argument to go against the “no new short option names” policy for ls, especially considering the alternative mentioned below.

--group-directories-first is already in my ls aliases ;) but this month I had to use other Linuxes where those aliases were not defined, and I realized that typing --group-directories-first for such a useful feature is IMHO really annoying...

Did you know that you can abbreviate that option with --g, since there is no other long option name starting with g?

So basically, ls already has so many short options (which is itself a running joke in Unix circles) that it takes a really strong argument to add one, and --group-directories-first has a nice pseudo-short alternative.
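A quick demonstration of the grouping on a throwaway directory (GNU ls assumed, since --group-directories-first is a GNU extension):

```shell
# One file sorting before one directory alphabetically
mkdir -p demo/zdir
touch demo/afile

plain=$(ls demo | head -n 1)                              # alphabetical: afile first
grouped=$(ls --group-directories-first demo | head -n 1)  # directory first
echo "$plain vs $grouped"
```

As noted above, `ls --g` works as an abbreviation as long as no other long option starts with g.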
Why has the --group-directories-first switch for the ls command never evolved to have a short form as well?
I want to delete all hidden directories from a directory and its sub-directories. I can use rm -rf .directory_name, but that is an iterative, one-directory-at-a-time approach; I want a recursive command. Can anybody help me?
It sounds like you want something like this (although it's not clear what you mean when distinguishing "iterative command" from "recursive command", since rm -rf is both recursive and iterative):

LC_ALL=C find . -name '.?*' -type d -exec echo rm -rf {} +

Once you're happy, remove echo from the option arguments to -exec to remove the listed directories.
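A sketch of the command on a throwaway tree (names are illustrative); -prune is added here so find doesn't try to descend into a directory it is about to delete:

```shell
# Build a tree containing hidden and visible directories
mkdir -p demo_tree/sub/.hidden1 demo_tree/.hidden2 demo_tree/visible
touch demo_tree/sub/.hidden1/somefile

# Delete every hidden directory and its contents
LC_ALL=C find demo_tree -name '.?*' -type d -prune -exec rm -rf {} +

# Count what's left matching the hidden-directory pattern
remaining=$(LC_ALL=C find demo_tree -name '.?*' -type d | wc -l)
```

The pattern '.?*' requires at least two characters, so it never matches the . directory itself.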
Recursively delete hidden directory & its files?
I find myself doing this often enough that I wonder if there's a standard Unix way to do it:

% mkdir -p /TARGETDIR/relative/path/to
% cp ./relative/path/to/somefile /TARGETDIR/relative/path/to

In other words, I don't want to just copy somefile to /TARGETDIR; I actually want to copy its entire relative path. Is there a simpler way to do this than the two-liner above?
With GNU coreutils (non-embedded Linux, Cygwin):

cp -p --parents path/to/somefile /TARGETDIR

With the POSIX tool pax (which many default installations of Linux unfortunately lack):

pax -rw -pp relative/path/to/somefile /TARGETDIR

With its traditional counterpart cpio:

find relative/path/to/somefile | cpio -p -dm /TARGETDIR

(This last command assumes that file names don't contain newlines; if the file names may be chosen by an attacker, use some other method, or use find … -print0 | cpio -0 … if available.)

Alternatively, you could make it a shell script or function:

cp_relpath () {
  mkdir -p -- "$2/$(dirname -- "$1")"
  cp -Rp -- "$1" "$2/$(dirname -- "$1")"
}
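A self-contained sketch of the GNU coreutils variant, using a relative target directory so it doesn't need root (the paths follow the question's example):

```shell
# Recreate the source layout
mkdir -p relative/path/to TARGETDIR
touch relative/path/to/somefile

# Copy, recreating the relative path under the target in one command
cp -p --parents relative/path/to/somefile TARGETDIR
```

After this, the file exists at TARGETDIR/relative/path/to/somefile with its timestamps preserved (-p).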
Can one copy a relpath in one command?