I run this command to find the biggest files:

du -Sh | sort -rh | head -5

Then I run rm -rf someFile. Is there a way to automatically delete the files found by the former command?
If you're using GNU tools (which are standard on Linux), you could do something like this:

stat --printf '%s\t%n\0' ./* | sort -z -rn | head -z -n 5 | cut -z -f 2- | xargs -0 -r echo rm -f --

(remove the 'echo' once you've tested it).

The stat command prints out the file size and name of each file in the current directory, separated by a tab, with each record terminated by a NUL (\0) byte. The sort command sorts the NUL-terminated records in reverse numeric order. The head command lists only the first five such records, then cut removes the file size field from each record. Finally, xargs takes that (still NUL-terminated) input and uses it as arguments for echo rm -f. Because this uses NUL as the record (filename) terminator, it copes with filenames that have any valid character in them.

If you want a minimum file size, you could insert awk or something between the stat and the sort, e.g.:

stat --printf '%s\t%n\0' ./* | awk 'BEGIN {ORS = RS = "\0"} ; $1 > 25000000' | sort -z -rn | ...

NOTE: GNU awk doesn't have a -z option for NUL-terminated records, but it does allow you to set the record separator to whatever you want. We have to set both the output record separator (ORS) and the input record separator (RS) to NUL.

Here's another version that uses find to explicitly limit itself to regular files (i.e. excluding directories, named pipes, sockets, etc.) in the specified directory only (-maxdepth 1, no subdirs) which are larger than 25M in size (no need for awk). This version doesn't need stat because GNU find also has a -printf feature. BTW, note the difference in the format strings: stat uses %n for the filename, while find uses %p.

find . -maxdepth 1 -type f -size +25M -printf '%s\t%p\0' | sort -z -rn | head -z -n 5 | cut -z -f 2- | xargs -0 -r echo rm -f --

To run it for a different directory, replace the . in the find command, e.g. find /home/web/ ....
Shell script version:

#!/bin/sh

for d in "$@" ; do
    find "$d" -maxdepth 1 -type f -size +25M -printf '%s\t%p\0' |
        sort -z -rn | head -z -n 5 | cut -z -f 2- |
        xargs -0 -r echo rm -f --
done

Save it as, e.g., delete-five-largest.sh somewhere in your PATH and run it as:

delete-five-largest.sh /home/web /another/directory /and/yet/another

This runs the find ... once for each directory specified on the command line. This is NOT the same as running find once with multiple path arguments (which would look like find "$@" ..., without any for loop in the script). It deletes the 5 largest files in each directory, while running it without the for loop would delete only the five largest files found while searching all of the directories, i.e. five per directory vs five total.
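A safe way to verify the pipeline before pointing it at real data is to run it against a scratch directory of files with known sizes, keeping the echo guard in place (the directory name biggest_demo and the file sizes below are made-up examples):

```shell
# Create a scratch directory holding seven files of distinct sizes.
mkdir -p biggest_demo
for i in 1 2 3 4 5 6 7; do
    head -c $((i * 1000)) /dev/zero > "biggest_demo/file$i"
done

# Same pipeline as above, with `echo` left in so nothing is deleted.
# The five largest files (file7 down to file3) should be listed.
find biggest_demo -maxdepth 1 -type f -printf '%s\t%p\0' |
    sort -z -rn | head -z -n 5 | cut -z -f 2- |
    xargs -0 -r echo rm -f --
```

Only once the listed names match expectations would you drop the echo.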
Find biggest files and delete automatically
I have a directory full of logs named in the following style:

info.log00001
info.log00002
info.log00003
...
info.log09999
info.log

My current output (using grep -c)

I need to analyze the frequency of a particular error that happens occasionally, so I go to that directory and use:

grep -crw . -e "FooException BarError" | sort -n | less

obtaining something like:

./info.log00001: 1
./info.log00002: 0
./info.log00003: 42
...
./info.log09999: 25
./info.log: 0

Then, I can ls -lt to see their modification dates and analyze when the error happened the most.

My desired output (with count and date)

Anyway, I'd like to find a way to get an output with the count and the date on the same line. That would make my analysis easier. I would like something like:

2015-09-31 10:00 ./info.log00001: 1
2015-09-31 10:15 ./info.log00002: 0
2015-09-31 10:30 ./info.log00003: 42
...
2016-04-01 13:20 ./info.log09999: 25
2015-09-31 13:27 ./info.log: 0

Additional info

Ideally, I'd like to accomplish this with only one command, but first writing grep's output to a file and then processing that file would do it, too. Also, I really don't care about the date format or whether the date is at the end or at the beginning of the line. All I want is to have the files sorted by date, starting with the oldest (which is also the file with the lowest number in its name).

I found a way to accomplish something similar with awk, but in my case it would not work, since it parses the filename from grep's output, and in my case grep's output has more text than just the path to the file. I'd really appreciate any feedback on this.
If you have GNU find - and assuming none of your file names contains newlines - you could use find's -printf to output the mtime in the desired format plus the file name, then run grep to get the count:

find . -type f -printf '%TY-%Tm-%Td %TH:%TM %p: ' -exec grep -cw "whatever" {} \; | sort -k1,1 -k2,2

Alternatively, with zsh you could glob and sort by modification time (via glob qualifiers - . selects regular files, Om sorts in descending order by mtime) and then, for each file, print the mtime using the stat module, then the file name, and again get the count via grep:

zmodload zsh/stat
for f in ./**/*(.Om)
do
    printf '%s %s\t%s %s: ' $(zstat -F '%Y-%b-%d %H:%M' +mtime -- $f) $f
    grep -cw "whatever" $f
done
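To sanity-check the GNU find approach, here is a tiny self-contained run against throwaway log files (the directory logdemo and the search word FooError are invented for the example):

```shell
mkdir -p logdemo
printf 'FooError\nok\nFooError\n' > logdemo/info.log00001
printf 'ok\n' > logdemo/info.log00002

# mtime stamp first, then "path: count" from grep -c, sorted by date.
find logdemo -type f -printf '%TY-%Tm-%Td %TH:%TM %p: ' \
    -exec grep -cw FooError {} \; | sort -k1,1 -k2,2
```

Each output line looks like "2024-05-01 12:00 logdemo/info.log00001: 2", i.e. the desired date-plus-count format.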
Add mtime to grep -c output and sort the output by mtime
Sometimes it happens that I have a list of files or strings in the clipboard and want to paste it as an argument list in a bash shell. Example file list (only an example):

createsnapshot.sh
directorylisting.sh
fetchfile.sh

What I want:

md5sum createsnapshot.sh directorylisting.sh fetchfile.sh

Currently, I enter the following hacky command line (the filenames are pasted from the clipboard; the list can contain dozens of lines):

md5sum $(echo $(echo "
createsnapshot.sh
directorylisting.sh
fetchfile.sh
"))

This has several disadvantages:

it's complex
it doesn't look good
it doesn't support lines which contain spaces

What other options do I have? md5sum " doesn't work because in this case I get only one argument with a multi-line string. Similar with here-documents. It's not always md5sum. It can also be tar or git add or du -hsc. I don't just ask for a way to get the md5 checksums of these files. Such situations occur about 2-5 times a day.
If the commands don't use stdin, use xargs, which reads input and translates it to arguments (note that I am using the echo command to show how xargs builds the command):

$ xargs echo md5sum
# paste text
createsnapshot.sh
directorylisting.sh
fetchfile.sh
# press Ctrl-D to signify end of input
md5sum createsnapshot.sh directorylisting.sh fetchfile.sh

Use xargs with -d '\n', so that each line is taken as a complete argument, spaces notwithstanding:

$ xargs -d'\n' md5sum
# paste
a file with spaces
afilewithoutspaces
foo " " bar
# Ctrl-D
md5sum: a file with spaces: No such file or directory
md5sum: afilewithoutspaces: No such file or directory
md5sum: foo " " bar: No such file or directory

As you can see, md5sum treats each line as one filename, irrespective of the whitespace in the filenames. If you're willing to use xclip, then you can pipe or otherwise feed it to xargs:

xargs -a <(xclip -o) -d '\n' md5sum
xclip -o | xargs -d '\n' md5sum

This should work reliably with filenames containing spaces.
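Here is a minimal check of the -d '\n' behavior against a real file whose name contains spaces (the xargs_demo directory and file name are invented for the example); the checksum line comes out identical to running md5sum on the file directly:

```shell
mkdir -p xargs_demo
echo hello > "xargs_demo/a file with spaces"

# Each input line becomes exactly one argument, spaces included.
printf '%s\n' "xargs_demo/a file with spaces" | xargs -d '\n' md5sum
```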
Interactively add arguments line-by-line in bash
I recently created a username and a group called gamesForAdmin. Since then I deleted it, and the folder I made for it is still there:

drwxr-xr-x  5 root     root       4096 Jul  4 11:28 .
drwxr-xr-x 23 root     root       4096 May 29 12:41 ..
drwxr-xr-x  2 sftpuser sftpaccess 4096 Jul  4 11:24 gamesForAdmin
drwxr-xr-x 27 ryan     ryan       4096 Jul  4 11:31 ryan
drwxr-xr-x  3 root     sftpaccess 4096 Jul  4 11:29 sftpuser

When I try to run sudo rmdir gamesForAdmin, I get this error message:

rmdir: failed to remove ‘gamesForAdmin’: Directory not empty

But there's nothing in the directory! When I run ls, there is nothing listed. Why does this occur? How can I successfully remove this directory?

Output of ls -la gamesForAdmin:

total 28
drwxr-xr-x 2 sftpuser sftpaccess 4096 Jul  4 11:24 .
drwxr-xr-x 5 root     root       4096 Jul  4 11:28 ..
-rw------- 1 sftpuser sftpaccess  471 Jun  9 19:49 .bash_history
-rw-r--r-- 1 sftpuser sftpaccess  220 Apr  8 21:03 .bash_logout
-rw-r--r-- 1 sftpuser sftpaccess 3637 Apr  8 21:03 .bashrc
-rw-r--r-- 1 sftpuser sftpaccess  675 Apr  8 21:03 .profile
-rw------- 1 sftpuser sftpaccess  644 Jun  9 17:48 .viminfo
Based on the output you're showing in your question, the directory gamesForAdmin is not empty: it contains hidden files (names starting with a dot, such as .bashrc), which a plain ls does not show, so rmdir cannot remove this directory. To remove it you'll need to use rm -rf instead. Try this:

sudo rm -rf gamesForAdmin

which should fix you right up.
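The effect is easy to reproduce with a throwaway directory (hidden_demo is an invented name): a dotfile is invisible to plain ls, makes rmdir refuse, and is removed by rm -rf:

```shell
mkdir -p hidden_demo
touch hidden_demo/.bash_history

ls hidden_demo      # prints nothing: plain ls hides dotfiles
ls -A hidden_demo   # prints .bash_history

rmdir hidden_demo 2>/dev/null || echo "rmdir refuses: directory not empty"
rm -rf hidden_demo  # succeeds, dotfiles and all
```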
Removing a directory that has no files in it
I am trying to get ls-like output from the find command (this is on Linux Mint with find (GNU findutils) 4.7.0). This is because I want to see the numerical chmod permissions. What I have managed so far is:

% find . -maxdepth 1 -printf "%m %M %y %g %G %u %U %f %l\n"
755 drwxr-xr-x d blueray 1000 blueray 1000 .
664 -rw-rw-r-- f blueray 1000 blueray 1000 .zshrc
644 -rw-r--r-- f blueray 1000 blueray 1000 .gtkrc-xfce
644 -rw-r--r-- f blueray 1000 blueray 1000 .sudo_as_admin_successful
777 lrwxrwxrwx l root 0 root 0 resolv.conf /run/systemd/resolve/resolv.conf

Here, %l prints an empty string if the file is not a symbolic link. What I am looking for is: if %l is not empty, then print -> %l. How can I do that with -printf?
You can tell find to print one thing for links and another for non-links. For example:

$ find . -maxdepth 1 \( -not -type l -printf "%m %M %y %g %G %u %U %f\n" \) -or \( -type l -printf "%m %M %y %g %G %u %U %f -> %l\n" \)
755 drwxr-xr-x d terdon 1000 terdon 1000 .
644 -rw-r--r-- f terdon 1000 terdon 1000 file1
755 drwxr-xr-x d terdon 1000 terdon 1000 dir
644 -rw-r--r-- f terdon 1000 terdon 1000 file
777 lrwxrwxrwx l terdon 1000 terdon 1000 linkToFile -> file

Or, a little more legibly:

find . -maxdepth 1 \( -not -type l -printf "%m %M %y %g %G %u %U %f\n" \) \
    -or \( -type l -printf "%m %M %y %g %G %u %U %f -> %l\n" \)

The \( -not -type l -printf '' ... \) will be run for anything that isn't a symlink, while the -or \( -type l -printf '' ... \) will be run for symlinks only.
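A quick way to try the two-branch -printf on disposable files (the directory findlink_demo is invented, and the format string is shortened to keep the output stable):

```shell
mkdir -p findlink_demo
touch findlink_demo/plain
ln -sf plain findlink_demo/link

# Non-links get the plain format; links get the "-> target" suffix.
find findlink_demo -mindepth 1 \
    \( -not -type l -printf '%y %f\n' \) \
    -or \( -type l -printf '%y %f -> %l\n' \)
```

This prints "f plain" for the regular file and "l link -> plain" for the symlink.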
How to get ls like output using find command
Let's say my text file contains:

101 Adam
201 Clarie
502 Adam
403 Tom

and I want to write a command in the shell to give me only the numbers based on a specific name. For example, outputting only the numbers for the name 'Adam' would give:

101
502

I was thinking of something like:

cut -f 1 Data_1 | grep "Adam"

but it doesn't work. Data_1 is the filename; 1 refers to the first column. I'm new to Unix, so I'll appreciate some feedback on this.
First, you have the order of grep/cut backwards. And, unless those are actual tabs (as in Tab) separating your columns (I can't tell), you also need to specify that normal whitespace (as in Space) is your delimiter:

grep Adam Data_1 | cut -f1 -d' '

If you are using tabs, then leave off -d' '. Generally speaking, try one thing at a time while building a compound command like this. What do you see when you run cut alone? Does it look like applying grep to it makes sense? If not, then rethink things. And always give the man page for each command a good read.

Bonus: here's a sed command to do the same thing:

sed -n 's/^\(.*\)\t\+Adam$/\1/p' Data_1

This goes through each line in the file but prints only those that end with one or more tabs and your search string. Then, before printing, it strips off those same tabs and search string.
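As a variation not given above: awk splits fields on any run of spaces or tabs by default, so one command covers both delimiters, and comparing the name field exactly means 'Adam' will not accidentally match 'Adamson'. The file Data_demo below is an invented stand-in for the question's Data_1:

```shell
printf '101 Adam\n201 Clarie\n502 Adam\n403 Tom\n' > Data_demo

# Print column 1 wherever column 2 is exactly "Adam".
awk '$2 == "Adam" { print $1 }' Data_demo
```

This prints 101 and 502, one per line.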
Getting only specific data based on name in text file
Sometimes I find myself installing a package and then trying to run a command using the same name, like with the geoip-bin package:

$ sudo apt install geoip-bin
[...]
$ geoip-bin
geoip-bin: command not found

How may I find all the commands associated with a given package?
From the dpkg man page:

-L, --listfiles package-name
    List files installed to your system from package-name.

Two alternatives. Usually it's enough to run:

dpkg -L byobu | egrep '/bin/|/sbin/'

(or even grep bin, if you don't mind getting some false positives). Or:

dpkg -L byobu | xargs which

Or with some bash magic:

for f in $(dpkg -L geoip-bin) ; do test -x $f -a ! -d $f && echo $f ; done

Optionally you could add | grep "/usr/bin/" at the end to list the executable files in that particular folder. geoiplookup was the command shipped by geoip-bin. I also found this very useful to learn about the commands of other packages.
How to find commands associated to a package? [duplicate]
I have come to learn that it is compulsory to learn the C language before trying to learn Linux. What is the reason behind this? Does knowledge of C somehow aid me in understanding Linux commands and file directories better? And if I must learn C, how do I know when I have learnt enough to begin with Linux? Thank you.
Linux is just an operating system kernel: a core component found at the heart of some operating systems like Android, ChromeOS, Ubuntu or Fedora. You don't use Linux, you use software built for Linux.

For instance, a command line is something that is interpreted by another piece of software called a shell. Such shells include, for instance, bash, the shell of the GNU operating system (some of those systems above (Ubuntu/Fedora) actually extend the GNU OS while using Linux as a kernel). bash existed before Linux and can be built for Linux and dozens of other operating systems. As a user, you use bash or a file manager application or a web browser or an Android phone or a smart TV, but you don't use Linux. You could say that you use an operating system like Debian, Ubuntu or Fedora, but not really Linux.

Learning Linux could refer to learning (as a programmer) the internals or the interfaces of that core component of an Android/Ubuntu/Debian/Fedora operating system that is Linux, and as it is written in C, you'd have to learn C beforehand. But to use a Linux-based system like a PC running Ubuntu or ChromeOS, an Android phone or a smart TV, you certainly don't need to learn C.
Why learn C at all? [closed]
I have a folder with a complicated folder structure:

├── folder1
│   ├── 0001.jpg
│   └── 0002.jpg
├── folder2
│   ├── 0001.jpg
│   └── 0002.jpg
├── folder3
│   └── folder4
│       ├── 0001.jpg
│       └── 0002.jpg
└── folder5
    └── folder6
        └── folder7
            ├── 0001.jpg
            └── 0002.jpg

I would like to flatten the folder structure such that all the files reside in the parent directory with unique names such as folder1_0001.jpg, folder1_0002.jpg ... folder5_folder6_folder7_0001.jpg, etc.

I have attempted to use the code suggested in "Flattening folder structure":

$ find */ -type f -exec bash -c 'file=${1#./}; echo mv "$file" "${file//\//_}"' _ '{}' \;

The echo demonstrates that it is working:

mv folder3/folder4/000098.jpg folder3_folder4_000098.jpg

But the output files are not placed in the parent directory. I have searched the entire drive and cannot find the output files.

I have also attempted "Flatten a folder structure to a file name in Bash":

$ find . -type f -name "*.jpg" | sed 'h;y/\//_/;H;g;s/\n/ /g;s/^/cp -v /' | sh

The -v demonstrates that it is working:

‘./folder3/folder4/000098.jpg’ -> ‘._folder3_folder4_000098.jpg’

However the output creates hidden files in the parent directory, which complicates my workflow. I am able to view these hidden files in the parent directory using ls -a.

I have also tried the code suggested in "Renaming Duplicate Files with Flatten Folders Command":

find . -mindepth 2 -type f | xargs mv --backup=numbered -t . && find . -type d -empty -delete

But the command overwrites files with similar file names. Any suggestions on how to flatten a complicated folder structure without overwriting files with similar names? The current solutions seem to only work on folder structures one layer deep.

My ultimate goal is to convert the unique names into sequential numbers as described in "Renaming files in a folder to sequential numbers":

a=1
for i in *.jpg; do
    new=$(printf "%04d.jpg" "$a") # 04 pads to length of 4
    mv -- "$i" "$new"
    let a=a+1
done
I have no idea why the first solution in your question wouldn't work. I can only assume you forgot to remove the echo. Be that as it may, here's another approach that should also do what you need, assuming you're running bash:

shopt -s globstar
for i in **/*jpg; do mv "$i" "${i//\//_}"; done

Explanation

shopt -s globstar turns on bash's globstar feature, which makes ** recursively match any number of directories or files.
for i in **/*jpg; will iterate over all files (or directories) whose name ends in jpg, saving each as $i.
"${i//\//_}" is the name of the current file (or directory) with all instances of / replaced with _.

If you can also have directories with names ending in .jpg and want to skip them, do this instead:

shopt -s globstar
for i in **/*jpg; do [ -f "$i" ] && echo mv "$i" "${i//\//_}"; done

And for all files, irrespective of extension:

shopt -s globstar
for i in **/*; do [ -f "$i" ] && echo mv "$i" "${i//\//_}"; done
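Here is the globstar approach exercised end-to-end on a throwaway two-level tree (flat_demo is an invented name; bash is invoked explicitly because globstar and the ${i//\//_} expansion are bash features):

```shell
mkdir -p flat_demo/folder1 flat_demo/folder5/folder6
touch flat_demo/folder1/0001.jpg flat_demo/folder5/folder6/0001.jpg

# Flatten inside the tree's top directory; [ -f ] skips directories.
(cd flat_demo && bash -c 'shopt -s globstar
for i in **/*.jpg; do
    if [ -f "$i" ]; then mv "$i" "${i//\//_}"; fi
done')

ls flat_demo   # now contains folder1_0001.jpg and folder5_folder6_0001.jpg
```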
Flattening complex folder structures with duplicate file names
The formatting character %s makes stat print the file size in bytes:

# stat -c'%A %h %U %G %s %n' /bin/foo
-rw-r--r-- 1 root root 45112 /bin/foo

ls can be configured to print the byte-size number with a thousands separator, i.e. 45,112 instead of the usual 45112:

# BLOCK_SIZE="'1" ls -lA
-rw-r--r-- 1 root root 45,112 Nov 15 2014

Can I format the output of stat similarly, so that the file size has a thousands separator? The reason why I am using stat in the first place is that I need output like ls, but without the time, hence -c'%A %h %U %G %s %n'. Or is there some other way to print the ls-like output without the time?
Specify the date format, but leave it empty, e.g.:

ls -lh --time-style="+"

produces:

-rwxrwxr-x 1 christian christian 8.5K a.out
drwxrwxr-x 2 christian christian 4.0K sock
-rw-rw-r-- 1 christian christian  183 t2.c
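To see the effect, compare the same listing with and without the empty time style (the file name is an invented example; GNU ls is assumed):

```shell
touch timestyle_demo.txt
ls -l timestyle_demo.txt                   # has the usual date/time columns
ls -l --time-style="+" timestyle_demo.txt  # same line, time columns gone
```

Combined with BLOCK_SIZE="'1" from the question, this gets ls-like output with grouped sizes and no timestamp, without needing stat at all.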
change file size format when using stat
I am getting output from a terminal in Ubuntu 12 that I don't understand:

$ cat sublime
no such file or directory
$ mkdir sublime
cannot create directory 'sublime': File exists

How can both of these be true? I am trying to install Sublime Text with these instructions, but I'm having trouble making a symbolic link because /usr/bin/sublime does not exist.
A "file" can be a couple of things. For example, man find lists:

-type c
    File is of type c:
    b  block (buffered) special
    c  character (unbuffered) special
    d  directory
    p  named pipe (FIFO)
    f  regular file
    l  symbolic link
    s  socket
    D  door (Solaris)

In your case that "file" might be a broken symlink or a regular file containing the text "no such file or directory". You can use ls -ld sublime to find out. (The first character indicates the type of the file.)
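A broken symlink reproduces the confusing pair of messages exactly (sublime_demo stands in for the asker's sublime):

```shell
ln -sf /nonexistent/target sublime_demo

cat sublime_demo 2>/dev/null || echo "cat fails: the link's target does not exist"
mkdir sublime_demo 2>/dev/null || echo "mkdir fails: File exists"

ls -ld sublime_demo   # the leading 'l' reveals a symbolic link
rm -f sublime_demo
```

The name exists (so mkdir refuses), but the thing it points to does not (so cat fails), which resolves the apparent contradiction.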
Can't cat a file or make a directory: contradictory output?
I need to run these commands very often:

sudo apt-get install <package>
sudo apt-get remove <package>

Can I make them shorter, like:

install <package>
remove <package>

I think I need to write a function like this:

function install(){
    sudo apt-get install <package>
}

...and then copy-paste it to some location I don't know. Can anyone tell me how I can make such an install <package> command available all the time after boot?
Use shell aliases; they won't interfere with other scripts/commands, because they are only expanded when the command is typed interactively:

alias install="sudo apt-get install"

You may place this in your shell configuration file (~/.bashrc, for example) and it will be defined in all your shell sessions.
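Since aliases are only expanded in interactive shells by default, seeing one work inside a script requires turning on expand_aliases (a bash feature). The sketch below substitutes echo for the real sudo apt-get so it is safe to run; all names are illustrative:

```shell
# Write and run a tiny script that defines and uses the alias.
cat > alias_demo.sh <<'EOF'
shopt -s expand_aliases
alias install='echo would run: sudo apt-get install'
install vim
EOF
bash alias_demo.sh   # prints: would run: sudo apt-get install vim
```

In your interactive shell, with the alias in ~/.bashrc, no such workaround is needed.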
creating simple command for sudo apt-get install?
I've got a bunch of strings which I need to find in a couple of files, for example:

string1
string2
stringn

file1.txt
file2.txt
filen.txt

Is there an (easy) way to do that in bash? I need to know, if a string was found, in which file it is.
A simple grep command with the -e option:

grep -e "string1" -e "string2" -e "stringn" file*.txt

Or you can put all the search strings in a file called search.txt, like this:

string1
string2
string3
...
stringN

and then run grep with the -f option:

grep -f search.txt file*.txt
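A small self-contained run showing that, with multiple files, grep prefixes each match with the file it came from (all file names here are invented):

```shell
printf 'alpha\nbeta\n' > search_demo.txt
printf 'alpha here\nno match\n' > file1_demo.txt
printf 'beta there\n' > file2_demo.txt

# Each hit is printed as "filename:matching line".
grep -f search_demo.txt file1_demo.txt file2_demo.txt
```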
Searching strings on files
I have a sed one-liner to print a line range from a file:

sed -n '10,20p' file.txt

The above will print lines 10 to 20 of file.txt. But how can I also print the line numbers?
Using only sed:

sed -n '10,20{=;p}' file.txt | sed '{N; s/\n/ /}'

The = prints the current line number on its own line, and p prints the line itself. N; then tells the second sed to add the next line into the pattern space, so it is working with both lines, and s/\n/ / replaces the newline character with a space, "merging" the two lines together.

Sources:
(1) Numbering lines matching the pattern using sed
(2) How can I "merge" patterns in a single line? - Unix & Linux Stack Exchange

Explanations by Alaa Ali (in source 2).
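A quick check on generated data (range_demo.txt is an invented name); each output line is the line number followed by the line itself:

```shell
seq 30 | sed 's/^/content /' > range_demo.txt

# Prints "10 content 10" through "20 content 20".
sed -n '10,20{=;p}' range_demo.txt | sed '{N; s/\n/ /}'
```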
Print line range from file, and include line numbers
I am following this question, though some of the file names here contain a dash at the beginning of the filename. This is interpreted as an additional option by cp. Following another question (on Server Fault), I tried altering the command to:

shuf -zn8 -e *.jpg | xargs -0 cp -vt -- {} target/

or

shuf -zn8 -e *.jpg -exec cp -vt -- {} target/

to no avail. How do I cope with - at the beginning of the filename?
The -t option (a GNU extension) takes an argument which is the target directory. With xargs -0 cp -vt -- target/, that would try to copy target/ and the selected files into a directory called --, and you would still not have marked the end of options. You would need to mark the end of options for shuf as well. {} is only special with find's -exec predicate, or with xargs if you use -I'{}', but you don't need it here; shuf has no -exec predicate. Here, you'd want:

shuf -zen8 -- *.jpg | xargs -r0 cp -vt target --

Or:

shuf -zen8 ./*.jpg | xargs -r0 cp -vt target

With zsh, you can also use its expression-based ordering glob qualifier:

cp -v -- *.jpg(oe['REPLY=$RANDOM'][1,8]) target/

That has the advantage over the shuf approach of working on any system (provided zsh is installed; you may need to give up on the non-standard -v on some systems) and also of still working even if there are too many jpg files in the current directory (where executing shuf would fail with a "Too many arguments" error).
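The core of the dash problem can be reproduced without shuf (the directory names here are invented): once -- or a ./ prefix guards the operands, cp accepts a file whose name starts with -:

```shell
mkdir -p dash_demo dash_target
touch dash_demo/-first.jpg dash_demo/normal.jpg

# Without `--`, cp would try to parse -first.jpg as options.
(cd dash_demo && cp -t ../dash_target -- *.jpg)

ls dash_target   # both files arrived, dash and all
```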
Coping with filenames starting with a dash ("-") when using `-exec` and `xargs`
There are a couple of questions related to the bash fork bomb :(){ :|: & };: , but when I checked the answers I still could not figure out what exactly the part where one function pipes into the next is doing, basically this part: :|: . I understand so far that the pipe symbol connects two commands by connecting the standard output of the first to the standard input of the second, e.g. echo "Turkeys will dominate the world" | sed 's/s//'. But I do not get what the first function is pushing through its standard output into the second one; after all, there are no return values defined inside the function. So what is travelling through the human centipede if the man at the beginning has an empty stomach?
Short answer: nothing. If a process reads nothing from STDIN, you can still pipe to it. Similarly, you can still pipe from a process that produces nothing on STDOUT. Effectively, you're simply piping a single EOF indication into the second process, which is simply ignored.

The construction using the pipe is simply a variation on the theme of "every process starts two more". This fork bomb could also be (and sometimes is) written as:

:(){ :&:; }; :

where the first recursive call is backgrounded immediately, then the second call is made.

In general, yes, the pipe symbol (|) is used to do exactly what you mentioned: connect STDOUT of the first process to STDIN of the second process. That's also what it's doing here, even though the only thing that ever goes through that pipe is the single EOF indication.
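You can observe the "empty stomach" harmlessly: pipe from a command that produces no output and the reader simply sees end-of-file:

```shell
# `true` writes nothing to STDOUT; cat sees immediate EOF and exits.
out=$(true | cat)
if [ -z "$out" ]; then
    echo "the pipe carried nothing but EOF"
fi
```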
What exactly is the function piping into the other function in this fork bomb :(){ :|: & };:?
I was following a tutorial on how to find out the dependent libraries of a program, and it was explained like this: whereis firefox shows the folders where it is installed; take the full path to the binary and put it as the argument of the ldd command, i.e. ldd /usr/bin/firefox. The tutorial also used firefox as the example, so I was sure I could recreate it, but when I typed:

$ whereis firefox
firefox: /usr/bin/firefox /usr/lib/firefox /etc/firefox /usr/share/man/man1/firefox.1.gz
$ ldd /usr/bin/firefox
not a dynamic executable

I got this "not a dynamic executable" message instead of the list of libraries. Why?
The firefox executable is a shell script on your system. Some applications employ a wrapper script that sets up the execution environment for the application, possibly to allow for better integration with the current flavor of Unix, or to provide alternative ways to run the application (new sets of command line options etc.) that the application itself is not providing. Sometimes a wrapper script is used to pick the correct actual binary to run based on the way that script was called. For example, the MPI ("Message Passing Interface") C compiler is nothing more than a wrapper script around cc (or whatever compiler it's set up to use) that ensures that the MPI headers are in the search path and that the MPI library is linked in when compiling. Have a look at this script to see what binaries it's calling under what circumstances.
Why does not "ldd /usr/bin/firefox" list library files?
I tried the following:

$ wget http://cdn.edgecast.steamstatic.com/steam/apps/256679148/movie480.webm > Conan1.webm
--2017-02-23 01:51:50-- http://cdn.edgecast.steamstatic.com/steam/apps/256679148/movie480.webm
Resolving cdn.edgecast.steamstatic.com (cdn.edgecast.steamstatic.com)... 117.18.232.131
Connecting to cdn.edgecast.steamstatic.com (cdn.edgecast.steamstatic.com)|117.18.232.131|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 11255413 (11M) [video/webm]
Saving to: ‘movie480.webm’

movie480.webm 100%[===================================================================>] 10.73M 33.7KB/s in 6m 25s

2017-02-23 01:58:16 (28.6 KB/s) - ‘movie480.webm’ saved [11255413/11255413]

As can be seen, the first part of the command worked: wget downloaded the file. But the second part, the renaming, did not: the file was saved under the very generic name movie480.webm rather than Conan1.webm, the name I had suggested. I do know that if I then ran mv movie480.webm Conan1.webm it would work, but that means an additional command. Why did the redirection fail, and could the same thing have been done in a single command?
You didn't "suggest it took" the name Conan1.webm; you redirected its standard output stream to a file called Conan1.webm. Since wget doesn't write the downloaded document to standard output by default, that has no effect on where the content is saved. See man wget, in particular the -O option:

-O file
--output-document=file
    The documents will not be written to the appropriate files, but all will be concatenated together and written to file. If - is used as file, documents will be printed to standard output, disabling link conversion. (Use ./- to print to a file literally named -.)

So you could have used:

wget -O Conan1.webm http://cdn.edgecast.steamstatic.com/steam/apps/256679148/movie480.webm

or:

wget -O- http://cdn.edgecast.steamstatic.com/steam/apps/256679148/movie480.webm > Conan1.webm
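The underlying rule can be shown without any network access: redirecting stdout captures only what a program writes to stdout, and a program that opens its own output file (as wget does by default) bypasses the redirection entirely. The file names below are invented:

```shell
# The inner command writes to a file of its own choosing, not to stdout.
sh -c 'echo payload > direct_demo.txt' > redirected_demo.txt

cat direct_demo.txt            # "payload" landed in the program's own file
wc -c < redirected_demo.txt    # 0: nothing ever reached stdout
```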
Why doesn't wget url/mediafile.ext > medafile2.ext work?
Sometimes when I log on to a system via SSH (for example, to a production server), I have privileges such that I can install some software, but to do that I need to know which system I am dealing with. I would like to be able to check what is installed there. Is there a way, from the CLI, to determine which distribution of Unix/Linux is running?
Try:

uname -a

It will give you output such as:

Linux debianhost 3.16.0-4-686-pae #1 SMP Debian 3.16.36-1+deb8u2 (2016-10-19) i686 GNU/Linux

You can also use:

cat /etc/*release*

PRETTY_NAME="Debian GNU/Linux 8 (jessie)"
NAME="Debian GNU/Linux"
VERSION_ID="8"
VERSION="8 (jessie)"
ID=debian
HOME_URL="http://www.debian.org/"
SUPPORT_URL="http://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
How can I get system information from the command line? [duplicate]
I have a multi-column CSV file (comma separated) which has two columns with different dates (mm/dd/yyyy). I want to calculate the difference between these two dates. The following is an example of the file's contents:

001xxxc,28.2,03/04/2009,11/19/2009
00cvbfd,34.4,03/04/2009,01/06/2010
04rsdsd,34,12/01/2006,10/02/2001
456dfds,40,12/01/2006,04/23/2002
et556ss,40.8,12/01/2006,10/22/2002

I wonder, is there any way to use an awk command to get the date difference? I tried this awk command, but I am sure it is not the correct way:

awk -F, '{print $1","$2","$3-$4}' filename
Assuming you want the difference in days, then if you have GNU awk (gawk) you could do something like:

gawk -F, '
  {
    split($3,a,"/"); split($4,b,"/");
    t1 = mktime(sprintf("%d %d %d 0 0 0 0",a[3],a[1],a[2]));
    t2 = mktime(sprintf("%d %d %d 0 0 0 0",b[3],b[1],b[2]));
    print (t2-t1)/86400
  }
' filename

which, for the sample data, prints:

260
308
-1886
-1683
-1501

The mktime argument needs to be a string of the format "YYYY MM DD HH MM SS [DST]"; setting the optional DST flag to zero tells it to ignore daylight saving time (otherwise the naive division by 86400 can result in fractional days). See GAWK: Effective AWK Programming, 9.1.5 Time Functions.
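A quick self-check of the approach on the first sample row (guarded, since it needs gawk for mktime): March 4 to November 19, 2009 is 260 days:

```shell
if command -v gawk >/dev/null 2>&1; then
    printf '001xxxc,28.2,03/04/2009,11/19/2009\n' > dates_demo.csv
    gawk -F, '{
        split($3, a, "/"); split($4, b, "/")
        t1 = mktime(sprintf("%d %d %d 0 0 0 0", a[3], a[1], a[2]))
        t2 = mktime(sprintf("%d %d %d 0 0 0 0", b[3], b[1], b[2]))
        print (t2 - t1) / 86400    # 260 for this row
    }' dates_demo.csv
fi
```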
How to use awk command to calculate the date difference between two columns in the same file?
I'm trying to do this, but I can't create the file. I enter:

sort myfile.txt uniq -u | tee newfile.txt

and it won't create the file automatically. What am I missing here?
You are missing one pipe character (|) between sort and uniq. Try:

sort myfile | uniq -u | tee newfile.txt

If this is not working, please provide the error message you are getting. By the way, the uniq -u command eliminates all lines which have duplicates. If this is your intention, that is fine. But if you want to keep one copy of each duplicated line, you need to drop -u from the uniq part of this command line, i.e.:

sort myfile | uniq | tee newfile.txt
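The difference between uniq and uniq -u is easy to see on a four-line sample (the file names are invented):

```shell
printf 'b\na\nb\nc\n' > dup_demo.txt

sort dup_demo.txt | uniq       # a, b, c: one copy of each line
sort dup_demo.txt | uniq -u    # a, c: only lines that were never duplicated

# With the missing pipe restored, tee writes the file and echoes it.
sort dup_demo.txt | uniq | tee new_demo.txt
```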
Give the command to remove duplicate lines in a .txt file and save the new file as new.txt file
EDIT: Total rewrite of question for clarity.

I have a directory tree (new) with a bunch of files with an extension of .new. I have an identical tree (old) where many of the files have names identical to those in the new tree, except that the extension is .old. I would like to copy all of the .new files from the new directory tree into the old directory tree, which contains the .old files. As a file with a .new extension is written into the old directory tree, I would like to delete any file with the same name but a .old extension.

So, if in the new directory tree there is a file named new/foo/bar/file.new, it will be copied to the old directory tree as old/foo/bar/file.new, and then the file old/foo/bar/file.old will be deleted if it exists.

EDIT #1: This answer was hashed out below (using the old question that had extraneous background information that was confusing). See the actual solution that I worked out, posted below as one of the answers.
This was the final answer that got hashed out in the comments on terdon's answer:

cd new
for i in */*/*.new; do
    cp "$i" "path/to/old/$i" && rm -f "path/to/old/${i%.new}.old"
done

Using ${i%.new}.old rather than ${i//new/old} avoids accidentally rewriting any other occurrence of "new" elsewhere in the path or filename, and rm -f keeps the loop quiet when no matching .old file exists.
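Exercised on a throwaway pair of trees (cpdemo_new and cpdemo_old are invented stand-ins for the real new and old directories):

```shell
mkdir -p cpdemo_new/foo/bar cpdemo_old/foo/bar
touch cpdemo_new/foo/bar/file.new cpdemo_old/foo/bar/file.old

(cd cpdemo_new && for i in */*/*.new; do
    cp "$i" "../cpdemo_old/$i" && rm -f "../cpdemo_old/${i%.new}.old"
done)

ls cpdemo_old/foo/bar   # file.new is there; file.old is gone
```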
Need to copy files to existing directory and remove files already there with the same name but different extension
What is the correct format for if/then? I tried lots of varieties. Should the format work at the command line when not in a program?

$ if [1==2] then echo "y" fi;
> ;
-bash: syntax error near unexpected token `;'
Try this one: if [ 1 == 2 ]; then echo "y" ; fi And better use -eq, unless you want to compare 1 and 2 as strings. Useful link: http://tldp.org/LDP/Bash-Beginners-Guide/html/sect_07_01.html
-bash: syntax error near unexpected token `;' [duplicate]
1,481,873,711,000
I need to find out whether my OS (not the hardware) is 32-bit or 64-bit. Which command is best? uname -p uname -i uname -m arch All the above commands return the same answer: On 32 bit systems: i686/i386 On 64 bit systems: x86_64
I would recommend instead using getconf LONG_BIT. [root@mymachine ~]# getconf LONG_BIT 64 This will clearly output either 32 or 64, depending on your installed kernel, whereas uname -m (and etc.) indicate the underlying hardware name. See also the Stack Overflow question How to determine whether a given Linux is 32 bit or 64 bit?, but be sure to read the helpful commentary.
Which cmd is the best for determining the OS' word size (32/64)-bit? [duplicate]
1,481,873,711,000
I don't quite understand pipes in the Linux command line. I noticed that: ll -R | grep *.pdf will list files ending with .pdf But locate *.pdf | du -h will not calculate the size of files ending with .pdf. Rather it will list the size of files in the current directory. What is going wrong here? What I have in mind is that the output of the first command should be the input of the next.
Pipes work by sending one program's output to another program's input. This means that the program receiving the output of the other has to be able to read from stdin (standard input). In this case, grep is able to read the output of ll because it is designed that way. du expects a command line argument pointing to the directory it should run in (if a directory isn't given, it will default to the current working directory ./); it ignores its standard input. As for seeing the sizes of the .pdf files, if all the files are in one directory, you can run du -h -d1 /path/to/pdf/dir or locate '*.pdf' | xargs du -h (xargs turns the names on its stdin into command-line arguments for du). If they are in different directories, you will want to use find with -exec (another user will probably give you a hand with this, I'm not quite sure how to do it).
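To see the difference concretely (directory and file names here are invented for the demo): du ignores its standard input, while xargs turns that input into command-line arguments:

```shell
mkdir -p pdfdemo
printf 'dummy\n' > pdfdemo/a.pdf
printf 'dummy\n' > pdfdemo/b.pdf

# du never reads file names from stdin, so this just reports the current directory
find pdfdemo -name '*.pdf' | du -h

# xargs converts the name list into arguments, so du reports each file
find pdfdemo -name '*.pdf' -print0 | xargs -0 du -h
```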
Passing values through pipes
1,481,873,711,000
I have data like: ['/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/', '/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom1/', '/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom2/', '/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom3/', '/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom4/', '/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom5/', '/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom6/', '/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom7/', '/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom8/', '/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom9/'] I want each value without quote one per line so that I can pipe it to another command, like: /org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/ /org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom1/ .... awk -F'[][]' '{print $2}' removes the square brackets. I am not understanding how to proceed further.
You could just remove the leading [', trailing '] and replace all ', ' with a newline, and delete a [] or @as [] line which I see gsettings using to represent empty lists of strings (where the @as prefix specifies that the array is an array of strings). gsettings get some-path some-array-key... | sed "/^@as \[\]\$/d /^\[\]\$/d s/^\['// s/'\]\$// s/', '/\\ /g" With GNU sed, the newline in the replacement in the last s command above can be expressed as \n instead of \ followed by a literal newline. If it weren't for the '...' instead of "..." and that @as prefix for empty lists, that would be a valid JSON array, so you could also do: gsettings get some-path some-array-key | sed "y/'/\"/; s/^@as //" | jq -r '.[]' Note that none of those work for arbitrary arrays of strings as they don't account for the escaping that can be done by gsettings as evidenced by: $ gsettings list-recursively | grep '\\' org.freedesktop.ibus.panel.emoji favorites ['\u200b'] org.gnome.evolution.shell filename-illegal-chars "'\"&`();|<>$%{}!" (where we can even see it switching to "..." in the last case where the string value contains a ', making it a proper JSON string in that case). $ gsettings set org.gnome.seahorse last-search-text $'\1\xa\U10FFFF\'\"\\' $ gsettings get org.gnome.seahorse last-search-text "\u0001\n\U0010ffff'\"\\" The format seems to be the one described at https://developer-old.gnome.org/glib/stable/gvariant-text.html The JSON::PP Perl module supports JSON strings using single quote delimiters, so you could use it to extract and decode any single-quoted or double-quoted string in any GVariant object with something like: gsettings get any-path any-key | perl -C -MJSON::PP -lne ' BEGIN{ $j = JSON::PP->new->allow_singlequote } print $j->decode($_) for /'\''(?:\\.|[^\\'\''])*'\''|"(?:\\.|[^\\"])*"/g' Where the regexp finds pairs of '...' or "..." (allowing for escaped quotes or backslashes inside), that are passed to the JSON decoder. Ignoring the b'...', b"..."
bytestrings, on which JSON::PP would choke; AFAICT, gsettings outputs them as [byte 0xHH, 0xHH...] instead.
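Reduced to its core, the first sed approach looks like this on a short made-up list (GNU sed, where \n is allowed in the replacement):

```shell
list="['/path/custom0/', '/path/custom1/']"

# strip the leading [' and trailing '], then split on ', '
printf '%s\n' "$list" | sed "s/^\['//; s/'\]\$//; s/', '/\n/g"
# /path/custom0/
# /path/custom1/
```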
Extract data which is inside square brackets and separated by comma
1,481,873,711,000
I would like to list the files matching a specific pattern along with their number of rows. So far I have tried the following, which lists the files matching the desired pattern: find 2021.12.*/ -maxdepth 2 -name "myfilepattern.csv" -ls 123456789 32116 -rw-rw-r-- 1 user1 user1 32881884 Dec 1 23:59 2021.12.01/myfilepattern.csv 234567891 4 -rw-rw-r-- 1 user1 user1 144 Dec 2 00:00 2021.12.02/myfilepattern.csv I would like to add a column to this result containing the number of rows of each of the files 2021.12.01/myfilepattern.csv and 2021.12.02/myfilepattern.csv. I don't have any specific requirements about the position of such a column. It can be at the beginning or at the end.
You can use -printf and -exec actions, along with wc -l to count lines/rows: find 2021.12.*/ -maxdepth 2 -name "myfilepattern.csv" -printf '%i\t%k\t%M\t%n\t%u\t%g\t%s\t%Tb %Td %TH:%TM\t' -exec wc -l {} \; The row count will be the second to last column.
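A small reproduction (GNU find), using just %s to keep it short; wc -l prints the file name after the count, which is why the count ends up second to last:

```shell
mkdir -p 2021.12.01
printf 'row1\nrow2\n' > 2021.12.01/myfilepattern.csv

find 2021.12.*/ -maxdepth 2 -name 'myfilepattern.csv' -printf '%s\t' -exec wc -l {} \;
# prints: 10 <tab> 2 2021.12.01/myfilepattern.csv  (size, then row count, then name)
```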
Count number of rows of files matching pattern [closed]
1,481,873,711,000
For now I use this: mkdir -p a/b/c/d/e; touch a/b/c/d/e/file.abc; Is there a more efficient way?
In terms of tools used: no. touch will fail (rightly) if you are trying to operate in a directory that does not exist, and mkdir does precisely one thing: create directories, not normal files. Two different jobs mandate two different tools. That said, if you're talking about efficiency in terms of the number of lines in a script, or the readability of one, you could put it into a function: seedfile() { mkdir -p -- "$(dirname -- "$1")" && touch -- "$1" } seedfile /path/to/location/one/file.txt seedfile /path/to/somewhere/else/file.txt seedfile local/paths/work/too/file.txt
Create file in subdirectories that doesn't exist (../new_folder/new_folder/new_file.ext)
1,481,873,711,000
Is it possible to change the default behavior of a command? I assume that this question is pretty straightforward, but let me give an illustration and example. While connected to some servers via SSH, autocomplete via tab does not work, making long commands tedious. For example here is an ls command: ls -lah --group-directories-first This says to list all files, including hidden files, in a top-to-bottom list in a human readable format, while sorting files from directories so that directories come first. Is there a way that I can configure the ls command to perform the above command by simply typing ls? By default ls lists all files. There must be a place where the 'ls' command is located so that when you run it the system knows where to look to find it, see what its default behavior is, and then run it based on the specifications of the provided 'ls' command. Ultimately, can I change the output or default function of the ls command?
The typical way of doing this on Unix- or Linux-style systems is to use shell aliases: alias ls='ls -lah --group-directories-first' Note that this will overwrite any existing ls alias, so you might want to check the output of alias ls first and combine the above; for example alias ls='ls -lah --group-directories-first --color=tty' To make this permanent, add it to your .bashrc file in your home directory (assuming you’re using Bash, which you probably are if you’re discovering all this). Altering the default behaviour in such an extensive way can end up being confusing, so I’d suggest creating a new command using an alias instead; I have alias l='ls -lah --group-directories-first' for example. This also allows building upon other aliases: this l alias uses whatever is defined as ls, and if that’s an alias, that gets used too (so there’s no need to repeat the --color=tty option in this case). To remove an alias, use unalias: unalias ls
Is it possible to change the default behavior of a command?
1,481,873,711,000
I have some .jpeg images saved in a folder on a cluster at school. I am at home using PuTTY to access one of the Debian nodes. I'm able to get to the folder with all the images, but I need to see the dimensions of each one in order to run one more computation on them. Is there a way to do this? In keeping with one of the fundamental rules of asking for help in computer programming, I should illustrate my ultimate goal. The objective was to take an input image and perform a singular value decomposition on it in order to see what happens as we truncate the image. With lower amounts of singular values used, the resulting images were blurry. I now need to determine the compression ratio for each image (ratio of output to input image file size). My intention was to look at the dimensions of each resulting image and divide the new width by the original width and the new height by the original height. If anyone knows a better way to do this I'm all ears :) Note: I do not have sudo privileges because it is a university computer, not my own.
There are a number of tools that will do this: identify from ImageMagick jhead jpeginfo some versions of the file command If these programs are not installed, note that both jhead and jpeginfo are quite simple and, presuming a compiler is available, will be easy to build in your own user account.
Is there a way to see the dimensions of a jpeg file in linux using the command line?
1,481,873,711,000
I read the following in the grymoire: A simple example is changing "day" in the "old" file to "night" in the "new" file: sed s/day/night/ <old >new Or another way (for UNIX beginners), sed s/day/night/ old >new Why might the author consider the first form more advanced? I mean, what are the advantages of using this form over the "beginner's" syntax?
One advantage to allowing the shell to do the open() like: utility <in >out as opposed to allowing the named utility to do the open() like: utility in >out ...is that the file-descriptor is secured before the named utility is called, or else if there is an error encountered during the open(), the utility is never called at all. This is the best way to guard against side-effects of possible race conditions - as can happen from time to time when working with streams and the stream editor. If a redirection fails, the shell short-circuits the call to the utility and writes an error message to stderr - the shell's stderr and not whatever you might have temporarily directed it to for the utility (well, that depends on the command-line order of redirections as well) - in a standard diagnostic format. The most simple way to test if you can open a file is to open it, and < does that implicitly before anything else. Probably the most obvious race condition indicated in the commands in your question involves the out redirection. In both forms the shell does the > write open as well and this happens regardless of whether sed can successfully open the readfile in the second form. So out gets truncated - and possibly needlessly. That could be bad if you only wanted to write your output if you could successfully open your input. That's not a problem, though, if you always open your input first, as is done in the first form. Otherwise, there are at least 10 numerically referenced file descriptors that can be manipulated with shell redirection syntax in that way, and these combinations can get kind of hairy. Also, when the shell does the open, the descriptor does not belong to the called command - as it does with the second version - but to the shell, and the called command only inherits it. It inherits in the same way any other commands called in the same compound command might be, and so commands can share input that way.
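The truncation difference is easy to demonstrate with a missing input file (file names here are invented; redirections are processed left to right, so in the first form the failed < open short-circuits the > open):

```shell
printf 'precious\n' > out

# Second form: the shell truncates out first, then sed fails on the missing file
sed 's/day/night/' no_such_file > out 2>/dev/null || echo 'sed failed'
wc -c < out                                        # 0 -- out was emptied anyway

printf 'precious\n' > out

# First form: the < open fails first, so > out is never performed
{ sed 's/day/night/' < no_such_file > out; } 2>/dev/null || echo 'redirection failed'
cat out                                            # precious -- untouched
```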
Input from file: "advanced" (using less-than sign) vs. "beginner" syntax
1,481,873,711,000
Ran across a shell script that had '=~' in a conditional and I was wondering what it meant. Not much luck on Google or SO sites. Example: if [[ $VAR =~ 'this string' ]]
It's a regular expression match operator. From the bash man page: An additional binary operator, =~, is available, with the same precedence as == and !=. When it is used, the string to the right of the operator is considered an extended regular expression and matched accordingly (as in regex(3)). The return value is 0 if the string matches the pattern, and 1 otherwise. See bash's man page for more details (search for =~)
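A quick demonstration (bash). Note that quoting the right-hand side, as in the example above, makes it a literal substring match; an unquoted pattern is treated as a regex, and the matched text is available in BASH_REMATCH:

```shell
#!/bin/bash
VAR='this string, updated 2024-01-01'

if [[ $VAR =~ 'this string' ]]; then
    echo 'literal substring matched'
fi

if [[ $VAR =~ [0-9]{4}-[0-9]{2}-[0-9]{2} ]]; then
    echo "regex matched: ${BASH_REMATCH[0]}"    # regex matched: 2024-01-01
fi
```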
What does =~ mean? [duplicate]
1,481,873,711,000
I'm trying to use apropos to look for all man pages starting with system. I try: apropos ^system apropos "^system" but these seem to return lines that don't start with system, but where system occurs somewhere in the line. Any ideas? Edit As per the comment below, the above actually works, but it matches against several components: - cmd name - cmd description - cmd one liner. So when I searched for system, I got a line like this: tapset::task_time (3stap) - systemtap task_time tapset Which makes sense because the description starts with system. One way to get really just the lines starting with "system" would be: apropos "" | grep "^system"
Running apropos '^system' works for me, returning the list of man pages where either the page name itself starts with system or the one line description starts with system. For example, the output on Debian (jessie) includes: system-config-printer (1) - configure a CUPS server sigset (3) - System V signal API I know of no clean way to tell apropos to search only in page names or in the one-line description, but there's always grep: apropos system | grep -- '^system' # page names apropos system | grep -- '- system' # descriptions Either of these can be encapsulated in a shell function such as this: apro() { apropos "$1" | grep -- "^$1"; }
apropos regex start with?
1,481,873,711,000
I've got some third party log files I'm trying to pull errors out of on the command line. The logs look like this: time=1 time=2 time=3 at com.test.com.... at com.test.com.... at com.test2.com.... time=4 time=5 time=6 time=7 time=8 time=9 at org.badstuff.com... at org.badstuff.com... at org.badstuff.com... time=10 time=11 The lines that start with at start with a TAB character, so they can be easily matched. How can I pull these stack traces out of this file, and a couple of lines before the first stack trace each time? I'm on a Mac, but would prefer a generic solution that works on Mac/Linux if possible, as I have to work on both quite often. So from my above example, I'd pull out the following time=2 time=3 at com.test.com.... at com.test.com.... at com.test2.com.... time=8 time=9 at org.badstuff.com... at org.badstuff.com... at org.badstuff.com...
Use grep's context options: -A (after context), -B (before context) or -C (both). To fit your example, match lines that begin with a tab and ask for two lines of leading context. Since grep does not interpret \t as a tab in its default regex modes, pass a literal tab, e.g. with bash/zsh/ksh $'...' quoting: grep -B2 $'^\t' file or, portably: grep -B2 "^$(printf '\t')" file Separate groups of matches are delimited by a -- line.
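A scaled-down reproduction of the log; $'...' quoting (bash/zsh/ksh) turns \t into a real tab before grep ever sees the pattern:

```shell
printf 'time=1\ntime=2\ntime=3\n\tat com.test.com\ntime=4\n' > app.log

grep -B2 $'^\t' app.log
# output: time=2, time=3, then the tab-indented "at" line
```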
How to match multiple lines starting with a TAB, and the line before the 1st one in a group?
1,481,873,711,000
I am trying to list all of the files that include a function, e.g., matrixCal. How can I do this in Linux?
Would grep be ok? grep -R matrixCal /location/of/your/code To get just the file names rather than every matching line, add -l: grep -Rl matrixCal /location/of/your/code
How to find every file that includes a given function?
1,481,873,711,000
I have this command: sed -i 's/^CREATE DATABASE.*$//' world.sql If I run that, it says: sed: -I or -i may not be used with stdin and creates a new file called orld.sql. The original file still exists afterwards. So I guess sed parses world.sql as the script w orld.sql, i.e. the w command writing to the file orld.sql? How can I prevent that behaviour? [This is on macOS.]
As far as I know, sed on MacOS is the FreeBSD flavor which requires a backup suffix to be supplied when using the -i option. The error message you get implies that it mis-interpreted parts of your command because you used the option without providing one. So, try it with sed -i".backup" 's/^CREATE DATABASE.*$//' world.sql which will create a world.sql.backup and otherwise perform the in-place edit of world.sql. Some sed versions may accept an empty backup suffix, as in sed -i '', which will prevent creation of a backup file, but you would need to look it up in the documentation for the specific version you are using. See sed command with -i option failing on Mac, but works on Linux How can I achieve portability with sed -i (in-place editing)? for more insight
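A worked example. The attached form sed -i.backup (no space before the suffix) is accepted by both GNU and BSD/macOS sed, which makes it the least surprising way to write this; shown here with GNU sed:

```shell
printf 'CREATE DATABASE world;\nUSE world;\n' > world.sql

sed -i.backup 's/^CREATE DATABASE.*$//' world.sql

cat world.sql          # first line is now empty; "USE world;" remains
cat world.sql.backup   # the original content, untouched
```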
Using "sed" with "-i" option creates strangely-named new file while leaving the input file untouched
1,481,873,711,000
I am trying to find a command that displays files larger than 1 GB and displays those files ordered by size. I have tried find . -maxdepth 2 -type f -size +1G -print0 |xargs -0 du -h |sort -rh but for some reason this displays files of size that are not greater than 1 GB. For example this is in the output 1.0K ./<repo>/.git/info
There are at least two possible causes: Maybe your find prints nothing. In this case xargs runs du -h which is equivalent to du -h .. Investigate --no-run-if-empty option of GNU xargs. Or better get used to find … -exec … instead of find … | xargs …. Like this: find . -maxdepth 2 -type f -size +1G -exec du -h {} + | sort -rh find -size tests (almost) what du --apparent-size shows, while du without this option may greatly disagree, especially when the file is sparse. The option is not portable. I think in your case the first cause is the culprit. Note ./<repo>/.git/info couldn't come from find . -maxdepth 2 -type f because its depth is 3. This means du operated recursively on some directory.
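Both points can be seen with a sparse file (GNU coreutils/findutils; truncate makes a file whose apparent size is 2 GiB while allocating almost no blocks):

```shell
mkdir -p sizedemo
truncate -s 2G sizedemo/big     # sparse file: huge apparent size, ~no disk usage
truncate -s 1K sizedemo/small

# -size tests the apparent size, so only "big" qualifies; -exec avoids the empty-input pitfall
find sizedemo -maxdepth 2 -type f -size +1G -exec du -h --apparent-size {} + | sort -rh
# 2.0G	sizedemo/big

du -h sizedemo/big              # plain du counts allocated blocks: near zero for a sparse file
```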
Linux show files in directory larger than 1 GB and show size
1,481,873,711,000
I have the following basic.json file: { "user": "user", "pass": "password" } I'm trying to encode it in base64 like this: "Basic dXNlcjpwYXNzd29yZA==" I think I'm close: echo Basic $(echo $(cat basic.json | jq '.user'):$(cat basic.json | jq '.pass') | base64) I've used the jq access method found here. I've used the base64 method found here. But the result is wrong: Basic InVzZXIiOiJwYXNzd29yZCIK I've tried the -e flag as mentioned in the article: echo Basic $(echo $(cat basic.json | jq '.user'):$(cat basic.json | jq '.pass') | base64 -e) But it throws this error: base64: invalid option -- 'e' Where did I make a mistake? Thanks in advance. My solution The command ended this way: RUN echo "map \"\" \$basicAuth {\n\ \tdefault $(jq '"Basic " + ("\(.user):\(.pass)" | @base64)' basic.json);\n\ }" > basic.conf And my basic.conf file finally has the correct basic auth: map "" $basicAuth { default "Basic dXNlcjpwYXNzd29yZA=="; } Thank you all
Using jq and @base64 operator: <basic.json jq '"Basic " + ("\(.user):\(.pass)"|@base64)' "Basic dXNlcjpwYXNzd29yZA==" user and pass values are given as string to base64 operator. The rest is simple string concatenation.
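End to end, with the -r flag added so jq prints the raw string rather than a JSON-quoted one (the @base64 filter ships with stock jq):

```shell
printf '{ "user": "user", "pass": "password" }\n' > basic.json

jq -r '"Basic " + ("\(.user):\(.pass)" | @base64)' basic.json
# Basic dXNlcjpwYXNzd29yZA==
```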
How to properly encode string based on json file?
1,481,873,711,000
my file pheno_Mt.txt looks like this: IID pheno 1000017 -9 1000025 -9 1000038 1 1000042 -9 1000056 -9 So it is space separated and I would like to convert it into tab separated. I tried: cat pheno_Mt.txt | tr ' ' '\t' > pheno_Mtt.txt and this: sed 's/ /\t/g' pheno_Mt.txt > pheno_Mtt.txt but this just tab separated the first line, the rest stay space separated. Machine I am running this on is: NAME="Ubuntu" VERSION="16.04.6 LTS (Xenial Xerus)" ID=ubuntu ID_LIKE=debian PRETTY_NAME="Ubuntu 16.04.6 LTS" VERSION_ID="16.04" od -c pheno_Mt.txt > outt head outt 0000000 I I D p h e n o \n 1 0 0 0 0 1 0000020 7 - 9 \n 1 0 0 0 0 2 5 - 9 \n 0000040 1 0 0 0 0 3 8 1 \n 1 0 0 0 0 4 0000060 2 - 9 \n 1 0 0 0 0 5 6 - 9 \n 0000100 1 0 0 0 0 7 4 - 9 \n 1 0 0 0 0 0000120 8 9 - 9 \n 1 0 0 0 0 9 3 1 \n 0000140 1 0 0 0 1 0 8 - 9 \n 1 0 0 0 1 0000160 1 5 - 9 \n 1 0 0 0 1 2 7 2 \n 0000200 1 0 0 0 1 3 0 - 9 \n 1 0 0 0 1 0000220 4 9 - 9 \n 1 0 0 0 1 5 1 - 9 od -c pheno_Mtt.txt > outtt head outtt 0000000 I I D \t p h e n o \n 1 0 0 0 0 1 0000020 7 \t - 9 \n 1 0 0 0 0 2 5 \t - 9 \n 0000040 1 0 0 0 0 3 8 \t 1 \n 1 0 0 0 0 4 0000060 2 \t - 9 \n 1 0 0 0 0 5 6 \t - 9 \n 0000100 1 0 0 0 0 7 4 \t - 9 \n 1 0 0 0 0 0000120 8 9 \t - 9 \n 1 0 0 0 0 9 3 \t 1 \n 0000140 1 0 0 0 1 0 8 \t - 9 \n 1 0 0 0 1 0000160 1 5 \t - 9 \n 1 0 0 0 1 2 7 \t 2 \n 0000200 1 0 0 0 1 3 0 \t - 9 \n 1 0 0 0 1 0000220 4 9 \t - 9 \n 1 0 0 0 1 5 1 \t - 9
$ tr ' ' '\t' <pheno_Mt.txt IID pheno 1000017 -9 1000025 -9 1000038 1 1000042 -9 1000056 -9 This looks as if the tr command only did something to the first line of the file, but since the output of a tab brings the cursor up to the next multiple of eight position on the screen, and since this happens to be exactly one space after the 7 character number, the effect is that the tabs on the other lines appears to only be a single space. Another way of doing this, by the way, which is not dependent on the number of spaces used in the original file, is $ awk -v OFS='\t' '{ print $1, $2 }' pheno_Mt.txt IID pheno 1000017 -9 1000025 -9 1000038 1 1000042 -9 1000056 -9 This uses awk to output two tab-delimited columns read from the whitespace-delimited input. Or, for any number of columns, $ awk -v OFS='\t' '{ $1=$1; print }' pheno_Mt.txt IID pheno 1000017 -9 1000025 -9 1000038 1 1000042 -9 1000056 -9 This forces awk to re-form the whole record by modifying the first field. A plain print would print the record with tabs as delimiters.
How to convert space separated file into tab separated? [closed]
1,481,873,711,000
function mv1 { mv -n "$1" "targetdir" -v |wc -l ;} mv1 *.png This only moves the first .png file it finds, not all of them. How can I make the command apply to all files that match the wildcard?
mv1 *.png first expands the wildcard pattern *.png into the list of matching file names, then passes that list of file names to the function. Then, inside the function $1 means: take the first argument to the function, split it where it contains whitespace, and replace any of the whitespace-separated parts that contain wildcard characters and match at least one file name by the list of matching file names. Sounds complicated? It is, and this behavior is only occasionally useful and is often problematic. This splitting and matching behavior only occurs if $1 occurs outside of double quotes, so the fix is easy: use double quotes. Always put double quotes around variable substitutions unless you have a good reason not to. For example, if the current directory contains the two files A* algorithm.png and graph1.png, then mv1 *.png passes A* algorithm.png as the first argument to the function and graph1.png as the second argument. Then $1 is split into A* and algorithm.png. The pattern A* matches A* algorithm.png, and algorithm.png doesn't contain wildcard characters. So the function ends up running mv with the arguments -n, A* algorithm.png, algorithm.png, targetdir and -v. If you correct the function to function mv1 { mv -n "$1" "targetdir" -v |wc -l ;} then it will correctly move the first file. To process all the arguments, tell the shell to process all arguments and not just the first. You can use "$@" to mean the full list of arguments passed to the function. function mv1 { mv -n "$@" "targetdir" -v |wc -l ;} This is almost correct, but it still fails if a file name happens to begin with the character -, because mv will treat that argument as an option. Pass -- to mv to tell it “no more options after this point”. This is a very common convention that most commands support. 
function mv1 { mv -n -v -- "$@" "targetdir" |wc -l ;} A remaining problem is that if mv fails, this function returns a success status, because the exit status of commands on the left-hand side of a pipe is ignored. In bash (or ksh), you can use set -o pipefail to make the pipeline fail. Note that setting this option may cause other code running in the same shell to fail, so you should set it locally in the function, which is possible since bash 4.4. function mv1 { local - set -o pipefail mv -n -v -- "$@" "targetdir" | wc -l } In earlier versions, setting pipefail would be fragile, so it would be better to check PIPESTATUS explicitly instead. function mv1 { mv -n -v -- "$@" "targetdir" | wc -l ((!${PIPESTATUS[0]} && !${PIPESTATUS[1]})) }
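The core difference between an unquoted $1 and "$@" can be seen in isolation (bash; set -- simulates the function's arguments):

```shell
#!/bin/bash
set -- 'A file.png' 'graph1.png'   # two arguments, the first containing a space

printf '<%s>\n' $1      # unquoted $1 is word-split: <A> then <file.png>
printf '<%s>\n' "$1"    # quoted: <A file.png>
printf '<%s>\n' "$@"    # every argument, intact: <A file.png> then <graph1.png>
```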
Why does a file move/copy function only move one file at a time when using the “*” wildcard?
1,481,873,711,000
My system: OS: MacOS / Mac OS X (Mojave 10.14.5) OS core: Darwin (18.6.0) Kernel: Darwin Kernel / XNU (18.6.0 / xnu-4903.261.4~2/RELEASE_X86_64) ls: version unknown, but man ls gives a page from the BSD General Commands Manual Shells: Bash: GNU bash, version 5.0.7(1)-release (x86_64-apple-darwin18.5.0) Zsh: zsh 5.7.1 (x86_64-apple-darwin18.2.0) In MacOS, in a terminal CLI using a shell such as bash or zsh, I'd like to use the (BSD) command ls (or perhaps a similarly common and useful tool) to list the contents of a directory other than the current working directory, where all files except those ending with a tilde (~) are shown. Excluding the last stipulation, ls naturally accomplishes this task when the non-current directory is used as an argument to ls: ls arg where arg is an absolute or relative path to the non-current directory (such as /absolute/path/to/directory, ~/path/from/home/to/directory, or path/from/current/dir/to/directory). I know how to list non-backup contents in the current directory, using filename expansion (aka "globbing") and the -d option (to list directories and not their contents), like so: ls -d *[^~] (or ls -d *[!~]). I want the same sort of results, but for a non-current directory. I can almost achieve what I want by using ls -d arg/*[^~], where arg is the same as described above, but the results show the path to each content element (ie, each file and directory in the directory of interest). I want ls to display each element without the path to it, like is done with ls arg. In Linux, using the GNU command ls, I can achieve exactly what I want using the -B option to not list backup files: ls -B arg. Although this is what I want, I'd like to achieve this using tools native to MacOS, preferably the BSD ls. Note: I do not want to use grep (eg, ls arg | grep '.*[^~]$'), because grep changes the formatting and coloring of the output. 
Question recap: On a Mac, how can I list the contents of a non-current directory but not the backup files, preferably using ls?
You could execute ls in a subshell: (cd arg; ls -d *[^~])
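For example (file names invented for the demo); the cd happens in a subshell, so the parent shell's working directory is unaffected:

```shell
mkdir -p arg
touch arg/notes.txt arg/notes.txt~ arg/data.csv

(cd arg; ls -d *[^~])   # lists data.csv and notes.txt; notes.txt~ is excluded
pwd                     # still the original directory
```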
On a Mac, how can I list contents of a non-current directory without showing backup files (ending with ~), preferably with BSD command ls?
1,481,873,711,000
On Mac, the reset command in terminal almost does the same thing as clear. On Ubuntu Linux, and maybe other flavors of Linux, the reset command actually "resets" the terminal so that you can't scroll up or see previously input commands by scrolling. Is there a way to make the reset command on Mac act/do the same thing as reset does on Linux? If so, how can I do it?
Actually (on MacOS), it's not "the exact same thing" (the manual page description for "clear" is different from "reset"). MacOS comes with ncurses 5.7 (9 years old), with some updates to the terminal database. If you want something newer, installing MacPorts lets you update ncurses to the current (less a few months) version. By the way, that would be newer than Ubuntu, which generally lags development versions by 6 months to 2 or more years.
Make reset on Mac like reset on Linux
1,481,873,711,000
Think of it as going to the most high level folder, doing a Ctrl Find, and searching .DS_Store and deleting them all. I want them all deleted, from all subfolders and subfolders subfolders and so on. Basically inside the top level folder, there should be no .DS_Store file anywhere, not even in any of its subfolders. What would be the command I should enter?
find top-folder -type f -name '.DS_Store' -exec rm -f {} + or, more simply, find top-folder -type f -name '.DS_Store' -delete where top-folder is the path to the top folder you'd like to look under. To print out the paths of the found files before they get deleted: find top-folder -type f -name '.DS_Store' -print -exec rm -f {} +
How to remove all occurrences of .DS_Store in a folder
1,481,873,711,000
From the Bash manual, Section 6.6 Aliases,   …   Bash always reads at least one complete line of input before executing any of the commands on that line.  Aliases are expanded when a command is read, not when it is executed. Therefore, an alias definition appearing on the same line as another command does not take effect until the next line of input is read. I am trying to understand: which of the following operations does the shell perform when "reading a complete line of input", and which when "executing any of the commands on the line"? Are there commands and examples which can show the results after Bash reads one complete line of input but before it executes any of the commands on that line? For example, "one complete line of input" may be a compound command which consists of several commands. "One complete line of input" can also be a pipeline of several compound commands, a list of pipelines, etc.
The part of the Bash manual you quoted talks about Aliases are expanded when a command is read, not when it is executed. Therefore, an alias definition appearing on the same line as another command does not take effect until the next line of input is read. So aliases are actually one of those examples that show the difference between reading and executing a line! To clarify, here is a compound command. We first try to define an alias, and then we try to use it: alias echo=uname; echo Someone who is unfamiliar with Bash would expect that the output of that command is Linux (or whatever OS you're running), but it actually it just outputs nothing. This is because Bash first reads the line and applies alias definitions. So this is what Bash sees: alias echo=uname; echo Note that the second echo is not converted to uname. This is because Bash hasn't executed our alias echo=uname command yet, it has only read it. After Bash has read the line, it executes what it has read. Therefore it executes: alias echo=uname; echo (which is exactly what it read, as described above) We can even check that Bash executed the command by typing echo. Since we have defined an alias before, echo will now be converted to uname during the read step, so Bash will execute uname. In short: $ alias echo=uname; echo $ echo Linux Edit: After your edit, you asked whether there is a way to show the results after Bash reads one line of input, but before actually executing them. The answer is yes, but it's not very convenient for every-day use (however, it's certainly useful for understanding what's going on inside the shell). Create a file with the following contents: #!/bin/bash shopt -s expand_aliases trapped() { echo "On line ${1}, Bash sees: $BASH_COMMAND" } trap 'trapped ${LINENO}' DEBUG alias echo=uname; echo echo Then, execute that file. 
Here is an explanation of how that script is working: shopt -s expand_aliases tells Bash to expand aliases even when it is in noninteractive mode (such as in this case) We then define a function trapped(). When called, this function will print the command that is currently executed (along with a line number, that gets passed as parameter). We call trap 'trapped ${LINENO}' DEBUG. This tells Bash "whenever you see a signal called DEBUG, execute trapped ${LINENO} first. DEBUG is a signal that is automatically generated by Bash before any execution of a simple command. In other words, before any command is executed, our function trapped() is called. Lastly, we execute a few commands for demonstration. However, since we have a trap going, trapped() is executed before anything else happens, so Bash prints what is about to be executed. The LINENO variable is just a Bash built-in, that gives us the current linenumber. If you want to be able to "run" commands without executing them at all and still see what Bash reads, you might want to look into the extdebug option in the Bash manual.
How can I show the results after Bash reads one complete line of input but before executing any of the commands on that line?
1,481,873,711,000
I have removed all my python code from the home directory, but when I do ls I get: a11.py~ class1.pdf foobar.txt~ n1 pic.py~ When I do ls like this: ls *.py I get: ls: cannot access *.py: No such file or directory I still have lot of files like the above. Are they hidden files or what? How do I solve this?
The ~ is part of the filename: ls *.py~ Thus, to delete all such files: rm *~
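If the backup files are scattered across subdirectories as well, find can list and delete them recursively; a small sketch in a scratch directory (the directory and file names here are made up for the demo):

```shell
# Create a scratch directory holding one backup file and one
# ordinary file, then delete only the *~ backups:
mkdir -p scratch && touch scratch/a11.py~ scratch/keep.txt
find scratch -type f -name '*~' -delete
ls scratch    # only keep.txt is left
```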
How to delete files with ~?
1,481,873,711,000
I noticed that some commands also come in with e- and f- versions, e.g. grep, egrep, fgrep and a few others. Is there some pattern or a naming convention if a particular command should have e- or f-version?
There is a clue about this in, e.g., man grep, which is also man fgrep and man egrep -- very often tools with minor variations like this will have one man page for all the variations, explaining them in relation to one another: In addition, two variant programs egrep and fgrep are available. egrep is the same as grep -E. fgrep is the same as grep -F. Direct invocation as either egrep or fgrep is deprecated, but is provided to allow historical applications that rely on them to run unmodified. Presumably, fgrep and egrep were once standardized names, but if you look further down the man page note that -E and -F are "specified by POSIX", implying this standardization changed tack, but (as stated above), backward compatibility is maintained. On the topic of whether programs 'should' have variants - no, there is no standard. But there are a lot of programs that do so thanks to the light-weight nature of links (see ln - ignore symbolic links).
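The practical difference between the variants shows up when the pattern contains a regex metacharacter; a quick sketch:

```shell
# Plain grep treats . as "any character"; with -F (the fgrep
# behaviour) the pattern is matched literally:
printf 'a.c\nabc\n' | grep 'a.c'      # matches both lines
printf 'a.c\nabc\n' | grep -F 'a.c'   # matches only the literal a.c
```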
e- and f- versions of commands
1,481,873,711,000
I don't remember exactly but there was either cp or mv command which I was able to do something like this with: cp file{.cpp, .cpp.org} Which would copy file.cpp and make a copy named file.cpp.org. Any suggestions?
This is a property of the shell, and not a property of the command itself. For more info, see: http://wiki.bash-hackers.org/syntax/expansion/brace On the command line, file{.cpp,.cpp.org} will always expand to file.cpp file.cpp.org In your example, it would be shorter to just type file.cpp{,.org}
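The expansion is easy to inspect with echo, since it happens before the command runs (note there must be no space after the comma; the space in the question's version splits the braces into separate words):

```shell
# echo shows exactly what the shell would pass to cp:
touch file.cpp
echo cp file.cpp{,.org}   # prints: cp file.cpp file.cpp.org
cp file.cpp{,.org}
ls file.cpp.org           # the copy now exists
```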
cp : short way of copying [closed]
1,481,873,711,000
Using pdftk it is possible to extract page ranges from a pdf using pdftk a.pdf cat 124-end output b.pdf dont_ask I have a bunch of huge PDFs with about 500 pages and over 100 MB, is it possible to automatically split those in pieces of maximal 5 MB?
I found this python script called smpdf that has this feature. This script is written in German (some of it) but it's easy enough to figure out what it's doing and how to use it. It requires PyPdf. Installation & Setup First download the script: svn checkout http://smpdf.googlecode.com/svn/trunk/ smpdf Then download & install PyPdf: wget http://pybrary.net/pyPdf/pyPdf-1.13.tar.gz tar zxvf pyPdf-1.13.tar.gz cd pyPdf-1.13 sudo python setup.py install cd ../smpdf Next I downloaded a sample PDF file from example5.com. Specifically this file. Usage of smpdf: [ERROR] Ungültiger Aufruf =========================================================================== PDF Manipulator (c) 2007 by Franz Buchinger --------------------------------------------------------------------------- Verwendung: pdfm split 5 file.pdf Datei file.pdf in PDFs mit jeweils 5 Seiten splitten pdfm chunk 3 file.pdf Datei file.pdf in max. 3 MB grosse PDFs splitten pdfm burst file.pdf Jede Einzelseite in file.pdf in ein PDF schreiben pdfm merge f1.pdf f2.pdf f1.pdf und f2.pdf in ein PDF mergen pdfm merge output.pdf dir mergt alle PDFs im Verzeichnis dir in die Datei output.pdf pdfm info f1.pdf zeigt Dokumentinformationen (Groesse, Seitenzahl, Titel,..) zu f1.pdf an The sample file we downloaded is as follows: $ pdfinfo chickering04a.pdf Title: chickering04a.dvi Creator: dvips(k) 5.94a Copyright 2003 Radical Eye Software Producer: AFPL Ghostscript 8.0 CreationDate: Fri Oct 8 17:53:18 2004 ModDate: Fri Oct 8 17:53:18 2004 Tagged: no Pages: 44 Encrypted: no Page size: 612 x 792 pts (letter) File size: 386372 bytes Optimized: no PDF version: 1.3 So this sample file has 44 pages and is 386KB in size. Using the following command we can split the PDF up into chunk files that are ~0.1MB (~100KB).
python pdfsm.py chunk 0.1 chickering04a.pdf Which produces the following output: ======== NEUES PDF ======== Seite:0, Groesse: 12696 Seite:1, Groesse: 11515 Seite:2, Groesse: 17209 Seite:3, Groesse: 17411 Seite:4, Groesse: 17060 Seite:5, Groesse: 26303 ======== NEUES PDF ======== Seite:9, Groesse: 31014 Seite:10, Groesse: 27666 Seite:11, Groesse: 18548 ... ... ======== NEUES PDF ======== Seite:40, Groesse: 19059 Seite:41, Groesse: 20912 Seite:42, Groesse: 17685 Seite:43, Groesse: 5362 Our directory now contains the following files: $ ls -l total 1220 -rw-rw-r-- 1 saml saml 74471 May 12 09:23 chickering04a-chunk001.pdf -rw-rw-r-- 1 saml saml 78673 May 12 09:23 chickering04a-chunk002.pdf -rw-rw-r-- 1 saml saml 89259 May 12 09:23 chickering04a-chunk003.pdf -rw-rw-r-- 1 saml saml 92569 May 12 09:23 chickering04a-chunk004.pdf -rw-rw-r-- 1 saml saml 96953 May 12 09:23 chickering04a-chunk005.pdf -rw-rw-r-- 1 saml saml 86390 May 12 09:23 chickering04a-chunk006.pdf -rw-rw-r-- 1 saml saml 90815 May 12 09:23 chickering04a-chunk007.pdf -rw-rw-r-- 1 saml saml 92094 May 12 09:23 chickering04a-chunk008.pdf -rw-rw-r-- 1 saml saml 78909 May 12 09:23 chickering04a-chunk009.pdf -rw-rw-r-- 1 saml saml 386372 May 12 08:30 chickering04a.pdf -rwxrwxr-x 1 saml saml 9324 May 12 07:41 pdfsm.py drwxr-xr-x 4 saml saml 4096 May 12 08:25 pyPdf-1.13 -rw-rw-r-- 1 saml saml 35699 May 12 08:24 pyPdf-1.13.tar.gz I used this "hacked" command to show the stats of the generated PDF files: $ printf "%7s%6s\n" "# pages" "size"; for i in chickering04a-chunk00*; do pdfinfo $i | egrep "File size|Pages"|cut -d":" -f2;done|sed 's/[\t ]\+/ /'|paste - - # pages size 5 74471 bytes 3 78673 bytes 3 89259 bytes 5 92569 bytes 4 96953 bytes 3 86390 bytes 5 90815 bytes 6 92094 bytes 5 78909 bytes
Splitting large PDF into small files
1,481,873,711,000
I am using CentOS 5.6, is there a way to view a "live" time, without constantly executing the date command? Constantly excuting the date command can be quite frustrating and repetitive when checking the time for running cron jobs.
If you want to periodically execute a specific command you can use watch (1). By default the specified program is executed every two seconds. To run date every second just run: watch -n 1 date
CentOS 5.6 live time feature without repeatedly executing date command
1,481,873,711,000
I'm new to Unix/Linux platforms so I was hunting for good books on *nix... I soon realized there is not one book or resource. But to gain confidence on the command line and scripting (the real *nix user) I finally found two books: 1.The Linux Command Line 2.Linux Command Line and Shell Scripting Bible Both seem to be great. The second one is almost double the size of the first one (I don't mind, I like reading books). But which one is more informative and also has better coverage of topics? It's for you people to tell. I looked at the reviews of the second one on Amazon. But I couldn't find the reviews of the first one. So please tell me which one should I get? Thanks.
I have a general rule of thumb when buying any tech book: avoid the ones that weigh more than a phonebook (remember those? ;)). Avoid any book for dummies, unless you think you are a dummy, avoid any listed as a "Bible". The big fat books are a marketing ploy with tons of white space, large font, and excessive examples. I'm sorry I can't answer your question specifically as there are too many good online bash sites. (google for them) The good books used to be published by Prentice Hall. Kernighan, Ritchie, Pike, Aho, etc. all used Prentice Hall. I also found O'Reilly's to be hit or miss; some were excellent, others bad. Look for short and concise, with fewer examples and more exercises left to the reader. My $0.02
Stuck between these two books? [closed]
1,481,873,711,000
For djvused command, there is an option: -e command Cause djvused to execute the commands specified by the option argument commands. It is advisable to surround the djvused commands by single quotes in order to prevent unwanted shell expansion. For example, djvused myfile.djvu -e 'print-pure-txt'. It is quite unusual to me in that a command (here djvused) can run other commands (here by -e option). I was wondering how it is possible? Is this a frequent practice in command line interface? Is this similar to print command used in awk command? The only way I know for a command to be used in another command is: echo `echo hello` Thanks and regards!
It is quite usual; some programs base their working exclusively on this. Some of the more common examples that come to mind are su, sudo and xterm. su -c 'ls -l /root' sudo ls -l /root xterm -e 'top -d 10' It is different from your example echo `echo hello`, where the backquotes are interpreted by the shell and the program does not execute anything itself. Note also the difference between su and sudo. The first takes a single string, and it can be difficult to set up such a string from the user's point of view, for example to expand a variable before it is seen by the command; the second takes a series of strings and is far simpler (there are no quotes in the sudo example). What do they use to implement their internal working? There are essentially two ways: the system() library routine and the exec system call. The first will call a shell and allow for various shell expansions, like su -c 'ls -ld /root/.*', while the second method does not allow such freedom.
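The contrast between the two mechanisms can be sketched without needing root, using sh -c as a stand-in for the system()-style path (like su -c) and env as a stand-in for a direct exec (like sudo):

```shell
# sh -c takes one string and runs it through a shell, so the inner
# shell expands $HOME; env hands its arguments straight to exec,
# so the single-quoted string stays literal:
sh -c 'echo $HOME'    # prints your home directory
env echo '$HOME'      # prints the literal string $HOME
```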
A command used in another command
1,308,650,780,000
I need Gzip to pre-zip some static files for a webserver I'm building. All of the files I need to gzip will be in a folder named .cache. Since some clients may not accept responses that are gzipped, I would like to keep all of the original files when I gzip them. Is there any command I can use to recursively gzip the files in my .cache folder while still keeping the original files? Will this command gzip gzipped files (ones that are postfixed with .gz already) if run on a folder with already gzipped files? While we're on the topic of gzip: I've been looking for a way to gzip text input passed to gzip instead of files. I came up with postfixing a dash on the command (like gzip -c -), but I'm not sure if that will work or how I use it. Can anyone explain?
Use the option -c to output the result to stdout. gziping all files in .cache: for i in .cache/*; do gzip -c "$i" > "$i.gz"; done EDIT: To gzip them again and not gzip the gziped files check the suffix: for i in .cache/*; do [ "${i:(-3)}" == ".gz" ] || gzip -c "$i" > "$i.gz"; done So only files that not end in .gz will be gziped.
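Two hedged additions, assuming a reasonably recent gzip (1.6 or later): the -k/--keep option makes the copy-preserving loop unnecessary, and for the second part of the question, gzip compresses its stdin when given no file operand (the trailing - is just an explicit way of saying the same thing):

```shell
# -r recurses into the directory, -k keeps the originals; gzip
# normally refuses to compress a file that already has a .gz suffix:
mkdir -p cache-demo && printf 'hello\n' > cache-demo/page.html
gzip -rk cache-demo
ls cache-demo                            # page.html and page.html.gz

# Compressing text piped in, no file needed:
printf 'hello\n' | gzip -c | gunzip -c   # round-trips to: hello
```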
Recursively Gziping Files (and keep original files) & Gziping Text Input
1,308,650,780,000
I want to manipulate a text file. It contains many blocks of the form CLASS ...some stuff ... END I want to duplicate each of these blocks and add and remove a line of its content. Can I script that?
The natural tools for this are awk and Perl (assuming you want to script: for a once-off, the natural tool is an interactive editor). Here's an awk script that duplicates all CLASS…END blocks (no balancing supported: each CLASS matches the next END), except that foo lines are omitted from the second copy. awk ' /^CLASS$/ { store = 1; } # start storing store && ! /^foo$/ { hold = hold ORS $0; } # if storing, maybe save line /^END$/ { $0 = $0 hold; # append hold copy to current line store = 0; hold = ""; # end of block } 1 { print; } # print original line, with hold prepended if at end of block ' Here's a sed solution; don't take it too seriously. Don't expect it to behave if the CLASS/END lines are not in strict alternation. sed -e '/^CLASS$/,/^END$/!b' \ -e '/^CLASS$/{' -e 'h' -e 'b' -e '}' \ -e '/^foo$/!H' \ -e '/^END$/G'
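Trying the awk version on a four-line sample makes the behaviour concrete (here foo is the line being dropped from the copy):

```shell
# Each CLASS...END block is printed as-is; a second copy accumulated
# in hold is appended after the END line, minus the foo line:
printf 'CLASS\nfoo\nbar\nEND\n' |
awk '
  /^CLASS$/          { store = 1 }
  store && ! /^foo$/ { hold = hold ORS $0 }
  /^END$/            { $0 = $0 hold; store = 0; hold = "" }
  { print }
'
```

This prints CLASS, foo, bar, END followed by the duplicate block CLASS, bar, END.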
How can I manipulate the content of a file, by duplicating and changing some parts?
1,308,650,780,000
I have files with the following format: File name is file.txt chr - seq1 NZ_JAHWGH010000010.1 0 60562 green_a4 chr - seq3 NZ_JAHWGH010000012.1 0 466573 green_a4 chr - seq5 NZ_JAHWGH010000013.1 0 125526 green_a4 chr - seq6 NZ_JAHWGH010000014.1 0 717625 green_a4 chr - seq7 NZ_JAHWGH010000015.1 0 209757 green_a4 chr - seq8 NZ_JAHWGH010000016.1 0 55318 green_a4 chr - seq9 NZ_JAHWGH010000017.1 0 467034 green_a4 chr - seq50 NZ_CAJGBF010000017.1 0 83173 green_a4 chr - seq51 NZ_CAJGBF010000018.1 0 76510 green_a4 chr - seq52 NZ_CAJGBF010000019.1 0 67820 green_a4 chr - seq54 NZ_CAJGBF010000021.1 0 61770 green_a4 chr - seq55 NZ_CAJGBF010000022.1 0 56876 green_a4 chr - seq56 NZ_CAJGBF010000023.1 0 50411 green_a4 chr - seq57 NZ_CAJGBF010000024.1 0 49535 green_a4 I want to change the name of row third column as seq1 if the name in column four starts with NZ_JAHWGH and seq2 if name starts with NZ_CAJGBF. I want output like this from the same file: chr - seq1 NZ_JAHWGH010000010.1 0 60562 green_a4 chr - seq1 NZ_JAHWGH010000012.1 0 466573 green_a4 chr - seq1 NZ_JAHWGH010000013.1 0 125526 green_a4 chr - seq1 NZ_JAHWGH010000014.1 0 717625 green_a4 chr - seq1 NZ_JAHWGH010000015.1 0 209757 green_a4 chr - seq1 NZ_JAHWGH010000016.1 0 55318 green_a4 chr - seq1 NZ_JAHWGH010000017.1 0 467034 green_a4 chr - seq2 NZ_CAJGBF010000017.1 0 83173 green_a4 chr - seq2 NZ_CAJGBF010000018.1 0 76510 green_a4 chr - seq2 NZ_CAJGBF010000019.1 0 67820 green_a4 chr - seq2 NZ_CAJGBF010000021.1 0 61770 green_a4 chr - seq2 NZ_CAJGBF010000022.1 0 56876 green_a4 chr - seq2 NZ_CAJGBF010000023.1 0 50411 green_a4 chr - seq2 NZ_CAJGBF010000024.1 0 49535 green_a4 I tried these two commands but they didn't work: awk 'BEGIN{FS=OFS=" "}($4 == /^NZ_JAHWGH/){$3==seq1}1' file.txt awk 'BEGIN{FS=OFS=" "} {if ($4 ~ /^NZ_JAHWGH/) $3=seq1}1' file.txt
Your first awk attempt: awk 'BEGIN{FS=OFS=" "}($4 == /^NZ_JAHWGH/){$3==seq1}1' file.txt fails because $3==seq1 is a test of whether $3 is exactly equal to the value of the variable seq1. What you wanted is = instead of == so you are setting the value, not testing it, and "seq1" to indicate that this is a string and not a variable. Next, to check against a regular expression, you need ~ /regex/, not == /regex/. Your second attempt fails for the same reasons: you need "seq1" to have a string and you can't use == that way. Also, since both FS and OFS default to a space, your BEGIN block isn't needed. Putting all that together, this command, which is the same idea you were trying, should work as expected: $ awk '($4 ~ /^NZ_JAHWGH/){$3="seq1"} ($4 ~ /^NZ_CAJGBF/){$3="seq2"}1' file.txt chr - seq1 NZ_JAHWGH010000010.1 0 60562 green_a4 chr - seq1 NZ_JAHWGH010000012.1 0 466573 green_a4 chr - seq1 NZ_JAHWGH010000013.1 0 125526 green_a4 chr - seq1 NZ_JAHWGH010000014.1 0 717625 green_a4 chr - seq1 NZ_JAHWGH010000015.1 0 209757 green_a4 chr - seq1 NZ_JAHWGH010000016.1 0 55318 green_a4 chr - seq1 NZ_JAHWGH010000017.1 0 467034 green_a4 chr - seq2 NZ_CAJGBF010000017.1 0 83173 green_a4 chr - seq2 NZ_CAJGBF010000018.1 0 76510 green_a4 chr - seq2 NZ_CAJGBF010000019.1 0 67820 green_a4 chr - seq2 NZ_CAJGBF010000021.1 0 61770 green_a4 chr - seq2 NZ_CAJGBF010000022.1 0 56876 green_a4 chr - seq2 NZ_CAJGBF010000023.1 0 50411 green_a4 chr - seq2 NZ_CAJGBF010000024.1 0 49535 green_a4
Renaming column based on matching of second column in a text file
1,308,650,780,000
Assume I have a directory with 5,000 files. each with a name such as: 1.json 2.json 3.json .. 4000.json 4001.json what command-line utility can I use to grep for a string from only the files 1.json through 2000.json?
Bash, Ksh, Zsh: grep pattern {1..2000}.json Were the file names zero-padded: grep pattern {0001..2000}.json
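A quick way to see what the shell hands to grep; the range expands before the command runs, and a zero-padded range (bash 4 or later) keeps the padding:

```shell
echo {1..3}.json     # prints: 1.json 2.json 3.json
echo {01..03}.json   # prints: 01.json 02.json 03.json
```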
How to grep from file names with number in range
1,308,650,780,000
I know that sudo commandX asks for the user's password and if it is valid it executes commandX as root; so far all is ok. It is mandatory for some commands, such as apt, chown, chmod etc. It makes sense for security reasons. Now consider the following scenario/situation: sudo commandA ... sometime sudo commandB Because commandB requires root permission it would run without asking for the password if sudo's timeout has not expired yet; otherwise the password must be typed again. For shell scripting purposes - if it is possible - is there a command to retrieve/show the data requested for each one of the following scenarios? the sudo timeout - for example N minutes/seconds; it should come from /etc/sudoers the sudo time remaining (prior to expiry). For example if the timeout is 5mins, it would return 4 or 3.5 whether the sudo timeout expired or not So I am assuming there is a total of 3 commands or 1 command with special parameters to accomplish the goal of the scenarios
You can get case 3 by doing sudo -n true 2>/dev/null -- if it has nonzero status the cached authentication has expired (or been removed).
sudo: What command(s) use to know its timeout, time remaining and if the timeout expired?
1,308,650,780,000
I installed Debian for CLI purposes. How can I delete things related to the GUI without the system crashing? (Because I do not need GUI packages and want to free up memory.)
This should delete most GUI applications and libraries: sudo apt purge --auto-remove libx11-6 libwayland-client0 How I can delete things related to the GUI without the system crashing Linux (distros) will not crash without the GUI. The only two things required for Linux to run are the kernel and the init process. The init process technically doesn't do anything itself, so to be completely correct, you need three pieces: the kernel, the init process and something else which actually does something, e.g. busybox. busybox itself may be the init process, so we're back to just two components, the kernel and some process doing something.
Debian - How I can delete things related to the GUI?
1,308,650,780,000
I got into a situation that I need to rename lots of files in the form of: file.csv file_1.csv file_2.csv file_3.csv file_4.csv file_5.csv file_6.csv file_7.csv file_8.csv to a better order, i.e.: file_1.csv file_2.csv file_3.csv file_4.csv file_5.csv file_6.csv file_7.csv file_8.csv file_9.csv I.e., I can manually rename the first file, but for the remaining "_#" files, I need to renumber them with +1. I tried to use rename -n -v 's/_(.*)./.(\1+1)./' but got file_(1+1).csv file_(2+1).csv etc. Any easy way to batch rename the "_#" files please? PS, better as file_01.csv . . . file_09.csv file_10.csv . . . if possible. UPDATE: Thanks to cas' answer. Because I'll rename them to zero-padded names, there won't be conflicts in file names, so for me the command can be simplified as: touch file.csv file_{1..9}.csv $ rename -v 's/^(file_)(\d+)(\.csv)$/$1 . (sprintf "%02i", $2 + 1) . $3/e' file_* file_1.csv renamed as file_02.csv file_2.csv renamed as file_03.csv file_3.csv renamed as file_04.csv file_4.csv renamed as file_05.csv file_5.csv renamed as file_06.csv file_6.csv renamed as file_07.csv file_7.csv renamed as file_08.csv file_8.csv renamed as file_09.csv file_9.csv renamed as file_10.csv Note the last two file names: file_09.csv file_10.csv
You need to use perl's /e regex modifier to cause it to evaluate the right-hand-side of the s/// operation as a perl expression. You also need to sort the filenames in reverse numerical order so that the highest-numbered filenames are renamed before the lower-numbered filenames (otherwise there will be filename collisions - by default, unless you use -f to force it, rename will refuse to overwrite existing files). To do this, I'll use GNU find with -print0 and GNU sort with -z for NUL-terminated input, -r and -V for a reverse version (i.e. "natural") sort. -t _ and -k 2 options are also used to sort from the second field. rename's -d option is used to make it rename the filename portion of a pathname only, and -0 to make it take a NUL-separated list of files on stdin. e.g. $ touch file.csv file_{1..8}.csv $ find . -name 'file_*.csv' -print0 | sort -z -t _ -k2 -r -V | rename -d -0 's/^(file_)(\d+)(\.csv)$/$1 . ($2 + 1) . $3/e' $ mv file.csv file_1.csv $ ls -l total 5 -rw-r--r-- 1 cas cas 0 Aug 21 15:22 file_1.csv -rw-r--r-- 1 cas cas 0 Aug 21 15:22 file_2.csv -rw-r--r-- 1 cas cas 0 Aug 21 15:22 file_3.csv -rw-r--r-- 1 cas cas 0 Aug 21 15:22 file_4.csv -rw-r--r-- 1 cas cas 0 Aug 21 15:22 file_5.csv -rw-r--r-- 1 cas cas 0 Aug 21 15:22 file_6.csv -rw-r--r-- 1 cas cas 0 Aug 21 15:22 file_7.csv -rw-r--r-- 1 cas cas 0 Aug 21 15:22 file_8.csv -rw-r--r-- 1 cas cas 0 Aug 21 15:22 file_9.csv This could be simplified a little, but I've made the regex explicitly look for and capture file_, one-or-more digits, and the .csv extension to avoid any possibility of renaming files it shouldn't. To make the file numbering zero-padded, you can use the sprintf function. e.g. ... | rename -0 -d -v 's/^(file_)(\d+)(\.csv)$/$1 . (sprintf "%02i", $2 + 1) . 
$3/e' Reading filenames from file handle (GLOB(0x555555905960)) ./file_8.csv renamed as ./file_09.csv ./file_7.csv renamed as ./file_08.csv ./file_6.csv renamed as ./file_07.csv ./file_5.csv renamed as ./file_06.csv ./file_4.csv renamed as ./file_05.csv ./file_3.csv renamed as ./file_04.csv ./file_2.csv renamed as ./file_03.csv ./file_1.csv renamed as ./file_02.csv
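For comparison, here is a plain-bash sketch of the same zero-padded renumbering, with no perl rename needed: counting downwards avoids the filename collisions that forced the reverse sort above (it assumes the files match the question's names exactly):

```shell
# Recreate the files from the question, then shift each _N to a
# zero-padded _(N+1), highest numbers first so nothing is clobbered:
touch file.csv file_{1..9}.csv
for i in 9 8 7 6 5 4 3 2 1; do
    mv "file_$i.csv" "$(printf 'file_%02d.csv' $((i + 1)))"
done
mv file.csv file_01.csv
ls file_*.csv    # file_01.csv ... file_10.csv
```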
Re-number and rename
1,308,650,780,000
I'd like to be able to see if, for example, show-all-if-unmodified is enabled in the current session.
You seem to be looking for bind -V: $ bind -V | grep show-all-if-unmodified show-all-if-unmodified is set to `off' As far as I can see, no variable name includes the whole name of a different variable (e.g. there is no show-all variable that will also match show-all-if-unmodified when used as a non-anchored pattern), nor any special character in the context of regular expressions. Hence it should be safe to define bind -V | grep as a shell alias or function.
How to list readline variables with their current value
1,308,650,780,000
I would like to use the command line to put text in the GUI clipboard so that it can, for example, be pasted into a graphical web browser's text input field. I am using Kubuntu 20.04. I tried as an example uptime | xclip, and when I press the middle mouse button, the uptime output is pasted. However, when I press Ctrl-Shift-V on the command line, or Ctrl-V in a GUI application, the uptime text is not pasted; instead, the previously copied text is pasted. I read that there is a distinction between the selection and the clipboard. I presume xclip is copying into the selection. How can I copy into the clipboard?
There are two sets of commands that can do this, xclip and xsel, and they can be used interchangeably. In order to use the clipboard used by graphical applications (rather than the terminal selection buffer), an option must be specified. To copy into the clipboard: uptime | xclip -selection clipboard # or uptime | xclip -sel clip # or uptime | xsel -ib To paste from the clipboard on the command line: xclip -o -selection clipboard # or xclip -o -sel clip # or xsel -ob If typing on the command line, xsel is faster to type; if assigning to an alias, or including in a script, then the verbose xclip form is more intention-revealing.
How to copy into graphical clipboard on command line in KDE? [duplicate]
1,308,650,780,000
I would like to convert a float to a ratio or fraction. Do we have the option to convert a float like 1.778 to a ratio such as 16:9 or 16/9 in bash, in a fashion similar to Python's fractions module (Fraction(1.778).limit_denominator(100))?
Pedantic or not, if our man is only looking at 3 decimals of precision.... Breaking out the good old awk hammer for the equally good old fashioned lowest denominator, rather than the high falutin' algorithm, just find the lowest error and denominator echo "1.778" | awk 'BEGIN{en=1; de=101; er=1}{ for (d=100; d>=1; d--) {n=int(d*$1); e=n/d-$1; e=e*e; if (e<=er && d<de){er=e; de=d; en=n}} print en":"de, en/de}' So... 16:9 1.77778 Something like this could equally be done in pure bash with the appropriate multiplier for the fraction. If we are having a race: real 0m0.004s user 0m0.001s sys 0m0.003s
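And a pure-bash sketch of the kind the answer alludes to, assuming the input has exactly three decimal places so the value can be scaled to an integer and everything stays in shell arithmetic:

```shell
# Scale 1.778 to 1778, then search denominators 1..100 for the
# smallest squared error, much like the awk loop:
float=1.778
scaled=${float/./}                  # 1778 (assumes 3 decimals)
best_n=1 best_d=1 best_err=$((10 ** 9))
for ((d = 1; d <= 100; d++)); do
    n=$(( (scaled * d + 500) / 1000 ))   # nearest integer to d*float
    err=$(( n * 1000 - scaled * d ))
    err=$(( err * err ))
    if (( err < best_err )); then
        best_err=$err best_n=$n best_d=$d
    fi
done
echo "$best_n:$best_d"              # prints: 16:9
```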
How can I convert a float to ratio?
1,308,650,780,000
I have a bash script runs a command in an infinite while loop. The script looks something like: while : do python3 my_program.py done Is there a simple way to terminate this while loop from the terminal in a way that doesn't interrupt the python process? ie the python process will finish, then the loop terminates. Stated another way, perhaps: Is there a while loop whose termination condition is some terminal input?
You can trap SIGINT in your script and set a loop exit condition. Running your program with setsid prevents it from receiving SIGINT from CTRL+C #! /bin/bash STOPFILE=/tmp/stop_my_program rm -f $STOPFILE trap "touch $STOPFILE" INT while [ ! -f $STOPFILE ] do setsid python3 my_program.py done rm -f $STOPFILE
Kill script, but allow currently executing program to exit
1,308,650,780,000
I have to use usermod -G groupname username every time to add a user to a group. Isn't there any way to do it in a single line, like usermod -G groupname user1,user2,user3
You can set the member list of a group with one command: gpasswd -M user1,user2,user3,... groupname But to add using this, you'll need to get the existing list: gpasswd -M "$(getent group groupname | awk -F: '$4 {print $4","}')"user1,user2,user3,... groupname But it's just easier to use xargs or loops manually: for u in user1 user2 ... do gpasswd -a "$u" groupname done Or: $ cat file user1 user2 ... $ xargs -a file -n 1 gpasswd groupname -a
How can we add multiple users to a group by single command in Linux?
1,308,650,780,000
How do I find all folders containing a . in the folder name? I've tried the following, but it listed all folders, not what I wanted; find . -maxdepth 2 -type d -ipath "." I seem to have a lot of folders named something like this; Brain.Dead.1990.1080p.BluRay.x264-HD4U[rarbg] and that's just plain ugly for me. I'd like to list them and then edit their names to something more readable. Any clues?
Another find without regex: find . -maxdepth 2 -type d -name '*.*' ! -name '.*'
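A quick check in a scratch tree (directory names borrowed from the question); the ! -name '.*' part is what keeps hidden directories out, since * in find's pattern can also match the empty string before a leading dot:

```shell
mkdir -p demo/Brain.Dead.1990.1080p demo/.config demo/plain
find demo -mindepth 1 -maxdepth 1 -type d -name '*.*' ! -name '.*'
# prints only: demo/Brain.Dead.1990.1080p
```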
finding folders containing but not starting with a . (dot)
1,308,650,780,000
How to merge 2 columns in a file alternately? See the example below. inputfile: sam jam tommy bond expected_output: sam jam tommy bond
Simply with awk: awk '{ print $1 ORS $2 }' file $1 and $2 - are the 1st and 2nd field respectively ORS - Output Record Separator. The initial value of ORS is the string "\n" (i.e., a newline character) The output: sam jam tommy bond
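An alternative sketch without awk: let xargs re-chunk the input one word per line. This is fine for simple data like the example, though xargs interprets quotes and backslashes specially, so the awk form is safer in general:

```shell
printf 'sam jam\ntommy bond\n' | xargs -n 1
# prints:
# sam
# jam
# tommy
# bond
```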
How to merge 2 columns in a file alternatively?
1,308,650,780,000
I'm using the rlwrap utility inside the following shell alias: alias gp='rlwrap git push' The purpose of this gp alias is to be able to use basic line editing commands such as C-a or C-e to get to the beginning or end of the command line, while I'm using the git push command. I've also configured rlwrap to write the input history of every command I use it for, in a dedicated file (~/.config/rlwrap/<command>_history): export RLWRAP_HOME="${HOME}/.config/rlwrap" When I use the gp alias, I have to give my credentials, username and password, and rlwrap saves them in ~/.config/rlwrap/git_history. Is it possible to let rlwrap save the input history of all the commands I use, except the passwords, like for example in the gp alias?
I have to give my credentials, username and password, and rlwrap saves them in ~/.config/rlwrap/git_history Are you certain rlwrap indeed saves your password in its history file? By design, input that isn't echoed back is never put in the history list (in such a case, rlwrap will echo your keystrokes as ****) I checked it, and, on my system at least, this is what happens with git push as well. If you really see your password in the history file, please file a bug at rlwrap's GitHub site
How to prevent `rlwrap` from saving a password in an input history file?
1,308,650,780,000
Suppose I entered the following thing into terminal: wgets "link" I will get the response: No command 'wgets' found, did you mean: Command 'wget' from package 'wget' (main) I made a mistake, and the terminal warned me. Is there a command that I can type after the terminal warned me, so that then it will execute the command above with what it thought it was? For example: ->wgets "link" ->No command 'wgets' found, did you mean: Command 'wget' from package 'wget' (main) ->yes (this command I am looking for ... is there one?) -> executing wget "link"
Switch to zsh (installed by default on macOS and available as a package on all major Linux distributions, *BSD, and software collections for other Unix-like operating systems). It has autocorrect for command names. % wgets zsh: correct 'wgets' to 'wget' [nyae]? y wget: missing URL …
Did you mean this command instead? (how to reply to this)
1,308,650,780,000
I'm looking to use ex mode of vim for a script I'm trying to write, but I can't seem to figure out the syntax that will allow me to write multiple commands. My code looks something like this: ex -c 'normal! 2gg19|^V49gg59|y' geom.inc So this just enters into ex mode for the file geom.inc, highlights a block of text, and yanks that text block. All that I want to add is that it will close the file once it has done this, but I can't seem to figure out how to include the additional command to close the file. I know in general "|" is used to string together commands, but no combination that I have tried has worked. It typically causes it to think one of the commands is another file.
I was making silly errors. As @Jeff Schaller suggests above, multiple -c options allow multiple commands. So, my working example looks like this. ex -c 'normal! 2gg19|^V49gg59|y' -cwq geom.inc Where I enter ex mode ex, pass a command with -c, define a block normal! 2gg19|^V49gg59|y (where normal! allows the use of regular vi commands, 2gg19| means move to the 2nd row and 19th column, ^V enters visual block mode, 49gg59| moves to the 49th row and 59th column, and y yanks the block), and then pass another command to write and quit -cwq.
How are multiple commands given for ex from the command line?
1,308,650,780,000
I am sorting files which have gene names and their expression values. All files have the exact same number of rows; however, after sorting there is a difference in the positioning of certain genes. This is very weird. Below are the sorted versions of two such files. For example: Cxx1c 25.1695 Cxxc1 15.2228 Cxxc4 0.952061 Cxxc5 3.13309 **Cyb5 157.426** Cyb561 0.425933 Cyb561a3 9.55082 Cyb561d1 4.00422 Cyb561d2 3.04411 Cyb5b 16.7622 Cyb5d1 7.25191 Cyb5d2 2.85109 Cyb5r1 15.2511 Cyb5r2 0.48748 Another file has this sorting. Basically, in this file Cyb5 is present after Cyb561d2 gene. How can I have exactly the same sorting order? Is there any parameter to do such a thing? Cxx1c 44.9795 Cxxc1 19.0346 Cxxc4 1.17429 Cxxc5 2.71589 **Cyb561 7.11003** Cyb561a3 1.97601 Cyb561d1 2.13004 Cyb561d2 2.03376 Cyb5 64.074 Cyb5b 14.5329 Cyb5d1 12.0212 Cyb5d2 1.47763 Cyb5r1 10.5463 Cyb5r2 0 Here is my code which generates the above sorted file: for i in *.txt; do sort -d $i >$i.sort done
You are currently sorting on the entire line, but it seems like you only want to sort on the first column. With the way your command is currently written, the columns will basically be concatenated together, eg: Cyb5 157.426 -> Cyb5157426 Cyb561 0.425933 -> Cyb5610425933 vs Cyb561 7.11003 -> Cyb561711003 Cyb5 64.074 -> Cyb564074 To sort on just the first column, you'll need to use the following command: sort -d -k1,1
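The effect in miniature, with one pair of lines from each file; restricting the key with -k1,1 compares only the gene names, so both files come out in the same order:

```shell
printf 'Cyb561 0.425933\nCyb5 157.426\n' | sort -k1,1
printf 'Cyb561 7.11003\nCyb5 64.074\n'  | sort -k1,1
# both put the Cyb5 line first, then Cyb561
```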
Sorting issues in Linux
1,308,650,780,000
I have a command (php file) which takes, as an argument, the location of a file. eg. $ php ./ScriptName.php /my/file/location.txt Sometimes I only need one or two words, so I don't really want to create the location.txt file just so I can reference it with the next command - I'd rather pipe it with the command somehow. For example, consider the contents of location.txt from the above example being just: mywords Is there a way I can rewrite that original commandline argument to just provide the contents of a "virtual" file?
You can use a variant of the /dev/fd/0 trick to pass a string to a script which expects a file name: php ./ScriptName.php php://fd/0 <<<'mywords' For example, script.php contains: <?php $handle=fopen($argv[1],"r"); echo "Read: ".fgets($handle); fclose($handle); ?> Running: php script.php php://fd/0 <<<'Some text' outputs: Read: Some text
inline heredoc instead of file
1,308,650,780,000
I recently installed the brew command on my Debian machine to install tldr man pages on my system. The command looks useful for installing programs that aren't packaged by Debian, also it does not require sudo to install packages . However, there is a limitation: only a few packages can be installed through the command brew. Is it possible to configure brew to install packages from Debian repositories?
Is it possible? Yes. Both programs are open source. Is it convenient? Not really. Why? Package managers work more or less like this: They track packages installed on your system (and their versions) To do this, they specify their own package format (e.g. .deb), and use these packages as instructions on how to install the program and how to track it They also track dependencies (e.g. "this program needs openssl to work!") This is why having a system that would use several package managers isn't the best idea: Each package manager would have to be informed about every package being installed (e.g. brew would have to know that you installed firefox, and apt would have to know that you installed tldr) Each package manager would have to resolve dependencies from other package managers (e.g. "Brew: This program needs ncurses, but apt already installed ncurses, so I don't need to pull it!"). You see, the problem with 2 is that package managers are abstractions over the underlying repositories. People like the Debian folks choose the packages they want users to use, and they make them available to others. However, they also select these packages so that the system is consistent; they want the smallest number of packages to offer the most functionality. Why install ncurses versions 1, 2, and 3, when you can get everything to work with version 2? The first problem is also bad news. The package managers would have to inform each other about what they do, or they could collide (brew wouldn't know that ncurses is already installed). So why is it hard? Package managers would need to cooperate tightly Package managers would have to have a strict policy about what to do when they can't agree on a package Package managers would have to be able to work almost interchangeably, with the only visible difference being the available programs Package managers would have to be able to track each others' repositories in case of updates. 
This effectively means you would need a package manager that consists of the two package managers. You would need a new program. So what can I do? First of all, I would ask myself "Why do I want to do this?". Honestly, your distribution should supply you with plenty of packages. If you aren't happy with how many packages you have, you might consider switching to another distribution that has more of the packages you need. If you are really desperate to get brew to work, I would propose the following solution, although I'm not sure if this is fully possible: Grab the sources of brew. Learn the brew recipe format. Write a program that automatically translates recipes to Debian packages. Modify brew so that whenever you run it, it calls the program to translate recipes to .deb packages/searches for the programs in your distro's repos, then calls apt to install the package. Making such modifications would probably take a lot of time and wouldn't be easy. I suggest changing distros or sticking to your package manager instead.
Is it possible to configure `brew` to install packages from Debian repositories?
1,308,650,780,000
Let's say I have some CLI one-liner, which outputs some lines of text with space-separated parts. Those parts should logically be columns, but because of text width it doesn't look so. How could I automatically format such output to make it pretty columns? For example, I have output like Alice param1 param2345 32768 50 16 Bob param2345 param1 512 10 1 _debug_user_ param0 param0 0 0 0 And I want to make it like Alice param1 param2345 32768 50 16 Bob param2345 param1 512 10 1 _debug_user_ param0 param0 0 0 0
With Linux column(1): column -t <file.txt With BSD rs(1): rs 0 6 <file.txt With awk(1): awk 'FNR==NR { for(i=1; i<NF; i++) if(length($i)>w[i]) w[i]=length($i) } FNR!=NR { for(i=1; i<NF; i++) $i=sprintf("%-" (w[i]+1) "s", $i); print }' \ file.txt file.txt
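The two-pass awk variant above can be exercised on a small sample; a sketch (the file name and sample data are made up — the first pass records the widest entry per column, the second pads each field):

```shell
# Build a sample, then run the awk from the answer on it twice
# (same file given as both arguments: pass 1 measures, pass 2 prints).
printf 'Alice param1 32768\nBob p 512\n' > /tmp/cols.txt
awk 'FNR==NR { for(i=1; i<NF; i++) if(length($i)>w[i]) w[i]=length($i) }
     FNR!=NR { for(i=1; i<NF; i++) $i=sprintf("%-" (w[i]+1) "s", $i); print }' \
    /tmp/cols.txt /tmp/cols.txt
# Alice  param1  32768
# Bob    p       512
```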
How to adjust cli output into pretty columns
1,308,650,780,000
Searching through passed Logfile with something like this: cat /path/to/logfile | grep -iEw 'some-ip-address-here|correspondig-mac-adress-here' This gives me all the passed log lines until now so I can see what has been. Now I also want to see what's going on so I need to exchange cat with tail -f giving me this: tail -f /path/to/logfile | grep -iEw 'some-ip-address-here|correspondig-mac-adress-here'
You can use !!:* to refer to all the words but the zeroth of the last command line. !! refers to the previous command, : separates the event specification from the word designator, * refers to all the words but the zeroth. This is from the HISTORY EXPANSION section of bash(1). wieland@host in ~» cat foo | grep bar bar wieland@host in ~» tail -f !!:* tail -f foo | grep bar bar You could also use quick substitution where ^string1^string2^ repeats the last command, replacing string1 with string2: wieland@host in ~» cat foo | grep bar bar wieland@host in ~» ^cat^tail -f tail -f foo | grep bar bar
Is there a shortcut to rerun a command with arguments from last command (not cd, ls or echo)
1,308,650,780,000
In many cases after I find a file using the find command I then want to open the file or cat it or maybe print it. How can I operate on the result from find? For example, : find . -name "myfile.txt" ./docs/myfile.txt : find . -name "myfile.txt" | less does not work because it feeds the string "./docs/myfile.txt" to less, not the contents of the file at the specified path.
Similar to @coffeeMug, this is the more up-to-date way of doing this, as it is apparently faster: find . -name "*.log" -exec ls -l '{}' + I'll also point you to CommandLineFu, which is always helpful with these things.
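The same -exec idea covers the cat case from the question; a sketch with throwaway paths (directory and contents are invented for the demo):

```shell
# Create a sample tree, then let find hand the matched path to cat.
mkdir -p /tmp/findex/docs
printf 'hello from myfile\n' > /tmp/findex/docs/myfile.txt
find /tmp/findex -name "myfile.txt" -exec cat '{}' +
# hello from myfile
```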
Access a file located with find
1,308,650,780,000
I use CentOS 7 and I installed Anaconda and some tools; after that, some basic commands like clear do not work. [zhilevan@localhost ~]$ clear bash: clear: command not found... When I echo $PATH I see the results below: [zhilevan@localhost ~]$ echo $PATH /usr/lib64/qt-3.3/bin:/home/zhilevan/perl5/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin:/home/zhilevan/.local/bin:/home/zhilevan/bin Also, when I try sudo yum install which, it says it is already installed. I also tried export PATH=$PATH:/bin:/usr/local/bin but that did not fix it. Where is the problem and how can I solve it?
It looks as though some of your commands have been modified/removed outside of yum. You need to reinstall the missing commands like so: yum reinstall which You can give multiple packages as you identify them: yum reinstall which clear If you find that lots of commands have been removed, it may be easier to reinstall your whole system.
Bash commands not found
1,308,650,780,000
I want to determine the type and speed of my CPU, so I executed the lscpu command. However, I have some confusion: from the output I can't really determine what my CPU type is. Is my speed 1600 MHz, is that right? I used cat /proc/cpuinfo and I got that my model name is Intel(R) Core(TM)2 Duo CPU. Is that my CPU type?
I never knew about the lscpu command. The man page says it's based on the first CPU only. The "rate" may "change". It seems lscpu is under change -- newer versions have a -V (--version) option; on my Ubuntu 15.04 system it says: leisner@t7400:/tmp$ lscpu --version lscpu from util-linux 2.25.2 Model name: Intel(R) Xeon(R) CPU E5430 @ 2.66GHz Stepping: 6 CPU MHz: 2667.000 CPU max MHz: 2667.0000 CPU min MHz: 2000.0000 On another system it just says: CPU MHz: 800.000 But cpufreq-info says: current CPU frequency is 800 MHz. cpufreq stats: 2.40 GHz:0.02%, 2.40 GHz:0.00%, 2.30 GHz:0.00%, 2.20 GHz:0.00%, 2.10 GHz:0.00%, 1.90 GHz:0.00%, 1.80 GHz:0.00%, 1.70 GHz:0.04%, 1.60 GHz:0.00%, 1.50 GHz:0.00%, 1.40 GHz:0.00%, 1.30 GHz:0.01%, 1.10 GHz:0.00%, 1000 MHz:0.00%, 900 MHz:0.01%, 800 MHz:99.91% (5851) (this is an 8 core i7).
Type and the speed of CPU using the lscpu command
1,308,650,780,000
So here's the deal, my girlfriend wants me to transfer all of her photos off from her iPhone onto her laptop (on which we are running Ubuntu 14.04). All of the dedicated programs I tried to do this with did not work, so I just copied the entire DCIM/ folder, which contains all of the pictures on her iPhone. My predicament is that DCIM/ is divided into four folders, which then contain an individual folder for each of her photos. In each of those folders, every picture has the same name, '5003.jpg'. I want to move and rename (probably with ascending numerical names, e.g. 0001.jpg, 0002.jpg, etc.) all of these files to one folder, say ~/Pictures/iPhone/, using the command line. So far all i've managed to do is compile a text file of all the individual paths for each file. Some example path names: /home/jennie/Pictures/DCIM/101APPLE/IMG_1703.JPG/5003.JPG /home/jennie/Pictures/DCIM/101APPLE/IMG_1431.PNG/5003.JPG /home/jennie/Pictures/DCIM/101APPLE/IMG_1933.JPG/5003.JPG /home/jennie/Pictures/DCIM/101APPLE/IMG_1388.JPG/5003.JPG /home/jennie/Pictures/DCIM/101APPLE/IMG_1954.JPG/5003.JPG /home/jennie/Pictures/DCIM/101APPLE/IMG_1524.JPG/5003.JPG /home/jennie/Pictures/DCIM/101APPLE/IMG_1897.PNG/5003.JPG /home/jennie/Pictures/DCIM/101APPLE/IMG_1582.PNG/5003.JPG /home/jennie/Pictures/DCIM/101APPLE/IMG_1007.PNG/5003.JPG /home/jennie/Pictures/DCIM/101APPLE/IMG_1502.JPG/5003.JPG
#!/bin/bash appledir="$HOME/Pictures/DCIM/101APPLE" jpgname="5003.JPG" for dir in "$appledir"/* do if [[ -d "$dir" ]] then newfile="$appledir/${dir##*/}" mv "$dir"/5003.JPG "$newfile.tmp" && rmdir "$dir" && mv "$newfile.tmp" "$newfile" fi done With an initial tree as this: $ tree Pictures/ Pictures/ └── DCIM └── 101APPLE ├── IMG_1002.JPG │   └── 5003.JPG ├── IMG_1003.JPG │   └── 5003.JPG └── IMG_1004.JPG └── 5003.JPG After executing the script (./script.sh), this will be the tree: $ tree Pictures/ Pictures/ └── DCIM └── 101APPLE ├── IMG_1002.JPG ├── IMG_1003.JPG └── IMG_1004.JPG Edit: To rename *.PNG files back to *.JPG, use: for name in "$HOME/Pictures/DCIM/101APPLE"/*.PNG do mv -i "$name" "${name%PNG}JPG" done
Moving and renaming hundreds of .jpg files, all named 5003.jpg
1,308,650,780,000
Is there a way to see a different clock or time on the CLI? The thing is, I'm on UTC+5:30 but at times need to know the time in different time zones. If there is a CLI way in which this can be done, it would be helpful.
As several others have mentioned, the TZ environment variable is what affects the output of date. However, in most cases you won't want to leave TZ changed; you will just want to see the time in that timezone, leaving your environment unaffected afterwards. For that purpose, the best tool to use is env. From the env man page: NAME env - run a program in a modified environment SYNOPSIS env [OPTION]... [-] [NAME=VALUE]... [COMMAND [ARG]...] So in this case, the command you want would look something like: env TZ='EST' date There are other formats acceptable for the timezone—many of them. If you have particular requirements, it's a good idea to read man timezone which explains the valid timezone formats.
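A sketch of the one-shot form. The -d flag used here to pin the timestamp is GNU date; epoch 0 is midnight UTC, which is 19:00 the previous evening in EST (UTC-5):

```shell
# Same instant rendered in two zones; the surrounding shell's TZ is
# untouched because env only modifies the child process's environment.
env TZ='UTC0' date -d @0 '+%F %T %Z'   # 1970-01-01 00:00:00 UTC
env TZ='EST5' date -d @0 '+%F %T %Z'   # 1969-12-31 19:00:00 EST
```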
how to get a different time-clock on CLI apart from your time-zone?
1,308,650,780,000
Is there a way to capture all commands executed in bash, or even better in sh? I need a wrapper script that gets called every time a command is executed. For example, when I type cd /home/ I want my wrapper script command_wrapper.sh to be called. Inside, I want to cancel the command or call another command. #!/bin/sh #command_wrapper.sh echo "you tried to call the command [$@]" It would also be okay if the script only gets called when a command doesn't exist. Is this possible?
You can use the DEBUG trap to do this. In this trap, $BASH_COMMAND contains the command last executed. trap 'echo "you tried to call the command [$BASH_COMMAND]"' DEBUG Note that, if you are executing commands as part of your prompt or $PROMPT_COMMAND, the trap will run on these as well. You can add checks to see if $BASH_COMMAND is the same as $PROMPT_COMMAND to avoid some of these. As for executing when the command doesn't exist, in bash, if a function named command_not_found_handle exists, that function is executed any time that a command is not found. It's executed with all of the arguments from the command line you tried to execute passed as arguments, so you can do something like this: command_not_found_handle() { echo "you tried to call the command [$@]" } This results in: $ foo bar you tried to call the command [foo bar] $
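The handler also fires in non-interactive shells, so it can be demonstrated in one line; a sketch (command name and arguments are made up):

```shell
# Define the hook, then invoke a name that does not exist as a command.
bash -c 'command_not_found_handle() { echo "you tried to call the command [$*]"; }
         no_such_command foo bar'
# you tried to call the command [no_such_command foo bar]
```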
bash on command event (or shell)
1,308,650,780,000
I'm using bash, and I want to be able to execute a script just by typing its name as a command, same as pwd for example. Is there a specific directory where I need to save my script to, or any other system files I need to edit to achieve this?
You have to install the script in one of the directories listed in $PATH. Use echo $PATH to see those directories. That means you can either copy the script into one of the directories of $PATH, or make a symbolic link to the script inside one of the directories of $PATH, or append the script's directory to $PATH: export PATH=$PATH:<script directory> (that is sh/bash syntax; in csh the equivalent is setenv PATH ${PATH}:<script directory>)
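A sketch of the copy-into-PATH option with hypothetical names (/tmp/mybin stands in for a real install directory such as ~/bin):

```shell
# Put a tiny script in a directory, add that directory to PATH
# (POSIX shell syntax shown; csh would use: set path = ($path /tmp/mybin)),
# then the script runs by name from anywhere.
mkdir -p /tmp/mybin
printf '#!/bin/sh\necho it works\n' > /tmp/mybin/executeMyCommand
chmod +x /tmp/mybin/executeMyCommand
export PATH="$PATH:/tmp/mybin"
executeMyCommand    # it works
```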
How can I create new shell commands?
1,308,650,780,000
I am trying to write a command along the lines of the following: vim -c "XXXXXX" myFile Instead of the "XXXXX" I want to supply some commands to vim to add some text to an arbitrary point in the file, both by specifying an exact line number and, in a different scenario, by searching for a specific line and then insert on the line above. What I am trying to do is a sort of clever "append" where I can append lines to a code block or function inside a script. Ultimately I am aiming to have a setup script which will go and alter maybe a dozen system files. Ideally it would only involve one -c flag and ideally it would be readable to anyone that can understand normal mode commands - in my head I was originally thinking something like "ggjjjiInsertingOnLine4:wq" once I can get it into normal mode.
Command line ranges can be use to select a specific line that needs to be edited. Then substitute pattern can be used to perform the edit (append). For example, to append text "hi" at the begining of line 3: vim -c "3 s/^/hi/" -c "wq" file.txt To append text "hi" at the end of line 3: vim -c "3 s/$/hi/" -c "wq" file.txt To find more options and explanations: vim -c "help cmdline-range" Some more examples To find a search string "hi" and append string " everyone" on line 3: vim -c "3 s/\(hi\)/\1 everyone/" -c "wq" file.txt To find a search string "hi" and prepend a string "say " on line 3: vim -c "3 s/\(hi\)/say \1/" -c "wq" file.txt In case the line number is not known, To append first occurrences of string "hi" on every line with " all": vim -c "1,$ s/\(hi\)/\1 all/" -c "wq" file.txt To append all occurrences of string "hi" on every line with " all": vim -c "1,$ s/\(hi\)/\1 all/g" -c "wq" file.txt For more info about substitutions: vim -c "help substitute"
How do I use vim on the command line to add text to the middle of a file?
1,308,650,780,000
How do you view a sql.gz file as plain text SQL from the command line? I want to read the SQL statements stored in a .sql.gz file, from the command line on the server. I've tried tar -xzvf, but get tar: This does not look like a tar archive. cat returns garbage because it's compressed.
gunzip -c <filename> | less or zcat <filename> | less
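A round-trip sketch (the dump name is made up) showing that the file is read without ever extracting it to disk:

```shell
# Make a small .sql.gz, then stream it back as plain text.
printf 'SELECT 1;\n' | gzip > /tmp/dump.sql.gz
gunzip -c /tmp/dump.sql.gz      # SELECT 1;
```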
How do you view a sql.gz file as plain text SQL from the command line?
1,308,650,780,000
When I run in my terminal: alias a list with all my aliases (defined in the ~/.bashrc and ~/.bash_aliases files) will be displayed on my terminal. That's nice and as expected! But when I run: bash -c "alias" there is no output, so no aliases. First I thought that the ~/.bashrc file is not sourced in the second case, so I ran: bash -c ". ~/.bashrc && alias" but, to my surprise, again there is no output... Strangely enough, when I run: bash -c ". ~/.bash_aliases && alias" only the aliases defined in ~/.bash_aliases will be displayed. Can someone shed some light and help me understand what exactly is happening here?
You need an interactive shell for alias definitions: bash -i -c "alias" A non-interactive shell does not read ~/.bashrc on startup, and most ~/.bashrc files also begin with a guard that returns immediately when the shell is not interactive, which is why sourcing it explicitly still showed nothing. ~/.bash_aliases typically has no such guard, so sourcing it directly worked.
Why are aliases missing inside of bash command?
1,308,650,780,000
I can paste the contents of a file into a command using cat and backticks: ls `cat filenames` Is there a way to do the reverse - to turn a string into a (temporary) filename? gcc -o cpuburn `uncat "main(){while(1);}"`.c
You can pass the source code on gcc's standard input. Since gcc isn't given a file name, you need the -x option to let it know what the input language is. You can pass input via a here string (most convenient for a single line), a here document, or a pipe. gcc -o cpuburn -x c - <<<'main(){while(1);}' If you need a file name on the command line, you can use process substitution to generate the content. That doesn't give you control over the file name (it'll be something like /dev/fd/42). gcc -o cpuburn -x c <(echo 'main(){while(1);}') If you really need control over the temporary file's name, you'll need something more complex where you manually create and remove the temporary file. src=$(mktemp XXXXXXXXXX.c) trap 'rm "$src"' EXIT INT TERM HUP echo 'main(){while(1);}' >"$src" gcc -o cpuburn "$src" (This answer assumes that your shell is bash, zsh or ksh93.)
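A runnable sketch of the last variant, with the compile step commented out so nothing spins forever (the template name is arbitrary; the .c suffix in a mktemp template is a GNU mktemp feature):

```shell
# Create a named .c temp file, write the source, and clean up on exit.
src=$(mktemp /tmp/burnXXXXXX.c)
trap 'rm -f "$src"' EXIT INT TERM HUP
echo 'main(){while(1);}' > "$src"
# gcc -o cpuburn "$src"        # the real compile step from the answer
echo "$src"                    # e.g. /tmp/burnAb3dE9.c
```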
Is there a command which creates a temporary file containing the arguments passed to it?
1,308,650,780,000
I have a flash USB drive and up till now it has worked well. Recently I recorded an ISO to it using dd. Now I want to delete it. $ lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT ....... sdb 8:16 1 14.6G 0 disk └─sdb1 8:17 1 14.5G 0 part /media/alex/ARCH_201404 sr0 11:0 1 1024M 0 rom $ mount /dev/sdb1 on /media/alex/ARCH_201404 type iso9660 (ro,nosuid,nodev,uid=1000,gid=1000,iocharset=utf8,mode=0400,dmode=0500,uhelper=udisks2) When I did this $ sudo dd ibs=4096 count=1 if=/dev/zero of=/dev/sdb1 1+0 records in 8+0 records out 4096 bytes (4.1 kB) copied, 0.00053675 s, 7.6 MB/s it seemed to succeed, but when I explored the USB flash drive all the files were still there. When I did this: sudo rm -r /media/alex/ARCH_201404/* I got the error: .................. rm: cannot remove ‘/media/alex/ARCH_201404/loader/entries/uefi-shell-v1-x86_64.conf’: Read-only file system rm: cannot remove ‘/media/alex/ARCH_201404/loader/entries/uefi-shell-v2-x86_64.conf’: Read-only file system rm: cannot remove ‘/media/alex/ARCH_201404/loader/loader.conf’: Read-only file system ..................... What can I do about it?
You only zeroed the first 4kb of the partition. Usually all file systems keep a few unused blocks at the start of their partition in order to give space to boot loaders that might be installed on the partition itself. I think that at least 16 blocks are always kept unused. You copied, with dd, a file system of type ISO 9660, so you have 2048-byte blocks. ISO 9660 reserves about 32kb for boot loaders, as explained here: http://wiki.osdev.org/ISO_9660#System_Area So, in order to really delete the content of the partition, you may need to delete at least the first 1Mb.
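A sketch of the idea against a scratch image file rather than a real device (the file name is invented, and device paths are deliberately avoided so this is safe to run):

```shell
# Build a 2 MiB scratch image standing in for /dev/sdb1 ...
dd if=/dev/urandom of=/tmp/fake-usb.img bs=1M count=2 2>/dev/null
# ... then zero only its first 1 MiB, covering the 32 KiB system area
# and the volume descriptors; conv=notrunc keeps the tail intact.
dd if=/dev/zero of=/tmp/fake-usb.img bs=1M count=1 conv=notrunc 2>/dev/null
```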
Unable to remove the files from an usb drive (neither by dd /dev/zero nor by rm -r)
1,308,650,780,000
Somebody helped me run a java program with the following (working) line of code. The zookeeper-3.4.5.jar exists in the working directory. What it is the meaning of the .: syntax here? Does that just mean current directory? I would have written this as java -cp "zookeeper-3.4.5.jar" but I'm not 100% sure this would do the same thing as the line below. java -cp .:zookeeper-3.4.5.jar org.zookeeper.LsGroup
The . is the current directory. The : is the path separator, used to separate multiple paths in a single option/variable under *nix. This command line therefore adds both . and zookeeper-3.4.5.jar to the Java classpath.
What is the meaning of the .: in this linux statement?
1,308,650,780,000
If you were trying to run a package that is not installed, for example: me ~: gparted The program 'gparted' is currently not installed. You can install it by typing: sudo apt-get install gparted How can I run the sudo apt-get install gparted line as a command? Not typing it, obviously.
You can enable it by adding this to your .bashrc export COMMAND_NOT_FOUND_INSTALL_PROMPT=1 Giving you: $ foo The program 'foo' is currently not installed. You can install it by typing: sudo apt-get install blah-blah Do you want to install it? (N/y) If you get a python error as in: ... File "/usr/lib/python3/dist-packages/CommandNotFound/CommandNotFound.py", line 217, in install_prompt answer = answer.decode(sys.stdin.encoding) AttributeError: 'str' object has no attribute 'decode' You can: Apply this patch for Ubuntu from here. or: Modify CommandNotFound.py by adding four spaces at beginning of lines 215,216,217 (note: not tabs): ... 213 else: 214 answer = raw_input(prompt) 215 4 spaces if sys.stdin.encoding and isinstance(answer, str): 216 4 spaces # Decode the answer so that we get an unicode value 217 4 spaces answer = answer.decode(sys.stdin.encoding) 218 if answer.lower() == _("y"): ... Another way, to expand on Greg Hewgill's comment could be: $(!! 2>&1 | tail -n1) I tried to wrap it in a script, but that didn't work out of the box as you won't get the error message. If one want to get it in a script, a rather hacky way could be: x=$(/usr/lib/command-not-found $(tail -n1 "$HOME/.bash_history") 2>&1 | tail -n1) echo "$x" For some options look at: /usr/lib/command-not-found -help an/or check out the script itself.
Run last line of error message as a command
1,383,332,844,000
I want to define a new ls command to do something like this. First, when I use the ordinary ls command: $ls undergrade.AI undergrade.KDD undergrade.AI2 undergrade.micro undergrade.ANN undergrade.OOT undergrade.autoMata undergrade.OS undergrade.bulletin undergrade.parallelAndDistributedProgramming undergrade.CG undergrade.pop undergrade.CLP undergrade.PPL undergrade.comNet undergrade.researchingMethod undergrade.comOr undergrade.SA undergrade.comSec undergrade.SE undergrade.DBMS undergrade.softwareTesting undergrade.discrete undergrade.webApps code.ajax code.cc code.erl code.html code.js code.pl code.py code.sage code.x3d code.c code.cl code.go code.jquery code.lisp code.pro code.roo code.sh code.xml And for the new ls that I want: $ls_new undergrade code I want it to show only the first word when I type ls_new. Why do I want this command? Because I named my folders in the pattern category.subj, so there is too much output when I use the ordinary ls. I need some command that can filter, group, and show them, but only the category name.
This should do what you want: ls|sed 's/\..*//'|sort -u|column If your shell has the width of your terminal in $COLUMNS, then this is better: ls|sed 's/\..*//'|sort -u|column -c $COLUMNS
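The pipeline minus the terminal-dependent column step can be checked directly; a sketch feeding a few of the names from the question via printf instead of ls:

```shell
# Strip everything from the first dot onward, then deduplicate.
printf '%s\n' undergrade.AI undergrade.KDD code.py code.sh \
    | sed 's/\..*//' | sort -u
# code
# undergrade
```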
How to use `ls` to group and show only a first word of directory name?
1,383,332,844,000
I'd like to know how many lines, on average, are written to a file per minute.
You could do: tail -fn0 the-file | pv -lri60 > /dev/null That will give you a number of lines per second though. Otherwise: { cat > /dev/null while sleep 60; do wc -l done } < the-file (beware that won't be exactly accurate as the sleep 60 won't guarantee that is done exactly every 60 seconds).
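The second approach reduces to two line counts taken an interval apart; a compressed sketch with a hypothetical log file, a 1-second interval, and a simulated writer:

```shell
f=/tmp/growing.log
printf 'one\ntwo\n' > "$f"                 # stand-in for the live log
before=$(wc -l < "$f")
sleep 1
printf 'three\n' >> "$f"                   # one line "arrived" meanwhile
after=$(wc -l < "$f")
echo "$(( (after - before) * 60 / 1 )) lines/minute"   # 60 lines/minute
```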
What command will generate average lines per minute?
1,383,332,844,000
In Linux I always cd to a longish path and then run the script: cd /scratch/someDir/someOthernestedDir/ ./shellscriptName.sh How can I avoid typing this longish path and execute the command in a single step? Something like the below from any path should do what I want: executeMyCommand P.S: I am using C-shell. [subhrcho@slc04lyo bin]$ echo $0 csh
There are three main ways of running your script without needing to specify the full path. Add the directory containing your script to your $PATH. You will then be able to execute the script by name from any directory, just like any other program. If you are using csh, add this to your ~/.cshrc: set path = ($path /scratch/someDir/someOthernestedDir/) Place a link to your script in a directory that is already in your path. For example /usr/bin: ln -s /scratch/someDir/someOthernestedDir/shellscriptName.sh /usr/bin Make an alias as @EightBitTony suggested, add this line to your ~/.cshrc: alias executeMyCommand '/scratch/someDir/someOthernestedDir/shellscriptName.sh'
How can I execute a shell script that exists in a longish path with a single command without first cd'ing to the directory?
1,383,332,844,000
I would like to configure bash so that when I execute a command (preferably from a list of commands, not any command) without an argument, it takes the argument of the previous command. So, for example, I type emacs ~/.bashrc; then when I enter source, bash executes source ~/.bashrc. I know it's possible but I don't know where to look for such options.
You can press Space then Meta+. before pressing Enter. This has the advantage that you can use it even with commands that make sense when applied to no argument. For source, use . to type less. If you're old-school, you can use !^ instead to recall the first argument from the previous command, or !$ to recall the last argument, or !* to recall all of them (except the command name). You can get exactly the behavior you describe by writing functions that wrap around each command. The last argument from the previous command is available in the special parameter $_. make_wrapper_to_recall_last_argument () { for f do eval <<EOF; done function $f { if [ \$# -eq 0 ]; then set -- "\$_"; fi command $f "\$@" } EOF } make_wrapper_to_recall_last_argument source .
Configure bash to execute command with last argument when no argument was provided
1,383,332,844,000
Possible Duplicate: creating abbreviations for commonly used paths I'm new to the Linux platform. Is there any way to rename the commands available in Linux? For example, I use the clear command a lot, and instead of typing it every time I want to rename it as just c. Is this possible?
You can create an alias, as you have figured that out, or just use the key combination Ctrl+L to clear the screen contents.
Change command name in Linux [duplicate]
1,383,332,844,000
Does anyone know if there is a way of reproducing the same behaviour many text editors provide for colour-highlighting syntax operators such as brackets or curly brackets? It would be very useful for complex one-liners in the terminal. Example of the functionality I am talking about (from VIM).
When writing complex one liners in bash, it is handy to use readline's edit-and-execute-command (bound to C-xC-e by default in emacs mode). Hitting C-xC-e opens current commandline in the editor of your choice with all its fancy features. After saving it, bash will execute the contents as shell commands. Alternatively, issue bash's builtin fc to open last command in the editor.
Is there a way of color-highlighting paired brackets in shell (bash)?
1,383,332,844,000
I was playing around with Pushover, and had the thought that it would be cool if I could use it as an argument on any random command, so that it would run a pushover script at the end of the task, regardless of what that task was. I have no idea if it's possible, or how I would go about it, but I'd like to learn. This question on the RasPi Stack Exchange site is what got me thinking about it. But I think there are many things it would be useful for, like letting you know when that compile job is finally finished, and maybe if it was successful. I had the thought that it could look something like: $ apt-get -b source packagename -pushover "Compile job complete." The thought being that the argument '-pushover [enter message text here]' after any command would execute the pushover script, and use their API to notify you via their app. So I guess the question is, is it possible to do in this fashion? If so, where do I start? If not, are there better ways to accomplish the same thing, without being limited by what command you are running? I'm not locked on the idea of using it as a command argument, but I do want a way to run it with any command, without writing a separate script for each one. I am new to Linux, so if it is a non-starter idea, I'll take that answer, too, provided there are logical explanations of why it won't work to go with it.
You do it the other way around: $ pushover-notify "This is my message" command arg1 arg2 Your script pushover-notify could be something like this: #!/bin/sh TOKEN=your_token USER=your_user MSG="$1" COMMAND="$2" shift 2 if "$COMMAND" "$@" ; then # here run your send-message script, with message "$MSG". for example: curl -s \ -F "token=$TOKEN" \ -F "user=$USER" \ -F "message=$MSG" \ https://api.pushover.net/1/messages else # here send some message indicating failure, or don't do anything. for example: curl -s \ -F "token=$TOKEN" \ -F "user=$USER" \ -F "message=command failed: $COMMAND $@" \ https://api.pushover.net/1/messages fi
What would it take to add a command to run a script at the completion of any given random task?
1,383,332,844,000
Possible Duplicate: What do the numbers in a man page mean? I see functions referred to with numbers in parentheses in documentation. What does this mean? Does it take one argument?
Unix manual pages come in "sections"; see man man for what they mean (on most platforms; I assume yours will document it in there.) Section 1 is "user commands", and that means "the manual page from section 1 for ls". You will find that crontab(1) and crontab(5) are an example of where you have more than one page under a single name in different sections. To access it from the command line run man 1 ls, or man 5 crontab. You can also use man -a crontab to go through the page of that name in all sections where it is present. (Why is this? Because when man pages are printed as books, the sections are how the content breaks down into useful references. Not that you often see that any more, but way back when...) Depending on the operating system the sections are broken down differently, Wikipedia's entry for man page has a nice explanation. But for instance, on BSD, Linux and UNIX - section '3' is reserved for library functions (particularly those in the standard C library). So if you're writing C code you can fine-tune your section lookup to make results a bit quicker. man 3 printf, or man -s 3 printf yields the C version and keeps you from having to wade through the man page for /usr/bin/printf which would otherwise come up first since section one will yield a hit first. Man Page Section List for BSD, Linux, UNIX variants: (by way of wikipedia) General commands System calls Library functions, covering in particular the C standard library Special files (usually devices, those found in /dev) and drivers File formats and conventions Games and screensavers Miscellanea System administration commands and daemons
What is the significance of the "1" in ls(1)? [duplicate]
1,383,332,844,000
With the following bash command that crops an image, I would like to loop through from 02 to 18, using a single line command. convert TOS28_Page_02.jpg -crop 990x1500+0+0 TOS28_Page_02a.x.jpg
for((i=2;i<19;++i));do printf "convert TOS28_Page_%02d.jpg -crop 990x1500+0+0 TOS28_Page_%02da.x.jpg;" $i $i;done|sh
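An arguably more readable equivalent, assuming GNU seq for the zero-padded counter (echo stands in for convert so the loop is safe to dry-run; drop the echo to actually crop):

```shell
# seq -w pads every number to the width of 18, giving 02 03 ... 18.
for i in $(seq -w 2 18); do
    echo convert "TOS28_Page_${i}.jpg" -crop 990x1500+0+0 "TOS28_Page_${i}a.x.jpg"
done
```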
Crop images in single line
1,383,332,844,000
Is there an easy way to see how lines are changing in a file? For example, assume I have this "my.log" file: cpu1 cpu2 cpu3 cpu4 5 3 3 6 5 3 3 6 5 0 3 6 3 0 6 6 5 3 3 0 From the command line, I want to enter something like "cat my.log | showchanges" and then see something like this: cpu1 cpu2 cpu3 cpu4 5 3 3 6 " " " " " 0 " " 3 " 6 " 5 3 3 0 Ideally, "showchanges" would greedily treat any stretch of whitespace as a column separator to do this, but I am very flexible on the details. I just want to easily see a change when there are many columns. Also, it would eventually be nice to omit lines where there are no changes at all.
awk '{
    bak = $0
    for (i = 1; i <= NF; i++) $i = ($i == tmp[i] ? "-" : $i)
    split(bak, tmp)
}1' infile

cpu1 cpu2 cpu3 cpu4
5 3 3 6
- - - -
- 0 - -
3 - 6 -
5 3 3 0

To keep the records' indentation (field width of 4):

awk '{
    bak = $0
    for (i = 1; i <= NF; i++) $i = sprintf("%4s", ($i == tmp[i] ? "-" : $i))
    split(bak, tmp)
}1' infile

cpu1 cpu2 cpu3 cpu4
   5    3    3    6
   -    -    -    -
   -    0    -    -
   3    -    6    -
   5    3    3    0
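The question also asked to eventually omit lines with no changes at all. A sketch extending the awk program above (the four-line sample input here is made up for illustration): a line is printed only if it is the first line or at least one field differs from the previous line.

```shell
printf '%s\n' 'cpu1 cpu2 cpu3 cpu4' '5 3 3 6' '5 3 3 6' '5 0 3 6' |
awk '{
    changed = (NR == 1)            # always keep the first line
    bak = $0
    for (i = 1; i <= NF; i++) {
        if ($i == tmp[i]) $i = "-"
        else changed = 1           # at least one field differs
    }
    split(bak, tmp)
    if (changed) print
}'
```

The second "5 3 3 6" line is dropped entirely, and the last line prints as "- 0 - -".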
Show changes between consecutive lines of a file
1,383,332,844,000
I noticed the error this morning, but I don't think I changed anything last night, so I am very confused right now. Perhaps I updated some utilities on my system and that somehow broke backward compatibility. Basically I get a "math: Error: Missing operator" error when using tab completion. Say I type fish and hit Tab to get suggestions like fish_config and fish_add_path (here is an asciinema screencast in case you want to see it in action: https://asciinema.org/a/L3xr32eVMGHuCY0Gjr19gFzCu):

[I] ~ $ fishmath: Error: Missing operator
'Wed Dec 31 18:00:00 CST 1969 - 1655913830'
 ^
[I] ~ $ fish_config
fish (command)  fish_add_path  fish_breakpoint_prompt  fish_clipboard_copy  …and 29 more rows

Tab completion does work, but the error looks very annoying. It looks like something is trying to evaluate a date string. How do I diagnose the bug? I am on macOS Monterey. Here is my ~/.config/fish/config.fish:

set -px PATH /opt/local/bin /opt/local/sbin
set -px PATH $HOME/.local/bin
set -px PATH $HOME/bin
set -px PATH /Applications/MacPorts/Alacritty.app/Contents/MacOS
set -px PATH $HOME/Foreign/drawterm
set -px PATH $HOME/google-cloud-sdk/bin
set -x XDG_CONFIG_HOME $HOME/.config
set -x PIPENV_VENV_IN_PROJECT 1
set -x PLAN9 /usr/local/plan9
set -px PATH $PLAN9/bin

if test -e $HOME/.config/fish/sensitive.fish
    source $HOME/.config/fish/sensitive.fish
end

if status is-interactive
    # Commands to run in interactive sessions can go here
    alias vi='/opt/local/bin/nvim'
    set -gx EDITOR /opt/local/bin/nvim
    source /opt/local/share/fzf/shell/key-bindings.fish
end

set -g fish_key_bindings fish_hybrid_key_bindings
alias matlab='/Applications/MATLAB_R2021b.app/bin/matlab -nodisplay'
zoxide init fish | source
direnv hook fish | source

# The next line updates PATH for the Google Cloud SDK.
if [ -f '/Users/qys/google-cloud-sdk/path.fish.inc' ]; . '/Users/qys/google-cloud-sdk/path.fish.inc'; end
The error goes away after I remove the line set -px PATH $PLAN9/bin. I guess it was because I accidentally shadowed some system utilities with their counterparts from Plan 9 from User Space. Another workaround is to use set -ax PATH $PLAN9/bin instead. With -a, the directory $PLAN9/bin is appended to $PATH (as opposed to prepended with -p), so the commands already present in $PATH take precedence over the Plan 9 ones.
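The same precedence rule can be demonstrated in plain sh (the /tmp paths and the fake utility here are hypothetical, used only for illustration): the first matching directory in PATH wins, which is why prepending a directory of Plan 9 tools shadows the system ones while appending does not.

```shell
# Create a fake "ls" in a scratch directory to stand in for a shadowing utility.
mkdir -p /tmp/p9demo/bin
printf '#!/bin/sh\necho shadowed\n' > /tmp/p9demo/bin/ls
chmod +x /tmp/p9demo/bin/ls

# Prepended (like fish's set -p): the fake ls is found first.
PATH="/tmp/p9demo/bin:$PATH" command -v ls

# Appended (like fish's set -a): the system ls still wins.
PATH="$PATH:/tmp/p9demo/bin" command -v ls
```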
Fish shell reports "math: Error: Missing operator" on tab completion
1,383,332,844,000
Suppose I have a directory structure like this:

projects/
    project1/
        src/
        node_modules/
            dir1/
            dir2/
            dir3/
            file
    project2/
        node_modules/
            dir4/

Starting from projects/, I want to delete the contents of all node_modules/ directories, but I do not want to delete node_modules/ itself, leaving it empty, without folders or files inside. In the example above, the items dir1, dir2, dir3, file and dir4 would be deleted.
The following will delete all files and directories inside any directory named node_modules, while leaving the node_modules directories themselves in place:

find . -path '*/node_modules/*' -delete

If you would like to check what will be deleted first, omit the -delete action; find then defaults to printing the matching paths.
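A self-contained check (using a hypothetical /tmp tree mirroring the question's example) that the command empties node_modules directories without removing them:

```shell
# Build a sample tree like the one in the question.
mkdir -p /tmp/projects/project1/node_modules/dir1
touch /tmp/projects/project1/node_modules/dir1/file
mkdir -p /tmp/projects/project2/node_modules/dir4

# -path '*/node_modules/*' matches the contents of node_modules but not
# node_modules itself; -delete removes depth-first, so files go before dirs.
find /tmp/projects -path '*/node_modules/*' -delete

# Both node_modules directories still exist, but are now empty.
ls -A /tmp/projects/project1/node_modules
```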
How to recursively delete the contents of all "node_modules" directories (or any dir), starting from current directory, leaving an empty folder?
1,383,332,844,000
I have always been using sort -u to get rid of duplicates until now. But I have real doubts about a list generated by a software tool. The question is: is the output of sort -u | wc the same as that of uniq -u | wc? Because they don't yield the same results. The manual for uniq specifies:

-u, --unique
       only print unique lines

My output consists of 1110 words, of which sort -u keeps 1020 lines while uniq -u keeps all 1110, the correct amount. The issue is that I cannot visually spot any duplicates in the list (generated by using > at the end of the command line), and that there IS an issue with the total number of cracked passwords (in the context of customizing John the Ripper).
No, they're not the same. For one, sort would sort the list first; and second, uniq -u prints only those lines that are "unique" in each given run, the ones that don't have an identical input line immediately before or after them.

$ printf "%s\n" 3 3 2 1 2 | sort -u
1
2
3
$ printf "%s\n" 3 3 2 1 2 | uniq -u
2
1
2

See also:

What is the difference between "sort -u" and "sort | uniq"?
How is uniq not unique enough that there is also uniq --unique? (this one has an answer with more examples)
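Since the underlying problem here is locating the duplicated lines, one quick way (not part of the original answer; the sample words are made up) is to sort first and then use uniq -d, which prints one copy of each line that appears more than once:

```shell
# uniq only compares adjacent lines, so duplicates must be sorted together
# first; -d then prints each repeated line once.
printf '%s\n' pass1 pass2 pass1 pass3 pass2 | sort | uniq -d
```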
Difference between sort -u and uniq -u