1,654,992,595,000
I want to find screenshot files, having a specific pixel height of 2220 and width of 1080, and want to move them into another folder. That's not something I can do manually, as the source is 100+k images or so. I've found the following command, but I'm not able to get it to work:

find /Users/myuser/Desktop/daten/JPG -name "*.jpg" -exec bash -c "sips -g pixelHeight -g pixelWidth {} | grep -E '2220‘ >/dev/null" \; -exec mv {} /Users/myuser/Desktop/screenshots \;

Error message:

bash: -c: line 0: unexpected EOF while looking for matching `''
bash: -c: line 1: syntax error: unexpected end of file

Thank you for your help.

UPDATE: fixed the command and removed the blank in the folder name to:

find /Users/myuser/Desktop/daten/JPG8 -name "*.jpg" -exec bash -c "sips -g pixelHeight -g pixelWidth {} | grep '2220' >/dev/null" \; -exec mv {} /Users/myuser/Desktop/screenshots \;

.. but still not working well - no files have been moved.
Just to close this question: I have managed to move and finally remove all screenshots with the following command:

find ./JPG* -name "*.jpg" -exec bash -c "sudo exiftool -csv -s -ImageSize {} | grep > /dev/null 'x2220'" \; -exec mv {} ./screenshots/ \;
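The reason chaining two -exec clauses works at all: find treats each -exec as a test, so the second one only runs for files where the first command exited 0. A minimal sketch of the same structure, using grep -q on text files as a stand-in for the sips/exiftool dimension check (paths and contents here are made up for the demo):

```shell
# The first -exec acts as a filter (exit status 0 = match);
# the second -exec moves only the matching files.
mkdir -p /tmp/demo/src /tmp/demo/screenshots
printf '2220x1080\n' > /tmp/demo/src/a.txt
printf '1920x1080\n' > /tmp/demo/src/b.txt
find /tmp/demo/src -name '*.txt' \
    -exec sh -c 'grep -q 2220 "$1"' _ {} \; \
    -exec mv {} /tmp/demo/screenshots/ \;
ls /tmp/demo/screenshots    # a.txt
```

Passing {} to the inner shell as a positional parameter ("$1") also sidesteps the quoting pitfalls of embedding {} inside the bash -c string, which is what the stray curly quote in the original command ran into.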
Find images with specific pixel height/width and move them to another directory
1,654,992,595,000
Is it possible to get either the free or used ports within a specific range? If yes, what is the command? Thank you in advance!
Read man lsof.

sudo lsof -iTCP:20-26

will show in-use ports.
Get free/used ports in a specific range
1,654,992,595,000
I have a series of 297 directories named 'dir000', 'dir001' and so on, each of which contains a text file called "config", which is a csv file with 3 columns and 256 rows. I have generated 25 random numbers in the range 1 to 256, and from all these files in each directory, I am required to remove those exact 25 rows. For instance, if my generator gave me a series of random numbers a = [145,11,140,119,183,178,225,131,1,65,213,115,207,41,194,221,10,205,6,57,224,108,44,85,211], I want to delete exactly these rows from each of the ASCII files ("config") in each directory. Can anyone let me know how this can be achieved using the command line? I am using the Ubuntu 16.04 distribution.
The following uses perl's -i option for in-place editing of the input files.

#!/usr/bin/perl -i
use strict;

# Parse array of random numbers from the first argument.
my $arg1 = shift;

# remove [, ], and any whitespace.
$arg1 =~ s/\[|\]|\s+//g;

# split $arg1 on commas, build an associative array
# (aka "hash") called %a to hold the numbers.
# The hash keys are the line numbers, and the value for
# each key is just "1" - it doesn't matter what the
# value is, the only thing that matters is whether the
# key exists in the hash.
my %a;
map $a{$_} = 1, split(/,/, $arg1);

# Loop over each input file.
while (<>) {
  # Print each line unless the current line number $. is in %a.
  print unless defined $a{$.};

  # reset $. at the end of each file.
  close(ARGV) if eof;
}

Save it as, e.g., delete-lines.pl and make it executable with chmod +x delete-lines.pl, and run it like:

$ a="[145,11,140,119,183,178,225,131,1,65,213,115,207,41,194,221,10,205,6,57,224,108,44,85,211]"
$ ./delete-lines.pl "$a" textfile*.txt

If textfile1.txt, textfile2.txt, textfile3.txt all contain the following before execution:

I have a series of 297 directories named as "dir000', 'dir001' and so on, each
of which contains a text file called "config", which is a csv file with 3
columns and 256 rows.

I have generated 25 random numbers in the range 1 to 256, and from all these
files in each directory, I am required to remove those exact 25 rows.

For instance, if my generator gave me a series of random numbers a =
[145,11,140,119,183,178,225,131,1,65,213,115,207,41,194,221,10,205,6,57,224,10
8,44,85,211], I want to delete exactly these rows from each of the ASCII
files("config") in each directory.

Can anyone let me know how this can be achieved using command line?

I am using Ubuntu 16.04 distribution.

Then they will all contain this after execution:

of which contains a text file called "config", which is a csv file with 3
columns and 256 rows.

I have generated 25 random numbers in the range 1 to 256, and from all these

For instance, if my generator gave me a series of random numbers a =
[145,11,140,119,183,178,225,131,1,65,213,115,207,41,194,221,10,205,6,57,224,10

Can anyone let me know how this can be achieved using command line?

I am using Ubuntu 16.04 distribution.

i.e. lines 1, 6, 10, and 11 have been deleted from each of them - because those are the only line numbers in the files that are in the array of random numbers.

BTW, the %a hash contains the following:

{ 1 => 1, 6 => 1, 10 => 1, 11 => 1, 41 => 1, 44 => 1, 57 => 1, 65 => 1,
  85 => 1, 108 => 1, 115 => 1, 119 => 1, 131 => 1, 140 => 1, 145 => 1,
  178 => 1, 183 => 1, 194 => 1, 205 => 1, 207 => 1, 211 => 1, 213 => 1,
  221 => 1, 224 => 1, 225 => 1 }

The next step is to run it on lots of files named "config" in your numbered directories:

find dir[0-9]*/ -type f -name config -exec ./delete-lines.pl "$a" {} +

This assumes that the array of random numbers is still in shell variable $a. You can use another variable name if you like, or just provide it as a quoted string - as long as you provide the array as the first argument to the perl script (with all subsequent args being filenames), it will work.

If you don't want to save a stand-alone script, you can run it as a one-liner:

$ find dir[0-9]*/ -type f -name config -exec perl -i -e \
  'map $a{$_} = 1, split(/,/, ($ARGV[0] =~ s/\[|\]| +//g, shift));
   while (<>) {print unless defined $a{$.}; close(ARGV) if eof}' \
  "$a" {} +

But why would you? It would just be ugly and difficult to read and edit. Writing a temporary throwaway script in your favourite editor is easier and more convenient than trying to edit and debug a script on the shell command line.
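For comparison, here is a hypothetical awk sketch of the same idea (not part of the answer above): build the set of line numbers in a BEGIN block, then print every line whose number is not in the set. Note that plain awk writes to stdout rather than editing in place, and the short list and sample file below are made up for the demo:

```shell
# Sketch: delete a set of line numbers from a file, awk edition.
# The bracketed list format matches the question's array syntax.
a="[2,4]"
printf 'one\ntwo\nthree\nfour\nfive\n' > /tmp/config
awk -v list="$a" '
    BEGIN {
        gsub(/\[|\]| /, "", list)        # strip brackets and spaces
        n = split(list, t, ",")
        for (i = 1; i <= n; i++) del[t[i]] = 1
    }
    !(FNR in del)
' /tmp/config
# one
# three
# five
```

Because the condition uses FNR (per-file line number), the same invocation works across multiple files, just like the perl version.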
Deleting lines of a series of ASCII (.csv) files having a certain row number
1,654,992,595,000
I work on Geany because it is lightweight and simple. But one feature I really miss from sophisticated and heavy code editors is prettifying the code with a keystroke. Geany does allow external scripts to run on the text in the editor through "custom commands". So I am on the lookout for a CLI program that can prettify a webpage containing HTML + CSS + JS. Not just HTML or JS - it should prettify the webpage whether it contains HTML or CSS or JS or all of them.
First install prettier "globally" using either npm or your package manager. Using npm:

npm install -g prettier

To "prettify" CSS using "custom commands" with Geany (configuration - one time only):

1. Edit -> Format -> Send Selection to -> Set Custom Commands
2. Click Add
3. Enter prettier --stdin-filepath temp.css in the Command field
4. Enter CSS in the Label field
5. Click OK

Now every time you want to "prettify" CSS:

1. Select the CSS text you want to prettify
2. Go to / press Edit -> Format -> Send Selection to -> CSS

[recommended] To "prettify" any source code file directly using prettier:

npx prettier --write .

Read the "Usage" section in the documentation for more information. Caution: prettifying, beautifying or formatting your source code files modifies them.
Is there a command line application that can prettify text containing HTML + CSS + JS?
1,654,992,595,000
I tried opening files with vim using vim $(cat filelist) as suggested from this earlier question. Suppose I have the following file: ~/Workspace/bar/foo.cpp Executing vim $(cat filelist) from ~/Workspace correctly opens foo.cpp when filelist contains bar/foo.cpp. However, the command does not open the file when filelist contains ~/Workspace/bar/foo.cpp. I want to know why using the absolute path causes the command to fail.
This is due to the order in which the different types of expansions are performed in a shell. The bash manpage says:

Expansion is performed on the command line after it has been split into words. There are seven kinds of expansion performed: brace expansion, tilde expansion, parameter and variable expansion, command substitution, arithmetic expansion, word splitting, and pathname expansion.

Replacing the ~ is tilde expansion, while your $(...) is command substitution. Now you see that after the command substitution is performed, no further tilde expansion takes place. With real absolute paths (starting at the file system root /) it would work. But you can perform the expansion yourself with sed:

vim $(sed "s_~_${HOME}_g" filelist)
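The ordering can be observed directly: a tilde typed on the command line is expanded, but a tilde that only appears after command substitution stays literal.

```shell
# Tilde expansion runs before command substitution, so a ~ produced
# by $(...) is never expanded.
echo ~              # prints your home directory
echo $(echo '~')    # prints a literal ~
```

This is exactly why vim receives a file name starting with a literal ~ when the path comes out of $(cat filelist).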
Opening files in vim with a filelist not working
1,654,992,595,000
Basically, I'd like to use a launcher to open a new tab with a program running in it when I already have a terminal open (obviously). To do this I use:

xfce4-terminal --tab --drop-down -x

You'll notice I'm also using --drop-down, which is essential to my ideal setup; I'm unsure whether it matters to my question, but I included it just in case. Anyway, what is bothering me is that when I have no terminal open at all and I use the above command/click the launcher, what essentially amounts to a useless empty tab opens as well as the desired tab with the program running in it. Is there any way to prevent this empty tab?
A simple script like this can be used to prevent the empty tab:

#!/bin/bash
c=$(ps -e | grep -c xfce4-terminal)
if [ $c -gt 0 ]
then
    xfce4-terminal --tab --drop-down -x $1
else
    xfce4-terminal --drop-down -x $1
fi

Assuming the script is named xfce4termtab, the launcher command would be either

xfce4termtab program

or, with an argument,

xfce4termtab 'program arg'

Notes: The script's permissions need to be set to make it executable. If the script isn't located in a directory on your PATH, you'll need to provide the full path to the script in the launcher command.
Is there a way to prevent an empty tab from showing up when using the --tab switch in a launcher for xfce4-terminal?
1,654,992,595,000
I know how to view the current processes on my Ubuntu machine. For example, I can leave a ping running:

ping localhost

Then do:

$ ps -ef | grep ping

Which shows:

user1 2875 1231 0 Feb08 ? 00:00:03 /usr/libexec/gsd-housekeeping
user2 96834 43257 0 14:21 pts/4 00:00:00 ping localhost
root 96837 63560 0 14:21 pts/1 00:00:00 grep --color=auto ping

But what about the host command? I left it running in a loop:

for i in {1..50000}; do host localhost; done

Then do:

$ ps aux | grep host

But all I get is:

root 98021 63560 0 14:24 pts/1 00:00:00 grep --color=auto host

Yes, I even tried as root! Of course, I can see the name resolver, but that's always on:

$ ps -ef | grep resolv
systemd+ 1015 1 0 Feb08 ? 00:00:17 /lib/systemd/systemd-resolved

Should I not also see host from /usr/bin/host? By the way, same thing with ls. I know these commands are sort of special, kind of "built-in", but I thought they would be treated like any other process? Unlike cd there's an actual executable. I ran strace on it:

execve("/usr/bin/host", ["host", "localhost"], 0x7ffdf51ee518 /* 58 vars */) = 0
brk(NULL) = 0x559cc809a000
arch_prctl(0x3001 /* ARCH_??? */, 0x7ffd20032c70) = -1 EINVAL (Invalid argument)
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
...

I don't see anything that would explain it. What am I missing?! Thank you for helping to improve my understanding!
The ping runs once: it has its own internal timer to repeat the task, and the same process stays there until the count expires or you kill it, depending on options. host is an external command, not a shell built-in, so this has nothing to do with sub-processes. But it runs to completion, 50,000 times over. The probability of it being in the process table at the instant ps | grep looks for it is maybe 1%. If you run the ps in a loop too, you might eventually see a few hits. It's also possible that, with two diagnostic processes running, the creation of the host process never synchronises with the ps because of some scheduler constraint.
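The timing argument is easy to sketch: ps only shows processes alive at the instant it samples. Here sleep and true stand in for ping and host respectively (an illustrative demo, not the original commands):

```shell
# A process that stays alive (like ping) is easy to catch with ps;
# one that exits almost immediately (like each host run) is not.
sleep 5 &
pid=$!
ps -p "$pid" -o comm=         # prints: sleep  (still running)
true &
pid2=$!
wait "$pid2"                  # by now the short-lived process is gone
ps -p "$pid2" -o comm= || echo "already exited"
kill "$pid" 2>/dev/null
```

Each host invocation in the loop behaves like the second case: by the time ps runs, that particular process has already exited and a new one (with a new PID) has taken its place.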
How to view process info for "host" and similar commands
1,654,992,595,000
I have some .tta files I downloaded from the internet. I can play them on VLC locally, but they cannot be played from certain media player apps, for example, an Android app. So I need to convert the .tta files to .mp3 or .wav files. Since they're high-resolution audio, I'd like to know how to convert them to wav (or flac) rather than to mp3, but if you know, I'd like to know both ways. So, is there any way to do that? Thanks.
for i in *.tta; do
    ffmpeg -i "$i" "${i%.tta}.wav"
done
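The ${i%.tta} parameter expansion strips the .tta suffix before the new extension is appended, so the same loop produces flac (or mp3) simply by changing the extension, since ffmpeg picks the output codec from it. A quick check of the expansion itself (file name is made up):

```shell
# ${i%.tta} removes the shortest trailing match of ".tta".
i="album/track01.tta"
echo "${i%.tta}.wav"     # album/track01.wav
echo "${i%.tta}.flac"    # album/track01.flac
```

For mp3 output, adding a quality option such as -q:a 0 before the output name gives high-quality VBR, though the exact settings you want are a matter of taste.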
Convert TTA file to MP3 or WAV file?
1,654,992,595,000
The documentation mentions no command-line arguments for launching the setup program. The Windows version appears to have a dedicated executable.
It’s a separate executable on Linux too, chocolate-doom-setup (which is a link to chocolate-setup). Each game variant has its own setup: chocolate-heretic-setup, chocolate-hexen-setup, chocolate-strife-setup.
How to access the setup tool in chocolate-doom?
1,654,992,595,000
The man page for txt2html says:

--make_links
    Should we try to build links? If this is false, then the links dictionaries are not consulted and only structural text-to-HTML conversion is done. (default: true)

I want to set this to false. How do I do this? I could not find this information, and have tried several guesses.
The txt2html manual also says:

Boolean options can be negated by preceding them with no [...]

The manual then refers to the Perl package Getopt::Long. In its manual, one may read the following about boolean options:

The option does not take an argument and may be negated by prefixing it with no or no-. [...]

This means that to invert the sense of the --make_links option, use --no-make_links or --nomake_links.
How to specify boolean value in argument to external command?
1,654,992,595,000
I've created a zip archive using the following command:

zip -e myfolder.zip myfolder/

which prompts for a password and compresses the folder without errors. Now I'm trying to unzip the archive using this other command:

unzip myfolder.zip

which should supposedly ask for the password I set before, but doesn't - it just extracts an empty folder. I've tried using the -p mypassword option, but with the same results. What could be the problem?
Since you did not use the -r option when creating the archive, your archive contains only the directory, and not the files inside it. Apparently the encryption of a Zip file does not extend to protecting the directory structure, just the contents of the files. As a result, if a zip file contains only directories and no files, the encryption/decryption does nothing at all. Recreate the archive with zip -er myfolder.zip myfolder/ so the files inside the folder are actually included (and encrypted).
password protected .zip extracts to empty folder (without prompting for password)
1,654,992,595,000
I'm trying to measure the CPU% consumed by my app on a multi-core machine, meaning htop CPU% reports can go over 100%. I'm trying to get a simple read on CPU usage difference when I run my app in one configuration vs. another, but the change is likely less than 1% CPU and I'm seeing the following "107." for my process: I guess they hardcoded the CPU% column to only support 4 characters... is there a way to expand the width of this column so I can see fractional parts of three digit CPU% values? Ideally two digits of precision past the decimal.
I'm pretty sure that is going to be wasted precision*, in the sense that once you get to four significant digits the sampling would have to be accurate to more than 1 part in 1,000 and for five digits it would have to be more than one part in 10,000. It is very unlikely that the sampling is going to be accurate enough to detect that. And that's not even taking into account any inaccuracies because of other processes running on the same machine. Instead, what is usually done is running the code in a profiler, which can give accurate measurements of the performance of the code in isolation. * Like the joke about the museum employee telling people a dinosaur is 50,000,003 years old. Why so precise? Because it was dated as 50 million years old three years ago.
Can htop show more than 4 characters of CPU% data?
1,654,992,595,000
I've created a quantum-random number generator as part of my thesis and I'm trying to test it using the Dieharder test suite. However, I still seem to get a few weak results (not reproducibly on the same test) even though I'm using -a -y 1 -k 2 as my options. The man page indicates that -y should resolve ambiguities to either a pass or a fail, with no weak results. Am I missing something?
According to the manual, the flag to resolve ambiguity is -Y 1, not -y 1, which passes a parameter to the running test. Judging by that, you probably want -a -Y 1 -k 2, not -y. (Disclaimer: I've never used the tool in question, this is just from reading the manual page.)
Dieharder weak results even with RA mode
1,606,501,408,000
I want to join two files on a Linux machine. I want to keep only the lines included in the first file. The first file is an unzipped file (no header, only one column):

1_4
3_4
4_63
6_2

The second file is a gz file (with a header, 16 columns):

CHR POS rsid SNPID Allele1 Allele2 AC_Allele2 AF_Allele2 imputationInfo N BETA SE Tstat p.value p.value.NA Is.SPA.converge
1 4 78 42 850 284 102 478 199 3777 485 2.5 2.4 23 35 336
8 3 74 24 0 2485 21 48 9 77 85 0.5 5.4 42 4312 335
many more lines

I tried as below:

join -11 -21 <(cat file1 | sort -k1,1) <(zcat file2.gz | sed 1,1d | awk 'NR>1{print $1"_"$2,$1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15,$16}' | sort -k1,1) | awk '{print $1,$2,$3,$6,$5,$9+$10,$8,$11,$12,$7}' > outfile

The output file does not include only the lines listed in the first file. Does anybody know what is wrong? Thanks in advance!
You have one error that means you will miss the first line from file2. You have both sed 1,1d, which will delete the first line (the header), but also NR>1 in the awk, which will again skip the first line. You probably wanted this instead:

join -11 -21 <(cat file1 | sort -k1,1) \
    <(zcat file2.gz | awk 'NR>1{print $1"_"$2,$1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15,$16}' | sort -k1,1) |
    awk '{print $1,$2,$3,$6,$5,$9+$10,$8,$11,$12,$7}'

That said, everything else should work as you describe. I tested using these example files:

$ cat file1
1_4
3_4
4_63
6_2

and

$ zcat file2
CHR POS rsid SNPID Allele1 Allele2 AC_Allele2 AF_Allele2 imputationInfo N BETA SE Tstat p.value p.value.NA Is.SPA.converge
1 4 78 42 850 284 102 478 199 3777 485 2.5 2.4 23 35 336
1 8 78 42 850 284 102 478 199 3777 485 2.5 2.4 23 35 336

And, as expected, I only got one line of output, for 1_4:

$ join -11 -21 <(cat file1 | sort -k1,1) \
    <(zcat file2.gz | awk 'NR>1{print $1"_"$2,$1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15,$16}' | sort -k1,1) |
    awk '{print $1,$2,$3,$6,$5,$9+$10,$8,$11,$12,$7}'
1_4 1 4 850 42 677 102 3777 485 284

If this is not what you are seeing, please edit your question and include an example we can actually use to reproduce the error.
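The double header-skip is easy to demonstrate in isolation: piping through sed 1,1d and then awk 'NR>1' drops two lines, not one.

```shell
# sed 1,1d removes the header; awk NR>1 then removes the first data row too.
printf 'header\nrow1\nrow2\n' | sed 1,1d | awk 'NR>1'   # prints only: row2
printf 'header\nrow1\nrow2\n' | awk 'NR>1'              # prints: row1, row2
```

Either filter alone skips exactly the header; combining both silently loses the first data row, which is why the fixed pipeline keeps only one of them.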
How to join two files in Linux?
1,606,501,408,000
I have a file named file1 with information like below:

TCONS_00000011 XLOC_000003 - u q1:MSTRG.39|MSTRG.39.9|4|0.000000|0.000000|0.000000|7468
TCONS_00000012 XLOC_000004 - u q1:MSTRG.41|MSTRG.41.1|2|0.000000|0.000000|0.000000|1270
TCONS_00000013 XLOC_000003 - u q1:MSTRG.39|MSTRG.39.10|2|0.000000|0.000000|0.000000|6835
TCONS_00000014 XLOC_000003 - u q1:MSTRG.39|MSTRG.39.11|2|0.000000|0.000000|0.000000|880
TCONS_00000015 XLOC_000003 - u q1:MSTRG.39|MSTRG.39.12|3|0.000000|0.000000|0.000000|377
TCONS_00000016 XLOC_000005 - u q1:MSTRG.2|MSTRG.2.1|1|0.000000|0.000000|0.000000|709
TCONS_00000017 XLOC_000006 - u q1:MSTRG.4|MSTRG.4.1|1|0.000000|0.000000|0.000000|343
TCONS_00000018 XLOC_000007 - u q1:MSTRG.40|MSTRG.40.1|7|0.000000|0.000000|0.000000|12112
TCONS_00000019 XLOC_000007 - u q1:MSTRG.40|MSTRG.40.2|2|0.000000|0.000000|0.000000|310
TCONS_00000020 XLOC_000007 - u q1:MSTRG.40|MSTRG.40.3|3|0.000000|0.000000|0.000000|538
TCONS_00000021 XLOC_000008 - u q1:MSTRG.42|MSTRG.42.1|9|0.000000|0.000000|0.000000|6331
TCONS_00000022 XLOC_000008 - u q1:MSTRG.42|MSTRG.42.2|5|0.000000|0.000000|0.000000|1311
TCONS_00000023 XLOC_000008 - u q1:MSTRG.42|MSTRG.42.3|5|0.000000|0.000000|0.000000|923
TCONS_00000024 XLOC_000008 - u q1:MSTRG.42|MSTRG.42.4|2|0.000000|0.000000|0.000000|455
TCONS_00000025 XLOC_000009 - u q1:MSTRG.7|MSTRG.7.1|1|0.000000|0.000000|0.000000|232
TCONS_00000026 XLOC_000010 - u q1:MSTRG.6|MSTRG.6.1|1|0.000000|0.000000|0.000000|483
TCONS_00000027 XLOC_000011 - u q1:MSTRG.12|MSTRG.12.1|2|0.000000|0.000000|0.000000|2489
TCONS_00000028 XLOC_000012 - u q1:MSTRG.14|MSTRG.14.1|1|0.000000|0.000000|0.000000|7604
TCONS_00000029 XLOC_000013 - u q1:MSTRG.55|MSTRG.55.1|4|0.000000|0.000000|0.000000|1511

And file2 is like below:

XLOC_000005
XLOC_000007
XLOC_000009
XLOC_000010
XLOC_000012

Based on the information in file2, if an entry matches the second column in file1, I want to extract that whole line from file1.
And the output should look like below:

TCONS_00000016 XLOC_000005 - u q1:MSTRG.2|MSTRG.2.1|1|0.000000|0.000000|0.000000|709
TCONS_00000018 XLOC_000007 - u q1:MSTRG.40|MSTRG.40.1|7|0.000000|0.000000|0.000000|12112
TCONS_00000019 XLOC_000007 - u q1:MSTRG.40|MSTRG.40.2|2|0.000000|0.000000|0.000000|310
TCONS_00000020 XLOC_000007 - u q1:MSTRG.40|MSTRG.40.3|3|0.000000|0.000000|0.000000|538
TCONS_00000025 XLOC_000009 - u q1:MSTRG.7|MSTRG.7.1|1|0.000000|0.000000|0.000000|232
TCONS_00000026 XLOC_000010 - u q1:MSTRG.6|MSTRG.6.1|1|0.000000|0.000000|0.000000|483
TCONS_00000028 XLOC_000012 - u q1:MSTRG.14|MSTRG.14.1|1|0.000000|0.000000|0.000000|7604

How can I do this in Linux?
This is probably what you want: awk 'NR==FNR{a[$1]; next} $2 in a' file2 file1
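How it works: while awk reads the first file, NR==FNR holds, so each key is stored as a subscript of array a; for the second file the condition $2 in a prints the matching lines. A small self-contained run (abbreviated data modelled on the question):

```shell
# file2-style key list, then file1-style data, as in the question.
printf 'XLOC_000005\nXLOC_000007\n' > /tmp/keys
printf 'T1 XLOC_000004 x\nT2 XLOC_000005 y\nT3 XLOC_000007 z\n' > /tmp/data
awk 'NR==FNR{a[$1]; next} $2 in a' /tmp/keys /tmp/data
# T2 XLOC_000005 y
# T3 XLOC_000007 z
```

Note the bare a[$1] is enough to create the array entry; in (membership) tests for the subscript, not the value.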
How to get all the matches from a file based on names in another file? [duplicate]
1,606,501,408,000
stty -echo; cat -v; stty echo is a technique to see what keys you send to the terminal. But I wonder: how does this command work? When I remove stty -echo, it prints twice what you type in. I know stty -echo disables the terminal echoing what you type. More specifically, my question is: why can I use ';' to connect the commands, so that echo is disabled first and then re-enabled after the cat -v command? Is there any special behaviour of ';' involved at all?
; just separates commands so they are run one after the other. Here, if you enter that at the prompt of an interactive shell, the terminal device local echo will have been disabled and reenabled by the time you get back to the prompt, as long as you exit cat normally (with Ctrl+D twice, or on an empty line). If cat is interrupted with SIGINT or SIGQUIT (if you press Ctrl+C or Ctrl+\), shells like bash cancel the whole command line, so the stty echo command will not be run, and the local echo won't be reenabled.

In the zsh shell, you could do instead:

STTY=-echo cat -vt

Which is special syntax to change some tty settings only for the duration of a command. That way, the tty settings will be restored even if cat is interrupted. Though zsh always restores the tty local echo by itself anyway.

In bash, you could do something similar with a helper function:

with_different_tty_settings() (
  tty_settings=$(stty -g) # save current settings
  trap 'stty "$tty_settings"' INT EXIT QUIT
  set -o noglob
  local IFS
  stty $STTY # split $STTY on default IFS characters
  "$@"
)

And call cat as:

STTY=-echo with_different_tty_settings cat -vt

(contrary to zsh's STTY, it doesn't handle job suspension (with Ctrl+Z for instance) though).

If you change it to STTY='-echo -isig', you'll be able to see what character Ctrl+C sends. With STTY='raw -echo', you'd be able to see all characters (unmodified by the tty line discipline, and as soon as you enter them), but then you wouldn't be able to terminate cat. But you could do STTY='raw -echo time 30 min 0' for cat to exit after 3 seconds of inactivity.
How does `stty -echo; cat -v; stty echo` work to echo special keys?
1,606,501,408,000
I'm using the scanimage --batch-prompt command to scan multiple documents one by one. That way it asks to confirm the scan for each page, including the first one. The problem is, my usual use case is to launch that command when I've already placed the first page in the scanner, and I want it to be processed without pressing any keys. Is it possible to use batch mode in a way that will automatically scan the first page but then wait for user confirmation for all others?
Scanimage does not have such an option. With a simple bash trick, you can provide the first Enter with an echo and wait for the other Enters with cat. That is what echo;cat does. You can test this with:

(echo;cat)|sed 's/^/START/;s/$/END/'

So that is fed into the STDIN of scanimage:

(echo; cat) | scanimage --batch-prompt
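The behaviour is visible with any piped input: the subshell writes one newline immediately (standing in for the first Enter) and then cat relays everything else unchanged.

```shell
# echo supplies the automatic first "Enter"; cat forwards the rest.
printf 'page2\npage3\n' | (echo; cat)
# (blank line)
# page2
# page3
```

In the real use case, the rest of the input comes from the keyboard instead of a pipe, so cat blocks waiting for each subsequent Enter, which is exactly the per-page confirmation the question wants to keep.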
Automatically scan first page with scanimage in batch mode
1,606,501,408,000
I am a newbie with the crontab command, and while I was investigating it, I suddenly typed some number and made my crontab -e look like this:

pi@raspberrypi:~ $ crontab -e
no crontab for pi - using an empty one
889

Is there any way to set crontab back to default, or how do I delete the entries? I just want to use crontab to run my tasks automatically.

# Edit this file to introduce tasks to be run by cron.
#
# Each task to run has to be defined through a single line
# indicating with different fields when the task will be run
# and what command to run for the task
#
# To define the time you can provide concrete values for
# minute (m), hour (h), day of month (dom), month (mon),
# and day of week (dow) or use '*' in these fields (for 'any').
#
# Notice that tasks will be started based on the cron's system
# daemon's notion of time and timezones.
#
# Output of the crontab jobs (including errors) is sent through
# email to the user the crontab file belongs to (unless redirected).
#
# For example, you can run a backup of all your user accounts
# at 5 a.m every week with:
# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
#
# For more information see the manual pages of crontab(5) and cron(8)
#
# m h dom mon dow command

"/tmp/crontab.QzVh1G/crontab" 23 lines, 898964 characters

It shows like this after I follow your instruction (export VISUAL=vi and crontab -e), and it seems I cannot edit this file except using :qa! to exit. Is there anything I missed?
Your editor is set to ed. The ed editor is a very basic line editor which will output the number of bytes in the file when you open it. In this case, your crontab file contains 889 bytes (type ,p and press Enter in the editor to see the contents of the file). You most likely don't want to use ed as your editor (or you would have recognized that you had started it). To exit the editor, simply type q and press Enter, or press Ctrl+D. Then run crontab -e again, but with the VISUAL environment variable set to the editor that you most commonly use to edit files on your system. Here is how you may set VISUAL to vi, as an example, but you may use nano or any other terminal editor that happens to be installed:

export VISUAL=vi
crontab -e

You may want to set the value of VISUAL in your shell's startup file (in ~/.bashrc if you're using bash).
Crontab -e simple problem
1,606,501,408,000
I am trying to extract some info from a text block with markers like #@ and #@@. Using the command below with the example file works, but when trying to chain it with -e it does not work as expected.

Current command (not ideal):

sed -n "/^#@/,/#@@/p" file | sed 's/[#@]*//'

Reworked command (does not work):

sed -en "/^#@/,/#@@/p" -e 's/[#@]*//' file

Desired output:

text title
text line
text line

File format:

#
#
#@ text title
# text line
# text line
#@@
#

What am I doing wrong?
The command

sed -en "/^#@/,/#@@/p" -e 's/[#@]*//' file

will likely error out, because -en takes n as the sed expression, so "/^#@/,/#@@/p" is then treated as an input file name. If you want to combine -e with other options, you must put the expression argument directly after the -e, like -ne "/^#@/,/#@@/p", or separate them completely, like -n -e "/^#@/,/#@@/p". However, it looks like you want to apply the substitute command to the addressed lines and then print them, which is really a single expression:

$ sed -n '/^#@/,/#@@/s/^#@*//p' file
text title
text line
text line

To remove leading whitespace as well:

$ sed -n '/^#@/,/#@@/s/^#[@ ]*//p' file
text title
text line
text line
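A self-contained run of the single-expression form (sample file condensed from the question; note that the closing #@@ marker also matches the substitution, so it prints as an empty line):

```shell
printf '#\n#@ title\n# line one\n#@@\n#\n' > /tmp/marked.txt
sed -n '/^#@/,/#@@/s/^#[@ ]*//p' /tmp/marked.txt
# title
# line one
# (empty line from the #@@ delimiter)
```

If that trailing empty line matters, a slightly stricter address or an extra filter on the delimiter line would be needed.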
How to chain multiple sed commands properly
1,606,501,408,000
I have a directory tree containing photos & videos, where I've "tagged" some information via the path. What I'd like to do is move all folders of a specific name into a separate folder, but reproduce the paths leading up to each instance of folders of that name. For example, starting with:

/Media
.../Pics
....../Travel
........./Round-The-World1
........./Round-The-World2
............./Japan
............./Europe
................/Italy
................/Spain
....../Home
.../Vids
....../Travel
........./Round-The-World1
........./Round-The-World2
............./Japan
............./Europe
................/Italy
................/Spain
....../Home

Let's say I wanted to move the folders with the exact name "Europe" (i.e. not with "Europe" as a substring - only folders that are named "Europe", and their children). The desired result:

/Media
.../Pics
....../Travel
........./Round-The-World1
........./Round-The-World2
............./Japan
....../Home
.../Vids
....../Travel
........./Round-The-World1
........./Round-The-World2
............./Japan
....../Home
/New
.../Pics
....../Travel
........./Round-The-World2
............./Europe
................/Italy
................/Spain
.../Vids
....../Travel
........./Round-The-World2
............./Europe
................/Italy
................/Spain

It's somewhat similar to "Extract files with specific file extension and keep directory structure?", but I'm quite a beginner with the Linux CLI, and don't want to risk screwing up my organization - so any help would be appreciated :)
One thing you need to take into consideration before looking at the following answer is that you will effectively be recreating the whole path all the way from /

This will be a multi-step process, because you are looking to recreate the parent folder structure, and not precisely copy it and its contents. The structure is as follows:

home/
├── fol_a
│   ├── fol_ABC
│   │   └── My Folder
│   │       └── file 2
│   └── fol_XYZ
│       └── My Folder
│           ├── file1
│           └── fol_extra
│               └── file 3
└── my test

11 directories, 3 files

in other words, the full path of file1 is /home/fol_a/fol_XYZ/My Folder/file1. Our goal is to move the folders named My Folder, with their contents, to the my test folder such that their paths become /home/my test/home/fol_a/fol_XYZ/My Folder/* and /home/my test/home/fol_a/fol_ABC/My Folder/*

1. Rebuild directory structure and copy appropriate folders and their contents

Running from the parent folder of the tree branch where folders of the same name are located (in our example it would be fol_a, but in your example it would be /Media), you can do the following.

NOTE: export these variables, as DEST and SOURCE will be necessary for feeding variables into the bash -c of find:

DEST = target path
SOURCE = name of folder to copy, with its full path recreated
BRANCH = the common branch of the same-named folders you wish to copy

export DEST="/home/my test"
export SOURCE="My Folder"
export BRANCH="/home/fol_a"

find "$BRANCH" -type d -name "$SOURCE" -exec bash -c 'mkdir -p "$DEST""$(realpath "{}")" ; cp -r "{}"/* "$DEST""$(realpath "{}")"' \;

NOTE: If you get a message like cp: cannot stat './[FOLDERNAME]/*': No such file or directory, it is not an error; it just means that the cp -r [FOLDERNAME]/* command happened to execute on a folder that was empty.

To quickly check the folder structure and the files in your destination folder to see if the copying went correctly, you can run:

tree "$DEST"/..
At this stage (before removing) your overall folder tree will look like so:

home/
├── fol_a
│   ├── fol_ABC
│   │   └── My Folder
│   │       └── file 2
│   └── fol_XYZ
│       └── My Folder
│           ├── file1
│           └── fol_extra
│               └── file 3
└── my test
    └── home
        └── fol_a
            ├── fol_ABC
            │   └── My Folder
            │       └── file 2
            └── fol_XYZ
                └── My Folder
                    ├── file1
                    └── fol_extra
                        └── file 3

14 directories, 6 files

2. Remove the folder tree from the old common branch

Finally (and after double checking all the files are actually within the appropriate folders etc.), if everything looks correct and you are looking to truly move the folders and their contents (and not just copy), you can finish it off with the following, using the same previously declared variables (in the above example, BRANCH was /home/fol_a):

find "$BRANCH" -type d -name "$SOURCE" -exec rm -r "{}" \;

NOTE: You might get messages like find: [$SOURCE]: No such file or directory, but this is more or less normal when deleting contents and directory using find and the -exec directive.

One last check is to run tree, and the final output becomes:

home/
├── fol_a
│   ├── fol_ABC
│   └── fol_XYZ
└── my test
    └── home
        └── fol_a
            ├── fol_ABC
            │   └── My Folder
            │       └── file 2
            └── fol_XYZ
                └── My Folder
                    ├── file1
                    └── fol_extra
                        └── file 3

11 directories, 3 files
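The copy and delete steps can also be condensed into a single hedged sketch (assumes GNU cp for --parents, which recreates the leading path components under the destination; -prune stops find descending into a directory it is about to move; the demo tree below is made up):

```shell
# Demo tree standing in for /Media:
mkdir -p /tmp/media/Pics/Travel/Europe/Italy /tmp/media/Pics/Travel/Japan
touch /tmp/media/Pics/Travel/Europe/Italy/pic.jpg

# Sketch: move every directory named $SOURCE under $BRANCH into $DEST,
# recreating the path that leads to each one.
export DEST="/tmp/new"
SOURCE="Europe"; BRANCH="/tmp/media"
mkdir -p "$DEST"
find "$BRANCH" -type d -name "$SOURCE" -prune -exec sh -c '
    cp -r --parents "$1" "$DEST" && rm -r "$1"
' _ {} \;
ls /tmp/new/tmp/media/Pics/Travel    # Europe
```

Because rm -r only runs if cp succeeded (the && guard), a failed copy leaves the source untouched, which makes this somewhat safer than running the two find passes independently.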
Extract Folders With Certain Name To Parallel Directory Structure?
1,606,501,408,000
I have the same question as this: configure error trying to install wgrib2 in mint: configure: error: C compiler cannot create executables However, I am not sure how to solve the issue. I am not overly familiar with Linux. Here are the outputs from the same commands as the original post: $ type -a cc cc is /usr/bin/cc $ echo $CC gcc $ ls -Alh $(command -v cc) lrwxr-xr-x 1 root wheel 5B May 27 20:46 /usr/bin/cc -> clang What exactly do I need to get the compiler to work? Edit: Here is the error: checking build system type... x86_64-apple-darwin19.6.0 checking host system type... x86_64-apple-darwin19.6.0 checking how to print strings... printf checking for gcc... gcc checking whether the C compiler works... no configure: error: in `/Users/eli.turaskyriskpulse.com/Documents/Misc/wgrib/grib2/libaec-1.0.2': configure: error: C compiler cannot create executables See `config.log' for more details make: *** [/Users/eli.turaskyriskpulse.com/Documents/Misc/wgrib/grib2/lib/libaec.a] Error 77
Your problem is different from the Mint Linux user's. That person was trying to use a compiler (icc) that was not installed. The Intel C compiler (icc) is a specific product, not a generic name for a C compiler that compiles code for an Intel CPU. You are on macOS, and are having a problem compiling the AEC library. You should try the real GNU C compiler (gcc) rather than clang. See: https://bovineaerospace.wordpress.com/2017/08/20/how-to-install-wgrib2-in-osx/ This advice is echoed by other installation pages.
configure error trying to install wgrib2
1,606,501,408,000
I've installed exa via cargo and added the path to my ~/.bashrc file: PATH=/root/.cargo/bin:$PATH as per the post-installation instructions: warning: be sure to add /root/.cargo/bin to your PATH to be able to run the installed binaries Despite this, when I try to run exa I am met with Command 'exa' not found, did you mean: ... ... ... When I run printenv PATH, /root/.cargo/bin appears at the end of the PATH, even though I had added it at the beginning. When I run sudo /root/.cargo/bin/exa, the command runs fine. What is causing this/how can I get it to run properly?
If sourcing the path isn't working, it's likely a permissions issue. The sudo version uses a different PATH variable since you are running as root, so while permissions aren't an issue there, your normal user still can't reach the binary. For a user to "path through" (I'm not sure the technical term for that) a directory, they need to have "execute" permission on that directory. It shows up as an x in the 3rd position when listing the contents of the directory. So for me, when I run sudo ls -lash /root, some of the results look like this: 4.0K drwx------ 10 root root 4.0K Feb 3 2020 . 4.0K drwxr-xr-x 24 root root 4.0K May 28 16:27 .. The second set of letters and dashes are the permissions. The leading d means it's a directory; the next three letters (rwx) are the read, write, and execute permissions for the owner of the file. The next three (--- in the case of . which is /root, and r-x in the case of .. which is /) are for the "group" that is associated with the file. And the final set is for everyone else. If your permissions are anything like mine, the whole root folder is only readable, writable, and passable by the root user. While you can probably get around this by adding execute to /root with sudo chmod +x /root, it is generally a bad idea to mess with the permissions of the root directory/account, and an especially bad idea to do it with individual commands rather than groups or other management tools. What you likely want to do instead is look for a way to install the binary for all users. There is likely an install method, commonly something like this: sudo make install But check the documentation; they may do something else.
Command 'exa' not found
1,606,501,408,000
Suppose I have two applications called firefox and arduino. At first, I typed firefox into the terminal and I could still use the terminal as usual. But when I typed arduino, I couldn't use the terminal anymore and I had to run it as a background process. So, what is the difference between them?
It depends on how the program is started. There are various modes in which an application can be started; a couple of them relevant to this question are daemon mode and foreground mode. I think when you start Firefox, the application by default is started as a daemon: in daemon mode the application silently starts running in the background so that no user interaction (just for an example) can hamper it. More about daemons here. Another type is foreground: when you start your Arduino application, it is programmed to start in foreground mode by default (my guess). Foreground mode does exactly what you mentioned; it just stays on the terminal until you kill it with CTRL-C or some other method. Foreground mode is useful when you want to know what the application is actually doing.
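As an illustration of the workaround mentioned in the question, here is one generic way to keep the terminal usable when a program insists on running in the foreground. This is a sketch, not part of the original answer; `sleep 5` stands in for a GUI program such as arduino, which may not be installed here:

```shell
# Start the program in the background with & and detach it from the
# terminal with nohup, so closing the terminal does not kill it.
# `sleep 5` is a stand-in for a foreground program like arduino.
nohup sleep 5 >/dev/null 2>&1 &
echo "launched as pid $!"   # $! holds the PID of the background job
```

The shell prompt returns immediately, and the program keeps running in the background.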
Why sometimes when i run a program in terminal, it won't run in the terminal?
1,606,501,408,000
I am pasting the output of the shell script for i in `cat disk.txt`; do echo Server:$i ssh -q -i ~/production_private_key.pem $i "df -h --output=source,size,used,avail,pcent| grep -v tmp" done below. I need to print the above output in separate columns, as below. Server IP | File System | Total Size | Used Space | Available Space | Percentage Can someone help here?
Add the IP for each line using xargs: echo 'Server IP | File System | Total Size | Used Space | Available Space | Percentage' for i in ...; do ssh -q -i ~/production_private_key.pem $i "df -h --output=source,size,used,avail,pcent \ | tail -n+2 \ | grep -v tmp" \ | xargs -I{} printf '%s %s\n' "$i" {} done Replacing the blanks with | should be easy.
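For the last step, one possible way (an illustration, not part of the original answer) to replace the blanks with | is awk; the sample df line below is made up:

```shell
# Turn the space-separated fields of each row into a pipe-separated row:
echo '10.0.0.1 /dev/sda1 50G 20G 30G 40%' |
  awk '{ out = $1; for (f = 2; f <= NF; f++) out = out " | " $f; print out }'
# -> 10.0.0.1 | /dev/sda1 | 50G | 20G | 30G | 40%
```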
I need to print output of a shell script in to separate columns
1,606,501,408,000
I installed PyCharm Professional from the Ubuntu Software Store. I am using Ubuntu 20.04, although I don't think that matters much here. I can't use it in the command line. I can't seem to find where it's installed, so I can't add the executable launcher to the path. It is working absolutely fine, but I'm habituated to opening apps from the command line, so it would help if I could use something similar to subl . or code ..
PyCharm is installed as a snap in Ubuntu 20.04. Typing pycharm-professional from the CLI should launch it. Otherwise you can launch using the full path: $(mount | grep pycharm | awk '{ print $3 }')/bin/pycharm.sh On my system the full path is /snap/pycharm-professional/198/bin/pycharm.sh. Yours will be slightly different, and so use the command above.
What is the path of PyCharm in ubuntu when installed from the software store?
1,606,501,408,000
This is my logrotate setting /home/sy/logs/kitxit*/*/tend.log { daily rotate 10 dateext compress delaycompress copytruncate missingok notifempty su apache apache } Output is like below drwxr-xr-x 2 apache apache 4096 Apr 30 13:00 . drwxr-xr-x 6 apache apache 4096 Apr 30 13:00 .. -rw-r--r-- 1 apache apache 21318609 May 2 21:25 tend.log -rw-r--r-- 1 apache apache 4091 Feb 24 03:02 tend.log-20200224.gz -rw-r--r-- 1 apache apache 4065 Feb 25 03:02 tend.log-20200225.gz -rw-r--r-- 1 apache apache 4460 Feb 26 03:03 tend.log-20200226.gz -rw-r--r-- 1 apache apache 4049 Feb 27 03:03 tend.log-20200227.gz -rw-r--r-- 1 apache apache 2619 Feb 28 03:03 tend.log-20200228.gz -rw-r--r-- 1 apache apache 1312 Feb 29 03:03 tend.log-20200229.gz -rw-r--r-- 1 apache apache 1339 Mar 1 03:03 tend.log-20200301.gz -rw-r--r-- 1 apache apache 1305 Mar 2 03:03 tend.log-20200302.gz -rw-r--r-- 1 apache apache 2669 Mar 3 03:02 tend.log-20200303.gz -rw-r--r-- 1 apache apache 70011 Mar 4 03:03 tend.log-20200304 Why am I still seeing old log files and not having new ones?
This is a result of debug mode [root@xavs-ken logrotate.d]# logrotate -dv kitxit-tend-sylog reading config file kitxit-tend-sylog Allocating hash table for state file, size 15360 B Handling 1 logs rotating pattern: /home/sy/logs/kitxit*/*/tend/*.log /home/sy/logs/kitxit*/*/sylog/*.log after 1 days (10 rotations) empty log files are not rotated, old logs are removed switching euid to 48 and egid to 48 considering log /home/sy/logs/kitxit2/bola/tend/sql.log log needs rotating considering log /home/sy/logs/kitxit2/bola/tend/tend.log log needs rotating considering log /home/sy/logs/kitxit/bola/tend/mem.log log does not need rotating (log is empty)considering log /home/sy/logs/kitxit/bola/tend/tend.log log needs rotating considering log /home/sy/logs/kitxit/clpl/tend/sql.log log needs rotating considering log /home/sy/logs/kitxit/clpl/tend/tend.log log needs rotating considering log /home/sy/logs/kitxit/mol/tend/tend.log log needs rotating considering log /home/sy/logs/kitxit/pola/tend/tend.log log needs rotating considering log /home/sy/logs/kitxit/solsa/tend/sql.log log needs rotating considering log /home/sy/logs/kitxit/solsa/tend/tend.log log needs rotating considering log /home/sy/logs/kitxit/sario/tend/mem.log log does not need rotating (log is empty)considering log /home/sy/logs/kitxit/sario/tend/tend.log log needs rotating considering log /home/sy/logs/kitxit/sasu/tend/tend.log log needs rotating considering log /home/sy/logs/kitxit/kilo/tend/tend.log log needs rotating considering log /home/sy/logs/kitxit/mixi/tend/mem.log log does not need rotating (log is empty)considering log /home/sy/logs/kitxit/mixi/tend/tend.log log needs rotating considering log /home/sy/logs/kitxit/aziz/tend/tend.log log needs rotating considering log /home/sy/logs/kitxit/xondana/tend/tend.log log needs rotating considering log /home/sy/logs/kitxit2/bola/sylog/action.log log needs rotating considering log /home/sy/logs/kitxit/bola/sylog/action.log log needs rotating considering log 
/home/sy/logs/kitxit/clpl/sylog/action.log log needs rotating considering log /home/sy/logs/kitxit/mol/sylog/action.log log needs rotating considering log /home/sy/logs/kitxit/pola/sylog/action.log log needs rotating considering log /home/sy/logs/kitxit/solsa/sylog/action.log log needs rotating considering log /home/sy/logs/kitxit/entag/sylog/action.log log does not need rotating (log is empty)considering log /home/sy/logs/kitxit/sario/sylog/action.log log needs rotating considering log /home/sy/logs/kitxit/sasu/sylog/action.log log needs rotating considering log /home/sy/logs/kitxit/kilo/sylog/action.log log needs rotating considering log /home/sy/logs/kitxit/mixi/sylog/action.log log needs rotating considering log /home/sy/logs/kitxit/aziz/sylog/action.log log needs rotating considering log /home/sy/logs/kitxit/xondana/sylog/action.log log needs rotating rotating log /home/sy/logs/kitxit2/bola/tend/sql.log, log->rotateCount is 10 dateext suffix '-20200503' glob pattern '-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]' glob finding logs to compress failed copying /home/sy/logs/kitxit2/bola/tend/sql.log to /home/sy/logs/kitxit2/bola/tend/sql.log-20200503 truncating /home/sy/logs/kitxit2/bola/tend/sql.log rotating log /home/sy/logs/kitxit2/bola/tend/tend.log, log->rotateCount is 10 dateext suffix '-20200503' glob pattern '-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]' glob finding logs to compress failed glob finding old rotated logs failed ...
The log files in the debug output do not correspond to the logrotate path in the configuration file. Files matching tend.log in the debug output considering log /home/sy/logs/kitxit2/bola/tend/tend.log considering log /home/sy/logs/kitxit/clpl/tend/tend.log considering log /home/sy/logs/kitxit/mol/tend/tend.log considering log /home/sy/logs/kitxit/pola/tend/tend.log considering log /home/sy/logs/kitxit/solsa/tend/tend.log considering log /home/sy/logs/kitxit/sasu/tend/tend.log considering log /home/sy/logs/kitxit/kilo/tend/tend.log considering log /home/sy/logs/kitxit/aziz/tend/tend.log considering log /home/sy/logs/kitxit/xondana/tend/tend.log Logrotate configuration /home/sy/logs/kitxit*/*/tend.log This pattern would need to be amended as follows to match the files being considered, i.e. with another */ in the path /home/sy/logs/kitxit*/*/*/tend.log Since your target files are being referenced in the debug output I would surmise that there is another logrotate snippet somewhere that stopped working around March 4th/5th.
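To sanity-check a logrotate glob, you can simply expand it in the shell. This sketch recreates a minimal copy of the layout under /tmp (made-up paths) to show that the extra path level matters:

```shell
# Minimal reproduction of the directory layout (example paths only):
mkdir -p /tmp/globdemo/kitxit/bola/tend
touch /tmp/globdemo/kitxit/bola/tend/tend.log

# The pattern from the configuration file is one level too shallow:
ls /tmp/globdemo/kitxit*/*/tend.log 2>/dev/null | wc -l    # -> 0
# With the extra */ the log file is found:
ls /tmp/globdemo/kitxit*/*/*/tend.log 2>/dev/null | wc -l  # -> 1
```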
Why am I still seeing old log files and not having new ones?
1,606,501,408,000
This is my first time asking a question. I am newly trying to use the command line more and more but this problem is beyond my skill. I want to do a multi-part file transfer. 1) I want to take multiple files file_1.md, file_2.md...etc., from original_folder and copy them to target_folder_master 2) I want to take each file, create a new folder based on the name of each file, for instance there should be a folder named file_1 etc within target_folder_master 3) I want to be able to copy each file into its correspondingly named folder 4) and then rename each file in its target folder from its original name to index.md, for instance file_1.md should be renamed index.md with final path ~/file_1/index.md. My hope is that this is all automated.
Try this: for file in *.md; do mkdir "/path/to/target_folder_master/${file%.*}" mv "$file" "/path/to/target_folder_master/${file%.*}/index.md" done
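A quick illustration of what the ${file%.*} expansion in that loop produces (file_1.md is an example name, and the target path is the one assumed above):

```shell
# ${file%.*} strips the shortest trailing ".suffix" from the name:
file="file_1.md"
echo "${file%.*}"   # -> file_1
echo "/path/to/target_folder_master/${file%.*}/index.md"
# -> /path/to/target_folder_master/file_1/index.md
```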
Copy Files from one directory to another while creating a new folder for each file, named after the file
1,606,501,408,000
On Windows this is quite easy using Process Hacker. Suppose I want to know how many megabytes my torrent client has received and sent on Linux. How would I do this?
You could use nethogs, but it only accumulates statistics since you started it. So if you combine with screen or tmux you could leave it running between terminal sessions and occasionally check it. In this answer nethogs is used in trace mode, which you could output to a logfile and also keep running. Then you could just see the current statistic by looking at the last line in the log at any time. There are other options (like Zabbix or Nagios), but these involve installing more complex "system management" solutions with daemons and/or database backends, with the advantage you can have web dashboards to check status. HTH, ppenguin
Network stats for a program
1,606,501,408,000
I have the following directory structure: top_dir |________AA |_______f1.json |_______f2.json |________BB |_______f1.json |_______f2.json |________CC |_______f1.json |_______f2.json I would like to write a script / command line command to get the following structure new_dir |_______f1_AA.json |_______f2_AA.json |_______f1_BB.json |_______f2_BB.json |_______f1_CC.json |_______f2_CC.json I tried reading into some solutions for renaming files and copying/moving files with the same. However, I am not yet able to solve this. Thanks!
Using a loop: mkdir /path_to/new_dir cd /path_to/top_dir for i in */*.json; do cp "$i" "/path_to/new_dir/$(basename "$i" .json)_$(dirname "$i").json" done $(basename "$i" .json) prints the filename without suffix, e.g. f1 $(dirname "$i") prints the directory name, e.g. AA
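To see what the two command substitutions produce for a typical match of the */*.json glob (AA/f1.json is just an example path):

```shell
i="AA/f1.json"
echo "$(basename "$i" .json)"                       # -> f1
echo "$(dirname "$i")"                              # -> AA
echo "$(basename "$i" .json)_$(dirname "$i").json"  # -> f1_AA.json
```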
Copy files with the same name but in different dirs into a new dir while renaming them
1,606,501,408,000
I am facing problems running two commands one after another in bash.  When I run source2() { '/home/ds/Documents/scripts/Untitled Document 1.sh' && imgpath="$(ls | grep "^unsplash")" } source3() { '/home/ds/Documents/scripts/Untitled Document 2.sh' && imgpath="$(ls | grep "^1920x1080" | shuf -n 1)" } source4() { '/home/ds/Documents/scripts/Untitled Document 3.sh' && imgpath="$(ls | grep "^unsplashimg")" } SOURCES=("source2" "source3" "source4") $(eval $(shuf -n1 -e "${SOURCES[@]}")) echo $imgpath The bash script part runs, but the part after && does not and hence echo $imgpath gives no output. When I run individual commands like '/home/ds/Documents/scripts/Untitled Document 1.sh' && imgpath="$(ls | grep "^unsplash")" then I get desired outputs. What am I doing wrong? I have taken hints from How do I set a variable to the output of a command in Bash? How can I store commands as variables and execute them randomly in bash?
Syntax issues aside, it's how you're calling eval: $(eval $(shuf -n1 -e "${SOURCES[@]}")) The outer $(...) means that the eval happens inside a subshell, then the current shell takes the output and executes that as a command. Because eval runs in a subshell, the contents of the variable will disappear with the subshell. Now, do you need eval? The shuf command will produce a string with the same name as a function. You could write instead: func=$(shuf -n1 -e "${SOURCES[@]}") && "$func" or simply $(shuf -n1 -e "${SOURCES[@]}") In the last case, we do want the shell to execute the output of shuf as a command.
Run two commands one after another in bash, via a function, called with `eval`
1,606,501,408,000
I want to generate a list of file names containing n=1 to k, add the string "cat output xyz.pdf" at its end and pass the result as a parameter to pdftk. It should execute like this: pdftk file1.pdf file2.pdf file3.pdf cat output xyz.pdf How can I automate this directly in the CLI?
If you are using bash as indicated by your question tag, there's no need for a loop: you should be able to use brace expansion. Ex. for k = 32 pdftk file{1..32}.pdf cat output xyz.pdf If the number of files is very large, this approach may become limited by ARG_MAX (resulting in an "argument list too long" error).
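To see what the shell actually hands to pdftk, you can expand the braces with echo first (bash syntax; shown here for k = 5):

```shell
# Brace expansion happens in the shell before the command runs, so
# pdftk simply receives the already-expanded list of file names:
echo file{1..5}.pdf
# -> file1.pdf file2.pdf file3.pdf file4.pdf file5.pdf
```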
Generate parameters for pdftk with loop in bash
1,606,501,408,000
Suppose I have a Apache/Haproxy log: Jan 28 15:45:18 lict haproxy[48318]: 103.133.5.14:52243 [28/Jan/2020:15:45:08.730] LICT_front~ LICT_back/web2 9320/0/0/212/9532 302 556 - - --VN 24/24/4/1/0 0/0 "POST /exam/Users/login HTTP/1.1" Jan 28 15:45:19 lict haproxy[48318]: 37.111.205.140:23757 [28/Jan/2020:15:45:19.355] LICT_front~ LICT_back/web3 56/0/0/1/57 404 373 - - --VN 14/14/2/0/0 0/0 "GET /favicon.ico HTTP/1.1" Jan 28 15:45:20 lict haproxy[48318]: 10.205.122.17:64022 [28/Jan/2020:15:45:04.084] LICT_front~ LICT_back/web3 15725/0/0/212/15937 302 462 - - --VN 13/13/2/0/0 0/0 "POST /exam/Users/login?lang=bn HTTP/1.1" Jan 28 15:45:27 lict haproxy[48318]: 103.133.5.14:52253 [28/Jan/2020:15:45:27.779] LICT_front~ LICT_back/web2 119/0/0/78/202 404 15377 - - --VN 13/13/2/1/0 0/0 "GET /exam/img/24_dbbl.png HTTP/1.1" Jan 28 15:45:33 lict haproxy[48318]: 103.204.209.118:18949 [28/Jan/2020:15:45:33.374] LICT_front~ LICT_back/web2 392/0/1/1/394 404 373 - - --VN 19/19/2/1/0 0/0 "GET /favicon.ico HTTP/1.1" Jan 28 15:45:37 lict haproxy[48318]: 182.163.96.37:58192 [28/Jan/2020:15:45:37.010] LICT_front~ LICT_back/web2 116/0/1/252/369 302 570 - - --VN 18/18/3/2/0 0/0 "POST /exam/Users/login HTTP/1.1" Jan 28 15:45:44 lict haproxy[48318]: 10.205.122.17:64040 [28/Jan/2020:15:45:44.109] LICT_front~ LICT_back/web3 27/0/0/80/113 404 15373 - - --VN 16/16/3/0/0 0/0 "GET /exam/img/24_dbbl.png HTTP/1.1" Jan 28 15:45:44 lict haproxy[48318]: 203.188.251.226:51821 [28/Jan/2020:15:45:44.319] LICT_front~ LICT_back/web3 449/0/0/230/679 302 462 - - --VN 15/15/3/0/0 0/0 "POST /exam/Users/login?lang=bn HTTP/1.1" Jan 28 15:45:47 lict haproxy[48318]: 103.254.86.107:33762 [28/Jan/2020:15:45:47.444] LICT_front~ LICT_back/web3 119/0/1/160/280 200 425 - - --VN 15/15/3/0/0 0/0 "POST /exam/applicantProfiles/savePersonalInfo HTTP/1.1" Jan 28 15:45:48 lict haproxy[48318]: 182.163.96.37:58197 [28/Jan/2020:15:45:48.557] LICT_front~ LICT_back/web2 117/0/1/1/119 404 380 - - --VN 15/15/3/2/0 0/0 "GET 
/images/invalid.png HTTP/1.1" Jan 28 15:45:49 lict haproxy[48318]: 103.237.38.243:4347 [28/Jan/2020:15:45:49.516] LICT_front~ LICT_back/web3 15/0/1/0/16 404 373 - - --VN 18/18/4/1/0 0/0 "GET /favicon.ico HTTP/1.1" Jan 28 15:45:51 lict haproxy[48318]: 103.237.38.243:4348 [28/Jan/2020:15:45:51.007] LICT_front~ LICT_back/web3 10/0/1/235/246 302 564 - - --VN 16/16/3/0/0 0/0 "POST /exam/Users/login?lang=bn HTTP/1.1" Jan 28 15:45:51 lict haproxy[48318]: 182.163.96.37:58198 [28/Jan/2020:15:45:51.669] LICT_front~ LICT_back/web2 113/0/0/2/115 404 378 - - --VN 18/18/3/2/0 0/0 "GET /images/valid.png HTTP/1.1" Jan 28 15:46:02 lict haproxy[48318]: 103.99.129.15:8789 [28/Jan/2020:15:46:01.696] LICT_front~ LICT_back/web3 687/0/0/201/888 302 590 - - --VN 18/18/4/0/0 0/0 "POST /exam/users/login HTTP/1.1" Jan 28 15:46:06 lict haproxy[48318]: 182.163.96.37:58199 [28/Jan/2020:15:46:05.488] LICT_front~ LICT_back/web2 587/0/1/70/658 302 548 - - --VN 19/19/3/2/0 0/0 "GET /exam/users/logout HTTP/1.1" Jan 28 15:46:11 lict haproxy[48318]: 163.53.149.98:50480 [28/Jan/2020:15:46:11.115] LICT_front~ LICT_back/web2 316/0/1/136/454 302 453 - - --VN 18/18/3/2/0 0/0 "GET /exam/users/[email protected]&&account_code=2368455803 HTTP/1.1" I want to filter this log for error codes like 40x or 50x Jan 28 15:45:19 lict haproxy[48318]: 37.111.205.140:23757 [28/Jan/2020:15:45:19.355] LICT_front~ LICT_back/web3 56/0/0/1/57 404 373 - - --VN 14/14/2/0/0 0/0 "GET /favicon.ico HTTP/1.1" Jan 28 15:45:27 lict haproxy[48318]: 103.133.5.14:52253 [28/Jan/2020:15:45:27.779] LICT_front~ LICT_back/web2 119/0/0/78/202 404 15377 - - --VN 13/13/2/1/0 0/0 "GET /exam/img/24_dbbl.png HTTP/1.1" Jan 28 15:45:33 lict haproxy[48318]: 103.204.209.118:18949 [28/Jan/2020:15:45:33.374] LICT_front~ LICT_back/web2 392/0/1/1/394 404 373 - - --VN 19/19/2/1/0 0/0 "GET /favicon.ico HTTP/1.1" Jan 28 15:45:44 lict haproxy[48318]: 10.205.122.17:64040 [28/Jan/2020:15:45:44.109] LICT_front~ LICT_back/web3 27/0/0/80/113 404 15373 - - --VN 
16/16/3/0/0 0/0 "GET /exam/img/24_dbbl.png HTTP/1.1" Jan 28 15:45:48 lict haproxy[48318]: 182.163.96.37:58197 [28/Jan/2020:15:45:48.557] LICT_front~ LICT_back/web2 117/0/1/1/119 404 380 - - --VN 15/15/3/2/0 0/0 "GET /images/invalid.png HTTP/1.1" Jan 28 15:45:49 lict haproxy[48318]: 103.237.38.243:4347 [28/Jan/2020:15:45:49.516] LICT_front~ LICT_back/web3 15/0/1/0/16 404 373 - - --VN 18/18/4/1/0 0/0 "GET /favicon.ico HTTP/1.1" Jan 28 15:45:51 lict haproxy[48318]: 182.163.96.37:58198 [28/Jan/2020:15:45:51.669] LICT_front~ LICT_back/web2 113/0/0/2/115 404 378 - - --VN 18/18/3/2/0 0/0 "GET /images/valid.png HTTP/1.1" How can I do that using grep/awk/sed or other shell scripting tools.
You could match the 11th field with awk: awk '$11 ~ /^[45]0/' logfile Or you could grep for the preceding five numbers separated by '/' plus a space character, the status code and another space character (see HAProxy HTTP log format): grep '[0-9]*/[0-9]*/[0-9]*/[0-9]*/[0-9]* [45]0[0-9] ' logfile or grep -E '([0-9]*/){4}[0-9]* [45]0[0-9] ' logfile
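A quick check of the awk version on two sample lines taken from the log above (in this syslog-prefixed format, the 11th whitespace-separated field is the HTTP status code):

```shell
printf '%s\n' \
  'Jan 28 15:45:19 lict haproxy[48318]: 37.111.205.140:23757 [28/Jan/2020:15:45:19.355] LICT_front~ LICT_back/web3 56/0/0/1/57 404 373 - - --VN 14/14/2/0/0 0/0 "GET /favicon.ico HTTP/1.1"' \
  'Jan 28 15:45:18 lict haproxy[48318]: 103.133.5.14:52243 [28/Jan/2020:15:45:08.730] LICT_front~ LICT_back/web2 9320/0/0/212/9532 302 556 - - --VN 24/24/4/1/0 0/0 "POST /exam/Users/login HTTP/1.1"' |
  awk '$11 ~ /^[45]0/'
# only the 404 line is printed; the 302 line is dropped
```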
Filtering Apache log for errors codes
1,606,501,408,000
In Section 2.3 Token Recognition under Shell Command Language, what does the io_here token refer to? 2.3 Token Recognition The shell shall read its input in terms of lines. (For details about how the shell reads its input, see the description of sh.) The input lines can be of unlimited length. These lines shall be parsed using two major modes: ordinary token recognition and processing of here-documents. When an io_here token has been recognized by the grammar (see Shell Grammar), one or more of the subsequent lines immediately following the next NEWLINE token form the body of one or more here-documents and shall be parsed according to the rules of Here-Document. When it is not processing an io_here, the shell shall break its input into tokens by applying the first applicable rule below to the next character in its input. The token shall be from the current position in the input until a token is delimited according to one of the rules below; the characters forming the token are exactly those in the input, including any quoting characters. If it is indicated that a token is delimited, and no characters have been included in a token, processing shall continue until an actual token is delimited.
The shell grammar defines io_here as io_here : DLESS here_end | DLESSDASH here_end DLESS is <<, DLESSDASH is <<-, and here_end is the end-of-here-document marker. So the io_here token is the token introducing a here-doc.
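For illustration, a minimal here-document; the << operator together with its end marker (EOF here) is what gets tokenized as io_here, and the body is formed from the lines following the next newline:

```shell
cat <<EOF
hello from a here-document
EOF
# -> hello from a here-document
```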
What is the "io_here" token in the Shell Command Language referring to?
1,606,501,408,000
Regarding mapping function keys in vi readline, I have read these two stackexchanges: Remap bash vi keys? Custom key bindings for vi shell mode, ie, "set -o vi"? I have a MacBookPro with a touchbar. The function keys are always on, but unlike physical keys, the virtual touch bar function keys are temperamental and frequently inject junk into the commands I'm typing (this is particularly a problem when attempting to type an underscore... I get a lot of F9, F10, and maybe some F11). I don't use these keys... so I wish I could disable them. But, let's say that I could make them simply go to the end of the line. This is one of my many attempts to map <F9> to go to the end of the line (when in insert mode): set editing-mode vi $if mode=vi set keymap vi-insert "<F9>": end-of-line $endif the result of typing "asdf" at a prompt is as follows: TT->~$ [] (arg: 20) I have placed "[]", above, where the cursor remains after pressing <F9>, in case that is any help. The variations I have tried are as follows: "<F9>": end-of-line <F9>": end-of-line 20: end-of-line "20": end-of-line "arg: 20": end-of-line (arg: 20): end-of-line "(arg: 20)": end-of-line Update: the following .inputrc is now working to "ignore": set keymap vi-insert "\e[20~":redraw-current-line
This is insane but true... I was on a new server today and, having forgotten completely about this question (and the answer buried in comments), I was actually googling for how to do this today. I'm posting my answer for myself or for anyone else that is having problems disabling function key input in vi command line: create or edit your ~/.inputrc file to disable through (macOS, Ubuntu, CentOS at least) use the following: set keymap vi-insert "\e[19~":redraw-current-line "\e[20~":redraw-current-line "\e[21~":redraw-current-line "\e[22~":redraw-current-line "\e[23~":redraw-current-line As @mosvy indicates, redrawing the current line prevents the annoying "(arg: 20)" or "(arg: 21)" from ruining your command line input.
How do I remap function keys in readline bash vi (vi shell mode)?
1,606,501,408,000
I wrote an alias for cross-compiling. alias cross_compile="make CROSS_COMPILE=x86_64-buildroot-linux-uclibc- -C /home/jamal//buildroot-2019.05/output/build/linux-4.19.16 M='$PWD' modules" But the PWD is not being evaluated each time I call cross_compile from the terminal; it is set to a static directory. How can I make sure the PWD is being picked up each time I call cross_compile?
You need to invert all single-quotes to double-quotes, and all double-quotes to single-quotes. This defers the expansion of PWD until the alias is invoked. Shortened example: Paul-) alias cross_compile='echo linux-4.19.16 M="${PWD}" modules' Paul-) Paul-) alias cross_compile alias cross_compile='echo linux-4.19.16 M="${PWD}" modules' Paul-) Paul-) cross_compile linux-4.19.16 M=/home/paul modules Paul-) Paul-) cd Sand* Paul-) pwd /home/paul/SandBox Paul-) cross_compile linux-4.19.16 M=/home/paul/SandBox modules Paul-)
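The same effect in miniature, runnable as a bash script (shopt -s expand_aliases is needed only because alias expansion is off in non-interactive shells; the alias names are made up for the demo):

```shell
shopt -s expand_aliases
cd /tmp
alias at_definition="echo M=$PWD"   # double quotes: $PWD expands NOW
alias at_invocation='echo M=$PWD'   # single quotes: $PWD expands LATER
cd /
at_definition   # -> M=/tmp   (frozen at definition time)
at_invocation   # -> M=/      (evaluated at invocation time)
```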
alias for cross compiling kernel module
1,606,501,408,000
I have two web servers, and I want to know the top 10 IPs sorted by the number of requests. The webservers are Apache based, so I need to look at an access.log file. The problem is that since these files are huge, I really wish not to transfer them locally; thus I would like to know if there is a way to do this in a streaming fashion. I do have ssh access to these machines. One way I thought of would be running something like this: awk "{ print $1 }" access.log | sort | uniq -c | sort -n | tail On both the machines, and then somehow locally combine the results, but this is obviously wrong.
Since the concern is only with the file size, conceptually all that is needed is { ssh server2 cat /path/to/access.log cat /local/path/to/access.log } | awk '{print $1}' | sort | uniq -c | sort -n | tail however, there are a number of things that can be done to improve the speed. First, only send the IP addresses across the network, to reduce the bandwidth. { ssh server2 awk '{print $1}' /path/to/access.log cat /local/path/to/access.log } | awk '{print $1}' | sort | uniq -c | sort -n | tail Second, take advantage of awk's hashing to remove the need to sort. This replaces an order n*lg(n) with an order n. This uses an associative array called seen to count how many times each IP address is seen, and at the end print out the count and the address. { ssh server2 awk '{print $1}' /path/to/access.log cat /local/path/to/access.log } | awk '{seen[$1]++} END {for (i in seen){print seen[i],i}}' | sort -n | tail Third, reverse the sort, again to reduce the amount of data that needs to flow { ssh server2 awk '{print $1}' /path/to/access.log cat /local/path/to/access.log } | awk '{seen[$1]++} END {for (i in seen){print seen[i],i}}' | sort -rn | head Depending on the data, it would probably make sense to pre-process the data on the remote web server. (seen array renamed to s to save typing). Here the data being sent is count and address pairs. We then add them up locally in a third awk process. { ssh server2 awk '{s[$1]++}END{for (i in s){print s[i],i}}' /path/to/access.log awk '{s[$1]++}END{for (i in s){print s[i],i}}' /local/path/to/access.log } | awk '{s[$2]+=$1}END{for (i in s){print s[i],i}}' | sort -rn | head Untested, of course.
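The hashing step can be seen on a toy input (made-up addresses, not real log data); occurrences are counted per address with no pre-sorting, and only the final small output is sorted:

```shell
printf '1.2.3.4\n1.2.3.4\n5.6.7.8\n1.2.3.4\n' |
  awk '{seen[$1]++} END {for (i in seen) print seen[i], i}' |
  sort -rn
# -> 3 1.2.3.4
#    1 5.6.7.8
```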
Top 10 IP by request across two nodes
1,606,501,408,000
I'm running Kali Linux, and while I was using the root account everything was fine. But then I made a personal account (name: koumakpet), and in that terminal, instead of koumakpet@kali: ~$ there was only $ As you can see in the image, I was trying to change my prefix with PS1='prefix', but that didn't go quite as expected; it seems like it can't detect variables such as '\u' in the string, nor can it detect the colors. I have also noticed that pressing the up arrow (to see the last thing I typed) will not actually show the last command; instead, it just writes ^[[A (same with the down arrow: ^[[B). How am I supposed to set the terminal prefix to what should be the default koumakpet@kali: ~$ and enable the colors?
That PS1 syntax is specific to the bash shell. Presumably, that new user has been assigned a different login shell. Use chsh to change the login shell to /bin/bash (and logout+login again), or adapt that PS1 syntax to that of user's login shell. ps shows the shell in question is sh. I suppose that's the default shell used by whatever application you used to create that account. /bin/sh is the only shell you're sure to find on any Unix-like system, so that's a sensible default.
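Once the login shell is bash, the prompt can then be set in ~/.bashrc; a minimal fragment for illustration (the colors here are an example choice, not necessarily Kali's default):

```shell
# ~/.bashrc fragment — \u, \h, \w and the \[\e[...m\] color escapes are
# bash prompt syntax; they do not work in plain sh, which is why the
# PS1 string appeared literally before.
PS1='\[\e[0;32m\]\u@\h\[\e[0m\]:\[\e[0;34m\]\w\[\e[0m\]\$ '
```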
How to setup a terminal prefix
1,606,501,408,000
It happens quite often to me that, while entering a complex command, I realize that I need to enter some other commands first. Being a vi user, I'd love to 0 D, enter a different command and later paste the deleted command line. Unfortunately, nobody (including myself) bothered to implement c&p registers in the vi mode of zsh. So right now I insert some x at the beginning of the command to make it fail, do my other stuff, fetch the old command from the history and remove the x. And each time I ask myself: Is there some easier way to do the same thing? Not a duplicate! Please note that I mentioned I'm using vi mode. None of the answers to the other question work for vi mode. They are only for emacs mode, even if they don't mention it, so they are misleading. Please reopen so people can find the correct answer.
Enable the interactivecomments shell option with setopt interactivecomments and use the # action in normal/command mode on the command line (i.e. press Esc followed by #). This inserts a # in front of the line, immediately submits the line (which will be ignored since it's a comment), and adds it to the command line history. This works on a line by line basis, i.e. it does not work too well with multi-line commands unfortunately. Using the # action on a line that is already commented out (e.g. fetched from the command line history) removes the # from the start of the line and submits it. The interactivecomments shell option is by default unset in interactive shells, but set in non-interactive shells.
zsh with vi-mode: How to keep a command line for future use without executing
1,606,501,408,000
I am trying to get the same functionality as on Linux, where the last argument can easily be inserted into another command.
If you're using bash or zsh, the shortcut !$ works in the same fashion. For example:

[user@localhost ~]$ echo "test" >> new_file
[user@localhost ~]$ cat !$
cat new_file
test
Whats the equivalent of the Linux Alt + '.' on MacOS?
1,606,501,408,000
I'm installing Nvidia drivers and have to click through screens like this (not this specific screen, but this is the installer and has some OK and some Yes/No questions during install): Is there a way to automate this? (my goal is to eventually do this via puppet)
Yes, I've done this before. Took me a few days to get it to work.

NVIDIA.....run -s for silent mode. I found this option by using the Advanced help feature:

./NVIDIA....run -A

Note: Run it manually and choose all of the defaults to be sure that this is what you want (usually, that will be the case).

Caveat: The machine must NOT be running in graphical mode for this to run. Then, reboot into graphical mode! (i.e. 2 reboots for this method to work).

EDIT 1: There are some options that can be specified on the command line to override the default values. NVIDIA...run -A > /tmp/NVIDIA_Help.txt is what I ran to find them. -X or --run-nvidia-xconfig will run the x-config utility. --x-sysconfig-path= is the path where the X config files will be installed. Check here for other options you may be interested in.

EDIT 2: My .run file is named NVIDIA-Linux-x86_64-390.67.run. The first part of this file is a script. The remainder is an embedded tarball (on Linux). When I look at this file, in the first 10 lines or so is an entry that reads skip_decompress=1082 \n size_decompress=42. The first 1081 lines of this file are the script that decompresses the tarball and executes the installation script called ./nvidia_installer. You should also see a function (mine is called catDecompress) that reads the file from line 1082 through the end and untars it. On my .run file, it looks like this:

tail -n +${skip_decompress} $0 | head -n ${size_decompress}

Later, when this function is called, it is directed to an output file. There is an option to uncompress this for you. I'm including it here so that you will understand what it does and can re-do it later. Once you have this decompressed, you can change the install options in nvidia-installer to suit your needs (change the defaults to whatever you want), then re-compress the file and append it at the end of the .run script.

NOTE: There is an MD5 checksum in the header of the .run file. You will have to update that as well.
How to automate selections when installing via CLI
1,570,313,247,000
So, I installed i3 with the wrong commands on my Mint 19.2 with Xfce4 and got a broken version, and since my system is set to "open without the account screen" it's stuck there. i3 only showed me some error in the status bar and nothing else. I fixed it with sudo apt-get update && sudo apt-get install i3-wm i3status i3lock suckless-tools and now I have the working status bar at the bottom but nothing else: I can't use any key or write anything. My best bet is the Alt+Ctrl+F4 command line. Is there a way to get out of this, or did I blow another virtual machine up?
The default i3 key bindings should be active. To go back to xfce, exit i3 with the keyboard combination ALT+SHIFT+E That should log you out and take you to your display manager. From there you can select xfce and log in.
Stuck in broken i3 wm and don't know how to get it back to xfce
1,570,313,247,000
So, I have a file with a list of names, like

Thomas Newbury
Calvin Lewis
E. J. Frederickson
Lamar Wojcik
J.C. Lily
Lillian Thomas

And I'm eventually going to try and split these into a long list of first and last names, but before I do that, I want to turn "E. J." into "E.J." and I'm having trouble figuring out how to do that with bash. I know "[A-Z]+. [A-Z]+." matches "E. J." but I don't know what command allows me to remove a space only in the context of being between two dotted letters?
I think this will do with GNU sed:

sed -E 's/^([A-Z]+\.)[[:blank:]]([A-Z]+\.)/\1\2/' file
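To sanity-check the expression, the sample names can be fed through the same command via a pipe (assuming a sed with -E support, i.e. GNU or BSD sed):

```shell
# initials written like "J.C." (no space) are left alone;
# only the "E. J." style with a blank between the dots is joined
printf '%s\n' 'E. J. Frederickson' 'J.C. Lily' 'Lillian Thomas' |
  sed -E 's/^([A-Z]+\.)[[:blank:]]([A-Z]+\.)/\1\2/'
```

which prints E.J. Frederickson, J.C. Lily and Lillian Thomas.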
removing a character in a certain context (using shell script)
1,570,313,247,000
I have a directory which contains a huge number of XML files. They are labelled as filename_date_time_checksum.xml, which means I've got thousands of files which are identical but separated by checksum. Is there a command line I can run where if filename_date_time_*.xml exists, then retain the last modified version and delete the others? Example: uk_3345_20190905_1600_b4ec24da7c59c1d889fb22ad9fad34aca882102e.xml uk_1552_20190905_1605_1a31fd97541bf300d5bf4c0c4a349e00eee5a8fb.xml uk_1552_20190905_1605_3d307e3ffbb3259a47a1bc1690c17fd291fe2cb0.xml uk_1552_20190905_1605_7da5fa3b26cbe04eb01c6308c7b680fb4eb2e463.xml uk_1552_20190905_1605_b4ec24da7c59c1d889fb22ad9fad34aca882102e.xml uk_1552_20190905_1605_d01c541fc8db736d223a21a29d9766532140fdb8.xml uk_1552_20190905_1605_fac6793f2f7e5374157c5d08ee555fcf1bbbf5f2.xml uk_3345_20190905_1600_1a31fd97541bf300d5bf4c0c4a349e00eee5a8fb.xml uk_3345_20190905_1600_d01c541fc8db736d223a21a29d9766532140fdb8.xml The files can be generated at anytime. If the files are generated uk_3345_20190905_1600_d01c541fc8db736d223a21a29d9766532140fdb8.xml on 1st Sept 2019 13:44 & uk_3345_20190905_1600_b4ec24da7c59c1d889fb22ad9fad34aca882102e.xml on 2nd Sept 2019 09:00 I want to only retain the most recent file that was generated. The only attribute of the file I need to use is the modification date.
This is untested:

# find the *latest* file for each prefix
declare -A mtime name

# read from process substitution rather than piping into the loop,
# so the while loop runs in the current shell and the arrays survive
while read -r time filename; do
    prefix=${filename%_*}
    if (( $time > ${mtime[$prefix]:-0} )); then
        mtime[$prefix]=$time
        name[$prefix]=$filename
    fi
done < <(stat -c "%Y %n" *xml)

# put the filenames into an associative array for easy lookup
declare -A keep
for filename in "${name[@]}"; do
    keep[$filename]=1
done

# look at each file to determine its fate
for file in *xml; do
    if [[ -v keep[$file] ]]; then
        echo "# keep $file"
    else
        echo "rm $file"
    fi
done

Or, this pipeline should output the files you want to keep:

paste <( printf "%s\n" *.xml) \
      <( printf "%s\n" *.xml | cut -d _ -f 1-4) \
      <( stat -c '%Y' *.xml) | sort -k2,2 -k3,3rn | awk '!seen[$2]++ {print $1}'
Linux command-line to find duplicate files and only retain most recent
1,570,313,247,000
A user can execute an executable by: sudoing, which allows a user to run an executable as the owner; or by setting the execute bit, chmod u+x (or should it be chmod a+x?). So what is the real difference between the two, given that they have the same effect, that is, to allow someone other than the owner to run the executable?
I suspect you meant to ask specifically about chmod o+x, to enable other (i.e. someone who is neither the user nor a member of the specified group) users to execute the file. chmod a+x is a superset of chmod o+x since it turns on the execute permission for all 3 (user, group, and other). The difference then is the context in which the program will run. With sudo the program runs in the context of the specified user; without sudo the program runs in the context of the current user. For some scripts this might not matter at all, but if anything involving user permissions is involved, it matters. Maybe it would help to explain this with a hypothetical malicious script that will delete all files in the user's home directory: If the user alice runs sudo -u bob deleteHomeFiles.sh then the files in bob's home directory would all be deleted. On the other hand if alice ran deleteHomeFiles.sh directly, the files in alice's home directory would be deleted.
Difference between chmod +x vs sudoing an executable
1,570,313,247,000
my file.txt looks like this

variant_id pval_nominal
1_752721_A_G_b37 2.23485e-05
1_900397_C_T_b37 3.04603e-05
1_928297_G_A_b37 2.12455e-05

I am trying to remove everything after the 2nd underscore in the first column so that it looks like this:

variant_id pval_nominal
1_752721 2.23485e-05
1_900397 3.04603e-05
1_928297 2.12455e-05

The reason why I ask for everything after the 2nd underscore in the first column to be removed is that instances in the first column can look like this: 1_1025672_GCA_G_b37

I was trying to use this command:

awk -F _ '{print $1 (NF>1? FS $2 : "")}' file.txt > file2.txt

but file2.txt looks like this:

variant_id pval
1_752721
1_900397
1_928297

How to run this command so that the 2nd column is returned as well? Thanks
Try this,

sed 's/_[A-Z].* / /g' file
variant_id pval_nominal
1_752721 2.23485e-05
1_900397 3.04603e-05
1_928297 2.12455e-05
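For quick verification without a file, the sample rows can be piped straight in; as a sketch, a field-based awk alternative is also shown that does not rely on the suffix starting with a capital letter (the 7e-03 value below is made up to exercise the longer 1_1025672_GCA_G_b37 case from the question):

```shell
# the sed answer, on piped input
printf '%s\n' 'variant_id pval_nominal' '1_752721_A_G_b37 2.23485e-05' '1_1025672_GCA_G_b37 7e-03' |
  sed 's/_[A-Z].* / /g'

# same result: keep only the first two _-separated pieces of column 1
# whenever there are more than two
printf '%s\n' 'variant_id pval_nominal' '1_752721_A_G_b37 2.23485e-05' '1_1025672_GCA_G_b37 7e-03' |
  awk '{ n = split($1, a, "_"); if (n > 2) $1 = a[1] "_" a[2]; print }'
```

Both print the header unchanged, then 1_752721 2.23485e-05 and 1_1025672 7e-03.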
How to remove everything after the 2nd underscore but keep the other columns?
1,570,313,247,000
I want to pass a string in hex, for instance 'c3:87:80:00', to a binary. I've tried:

./<bin> "$(python -c "print 'c3:87:80:00'")"

and

./<bin> "$(printf 'c3:87:80:00')"

I've also copy-pasted the converted string from a hex converter, but that doesn't work either.
Use \xc3:\x87:\x80:\x00 instead of c3:87:80:00, where \x introduces a hex byte.
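A small sanity check of the byte values (a sketch, with two caveats: \x escapes in printf are a bash/GNU extension, so the portable spelling uses octal escapes; and a literal 00 byte cannot be passed inside a command-line argument at all, since argv strings are NUL-terminated):

```shell
# \303 = 0xc3, \207 = 0x87, \200 = 0x80 (octal escapes work in any POSIX printf)
printf '\303\207\200' | od -An -tx1
```

which shows the three raw bytes c3 87 80.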
Pass a string with non-printable chars in hex to a binary
1,570,313,247,000
Trying to insert output from this:

kubectl get pods | grep -Eo '^[^ ]+' | grep portal

Into this:

kubectl exec -it <here> portal bash

Tried:

kubectl exec -it `kubectl get pods | grep -Eo '^[^ ]+' | grep portal ` portal bash

But no luck.
kubectl exec -it "$(kubectl get pods | grep -Eo '^[^ ]+' | grep portal)" bash

Or, even more:

kubectl exec -c portal-container -it "$(kubectl get pods | grep -Eo '^[^ ]+' | grep portal)" bash
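The same command-substitution pattern can be illustrated without a cluster, using dummy kubectl get pods-style output (the pod names below are made up for the demo):

```shell
# fake "kubectl get pods" output: NAME and STATUS columns
pods='portal-7d9f Running
worker-5c2a Running'

# grab the first column of every line, then keep the line containing "portal";
# the result becomes an argument of the next command
name=$(printf '%s\n' "$pods" | grep -Eo '^[^ ]+' | grep portal)
echo "$name"
```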
Pass command-line result as an argument to next command
1,570,313,247,000
I'm trying to concatenate multiple files that are in different directories using the following command:

~$ find . -name ‘*.text’ -exec cat {} >> combined.text \;

However it doesn't seem to be working, as I am getting this response:

find: missing argument to `-exec'

Is there something that I may have missed? Thank you!
You are using unicode quotes: ‘’ instead of normal quotes (''). Try this command instead:

find . -name '*.text' -exec cat {} + >> combined.text

However, if combined.text already exists, that will print a warning, since combined.text will be created before launching find and so will be found by the find command:

$ find . -name '*.text' -exec cat {} + >> combined.text
cat: ./combined.text: input file is output file

You can avoid that with:

find . -name '*.text' ! -name combined.text -exec cat {} + >> combined.text
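A self-contained run of the corrected command on a scratch directory (the directory and file names are made up for the demo):

```shell
dir=$(mktemp -d)
mkdir "$dir/one" "$dir/two"
echo alpha > "$dir/one/a.text"
echo beta  > "$dir/two/b.text"

# exclude the output file itself so it is never read as input
find "$dir" -name '*.text' ! -name combined.text -exec cat {} + >> "$dir/combined.text"
cat "$dir/combined.text"
```

Note that find's traversal order is unspecified, so the two lines may appear in either order.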
Can't seem to concatenate multiple files in different directories
1,570,313,247,000
I have a private key file with some extra nonsense in there, and want just the text of the key. So:

nonsense -----Begin Key-----
keep this1
keep this2
keep this3
-----End Key----- nonsense

should become

-----Begin Key-----
keep this1
keep this2
keep this3
-----End Key-----

EDIT: I don't want to just remove the actual word "nonsense." It could be anything in there before and after the key text.
How about

sed -e '/Begin Key/ s/^[^-]*//' -e '/End Key/ s/[^-]*$//'

Ex.

$ sed -e '/Begin Key/ s/^[^-]*//' -e '/End Key/ s/[^-]*$//' file
-----Begin Key-----
keep this1
keep this2
keep this3
-----End Key-----
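For quick testing without creating a file, the sample key can be piped straight in:

```shell
# strip everything before the first "-" on the Begin line,
# and everything after the last "-" on the End line
printf '%s\n' 'nonsense -----Begin Key-----' \
              'keep this1' \
              'keep this2' \
              '-----End Key----- nonsense' |
  sed -e '/Begin Key/ s/^[^-]*//' -e '/End Key/ s/[^-]*$//'
```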
SED or AWK to remove everything before the first dash and after the last
1,570,313,247,000
I have the file dir1.txt that contains the names of the following directories:

2
3
4

Directory 2 contains files 2_1.txt and 2_2.txt
Directory 3 contains files 3_1.txt and 3_2.txt
Directory 4 contains files 4_1.txt and 4_2.txt

Each file contains two lines. Then I have created the following nested loop:

#!/bin/bash
input="dir1.txt"
while IFS=read -r line
do
  for j in "$line/*"
  do
    sed -e '$s/$/\n/' $j
    #cat $j; echo
  done >> output.txt
done < "$input"

Basically, I want to have a blank line between the concatenated files. With the above loop, I am only getting a blank line between the last file content in dir 2 and the first file in dir 3, as well as the last file content in dir 3 and the first file in dir 4, but I also want a blank line between the concatenated content of the files in the same directory. I have tried with cat $j; echo (commented out above) but to no avail. Tried with a nested for loop, again, I am getting the same outcome. I think my logic is wrong.
Your logic is correct, but I had to make a few modifications to get it working. Added a missing space after IFS (otherwise error) Changed the quoted "$line/*" to "$line"/* (otherwise sed: can't read 2/*: No such file or directory) Quoted $j (only for better style) Both the sed and the cat/echo version do what they should. #!/bin/bash input="dir1.txt" while IFS= read -r line do for j in "$line"/* do sed -e '$s/$/\n/' "$j" #cat "$j"; echo done >> output.txt done < "$input"
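Here is a self-contained sketch of the same fixed loop on a scratch directory, using the portable cat "$j"; echo variant instead of the GNU-sed \n substitution (the directory and file contents are made up for the demo):

```shell
dir=$(mktemp -d) && cd "$dir"
mkdir 2 3
printf 'a\nb\n' > 2/x.txt
printf 'c\nd\n' > 3/y.txt
printf '2\n3\n' > dir1.txt

while IFS= read -r line; do
    for j in "$line"/*; do
        cat "$j"; echo    # the echo emits the blank separator line
    done >> output.txt
done < dir1.txt

cat output.txt
```

output.txt ends up with each file's two lines followed by a blank line.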
Concatenate the Content of Files from Various Directories with a Blank Line in Between
1,570,313,247,000
I have deleted a user using userdel command, but I forgot to delete user files. Now I want to delete those files, how can I find and delete them?
If you have ways of finding the userid they used to have (for example because you have one file/directory you know they owned, like their home directory), you can use find / -uid (userid) to find all files owned by that user id. You could use find / -uid (userid) -delete to delete them all, but I strongly advise against it without first reviewing what you'd delete. (In all likelihood, it's just their home directory plus some stuff in /tmp.) If you have no way of finding their userid, you can use find / -nouser to find all files belonging to users that don't exist in the system anymore and take an educated guess from the result about the files they owned.
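The -uid predicate can be tried safely on a scratch directory first, instead of / (a sketch; -uid with a numeric id is a GNU/BSD extension, while the POSIX spelling is -user with a user name):

```shell
dir=$(mktemp -d)
touch "$dir/owned-by-me"

# lists the directory itself and the file, both owned by the current user
find "$dir" -uid "$(id -u)"
```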
Deleted a user but forgot to delete home directory and user files
1,570,313,247,000
How does one combine two commands like the ones shown below into one command with one output file?

First command:

printf '%s\n' {001..500} input > output

Second command:

sed 's/^/PREFIX /; s/$/ SUFFIX/' input > output
I realise you've answered your question, but a simpler solution would be to put the prefix and suffix in the printf command.

printf 'PREFIX %s SUFFIX\n' {001..500} > output

(I'm not sure if the input part should be there. It's absent in your answer.)
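For illustration, a smaller range shows the effect; bash expands {001..500} before printf sees it, so printf just cycles its format over the resulting arguments:

```shell
# one "PREFIX nnn SUFFIX" line per argument
printf 'PREFIX %s SUFFIX\n' 001 002 003
```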
How to combine two text formatting commands into one?
1,570,313,247,000
I'd like to open a URL in the default internet browser from my application. My application gets compiled for MS-Windows, Linux, Mac and Solaris. One way to achieve that on Windows is using the shell command start, on Linux using the xdg-open, on Mac there is open. As a bonus, all these commands can also open any file in their default applications. However I can't find any similar reliable command (or API) on Solaris. I've heard about sdtwebclient but since it is not on my machine running Solaris 10 (SunOS 5.10), it seems not to be safe to assume that it is usually there. Any suggestions how to launch the default browser in a (more or less) reliable way on any Solaris machine?
sdtwebclient will be installed under /usr/dt/bin if CDE is installed, which limits it to Solaris 10 and older machines. For Solaris 11 and later, use xdg-open just as you would on Linux.
How to start the default browser (and/or any default application) from command line on Solaris?
1,570,313,247,000
So I am trying to figure out how to add a title after the timestamp in my xclip script here. I would like it to grab about 24 characters worth of text from the beginning of each selection and save it like: $timestamp_$24-character-long-title-of-start-text.txt Or instead of start text, would it be possible to have it grab the most used word(s) in the selection? Is this possible? If not, what is? Here's my current code: #!/bin/sh # # _ _ _ _ _ _ # __ __ __ | |(_) _ __ ___ ___ __ _ __ __ ___ ___ ___ ___ | | ___ __ | |_ (_) ___ _ _ ___| |_ # \ \ // _|| || || '_ \|___|(_-</ _` |\ V // -_)|___|(_-</ -_)| |/ -_)/ _|| _|| |/ _ \| ' \ _ (_-<| ' \ # /_\_\\__||_||_|| .__/ /__/\__,_| \_/ \___| /__/\___||_|\___|\__| \__||_|\___/|_||_|(_)/__/|_||_| # |_| # # Save Selected Text Script # XFCE4: Applications > Settings > Keyboard # Attach this script to a custom keyboard shortcut to be able to save selected text xclip -i -selection primary -o > /location/to/save/$(date +"%Y-%m- %d_%H-%M-%S")_$SOME_START_TEXT_OF_SELECTION_PREFERABLY_ONLY_24_CHARACTERS_OF_TEXT.txt
I use this script to save all kinds of useful text clips, code snippets, useful articles, everything from all over the web. It saves on drive space and is a super fast and easy way of doing so. This allows me to come back to the information later in the event I want to see or go through it again. However, just using a simple timestamp for the filename doesn't always make it easy to re-locate a specific text file, even if it's one you saved the same day. Which comes to the reason why I asked this question: trying to add some additional info to the filename which can hopefully represent what's inside, while at the same time keeping it professional and clean looking for the user as well as the system. I know that the additional filename info will help me a lot in locating the text file I am looking for.

NEW SCRIPT WITH BETTER FILENAME IDENTIFICATION:

#!/bin/sh

# Save Selected Text Script
# XFCE4: Applications > Settings > Keyboard
# Attach this script to a custom keyboard shortcut to be able to save selected text

xclip -o > "/mnt/SB_5TB_HDD/LOGS/save/$(date +'%Y-%m-%d_%H-%M-%S')_$(xclip -o | cat -s | perl -pe 's/\r?\n/ /' | perl -pe 's/\ /_/g' | sed 's/__/_/g' | cut -c1-30).txt"

bash -c 'notify-send "Save Selected Text - Success!"'

# break down of commands used to achieve desired filename:

# replaces multiple line breaks with single one
# cat -s
#
# replaces line break with a space
# perl -pe 's/\r?\n/ /'
#
# replaces spaces with underscores
# perl -pe 's/\ /_/g'
#
# replaces 2 underscores with 1
# sed 's/__/_/g'
#
# only uses first 30 characters of text
# cut -c1-30

USE EXAMPLE: When selecting all the following text, and executing the above script... preferably with a simple keyboard shortcut...

Recipe for Poop Popsicles things youll need your own poop your moms favorite popsicle trays lol ok im done blah blah blah blah and ... blah. 1 more blah.
This will automatically save a file that has all of the above selected text inside, along with a filename with a title like the following, and not just a boring old timestamp: 2019-01-27_00-41-58_Recipe_for_Poop_Popsicles_tr.txt
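As an aside, the cat -s/perl/sed chain above can be approximated with a single tr invocation; this is a hedged alternative sketch, not what the script itself uses (tr -s squeezes every run of whitespace, newlines included, into one underscore):

```shell
# multi-line selection in, underscore-joined 30-char title out
printf 'Recipe for  Poop\n\nPopsicles\n' |
  tr -s '[:space:]' '_' |
  cut -c1-30
```

which yields Recipe_for_Poop_Popsicles_.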
How to grab x characters long of start text from xclip selection and append to filename?
1,570,313,247,000
While working on a script I need to print the available SSIDs only. I tried this:

sudo iwlist wlp2s0 scan | grep ESSID

I got output like this:

ESSID:"CoreFragment_5G"
ESSID:"dlink"
ESSID:"REDWING LABS"
ESSID:"Hitachi"
ESSID:"COMFAST"
ESSID:"Yash Shah"
ESSID:"CoreFragment"
ESSID:"Appbirds_Technologies"
ESSID:"20096641"
ESSID:"REDWING LABS_5G"

But I want to print the names only. How can I filter this command?
There are many ways to do it. Using awk:

sudo iwlist wlp2s0 scan | grep ESSID | awk -F '"' '{print $2}'

Or using cut:

sudo iwlist wlp2s0 scan | grep ESSID | cut -d '"' -f2

These commands will give you the names without ".
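Both variants can be checked against canned scan output without a wireless interface (the ESSIDs below are copied from the question):

```shell
# extract the second "-delimited field: the bare network name
printf '%s\n' 'ESSID:"CoreFragment_5G"' 'ESSID:"Yash Shah"' | awk -F '"' '{print $2}'
printf '%s\n' 'ESSID:"CoreFragment_5G"' 'ESSID:"Yash Shah"' | cut -d '"' -f2
```

Both print CoreFragment_5G and Yash Shah, spaces in names included.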
How to print SSID only?
1,570,313,247,000
I'm reviewing a repo and I'd like to make some changes to it: adding comments, making some modifications, etc. How can I change the content of files in a previous commit and then push? I'm aware that the commit history would be changed and all affected files would be changed as well, but would git compare the affected files with later commits (up to master) and apply the changes aggressively?
What you are asking for is to rewrite history n to 0 commits back. This is generally a bad idea as it would make you repo out of sync from the remote and any other repo that is based on it. This would further complicate things so that others wouldn't be able to merge anymore and would require any other repo to delete their branch and pull down the newly modified one. In which case, you might as well just start a new branch and add the comments to that. In any case, this'll get a little messy. To do this, you have your merges that you would be reviewing (as an example, we'll use commits A, B and C) and then go back to A, branch off that (we'll call that branch review, and the original pull-request): ...A --B --C (pull-request) \ A' (review) git checkout HEAD{3} git checkout -b review Then do your comment modifications and check them in. git add . # or specify the specific files git commit -m "message" --author="original author" Or if you want the same message/author and don't want to type it out, you can use the following, which I would either put into a script or an alias git command: git add $(git diff-tree --no-commit-id --name-only -r <sha-of-A>) git commit -m "$(git rev-list --format=%B <sha-of-a>)" --author="$(git rev-list --format=%an <sha-of-A>)" Could also be done automatically by retrieving the appropriate sha from the appropriate parent, but I'm not exactly sure how to distinguish between the branch parent and the merge parent atm. Next merge B into review git merge <sha-of-B> Then do your comment modifications and check them in. (see above). Keep doing this till you're done and you have: ...A --B --C (pull-request) \ \ \ A'--B'--C' (review) You can then merge back into your original branch if you wish or just give that review branch back to the person from which you are reviewing.
Git: change content of previous commit and push [closed]
1,570,313,247,000
I am trying to run maitreya_textclient (an application to list astrological information in text mode) and I get the following error:

WARN: datadir does not exist
Fatal error: cannot open Yoga config directory /usr/share/maitreya6/../xml/yogas

On Debian "testing" repository Maitreya comes as version 7.0.7-1+b1, so it seems the textclient points to a wrong directory, since I have /usr/share/maitreya7/ on my system. Does anyone on here know how and where to fix this? The GUI application runs without any problem.
Although posting a bug report would be best, in the interim you could try fixing up the broken directory paths to point the missing version 6 directory at the version 7 one that exists cd /usr/share ln -s maitreya7 maitreya6 I can't test this, though, so I don't know if you'll find your yogas in there at the right place.
Maitreya 7 error: linked to "config directory /usr/share/maitreya6/"
1,570,313,247,000
For curiosity, I wanted to read a GNU screen session's named pipe.

$ screen -ls
There is a screen on:
        59750.hello     (Detached)
1 Socket in /var/run/screen/S-gergely.

And indeed there is a named pipe:

S-gergely $ ls -l
összesen 4
prw-------. 1 gergely gergely 0 nov 21 11.06 59750.hello

I tried to read it with tail -f, cat and other things, to no avail. Does (Detached) mean that there is no flux of data through this named pipe? Only when the screen is active?

UPDATE: it does not work even when that screen is active.

Can I read the data with some standard Unix command-line tool?
Once data is read from a pipe, it is gone from the pipe. I don't think it's possible to have multiple readers that can read the same data at the same time. So when screen is attached, the screen process probably reads the data before the tail/cat that you use.
How to read screen's named pipe?
1,570,313,247,000
Is there any nice single-line command to revoke all users' privileges (read/write permission on each account) except for the user executing the command and root? I want this because I want to restrict all access for all users at a specific point in time. Of course, I can do this with commands such as chown and chmod, but these commands need to be run for each account (e.g. chmod 000 $FOR_EACH_ACCOUNT).
NOTE: if you are thinking about this because you think your system has been hacked or something else unlawful is going on in it, stop reading this and search using keywords "linux gathering forensic evidence". There are some special steps you should follow if you need to absolutely "freeze" the state of a system for legally binding evidence, and revoking file permissions is not the right tool for that job. But if it's something less serious than that, read on... Instead of modifying file permissions, you probably should think in terms of stopping user sessions and disabling other network services. Create a file named /etc/nologin and no new logins will be accepted, except by the root user. Then kick out any existing sessions, ideally using the HUP signal so that editors and similar programs get a chance to save their work one last time before dying. (Otherwise you'll find one of your users had a long-running session and just lost a long document or week's worth of research data, and will be very unhappy with your actions.) But if you must instantly block all users' write operations, how about remounting the filesystem containing the users' home directories in read-only mode? mount -o remount,ro /home
Revoke all users's privileges (read/write on their home directory) except a user executing the command and root
1,570,313,247,000
I went to open my terminal and it said this: Gillians-iPhone:~ milo$ I know for a fact that this is not my computer name. I am running mac osx 10.13.4. I am on a public wifi network at a hotel. This just recently started happening. I set my laptop up as Milo’s MacBook Air.
You have got the hostname of the previous user through some DHCP behaviour. I would not be surprised if, after a while, depending on the Wi-Fi setup, your name comes back to normal. Nonetheless, one strategy to minimize this, now and in the (near) future, is to configure the DHCP client to (try to) keep your own hostname. Go to System Preferences->Network->Wi-Fi->Advanced->TCP/IP, and fill the field "DHCP Client ID" with your hostname.
Terminal showing different name
1,570,313,247,000
I checked the manual of mpstat, and it states:

The mpstat command writes to standard output activities for each available processor, processor 0 being the first one. Global average activities among all processors are also reported. The mpstat command can be used both on SMP and UP machines, but in the latter, only global average activities will be printed. If no activity has been selected, then the default report is the CPU utilization report.

However, I didn't get the idea what the m in mpstat means. Is it multiple?
It's unclear exactly what the M in mpstat means. NOTE: mpstat is part of the sysstat package and so is part of a family of *stat tools: $ rpm -ql sysstat | grep /bin/ /usr/bin/cifsiostat /usr/bin/iostat /usr/bin/mpstat /usr/bin/nfsiostat-sysstat /usr/bin/pidstat /usr/bin/sadf /usr/bin/sar /usr/bin/tapestat It's likely the case that the M stands for one of the following: multi-processor multiple-processors microprocessor machine monitor Given the top of the mpstat.c source code describes it as this: mpstat: per-processor statistics I'd be inclined to go with the multiple. This seems to be consistent with the source code if you glance through it, given it goes out of its way to deal with both single (UP) and multiple CPUs (SMP). Example comments from code: Structures used to save CPU and NUMA nodes CPU stats Compute CPU "all" as sum of all individual CPU (on SMP machines and look for offline CPU. Read total number of interrupts received among all CPU. What is the highest processor number on this machine? NOTE: One thing I find curious with this tool, is that if the M is meant to represent multi*, this word never actually appears in the source code, mpstat.c. References Wikipedia - mpstat sysstat utilities
What does "m" mean in mpstat?
1,570,313,247,000
Is it possible to run an opengl application like glxgears from the command line without starting a desktop environment? It should directly go to exclusive full screen mode.
It is not possible to run an application meant for X purely on the command line. But, as @cylgalad said, you can have any desktop environment and run that application exclusively in full screen. Try installing a lightweight desktop environment, like xfce or fluxbox.
Running OpenGL app without desktop
1,570,313,247,000
No matter what directory I enter, the terminal always shows me the root directory which is "Nidas-MBP" Nidas-MBP% cd Projects Nidas-MBP% ls 09-Selector-Exercise-Starter.zip My Little Form 09_Selector_Exercise_Starter Prefix Free File Blog Recursion Practice Callbacks Themes Callbacks-Exercise Todo-Vanilla Copywriting css3-contact-form.zip Freelancer Theme webpack-deepdive Frog Chase Nidas-MBP% cd webpack-deepdive Nidas-MBP% ls es6-todomvc Nidas-MBP% I have tried adding the following command to end of the ~/.bashrc file and the ~/.profile file but the terminal still remained unchanged. PS1='[\u \W$] ' When I run echo "$PS1" it says%m%# I found two lines PS1=[ \W]\$ PS1='[ \W]$ ' inside ~/.bash_profile, so I replaced them both with PS1='[\u \W$] ' and typed source ~/.bash_profile. In response, my terminal started saying [\u \W$] instead of Nidas-MBP. I have no idea what I should do now to bring it back to the way it used to be.
I had no idea there was a difference in commands between bash and zsh. Apparently, I was supposed to type PS1='%m %1d$ ' instead. So I did that inside the ~/.zshrc file and it works now. https://superuser.com/questions/1108413/zsh-prompt-with-current-working-directory
My MacOSX terminal doesn't show the current directory
1,570,313,247,000
We have a script that prints all bad wsp files:

./print_bad_wsp_files.sh
./aaaa/rrr/aaaa/fff/ooo/min.wsp
./aaaa/rrr/aaaa/fff/ooo/p50.wsp
./aaaa/rrr/aaaa/fff/ooo/min.wsp
./aaaa/rrr/aaaa/fff/ooo/p50.wsp

# ls -ltr
drwxr-xr-x 5 root root 36 Aug 14 14:58 aaaa

Is it possible to pipe the script's output so I get the ls -ltr results for each file? I tried this:

./print_bad_wsp_files.sh | ls -ltr

but it gives only

drwxr-xr-x 5 root root 36 Aug 14 14:58 aaaa

while the expected results should be

-rw-r--r-- 1 graphite mo 17308 Oct 11 2017 ./aaaa/rrr/aaaa/fff/ooo/min.wsp
-rw-r--r-- 1 graphite mo 13508 Oct 11 2017 ./aaaa/rrr/aaaa/fff/ooo/p50.wsp
-rw-r--r-- 1 graphite mo 27208 Oct 11 2017 ./aaaa/rrr/aaaa/fff/ooo/min.wsp
-rw-r--r-- 1 graphite mo 19208 Oct 11 2017 ./aaaa/rrr/aaaa/fff/ooo/p50.wsp
All you may need here is xargs: ./print_bad_wsp_files.sh | xargs ls -ltr xargs will read the output from the script and execute ls -ltr on all of them (potentially grouped in bunches, as many as will fit in each call to ls). Note that if there are multiple calls to ls, each ls will sort its own list of files (by reverse time) separately.
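A minimal reproduction with a scratch directory standing in for the script's output (note that plain xargs splits on any whitespace, so paths containing spaces would need GNU xargs -d '\n', or find -print0 | xargs -0):

```shell
dir=$(mktemp -d)
touch "$dir/a.wsp" "$dir/b.wsp"

# one "ls -l"-style line per listed file, sorted oldest-first
printf '%s\n' "$dir/a.wsp" "$dir/b.wsp" | xargs ls -ltr
```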
how to pipe ls -ltr after list of files to capture date and time
1,532,672,904,000
Could you please explain what each option on this ls command does: ls -td -- */? The result of such command would look like below: $ ls $ ls -al total 4 drwxr-xr-x 5 root root 68 Jun 4 09:58 . drwxrwxrwt. 13 root root 4096 Jun 4 10:05 .. drwxr-xr-x 5 root root 36 May 31 15:48 05-31-2018 drwxr-xr-x 5 root root 36 Jun 4 09:45 06-04-2018 drwxr-xr-x 2 root root 6 Jun 4 09:56 06-05-2018 -rw-r--r-- 1 root root 0 Jun 4 09:58 test $ ls -td -- */ 06-05-2018/ 06-04-2018/ 05-31-2018/ # To get latest folder created: $ ls -td -- */ | head -n 1 06-05-2018/ I have no ideas what each option would do with ls command.
-td is the two options -t and -d written together. -t tells ls to sort the output based on time, and -d asks to show directories named on the command line as themselves, instead of their contents.

The -- option is, as far as I know, not explicitly documented for many commands that do support it, and it has become a slightly obscure syntax. It finds its origins in the getopt function and is used to delimit the end of the options and the start of the parameters. You would mainly use that -- syntax to use parameters that would otherwise look like options. A good illustration is trying to manipulate files that start their names with a hyphen, such as a file called "-rm -rf".

Create it with

touch -- '-rm -rf'

ls -la
total 0
-rw-r--r-- 1 herman wheel 0 Jun 4 16:46 -rm -rf

ls -la *
ls: illegal option --
usage: ls [-ABCFGHLOPRSTUWabcdefghiklmnopqrstuwx1] [file ...]

ls -la -- *
total 0
-rw-r--r-- 1 herman wheel 0 Jun 4 16:46 -rm -rf

and

rm -i *
rm: illegal option -- m
usage: rm [-f | -i] [-dPRrvW] file ...
unlink file

versus

rm -i -- *

For the meaning of command line options in general, this very basic nugget: nearly all Linux commands come with an online manual explaining their usage and the various options that modify their behaviour. That manual can be accessed with the man command, i.e. man ls. Try man man for an explanation of the manual.
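The same round trip can be scripted against a throwaway directory (the file name is chosen to look like options):

```shell
dir=$(mktemp -d) && cd "$dir"
touch -- '-weird name'

ls -- *            # without --, ls would try to parse "-weird name" as options
rm -- '-weird name'
```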
What is -- and -td options on ls command?
1,532,672,904,000
I'm working on a method of splitting a single large PDF file (which represents monthly settlements of a credit card). It is built for printing, but we'd like to split that file into individual ones, for later use. Each settlement has a variable length: 2 pages, 3 pages, 4 pages... So we need to "read" each page, find the "Page 1 of X" and split the chunk 'till the next "Page 1 of X" appears. Also, each resulting split file has to have a unique Id (contained also in the "Page 1 of X" page). While I was R&D-ing I found a tool named "PDF Content Split SA" that would do the exact task we needed. But I'm sure there's a way to do this in Linux (we're moving towards OpenSource+Libre). Thank you for reading. Any help will be extremely useful. EDIT So far, I've found this Nautilus script that could do exactly what we need, but I can't make it work. #!/bin/bash # NAUTILUS SCRIPT # automatically splits pdf file to multiple pages based on search criteria while renaming the output files using the search criteria and some of the pdf text. 
# read files IFS=$'\n' read -d '' -r -a filelist < <(printf '%s\n' "$NAUTILUS_SCRIPT_SELECTED_FILE_PATHS"); unset $IFS # process files for file in "${filelist[@]}"; do pagecount=`pdfinfo $file | grep "Pages" | awk '{ print $2 }'` # MY SEARCH CRITERIA is a 10 digit long ID number that begins with number 8: storedid=`pdftotext -f 1 -l 1 $file - | egrep '8?[0-9]{9}'` pattern='' pagetitle='' datestamp='' for (( pageindex=1; pageindex<=$pagecount; pageindex+=1 )); do header=`pdftotext -f $pageindex -l $pageindex $file - | head -n 1` pageid=`pdftotext -f $pageindex -l $pageindex $file - | egrep '8?[0-9]{9}'` let "datestamp =`date +%s%N`" # to avoid overwriting with same new name # match ID found on the page to the stored ID if [[ $pageid == $storedid ]]; then pattern+="$pageindex " # adds number as text to variable separated by spaces pagetitle+="$header+" if [[ $pageindex == $pagecount ]]; then #process last output of the file pdftk $file cat $pattern output "$storedid $pagetitle $datestamp.pdf" storedid=0 pattern='' pagetitle='' fi else #process previous set of pages to output pdftk $file cat $pattern output "$storedid $pagetitle $datestamp.pdf" storedid=$pageid pattern="$pageindex " pagetitle="$header+" fi done done I've edit the Search Criteria, and the Script is well placed in the Nautilus Script folder, but it doesn't work. I've try debugging using the activity log from the console, and adding marks on the code; apparently there's a conflict with the resulting value of pdfinfo, but I've no idea how to solve it.
I've made it. At least, it worked. But now I'd like to optimize the process. It takes up to 40 minutes to process 1000 items in a single massive pdf. #!/bin/bash # NAUTILUS SCRIPT # automatically splits pdf file to multiple pages based on search criteria while renaming the output files using the search criteria and some of the pdf text. # read files IFS=$'\n' read -d '' -r -a filelist < <(printf '%s\n' "$NAUTILUS_SCRIPT_SELECTED_FILE_PATHS"); unset $IFS # process files for file in "${filelist[@]}"; do pagecount=$(pdfinfo $file | grep "Pages" | awk '{ print $2 }') # MY SEARCH CRITERIA is a 10 digit long ID number that begins with number 8: #storedid=`pdftotext -f 1 -l 1 $file - | egrep '8?[0-9]{9}'` storedid=$(pdftotext -f 1 -l 1 $file - | egrep 'RESUMEN DE CUENTA Nº ?[0-9]{8}') pattern='' pagetitle='' datestamp='' #for (( pageindex=1; pageindex <= $pagecount; pageindex+=1 )); do for (( pageindex=1; pageindex <= $pagecount+1; pageindex+=1 )); do header=$(pdftotext -f $pageindex -l $pageindex $file - | head -n 1) pageid=$(pdftotext -f $pageindex -l $pageindex $file - | egrep 'RESUMEN DE CUENTA Nº ?[0-9]{8}') echo $pageid let "datestamp = $(date +%s%N)" # to avoid overwriting with same new name # match ID found on the page to the stored ID if [[ $pageid == $storedid ]]; then pattern+="$pageindex " # adds number as text to variable separated by spaces pagetitle+="$header+" if [[ $pageindex == $pagecount ]]; then #process last output of the file # pdftk $file cat $pattern output "$storedid $pagetitle $datestamp.pdf" pdftk $file cat $pattern output "$storedid.pdf" storedid=0 pattern='' pagetitle='' fi else #process previous set of pages to output # pdftk $file cat $pattern output "$storedid $pagetitle $datestamp.pdf" pdftk $file cat $pattern output "$storedid.pdf" storedid=$pageid pattern="$pageindex " pagetitle="$header+" fi done done
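On the runtime question: the script calls pdftotext twice per page (once for the header, once for the ID); extracting each page's text once into a variable and reading both values from it would halve those invocations. Independently of any PDF tooling, the grouping step itself (accumulate page numbers until the ID changes) can be exercised on its own. A stand-alone sketch with invented IDs:

```shell
# Group consecutive page numbers that share the same ID, mimicking how
# the script above accumulates $pattern until the ID changes.
ids="8001 8001 8002 8002 8002 8003"   # one fake ID per page
prev="" pattern="" page=0 groups=""
for id in $ids; do
  page=$((page + 1))
  if [ -z "$prev" ] || [ "$id" = "$prev" ]; then
    pattern="$pattern$page "          # same settlement: extend the range
  else
    groups="$groups$prev:[$pattern] " # new ID: flush the previous range
    pattern="$page "
  fi
  prev=$id
done
groups="$groups$prev:[$pattern]"      # flush the final range
echo "$groups"
```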
Splitting a single large PDF file into n PDF files based on content and rename each splitted file (in Bash)
1,532,672,904,000
I'm using a CLI Arch Linux and I want to run a shell/bash script to show the status of my battery with acpi directly in the prompt string (PS1). I created the following shell script to show me the battery status: # Permition Acess: chmod +x loop.sh # run .sh: ./loop.sh i=true #COLOR: ORANGE='\e[33m' STOP='\e[0m' while ($i = true) do printf ${ORANGE} echo $(clear) echo $(acpi -b) sleep 1 printf ${STOP} done My idea is to connect the script to PS1 to keep the battery status always updated! My current PS1 is: PS1='[${OR}USER: \u ${B}TIME: \t ${C}DIR: \W ${RED}$(__git_ps1 " (%s)")]\n[${LG}$(acpi -b)${R}]\n\$ I'm calling acpi, but it only updates when I run some command
There is no portable way to do what you want, but a shell-specific method will probably work. The prompt variables (PS1, PS2, etc.) have two specific and distinct types of evaluation that are mostly portable: assignment expansion, which is exactly like any other variable assignment and does allow subcommand expansion, but is not suitable for battery monitoring as this expansion only happens once; and prompt expansion, which might not allow command expansion but is performed at every prompt display. Note that neither method provides a possibility for continuous battery monitoring; the best case is the battery status as of when the prompt was last displayed. Now for the non-portable methods, which will probably do what you want. Bash has two methods for executing arbitrary commands at prompt time: PROMPT_COMMAND and shopt promptvars. PROMPT_COMMAND is easy and straightforward: just set it to the command to run before showing the prompt. The shopt promptvars route is more complicated because the quoting gets tricky. The main disadvantage is that both methods are bash specific; other shells will differ.
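A minimal sketch of the PROMPT_COMMAND approach. Function and variable names here are made up, and since acpi may be absent on the machine, the helper falls back to a placeholder string:

```shell
# Rebuild PS1 before every prompt so the battery text is re-read each time.
battery_status() {
  acpi -b 2>/dev/null || echo "no battery"   # fallback if acpi is unavailable
}
set_prompt() {
  # Literal \[ \] mark the non-printing ANSI colour sequences for bash.
  PS1="[\[\e[33m\]$(battery_status)\[\e[0m\]]\n\$ "
}
PROMPT_COMMAND=set_prompt
set_prompt   # call once here so the sketch has a visible effect
echo "$PS1"
```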
CLI Battery Status Update on Prompt
1,532,672,904,000
I have many files in my directory. I'd like to listing, copying or moving file containing 'abc' AND 'xyz' in their name. How do I do this pattern matching with AND? The normal command: ls *abc* *xyz* only work with OR.
Use this way: ls *abc*xyz* *xyz*abc* When using mv or cp, you just need to specify the target directory with the -t option, since you are matching the files with wildcards and there can be more than one file to copy/move: cp -t /path/to/dest *abc*xyz* *xyz*abc* Or use find like: find \( -name '*abc*' -a -name '*xyz*' \) which is the same as find -name '*abc*' -name '*xyz*' as documented in man find: expr1 expr2 Two expressions in a row are taken to be joined with an implied "and"; expr2 is not evaluated if expr1 is false. expr1 -a expr2 Same as expr1 expr2. expr1 -and expr2 Same as expr1 expr2, but not POSIX compliant. You can add -exec ... to the command above to do whatever you want to do on the files found. find \( -name '*abc*' -a -name '*xyz*' \) -exec do-stuffs {} +
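The two-predicate find variant is easy to sanity-check on disposable files (file names invented for the sketch):

```shell
# Three test files; only the two containing both "abc" and "xyz" match.
tmp=$(mktemp -d)
touch "$tmp/abc_xyz.txt" "$tmp/xyz_then_abc.txt" "$tmp/abc_only.txt"
found=$(find "$tmp" -type f -name '*abc*' -name '*xyz*' | wc -l)
echo "matched: $found"
rm -rf "$tmp"
```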
Matching many pattern in file listing command ls? [duplicate]
1,532,672,904,000
I have just started learning Unix and came across a very basic doubt about command line arguments. Suppose in my script I do: echo $@ #Now this prints all the command line arguments args=$@ #Args array will take the command line argument array from $@ echo $args Here I have a doubt about the last statement: echo of the array name should print only the first-index element, so why is it showing the complete array? If I take a normal array in Unix, say the array name is ARR, and I use echo $ARR, it will show me the first element and not all elements. So why is the behaviour different with args above?
It's printing every element because you have set a variable and not an array. To set an array you would need to do: args=($@)
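One detail worth adding: `args=($@)` re-splits arguments that contain whitespace, while `args=("$@")` keeps each argument as one array element. A small bash sketch:

```shell
# Simulate two positional parameters, one containing a space.
set -- "one two" three
str=$@            # scalar: everything joined into one string
arr=("$@")        # array: one element per argument, spaces preserved
echo "string : $str"
echo "entries: ${#arr[@]}"
```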
Basic doubt in echo statement
1,532,672,904,000
Hi, I got interested in a solution posted here and put the suggested function in a bash script living at ~/.bin, a dir under PATH. Then I performed chmod +x verbteacher.sh for easy calling from anywhere on the command line, but it does not work. I tried to, kind of, re-open the question and also tried the suggestion of following the above-mentioned answer fairly closely, putting the function in the .bashrc file, but it still does not work for me (and it seems that is not the best practice (I'm sorry)), so I am just hereby asking for some more help. I would appreciate it.
If you want the conjugation values (in the link address, after /conjugations/ you may choose the language you want; in my case I chose French, /fra/), write this on the last line of .bashrc: conjfra () { curl -s "http://api.ultralingua.com/api/2.0/conjugations/fra/$1" | jq -r '.[] | {tense: .conjugations}' } Merci! EDIT: My bad! Sorry for that, I forgot to add the tab space at the start of the second line; now it is correct.
Looking for an e.g. bash solution to check English verb conjugation
1,532,672,904,000
I'm trying to manipulate files in a directory whose filepath includes a directory that starts with "$", for instance: git rm path/to/file/$dollarsigndirectory/anotherdirectory/file.format I'm getting the following error: fatal: pathspec 'path/to/file//anotherdirectory/file.format' did not match any files EDIT: I've already tried using \$dollarsigndirectory and it simply says there isn't a directory called '\$dollarsigndirectory' I've tried to troubleshoot but can't figure out why the '$' would make the directory invisible. Thanks!
You can surround the path with single quotes so that the $ is not expanded.
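The effect of the quoting is easy to see on a throwaway tree (the directory name `$dollardir` is made up for the sketch):

```shell
tmp=$(mktemp -d)
mkdir "$tmp/"'$dollardir'
touch "$tmp/"'$dollardir/file.txt'
# In double quotes, $dollardir is expanded as an (unset) shell variable
# and disappears from the path; in single quotes it stays literal.
unquoted="$tmp/$dollardir/file.txt"
quoted="$tmp/"'$dollardir/file.txt'
[ -e "$unquoted" ] && u=found || u=missing
[ -e "$quoted" ]   && q=found || q=missing
echo "double quotes: $u, single quotes: $q"
rm -rf "$tmp"
```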
macos - terminal deleting parts of filepath beginning with "$" [duplicate]
1,532,672,904,000
I was trying to follow this tutorial: https://www.shellhacks.com/check-website-availability-linux-command-line/ When using the curl -Is http://www.shellhacks.com | head -1 command, I am unable to get 200 OK for any website at all. It is either 302 Moved Temporarily, 301 Moved Permanently or 307 Temporary Redirect. I am looking to check if a particular website can process requests. When I read about 3xx, it says it is sort of a relocation. But then, doesn't that mean that my particular website can't process requests? It seems like the location it is relocated to would be processing my requests instead. How should I consider the 3xx cases?
Actually you are able to obtain 200 OK HTTP response, but you can't eventually see it with head -1. The crucial option is -L: -L, --location (HTTP/HTTPS) If the server reports that the requested page has moved to a different location (indicated with a Location: header and a 3XX response code), this option will make curl redo the request on the new place. If used together with -i, --include or -I, --head, headers from all requested pages will be shown. $ curl -LIs http://www.shellhacks.com HTTP/1.1 301 Moved Permanently Server: nginx Date: Tue, 13 Mar 2018 12:58:31 GMT Content-Type: text/html; charset=iso-8859-1 Connection: keep-alive Location: https://www.shellhacks.com/ X-Page-Speed: on Cache-Control: max-age=0, no-cache HTTP/1.1 200 OK Server: nginx Date: Tue, 13 Mar 2018 12:58:31 GMT Content-Type: text/html; charset=UTF-8 Connection: keep-alive Link: <https://www.shellhacks.com/wp-json/>; rel="https://api.w.org/" Set-Cookie: qtrans_front_language=en; expires=Wed, 13-Mar-2019 12:58:31 GMT; Max-Age=31536000; path=/ X-Page-Speed: on Cache-Control: max-age=0, no-cache
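As for how to treat 3xx: 2xx means success and 3xx means "ask elsewhere"; whether a redirect counts as "site up" is a judgment call for your check. A tiny classifier fed with hand-picked codes (no network involved; the verdict strings are invented):

```shell
# Map an HTTP status code to a rough availability verdict.
verdict() {
  case $1 in
    2??)     echo "up" ;;
    3??)     echo "up (redirects; follow with curl -L)" ;;
    4??|5??) echo "problem" ;;
    *)       echo "unknown" ;;
  esac
}
v200=$(verdict 200)
v301=$(verdict 301)
v503=$(verdict 503)
echo "$v200 / $v301 / $v503"
```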
Can't get 200 OK when checking URL?
1,532,672,904,000
Is there a way to get GNU coreutils ls (or any other open-source ls) to omit the trailing symbol (* for executable, / for directory, etc.) only when output is piped? The GNU ls has a --color[=WHEN] option accepting auto, to automatically show colors when output is not piped and omit the color control sequences when it is. I am looking for identical behavior regarding trailing symbols indicating file type.
Presumably you have an alias for ls that's unconditionally adding the -F (or --classify) option. I would work around that by creating a wrapper function that tests whether the stdout is a terminal or not; only add the -F option if the output is a terminal. function ls { if [ -t 1 ] then command ls -F "$@" else command ls "$@" fi } Adjust the other default options as you like.
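The `[ -t 1 ]` test is the same trick that `--color=auto` relies on, and it can be observed directly (function name invented):

```shell
# Report whether stdout is a terminal.  Piping through cat forces the
# "pipe" branch regardless of where this snippet is run.
out_mode() { [ -t 1 ] && echo terminal || echo pipe; }
piped=$(out_mode | cat)
echo "when piped: $piped"
```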
Coreutils (or otherwise) `ls`: don't append symbol indicating type when piped
1,532,672,904,000
I am working on some tshark filters. I need to split a pcap on a time basis for a particular SIP and DIP. I have tried editcap, but it can only split on a time basis; I cannot pass an IP address to editcap. I saw that tshark can do this: [root@ids01 snort-1]# tshark -r snort.log.1518688921 -w /tmp/pcap_tshark.pcap -Y "(frame.time >= "" Feb 17, 2018 16:00:00"") && (frame.time <= ""Feb 17, 2018 16:01:00"") && ip.addr==192.0.0.7" tshark: "17" was unexpected in this context Please see the bold part for the error. What is the issue with the filters? I am using CentOS 7.
The problem is with the usage of quotes; you need to protect the filter's inner double quotes, either with outer single quotes or with backslash escapes. Try this: tshark -r snort.log.1518688921 -w /tmp/pcap_tshark.pcap -Y '(frame.time >= "Feb 17, 2018 16:00:00") && (frame.time <= "Feb 17, 2018 16:01:00") && ip.addr==192.0.0.7' or this, which also permits using variables instead of hard-coded time values, for example inside a script: dbeg="Feb 17, 2018 16:00:00" dend="Feb 17, 2018 16:01:00" tshark -r snort.log.1518688921 -w /tmp/pcap_tshark.pcap -Y "(frame.time >= \"${dbeg}\") && (frame.time <= \"${dend}\") && ip.addr==192.0.0.7"
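The quoting itself can be verified without tshark by just printing the filter string the shell builds (values copied from the answer):

```shell
# Show the filter string exactly as tshark would receive it; the
# backslash-escaped quotes survive inside the double-quoted string.
dbeg="Feb 17, 2018 16:00:00"
dend="Feb 17, 2018 16:01:00"
filter="(frame.time >= \"${dbeg}\") && (frame.time <= \"${dend}\")"
echo "$filter"
```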
tshark filters in Centos 7
1,532,672,904,000
How do I delete lines after "/test1/end" that do not contain test1? test_long_sentence.txt: 20 /test1/catergory="Food" 20 /test1/target="Adults, \"Goblins\", Elderly, Babies, \"Witch\", Faries" 20 /test1/type="Western" 20 /test1/theme="Halloween" 20 /test1/end=category **This is some unwanted data blah blah blah** 20 /test1/Purpose= 20 /test1/my_purpose="To create a fun-filled moment" 20 /test1/end=Purpose ... Expected Output: 20 /test1/catergory="Food" 20 /test1/target="Adults, \"Goblins\", Elderly, Babies, \"Witch\", Faries" 20 /test1/type="Western" 20 /test1/theme="Halloween" 20 /test1/end=category 20 /test1/Purpose= 20 /test1/my_purpose="To create a fun-filled moment" 20 /test1/end=Purpose ... I tried: grep -A1 'end' test_long_sentence.txt| sed 'test1/!d' test_long_sentence.txt > output.txt
Try: $ awk '/test1/{f=0} !f{print} /test1\/end/{f=1}' sentence.txt 20 /test1/catergory="Food" 20 /test1/target="Adults, \"Goblins\", Elderly, Babies, \"Witch\", Faries" 20 /test1/type="Western" 20 /test1/theme="Halloween" 20 /test1/end=category 20 /test1/Purpose= 20 /test1/my_purpose="To create a fun-filled moment" 20 /test1/end=Purpose How it works When awk starts, any undefined variable is, by default, false. So, when awk starts f will be false. Awk will then read each line in turn and perform the following three commands: /test1/{f=0} For any line containing test1, we set variable f to false (0). When we are in a range of lines that we want to print f will be set to false. !f{print} If f is false, print the current line. /test1\/end/{f=1} For any line that contains test1/end, set f to true (1). This signals that we should not print the lines that follow until we reach a line that contains test1. Using variables awk -v a="test1" -v b="test1/end" '$0~a{f=0} !f{print} $0~b{f=1}' sentence.txt
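The flag technique can be replayed on inline data without touching the real file (a miniature version of the input):

```shell
# Same awk program as in the answer, fed a shortened sample.
out=$(printf '%s\n' \
  '20 /test1/theme="Halloween"' \
  '20 /test1/end=category' \
  'unwanted blah' \
  '20 /test1/Purpose=' |
  awk '/test1/{f=0} !f{print} /test1\/end/{f=1}')
echo "$out"
```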
Delete line after keyword1 if keyword2 does not exist
1,532,672,904,000
As I understand it, bash is a program like the Python interactive shell, which receives a command (or commands) on its input stream, executes them by calling Linux API functions, and gives the execution result on its output stream. The terminal is also a program, one that provides us some features like command history and highlighting; internally it uses a shell (bash). But do applications (like Nautilus) use /bin/bash, or do they communicate with Linux using its API?
Yes, programs may well use the shell, either explicitly or implicitly. See e.g. Stéphane's answer to an unrelated question. Their answer says, for example, that if the program uses the C library functions execlp() or execvp() to run a command, upon execve() returning ENOEXEC it will typically invoke sh on it ("it" being a shell script without an explicit interpreter specified, which is the context for that question). sh is a shell. An application that uses system() to execute a utility will also typically invoke a shell. I can't say anything specifically about Nautilus, but if it allows you to execute scripts of any kind, it most likely uses a shell for doing so. The rest of the application will probably use libraries for the GUI elements and other libraries for events, filesystem operations etc. These libraries are most likely written in C or a similar language and uses the C library, some of which interfaces with the operating system kernel for some operations. I highly doubt that the file manager itself is written in any sort of shell scripting language though, although it may well use shell scripts for startup or other operations.
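The ENOEXEC fallback mentioned above is easy to observe: a script with no #! line still runs, because the invoking shell re-executes it with sh when execve() fails. A throwaway sketch:

```shell
tmp=$(mktemp -d)
printf 'echo ran-anyway\n' > "$tmp/noshebang"   # note: no #! line
chmod +x "$tmp/noshebang"
result=$("$tmp/noshebang")   # execve fails with ENOEXEC; the shell falls back
echo "$result"
rm -rf "$tmp"
```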
Do programs like Nautilus uses shell?
1,532,672,904,000
For my systems programming course, I'm supposed to go through a piece of sample text and replace the most frequent word with another phrase. Unfortunately, I am only allowed to use the commands tr grep egrep sed awk uniq wc as well as piping. I have gotten so far as to find the most frequent word and wish to use it in SET1 for tr so that I can replace it with the other phrase. In order to do so I imagine that I have to filter out the line/word that is relevant with something like grep or sed. My question is then how I would pass that in as the first set for tr so that I could replace the phrase. I have no experience with awk.
You most likely don't want to use tr to do that, as tr only works on individual characters (or bytes): $ echo abc | tr cab taxi axt I would recommend taking a look into sed and especially the s/// (substitute) operator instead. As for passing the output of a program to the command line of another, the keyword is command substitution. (I won't go into further detail since this was homework...)
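The difference is easy to see side by side: tr maps character to character across the whole input, while sed replaces the pattern as a unit:

```shell
tr_out=$(echo "the cat sat" | tr 'cat' 'dog')    # c->d, a->o, t->g everywhere
sed_out=$(echo "the cat sat" | sed 's/cat/dog/') # whole-word substitution
echo "tr : $tr_out"
echo "sed: $sed_out"
# tr garbles every c, a and t in the line; sed changes only "cat".
```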
Using the output of a command as a set for tr
1,532,672,904,000
I am trying to get the distribution name and version number to enter into conky.  I am currently using the following rpm --query centos-release resulting in centos-release 7-4.1708.e17.centos.x86_64 How do I pare that down to just centos 7-4.1708.e17? After trying all the suggestions I ended up entering this into my conky ${font Roboto:bold:size=8}${goto 95}${color1}Distribution $alignr ${execi > 60 a=$(rpm --query centos-release) a=${a#centos-release } a=${a%%\.centos.*} echo "$a"} with this result centos-release-7-4.1708.e17
With sed: $ rpm --query centos-release | sed 's/^centos-release//;s/\.centos.*//' 7-4.1708.e17 With only shell: #!/bin/sh a=$(rpm --query centos-release) a=${a#centos-release } a=${a%%\.centos.*} echo "$a"
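The parameter-expansion variant can be tried on a hard-coded sample of the rpm output. (Note: if your rpm prints a hyphen after "centos-release" rather than a space, as the question's update suggests, strip the prefix "centos-release-" instead.)

```shell
# Sample string standing in for `rpm --query centos-release` output.
a="centos-release 7-4.1708.e17.centos.x86_64"
a=${a#centos-release }     # drop the package-name prefix
a=${a%%.centos.*}          # drop everything from ".centos." on
echo "$a"
```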
bash command to get distribution and version only
1,532,672,904,000
I've logged into my GoDaddy server through PuTTY, I'm connected as ted67942 which is not the root user. I'm trying to run basic commands like sudo, dpkg, su, etc. but they all return the "command not found" error. How do I either fix this or log in as the root user? I'm trying to install mod_reqtimeout on my webserver echo $PATH returns the following: /usr/local/jdk/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/opt/cpanel/composer/bin:/opt/puppetlabs/bin:/opt/dell/srvadmin/bin:/usr/local/bin:/usr/X11R6/bin:/home/ted67942/.local/bin:/home/ted67942/bin
GoDaddy seems to provide a somewhat restricted environment which does not include unlimited root access. According to GoDaddy documentation, the WebHost Manager (WHM) GUI includes a feature called "EasyApache (Apache Update)". Within EasyApache, select the gear icon ("Customize Profile"), then "Next Step" and "Exhaustive Options List". There will be a list of Apache and PHP modules that can be added to Apache configuration in GoDaddy's environment. This is probably the only way to add Apache modules in GoDaddy's environment, because it ensures that only modules that are deemed acceptable by GoDaddy can be used. If mod_reqtimeout is not listed there, you should probably contact GoDaddy's support and describe your needs.
Run basic administrative commands on GoDaddy
1,532,672,904,000
Do the CLUI (Command Line User Interface) and GUI (Graphical User Interface) utilize different TTYs, or do both of them share the same TTY? I understood in the past that they both share the same TTY, but I might be wrong. I got a bit confused when reading about it and saw different phrasings that made the CLUI/GUI-TTY issue look a bit confusing. I understand what a TTY machine is from history (60s/70s), but I don't know if a modern virtual TTY "bases" both the CLUI and the GUI, or if there is one TTY per each (one for the CLUI and one for the GUI), and my question is whether there really is. Update due to Sparhawk's comment: By CLI I mean either the CLUI I run from my GUI-including distro (like the Debian desktop CLUI) or my other no-GUI distro, like the Debian server CLUI or Ubuntu WSL.
CLUI: command line user interface GUI: graphic user interface These things mean what they mean, no more. These definitions don't include anything about a tty. For example, cmd.exe on Windows is also a CLUI, although it doesn't use any tty device (the concept doesn't exist on Windows). Tty means a virtual teletype console writer, which is the traditional name of the pseudo virtual terminals on the Unixes. By default (after boot), a character console runs on them, but you can connect anything to them. The best way to understand ttys is to think of them as network sockets: processes can listen on them, and also connect to them. In addition, there are various kernel APIs for user interaction: for example, if a virtual terminal closes unexpectedly, changes its size, activates or deactivates, then the processes attached to it get different signals. It is up to them what they do with these. For example, if an X server is running on tty7 and you switch to the character console (alt/ctrl/f1), it deinitializes the video card and switches back to character mode. Other processes, for example a command shell, can do totally different things.
Is the CLUI and GUI different TTYs? [duplicate]
1,532,672,904,000
I'm having a hard time figuring out how to pass the output of one command to another as an argument. Specifically, I want to list the extended attributes of a file in FreeBSD with lsextattr, and pass its output to rmextattr to remove all the extended attributes. Yes, I need to do this because rmextattr doesn't have a recursive option... I'm trying something like this without luck: # lsextattr -q user some_file.txt | rmextattr user "$1" some_file.txt rmextattr: some_file.txt: failed: Attribute not found I think lsextattr is working correctly, but I can't pass its output to rmextattr correctly!! # lsextattr -q user some_file.txt DosStream.com.apple.lastuseddate#PS:$DATA DosStream.AFP_AfpInfo:$DATA Please, help......
IIUC, rmextattr can only take one extended attribute at a time. So you will have to loop over the extended attributes that lsextattr returns and remove each one; something like this: for attr in $(lsextattr -q user some_file.txt) ;do rmextattr user $attr some_file.txt done (untested - I don't have access to a FreeBSD system at the moment). In response to the question in the comment: for file in $(find ...) ;do for attr in $(lsextattr -q user $file) ;do rmextattr user $attr $file done done I don't know what your criteria are for the files you want to consider, but you can experiment with find until you get exactly the list you want and then plug the resulting command into the $(find ...) part of the outer loop.
How to pass the output of previous command to next as an argument
1,532,672,904,000
I have this find command that compresses png files. find /path/to/folder -mtime -1 -mtime +0 -exec pngquant --ext .png -v --force 256 {} \; I've also tried using mmin like so find /path/to/folder -mmin -1440 -mmin +0 -exec pngquant --ext .png -v --force 256 {} \; The -1 and +0 in -mtime -1 -mtime +0 are variable, and can sometimes be -5 and +4, or so have you. This command never returns results. When I remove the mtime +0 or mmin +0 it brings the expected results, but I need to be able to control the value so I can pass values above 0 such as -5 and +4. How should I alter my find command to target files in a 24 hour period? I am using Ubuntu 14.04 if it matters.
If the hours are in variables you could do the following. find /path/to/src -type f -mmin -$((60 * $hourP)) -mmin +$((60*$hourN)) -exec pngquant --ext .png -v --force 256 {} \; It is also better to use + in place of \; as the -exec terminator, so that it runs pngquant a b c rather than pngquant a; pngquant b; pngquant c For example: hourP=5 hourN=4 find /path/to/src -type f -mmin -$((60 * $hourP)) -mmin +$((60*$hourN)) -exec pngquant --ext .png -v --force 256 '{}' +
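The arithmetic expansion just turns hours into minutes before find ever sees the options, which can be checked by printing the arguments (variable names taken from the answer):

```shell
# Show the -mmin arguments find would receive for a 4..5 hour window.
hourP=5
hourN=4
args="-mmin -$((60 * hourP)) -mmin +$((60 * hourN))"
echo "find would receive: $args"
```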
Using mtime or mmin to find files in last day
1,532,672,904,000
I have created a service file and placed it in /etc/systemd/system. It starts the service as a daemon at system startup. I don't want it to start at system startup; I want the service to start only when I run a command to start it. Thank you.
Extract from the Debian systemd documentation Show status of the service "example1": systemctl status example1 Enables "example1" to be started on bootup: systemctl enable example1 Disable "example1" to not start during bootup: systemctl disable example1 Start a Service example1 systemctl start example1
How to start a service in linux after running command not at start of the system?
1,532,672,904,000
Using the "jbossapp" user, I'm running this command to find ".stat" files which were created more than 3 minutes ago: find /opt/jboss/* -mmin +3 -name "*.stat" Recently a folder was created in the /opt/jboss/ directory by the root user. Now while using this command I'm getting 'permission denied' from that folder, which interrupts the search. How do I exclude the particular folder which has root privileges?
Use find /opt/jboss/* -type f -mmin +3 -name "*.stat" 2>/dev/null The 2>/dev/null will redirect the standard error output to the special file /dev/null to avoid displaying any errors. We also add -type f to look for files only. To exclude a directory, use something like find /opt/jboss/* -path /path/to/exclude -prune -o -type f -mmin +3 -name "*.stat" -print 2>/dev/null (the explicit -print is needed; without it the implicit -print applies to the whole expression and the pruned directory itself gets listed too). You can use -not -path as well: find /opt/jboss/* -not -path '/path/to/exclude/*' -type f -mmin +3 -name "*.stat" 2>/dev/null And if you want to exclude files owned by root, use: find . \! -user root .....
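The prune form can be checked on a scratch tree (directory and file names invented); note the explicit -print at the end of the expression:

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/open" "$tmp/secret"
touch "$tmp/open/a.stat" "$tmp/secret/b.stat"
# Only open/a.stat should be printed; secret/ is pruned entirely.
hits=$(find "$tmp" -path "$tmp/secret" -prune -o -type f -name '*.stat' -print)
echo "$hits"
rm -rf "$tmp"
```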
How to find a file which is created after 5 mins exclude one sub directory which is owned by root user
1,532,672,904,000
I am at fickur (the first one) and trying to move the inner fickur to digital with only one command line, but can't seem to figure this one out. Here is what I have tried so far: mv -v fickur/ ./klockor/armbandsur/digital results: mv: rename fickur/ to ./klockor/armbandsur/digital: No such file or directory
If you are inside the dir fickur (the rectangular one in the drawing), do mv fickur/ ../armbandsur/digital/
How to move a file from a subdirectory to another subdirectory located outside the first one?
1,532,672,904,000
I have ten files, text1.html...text10.html. The number 1234567890 appears in each file. How can I change 1234567890 to 0987654321 in each file from the terminal, without opening the files?
#!/bin/bash for i in `seq 1 10`; do sed -i 's/1234567890/0987654321/' text$i.html done If you're lazy, here it is in a for loop. ;)
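Since sed accepts several files in one invocation, the loop can also be collapsed into a single command. A sketch on scratch copies (assumes GNU sed for the -i flag):

```shell
tmp=$(mktemp -d)
for i in 1 2 3; do echo "num=1234567890" > "$tmp/text$i.html"; done
sed -i 's/1234567890/0987654321/' "$tmp"/text*.html   # all files at once
changed=$(grep -l 0987654321 "$tmp"/text*.html | wc -l)
echo "changed $changed files"
rm -rf "$tmp"
```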
Changing text from the Linux command line
1,532,672,904,000
I am trying to add a right click menu to odrive, using nautilus-actions and the sync agent. However, after setting the script up with the path "$HOME/.odrive-agent/bin/odrive" and parameters sync "%f" (Like shown in the documentation). This does not work, and setting it to show output gives me "$HOME/.odrive-agent/bin/odrive" sync "\"/home/username/odrive-agent-mount/Dropbox/Documents.cloudf"\"" The proper command is supposed to be "$HOME/.odrive-agent/bin/odrive" sync "$HOME/odrive-agent-mount/Dropbox/Documents.cloudf" How do I make it so that the \ is removed from within nautilus-actions
When using %f, don't add double quotes around it. Doing so will prompt the application to escape the double quotes in the string (that's where the backslashes comes from).
Nautilus-Actions Is Adding Backslash
1,532,672,904,000
I am looking for a command line tool that "converts" a pdf file (whose size is larger than a4) into a single pdf file that consists of multiple a4 pages. The new pdf file, when printed, should look like the original content without scaling the original. Searching the internet, I found pdfposter. Yet, it seems to require the size of the input pdf file which I don't know. So, is there a tool that does that.
I wrote a tool that can do what you want. I explained it over at askubuntu: https://askubuntu.com/a/1155892/394569 It's called plakativ and its main interfaces is a GUI. But it also has a command line interface that allows you to do what you want including specifying borders to glue the pages together. If you find a bug, please post it at https://gitlab.mister-muffin.de/josch/plakativ
Command line tool to create a pdf file with a4 sized pages from a poster pdf
1,495,530,814,000
For encryption I am using openssl aes-256-cbc -a -salt -in abc.txt -out abc.txt.enc So now, how do I encrypt a file from my desktop when the file is on a server like \10.113.123.15?
If you have SSH access to the host: $ ssh username@server openssl aes-256-cbc -a -salt -in abc.txt -out abc.txt.enc This will connect to the host server as username and run the specified command. The openssl command will write to standard output if no output file is specified, which means you may store the result in a local file with $ ssh username@server openssl aes-256-cbc -a -salt -in abc.txt >abc.txt.enc
How to encrypt a file which is on a server [closed]
1,495,530,814,000
When opening files in Firefox from konsole, the autocomplete function only works for certain file extensions like html, htm and the like. For other extensions, I have to type out the full name instead. How can I configure firefox (or konsole) to give me the same behavior for other files? Specifically I'm asking for md files.
It will depend on the shell you are using. For example for bash you will find completion functions inside the /etc/bash_completion.d directory. You probably have one regarding firefox listing the extensions it looks after, and you should change it to add your .md extension.
Enable command line completion of *.md files for firefox in konsole
1,495,530,814,000
As I was doing a project, I came to know about how command line can be read using ncurses and GNU's readline library. However I could not find either in Ubuntu (16.04). I am curious to know how Ubuntu processes the command as the user types? For eg: how does it detect up/ down arrow being pressed, how is Tab detected etc?
Ubuntu the operating system does not read command lines. Some programs read command lines. For example, bash is a command interpreter (also known as a shell) which reads command lines. When the shell is interactive and reads command lines from a terminal it uses the GNU readline library.

$ sudo apt-get -y install libreadline6-dev readline-doc
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  libtinfo-dev
The following NEW packages will be installed:
  libreadline6-dev libtinfo-dev readline-doc
0 upgraded, 3 newly installed, 0 to remove and 1 not upgraded.
Need to get 299 kB of archives.
After this operation, 1,233 kB of additional disk space will be used.
...

$ sudo dpkg-query -l '*readline*'
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name           Version      Architecture Description
+++-==============-============-============-=================================
un  libreadline-co <none>       <none>       (no description available)
un  libreadline-gp <none>       <none>       (no description available)
un  libreadline4   <none>       <none>       (no description available)
ii  libreadline5:a 5.2+dfsg-3bu amd64        GNU readline and history librarie
un  libreadline5-d <none>       <none>       (no description available)
ii  libreadline6:a 6.3-8ubuntu2 amd64        GNU readline and history librarie
ii  libreadline6-d 6.3-8ubuntu2 amd64        GNU readline and history librarie
un  libterm-readli <none>       <none>       (no description available)
un  libterm-readli <none>       <none>       (no description available)
un  php-readline   <none>       <none>       (no description available)
ii  php7.0-readlin 7.0.13-0ubun amd64        readline module for PHP
ii  readline-commo 6.3-8ubuntu2 all          GNU readline and history librarie
ii  readline-doc   6.3-8ubuntu2 all          GNU readline and history librarie
un  tcl-tclreadlin <none>       <none>       (no description available)

$ cat trl.c
#include <stdio.h>
#include <stdlib.h>
#include <readline/readline.h>

int main(void)
{
    char * line = readline("Enter some text: ");
    if (line) {
        printf("You have entered \"%s\"\n", line);
    }
    return EXIT_SUCCESS;
}

$ gcc -Wall trl.c -o trl -lreadline
$ ./trl
Enter some text: Some text to be read by readline()
You have entered "Some text to be read by readline()"
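Not part of the original answer, but a quick way to experiment without compiling anything: bash's builtin read can itself use readline when given the -e flag, which lets you try the line editing (arrow keys, Tab completion) interactively. The prompt text here is arbitrary.

```shell
# bash only: -e hands line input to readline (when stdin is a terminal),
# -p sets the prompt. Run interactively to get arrow-key editing and Tab completion.
read -e -p "Enter some text: " line
echo "You have entered \"$line\""
```

When the input is not a terminal, -e simply falls back to a plain line read, so the snippet also works in scripts and pipes.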
How reading command line happens in Ubuntu [duplicate]
1,495,530,814,000
How can I print the line numbers of differing records? Example: compare file1 with file2 and print the line numbers of the records in file2 that are not present in file1.

file1:

userD
user3
userA
user1
userB

file2:

user3
userB
userX
user1
user7

Expected result: the differences in file2 are at line numbers 3 and 5.
bash-4.1$ cat file1
userD
user3
userA
user1
userB
bash-4.1$ cat file2
user3
userB
userX
user1
user7
bash-4.1$ awk 'NR==FNR{Arr[$0]++;next}!($0 in Arr){print FNR}' file1 file2
3
5
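For comparison (not from the answer above), grep alone can do the same, assuming the records are plain literal strings: -f reads the patterns from file1, -F and -x force whole-line literal matches, -v inverts the match, and -n prefixes each surviving line with its number.

```shell
printf '%s\n' userD user3 userA user1 userB > file1
printf '%s\n' user3 userB userX user1 user7 > file2

# line numbers in file2 of records absent from file1
grep -nvxFf file1 file2 | cut -d: -f1
# prints 3 and 5
```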
compare two files and print unmatched line number?
1,495,530,814,000
sort -k 2 filename.txt sorts by the second field, but what would the following command mean: sort -k 3.3,3.5? For example, given this data:

Man in Winter England 1980.12.02
Richrd Fritz Scottland 1960.12.18
Max Winter GB 1955.12.09
Luther Arnold England 1990.05.12
Sebastian Kalle USA 1980.12.14

What result does sort -k 3.3,3.5 data.txt produce for this list, and why? And how can you sort when the key position has two decimal numbers in general?
Let's assume there are no tabs in the input. The interpretation of the command is pretty tricky: sort -k3.3,3.5 means "sort by a substring of the third field, from its third to its fifth character", but the counting begins at the first whitespace before the field, as mentioned in man sort:

    KEYDEF is F[.C][OPTS][,F[.C][OPTS]] for start and stop position,
    where F is a field number and C a character position in the field;
    both are origin 1, and the stop position defaults to the line's
    end. If neither -t nor -b is in effect, characters in a field are
    counted from the beginning of the preceding whitespace.

Run the sort under LC_ALL=C to avoid the locale influencing the sort order. Note how the order changes if you add one more character, i.e. LC_ALL=C sort -k3.3,3.6

Here's a short Perl script that shows what part of the input is used for sorting:

#!/usr/bin/perl
use warnings;
use strict;
use feature qw{ say };

my $field_index = 3;
my $start = 3;
my $stop  = 5;   # Change to 6 to explain the different order.

while (my $line = <>) {
    chomp $line;
    my @fields = $line =~ /(\s*\S*)/g;
    my $length_before = 0;
    $length_before += length $fields[$_] for 0 .. $field_index - 2;
    my $from = $start - 1 + $length_before;
    my $to   = $stop + $length_before;
    $_ > length $line and $_ = length $line for $from, $to;
    substr $line, $to, 0, '>>';
    substr $line, $from, 0, '<<';
    say $line;
}

Output for 3.3,3.5:

Luther Arnold << >>England 1990.05.12
Man in << >>Winter England 1980.12.02
Max Winter << >>GB 1955.12.09
Richrd Fritz << >> Scottland 1960.12.18
Sebastian Kalle << >> USA 1980.12.14

Output for 3.3,3.6:

Richrd Fritz << >>Scottland 1960.12.18
Sebastian Kalle << >>USA 1980.12.14
Luther Arnold << E>>ngland 1990.05.12
Max Winter << G>>B 1955.12.09
Man in << W>>inter England 1980.12.02
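To make the whitespace-counting rule concrete (this demo is mine, not part of the original answer): without -b, the character positions of a key are counted from the start of the blanks preceding the field, so padding changes the key; with -b, the blanks are skipped before counting.

```shell
# Field 2 of the first line is "  10" (two leading blanks); of the second, " 9".
# Without -b, -k2.2,2.2 picks ' ' vs '9', so "x  10" sorts first.
printf 'x  10\ny 9\n' | LC_ALL=C sort -k2.2,2.2

# With -b the blanks are skipped: the keys become '0' vs an empty key
# (position 2 is past the end of "9"), so "y 9" sorts first.
printf 'x  10\ny 9\n' | LC_ALL=C sort -b -k2.2,2.2
```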
sort command understanding the logical sequence
1,495,530,814,000
I want to pipe part of a very large text file into downstream work (Python). Basically, I want to get all the odd lines and the first n characters of the even lines, while keeping the line order. The reason is that the even lines are very, very long, but I only need their first few characters. This makes reading the file into Python much faster.
Here is a solution in awk:

$ cat testfile
foo
asdkjasjdka
bar
kjsdksjdkssd
$ awk -v n=2 'NR % 2 == 1 { print } NR % 2 == 0 { print substr($0, 1, n) }' testfile
foo
as
bar
kj
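The same effect (not in the original answer) can be had with sed: the n command prints the current (odd) line and loads the next one, and the substitution then keeps only the first two characters of that even line before the cycle's auto-print.

```shell
# n prints the odd line and reads the even one; the s/// truncates it to 2 chars
printf 'foo\nasdkjasjdka\nbar\nkjsdksjdkssd\n' |
    sed 'n;s/^\(..\).*/\1/'
# prints foo, as, bar, kj (one per line)
```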
pipe part of a text file into downstream work (python)
1,495,530,814,000
With the cal command I noticed you can use:

cal -3

which displays the current month with one month before and one after. Is there an easy way to show another number? The following doesn't work:

cal -5

Are the only options a single month, 3 months, or a full year? Or is there a simple way to show a number of months above 3 but below 12?
I hate to answer my own question. It was so obvious after reading man cal.

cal -A 4

The above displays 5 months, but not in the same way cal -3 does (with the current month in the middle). Instead it starts with the current month and adds 4 ahead.
Display Set Number of Months with cal
1,495,530,814,000
I'm maintaining a server which runs mailman. In it I find a crontab which looks like the following:

0 8 * * * list [ -x /usr/lib/mailman/cron/checkdbs ] && /usr/lib/mailman/cron/checkdbs
0 9 * * * list [ -x /usr/lib/mailman/cron/disabled ] && /usr/lib/mailman/cron/disabled
...

When I type list I get No command 'list' found. My searches for "crontab list", "linux list command", "mailman cron list" bring up results for listing things. What does list in crontab do? What command is list referring to?
Lines in the system crontab (which is what I think you're looking at) have six fixed fields plus a command, in the form:

minute hour day-of-month month day-of-week user command

This is different from the per-user crontab which lacks the user field. My guess is that list is the mailman user on that system. This user is usually called mailman, but for whatever reason someone thought list was better (more generic?).
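To illustrate the six-fields-plus-command layout (my own sketch, using a made-up cron line): since the user is always field 6 in the system crontab, awk can pull it out directly.

```shell
# hypothetical system-crontab line: minute hour dom month dow user command
line='0 8 * * * list /usr/lib/mailman/cron/checkdbs'

# field 6 is the user the command runs as
echo "$line" | awk '{print $6}'
# prints: list
```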
What does list in crontab do?
1,495,530,814,000
I have a folder "all_images/" with more than 1000 image files named as "Image1.tif", "Image2.tif" and so on.. I have a text file "extract_images_list.txt" which is a list of images that I want to extract from this folder. Example: Image23.tif Image100.tif Image248.tif I want to move only those files mentioned in my text file to another folder "extract_images/" I could only think of rm (Image1|Image2|Image3|...|...|....|) where I would provide the images that I don't want. Is there a better way of doing this?
With the caveat that this solution can't possibly handle things like the Line Feed character being in a filename:

mkdir extract_images 2>/dev/null
while IFS= read -r file; do
    mv "$file" extract_images
done < extract_images_list.txt

This goes through extract_images_list.txt line-by-line by reading them into the file variable (the -r argument is required to make it treat backslashes as literal backslashes, and IFS= makes it not strip whitespace), then moves each line to the extract_images directory.
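An alternative sketch (not from the answer above, and GNU-specific): xargs with -d '\n' splits the list on newlines only, so names containing spaces survive, and GNU mv's -t flag lets the target directory come first. Newlines in filenames remain a problem, as before.

```shell
mkdir -p extract_images
# read one filename per line; -t names the destination directory up front
xargs -d '\n' mv -t extract_images < extract_images_list.txt
```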
subset files in a folder based on a list
1,495,530,814,000
Input:

X Y
1 11
1 12
2 21
2 22

Desired output:

11 12
21 22

I want to transpose the CSV file by the value of column 1. In this example, for X = 1, transpose the column (11, 12)^T to the row (11, 12); for X = 2, transpose (21, 22)^T to (21, 22).
perl (note the braces are deliberately unbalanced: -lane implies -n, which wraps the code in while (<>) { ... }, so the lone } closes that implicit loop and the wrapper's final brace closes the END block):

perl -lane '
    push @{$rows{$F[0]}}, $F[1] if $. > 1 }
    END {
        $, = " ";
        print @{$rows{$_}} for (sort keys %rows);
' file

awk, assumes input is sorted on column 1:

awk '
    NR == 1 {next}
    NR == 2 {key = $1}
    $1 != key {print ""; key = $1}
    {printf "%s ", $2}
    END {print ""}
' file
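If the input is not sorted on column 1, a variant of the awk approach (my own sketch) can remember the order in which keys first appear and group values per key:

```shell
# order[] records each key the first time it is seen;
# row[] accumulates that key's second-column values
printf 'X Y\n1 11\n1 12\n2 21\n2 22\n' |
awk 'NR == 1 {next}
     !($1 in row) {order[++n] = $1}
     {row[$1] = (row[$1] == "") ? $2 : row[$1] " " $2}
     END {for (i = 1; i <= n; i++) print row[order[i]]}'
# prints:
# 11 12
# 21 22
```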
Transpose CSV file by the value of a column
1,495,530,814,000
xev | awk -F'[ )]+' '/^KeyPress/ { a[NR+2] } NR in a { printf "%-3s %s\n", $5, $8 }'

When I use xev there is only a certain bit of information I want. The normal output of xev for keycode info looks like this...

KeyPress event, serial 48, synthetic NO, window 0x1600001,
    root 0xf6, subw 0x0, time 754405, (348,566), root:(349,620),
    state 0x0, keycode 40 (keysym 0x64, d), same_screen YES,
    XLookupString gives 1 bytes: (64) "d"
    XmbLookupString gives 1 bytes: (64) "d"
    XFilterEvent returns: False

KeyRelease event, serial 48, synthetic NO, window 0x1600001,
    root 0xf6, subw 0x0, time 754488, (348,566), root:(349,620),
    state 0x0, keycode 40 (keysym 0x64, d), same_screen YES,
    XLookupString gives 1 bytes: (64) "d"
    XFilterEvent returns: False

The result of the awk script would only return:

40 d

This made me want to learn awk :) So after learning about NR and doing a few tutorials, I am now trying to figure this out.

First, -F just divides into fields, in this case by '[ )]+'. I think this is a regex for one or more spaces or closing parentheses. I do not understand this. I do not see any spaces before parentheses. Also, I do not know what a plain space in a regex bracket expression does here, because I have only learned about whitespace tools such as \s.

So I wanted to see which fields display with $5 and $8, because it didn't look right in my analysis and I was confused!!

echo "state 0x0, keycode 12 (keysym 0x33, 3), same_screen YES," | awk '{print $8}'
same_screen

echo "state 0x0, keycode 12 (keysym 0x33, 3), same_screen YES," | awk '{print $5}'
(keysym

edit: So what is this printf "%-3s %s\n", $5, $8}?? Why is the output so different than my echo example above? Obviously, this is coming from the magic of /^KeyPress/ { a[NR+2] } NR in a. Some sort of an array and a loop. I look at NR+2 and it makes me think: since NR starts at 1 when awk starts, adding 2 would make it the third line. This looks right since all of the info I want is on the third line. What is going on with a[NR+2]? With NR in a { printf... }? I understand printf; I understand for loops. The way NR is used here baffles me. I guess the real question is: what is happening with 'a'? Is it a predefined thing I don't know about?
You seem to have correctly deduced what /^KeyPress/ {a[NR+2]} NR in a { ... } does:

/^KeyPress/ {a[NR+2]} creates an (empty-valued) element in array a with index NR+2, when the start of line NR matches the string KeyPress.

NR in a is therefore true for the line two lines below where /^KeyPress/ matched.

In that respect, it could perhaps have been written more transparently as

awk -F'[ )]+' '/^KeyPress/ {n=NR+2} NR==n { printf "%-3s %s\n", $5, $8}'

A possibly more tricky question is why the fields to be printed are $5 and $8 rather than $4 and $7; that's because the treatment of initial whitespace is different when using a non-default field separator. From the Default Field Splitting section of the GNU awk manual:

    Fields are normally separated by whitespace sequences (spaces, TABs,
    and newlines), not by single spaces. Two spaces in a row do not
    delimit an empty field. The default value of the field separator FS
    is a string containing a single space, " ". If awk interpreted this
    value in the usual way, each space character would separate fields,
    so two spaces in a row would make an empty field between them. The
    reason this does not happen is that a single space as the value of
    FS is a special case—it is taken to specify the default manner of
    delimiting fields.

    If FS is any other single character, such as ",", then each
    occurrence of that character separates two fields. Two consecutive
    occurrences delimit an empty field. If the character occurs at the
    beginning or the end of the line, that too delimits an empty field.
    The space character is the only single character that does not
    follow these rules.
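The effect is easy to see on one of the indented xev lines (this demo is mine, not part of the answer): with -F'[ )]+' the leading blanks produce an empty $1, shifting every field by one compared to the default splitting, which strips leading blanks.

```shell
line='    state 0x0, keycode 40 (keysym 0x64, d), same_screen YES,'

# default FS strips the leading blanks, so the keycode is $4
echo "$line" | awk '{print $4}'
# prints: 40

# a custom FS does not, so $1 is empty and the keycode moves to $5;
# the ')' in the separator class also isolates the key name as $8
echo "$line" | awk -F'[ )]+' '{print $5, $8}'
# prints: 40 d
```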
Deciphering this AWK script
1,495,530,814,000
By default, when we copy and paste a file in the same directory, Ubuntu creates a duplicate file and names it origname (copy).ext. I want to rename all files with such names to origname_copy_02082016.ext, i.e. with today's date at the end, just before the extension. How can I do that with a regex and the rename command?
There are several rename(1)s out there, and they use different sets of options. Assuming your rename(1) supports Perl expressions, this should work:

rename -n "s/ \(copy\)/_copy_$(date +%d%m%Y)/" *

The -n option shows you what rename(1) would do without actually renaming anything. Remove -n when you're happy with the result.
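If your rename(1) is a variant without Perl-expression support, a plain shell loop does the same job (my own sketch; it assumes every matching file has an extension):

```shell
for f in *' (copy).'*; do
    [ -e "$f" ] || continue                 # skip the literal pattern if nothing matches
    ext=${f##*.}                            # extension after the last dot
    base=${f%" (copy).$ext"}                # name before " (copy).ext"
    mv -- "$f" "${base}_copy_$(date +%d%m%Y).$ext"
done
```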
find files containing space and specific string in their filename and rename it