| date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,574,835,605,000 |
I'm looking for the cflags for the Turion X2 M500 processor.
I'm doing
grep -m1 -A3 "vendor_id" /proc/cpuinfo
and the output is:
vendor_id : AuthenticAMD
cpu family : 16
model : 6
model name : AMD Turion(tm) II Dual-Core Mobile M500
|
If your build environment is on the turion x2 machine:
Step 1: Assuming GNU gcc/g++, -mtune=native -march=native will build code optimized for the machine on which the compiler is run, though I don't know how to tell the compiler to dump the flags it chose.
If you want to cross-compile, you'll need to examine the capability flags for your processor:
Step 1: On your turion x2 machine: cat /proc/cpuinfo | grep ^flags | sort -u
Step 2: Find the documentation for the -march directive and examine the possible cpu families for which directives exist. For AMD cpus (I have a Turion x2 L310 notebook, but, sadly it's running Windows 10), look at the k8 or k8-sse targets. According to the documentation for the 4.5.3 GNU compiler, those have the following characteristics:
k8, opteron, athlon64: AMD K8 core based CPUs with x86-64 instruction set support. (This supersets MMX, SSE, SSE2, 3DNow!, enhanced 3DNow! and 64-bit instruction set extensions.)
k8-sse3, opteron-sse3, athlon64-sse3: Improved versions of k8, opteron and athlon64 with SSE3 instruction set support.
To find out what architectures your compiler really supports, do this command (I'm assuming c++ is the target language):
g++ --help=target
Hopefully that's helpful.
| what are the cflags of this microarchitecture? |
1,574,835,605,000 |
When submitting jobs, I'm getting Exit codes returned, but I have to hit the return key for them to be printed to the log.
1. prompt_line/location sas query.sas &
2. [1] 66682
3. prompt_line/location
4. [1]+ Exit 1 nice -n 19 opt/sas/sashome/server/SASFoundation/9.4/sas 99query.sas
5. prompt_line/location
I submit the sas code 'query.sas' (line 1).
Line 2 is printed instantly, confirming I've submitted the job, and a new prompt line (line 3) appears immediately as well.
I then have to hit the return key for the exit status to be printed (line 4), after which a new prompt line appears (line 5).
Where 'sas' is an alias for
nice -n 19 opt/sas/sashome/server/SASFoundation/9.4/sas
Is there a way to have a new prompt line put to the command line when an exit status is returned?
|
If you are using a sh-compatible shell as your interactive shell (such as bash), you may use set -b:
Report the status of terminated background jobs
immediately, rather than before the next primary prompt.
This is effective only when job control is enabled.
In bash, this is equivalent to set -o notify.
| New prompt line not automatically put to command line upon exit code |
1,574,835,605,000 |
Raw input:
➜ datatest tree
.
├── a
│ ├── README.md
│ ├── code
│ └── data
│ └── apple.csv
├── archive.sh
├── f
│ ├── README.md
│ ├── code
│ └── data
│ ├── Archive.zip
│ ├── a.csv
│ ├── b.xlsx
│ └── c.xlsx
└── toolbox
└── tool.py
7 directories, 9 files
Output:
➜ datatest tree
.
├── a
│ ├── README.md
│ ├── code
│ └── data
│ ├── Archive.zip
│ └── apple.csv
├── archive.sh
├── f
│ ├── README.md
│ ├── code
│ └── data
│ ├── Archive.zip
│ ├── a.csv
│ ├── b.xlsx
│ └── c.xlsx
└── toolbox
└── tool.py
7 directories, 10 files
In each /data subfolder, files should be compressed into Archive.zip, except where an Archive.zip already exists (like the f folder).
Trying:
Currently I check with the tree command whether a /data subfolder lacks an Archive.zip, then run zip -r Archive.zip ./* there manually, which is inconvenient.
Hope:
How do I achieve this with a single command or a script? I'm on OS X (10.12.6).
|
Something like this maybe?
find . -type d -name data \
\! -exec test -f {}/Archive.zip ';' \
-execdir zip -rj data/Archive.zip data ';'
This would locate each data directory (first line).
The \! -exec test -f {}/Archive.zip ';' would filter out any data directory that already contains a file called Archive.zip.
This line may be replaced by \! -execdir test -f data/Archive.zip ';'.
The last -execdir would execute the given zip command from within the parent directory of the data directory. This would create data/Archive.zip containing the files in data (with no path attached to the archived filenames).
This is similar to my answer to your previous question, but with the test for existence of data/Archive.zip inserted.
| How to compress all files from all subfolders if there is no `Archive.zip` in subfolder? |
1,574,835,605,000 |
For example suppose we have a file called input.txt which contains
100 John Doe LEC05 12356
132 Carol Bon LEC05 156
122 Cavar Liktik LEC01 136
...
This command should find everyone in LEC05 and print out their first names in sorted order in a file called output.txt
The command should be a one-line command (with pipes).
I'm not sure how it would be done.
see if LEC05 | find first name at index 1 | sort < input.txt > output.txt
How do I do the see if LEC05 | find first name at index 1 part?
|
With awk:
awk '$4 == "LEC05" { print $2 }' /path/to/inputfile | sort > outputfile
With grep and cut:
grep 'LEC05' /path/to/inputfile | cut -d ' ' -f2 | sort > outputfile
(Note the -d ' ': the fields here are space-separated, while cut defaults to tab as its delimiter.)
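A self-contained run of the awk variant, with the sample rows from the question written to a temporary file (paths are illustrative):

```shell
# Build the sample input from the question.
printf '%s\n' \
  '100 John Doe LEC05 12356' \
  '132 Carol Bon LEC05 156' \
  '122 Cavar Liktik LEC01 136' > /tmp/input.txt

# Keep rows whose 4th field is LEC05, print the first name, sort.
awk '$4 == "LEC05" { print $2 }' /tmp/input.txt | sort > /tmp/output.txt
cat /tmp/output.txt
```

This prints Carol then John.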
| One line shell command that finds all students in LEC05 and prints their first name in sorted order |
1,574,835,605,000 |
This question is similar to How to show lines after each grep match until other specific match?
I want to match a particular pattern in lines before another pattern match.
Here I want to get the file from a given host. Each file can have multiple hosts.
Hence I don't have a fixed number of lines before I get to the hostfile tag for a given host.
Context:
...
...
<hostfile file:abc.txt>
<host> abc.com <\host>
<host> qwe.com <\host>
<host> xyz.com <\host>
<\hostfile>
...
<hostfile file:xyz.txt>
<host> asd.com <\host>
<\hostfile>
...
...
Example match
Input: xyz.com
Output: abc.txt
Input: asd.com
Output: xyz.txt
Using awk or sed or any other command-line tool.
|
Another awk variation:
/^<hostfile file:/ {
output=substr($2, 6, index($2, ">") - 6);
}
/<host>/ && $0 ~ pattern {
print output
}
Call it such as:
$ awk -v pattern='xyz.com' -f findit.awk contextfile
abc.txt
$ awk -v pattern='asd.com' -f findit.awk contextfile
xyz.txt
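Put together as a self-contained sketch (the sample data and the script are written to /tmp here; paths are illustrative):

```shell
# Recreate the sample context from the question.
cat > /tmp/contextfile <<'EOF'
<hostfile file:abc.txt>
<host> abc.com <\host>
<host> qwe.com <\host>
<host> xyz.com <\host>
<\hostfile>
<hostfile file:xyz.txt>
<host> asd.com <\host>
<\hostfile>
EOF

# The awk script: remember the current hostfile name, print it when a
# <host> line matches the requested pattern.
cat > /tmp/findit.awk <<'EOF'
/^<hostfile file:/ {
    output = substr($2, 6, index($2, ">") - 6);
}
/<host>/ && $0 ~ pattern {
    print output
}
EOF

awk -v pattern='xyz.com' -f /tmp/findit.awk /tmp/contextfile   # prints abc.txt
```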
| How to match a pattern in lines before another pattern match |
1,574,835,605,000 |
I'm trying to learn how to parse files using Linux commands and tools. I'm always confused about how to best leverage grep/awk/sed.
Here is a specific use case.
I have a log file that contains the following strings:
Config Server received a Connection Establishment with an invalid public key, closing connection. Agent Identifier: SRV3 Socket IP: 192.168.2.6
Config Server received a Connection Establishment with an invalid public key, closing connection. Agent Identifier: TESTSRV4 Socket IP: 10.1.2.3
Config Server received a Connection Establishment with an invalid public key, closing connection. Agent Identifier: SRV1 Socket IP: 192.168.2.15
Config Server received a Connection Establishment with an invalid public key, closing connection. Agent Identifier: TESTSRV2 Socket IP: 10.1.2.4
My goal is to extract the host name that appears after "Agent Identifier" and the associated IP address from each line, and export them to a txt file. What would be the best way to go about it?
|
sed approach:
sed -n 's/.* Agent Identifier: \(.*\) Socket IP: \(.*\)/\1 \2/p' inputfile > host_list.txt
host_list.txt contents (cat host_list.txt):
SRV3 192.168.2.6
TESTSRV4 10.1.2.3
SRV1 192.168.2.15
TESTSRV2 10.1.2.4
| Extract data from log file |
1,574,835,605,000 |
I have a script that is run like this:
curl https://example.com/install.sh | zsh -
The script needs to read user input:
read "human_name?Your human name ?"</dev/tty
The problem is that the user sees the prompt and can enter their response, but the response is not stored in human_name.
Example:
cat <<EOM | zsh -
read "human_name?Your human name ?"</dev/tty
if [ -z "$human_name" ] ; then echo "tears" ; else echo "HI" $human_name; fi
EOM
Results in:
Your human name ?Pat
tears
Any guidance?
|
You can always read from the terminal by redirecting from /dev/tty, as long as the program is not a background job. If it's a background job, it'll be paused by a SIGTTIN until it gets switched to the foreground.
The problem with your script is not reading from the terminal, but what you do with what you've read. You used a here document with interpolation, so $human_name is interpolated while constructing the here document, and it's empty at the time. You need to either use a here document without interpolation or quote the dollar signs so that the shell you run with zsh - sees and parses them.
cat <<'EOM' | zsh -
read "human_name?Your human name ?"</dev/tty
if [ -z "$human_name" ] ; then echo "tears" ; else echo "HI" $human_name; fi
EOM
| Reading from tty in piped shell |
1,574,835,605,000 |
When looking for txt files, I run this command:
find . -name "*.txt" -print
This gives me a list of all the text files beneath current directory.
However, find . -name *.txt -print gives me the following error:
find: paths must precede expression: mein.txt
Is this the generally expected behavior? What difference do
the quotation marks make?
|
Within a token that is not quoted, it is your shell that will perform
expansion, not the command that you are running.
This means that, when you enter find . -name "*.txt" -print, then
find receives the literal *.txt as one of its parameters, and uses
this pattern as the argument to its -name option, which will match
the names of files found against it before applying -print.
On the other hand, when you enter find . -name *.txt -print, the
shell passes the expanded version of *.txt to find. Several cases
are possible:
There are no files matching *.txt in the current directory: find
receives a literal *.txt (assuming default bash settings);
there is exactly one file matching *.txt in the current
directory; let's say it is a.txt: find receives this file name,
and matches all files named a.txt found starting at the current
directory;
several files match *.txt in the current directory (this appears
to be your case): -name receives the first one as its parameter,
and the others are further path parameters to find, which complains
about not being given all paths before the expression.
This is the expected behavior.
Let's assume the following file hierarchy:
.
├── a.txt
├── b.txt
├── c.txt
└── foo
├── a.txt
├── b.txt
└── c.txt
The actual parameters that find receives in each case can be observed
by replacing the call to find with printf '%s\n', which will print
each expanded argument on its own line:
$ printf '%s\n' . -name "*.txt" -print
.
-name
*.txt
-print
$ printf '%s\n' . -name *.txt -print
.
-name
a.txt
b.txt
c.txt
-print
As you can see, the second invocation that you posted is equivalent,
given the existing files, to find . -name a.txt b.txt c.txt -print.
| In what way does quoting parameters to `find` matter? [duplicate] |
1,574,835,605,000 |
I have some files under /mainFolder/test:
abcd.log.2017_01_26_23_30.0
abcd.log.2017_01_26_23_35.0
abcd.log.2017_02_20_23_10.0
xyz1.log.2017-02-01
xyz2.log.2017-03-11
From these files, I need a file like abcd.log.2017_01_26_23_30.0. To search, I am trying:
myRegex="[0-9]{4}_[0-9]{2}_[0-9]{2}_[0-9]{2}_[0-9]{2}.[0-9]{1}"
realPath="/mainFolder/test/abcd.log.2017_01_26_23_30.0"
[[ $realPath =~ $myRegex ]] && echo "It is matching" || echo "Does not match"
After getting the files, I need to extract the dates in format yyyy-mm-dd (that is I need 2017-01-26).
How can I do this?
|
Using capture groups and $BASH_REMATCH to extract bits of strings:
for name in *.log.*; do
if [[ "$name" =~ \.([0-9]{4})_([0-9]{2})_([0-9]{2}) ]]; then
printf '%s-%s-%s from "%s"\n' \
"${BASH_REMATCH[1]}" \
"${BASH_REMATCH[2]}" \
"${BASH_REMATCH[3]}" \
"$name"
fi
done
Output:
2017-01-26 from "abcd.log.2017_01_26_23_30.0"
2017-01-26 from "abcd.log.2017_01_26_23_35.0"
2017-02-20 from "abcd.log.2017_02_20_23_10.0"
If you need the date string in a variable:
for name in *.log.*; do
if [[ "$name" =~ \.([0-9]{4})_([0-9]{2})_([0-9]{2}) ]]; then
datestring="$( printf '%s-%s-%s' "${BASH_REMATCH[1]}" "${BASH_REMATCH[2]}" "${BASH_REMATCH[3]}" )"
printf '%s from "%s"\n' "$datestring" "$name"
fi
done
In ksh93, replace BASH_REMATCH with .sh.match.
| extract date "2017-01-26"(in yyyy-mm-dd) from files name like "abcd.log.2017_01_26_23_30.0" |
1,574,835,605,000 |
I have accidentally renamed a large number of my files with an .mp4 extension. Luckily the original extension has been preserved within the filename (e.g. simon.says.nfo.mp4)
How would I now remove the mp4 part from just these files?
Due to the naming system I have, I cannot just do a find for *.*.mp4 which would be a lot easier with a find/replace combo.
|
"rename" is a utility that does exactly what you need:
rename 's/\.mp4$//' *.mp4
(The dot is escaped and the pattern anchored with $, so only the trailing .mp4 is removed.)
See man rename for other info.
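If the Perl rename isn't available (util-linux ships a different rename with its own syntax), a plain shell loop does the same job using the ${f%.mp4} suffix-stripping expansion; a sketch with throwaway files:

```shell
# Disposable directory with sample names (made up for the demo).
d=/tmp/rename_demo
rm -rf "$d"; mkdir -p "$d"; cd "$d"
touch simon.says.nfo.mp4 movie.avi.mp4

# Strip the trailing .mp4 from each matching file.
for f in *.mp4; do mv -- "$f" "${f%.mp4}"; done
ls   # movie.avi  simon.says.nfo
```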
| Change misplaced file extensions |
1,574,835,605,000 |
At a certain point, various commands started to give various error messages.
I caught a few of them, then rebooted the system, after which these commands were working again.
$ mkdir x
mkdir: cannot create directory `x':
$ file x.txt
bash: /usr/bin/file:
Input/output error
$ touch x.txt
Segmentation fault
There were other commands giving the same error messages.
ls and ps were working.
Some commands reported "read-only filesystem".
Now it's OK after reboot.
What could I try (to measure or diagnose) to learn more about what was happening, and whether it relates to hardware or security?
|
The next step would probably involve looking at your dmesg | tail -40 output. Then again, it would probably just answer with another input/output error. I guess you had to physically reboot this machine, since commands were failing as well?
Looks like your root filesystem is gone.
Could be a problem with a disk, some sata cable, or a faulty RAID array.
| What could be wrong when mkdir, file and touch command give error messages |
1,574,835,605,000 |
Which option should be used with the tree command-line tool to sort from biggest to smallest?
├── [4.0K] types2
│ └── [ 116] types2.go
├── [4.0K] types3
│ ├── [ 689] types3.go
│ └── [ 0] types3.go~
├── [4.0K] web
│ ├── [ 149] index.html
│ ├── [ 647] web.go
│ └── [ 0] web.go~
├── [4.0K] wordcount
│ ├── [ 996] wordcount.go
│ └── [ 773] wordcount.go~
└── [4.0K] zero
├── [ 97] zero.go
└── [5.8K] zero.o
|
Even if the tree tool doesn't support sorting by size directly, you can still do it using tree and sort.
You can use the following command to list all the files and their paths in the given folder and subfolders along with their sizes, and then use the sort tool to sort them by the second column of the tree output (the sizes; the first column is just the [ symbol).
We use grep here to filter just files with a given extension.
Here's the command:
tree -sifF /opt/aplicaciones/gio/ | grep -v '/$' | grep ".jar" | sort -k2 -rn
Here's a sample output with a somewhat large quantity of files so you can see how it behaves:
[ 89702805] /myapp/first_folder/artifact/this-is-a-file-number-1.jar
[ 89511250] /myapp/first_folder/artifact/this-is-a-file-number-2_22_11_2022.jar
[ 89508457] /myapp/first_folder/artifact/this-is-a-file-number-2.jar
[ 89487284] /myapp/first_folder/artifact/this-is-a-file-number-2_backup.jar
[ 73631126] /myapp/first_folder/artifact/this-is-a-file-number-3.jar
[ 73416714] /myapp/first_folder/artifact/this-is-a-file-number-4.jar
[ 72904056] /myapp/second_folder/artifact/this-is-a-file-number-5.jar
[ 72870839] /myapp/second_folder/artifact/this-is-a-file-number-6.jar
[ 72824807] /myapp/second_folder/artifact/this-is-a-file-number-7.jar
[ 72822778] /myapp/second_folder/artifact/this-is-a-file-number-8.jar
[ 72822392] /myapp/second_folder/artifact/this-is-a-file-number-9.jar
[ 72822125] /myapp/second_folder/artifact/this-is-a-file-number-10.jar
[ 72821288] /myapp/second_folder/artifact/this-is-a-file-number-11.jar
[ 72808348] /myapp/first_folder/artifact/this-is-a-file-number-12.jar
[ 72794504] /myapp/second_folder/artifact/this-is-a-file-number-13.jar
[ 70309496] /myapp/first_folder/artifact/this-is-a-file-number-14.jar
[ 70298847] /myapp/first_folder/artifact/this-is-a-file-number-15.jar
[ 70286111] /myapp/first_folder/artifact/this-is-a-file-number-16.jar
[ 70283872] /myapp/first_folder/artifact/this-is-a-file-number-17.jar
[ 70281102] /myapp/first_folder/artifact/this-is-a-file-number-18.jar
[ 70275702] /myapp/first_folder/artifact/this-is-a-file-number-19.jar
[ 70274483] /myapp/first_folder/artifact/this-is-a-file-number-20.jar
[ 70273588] /myapp/first_folder/artifact/this-is-a-file-number-21.jar
[ 70273058] /myapp/first_folder/artifact/this-is-a-file-number-22.jar
[ 70271031] /myapp/first_folder/artifact/this-is-a-file-number-23.jar
[ 70265460] /myapp/first_folder/artifact/this-is-a-file-number-24.jar
[ 70090818] /myapp/first_folder/artifact/this-is-a-file-number-25.jar
[ 69510384] /myapp/first_folder/artifact/this-is-a-file-number-26.jar
[ 68674140] /myapp/first_folder/artifact/this-is-a-file-number-27.jar
[ 68367619] /myapp/second_folder/artifact/this-is-a-file-number-28.jar
[ 65897101] /myapp/first_folder/artifact/this-is-a-file-number-29.jar
[ 65011678] /myapp/first_folder/artifact/this-is-a-file-number-30.jar
[ 65010373] /myapp/second_folder/artifact/this-is-a-file-number-31.jar
[ 51954261] /myapp/second_folder/artifact/this-is-a-file-number-32__test.jar
[ 48092911] /myapp/second_folder/artifact/this-is-a-file-number-32.jar
[ 43081254] /myapp/second_folder/artifact/this-is-a-file-number-33.jar
[ 23357588] /myapp/third_folder/artifact/this-is-a-file-number-34.jarA
[ 23357588] /myapp/third_folder/artifact/this-is-a-file-number-34.jar
Now, an explanation of the options used.
From the man page, about tree:
-s Print the size of each file in bytes along with the name.
-i Makes tree not print the indentation lines, useful when used in conjunction with the -f option.
-f Prints the full path prefix for each file.
-F Append a '/' for directories, a '=' for socket files, a '*' for executable files and a '|' for FIFO's, as per ls -F
About sort:
-k, --key=POS1[,POS2] Start a key at POS1 (origin 1), end it at POS2 (default: end of line); i.e., think of it as columns separated by whitespace.
-r --reverse Reverse the result of comparisons
-n --numeric-sort Compare according to string numerical value
| How to sort from smallest to biggest with `tree` command line tool? |
1,574,835,605,000 |
for i in *; do echo ${i#*.}; done | uniq -u
Why are there duplicates in the output?
|
From uniq --help:
Filter adjacent matching lines
...
Note: 'uniq' does not detect repeated lines unless they are adjacent.
You may want to sort the input first, or use 'sort -u' without 'uniq'.
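Applied to the question's loop, that means sorting first; a self-contained sketch with throwaway files (the directory name is made up):

```shell
# Scratch directory with a few duplicate extensions.
d=/tmp/ext_demo
rm -rf "$d"; mkdir -p "$d"; cd "$d"
touch a.txt b.txt c.sh d.sh e.log

# sort -u sorts (making duplicates adjacent) and deduplicates in one step.
for i in *; do echo "${i#*.}"; done | sort -u   # log, sh, txt
```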
| When printing unique extensions in the directory with the uniq command there are duplicates |
1,574,835,605,000 |
I need to run a program xyz. It finishes execution in a few seconds.
It has some signal handling I need to test. From the shell or a bash script, how do I execute the program and, while it is executing, send it a signal like kill -14?
Currently, the command I am trying is:
/usr/local/xyz input > some_file 2>&1 & PIDOS=$! && kill -14 $PIDOS
It does not seem to trigger the signal handling.
|
That command looks ok. Though when I tried that, it appears the command was too fast. It's as if my test script didn't have time to install the signal handler before it got shot.
A test script:
$ echo '$SIG{USR1} = sub { print "GOT SIGNAL\n" }; sleep 100' > sigtest.pl
Shoot it immediately: (the sleep is there so the next prompt isn't immediately printed)
$ perl sigtest.pl & kill -USR1 $! ; sleep 1
[1] 8825
[1]+ User defined signal 1 perl sigtest.pl
It didn't print anything, but died to the signal. Let's give it some time:
perl sigtest.pl & ( sleep 0.5 ; kill -USR1 $! )
[1] 8827
GOT SIGNAL
[1]+ Done perl sigtest.pl
Now it worked, the signal handler fired. (the signal interrupted the sleep, so the script exited anyway).
| How do I run a process and send it a SIGNAL while its running? |
1,574,835,605,000 |
There are about 10000 files under a given directory. Is there any command that can help me randomly pick 1000 files from it and put them into another directory? The picked files should be removed from the original directory.
|
If you have shuf, it will easily let you do what you want, provided that no filename has a newline character in it, and there are no subdirectories:
mapfile -t sample < <(shuf -n 1000 -e given_directory/*)
mv "${sample[@]}" other_directory
If there are subdirectories, you could get the list of files by using find instead of the glob. Or you could oversample and filter. find will also help you deal with files which could have newlines in their names (which is really a bad idea, but that doesn't necessarily mean that you can ignore the possibility), since you can use the -print0 action combined with the -z flag to shuf. For example,
find given_directory -type f -print0 |
shuf -z -n 1000 |
xargs -0 mv -t other_directory
mv -t is a (very useful) Gnu extension which lets you provide the destination directory at the beginning of the command line, which works nicely with the xargs/find -exec model of putting multiple arguments at the end of the command line.
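A quick self-contained run of the find/shuf/xargs pipeline against throwaway directories (names are made up for the demo), moving 5 of 20 files:

```shell
# Throwaway source/destination directories.
src=/tmp/shuf_src; dst=/tmp/shuf_dst
rm -rf "$src" "$dst"; mkdir -p "$src" "$dst"
for i in $(seq 1 20); do touch "$src/file$i"; done

# NUL-delimited pipeline: safe even for awkward filenames.
find "$src" -type f -print0 | shuf -z -n 5 | xargs -0 mv -t "$dst"

ls "$dst" | wc -l   # 5
ls "$src" | wc -l   # 15
```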
| Randomly select a proportion of files from a given directory [duplicate] |
1,574,835,605,000 |
I've read a bunch of text files to extract some patterns. I need the line number too, but the line number must be removed before the final grep (it can be saved for further processing, maybe using variables).
I'll explain my procedure splitting the (long oneliner) command for better understanding:
read the file with cat, do some cleaning (remove strange characters & line feeds using sed and tr) and such. Here is just one example of many piped cleaning tasks:
cat file | sed 's/,/ /g' | sed '/^$/d'
add line number and tab with nl command & more processing and cleaning
nl -nrz -w4 -s$'\t' | tr '\n\r' ' '
extract the final desired pattern to CSV file
grep -Eio '.{0,0}account number.{0,15}' >> account_list.csv
The issue is that I need the line number from step 2 in the very same CSV (as another column; order doesn't matter) using the SAME ONE-LINE COMMAND (no while or loop allowed), but I've had no success so far.
[EDITED for better understanding]
Take into account that the line number I need is the original one, prior to file cleaning. This cleaning process sometimes deletes some paragraphs. Imagine a file with a thousand lines; after processing I get one hundred, so the new line numbering is wrong.
[end edit]
Sample input after some processing:
0123 the first account number 2345356432 must be used
0345 take it just for billing purposes, not any other.
0657 Meanwhile the second account number 8623525534
0987 user is blocked until the issue is solved with
The desired output would be:
2345356432; 0123
8623525534; 0657
or
0123; 2345356432
0657; 8623525534
Any hint would be much appreciated.
|
Using awk on the original input file, prior to cleaning:
awk '/account number [[:digit:]]+/ { match($0, "account number ([[:digit:]]+)", a); print NR ";" substr($0, a[1, "start"], a[1, "length"]); }' input
This extracts the account number and prints the line number at the start of the line:
1;2345356432
3;8623525534
If you want to extract the pre-processed number instead from the cleaned-up file:
awk '/account number [[:digit:]]+/ { match($0, "account number ([[:digit:]]+)", a); print $1 ";" substr($0, a[1, "start"], a[1, "length"]); }' input
Splitting this up a little:
/account number [[:digit:]]+/ ensures we only process lines matching "account number" followed by a number;
match($0, "account number ([[:digit:]]+)", a) looks for the pattern again and stores the positions and lengths of the matched groups (([[:digit:]]+), the number) in array a;
print NR ";" substr($0, a[1, "start"], a[1, "length"]) prints the record number (i.e. the line number; use FNR if you want to process multiple files), followed by ;, followed by the substring corresponding to the first group: a[1, "start"] gives its starting index, a[1, "length"] its length (this was filled in by match).
All this assumes there's at most one account number per line. Note also that the three-argument form of match() is a GNU awk (gawk) extension.
The second variant prints $1 instead of NR, i.e. the first field in the file, which is the pre-processed line number.
| extract line number and pattern in file at the same time |
1,459,582,143,000 |
I'm trying to zip the contents of a site using tar and exclude a bunch of folders and error_log files, but it doesn't seem to be working: while it's processing, I still see many of the excluded files being processed.
I am in the root of the site and I am trying to tar everything inside of public_html.
Here is the command I am running:
tar -zcvf files-live.tgz public_html/ --exclude='public_html/cache' --exclude='public_html/uplimg' --exclude='public_html/images/listings' exclude='public_html/sets' exclude='public_html/manage' exclude='error_log'
Side Note: error_log exists in several directories but I don't want any of them included.
What am I doing wrong here?
|
You used exclude= instead of --exclude=.
tar -zcvf files-live.tgz public_html/ --exclude='public_html/cache' --exclude='public_html/uplimg' --exclude='public_html/images/listings' --exclude='public_html/sets' --exclude='public_html/manage' --exclude='error_log'
| Excluding files whilst using tar to zip site contents not working [closed] |
1,459,582,143,000 |
I used the command ls -l | grep -v '^d' | sort -g -r -k 5 | head -2 to write into a text file.
I've researched a bit online and I think these commands mean this:
grep: a search that searches for a specific pattern in a string
-v: an option for grep that tell it to find and display all lines that do not match
'': not sure what the single quotes are for
^d: the caret signifies the beginning of a line and the d is the pattern that grep is searching for. This works when using ls -l.
sort sorts the contents of a text file numerically
-g compares according to numerical value not sure what this means exactly
-rreverses the results of the comparison. This would make more sense if I knew that the point of the comparing was for.
-k 5 starts a key at POS1 - not sure what this means, does it mean that something will happen at the 5th character in the first line of my text file?
head -2 displays the first 2 lines of my text file.
Could someone help clarify the parts I am unsure about?
|
You've got most of the technical details, but I think you're missing the semantics of the whole thing.
The single quotes in '^d' keep whatever shell runs that pipeline from treating characters in the regular expression ('^d') as "special". For example, $ is the regular expression meaning "end of line". Shells also use $ to mark the next token as a shell variable, whose value is to be interpolated into the string. The ^ is an ancient synonym for |, which pipes stdout of the left-hand side into stdin of the right-hand side. The single quotes keep ^ from being treated specially.
The semantics are to not pass any lines of ls -l output that are marked as directories. ls -l puts a 'd' as the first character of any line pertaining to a directory.
The -g option to sort causes the command to look for representations of numbers in the key field, and sort according to numeric value, not as strings. The default sort order is smallest to biggest, so the -r option causes it to sort biggest to smallest. The -k 5 says to use field #5 as the key field. sort uses whitespace characters (blanks, tabs, etc.) to divide a line of text into "fields" by default. On my Arch Linux box, the 5th whitespace-separated field of ls -l is the size of the file in bytes.
Semantics here are sorting files by size in bytes, biggest first.
You are sorting files (not directories) by size in bytes, and putting the information about the largest two files into the text file.
Editorially, it's generally considered bad form to parse the output of ls; see Why you shouldn’t parse the output of ls(1). Historically ls had different formats on different machines, and a script that assumed, say file size is field 5, would give hard-to-understand problems on another machine. So watch out for that.
| Linux command clarification |
1,459,582,143,000 |
Sometimes the same command is provided both as a shell builtin and by another file/package. Example:
$ type -a printf kill
printf is a shell builtin
printf is /usr/bin/printf
kill is a shell builtin
kill is /bin/kill
And while executing such a command, I sometimes face difficulties, or the command doesn't work as expected.
Example from man kill:
-L, --table
List signal names in a nice table.
And if I try it from the terminal, it doesn't work:
$ kill -L
bash: kill: L: invalid signal specification
This is because kill executes as the shell builtin, which has no such feature/option.
So, how do I execute the command properly to avoid interference from the shell?
Note: Here kill is only used to give an example. You may have difficulties with other command(s) that have interference from shell
|
You can use env your-command to avoid interference from the shell.
Example:
$ env kill -L
1 HUP 2 INT 3 QUIT 4 ILL 5 TRAP 6 ABRT 7 BUS
8 FPE 9 KILL 10 USR1 11 SEGV 12 USR2 13 PIPE 14 ALRM
15 TERM 16 STKFLT 17 CHLD 18 CONT 19 STOP 20 TSTP 21 TTIN
22 TTOU 23 URG 24 XCPU 25 XFSZ 26 VTALRM 27 PROF 28 WINCH
29 POLL 30 PWR 31 SYS
Another way is to use the full path of the command, as follows:
$ which kill
/bin/kill
$ /bin/kill -L
1 HUP 2 INT 3 QUIT 4 ILL 5 TRAP 6 ABRT 7 BUS
8 FPE 9 KILL 10 USR1 11 SEGV 12 USR2 13 PIPE 14 ALRM
15 TERM 16 STKFLT 17 CHLD 18 CONT 19 STOP 20 TSTP 21 TTIN
22 TTOU 23 URG 24 XCPU 25 XFSZ 26 VTALRM 27 PROF 28 WINCH
29 POLL 30 PWR 31 SYS
So, by using env or by specifying the path/location of the command, you can avoid interference from the shell.
| How do I execute command to avoid interference from the shell [duplicate] |
1,459,582,143,000 |
I want to get an argument of a command from a variable that holds the index of the argument I want. Something like this:
# command in terminal, `foo -r -f value_wanted`
index="3"
var=$"$index"
echo $var ## expected output `value_wanted`
I know I can just reference it as $3, but the index I have is in the variable.
|
You can achieve that by using the following notation:
echo "${!index}"
If you want to process option-style arguments, though, I suggest using getopt (not getopts).
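A minimal sketch of the indirect expansion (bash-specific; the sample arguments mirror the question, and _ is just a placeholder for $0):

```shell
# ${!index} expands to the value of the parameter whose number is
# stored in index; here, positional parameter 3.
bash -c 'index=3; echo "${!index}"' _ -r -f value_wanted   # prints value_wanted
```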
| get command line agrument by variable in shell script |
1,459,582,143,000 |
I am having a really hard time understanding how umask works.
How is something that is 666 masked with 001 still 666 ?
|
A umask value of 001 prevents newly created files from having the other-execute permission bit. (It doesn't prevent the creation of the file; it only clears that executable bit.)
A 666 mode in an open call only requests read and write permissions for user, group, and other. Since no execute bit is requested, the umask has nothing to clear: 666 & ~001 = 666.
However umask does not affect chmod.
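Doing the mask arithmetic explicitly makes this concrete; a quick sketch using shell arithmetic (octal literals, octal output via printf %o):

```shell
# Requested mode 666 masked with umask 001: no bits in common, nothing changes.
printf '%o\n' "$(( 0666 & ~0001 ))"   # 666

# Compare with the common umask 022, which clears group/other write.
printf '%o\n' "$(( 0666 & ~0022 ))"   # 644
```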
| umask permissions that result in no change |
1,459,582,143,000 |
As I am more of a web guy, I need a little help with this one, so I will explain what I am trying to do before I ask.
I am creating batch script that will:
GET request from an external server (json file), receive data, save locally as .json
Use JQ to navigate the json for result[0].title
Use the 'title' as a parameter for a curl request
Once I have the file locally, I would use JQ to find the data in the object:
cat file.json | jq '.results[0].title' > $1 &&
curl -i -H "Accept: application/html" -H "Content-Type: application/html" -X GET http://example.com/test/$1 > test.txt
Is it possible to set local variables on the command line ('$1') for temporary use in a piped command?
Am I waaay off here?
|
While $1 typically has a special meaning (the first parameter passed to a script/function/etc.) you can indeed save the output of commands in variables.
title=$(jq -r '.results[0].title' file.json)
curl -i -H "Accept: application/html" -H "Content-Type: application/html" -X GET http://example.com/test/"$title" > test.txt
The first part runs the command jq -r '.results[0].title' file.json and saves the output (whatever shows up on stdout) into the variable title. Then we run the curl command and expand the title variable as part of the URL.
| Create command line variables with PATH |
1,459,582,143,000 |
I am trying to achieve something, and in an experiment I came across the following case. Could somebody explain to me what's happening?
echo " Agent process not running on www.raja.server.local.com" | grep -oh "[*?<com]"
o
c
o
o
o
c
c
o
m
Thank you.
|
From the grep manpage:
-o, --only-matching
Print only the matched (non-empty) parts of a matching line,
with each such part on a separate output line.
In this case, the -h is a no-op.
grep is looking for each of the characters: *, ?, <, c, o, and m in the input string that you piped to it, and printing each one on a new line as it finds it.
$ echo Zcom\?\<\[\*Z
Zcom?<[*Z
$ echo Zcom\?\<\[\*Z | grep -oh "[*?<com]"
c
o
m
?
<
*
$
Also, if you use grep --color -h "[*?<com]" you'll see the same letters in the same order highlighted inside the echoed search string.
| what does grep -oh "[*?<com]" do? |
1,459,582,143,000 |
I would like to compress four ISO files using 7z into a new archive called ISOs.7z from the command-line on my Sabayon machine. These are the commands I have tried so far (I know none of these specify the output 7z archive's name, just starting simple with these commands so that I can get the ropes of compressing with 7z):
7z a chakra-2015.11-fermi-x86_64.iso openSUSE-Leap-42.1-DVD-x86_64.iso PCBSD10.2-RELEASE-08-19-2015-x64-DVD-USB.iso Sabayon_Linux_15.11_amd64_MATE.iso
and
7za a chakra-2015.11-fermi-x86_64.iso openSUSE-Leap-42.1-DVD-x86_64.iso PCBSD10.2-RELEASE-08-19-2015-x64-DVD-USB.iso Sabayon_Linux_15.11_amd64_MATE.iso
neither worked. The latter of these gave:
Open archive: chakra-2015.11-fermi-x86_64.iso
ERRORS:
There are data after the end of archive
--
Path = chakra-2015.11-fermi-x86_64.iso
Type = xz
ERRORS:
There are data after the end of archive
Offset = 205312
Physical Size = 10840636
Tail Size = 2144760772
Method = LZMA2:23
Streams = 1
Blocks = 1
Error:
There is some data block after the end of the archive
E_NOTIMPL
System ERROR:
E_NOTIMPL
|
The correct syntax is:
7z a isos.7z *.iso
or
7z a isos.7z chakra-2015.11-fermi-x86_64.iso openSUSE-Leap-42.1-DVD-x86_64.iso PCBSD10.2-RELEASE-08-19-2015-x64-DVD-USB.iso Sabayon_Linux_15.11_amd64_MATE.iso
| Compressing files using p7zip from the command-line |
1,459,582,143,000 |
When I tried to change from superuser back to a common user and therefore typed "exit",
I got an error message that a job was stopped, and I could not log out as root.
So I listed the jobs:
#jobs
[1]+ yes | apt-get install build-essential
I moved the job from the shell's background to the foreground:
fg 1
Now I thought I could simply press Ctrl+C to end it,
but each time I pressed Ctrl+C, nothing happened except for "^C" being displayed.
When I pressed Ctrl+Z I came back to the prompt, but the job was still there, so I was still caught at the # prompt.
|
^C generates an interrupt signal (SIGINT). It's allowable for programs to mask this signal, and either ignore it completely or 'react' to it in a non-fatal way.
For example - you might want a program to stop whatever it's doing right now, but without terminating completely.
So ^C doesn't always work.
^Z to stop the job, and then kill %1 (where %1 is the number on the jobs list) will send it a terminate signal (SIGTERM) instead, which should do the trick.
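A small runnable demonstration of the difference (here a throwaway sh process explicitly ignores SIGINT, the signal Ctrl+C sends, while SIGTERM still terminates it; in a script we signal by PID rather than by %1 job number, since job control is an interactive-shell feature):

```shell
# a background process that ignores SIGINT (what Ctrl+C sends)
sh -c 'trap "" INT; sleep 10' &
pid=$!
sleep 1
kill -INT "$pid"                 # ignored; the process keeps running
sleep 1
kill -0 "$pid" && echo 'still alive after SIGINT'
kill -TERM "$pid"                # what `kill %1` sends by default
wait "$pid" 2>/dev/null || true  # reap it; exit status reflects SIGTERM
```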
| unable to remove stopped job with ctrl+c |
1,459,582,143,000 |
p_ash-3.2$ cp some_file1234.html /some/loc/ation/ofthe/some_file1234.html.html_Backu
I have been seeing this intermittent issue with the command line: while typing longer lines as part of any command, after typing some characters (say about 70), the cursor starts over again from the extreme left, even before the -bash-3.X$ prompt. Consequently the prompt and the first few characters of the command itself get overwritten. Why would this happen, and is there any workaround?
|
Could be a carriage return (\r) in the destination file name.
EDIT: Any chance you previously had your terminal settings messed up? This can happen if you by accident cat a binary file on your terminal. One way to fix that is
stty sane
Another reason might be that a SIGWINCH got lost when you resized the terminal window. In that case, another resize usually fixes the problem.
| Why would the prompt get overwritten? |
1,459,582,143,000 |
I am trying to execute ssh-copy-id on a port different from 22 (the default). I researched and found the command below:
$ssh-copy-id -i ~/.ssh/id_rsa.pub "[email protected] -p 22001"
but, when execute the command, i got this error:
/usr/bin/ssh-copy-id: ERROR: ssh: connect to host 192.168.0.1 -p 22001 port 22: Connection refused
It seems that the command doesn't understand the port.
|
$ ssh-copy-id
Usage: /usr/bin/ssh-copy-id [-h|-?|-n] [-i [identity_file]] [-p port] [[-o <ssh -o options>] ...] [user@]hostname
So in your case simply use:
$ ssh-copy-id -i ~/.ssh/id_rsa.pub -p 22001 [email protected]
Because of your usage of quotes, the -p 22001 part became part of the hostname which explains the error you got.
| ssh-copy-id different port |
1,459,582,143,000 |
I came across the command grep -E '(^| )ABC( |$)' file1
I am confused about how to interpret the regular expression therein, because I know ^ and $ as beginning- and end-of-line anchors, but here they are not at the very beginning or end because of the brackets. ^ is sometimes used to negate a set of characters, but I have seen this usage only in expressions like [^....].
Could someone explain step by step the meaning of this regex ?
|
It means that grep should search for the string ABC only at the beginning of the line OR after a space; moreover, the string has to be followed by another space OR the end of the line.
In other words, someone wanted to search for strings which form whole words. However, this regexp has many issues: there could be many other characters before and after a word (at least in natural language), e.g. (, ), ., ;, :, ,, etc.
Thus, it is better to use the -w option of grep, or alternatively play with word boundaries: \b or \</\>.
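A quick comparison on sample input shows the issue with punctuation and how -w handles it:

```shell
# the hand-rolled regex misses a word followed by punctuation
printf '%s\n' 'ABC here' 'xABC' 'see ABC.' |
  grep -cE '(^| )ABC( |$)'    # prints: 1  (misses "see ABC.")

# word matching treats the "." as a boundary
printf '%s\n' 'ABC here' 'xABC' 'see ABC.' |
  grep -cw ABC                # prints: 2
```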
| What is the meaning of (^| )ABC( |$) as an extended REGEX? |
1,459,582,143,000 |
I made a bash function for myself so I can throw some words into a tree_hole file; it looks like this in my .bashrc:
function th { echo -e "\n$@" >> /Users/zen1/zen/pythonstudy/tree_hole; }
export -f th
Thus I can do th Tom, I like your new laptop to append the whole "Tom ..." sentence to the end of the tree_hole file.
But there is a problem. This function doesn't seem to pass all parts of the arguments through as a raw string. So th "hi, Tom" will store hi, Tom instead of the intended "hi, Tom"; the " is missing. Besides, I cannot type a single ' or " or `, which gets misinterpreted by bash and requires more input.
So, is there a way that I can have everything typed after a command treated as a raw string? Or how should I improve my function so that the special ", ', ` issue is solved?
|
What you want isn't possible. You don't like the shell behaviour but this part of the shell behaviour cannot be changed.
I guess this would work better for you:
function th { { echo; cat; } >> /Users/zen1/zen/pythonstudy/tree_hole; }
Your function would be called without parameters. cat would read from standard input and append to the file. You could type everything the terminal allows you (i.e. no problems with ", ', newline and so on). You would end the input with Ctrl-D.
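Because the function reads standard input, it also composes with pipes, which sidesteps the quoting problem entirely (the /tmp path below is illustrative; the original used a path under $HOME):

```shell
th() { { echo; cat; } >> /tmp/tree_hole; }

# quotes pass through untouched: the text never undergoes shell parsing
printf '%s\n' 'hi, "Tom"' | th
tail -n 1 /tmp/tree_hole    # prints: hi, "Tom"
```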
| Can I treat everything I type after a command as a raw string? |
1,459,582,143,000 |
On Linux, pressing F9 returns a correct
~
On Unix (Solaris or HP-UX) it returns
0~
How can I set the correct tilde on those systems?
|
Solution found.
First you must press
Ctrl+V followed by the key,
in my case F9.
So I did
Ctrl+V F9
and it returned this:
^[[20~
Now I know the key sequence is [20~, and I bind it to tilde:
bind '"\e[20~":"~"'
I tested it by pressing F9, and it returns a tilde.
I put this in $HOME/.profile to make the change permanent.
| Tilde on Unix: HP-UX and/or Solaris |
1,459,582,143,000 |
I have installed Scientific Linux on VirtualBox and I am trying to follow some exercises on a book I picked up. The first thing I need to do is:
mount /dev/dvd /media
But when I do that, I get
mount: you must specify the filesystem type
What I would like to do is mount the virtual DVD where I see that the DVD in the Options of VirtualBox has all the necessary files I need, but it won't let me mount. Can anyone tell me if something has been mis-configured or where I can check to see if it has been?
|
I realized that the reason /dev/dvd was not being read was a setting that I had turned off in VirtualBox. I experimented a little and was able to mount the drive after that point.
The setting I had to turn on was the .iso file found under Devices, DVD/CD in VirtualBox. I had turned it off earlier because it kept taking me to the installation process.
| Mounting /dev/dvd /media on VirtualBox |
1,459,582,143,000 |
I'd like to install a program from sourceforge sources. Is there a way to download the latest stable source from SourceForge? an example for ntopng would be very welcome.
|
I'm pretty sure you can use wget with SourceForge; always use the URL for the latest source:
http://sourceforge.net/projects/$$PROJECT_URL$$/files/latest/download?source=files
Where $$PROJECT_URL$$ needs to be replaced by the part of the URL corresponding to the project.
For ntopng ==> ntop, which gives you:
wget http://sourceforge.net/projects/ntop/files/latest/download?source=files
| automatically download latest stable tarball from sourceforge from command line? |
1,459,582,143,000 |
I am using OS X 10.8.5. I am trying to figure out if the terminal is running bash. When I type in the following it says "getent" command not found -- but the error message seems to be coming from bash.
Me$ getent passwd $(whoami) | awk -F: '{print $NF}'
-bash: getent: command not found
Am I running bash?
|
Your shell is bash and you have it trying to run getent. The reason it puts -bash: before getent: command not found is because otherwise it would look like getent was telling you it couldn't find a command.
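Two quick checks you can run to confirm which shell you are in (both work on OS X and Linux):

```shell
# the name of the process running this shell ($$ is the shell's PID)
ps -p "$$" -o comm=

# $0 is usually the shell's name or path; a leading "-" (e.g. "-bash")
# marks a login shell, which matches the "-bash:" error prefix you saw
echo "$0"
```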
| Am I running Bash? |
1,459,582,143,000 |
How to create a GUI in shell script without any third party libs (like setup command)
|
You can use whiptail or dialog
Have a look at these threads:
Whiptail or dialog
Bash Shell Scripting/Whiptail
| How to create a GUI in shell script without any third party libs just like setup command [closed] |
1,459,582,143,000 |
Is there any method to interact with a program directly from the commmand line?
For example, I've a program, data.o, which produces a file mydata.out and then I want to plot it with gnuplot.
I can open the plotting program with gnuplot &, and then I would like the shell to somehow write plot mydata.out into this job/background shell I just opened. (gnuplot is just an example; the question would be similar for executing macros or hotkeys in any program I can open from the command line, without leaving it.)
|
Some programs, like gnuplot, accept a command file; it may be easier to generate one and then pass it as an argument.
Some programs don't depend on interaction; for them, piping to stdin may work:
(
printf "my command\n"
printf "my other command line\n"
) | theProgram and its args
Others depend on interaction and mandate that their standard input is a terminal; for those, you have to use expect or an equivalent.
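For the second case, a runnable sketch with sh standing in for gnuplot (any program that reads commands from its standard input works the same way):

```shell
# feed a sequence of commands to a program's standard input;
# sh here is just a stand-in for gnuplot
{
  printf 'echo plotting mydata.out\n'
  printf 'echo done\n'
} | sh
# prints:
#   plotting mydata.out
#   done
```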
| How to interact with program from command line |
1,459,582,143,000 |
I have a file that contains a list of search patterns (searchPattern.txt). Its contents are similar to those below, where there are 3000+ of them.
123456
234567
345678
.
.
.
What I wanted to do is to use grep and search directories using the patterns listed in the file, if they exist. It would be similar to this kind of command, but instead of one search string there are many, and they are listed in a file.
grep searchPattern.txt diagnostics*
Although the command above doesn't work, it's just an idea of what I want to happen.
Can this be done with grep? If it can't be done can someone suggest a better way to do this?
|
Try this one:
grep -r -f /path/to/pattern/file diagnostics*
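Since the patterns in the question are literal strings rather than regular expressions, adding -F avoids surprises (and is faster with thousands of patterns). A self-contained sketch with illustrative paths:

```shell
# build a pattern file and a directory to search
printf '%s\n' 123456 234567 > searchPattern.txt
mkdir -p diagnostics1
echo 'session id 234567 opened' > diagnostics1/app.log

# -F: fixed strings, -f: patterns from file, -r: recurse into directories
grep -rFf searchPattern.txt diagnostics*
# prints: diagnostics1/app.log:session id 234567 opened
```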
| Grep to search directories for patterns inside a text file |
1,459,582,143,000 |
For some reason, often when I write scripts nowadays they do not work, or work only in part, and then I try running them with . or source and they work perfectly. I'm unsure what is causing this, as the scripts differ in a lot of ways; it's hard to isolate what must be sourced in order for the script as a whole to work. Also, I've noticed that this is almost always the case when I move things from .bashrc aliases and functions into scripts.
But to my actual question, in the above situation, what is the optimal way to "swallow" the source dot, so you are still able to use the scripts as one-word commands, not having to hit the dot every time?
|
If you want a command called script that actually sources the script file instead of running it as a separate process, then make a function:
script () { . /path/to/script; }
To make that function permanent, add it to the relevant rc file for your shell (e.g. ~/.bashrc for bash).
| source script as command [closed] |
1,459,582,143,000 |
Possible Duplicate:
What is the easiest way to execute text from tail at the command line?
I was guessing the output of a command that fails to execute should be sent to stdout and hence can be piped. It seems I am wrong, so please correct me.
For example,
$easy_install
The program 'easy_install' is currently not installed. You can install it by typing:
sudo apt-get install python-setuptools
So if I want to install with the command suggested, I tried
$easy_install | grep sudo | bash
i.e. naively trying to pick out the last line and send it to the shell as a command. I even tried
$easy_install | tail -1 | bash
but get the same blurb without it doing anything. What am I doing wrong?
|
The easiest way I can think of, since the line in question is a command, is:
`$easy_install 2>&1 | grep sudo`
The backticks or $(…) take the output of a command pipe and execute it as if you typed it, returning the output.
Please note that this command won't work if you're missing sudo and trying to install it. But since this is obviously Ubuntu, sudo is usually available. To avoid this, you might want to try your second choice:
`$easy_install 2>&1 | tail -n 1`
Try and wean yourself off using tail -1; it's being replaced by the standard form tail -n 1. I find this hard myself, but I don't like deprecation warnings. :)
Warning: if $easy_install exists and you don't get this type of output, either of these commands are a massive security risk. You can end up executing arbitrary things. You can protect yourself by being more extravagant:
`$easy_install 2>&1 >/dev/null | grep '^sudo apt-get install'`
This discards stdout, and will only run anything starting with sudo apt-get install which limits things nicely, but is considerably more annoying than just typing sudo apt-get install $package yourself.
| How to act on output from a failed command [duplicate] |
1,459,582,143,000 |
This is the last line in my .bashrc (lines breaks inserted for readability):
STARTTIME=`date +%F-%Hh-%Mm-%Ss-%N`; \
script -q -t 2> /home/USER/logs/$STARTTIME-timing.txt \
-c 'bash --rcfile /home/USER/.bashrc-cp' \
-f /home/USER/logs/$STARTTIME-log.txt; \
gzip -9 /home/USER/logs/$STARTTIME-timing.txt /home/USER/logs/$STARTTIME-log.txt; \
exit 0
There is a /home/USER/.bashrc-cp without this mentioned last line (but it's an exact copy of my .bashrc).
This terminal logging solution works great. There are only two problems:
Q1: If I exit the gnome-terminal with Alt+F4 then the logs aren't gzipped. Why? How can I gzip them in that case?
Q2: I don't want to use a .bashrc-cp file. Are there any solutions for it?
|
Q1
When you ALT+F4 your terminal, it sends a SIGHUP to the shell. The shell then exits and sends a SIGHUP to everything running under that shell. Because the shell exits, it stops processing all commands, so everything after executing script isn't run.
The way to do this is to feed directly into gzip.
STARTTIME=`date +%F-%Hh-%Mm-%Ss-%N`
script -q -t -c 'bash --rcfile /home/USER/.bashrc-cp' \
-f >(nohup gzip -9 > /home/USER/logs/$STARTTIME-log.txt.gz) \
2> >(nohup gzip -9 > /home/USER/logs/$STARTTIME-timing.txt.gz)
What we're doing here:
In bash, >(cmd) is special syntax that runs cmd and replaces >(cmd) with the path to a named pipe connected to cmd's STDIN. The nohup is needed so that when the shell quits, gzip doesn't get a SIGHUP and die. Instead it will get EOF on its STDIN so it can flush its buffer and then quit.
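A minimal standalone demonstration of the >(cmd) mechanism (bash-specific; the temp file and the short sleep to let gzip finish writing are illustrative details, not part of the technique):

```shell
#!/bin/bash
tmp=$(mktemp)
echo hello > >(gzip -9 > "$tmp")   # >(...) becomes a pipe feeding gzip
sleep 1                            # give gzip a moment to flush and exit
gzip -dc "$tmp"                    # prints: hello
rm -f "$tmp"
```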
Q2
I'm not sure what .bashrc-cp is. If you're trying to avoid a recursive loop, you can export STARTTIME (or some other variable) before launching script and then check for its existence. If it exists, don't launch script.
| terminal logging doesn't complete when I close the terminal |
1,459,582,143,000 |
So I'm running a simple alias called vpn which runs a command and has an output, which I put into a .txt file.
If you're interested in what exactly, here is the alias:
alias vpn="docker exec -it qbittorrent curl -sS https://ipinfo.io/json > $HOME/vpnstatus.txt" and the output when I use cat vpnstatus.txt;
{
"ip": "123.123.123.123",
"city": "Star_City",
"region": "On_the_moon",
"country": "far_far_away",
"loc": "some_random_numbers",
"org": "some_random_ips_info",
"postal": "who_knows",
"timezone": "Darkside/Moon",
"readme": "https://ipinfo.io/missingauth"
}
So I thought, well, I'll use grep to get what I want (grep is basically one of the few commands I have run into and thought I had a clue how to use), but since the output has the same phrase "ip" recurring, I get two lines from grep "ip" vpnstatus.txt;
"ip": "123.123.123.123",
"readme": "https://ipinfo.io/missingauth"
I have tried altering the command, e.g. grep "ip: " vpnstatus.txt, but then there is no return value.
My question is how to use grep to obtain the info I'm interested in, which is the first line of the output. I plan to write another alias for the grep part of this, combine the two aliases with &&, and embed them in my .bashrc for easy visual confirmation upon each login.
|
The string you want to grep for is "ip": , but the string you are using is ip: . The double quotes are processed by the shell. To include the literal double quotes in the pattern, you need to quote them as '"ip": '.
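To see the quoting difference concretely (sample lines borrowed from the question's output):

```shell
sample='"ip": "123.123.123.123",
"readme": "https://ipinfo.io/missingauth"'

# the file contains "ip": (with quotes), so a pattern without them misses
echo "$sample" | grep 'ip: ' || echo 'no match'

# single-quoting lets the double quotes reach grep literally
echo "$sample" | grep '"ip": '   # prints: "ip": "123.123.123.123",
```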
However, the correct (as in "robust") way to parse a JSON document is with a JSON parser, such as jq:
$ curl -sS https://ipinfo.io/json | jq -r '.ip'
xxx.yyy.zzz.70
This returns the decoded ("raw", -r) string value of the top-level key ip from the document returned from curl.
Or, using Miller (mlr):
$ curl -sS https://ipinfo.io/json | mlr --j2c --headerless-csv-output cut -f ip
xxx.yyy.zzz.70
This also parses the JSON from the curl command, extracts (with the cut sub-command) the value of the ip key and outputs a header-less CSV document with the value.
| How to use grep string that has double quotes in it |
1,459,582,143,000 |
I want to check the free and reserved CPU on a Linux server.
I found this command mpstat and below is the output:
CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
all 7.13 0.00 2.46 1.73 0.00 0.08 0.00 0.00 0.00 88.59
If %idle is e.g. 1.0%, does it mean that I am using 99.0% of the available CPU?
Is there a better way to find this info?
|
If possible, install the htop tool. I love it and have used it many times. It is like top but better and prettier, with more options. To get the CPU average, do the following steps:
execute htop
press f2 to enter setup
press 3 x right arrow key
press 36 x down arrow key to CPU average
press 2 x Enter or optionally move it where you want it to be with the arrow keys
optionally press the Spacebar as many times as you want to change the display format
I would recommend you to explore the htop command, it has so many options :)
You can get the mpstat output explanation by running this:
man mpstat
You will get this output:
CPU Processor number. The keyword all indicates that statistics are calculated as averages among all processors.
%usr Show the percentage of CPU utilization that occurred while executing at the user level (application).
%nice Show the percentage of CPU utilization that occurred while executing at the user level with nice priority.
%sys Show the percentage of CPU utilization that occurred while executing at the system level (kernel). Note that this does not include time spent servicing hardware and software interrupts.
%iowait Show the percentage of time that the CPU or CPUs were idle during which the system had an outstanding disk I/O request.
%irq Show the percentage of time spent by the CPU or CPUs to service hardware interrupts.
%soft Show the percentage of time spent by the CPU or CPUs to service software interrupts.
%steal Show the percentage of time spent in involuntary wait by the virtual CPU or CPUs while the hypervisor was servicing another virtual processor.
%guest Show the percentage of time spent by the CPU or CPUs to run a virtual processor.
%gnice Show the percentage of time spent by the CPU or CPUs to run a niced guest.
%idle Show the percentage of time that the CPU or CPUs were idle and the system did not have an outstanding disk I/O request.
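As a rule of thumb, the CPU in use is 100 - %idle. A sketch with awk on the "all" line as pasted in the question (note that live mpstat output has a timestamp column first, so the field positions would shift; with live data you would pipe mpstat itself):

```shell
# %idle is the last field of the "all" line; usage = 100 - %idle
printf '%s\n' 'all 7.13 0.00 2.46 1.73 0.00 0.08 0.00 0.00 0.00 88.59' |
  awk '$1 == "all" { printf "%.2f%% in use\n", 100 - $NF }'
# prints: 11.41% in use
```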
| Check server's free and reserved CPU |
1,459,582,143,000 |
I'm getting this error when I start my terminal:
/home/USERNAME/.config/envman/PATH.env:2: permission denied: /home/USERNAME/.local/bin
this is my .zshrc file:
# Enable Powerlevel10k instant prompt. Should stay close to the top of ~/.zshrc.
# Initialization code that may require console input (password prompts, [y/n]
# confirmations, etc.) must go above this block; everything else may go below.
if [[ -r "${XDG_CACHE_HOME:-$HOME/.cache}/p10k-instant-prompt-${(%):-%n}.zsh" ]]; then
source "${XDG_CACHE_HOME:-$HOME/.cache}/p10k-instant-prompt-${(%):-%n}.zsh"
fi
# If you come from bash you might have to change your $PATH.
# export PATH=$HOME/bin:/usr/local/bin:$PATH
# Path to your oh-my-zsh installation.
export ZSH="$HOME/.oh-my-zsh"
# Set name of the theme to load --- if set to "random", it will
# load a random theme each time oh-my-zsh is loaded, in which case,
# to know which specific one was loaded, run: echo $RANDOM_THEME
# See https://github.com/ohmyzsh/ohmyzsh/wiki/Themes
ZSH_THEME="powerlevel10k/powerlevel10k"
# Set list of themes to pick from when loading at random
# Setting this variable when ZSH_THEME=random will cause zsh to load
# a theme from this variable instead of looking in $ZSH/themes/
# If set to an empty array, this variable will have no effect.
# ZSH_THEME_RANDOM_CANDIDATES=( "robbyrussell" "agnoster" )
# Uncomment the following line to use case-sensitive completion.
# CASE_SENSITIVE="true"
# Uncomment the following line to use hyphen-insensitive completion.
# Case-sensitive completion must be off. _ and - will be interchangeable.
# HYPHEN_INSENSITIVE="true"
# Uncomment one of the following lines to change the auto-update behavior
# zstyle ':omz:update' mode disabled # disable automatic updates
# zstyle ':omz:update' mode auto # update automatically without asking
# zstyle ':omz:update' mode reminder # just remind me to update when it's time
# Uncomment the following line to change how often to auto-update (in days).
# zstyle ':omz:update' frequency 13
# Uncomment the following line if pasting URLs and other text is messed up.
# DISABLE_MAGIC_FUNCTIONS="true"
# Uncomment the following line to disable colors in ls.
# DISABLE_LS_COLORS="true"
# Uncomment the following line to disable auto-setting terminal title.
# DISABLE_AUTO_TITLE="true"
# Uncomment the following line to enable command auto-correction.
# ENABLE_CORRECTION="true"
# Uncomment the following line to display red dots whilst waiting for completion.
# You can also set it to another string to have that shown instead of the default red dots.
# e.g. COMPLETION_WAITING_DOTS="%F{yellow}waiting...%f"
# Caution: this setting can cause issues with multiline prompts in zsh < 5.7.1 (see #5765)
# COMPLETION_WAITING_DOTS="true"
# Uncomment the following line if you want to disable marking untracked files
# under VCS as dirty. This makes repository status check for large repositories
# much, much faster.
# DISABLE_UNTRACKED_FILES_DIRTY="true"
# Uncomment the following line if you want to change the command execution time
# stamp shown in the history command output.
# You can set one of the optional three formats:
# "mm/dd/yyyy"|"dd.mm.yyyy"|"yyyy-mm-dd"
# or set a custom format using the strftime function format specifications,
# see 'man strftime' for details.
# HIST_STAMPS="mm/dd/yyyy"
# Would you like to use another custom folder than $ZSH/custom?
# ZSH_CUSTOM=/path/to/new-custom-folder
# Which plugins would you like to load?
# Standard plugins can be found in $ZSH/plugins/
# Custom plugins may be added to $ZSH_CUSTOM/plugins/
# Example format: plugins=(rails git textmate ruby lighthouse)
# Add wisely, as too many plugins slow down shell startup.
plugins=(git)
source $ZSH/oh-my-zsh.sh
# User configuration
# export MANPATH="/usr/local/man:$MANPATH"
# You may need to manually set your language environment
# export LANG=en_US.UTF-8
# Preferred editor for local and remote sessions
# if [[ -n $SSH_CONNECTION ]]; then
# export EDITOR='vim'
# else
# export EDITOR='mvim'
# fi
# Compilation flags
# export ARCHFLAGS="-arch x86_64"
# Set personal aliases, overriding those provided by oh-my-zsh libs,
# plugins, and themes. Aliases can be placed here, though oh-my-zsh
# users are encouraged to define aliases within the ZSH_CUSTOM folder.
# For a full list of active aliases, run `alias`.
#
# Example aliases
# alias zshconfig="mate ~/.zshrc"
# alias ohmyzsh="mate ~/.oh-my-zsh"
# To customize prompt, run `p10k configure` or edit ~/.p10k.zsh.
[[ ! -f ~/.p10k.zsh ]] || source ~/.p10k.zsh
# Generated for envman. Do not edit.
[ -s "$HOME/.config/envman/load.sh" ] && source "$HOME/.config/envman/load.sh"
source ~/.config/envman/load.sh
eval "$(zoxide init bash)"
This is the PATH.env file:
export PATH="$HOME/.local/bin:$PATH"
~/.local/bin
And these are the permissions for PATH.env:
-rw-rw-r-- 1 USERNAME USERNAME 51 FEB 21 17:02 /home/USERNAME/.config/envman/PATH.en
|
Here's the error message wrapped to two lines for clarity:
/home/USERNAME/.config/envman/PATH.env:2:
permission denied: /home/USERNAME/.local/bin
It means: "When trying to execute the 2nd line of PATH.env, I got the error: permission denied: /home/USERNAME/.local/bin."
And the second line of your PATH.env is indeed:
~/.local/bin
Since the previous line is export PATH="$HOME/.local/bin:$PATH", it seems that ~/.local/bin is probably a directory, and the error is caused by trying to execute it as a file. Removing that stray second line, so that PATH.env contains only the export, should make the error go away.
| Permission denied error when starting terminal |
1,459,582,143,000 |
Is there a command-line tool that enables giving a file more attributes than a name?
For example, we have a file called ubuntu.pdf and we give it tags like command-line, shell, nautilus, my_favourite_stuff, whatever_else and a file linux.pdf with tags shell, command-line, ln, best_book
and then I would like to have a command
$ tagged shell
and it will output
file: path_to/ubuntu.pdf, path_to/linux.pdf
related_tags: command-line(2), shell(1), nautilus(1),...
Note that related_tags are sorted, with a number telling how many files are tagged with each particular related tag. It works like a tag cloud on the command line.
Can we achieve this behavior in command-line (with hardlinks, symbolic-links, shell-scripting, whatever)?
|
Linux file systems do have capabilities for extended attributes, as Stephen shows in his answer.
But: many, probably most, graphical editors of various types will not truncate a file and then fill the same file with new content when you click on "save". They instead make a new file, and rename it to the original file name.
In that action, these extended file attributes get lost, because they stick to the actual last file, not the file name.
So, you really can't "safely" implement something like that on a filesystem basis alone. You need a database with a mapping of file names to tags.
So, while there are databases for e.g. photos or songs, I don't think there is anything for files in general.
Writing one would not be hard: sqlite is easy to use for writing simple databases like this from shell scripts, Python, C++ or Go, but it would be something nobody but you used.
You should probably stick to existing desktop search engines, such as the one GNOME brings in the shape of tracker-miner!
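As a proof of concept of how little machinery the idea needs, the tag store can even be a directory of plain files, one file per tag (all names below, TAGDIR, tag, and tagged, are hypothetical; this is a sketch, not a finished tool):

```shell
# one file per tag; each file lists the tagged paths
TAGDIR=${TAGDIR:-$HOME/.tags}

tag()    { mkdir -p "$TAGDIR"; printf '%s\n' "$2" >> "$TAGDIR/$1"; }
tagged() { sort -u "$TAGDIR/$1" 2>/dev/null; }

tag shell ubuntu.pdf
tag shell linux.pdf
tag ln    linux.pdf

tagged shell
# prints:
#   linux.pdf
#   ubuntu.pdf
```

The related-tags counts from the question could be layered on top with sort | uniq -c over the tag files.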
| Command-line tool enabling tagging and related tags functionality |
1,459,582,143,000 |
I'm trying to write a very simple bash function that uses a one-line node script to read the value of a key from a JSON file. Here's the current contents of utils.bash:
#!/bin/bash
project_name() {
echo $(node -e "require('../package.json').name;")
}
Yes, I know there are tools around that allow me to access JSON data more directly, but all of the dev team already has node, so it's an existing dependency rather than a new one.
I've made the file executable, but when I source scripts/utils.bash and then attempt to reference the function (from the command line) as so:
PROJECT_NAME=project_name
the output is simply the line I typed. When I try to execute simply project_name, I get a node error:
Error: Cannot find module '../package.json'
The utils.bash script is in the project's scripts folder, and the package.json file is in the parent folder. I'm executing this from the parent folder (although I'd prefer it not matter). I've tried importing './package.json', but that also gives me an error.
The end result I want is to be able to reference the value of the name key in project.json through multiple bash scripts.
How can I do this?
|
Well, the problem with PROJECT_NAME=project_name is it should be PROJECT_NAME="$(project_name)". The quotes in that case are not necessary as it's in an assignment to a scalar variable, but are a good habit to get into as in many other contexts (including your echo $(...)), omitting quotes implies splitting the result on $IFS characters and perform globbing on the resulting words which you do not want here.
The function has the same working directory as where it's run from, and you said you ran it in the parent of scripts. So the function is looking for ./../package.json not ./scripts/../package.json.
So you'll likely want to use ./package.json (not package.json as when the path has no /, node looks in a default search path instead of the current working directory) and figure out why it's giving you an error.
You'll also need node to output that value with console.log(), and remove that echo with $(...) which makes no sense (especially with that $(...) unquoted as seen above). So:
package_name() {
node -e "console.log(require('./package.json').name)"
}
PACKAGE_NAME="$(package_name)" || handle error from node if any
| How can I write a function that returns the results of a one-line `node` script? |
1,459,582,143,000 |
I use wget, rsync and curl to download files regularly (via https or ssh). One issue when I travel is that servers that are fast to access in one region become slow (or even inaccessible) in another. My current strategy is to use mirrors and switch to mirrors that are local to each region.
I was wondering if there are Unix commandline tools that allow downloading a file from multiple mirrors as the source. The mirrors do not have to be used in parallel. They are used as backups, and the fastest one is used if the connection is lost.
The command line should look like:
xxget url1 url2
-- Update --
As pointed out by the accepted answer, the aria2c documentation has an example that does exactly this:
Download from 2 sources:
$ aria2c http://a/f.iso ftp://b/f.iso
|
The aria2c client has that functionality out of the box. It's a rather slim client and widely available.
It does download in parallel by default, but that will only work to your advantage: if the fastest server is done delivering its part, it'll grab parts that a slower server hasn't delivered yet.
| How to download using command line tools with a secondary (backup) source? |
1,459,582,143,000 |
I'm using Windows 10 with WSL2. In all the terminals I have tried so far, I quickly encounter a broken command line. After pressing ENTER, the actual command appears differently, parts of the command prompt are deleted, editing does not occur at the cursor position, and random spaces are added, among other issues. Today has been particularly challenging, making it difficult for me to work effectively.
Maybe it happens after scrolling the history with the up/down arrow keys.
Restarting WSL, clearing terminal, restarting terminal don't help.
So after restarting I scroll the history and boom!
Any idea how to fix it? I read there's some RESET command for terminal. How to reset? Maybe that could help...
Here just one example: after executing git log... I return back to the command with the arrow up key but the end of the command prompt is partially destroyed as well as the beginning of the command.
[16:25:20] blade@DESKTOP-VQABTK7:/bytex/site$ echo $PS1; echo;
\[\e]0;\u@\h: \w\a\]${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$
|
This looks like a fairly standard colorized version of a Debian/Ubuntu/Mint default prompt.
\[ # begin non-advancing characters
\e]0; # escape sequence to update terminal title
\u@\h: \w # sets title to <user>@<host>: <working directory>
\a # end terminal title
\] # end non-advancing characters
${debian_chroot:+($debian_chroot)} # if within a Debian chroot, name of the chroot env
\[ # begin non-advancing characters
\033[01;32m # escape sequence to set colors
\] # end non-advancing characters
\u@\h # output <user>@<host> for the prompt
\[ # begin non-advancing characters
\033[00m # escape sequence to set colors
\] # end non-advancing characters
: # output :
\[ # begin non-advancing characters
\033[01;34m # escape sequence to set colors
\] # end non-advancing characters
\w # output current working directory
\[ # begin non-advancing characters
\033[00m # escape sequence to set colors
\] # end non-advancing characters
\$ # output $ if a regular user, or # if root
<space> # output a space after prompt
Note that the terminal title is also correctly counted as non-advancing characters.
But this would result in a prompt that looks like (simulated without colors):
blade@DESKTOP-VQABTK7:/bytex/site$
Where does the timestamp at the beginning come from?
Do you have $PROMPT_COMMAND set?
To produce your prompt cleanly, remove whatever hack currently causes the timestamp prefix to your prompt, and try this PS1 setting instead:
PS1='[\t] \[\e]0;\u@\h: \w\a\]${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ '
| Command line / prompt is broken, cannot edit (WSL2) |
1,459,582,143,000 |
I have a log file like bellow.
130023432 195047 /media/ismail/SSDWorking/book-collection/_Books/book 1.epub
130023433 195047 /media/ismail/SSDWorking/book-collection/_Books/book 2.epub
130023431 195047 /media/ismail/SSDWorking/book-collection/_Books/book 3.epub
I have a variable, var=130023432
I want to say: if the first word equals $var, then print the whole line except the first two words.
So, in this case the output will be:
/media/ismail/SSDWorking/book-collection/_Books/book 1.epub
What I have tried so far is grep -oP "(?<=$var \d+ ).*$'" but it gives error grep: lookbehind assertion is not fixed length
How can I achieve that?
|
With this short awk program (sub is sed-like: it substitutes a pattern or regex):
var=130023432; awk -v var="$var" '$1==var{sub($1" "$2" ", ""); print}' file
or simply:
awk -v var="130023432" '$1==var{sub($1" "$2" ", ""); print}' file
/media/ismail/SSDWorking/book-collection/_Books/book 1.epub
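For completeness, the lookbehind error from the original grep -P attempt can also be sidestepped with \K, which resets the start of the reported match and is not subject to the fixed-length restriction (this assumes a grep built with PCRE support):

```shell
# recreate the sample log file from the question
printf '%s\n' \
  '130023432 195047 /media/ismail/SSDWorking/book-collection/_Books/book 1.epub' \
  '130023433 195047 /media/ismail/SSDWorking/book-collection/_Books/book 2.epub' > file

var=130023432
grep -oP "^$var \d+ \K.*" file
# prints: /media/ismail/SSDWorking/book-collection/_Books/book 1.epub
```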
| If first word is $var then print the whole line except first two words |
1,459,582,143,000 |
Here is an example of what I am trying to do:
I have a folder (called 'dir') that contains the following:
dir
|_sub1.rar
|_sub2.rar
|_sub3.rar
I will cd ~/ to dir and want to run a command that will extract all .rar files and place the contents into a folder with the same name. sub1.rar should be extracted to sub1, sub2.rar should be extracted to sub2, and so on.
|
set -e
cd dir
for rar in ./*.rar
do
[ -f "$rar" ] || continue
dir=${rar%.rar}
mkdir "$dir"
(
cd "./$dir"
unrar x "../$rar"
)
# maybe rm "$rar"
done
Nothing clever here. Assumes you have an unrar command that takes an x option to do the eXtract. Just run a loop over the things matching ./*.rar, make sure it is a file, make a directory, then use a subshell to change directory and extract it.
| Unrar all .rar files in a directory to a folder with the same name |
1,459,582,143,000 |
I am writing to ask what the zero value (:0) means in the FROM column after the command w in bash.
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
Thank you
|
If you're talking/asking about the command w (I don't know a W command):
It usually tells, from where a user is logged in. :0 is the $DISPLAY, when the user is logged in locally.
| What does zero value means when pressing w in bash? |
1,676,672,757,000 |
cdcl() { vlog -source -lint +define+"${1:-DEBUG}" -sv "${2:-*.sv}"; }
Above is my function; I defined it in my .bashrc file. Below is my command line:
% cdcl 'RANDOM' 'abc.sv'
This give me >> vlog -source -lint +define+RANDOM -sv abc.sv
Is there a way where I can skip giving the 1st placeholder value and only give the 2nd placeholder on the command line?
|
Since the function uses the default value expansion with the colon, i.e. "${1:-DEBUG}" and not "${1-DEBUG}", an empty value will also trigger the use of the default value. You can pass an empty value with "" or '', so
cdcl "" abc.sv
should result in the function running
vlog -source -lint +define+DEBUG -sv abc.sv
If the function had used "${1-DEBUG}" instead, it'd only trigger the default value for an unset value, and it's not possible to have $1 undefined while $2 is defined. They have to come in order, the list can't have "holes" in it.
Note that passing an empty string like that is not a pretty solution, and you might want to decide on e.g. using getopt within the function to have it take arguments like cdcl -d RANDOM -s abc.sv instead, again with the appropriate default values.
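A sketch of what such an option-taking variant could look like, using the shell's getopts builtin (echo stands in for vlog here so the generated command line is visible; replace it with the real vlog invocation):

```shell
cdcl() {
  local def=DEBUG src='*.sv' opt OPTIND=1
  while getopts 'd:s:' opt; do
    case $opt in
      d) def=$OPTARG ;;   # +define+ value
      s) src=$OPTARG ;;   # source file
    esac
  done
  echo vlog -source -lint +define+"$def" -sv "$src"
}

cdcl -s abc.sv           # prints: vlog -source -lint +define+DEBUG -sv abc.sv
cdcl -d RANDOM -s abc.sv # prints: vlog -source -lint +define+RANDOM -sv abc.sv
```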
| How to pass 2nd argument when 1st argument is defaulted to its value in command line? |
1,676,672,757,000 |
So I am working on building a minimal OS using BusyBox. I want to run my .NET program from BIOS, but I am not sure whether Linux will run a .NET program or not, so to clear my path I am using a C program instead of the .NET program. I am generating the initrd.img file successfully. Before generating the initrd.img file, I want to integrate my hello.c program with the init file.
This is the command I used to read the file, and it reads the C program code successfully: echo 'cat /etc/hello.c' >> init
Now I want to execute this hello.c, so I tried the following commands, but they do not work the way the cat command did:
echo 'gcc -o echo /etc/hello.c' >> init
echo 'chmod +x echo' >> init
echo './echo' >> init
This is the error I am getting:
/init: line 6: gcc: not found
chmod: echo: No such file or directory
/init: line 8: ./echo: not found
|
Your script is failing because you don’t have gcc in your initrd.
You should not ship hello.c in your initrd; build the program beforehand and ship the resulting binary instead. You should also specify the full path to your program when attempting to run it.
| How to integrate C program with init file? |
1,676,672,757,000 |
So a document contains strings of the form:
9s5s4sKs7h6h4h2d4dAdTd2c3c
6hKhQs6s3s7s5d3d2d9dKdAd4h
5s9sTs8hKhJc4s6c4hJsAc2dKs
Every line is made up out of pairs consisting of digit/uppercase letter and a lowercase letter.
I want to look for all lines that have all pairs with the same lowercase letter grouped (next to each other) (eg. Ks or 2d). So on the first line, the pairs with 's' are all next to each other, then the 'h' pairs, then 'd' and then 'c'.
There are only 4 possible lowercase letters (s,c,d,h), so I made the following regex expression:
^.(.)(.\1)*.(.)(.\3)*.(.)(.\5)*.(.)(.\7)* .*$
but this doesn't account for a line like:
6hKhQs6s3s7s5d3d2d9dKdAd4h
where there are a few 'h' pairs at the beginning, then some other pairs and then an 'h' pair at the end. So this line shouldn't match.
I'm using the grep command for this.
Example:
grep -P "^.(.)(.\1)*.(.)(.\3)*.(.)(.\5)*.(.)(.\7)* .*$" file.txt
Input:
9s5s4sKs7h6h4h2d4dAdTd2c3c
6hKhQs6s3s7s5d3d2d9dKdAd4h
5s9sTs8hKhJc4s6c4hJsAc2dKs
Output:
9s5s4sKs7h6h4h2d4dAdTd2c3c
On this line, we first have the pairs with 's', after that the pairs with 'h', then the pairs with 'd' and finally the pairs with 'c'.
This line 6hKhQs6s3s7s5d3d2d9dKdAd4h doesn't match because there is a group of 'h' pairs at the beginning, then some other pairs and then at the end there is an 'h' pair again. This means that they aren't grouped together and the line shouldn't match.
|
Since the letters are limited to just scdh, you could just filter out lines that have a different letter between two occurrences of a same letter with:
grep -ve 's.*[cdh].*s' -e 'c.*[sdh].*c' -e 'd.*[sch].*d' -e 'h.*[scd].*h'
Or with perl-like regexps if available:
grep -Pv '([scdh]).*(?!\1)[scdh].*\1'
Here, you'd be able to use [a-z] in place of [scdh] for any ASCII lower case letter. Or [[:lower:]] or \p{Ll} for any lower case letter in any alphabet.
| Finding all lines where pairs are grouped together with a regex using grep |
1,676,672,757,000 |
I am trying to get the number of available updates from dnf without using sudo privileges. I guess dnf check-update is the starting point, but any idea how I can do this? It's important that the solution is one line.
Is something like this possible ?
|
It seems that this works:
dnf check-update | grep -Ec ' updates$'
since each update line ends with the word updates (the repository name).
| One line command for getting the number of dnf updates |
1,676,672,757,000 |
As a fun side project, I'm building a serverless Todo application on AWS. I do a lot in the terminal, but my knowledge is basic.
The command to add something into my DynamoDB table via the AWS CLI (v2.3.4) is this:
aws dynamodb put-item \
--table-name tasks \
--item \
'{"task_id": {"S": "3495353e-726f-4e0e-b290-8014c03be971"}, "user_id": {"S": "aae30f8e-aabe-4e38-918f-0f5a2223f589"}, "created_at": {"S": "2022-09-09T12:51:05Z"}, "content": {"S": "Clean car"}, "is_done": {"BOOL": false}}' \
--profile personal
Notice that for created_at I'm manually typing in the ISO-8601 date as a string.
Now I know that on linux, in order to get the UTC datetime in the ISO-8601 format I need to run:
date -u +"%Y-%m-%dT%H:%M:%SZ"
My question is, how do I fit that into my DynamoDB put-item command so that I automatically/dynamically get the created_at from my linux system.
What I have tried:
I tried to simply plunk the date command into my DynamoDB command where the created_at value would go like this:
aws dynamodb put-item \
--table-name tasks \
--item \
'{"task_id": {"S": "3495353e-726f-4e0e-b290-8014c03be971"}, "user_id": {"S": "aae30f8e-aabe-4e38-918f-0f5a2223f589"}, "created_at": {"S": date -u +"%Y-%m-%dT%H:%M:%SZ"}, "content": {"S": "Clean car"}, "is_done": {"BOOL": false}}' \
--profile personal
But that doesn't work. The command errors out and it returns with:
Error parsing parameter '--item': Invalid JSON: Expecting value: line 1 column 138 (char 137)
JSON received: {"task_id": {"S": "3495353e-726f-4e0e-b290-8014c03be971"}, "user_id": {"S": "aae30f8e-aabe-4e38-918f-0f5a2223f589"}, "created_at": {"S": date -u +"%Y-%m-%dT%H:%M:%SZ"}, "content": {"S": "Clean car"}, "is_done": {"BOOL": false}}
Update:
I just tried the $(command) method as suggested by Marcus, and I still get the invalid JSON error.
aws dynamodb put-item \
--table-name tasks \
--item \
'{"task_id": {"S": "3495353e-726f-4e0e-b290-8014c03be971"}, "user_id": {"S": "aae30f8e-aabe-4e38-918f-0f5a2223f589"}, "created_at": {"S": "$(date -u +"%Y-%m-%dT%H:%M:%SZ")"}, "content": {"S": "Clean car"}, "is_done": {"BOOL": false}}' \
--profile personal
|
You could use jq to insert the date into the json object:
aws dynamodb put-item \
--table-name tasks \
--item "$(
jq -c '.created_at.S=(now|todate)' << 'EOF'
{
"task_id": {
"S": "3495353e-726f-4e0e-b290-8014c03be971"
},
"user_id": {
"S": "aae30f8e-aabe-4e38-918f-0f5a2223f589"
},
"created_at": {"S":""},
"content": {
"S": "Clean car"
},
"is_done": {
"BOOL": false
}
}
EOF
)" --profile personal
Also note that you can format dates in zsh without having to invoke date:
${(%):-%D{%FT%TZ}} expands to the current time in that format, though you'd have to set TZ=UTC0 to get actual Zulu time. There's also a strftime builtin in the zsh/datetime module.
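For reference, and as an alternative to the jq route: the OP's direct attempt fails only because $(...) sits inside single quotes, where the shell performs no expansion. Closing the single-quoted JSON around a double-quoted command substitution makes it work (a quoting sketch, shown on a trimmed-down item):

```shell
# single-quoted JSON ends, double-quoted substitution runs,
# then single-quoted JSON resumes
item='{"created_at": {"S": "'"$(date -u +"%Y-%m-%dT%H:%M:%SZ")"'"}}'
echo "$item"
# e.g. {"created_at": {"S": "2022-09-09T12:51:05Z"}}
```

The same close-quote/reopen-quote pattern applies inside the full --item payload.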
| Within an AWS CLI command, add the ISO 8601 datetime |
1,676,672,757,000 |
I have this output and would like to convert it into a Prometheus-like format with jq.
cat /tmp/wp-plugin.txt | jq .[]
{
"name": "akismet",
"status": "active",
"update": "none",
"version": "5.0"
}
{
"name": "performance-lab",
"status": "active",
"update": "none",
"version": "1.4.0"
}
My goal is to get output like this using the jq CLI tool:
wp_plugins{name="akismet",status="active",update="none",version="5.0"}0
wp_plugins{name="performance-lab",status="active",update="active",version="1.4.0"}1
|
Using jq:
jq -r 'to_entries[] | .key as $k | .value | to_entries | map("\(.key)=\(.value|@json)") | "wp_plugins{\(join(","))}\($k)"' file
or
jq -r 'to_entries[] | .key as $k | .value | to_entries | "wp_plugins{\(map("\(.key)=\(.value|@json)")|join(","))}\($k)"' file
This takes your original JSON file and starts by turning each array entry into a key-value pair using to_entries. The key will be the array index, and the value will be the actual object.
Since we want to create a comma-delimited key-value list with unquoted keys and quoted values, with = between each key and value, we need to process the .value (i.e. the object). We do that by passing it through to_entries again to get a new list of keys and values.
The keys and values are then passed to a string constructor that composes the output in the format you are looking for, prepending the string wp_plugins{ to the start of the comma-delimited list, appending } and the array index to the end.
The output, given the data in the question (when the data is put into an array first):
wp_plugins{name="akismet",status="active",update="none",version="5.0"}0
wp_plugins{name="performance-lab",status="active",update="none",version="1.4.0"}1
A corrected variant that outputs 0 at the end of the line when the update field is not "available", and 1 when it is:
jq -r '
to_entries[] |
(if .value.update == "available" then 1 else 0 end) as $v |
.value | to_entries |
map("\(.key)=\(.value|@json)") | join(",") |
"wp_plugins{\(.)}\($v)"' file
| Merge jq output into a comma separated string like |
1,676,672,757,000 |
I have tried using jpegoptim and even tried the manpage but am stumped. Here is a file I want to reduce to 50k and even open to reducing quality of the data -
Image:
Filename: shirish.jpg
Format: JPEG (Joint Photographic Experts Group JFIF format)
Mime type: image/jpeg
Class: DirectClass
Geometry: 4624x3468+0+0
Resolution: 72x72
Print size: 64.2222x48.1667
Units: PixelsPerInch
Colorspace: sRGB
Type: TrueColor
Base type: Undefined
Endianness: Undefined
Depth: 8-bit
Channel depth:
red: 8-bit
green: 8-bit
blue: 8-bit
Channel statistics:
Pixels: 16036032
Red:
min: 0 (0)
max: 255 (1)
mean: 119.779 (0.46972)
standard deviation: 60.359 (0.236702)
kurtosis: -1.33094
skewness: -0.2895
entropy: 0.920909
Green:
min: 0 (0)
max: 255 (1)
mean: 115.071 (0.45126)
standard deviation: 63.5402 (0.249177)
kurtosis: -1.51367
skewness: -0.162973
entropy: 0.912909
Blue:
min: 0 (0)
max: 255 (1)
mean: 114.566 (0.449277)
standard deviation: 61.8685 (0.242621)
kurtosis: -1.51015
skewness: -0.101152
entropy: 0.912805
Image statistics:
Overall:
min: 0 (0)
max: 255 (1)
mean: 116.472 (0.456752)
standard deviation: 61.9226 (0.242834)
kurtosis: -1.46062
skewness: -0.184944
entropy: 0.915541
Rendering intent: Perceptual
Gamma: 0.454545
Chromaticity:
red primary: (0.64,0.33)
green primary: (0.3,0.6)
blue primary: (0.15,0.06)
white point: (0.3127,0.329)
Background color: white
Border color: srgb(223,223,223)
Matte color: grey74
Transparent color: black
Interlace: None
Intensity: Undefined
Compose: Over
Page geometry: 4624x3468+0+0
Dispose: Undefined
Iterations: 0
Compression: JPEG
Quality: 92
Orientation: RightTop
Profiles:
Profile-app4: 7600 bytes
Profile-exif: 51509 bytes
Properties:
date:create: 2022-08-20T05:27:53+00:00
date:modify: 2022-08-20T05:27:53+00:00
exif:ApertureValue: 169/100
exif:BrightnessValue: 264/100
exif:ColorSpace: 1
exif:DateTime: 2022:08:20 10:53:10
exif:DateTimeDigitized: 2022:08:20 10:53:10
exif:DateTimeOriginal: 2022:08:20 10:53:10
exif:DigitalZoomRatio: 100/100
exif:ExifOffset: 226
exif:ExifVersion: 48, 50, 50, 48
exif:ExposureMode: 0
exif:ExposureProgram: 2
exif:ExposureTime: 1/50
exif:Flash: 0
exif:FNumber: 180/100
exif:FocalLength: 532/100
exif:FocalLengthIn35mmFilm: 28
exif:ImageLength: 3468
exif:ImageUniqueID: I64ELODR0PM
exif:ImageWidth: 4624
exif:Make: samsung
exif:MaxApertureValue: 169/100
exif:MeteringMode: 2
exif:Model: SM-M526B
exif:OffsetTime: +05:30
exif:OffsetTimeOriginal: +05:30
exif:PhotographicSensitivity: 160
exif:PixelXDimension: 4624
exif:PixelYDimension: 3468
exif:SceneCaptureType: 0
exif:ShutterSpeedValue: 1/50
exif:Software: M526BXXU1BVG4
exif:thumbnail:Compression: 6
exif:thumbnail:ImageLength: 384
exif:thumbnail:ImageWidth: 512
exif:thumbnail:JPEGInterchangeFormat: 782
exif:thumbnail:JPEGInterchangeFormatLength: 50721
exif:thumbnail:ResolutionUnit: 2
exif:thumbnail:XResolution: 72/1
exif:thumbnail:YResolution: 72/1
exif:WhiteBalance: 0
exif:YCbCrPositioning: 1
jpeg:colorspace: 2
jpeg:sampling-factor: 2x2,1x1,1x1
signature: 7841c1e0cbe1aa5b79f0dea6791ffd321e2cf05715a9ab3cd210fd551c9bf355
Artifacts:
filename: shirish.jpg
verbose: true
Tainted: False
Filesize: 3.65599MiB
Number pixels: 16.036M
Pixels per second: 126.41MB
User time: 0.120u
Elapsed time: 0:01.126
Version: ImageMagick 6.9.11-60 Q16 x86_64 2021-01-25 https://imagemagick.org
Now I tried both imagemagick convert as well as jpegoptim to get a file which is around 50k. I need the file at that size as I have to upload it somewhere and there are size constraints.
I got the above exif and photo data using the identify -verbose switch in imagemagick.
Could somebody help me in the above.
I did try the following -
$jpegoptim --size=50k shirish.jpg
Instead of 50kb I got a file that is 200 kb, good but not what I wanted :(
I even tried -
$jpegoptim --max=10 --size=50k shirish-aadhar-2-reduced.jpg
But even that did not produce the desired results.
$ jpegoptim --version
jpegoptim v1.4.7 x86_64-pc-linux-gnu (Jun 20 2022)
Copyright (C) 1996-2022, Timo Kokkonen
This program comes with ABSOLUTELY NO WARRANTY. This is free software,
and you are welcome to redistirbute it under certain conditions.
See the GNU General Public License for more details.
libjpeg version: 6b 27-Mar-1998
Copyright (C) 1991-2021 The libjpeg-turbo Project and many others
Looking forward for help. I know I could use some online tool for the same, but would like offline it possible.
|
You can use ImageMagick to strip all metadata and decrease the quality. Check this answer for more info:
quality in 85
progressive (proven compression)
a very tiny Gaussian blur to optimize the size (0.05 or 0.5 radius, depending on the quality and size of the picture); this notably optimizes the size of the JPEG
Strip any comment or EXIF metadata
in imagemagick should be
convert -strip -interlace Plane -gaussian-blur 0.05 -quality 85% source.jpg result.jpg
or in the newer version:
magick source.jpg -strip -interlace Plane -gaussian-blur 0.05 -quality 85% result.jpg
You can play with the -quality number. But compressing a 16 MP image down to 50 kB will not produce pleasant results.
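Since the goal is a specific file size, ImageMagick's jpeg:extent define is also worth knowing: it searches for a quality setting that keeps the output at or under the given size. Combined with a resize (a sketch; the 25% factor is an arbitrary starting point):

```shell
# target roughly 50 kB; resize first so the quality search does not
# have to crush a 16 MP image down to unusable quality
convert shirish.jpg -strip -resize 25% -define jpeg:extent=50kb shirish-50k.jpg
```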
| How to use jpegoptim to have files only 20kb in size? |
1,676,672,757,000 |
I read this post:
How do I run multiple background commands in bash in a single line?
I understand the answer, but having the option to execute a set of commands through either {} or () led me to create this post.
If such scenarios exist: when is it mandatory to use {} over () (and vice versa), and why?
|
The difference between the two is that () creates a subshell.
For example, you can try this:
cd $HOME ; ls
The output with those commands will list the files and directories you have for the current user.
Now, using subshell, you can try this:
( cd / ; ls ; ) ; ls
What we are doing here is creating a subshell (cd / ; ls) that changes the current directory to / and then lists its files and directories.
After that, once the subshell ends, we list the files of the current directory, but this is not the / dir: in this case the current directory is still the user's home folder ($HOME).
Now if you change the () for {} the behavior will be different.
{ cd / ; ls ; } ; ls
Here, the output will list the files and dirs in the / directory for both ls commands.
Let's check another example:
( echo Subshell is $BASH_SUBSHELL ; ) ; echo Subshell is $BASH_SUBSHELL
Those commands will echo respectively:
Subshell is 1
Subshell is 0
As you can see, using the variable $BASH_SUBSHELL you can get the current subshell level you are at, so when you use (), the value of BASH_SUBSHELL changes (you can nest subshells as much as you want).
And another more example:
( vartmp=10 ; echo var is $vartmp ; ) ; echo var is $vartmp
In this case, the output will be:
var is 10
var is
As you can see,in the second line the $vartmp is empty. This is correct, because when a subshell ends with the execution, all variables, functions and some changes (like modifying a environment variable) will get cleared.
So, when you want to display the $vartmp after subshells ends, the output will be empty because the variable doesn't exist.
You can try changing the () to {} in those commands to check the different behaviors.
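One scenario where {} is effectively mandatory: when the grouped commands must run in the current shell, for instance reading parts of a single redirected input into variables. With ( ... ) the variables would be lost when the subshell exits (a small sketch):

```shell
printf 'first\nrest\n' > /tmp/lines

{ read -r head; tail=$(cat); } < /tmp/lines   # current shell: variables survive
echo "head=$head tail=$tail"                  # prints: head=first tail=rest
```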
| execute set of commands in group by {} vs () [duplicate] |
1,676,672,757,000 |
I'm having trouble adding a user to a group:
# useradd -aG group user
# useradd -a -G group user
# useradd -G group user --append
It tells me that the -a option is an invalid or unrecognized.
Can someone help me add the user to a second group (and not make him belong to it as the primary group)?
Using Fedora 36. Thanks.
|
The useradd command is for adding a new user account. The usermod command is for modifying an existing user account - for example by adding it to a group
usermod -aG group user
| Cannot append user to group |
1,676,672,757,000 |
I'm aware of the which command but when I run it on Java, I get the following path:
$ which java
/bin/java
What I'm looking for, I think, is the Java path I get when I run the following Maven command:
$ mvn -version
Apache Maven 3.6.3
Java version: 11.0.14.1, vendor: Ubuntu, runtime: /usr/lib/jvm/java-11-openjdk-amd64
Default locale: en, platform encoding: UTF-8
OS name: "linux", version: "5.13.0-1021-aws", arch: "amd64", family: "unix"
Isn't the latter (/usr/lib/jvm/java-11-openjdk-amd64) the correct Java home path?
If so, which command would directly return it?
Running the java version command doesn't return any paths.
|
readlink -f /bin/java will trace the symlink all the way down to the actual executable. The result you get will be something of the form /usr/lib/jvm/java-11-openjdk-amd64/bin/java. Omit the /bin/java part at the end to get the JDK/JRE home path.
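Putting that together, one common way (a sketch, assuming java resolves through symlinks to something of the form <java home>/bin/java) is to strip the last two path components:

```shell
# resolve the symlink chain, then drop the trailing /bin/java
java_home=$(dirname "$(dirname "$(readlink -f "$(which java)")")")
echo "$java_home"   # e.g. /usr/lib/jvm/java-11-openjdk-amd64
```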
| Which shell command returns Java's home path? |
1,676,672,757,000 |
For some reason I have ended up using the 'local' (portable) version of Firefox, and it can check for updates from its dedicated window (Help - About...).
Can that window be opened with one command?
|
There is no specific Firefox option or URL to open that specific window that I am aware of but you can automate key strokes to open that window using:
xdotool search --onlyvisible --class firefox windowactivate --sync key --delay 500 Alt+h a
That will search for a visible Firefox window, focus the window and send the necessary key strokes.
Adjust the delay between keystrokes if required.
| How to open 'About Mozilla Firefox' (update) window with one command |
1,676,672,757,000 |
I would like to compose a string of command line arguments in a variable, then use those to execute a command. I show a simplified example below. It is a script called "listfiles". It reads a file called "ignore" listing files not to list, then is supposed to list every file except those. What's the best way to accomplish what I'm trying to do?
ignorelist=( `cat "./ignore" `)
for LINE in "${ignorelist[@]}"
do
Ignore="$Ignore -path '$LINE' -prune -o ";
done
find ${Ignore} -print
Content of the ignore file (two directory names):
./d1
./d2
output of bash -x listfiles:
+ ignorelist=(`cat "./ignore" `)
++ cat ./ignore
+ for LINE in '"${ignorelist[@]}"'
+ Ignore=' -path '\''./d1'\'' -prune -o '
+ for LINE in '"${ignorelist[@]}"'
+ Ignore=' -path '\''./d1'\'' -prune -o -path '\''./d2'\'' -prune -o '
+ find -path ''\''./d1'\''' -prune -o -path ''\''./d2'\''' -prune -o -print
.
./1
./2
./3
./d1
./d1/4
./d1/d1a
./d1/d1a/5
./d2
./ignore
./listfiles
I would like it so that d1 and d2 and everything under is not included in the output, or so whatever files/directories are in the ignore file are not included in the output. Would storing the whole command in a var and evaling it be better?
|
Always use arrays to store things that are separate. Make sure you build a valid set of options for find. Don't embed extra sets of quotes in the arguments (this is ultimately what makes your command fail).
#!/bin/bash
readarray -t ignorepaths <./ignore
ignoreopts=()
for pathname in "${ignorepaths[@]}"; do
ignoreopts+=( -o -path "$pathname" )
done
find . \( "${ignoreopts[@]:1}" \) -prune -o -print
See also
Using shell variables for command options ...and others.
For /bin/sh:
#!/bin/sh
set --
while IFS= read -r pathname; do
set -- "$@" -o -path "$pathname"
done <./ignore
shift
find . \( "$@" \) -prune -o -print
| find files, exclude directories |
1,676,672,757,000 |
There is a thing called Soft hyphen. Its unicode value is U+00AD and it cannot be seen in my text editor.
Apparently, a few of my files are riddled with it:
$ perl -C7 -ne 'for(split(//)){print sprintf("U+%04X", ord)." ".$_."\n"}'
nc
U+006E n
U+00AD
U+0063 c
U+000A
In between n and c, there is a soft hyphen. If you copy this command echo nc, you will find that it has three characters (not two).
How can I remove all soft hyphens (U+00AD) from my file?
|
Just use sed (I tested with GNU sed, I do not know if non-GNU seds can do it) and copy/paste the character into the sed expression. Here, I copied your echo nc command and ran it, redirecting the output to a file which gave me a test file with the character of interest:
$ perl -C7 -ne 'for(split(//)){print sprintf("U+%04X", ord)." ".$_."\n"}' file
U+006E n
U+00C2 Â
U+00AD
U+0063 c
U+000A
It also added a U+00C2 Â which I don't understand, but I don't know unicode, so I assume it's some sort of unicode weirdness. The file looks as expected; there is actually what looks like nothing at all between the n and c, but it is in fact the soft hyphen:
$ cat file
nc
$ od -c file
0000000 n 302 255 c \n
0000005
Regardless, copy/pasting that apparently white space and feeding it to uniprops, gives:
$ uniprops ''
U+00AD ‹U+00AD› \N{SOFT HYPHEN}
\pC \p{Cf}
All Any Assigned C Other Case_Ignorable CI Cf Format Changes_When_NFKC_Casefolded CWKCF Common Zyyy Default_Ignorable_Code_Point DI Graph X_POSIX_Graph
Latin_1 Latin_1_Supplement Latin_1_Sup InLatin1 Print X_POSIX_Print Unicode
And copying into a sed substitution operator gives:
$ sed 's///g' file | perl -C7 -ne 'for(split(//)){print sprintf("U+%04X", ord)." ".$_."\n"}'
U+006E n
U+0063 c
U+000A
In other words, it correctly removes it. So you can apply that command to all of the affected files:
sed -i 's///g' file1 file2 ... fileN
Try it on a couple of files first (and use -i.bak or -i .bak, depending on your OS and sed implementation, to keep backups so you can test safely) and then run it on all of them.
| How to remove all soft hyphens (U+00AD) from a file |
1,676,672,757,000 |
When starting, restarting an apache server is there a difference between the following commands?
sudo service apache2 restart
sudo service apache2 stop
sudo service apache2 start
AND
sudo systemctl restart apache2
sudo systemctl stop apache2
sudo systemctl start apache2
I'm just learning a little bit about Linux servers by playing around with the LAMP stack, but I want to know why certain guides use one syntax and other guides the other.
|
I'm just learning a little bit about Linux servers by playing around with the LAMP stack, but I want to know why certain guides use one syntax and other guides the other.
Using a sample file out of my /etc/init.d, which is for an RLM license manager, here is the relevant code in that file related to start, stop, restart, and status:
status() {
pid=`_getpid ${RLM_ROOT}/rlm`
if [ -z "$pid" ]; then
echo "rlm is stopped"
return 3
fi
echo "rlm (pid $pid) is running..."
return 0
}
restart() {
stop
start
}
case "$1" in
start)
start
;;
stop)
stop
;;
status)
status
;;
restart)
restart
;;
*)
echo $"Usage: $0 {start|stop|status|restart}"
exit 1
esac
As you can see, restart is just a function that calls stop and then start. That is all restart is: nothing more than stop followed by start.
Can an init.d or systemctl file define restart to be different and more involved than my example? Yes, and you would have to look at the code first hand to really know. But it is highly likely they are all very similar to what I posted above, including your apache file, with regard to start | stop | restart | status.
In the context of your question, if you want to stop AND start your apache2 service, then the least amount of typing to accomplish that would be service apache2 restart as opposed to doing two separate commands manually to first stop and then start. But either way you would accomplish the same thing. Also recognize if you do restart then the service will immediately try to be started after it is stopped, regardless if the stop was successful; this may be good or bad depending on what you are doing or debugging. Sometimes its more useful to do a service whatever stop and then have it stopped for however long while you do other things before manually issuing the subsequent service whatever start.
init.d is the old Linux way, built around the service syntax, and is now replaced on systemd Linux by the systemctl syntax. But service whatever <start|stop|restart|status> is still supported and will alias to systemctl <start|stop|restart|status> whatever. So there is no difference between using service and systemctl on modern systemd Linux systems. If you try to use systemctl on an old init.d-type Linux system, Red Hat 5 for example, it'll simply be command not found [hopefully] for obvious reasons. You can read web articles explaining init.d vs systemd.
It is still perfectly valid, and correct, to use the service whatever <start|stop|status|restart> syntax in documentation as well as typing it on the command line. For documentation and guides it gets the point across in the fewest characters, and since it aliases pretty much directly to systemctl (which manages services), it's just more human readable, in my opinion, to continue using that legacy init.d service syntax in any documentation.
| Is there a difference between these two Linux server commands? [duplicate] |
1,676,672,757,000 |
I have an smb share that I see in the Files explorer as smb://whitebox.local/photos/
If I try to use commands on this smb share using the smb:// syntax, I get a "No such file or directory" error message:
hippo@hippo-camp:~/Desktop$ ls smb://whitebox.local/photos/
ls: cannot access 'smb://whitebox.local/photos/': No such file or directory
How do I solve this?
|
Applications using GUI frameworks such as Gnome or KDE let you access not only local files, but also various kinds of URL. On the command line, and in GUI applications that don't support URL, you can only access files.
Files don't have to be local files: they can be files on a network share, but that share has to be mounted. (Note in case you're used to Windows: whereas Windows traditionally makes each disk and network share available under a separate drive letter, Unix makes all files accessible from a single root.)
Generally, if a file is available through a URL syntax in a Gnome file manager, you can make it available to all applications by mounting the resource using gvfs. You can do that with the gio command. (It should be available in your distribution, but it may not be installed by default. On Debian/Ubuntu/Mint… it's in the libglib2.0-bin package, which is automatically installed if ubuntu-desktop or gnome is.)
gio mount smb://whitebox.local/
ls -l $XDG_RUNTIME_DIR/gvfs/whitebox.local/photos/
| Bash commands using smb:// -> No such file or directory |
1,676,672,757,000 |
I wrote the following command: lsblk -nl /dev/sdd -o MOUNTPOINT | awk '{if (NR!=1 && $1) { print 1; } else { print 0; }}'
It is supposed to check if any of the partitions of the given device, in this case /dev/sdd, are mounted.
But for some reason, the script prints both 1 and 0? How does that make any sense?
I need this command to evaluate to true if there is a mountpoint and to false otherwise to use it in my shellscript.
|
awk runs its code as a loop across every input line, so you'll get either a 1 or a 0 for each non-empty line. You could instead compute a single result and print it once at the end.
If you want to determine whether or not a device has any mounted partitions, consider using this approach:
device=/dev/mmcblk0
if [ -n "$(lsblk -nl -o MOUNTPOINT "$device")" ]
then
echo "Partitions are mounted on $device"
else
echo "Device $device is currently unused"
fi
If you really want to keep the awk approach, maybe this is what you're looking for. (I'm not entirely sure why you wanted to ignore the first line. In my tests it seemed to refer to the entire device, and there can be a situation where the device itself holds a filesystem, so I've removed the skip test.)
device=/dev/sdd
lsblk -nl "$device" -o MOUNTPOINT | awk 'NF { found=1 } END { print found+0 }'
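If you'd rather keep everything inside awk, its exit status can carry the answer directly instead of a printed 0/1. This is a sketch of that idea, not part of the original answer:

```shell
device=/dev/sdd
# awk exits 0 if any non-empty MOUNTPOINT line was seen, 1 otherwise
if lsblk -nl "$device" -o MOUNTPOINT | awk 'NF { f=1; exit } END { exit !f }'
then
    echo "Partitions are mounted on $device"
else
    echo "Device $device is currently unused"
fi
```

The `exit` in the main rule stops reading as soon as one mountpoint is found; the `END` block then turns the flag into the process exit status.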
| awk prints if and else case |
1,676,672,757,000 |
I have managed to get the total number of users on the system with this:
$ getent passwd | wc -l
I need the number of users on the system with a certain first name (for example 'Josh') and whose usernames start with a 'b'. How? I don't know the syntax for this.
|
You can try something like:
getent passwd|awk -F: '$5 ~ /^Josh([ ,.]|$)/ && $1 ~ /^b/'|wc -l
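To sanity-check the pattern, you can feed it a couple of hypothetical passwd lines (the accounts and names here are made up):

```shell
printf '%s\n' \
  'bob:x:1001:1001:Josh Smith:/home/bob:/bin/bash' \
  'alice:x:1002:1002:Joshua Lee:/home/alice:/bin/bash' |
  awk -F: '$5 ~ /^Josh([ ,.]|$)/ && $1 ~ /^b/' | wc -l
# → 1
```

Only "bob" counts: his GECOS field starts with "Josh" followed by a space, and his username starts with "b". "Joshua" is rejected because the character after "Josh" is not a space, comma, period, or end of field.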
| What is the shell command to tell me how many users there are on the system with a certain first name and whose username starts with a 'b'? |
1,676,672,757,000 |
I want to run a set of commands from a bash script. However, I don't know how to write the quoting in a bash script.
The following is the bash script which I want to run; however, in the cmake -DCMAKE_C_FLAGS I want to add another flag, -gcc-name=/path/bin/gcc. I want to do it through a shell script and eventually run that shell script, which is going to give me the installation.
Please kindly suggest me a way to do this.
mkdir /g/g92/bhowmik1/installTF/ROSS;
cmake -DCMAKE_INSTALL_PREFIX=/g/g92/bhowmik1/installTF/ROSS -DCMAKE_C_FLAGS=-O3 -DCMAKE_C_COMPILER=mpicc -DARCH=x86_64 -DROSS_BUILD_MODELS=ON ..;
make;
make install;
|
Quote the whole value:
cmake -DCMAKE_INSTALL_PREFIX=/g/g92/bhowmik1/installTF/ROSS \
-DCMAKE_C_FLAGS='-O3 -gcc-name=/path/bin/gcc' \
-DCMAKE_C_COMPILER=mpicc -DARCH=x86_64 -DROSS_BUILD_MODELS=ON ..
| Add multiple options in cmake flag in a shell script and run the shell script |
1,676,672,757,000 |
I am trying to add an extra header item to an existing email in a particular mbox, e.g.
X-archived-to-crm: true,user,CRM-ID
after it was archived, the CRM-ID is basically the id in the DB of the CRM.
My MTA is sendmail, not sure whether this matters.
I had a look at mail and mailx but cannot figure out how to do it.
I have been able to do this using Mutt in interactive mode, so there must be a way to do this from the command line.
Any (commandline) utility will do, it just needs to work non interactively.
|
This can be done fairly easily with perl -i as an mbox file is just plain text. There are various mbox manipulation library modules for perl, but something as simple as this doesn't need them.
The biggest difficulty is avoiding writing to the mbox file while something else is writing to it at the same time.
The script below uses the standard dotlock method of avoiding multiple simultaneous writers to the same mbox file (i.e. use mboxfilename.lock).
It should probably do flock and fcntl locking in addition to dotlock, to match whichever locking method(s) are used by your Mail Delivery Agent (MDA) and other programs that may be writing the mbox (e.g. mutt or other Mail User Agents (MUAs), or a Mail Transfer Agent (MTA) like postfix or sendmail, or a POP or IMAP daemon).
I'll leave that as an exercise for the reader - see the File::FcntlLock and File::Flock perl library modules (there are also several similar modules with differing implementations of the same things, these are just the first I found). You can install these on Debian (etc) with apt-get install libfile-flock-perl libfile-fcntllock-perl. Other distros probably have them or similar modules packaged. Otherwise, install with cpan.
#!/usr/bin/perl -i.bak
#!/usr/bin/perl -i
# delete the first #! line if you don't want perl -i to create a .bak copy
use strict;
use Getopt::Std;
my %opts;
getopts('m:i:u:h', \%opts);
if ($opts{h}) {
print "$0 <-m message-id> <-u user> <-i crmid> [mbox file...]\n";
exit 1;
};
my $msgid = $opts{m} // die "-m message-id is required\n";
my $user = $opts{u} // die "-u user is required\n";
my $crmid = $opts{i} // die "-i crmid is required\n";
# I don't want to implement my own -i, so I'll just iterate
# over each file on @ARGV one at a time
my @files = @ARGV;
foreach my $mbox (@files) {
# if the mbox is locked, then wait until it isn't.
while (-e "$mbox.lock") {
print "$mbox is locked!\n";
sleep 1;
};
# lock it
open(my $touch, ">", "$mbox.lock") || die "couldn't lock $mbox: $!\n";
close($touch);
@ARGV=($mbox);
while (<>) {
print;
# case-insensitive match for "Message-ID" literal string,
# case-sensitive for actual $msgid
if (m/^(?:(?i)Message-ID:) <$msgid>/) {
print "X-archived-to-crm: true,$user,$crmid\n"
};
};
# remove the lock
unlink "$mbox.lock";
}
Given the following mbox file:
From [email protected] Mon Aug 23 16:04:42 2021
Return-Path: <[email protected]>
X-Original-To: [email protected]
Delivered-To: [email protected]
Received: by example.org (Postfix, from userid 1000)
id 6B1DE3F2C; Mon, 23 Aug 2021 16:04:42 +1000 (AEST)
Date: Mon, 23 Aug 2021 16:04:42 +1000
From: Craig Sanders <[email protected]>
To: Craig Sanders <[email protected]>
Subject: test
Message-ID: <[email protected]>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Length: 56
Lines: 6
test messsage
craig
--
craig sanders <[email protected]>
Running the following command:
$ ./insert-header.pl -m '[email protected]' -i 99999 -u cas test.mbox
Results in the mbox file being changed to:
From [email protected] Mon Aug 23 16:04:42 2021
Return-Path: <[email protected]>
X-Original-To: [email protected]
Delivered-To: [email protected]
Received: by example.org (Postfix, from userid 1000)
id 6B1DE3F2C; Mon, 23 Aug 2021 16:04:42 +1000 (AEST)
Date: Mon, 23 Aug 2021 16:04:42 +1000
From: Craig Sanders <[email protected]>
To: Craig Sanders <[email protected]>
Subject: test
Message-ID: <[email protected]>
X-archived-to-crm: true,cas,99999
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Length: 56
Lines: 6
test messsage
craig
--
craig sanders <[email protected]>
BTW, I also test this on a copy of my main mbox with:
$ ./insert-header.pl -m '.*' -i 99999 -u cas main.mbox
And it inserted the same X-archived-to-crm header in every message in the mbox.
| Any utility in Linux to add item to the header of an existing email in a particular mailbox? |
1,676,672,757,000 |
my file:
Informatica(r) PMCMD, version [10.2.0 HotFix2], build [1911.0401], Workflow run status: [Failed]
Output I need is Failed
Output I am getting is 10.2.0 HotFix2
Command used:
grep "Workflow run status:" test.txt | cut -d'[' -f2 | cut -d']' -f1
My objective is to search for Workflow run status: and then print the status that follows it, because Workflow run status: can appear in any line or field.
|
cut -d'[' -f2 gets you the second [ delimited field. In
Informatica(r) PMCMD, version [10.2.0 HotFix2], build [1911.0401], Workflow run status: [Failed]
<---------- field 1 ---------> <------ field 2 ------> <------------ field 3 -----------> <field 4>
You'd rather want the fourth field if it's Failed you want. However, using sed to extract the part after Workflow run status would make more sense:
sed -n 's/^.*Workflow run status: \[\([^]]*\)\].*$/\1/p' < file
Or as the original tags on your questions suggest you're on a GNU system, get GNU grep to extract it with:
grep -Po 'Workflow run status: \[\K[^]]*' < file
Those would extract the part in Workflow run status: [there] without having to rely on how many [s there were on the line.
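A quick way to see both extractors at work on the sample line (the grep variant assumes GNU grep for -P):

```shell
line='Informatica(r) PMCMD, version [10.2.0 HotFix2], build [1911.0401], Workflow run status: [Failed]'
printf '%s\n' "$line" | sed -n 's/^.*Workflow run status: \[\([^]]*\)\].*$/\1/p'
printf '%s\n' "$line" | grep -Po 'Workflow run status: \[\K[^]]*'
# both print: Failed
```

In the grep version, \K discards everything matched so far, so -o prints only the bracket contents.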
| Unix command to cut between strings |
1,676,672,757,000 |
If I use man to view man pages, they are not interactive at all, I can merely change the visuals through a pager.
With info I can view man and info pages with interactive links, including the links in "See Also" in man pages. But info strips all highlighting for man-pages and does not recognize other kinds of links, such as https links or file paths.
I know that there are web and GUI alternatives, but isn't there some command-line tool that provides a smoother experience and can ideally handle both formats in one?
|
I found the answer where I failed to look for a long time: vim
It can view man pages and follow links, but even better, with
https://gitlab.com/HiPhish/info.vim
it will view info pages and fall back to manpages, all while staying interactive and colorful.
I used the following shell line to use it as a simple viewer:
info "$@" -w | grep -q . && $EDITOR -R -M -c "Info $*" +only
And of course there's Emacs, the master of info pages as both are from GNU, but I don't fancy launching that for quick terminal usage.
| Is there a more interactive man and info pages viewer? |
1,676,672,757,000 |
I have a bash script, /usr/bin/local/myscript. I usually run the file from external software (in my case AutoHotkey); the script finishes immediately and the window closes, so I cannot see any logs in the window, neither info logs nor error logs. So I would like to "script" the entire run for debugging when the program doesn't work well. But simply putting script ~/script.txt at the top of the myscript file didn't work; I don't know what is going on, and the lines after it basically didn't execute. So, is there a (proper) way to use "script" in bash files, or a better alternative to "script" for this?
|
But simply putting script ~/script.txt at the top of the myscript file didn't work; I don't know what is going on, and the lines after it basically didn't execute.
For a shell interpreting myscript, script ~/script.txt is just a command that runs an external executable. The executable is script in this case.
If the executable was sleep, tar or ls, you would expect the shell to wait till the executable finishes before interpreting the rest of myscript. With script it's not different: the shell waits for script to finish.
script is intended for interactive use. script ~/script.txt runs an interactive shell which is not the shell that interprets myscript. You wrote "the window closes" (when run without script), so I assume myscript runs in some terminal emulator that displays a window. If so, then (with script) you probably saw this inner shell started by script and could interact with it while the outer shell was waiting. script logs what you do in the inner shell. Exiting the inner shell terminates script, and only then does the outer shell (i.e. myscript) continue. But maybe you closed the terminal emulator, the outer shell got SIGHUP and never continued.
This is not important, since it should now be clear that script called from within myscript cannot help you with debugging myscript, even if the outer shell continues.
There are at least two methods to debug myscript:
Redirect stderr of the script to a logfile. This can be done from within the script with
# just after shebang
exec 2>/path/to/logfile
If myscript generates output on its stdout that is important and actually goes somewhere (via a pipe or redirection) then you don't want to break this. But if it normally prints to the terminal and you'd like to log this as well then redirect both stderr and stdout to the logfile:
# just after shebang
exec 2>/path/to/logfile 1>&2
This method will work even if myscript runs without a terminal.
Alternatively let the script run an interactive shell (e.g. bash) at the very end. The new shell will prevent the window from closing and you will be able to see what the script has printed to the terminal so far. This obviously requires a terminal. Note if myscript happens to exit or kill itself or exec to another executable then it will never get to the line with bash; take it into consideration.
If for some reason the interactive bash started from the script clears the terminal or something, use sleep 3600 instead. The point is you don't really need an extra shell, you need any process that keeps the terminal emulator open long enough so you can examine the previous output.
No matter which method you choose, place
set -x
early in myscript to make the shell interpreting it print commands and their arguments as they are executed. This should give you useful insight.
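Putting both pieces together, a minimal skeleton might look like this (the log path is an assumption; adjust to taste):

```shell
#!/bin/bash
exec 2>/tmp/myscript.log 1>&2   # send stdout and stderr to a logfile
set -x                          # trace every command into that log

echo "doing work"               # everything from here on is logged
```

After a failed run, /tmp/myscript.log contains both the trace (each command prefixed with +) and the script's normal output.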
| Bash "script" command within bash files? |
1,676,672,757,000 |
So, I'm creating this little convenience command for system information when bug reporting.
alias clip="xclip -selection clipboard -in"
alias bug="echo $(lsb_release -irs && uname -r && free --human --giga) | clip"
The problem is/was two-fold, when I didn't do the echo $() it wasn't capturing the stdout of all commands, just the last one. Doing it this way, however, strips the newlines making it far less readable.
Also, I notice that the $() is doing this when I actually run the alias bug, but it doesn't when I run the command directly without the alias.
❯ bug # IdeaProjects
Fedora 33
zsh: 5.11.7-200.fc33.x86_64: command not found...
zsh: 10:20:16: command not found...
zsh: total: command not found...
zsh: Mem:: command not found...
zsh: Swap:: command not found...
whereas just running it on an interactive shell
❯ echo $(lsb_release -irs && uname -r && uptime && free --human --giga) | clip # IdeaProjects
❯ Fedora 33 5.11.7-200.fc33.x86_64 10:39:11 up 3 days, 20:44, 1 user, load average: 0.70, 1.02, 0.97 total used free shared buff/cache available Mem: 15G 5.9G 2.3G 1.8G 7.6G 7.8G Swap: 12G 72M 11G
I don't understand why it would be different.
How can I write this as a single alias/command (I suppose a function is acceptable, in which case zsh) and retain the newlines?
p.s. if you have any suggestions on how to improve this for reports, feel free to comment that too.
|
Use a subshell:
alias bug="(lsb_release -irs && uname -r && free --human --giga) | clip"
This will send all three commands’ standard output to clip, without processing whitespace.
Both aliases can be combined:
alias bug="(lsb_release -irs && uname -r && free --human --giga) | xclip -selection clipboard -in"
I wouldn’t format the free output too early; rounding can cause confusion:
alias bug="(lsb_release -irs && uname -r && free) | xclip -selection clipboard -in"
The problem you’re seeing with the bug alias comes from the fact that the command substitution is evaluated when the alias is defined; run alias to see what I mean:
$ alias
...
bug=$'echo Fedora 33\n5.11.7-200.fc33.x86_64\n total used free shared buff/cache available\n ...
So the alias becomes an alias for
echo Fedora 33
5.11.7-200.fc33.x86_64
total used free shared buff/cache available
...
which explains why you get the output you quote. When run directly rather than through an alias, the command works because it’s not evaluated a second time: the expansion of $(lsb_release ...) is given as arguments to echo, which outputs it as is (after whitespace processing by the shell).
The quotes are significant:
If the substitution is not enclosed in double quotes, the output is broken into words using the IFS parameter.
So alias bug="echo $(lsb_release ...)" preserves the newlines, resulting in multiple commands when the alias is run, whereas echo $(lsb_release ...) doesn’t (and in any case, it wouldn’t matter because the command wouldn’t be re-interpreted).
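The difference in evaluation time is easy to demonstrate with a throwaway command substitution (FROZEN is just a marker string for illustration):

```shell
# Double quotes: $(…) runs now, and its output is frozen into the alias body
alias eager="echo $(echo FROZEN)"
# Single quotes: $(…) survives literally and runs each time the alias is used
alias lazy='echo $(echo FROZEN)'
alias eager lazy   # inspect both definitions
```

Listing the aliases shows that eager is simply `echo FROZEN` while lazy still contains the unexpanded `$(echo FROZEN)`.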
| how can I "pipe" all of these to xclip preservering newlines |
1,676,672,757,000 |
Is there any way to have screen sessions that aren't visible via the screen -ls command? If so, then what's the access method for these screens?
Any screen that is created by a screen -S <Name> will be shown in the output of screen -ls.
|
These locations mentioned in man screen could be of interest:
$SCREENDIR/S-<login>
/local/screens/S-<login> Socket directories (default)
/usr/tmp/screens/S-<login> Alternate socket directories.
Example:
% mkdir foo; chmod 0700 foo;
% SCREENDIR=$PWD/foo screen -S foo -d -m sleep inf
% screen -ls
No Sockets found in /var/folders/vy/t__dhyrs3d5dd_bvk6mj5t480000gn/T/.screen.
% SCREENDIR=$PWD/foo screen -ls
There is a screen on:
67294.foo (Detached)
1 Socket in /Users/muru/foo.
So, you could use different SCREENDIRs to keep separate sets of sessions.
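You could wrap this in a small helper function so that "hidden" sessions always use the same private socket directory (the function and directory names here are made up):

```shell
# Sessions created through hscreen live in a private socket dir
# that a plain `screen -ls` never looks at.
hscreen() {
    local dir="$HOME/.hidden-screens"
    mkdir -p "$dir" && chmod 700 "$dir"
    SCREENDIR=$dir screen "$@"
}
# hscreen -S secret -d -m sleep inf   # create a hidden session
# hscreen -ls                         # list only the hidden sessions
```

Anything run through hscreen is invisible to the default screen -ls, yet fully accessible through the same wrapper.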
| Creating hidden screen sessions which are not visible via "screen -ls" command |
1,676,672,757,000 |
I have a folder with markdown files containing text with citekeys in the Pandoc format [@Name:2021]. I decided to remove the colons from my citekeys and would like to automatically delete them in my markdown files. The citekeys can have the following forms:
[@Name:2021]
[@Name:2021, 10]
[@Name:Title]
[Vgl. @Name:2021]
[Vgl. @Name:2021, 20--30]
So they should become:
[@Name2021]
[@Name2021, 10]
[@NameTitle]
[Vgl. @Name2021]
[Vgl. @Name2021, 20--30]
Some citekeys that I added recently already lack colons. And there might also be some footnotes in the format ^[Text] containing colons that should not be deleted, of course.
Is there any command, script, or command-line utility that would allow me to remove the colons in the citekeys in all markdown files automatically? Thanks for your help!
|
perl is handy here: the replacement part of s/// can be evaluated as code:
perl -pe 's/\[[^]]*@.+?\]/ ($cite = $&) =~ s{:}{}g; $cite /ge' file
outputs
[@Name2021]
[@Name2021, 10]
[@NameTitle]
[Vgl. @Name2021]
[Vgl. @Name2021, 20--30]
If you're happy with the output, you can save the changes back to the file with
perl -i -pe ...
| Remove certain character within a pattern of text |
1,676,672,757,000 |
I have a directory AllData with some files like below:
AllData
|____ file_1to1000.track
|____ file_1001to2000.track
|____ file_2001to3000.track
Based on file names I created directories with file names.
for file in *; do dir=$(echo $file | cut -d. -f1); mkdir -p $dir; mv $file $dir; done
And I wanted to create another directory inside that and keep the file there. It should look like below:
AllData
|__ file_1to1000
|___ cuffcompare
|____ file_1to1000.track
|__ file_1001to2000
|___ cuffcompare
|____ file_1001to2000.track
|__ file_2001to3000
|___ cuffcompare
|____ file_2001to3000.track
After moving the files into directories like above, the file name should be changed from file_1to1000.track to soft.track. All the files inside the directories need to be renamed from their original filenames to soft.track. It should basically look like below:
AllData
|__ file_1to1000
|___ cuffcompare
|____ soft.track
|__ file_1001to2000
|___ cuffcompare
|____ soft.track
|__ file_2001to3000
|___ cuffcompare
|____ soft.track
|
Check:
for file in ./*; do
echo mkdir -p "${file%.*}"/cuffcompare/ && \
echo mv "$file" "$_"soft.track
done
to liner:
for file in ./*; do echo mkdir -p "${file%.*}"/cuffcompare/ && echo mv "$file" "$_"soft.track; done
Note: Remove the echos above once you are happy with the dry-run result.
${file%.*} strips the shortest matched suffix from the filename, so it cuts .track from the end of the filename here. This is known as parameter expansion.
$_ expands to the last argument of the previous command (see shell Special Parameters), which here is "${file%.*}"/cuffcompare/;
so mkdir creates the directory structure below for every file found (-p creates parent directories if they do not exist):
└── file_2001to3000
└── cuffcompare
then mv moves and renames the file into its related directory with the name soft.track:
└── file_2001to3000
└── cuffcompare
└── soft.track
| How to create multiple directories based on file names and change the file names in linux? |
1,676,672,757,000 |
I have written a simple script to analyse BED files (a text file format used to store genomic regions as coordinates and associated annotations; the data are presented as columns separated by spaces or tabs), and in one of my arguments I have used awk. The problem is that the second variable of my script, $2, clashes with the second column of the file in awk.
Here is my script (the problem is in the last elif):
#!/bin/bash -e
# This script provides handy functions to analyse bed files.
function show_usage (){
printf "Usage: $0 [options [parameters]]\n"
printf "\n"
printf "Options:\n"
printf " -g|--genes, Print genes avoiding repetition\n"
printf " -cg|--count_genes, Print the number of different genes found in the file\n"
printf " -cl|--count_lines, Count the number of lines\n"
printf " -chr|--count_chromosomes, Print chromosomes avoiding repetition\n"
return 0
}
if [[ "$1" == "--genes" ]] || [[ "$1" == "-g" ]];then
echo gene
# shows all genes
cat $2 | cut -f4 | cut -d "_" -f1 | sort -u
elif [[ "$1" == "--count_genes" ]] || [[ "$1" == "-cg" ]]; then
echo count genes
# Count genes
cat $2 | cut -f4 | cut -d "_" -f1 | sort -u| wc -l
elif [[ "$1" == "count_lines" ]] || [[ "$1" == "-cl" ]]; then
echo Number of lines:
# Count the number of lines
cat $2 | cut -f4 | cut -d "_" -f1 | sort -u| wc -l
elif [[ "$1" == "count_chromosomes" ]] || [[ "$1" == "-chr" ]]; then
echo Number of chromosomes
# The chromosomes of this file are:
cat $2 | cut -f1 | sort -u | sort -g
elif [[ "$1" == "Count Total Length Of Regions" ]] || [[ "$1" == "-p" ]]; then
echo Count Total Length Of Regions
# The Count Total Length Of Regions:
cat "$2" | awk -F"\t" "BEGIN{SUM=0}{ SUM+=$3-$2 }END{print SUM}"
else
echo Incorrect input provided
show_usage
fi
cat "$2" | awk -F"\t" "BEGIN{SUM=0}{ SUM+=$3-$2 }END{print SUM}"
The first $2 in the previous line refers to the second argument of my script (the input file).
The second $2 refers to the second column of the input file.
When I faced this I thought that it must be a very typical problem and that awk should have an alternative way of selecting fields, but I have not found any way to avoid it.
Do I have to choose another command to avoid this issue?
|
The problem here is that the shell is expanding $2 and $3; to avoid that, use single quotes:
cat "$2" | awk -F"\t" 'BEGIN{SUM=0}{ SUM+=$3-$2 }END{print SUM}'
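An alternative that sidesteps the quoting question entirely is to pass shell values into awk with its -v option and keep the whole program single-quoted. A self-contained sketch with made-up BED-like coordinates (columns: chrom, start, end):

```shell
label="total"
printf 'chr1\t100\t250\nchr2\t10\t60\n' |
  awk -F'\t' -v name="$label" '{ sum += $3 - $2 } END { print name, sum+0 }'
# → total 200
```

Inside the single quotes, $2 and $3 always mean awk fields, while anything from the shell arrives through -v variables.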
| Conflict with the fields $ while using awk in my script |
1,676,672,757,000 |
I'm more and more getting into trouble with my Debian 32-bit system, since some software, such as the Signal messenger, is no longer written for 32-bit.
Operating System: Debian GNU/Linux bullseye/sid
Kernel: Linux 5.8.0-2-686-pae
Architecture: x86
I am not sure if this 32-bit system was installed by mistake rather than because of the hardware's capability. Is there a way to detect whether my hardware is capable of running a 64-bit system? (preferably from the command line)
|
You can run these commands, if x86_64 and 64-bit, the answer would be yes:
$ lscpu | grep Arch
Architecture: x86_64
$ lscpu | grep mode
CPU op-mode(s): 32-bit, 64-bit
From 32-bit, 64-bit CPU op-mode on Linux
lscpu is telling you that your architecture is i686 (an Intel 32-bit CPU), and that your CPU supports both 32-bit and 64-bit operating modes. You won't be able to install x64 built applications since they're built specifically for x64 architectures. Your particular CPU can handle either the i386 or i686 built packages.
Here's a list of the flags of /proc/cpuinfo:
What do the flags in /proc/cpuinfo mean?
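Another common check is the lm ("long mode") flag in /proc/cpuinfo: if it is present, the CPU can run a 64-bit kernel even when the installed system is 32-bit.

```shell
# -w matches "lm" only as a whole word among the CPU flags
if grep -qw lm /proc/cpuinfo; then
    echo "CPU supports 64-bit (long mode)"
else
    echo "CPU is 32-bit only"
fi
```

The -w matters because many other flag names (e.g. clflush) contain the letters "lm" as a substring.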
| Detect if the computers hardware is capable for a 64bit system? |
1,676,672,757,000 |
I need to test whether any process is listening on a specific socket; fuser does not exist on the target system but lsof does. I run this command:
lsof -tU /path/to/socket
It lists the PID of the listener, which is great but lsof exits with a status of 1. I change the command to see what's wrong:
lsof -tUV /path/to/socket
It again lists the PID but also adds this:
lsof: no file use located: /path/to/socket
Is there any way to suppress this extra check of 'file use' so that it exits with 0 when it does find listeners on the socket? I've looked through the man page but can't find what I'm after. I'd like to use it sensibly like this:
sock=/path/to/socket
if [[ ! -S $sock ]] || ! lsof -tU $sock &>/dev/null; then
# task to activate socket listener
fi
|
If you're on a system with a recent version of ss (like that from iproute2-ss190107 on Debian 10), you can use ss instead of lsof:
sock=/path/to/socket
ino=$(stat -c 'ino:%i' "$sock") && ss -elx | grep -w "$ino"
sock=/path/to/socket
if ino=$(stat -c 'ino:%i' "$sock") && ss -elx | grep -qw "$ino"
then
# yeah, somebody's listening on $sock
fi
There are two important things to notice here:
The real address of a Unix socket is the device,inode number tuple, not the pathname. If a socket file is moved, whichever server was listening on it will be accessible via the new path. If a socket file is removed, another server can listen on the same path (that's why the directory permissions of a Unix socket are important, security-wise). lsof isn't able to cope with that, and may return incomplete / incorrect data.
ss is itself buggy: the unix_diag netlink interface ss uses returns the device number in the format used internally by the Linux kernel, but ss assumes it's in the format used by system call interfaces like stat(2), so the dev: entry in the ss -elx output above will be mangled. However, de-mangling it may be unwise, because one day they may just decide to fix it. So, the only course of action is to treat dev: as pure garbage, and live with the risk of having two socket files with the same inode, but on different filesystems, which the test above is not able to handle.
If all of the above doesn't matter for you, you can do the same lousy thing lsof does (matching on the path the socket was first bound to), with:
sock=/path/to/socket
ss -elx | grep " $sock "
which should also work on older systems like CentOS 7. At least this has the advantage of only listing the listening sockets ;-)
| How do I get lsof to stop complaining when testing for a socket? |
1,676,672,757,000 |
My ps command works except on a particular version on Linux as you can see below.
[root@failinghost ~]# ps -xef | grep -v grep | grep websphere
Warning: bad syntax, perhaps a bogus '-'? See /usr/share/doc/procps-3.2.8/FAQ
[root@failinghost ~]# ps -version
ERROR: Unsupported SysV option.
********* simple selection ********* ********* selection by list *********
-A all processes -C by command name
-N negate selection -G by real group ID (supports names)
-a all w/ tty except session leaders -U by real user ID (supports names)
-d all except session leaders -g by session OR by effective group name
-e all processes -p by process ID
-q by process ID (unsorted & quick)
T all processes on this terminal -s processes in the sessions given
a all w/ tty, including other users -t by tty
g OBSOLETE -- DO NOT USE -u by effective user ID (supports names)
r only running processes U processes for specified users
x processes w/o controlling ttys t by tty
*********** output format ********** *********** long options ***********
-o,o user-defined -f full --Group --User --pid --cols --ppid
-j,j job control s signal --group --user --sid --rows --info
-O,O preloaded -o v virtual memory --cumulative --format --deselect
-l,l long u user-oriented --sort --tty --forest --version
-F extra full X registers --heading --no-heading --context
--quick-pid
********* misc options *********
-V,V show version L list format codes f ASCII art forest
-m,m,-L,-T,H threads S children in sum -y change -l format
-M,Z security data c true command name -c scheduling class
-w,w wide output n numeric WCHAN,UID -H process hierarchy
[root@failinghost ~]# ps -V
procps version 3.2.8
[root@failinghost ~]# uname -a
Linux failinghost 2.6.32-754.28.1.el6.x86_64 #1 SMP Fri Jan 31 06:05:42 EST 2020 x86_64 x86_64 x86_64 GNU/Linux
Below is another Linux host where I thankfully don't get the error:
[root@workinghost ~]$ ps -xef | grep -v grep | grep websphere
[root@workinghost ~]$ echo $?
1
[root@workinghost ~]$ uname -a
Linux workinghost 3.10.0-1062.1.2.el7.x86_64 #1 SMP Mon Sep 16 14:19:51 EDT 2019 x86_64 x86_64 x86_64 GNU/Linux
[root@workinghost ~]$ ps -V
procps-ng version 3.3.10
I want my ps command to work on all non-Solaris systems.
Can you please suggest a solution that works on both Linux versions?
|
You might not need x, -e is sufficient to select all processes:
ps -ef
This should work on any version of ps on Linux you’re liable to come across.
Current versions of ps from procps-ng interpret the x option with or without a dash without warning; the older version of ps from procps in CentOS 6 adds a warning (but it still lists all the processes, so your grep should find the processes it's looking for, if they are present). The behaviour is different though: in both versions, ps -xef outputs the command and its environment; if you want to keep this, you can discard the warning:
ps -xef 2>/dev/null
This works with older and newer versions of ps from procps and procps-ng.
| ps does not support the -x flag on a particular version of linux |
1,676,672,757,000 |
I see this command advised, whenever a person asks online about renaming all uppercase files to lowercase:
find "$(pwd)" -depth -exec rename 's/(.*)\/([^\/]*)/$1\/\L$2/' {} \;
I understand the find "$(pwd)" -depth -exec rename part.
Can someone break down and explain the regex command of rename, namely: 's/(.*)\/([^\/]*)/$1\/\L$2/'
Why \/([^\/]*) and not only (.*)?
I know what $1 is in the context of bash, but what does $1, \L$2 mean in rename?
I would also appreciate how is this different from a simple
find "$(pwd)" -depth -exec rename 'y/A-Z/a-z/' {} \;
Finally, what book or resources would you recommend to learn this kind of stuff? I read the rename man-page; however, I did not find explanations there about this type of usage.
|
The first s/, the last / and the unescaped / in the middle are the substitute operator and the separators, so we have the pattern (.*)\/([^\/]*) and replacement $1\/\L$2.
In (.*)\/([^\/]*) the first (.*)\/ matches everything up to the last slash, that is, the path before the final filename. The last ([^\/]*) then matches anything but slashes up to end of string.
In the replacement, $1 puts back what the first capture group in parenthesis matched, that was the path. Then \L lowercases the following part, the second captured group $2, or the filename.
The end result here is that the lowercasing only applies to the final filename part, so e.g. dir/OTHERDIR/FOO.txt turns to dir/OTHERDIR/foo.txt, not dir/otherdir/foo.txt. Renaming directly to the latter wouldn't work, since dir/otherdir probably doesn't exist.
However... I think you could just run:
find . -depth -execdir rename 'y/A-Z/a-z/' {} +
"$(pwd)" (or more simply, "$PWD") only works to make find produce absolute paths, instead of relative paths, but there's no need for that. -execdir runs rename separately in each directory, instead of all in the main level, getting rid of the problem of dealing with the full paths. And {} + instead of {} \; lets find give more than one file to each invocation of rename.
Though note that all that probably only works for the 26 ASCII letters, not for the rest of the characters found in actual languages (e.g. äöåé).
| What does \L$2 mean in perl rename tool? |
1,676,672,757,000 |
I use ssh -p 54321 [email protected] to log in to a gate server and then ssh johnaddress to log in to the actual GPU server. Now how do I combine these 2 commands into one using -J?
ssh -J -p 54321 [email protected] john@johnaddress doesn't work
ssh -p 54321 [email protected] -J john@johnaddress doesn't either
|
man ssh_config shows the syntax for the ProxyJump configuration parameter (which -J is a shortcut to):
ProxyJump
Specifies one or more jump proxies as either [user@]host[:port] or an ssh URI.
In your case it becomes:
ssh -J [email protected]:54321 john@johnaddress
or, using the configuration file's option format:
ssh -o ProxyJump=[email protected]:54321 john@johnaddress
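For repeated use, the same jump can live in ~/.ssh/config. Here "gpu" is a hypothetical alias, and gateuser/gateaddress stand in for your actual gate login (redacted in the question):

```
Host gpu
    HostName johnaddress
    User john
    ProxyJump gateuser@gateaddress:54321
```

With that in place, ssh gpu performs the whole hop.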
| ssh jump through a port |
1,676,672,757,000 |
The following commands work
% DEBUGCLI=YES exec zsh
% set -xv
% echo debug
echo debug
+zsh:2> echo debug
debug
However, when I try something like
% exec zsh -c 'set -xv'
It just closes the terminal.
I do not want to use exec zsh -xv. The reason is here.
I am looking for one command instead of two so that I do not have to separately run set -xv after exec zsh.
Yes I have tried
exec zsh ; set -xv & exec zsh && set -xv. Both of these commands just ignore set -xv.
|
zsh -c 'set -xv' tells zsh to run an inline script that contains set -xv. After running set -xv, the script is finished and zsh exits (with the exit code of set). Just like zsh -c uname runs uname and finishes.
exec zsh ; set -xv tells the current shell to execute zsh in the current process, so whatever commands come after will never be executed (and in any case, that could only happen after exec returned), as the shell won't be around anymore to execute them.
If you wanted to run a zsh interactive shell with the -v and -x options on, that would be:
zsh -xv
Or
zsh -o verbose -o xtrace
If you want to run set -xv when zsh is run interactively with DEBUGCLI=YES in your environment, as the answer you linked suggests, then you can just add:
if [[ $DEBUGCLI = YES ]]; then
set -o xtrace -o verbose
fi
To your ~/.zshrc (at the very end if you don't want the rest of your ~/.zshrc to be traced and logged).
In any case, you can run set -xv and set +xv (or set -o xtrace -o verbose and set +o xtrace +o verbose) to turn that debugging on or off without having to restart a shell each time.
And if you wanted xtrace to only be in effect during the execution of the code you enter (and not for the execution of commands in hooks or zle widgets), you could set it at the end of the preexec() hook and unset it at the beginning of the precmd() hook (with set +x 2> /dev/null).
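A sketch of that last suggestion (preexec and precmd are zsh's standard hook function names; this assumes nothing else in your setup defines them):

```shell
# Trace only the command lines you type: turn xtrace/verbose on just
# before each entered command runs, and switch it back off before the
# prompt is redrawn.
preexec() { set -o xtrace -o verbose; }
precmd()  { { set +o xtrace +o verbose; } 2> /dev/null; }
```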
| execute `set -xv` after invoking a new shell |
1,676,672,757,000 |
I am using Ubuntu 16.04, but I believe my question applies to many distros, such as Debian, CentOS, and Red Hat.
The manpage for netstat -l is:
Show only listening sockets. (These are omitted by default.)
and netstat -a is:
Show both listening and non-listening sockets. With the --
interfaces option, show interfaces that are not up
Does the output of netstat -a include the output of netstat -l? It seems so in the manpage, but many websites talk about netstat -plantu so I am wondering if netstat -l covers something that netstat -a does not.
|
Regarding the 2nd part of your question, netstat -plantu will show you only TCP and UDP info, that is, established network connections and listening ports. netstat -a will also show you Unix sockets. That's a lot of info; it's better to target what you need in the output.
If you run a recent distro, you can use ss instead of netstat. It's a modern alternative that takes mostly the same parameters.
I usually type ss -tulp (same as netstat -tulp) to check all listening ports on my servers/PCs plus the processes which opened the ports; any incoming traffic will be addressed to these ports. To check the current connections and processes, use ss -tuap. For -p you need root/sudo permissions in order to view all users' processes.
| Does netstat -l include anything that netstat -a does not have? |
1,592,644,129,000 |
I have a program that must run forever until it receives a kill -HUP. But sometimes this program can exit with an error code, or it can be killed by the OS for overusing memory.
Is there any standard Linux command which can monitor a command and restart it on exit?
Something like this: forever my-command -with some parameters
It can be done via bash script:
#!/bin/bash
while true
do
my-command -with some parameters
done
, but I'd rather use something standard than write my own scripts.
|
This is a systemd unit file that restarts the command until it is killed by SIGHUP. If the program should be allowed to exit with a success exit code, use Restart=on-failure instead.
[Unit]
Description=A program
[Service]
Type=simple
ExecStart=/path/to/my-command -with some parameters
Restart=always
# Restart=on-failure
RestartPreventExitStatus=SIGHUP
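If you also want the unit to start at boot via systemctl enable, it needs an [Install] section as well; multi-user.target is the usual choice for a service like this (adjust to taste):

```
[Install]
WantedBy=multi-user.target
```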
| Is there a linux command which can work as forever: restart my programm if it exits with error code |
1,592,644,129,000 |
I've just noticed that sending a foreground process to background with Ctrl-Z sets the $? variable to a non-zero value. Why is this the case?
This behavior is bothering me, because I wanted to have a terminal prompt which changes color when a command errors, and I did this following this answer. However, this also means that if I'm working in Vim, every time I send it to background with Ctrl-Z the prompt changes color as if something went wrong.
|
(assuming bash)
Ctrl-Z does not send a process to background (as bg %JOB_NUMBER would), it suspends it. In order to do that a SIGTSTP signal is sent to the process (you can do it yourself with kill -TSTP PID). SIGTSTP is signal 20.
The return value you see is 148, or 128 + SIGTSTP.
So, you should change the code in that answer to check for that condition; it is always going to be 148.
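You can verify the 128+N convention from bash itself, since its kill -l builtin translates in both directions:

```shell
kill -l TSTP   # signal number of SIGTSTP: prints 20
kill -l 148    # signal name for exit status 148: prints TSTP
```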
| Why does sending a process to background set $? to a non-zero value? [duplicate] |
1,592,644,129,000 |
when I run something like -
# ss -atur
I get a lot of output about open ports, such as -
udp UNCONN 0 0 0.0.0.0:631 0.0.0.0:*
udp UNCONN 0 0 0.0.0.0:mdns 0.0.0.0:*
udp UNCONN 0 0 0.0.0.0:38350 0.0.0.0:*
Now, while I know that 631 is used by CUPS, I have no idea what these other ports are being used for. Is there a way to get the ports mapped to the application which uses them, a kind of reverse-mapping. I am more concerned about local ports being open without a legitimate use. Please let me know if any more info. is needed. I am on Debian testing.
I even tried lsof but with no luck -
How to check port opened on running a service?
I am using lsof 4.93.2 in case if that makes any difference.
For e.g. I tried -
root@debian:~# lsof -Pi | grep qbittorrent
root@debian:~#
Now I know that qbittorrent would need to open up at least a few local ports in order to accept and send data, but I don't see anything :(
|
The ss command provides an option to map ports to their corresponding processes:
-p, --processes
Show process using socket.
The output will then contain an additional column, which maps each listed port to a specific process ID.
| is there a way to map a port to an application or service? |
1,592,644,129,000 |
I want to write a one line command line that
Pings google.com non stop
When the ping times out (lost connection), echo an error message on the screen
|
Using curl or ping, though I'm not sure why you want to do this.
while curl -Lsf google.com >/dev/null || { printf 'Lost connection!' >&2; break; }; do :; done
while ping -c 1 google.com >/dev/null || { printf 'Lost connection!' >&2; break; }; do :; done
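If you want something reusable that doesn't probe in a tight loop, here is a sketch as a shell function (watch_link is a made-up name; the 2-second per-probe timeout and 5-second interval are arbitrary choices):

```shell
# Probe the host until one ping fails, then report and return.
watch_link() {
  while ping -c 1 -W 2 "$1" >/dev/null 2>&1; do
    sleep 5
  done
  printf 'Lost connection!\n' >&2
}
# usage: watch_link google.com
```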
| How to write a one line command line that checks for internet downtime? |
1,592,644,129,000 |
What I want to do is connect my PC to my terminal, a DEC VT320, output the Linux console to it, and be able to type commands on the terminal and get the replies on its screen.
I have connected to things via telnet before, but I don't know how to do it through serial.
My serial connection is /dev/ttyS0.
|
It looks like Mint 19.3 uses systemd, so unless Mint has modified the systemd configuration from what the parent distributions (Ubuntu and ultimately Debian) have, the following commands should do the job.
To start up a serial port for terminal-style login access immediately:
sudo systemctl start serial-getty@ttyS0
To make the configuration persist over reboots:
sudo systemctl enable serial-getty@ttyS0
After running the first of these commands, a login prompt should appear on the terminal. If it doesn't, press Enter on the terminal once or twice: it can help in detecting the data transfer speed the terminal is operating at.
(The serial port speed is also sometimes known as baud rate, although that term would properly apply only to modem connections and similar where digital-to-analog modulation is involved, not to plain digital data transfer.)
This default systemd configuration for serial-attached terminals includes serial port speed auto-detection for speeds 115200, 38400 and 9600 bits per second. You can confirm this with command systemctl cat serial-getty@ttyS0. It will output the auto-generated unit file for that serial port. Among other things, it should contain this line that starts the actual process that will be managing the terminal:
ExecStart=-/sbin/agetty -o '-p -- \\u' --keep-baud 115200,38400,9600 %I $TERM
If the automatic serial port speed detection does not work well for you, or if you want to specify a speed value that is not included in the default list, you would want to create an override file for this systemd service:
sudo systemctl edit serial-getty@ttyS0.service
This command will create the file (if necessary) and open it in an editor for you.
For example, to lock the serial port speed detection to 57600 bps, you would write the following three lines to the override file:
[Service]
ExecStart=
ExecStart=-/sbin/agetty -o '-p -- \\u' 57600 %I $TERM
The first line specifies that we want to override things in the [Service] section of the autogenerated service file, the second specifies that we want to override its ExecStart line and not just add another one, and the third line is the new ExecStart line with the desired port speed and/or other options for the agetty process that manages the terminal.
The traditional name for such a process in the Unix world is getty, and Linux typically uses an enhanced (alternative/autobauding) version of it for serial ports: agetty.
| send linux console through serial |
1,592,644,129,000 |
I have this CSV file:
"mikecook1966","6days","","Classy1","7/2020"
"kyndrion","1min","","Doominator handle","7/2020"
"Ataca","Feb2,2020","","Soporte 30.5x30.5 VTX-DVR Speedy Bee","7/2020"
I would like the output to contain only the rows where column 2 contains 2020.
Example output:
"Ataca","Feb2,2020","","Soporte 30.5x30.5 VTX-DVR Speedy Bee","7/2020"
|
If your CSV doesn't include escaped double quotes, you can use grep:
grep '^"[^"]*","[^"]*2020' file.csv
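A quick sanity check of that pattern against the sample rows (it still matches the third row even though its second field contains an embedded comma, since [^"]* only stops at quotes, not commas):

```shell
cat > file.csv <<'EOF'
"mikecook1966","6days","","Classy1","7/2020"
"kyndrion","1min","","Doominator handle","7/2020"
"Ataca","Feb2,2020","","Soporte 30.5x30.5 VTX-DVR Speedy Bee","7/2020"
EOF
grep '^"[^"]*","[^"]*2020' file.csv
# prints only the "Ataca" row
```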
For more complex CSV, a CSV-aware tool is needed.
perl -MText::CSV_XS=csv -e 'csv( in => "file.csv",
filter => { 2 => sub{ /2020/ } } )'
| Delete line from CSV file where column is not specific string |
1,592,644,129,000 |
I am running a python program via cron that runs every 1 minute. Occasionally, it will eat up a lot of CPU and I need the next cron job to not run if that's the case. I am trying
if (( `~/cpu_usage.txt` < 60 )); then `cd /path/to/program && python myfile.py 100`; fi
myfile.py contains a print bob_here statement, and that causes the above to crash with:
bob_here: command not found
The myfile.py runs perfectly fine on its own, so the issue is with the if statement. How do I get the script to execute properly?
Note: Probably not too important here but cpu_usage.txt is a simple bash program to print out the current cpu usage:
echo $[100-$(vmstat 1 2|tail -1|awk '{print $15}')]
|
Just lose the ticks and provide the full path to the python interpreter:
if (( `~/cpu_usage.txt` < 60 )); then python /path/to/program/myfile.py 100; fi
You don't need the ticks as the shell will execute the command following the then keyword as designed; the ticks will launch a sub-shell and the result/output is then used as command for the if-then, which is not what you want here:
> if true; then echo OK; fi
OK
> if true; then `echo OK`; fi
OK: command not found
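The fix can be reproduced with modern $( ) command substitution and a stubbed-out reading (cpu_usage here is a hypothetical stand-in for ~/cpu_usage.txt):

```shell
cpu_usage() { echo 42; }   # pretend current CPU usage is 42%
if (( $(cpu_usage) < 60 )); then echo "would run myfile.py"; fi
# prints: would run myfile.py
```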
Edit: from my experience, if you want to use it with crontab, it works best if you place all your commands in a shell script and call that from crontab instead.
| execute an if statement via commandline |
1,592,644,129,000 |
It is possible to pass resources to X applications in command-line by appending them with -xrm parameters. So, if I want Xmessage background to be grey, I can issue xmessage Hi -xrm "xmessage*background: grey".
Things get tricky if I want to modify event translations. On my .Xresouces, this
Xmessage*Translations:#override\
<Key>F10:exit(-1) \n\
<Key>q:exit(-1)
succeeds in setting F10 and q keys to exit any Xmessage window, but I'm having trouble doing it with -xrm in command line, certainly because of the newlines and escaping backslashes.
I've tried the three following commands, but without success.
xmessage Hi -xrm "xmessage*Translations:#override <Key>s:exit(4)
<Key>r:exit(3)
<Key>p:exit(2)"
xmessage Hi -xrm "xmessage*Translations:#override\
<Key>s:exit(4)\n\
<Key>r:exit(3)\n\
<Key>p:exit(2)"
xmessage Hi -xrm "xmessage*Translations:#override <Key>s:exit(4)" \
-xrm "xmessage*Translations:#override <Key>r:exit(3)" \
-xrm "xmessage*Translations:#override <Key>p:exit(2)"
The 3rd command only assigns the last key successfully. The others fail, although I expected the 1st to work, since it inserts a newline after exit(4) and exit(3), as confirmed by echoing the command.
What am I missing and how can I correct it?
|
You need to put it in single quotes:
xmessage Hi -xrm 'xmessage*Translations:#override\
<Key>F10:exit(-1) \n\
<Key>q:exit(-1)'
Otherwise, newlines get lost.
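The difference is easy to demonstrate: inside double quotes, a backslash at the end of a line acts as a line continuation, so the very newlines the translation table needs disappear, while single quotes keep both the backslash and the newline:

```shell
printf '%s\n' "a\
b"   # the \<newline> is a continuation: prints ab
printf '%s\n' 'a\
b'   # literal: prints a\ then b on the next line
```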
| Trouble passing Translations resource to -xrm because of newlines |
1,592,644,129,000 |
I'm going through the Linux From Scratch project and I want to verify each of the programs and libs are created properly before I move on from each step.
(I searched here and google but pretty much everything I can find is regarding questions about echoing a var declared in the same line or dumping the output of ls to a variable, neither of which apply to my case)
I'll be typing commands similar to the following quite a bit with different patterns:
ls -ld /tools/lib/mypattern* /tools/bin/mypattern*
For example:
ls -ld /tools/lib/tcl* /tools/bin/tcl*
Since there are repeating uses of the same pattern I'd like to streamline this a bit into something like this:
glob=mypattern* ls -ld /tools/lib/$glob /tools/bin/$glob
But if I run:
glob=tcl* ls -ld /tools/lib/$glob /tools/bin/$glob
then all I get is this:
drwxr-xr-x 2 lfs lfs 4096 Dec 2 03:02 /tools/bin/
drwxr-xr-x 16 lfs lfs 4096 Dec 2 03:02 /tools/lib/
so clearly the variable isn't being picked up properly.
If I run the ls with the pattern manually typed in both places then I get the correct output:
$ ls -ld /tools/lib/tcl* /tools/bin/tcl*
lrwxrwxrwx 1 lfs lfs 8 Dec 2 02:39 /tools/bin/tclsh -> tclsh8.6
-rwxr-xr-x 1 lfs lfs 20512 Dec 2 02:38 /tools/bin/tclsh8.6
drwxr-xr-x 5 lfs lfs 4096 Dec 2 02:38 /tools/lib/tcl8
drwxr-xr-x 6 lfs lfs 4096 Dec 2 02:38 /tools/lib/tcl8.6
-rw-r--r-- 1 lfs lfs 7660 Dec 2 02:38 /tools/lib/tclConfig.sh
-rw-r--r-- 1 lfs lfs 773 Dec 2 02:38 /tools/lib/tclooConfig.sh
How can I shorten this command so I only have to type the pattern in once each time I run it?
|
Expansions are performed before variable assignments take effect, and command execution comes after: the $glob in the arguments is expanded (using the shell's own, empty value) before the temporary glob=tcl* assignment happens, and that assignment only ends up in ls's environment anyway. Use brace expansion instead, like
ls -ld /tools/{lib,bin}/tcl*
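Since brace expansion happens before globbing and word splitting, the shell hands ls one word per directory; you can preview what gets generated with echo (tcl_demo is just a placeholder word):

```shell
echo /tools/{lib,bin}/tcl_demo
# prints: /tools/lib/tcl_demo /tools/bin/tcl_demo
```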
| Why does ls not recognize this declared single-line variable? |
1,592,644,129,000 |
I have a python script that takes a path where a bunch of text files are located and processes them somehow. Since there are too many files I want to use batches, using a bash script to pass just some of the files on the path, say 100 at a time. Is there a simple way to do this? For example, my script is currently
python application.py -fp [path to all files]
Can I do a bash script where I do something like
python application.py -fp [file-1:file-100]
and on the next loop
python application.py -fp [file-101:file-200]
and so on?
Edit:
I tried Stéphane's solution with bash and I think it almost works, but I'm still having trouble getting just a subset of the files.
I do this to get the path from the parameters given to the bash script
set -- "$fp*.txt"
echo "${@}"
the result is
../../files_test/pair/*.txt
which is correct since that is the path of the files I need to get. But then I do
files=${@:1:2}
echo $files
just to test if I can get the first file, but it echoes the list of all files in the directory. Am I missing something?
Edit 2:
Nevermind. I realized I was doing
set -- "$fp*.txt"
instead of
set -- $fp*.txt
Now it works.
|
With GNU xargs and a shell with process substitution support (ksh, bash, zsh), you can do:
xargs -r0 -n100 -a <(printf '%s\0' ./*) python application.py -fp
Example:
$ xargs -r0n4 -a <(printf '%s\0' {1..20}) echo
1 2 3 4
5 6 7 8
9 10 11 12
13 14 15 16
17 18 19 20
Without process substitution, you can also do:
printf '%s\0' ./* | xargs -r0 -n100 python application.py -fp
But that means application.py's stdin will be /dev/null which on systems with /dev/fd/xxx you can work around by basically implementing process substitution by hand with:
{
printf '%s\0' ./* |
  xargs -a /dev/fd/3 3<&0 <&4 4<&- -r0 -n100 python application.py -fp
} 4<&0
With zsh:
autoload zargs
zargs -l 100 ./* -- python application.py -fp
Example:
$ zargs -l4 {1..20} -- echo
1 2 3 4
5 6 7 8
9 10 11 12
13 14 15 16
17 18 19 20
You can also always do (ksh93/bash/zsh):
set -- ./*
while (($# > 0)); do
python application.py -fp "${@:1:100}"
shift "$(($# >= 100 ? 100 : $#))"
done
Example:
$ set -- {1..20};while (($#>0));do echo "${@:1:4}";shift "$(($#>4?4:$#))";done
1 2 3 4
5 6 7 8
9 10 11 12
13 14 15 16
17 18 19 20
If your files are actually called file-1, file-2... you'll probably want to use zsh and its n (for numeric sorting) glob qualifier for the list of files to be sorted numerically:
zargs -l 100 ./*(n) -- python application.py -fp
Or use GNU sort -zV (for version sort) on the output of printf '%s\0':
xargs -r0 -n100 -a <(printf '%s\0' ./* | sort -zV) python application.py -fp
| Bash: pass batches of files to python script |
1,592,644,129,000 |
I have a document and I want to remove all the patterns so that I'm left with only some information: the producer/creator of that document.
I managed to replace the patterns with the single word "PATTERN" so that it becomes easy to remove them. How can I remove this word? Here are the commands I used:
$ cat /path | tr '\n' '\f' | \
tr -cd "[A-Za-z0-9 () /\f]" | \
sed 's/stream.*endstream/STREAM/' | sed 's/[0-9][0-9]* /PATTERN/g' | \
sed "s/PATTERN PATTERN n/PTR/g"
I obtain this
/rdfRDF/xxmpmetaxpacket efdwefdstreamefdobjPATTERN PATTERN
obj/DisplayDocTitle trueefdobjPATTERN PATTERN obj/Type/XRef/Size
PATTERN/W[ PATTERN PATTERN PATTERN] /Root PATTERN PATTERN R/Iffo
PATTERN PATTERN
R/ID[PATTERNFEFPATTERNCPATTERNEPATTERNDBPATTERNFPATTERNEPATTERNFPATTERNEEPATTERNFEFPATTERNCPATTERNEPATTERNDBPATTERNFPATTERNEPATTERNFPATTERNEEPATTERN]
/Filter/FlateDecode/Lefgth PATTERNstreamxc
Z)PATTERNBSekgPBB(FUfLqSuefdstreamefdobjxrefPATTERN PATTERN PATTERN
fPATTERN PATTERN fPATTERN PATTERN fPATTERN PATTERN fPATTERN PATTERN
fPATTERN PATTERN fPATTERN PATTERN fPATTERN PATTERN fPATTERN PATTERN
fPATTERN PATTERN fPATTERN PATTERN fPATTERN PATTERN fPATTERN PATTERN
fPATTERN PATTERN fPATTERN PATTERN fPATTERN PATTERN fPATTERN PATTERN
fPATTERN PATTERN fPATTERN PATTERN fPATTERN PATTERN fPATTERN PATTERN
fPATTERN PATTERN fPATTERN PATTERN ftrailer/Size PATTERN/Root PATTERN
PATTERN R/Iffo PATTERN PATTERN
R/ID[PATTERNFEFPATTERNCPATTERNEPATTERNDBPATTERNFPATTERNEPATTERNFPATTERNEEPATTERNFEFPATTERNCPATTERNEPATTERNDBPATTERNFPATTERNEPATTERNFPATTERNEEPATTERN]
startxrefPATTERNEOFxrefPATTERN PATTERNtrailer/Size PATTERN/Root
PATTERN PATTERN R/Iffo PATTERN PATTERN
R/ID[PATTERNFEFPATTERNCPATTERNEPATTERNDBPATTERNFPATTERNEPATTERNFPATTERNEEPATTERNFEFPATTERNCPATTERNEPATTERNDBPATTERNFPATTERNEPATTERNFPATTERNEEPATTERN]
/Prev PATTERN/XRefStm PATTERNstartxrefPATTERNEOF
How to remove the word PATTERN?
|
Try the following:
sed -i -e 's/PATTERN//g' filename
From man page:
-i[SUFFIX], --in-place[=SUFFIX]
edit files in place (makes backup if SUFFIX supplied)
-e script, --expression=script
add the script to the commands to be executed
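For instance, on a throwaway file (this uses GNU sed; on BSD/macOS sed, -i needs a suffix argument such as -i ''):

```shell
printf '%s\n' 'fooPATTERN bar PATTERNbaz' > demo.txt
sed -i -e 's/PATTERN//g' demo.txt
cat demo.txt   # foo bar baz
```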
| How can I remove patterns using sed command in a file? |
1,592,644,129,000 |
As a newbie to the Linux kernel and all the commands, I am reaching out to you guys, hoping you can help me solve my issue.
When running the following command
sudo dmidecode -t 5
I get the following output:
# dmidecode 3.0
Getting SMBIOS data from sysfs.
SMBIOS 2.4 present.
Handle 0x0084, DMI type 5, 46 bytes
Memory Controller Information
Error Detecting Method: None
Error Correcting Capabilities:
None
Supported Interleave: One-way Interleave
Current Interleave: One-way Interleave
Maximum Memory Module Size: 32768 MB
Maximum Total Memory Size: 491520 MB
Supported Speeds:
70 ns
60 ns
Supported Memory Types:
FPM
EDO
DIMM
SDRAM
Memory Module Voltage: 3.3 V
Associated Memory Slots: 15
0x0085
0x0086
0x0087
0x0088
0x0089
0x008A
0x008B
0x008C
0x008D
0x008E
0x008F
0x0090
0x0091
0x0092
0x0093
Enabled Error Correcting Capabilities:
None
Is there any command to filter the output so I get the supported speeds (70ns, 60ns) in any way?
I tried
sudo dmidecode -t 5 | grep -i -e DMI -e speed
which gave me this output:
# dmidecode 3.0
Handle 0x0084, DMI type 5, 46 bytes
Supported Speeds:
but this doesn't output the following lines.
Any suggestions are very welcome, thanks!
|
This will list the supported speeds:
dmidecode | awk '/^\t[^\t]/ { speeds = 0 }; /^\tSupported Speeds:/ { speeds = 1 } /^\t\t/ && speeds'
This works by matching lines as follows:
lines starting with a single tab mean that we’re not expecting speeds;
lines starting with a single tab followed by “Supported Speeds:” mean that we are expecting speeds;
lines starting with two tabs when we are expecting speeds are output as-is.
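Here is the filter replayed on a captured snippet (the real dmidecode output is tab-indented, even if it pastes as spaces):

```shell
printf '\tSupported Speeds:\n\t\t70 ns\n\t\t60 ns\n\tSupported Memory Types:\n\t\tFPM\n' |
  awk '/^\t[^\t]/ { speeds = 0 }; /^\tSupported Speeds:/ { speeds = 1 } /^\t\t/ && speeds'
# prints the two double-tab-indented speed lines: 70 ns and 60 ns
```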
| Filter dmidecode memory controller for supported speeds |
1,592,644,129,000 |
I have a file.txt for example (it does not have the same number of columns for each row):
1 2 3 4
5 5 6
7 7 7 7 9 10
I have another file (file2.txt) that contains 2 columns
a b
c d
e f
I use this command:
awk '{print $1,$(cut -f2 file2.txt)}' file.txt > final.txt
I want to take the second column of file2.txt and add it between columns 1 and 2 of file1.txt.
Ex. of results:
1 d 2 3 4
5 d 5 6
7 f 7 7 7 9 10
I also want to keep all the remaining columns of file1.txt.
|
Pure awk:
awk '
FNR==NR{c[NR]=$2}
FNR!=NR{$1 = $1 OFS c[FNR]; print}
' file2 file
Output:
1 b 2 3 4
5 d 5 6
7 f 7 7 7 9 10
| Putting a command inside awk to merge columns of different files |
1,592,644,129,000 |
I am asking if there is a way to automatically put the output of a command into the middle of another.
multimon-ng outputs: ZCZC-WXR-TOR-029037+0030-1051700-KEAX/NWS and I want that output to get sent to where I put the three question marks: python2.7 easencode.py -z ??? output.wav
can i do this with pipe and if so, how?
|
try
python2.7 easencode.py -z $(multimon-ng) output.wav
if you are in bash.
The $( ) construct will execute the command and insert its result (stripped of the trailing newline) into the current command.
As per the comment, you might wish to use "$( )" depending on the expected result and the importance of whitespace.
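A small demonstration with a stand-in for multimon-ng (fake_decoder is made up and just echoes the sample string from the question):

```shell
fake_decoder() { echo 'ZCZC-WXR-TOR-029037+0030-1051700-KEAX/NWS'; }
echo before "$(fake_decoder)" after
# prints: before ZCZC-WXR-TOR-029037+0030-1051700-KEAX/NWS after
```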
| Is it possible to put the output of one command into the middle of another [duplicate] |
1,592,644,129,000 |
I'm in the terminal emulator. I want to open another terminal to run a specified command.
gnome-terminal -e "zsh -c 'pwd; zsh;'"
That runs pwd and zsh consecutively and successfully in another terminal. After pwd terminates, zsh launches and I can run other commands in that second terminal instance.
However, when a different app is launched instead of pwd, I can't end that long-running program properly.
For example, when I type Ctrl-C to exit from the node.js server in the command below, it also closes the terminal. It seems that SIGINT is passed to the first zsh with the -c option.
gnome-terminal -e "zsh -c 'node server.js; zsh;'"
How can I fix this?
|
I tried this in bash
Apologies in advance if it does not work.
I wrote a bash script named userInput.sh - this waits for user input and ends.
## trap ctrl-c and call ctrl_c()
trap ctrl_c INT
function ctrl_c() {
echo "** Trapped CTRL-C"
exit
}
read -p "Press any key to continue... " -n1 -s
Now, when I run the following and press ctrl+c
gnome-terminal -e "bash -c './userInput.sh; bash;'"
userInput.sh exits and I still have the bash prompt.
Now I think in your case instead of waiting for user input, you are running node server.js like
## trap ctrl-c and call ctrl_c()
trap ctrl_c INT
function ctrl_c() {
echo "** Trapped CTRL-C"
exit
}
node server.js
Let's assume that you named the script 'runNodeJs.sh'; then the command would be
gnome-terminal -e "bash -c './runNodeJs.sh; bash;'"
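The trap mechanism can be checked in isolation by sending a shell its own SIGINT: the handler runs and execution continues, instead of the shell dying.

```shell
bash -c 'trap "echo trapped" INT; kill -INT $$; echo after'
# prints: trapped
#         after
```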
| How can I open terminal and run command automatically? |
1,592,644,129,000 |
Concept: a terminal pager, like less for example, that interactively "folds" and "unfolds" the input file (as in emacs outline mode). Folded, a recursive directory listing might show only the directory names. Unfolded, it would show the full contents.
As another example
git log | pager
might allow the user to toggle interactively between seeing the headlines and details of each commit.
pager xxx.c
might fold/unfold functions, allowing the user to switch between seeing function definitions only and function bodies.
Obviously, the pager would need to be told (or to deduce for itself) the type of the content it was dealing with.
Does such a program exist?
|
Put this in an executable file named "pager":
#! /usr/bin/env bash
TEMP=/tmp/file-$$.txt
trap 'rm -f "$TEMP"' EXIT HUP INT TERM
echo '-*- outline -*-' > "$TEMP"
cat "$@" >> "$TEMP"
emacs "$TEMP" 0<&1
The initial line of the temporary text file
arranges for emacs to enter outline mode.
Then cat appends the zero-or-more specified file(s).
Finally the editor allows viewing the input text via your favorite mode, and the trap on EXIT cleans up the temp file.
Zero files implies reading from stdin.
Ordinarily git log | pager would not be well supported,
as the pipe can interfere with stdin connected to keyboard.
(Diagnostic in that case is: "emacs: standard input is not a tty".)
We expect that stdout will be connected to terminal,
that is, pager is at the end of the pipeline.
Given that, 0<&1 is able to recover from the situation
by connecting stdin to the same terminal pty
that stdout is connected to,
allowing for a successful edit session.
| Folding terminal pager |
1,592,644,129,000 |
I am creating a file via this command:
awk '{print $2 " "$7" "$8}' REACTOME_EXTENSION_OF_TELOMERES.xls | awk '$8!="No" {print $1 " " $2}' | awk 'NR>1' | awk 'BEGIN { OFS=", "; print "Name" " " "0" };{ print $0 " " "" }'
The output is this:
Name 0
WRAP53 0.08495288
NHP2 0.17606254
POLA1 0.25320756
POLD3 0.32372433
PRIM1 0.38140765
RFC5 0.44302294
POLD1 0.497649
...
I need a command which would subtract adjacent values in the 2nd column and give me this result:
WRAP53 0.0849529
NHP2 0.0911097
POLA1 0.077145
POLD3 0.0705168
PRIM1 0.0576833
RFC5 0.0616153
POLD1 0.0546261
...
I know how to do it when I just keep the 2nd column, it would be like this:
awk '{print $2 " "$7" "$8}' REACTOME_EXTENSION_OF_TELOMERES.xls | awk '$8!="No" {print $1 " " $2}' | awk 'NR>1' | awk 'BEGIN { OFS=", "; print "Name" " " "0" };{ print $0 " " "" }' | awk '{print $NF}' | awk 'NR-1{print $0-p}{p=$0}'
But how to do it so that I preserve the 1st column as shown above?
REACTOME_EXTENSION_OF_TELOMERES.xls file looks like this:
NAME PROBE GENE SYMBOL GENE_TITLE RANK IN GENE LIST RANK METRIC SCORE RUNNING ES CORE ENRICHMENT
row_0 WRAP53 null null 163 1.5818238258361816 0.08495288 Yes
row_1 NHP2 null null 201 1.5055444240570068 0.17606254 Yes
row_2 POLA1 null null 283 1.3435969352722168 0.25320756 Yes
row_3 POLD3 null null 367 1.240567684173584 0.32372433 Yes
row_4 PRIM1 null null 501 1.1049883365631104 0.38140765 Yes
row_5 RFC5 null null 557 1.0596935749053955 0.44302294 Yes
row_6 POLD1 null null 653 1.0035457611083984 0.497649 Yes
It would be great if I could have the output of the whole command into: REACTOME_EXTENSION_OF_TELOMERES.y
|
Your entire awk pipeline can be replaced by
awk 'NR > 1 && $8 != "No" {print $2, $7 - prev} {prev = $7}' REACTOME_EXTENSION_OF_TELOMERES.xls
which outputs
WRAP53 0.0849529
NHP2 0.0911097
POLA1 0.077145
POLD3 0.0705168
PRIM1 0.0576833
RFC5 0.0616153
POLD1 0.0546261

To write the result into the file you mentioned, append > REACTOME_EXTENSION_OF_TELOMERES.y to the command.
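The adjacent-difference idiom in isolation, on toy data with a header line (for the first data row, prev holds the header's second field, which evaluates to 0 arithmetically):

```shell
printf 'Name 0\na 1\nb 3\nc 6\n' |
  awk 'NR > 1 {print $1, $2 - prev} {prev = $2}'
# prints: a 1
#         b 2
#         c 3
```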
| How to subtract every adjacent line from the 2nd column and keep the first one |