date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,591,436,750,000 |
I'd like to list, let's say, all of the following interfaces:
p4p1
p4p2
p4p3
p4p4
Instead of doing something like this:
for i in {1..4}; do ip a s p4p${i}; done
can I just do something similar to:
ip a s p4p*
and achieve the same effect?
|
According to ip addr help:
[...]
ip addr {show|save|flush} [ dev STRING ] [ scope SCOPE-ID ]
[ to PREFIX ] [ FLAG-LIST ] [ label PATTERN ] [up]
[...]
You can use:
ip addr show label p4p\*
| Does ip command support wildcards? |
1,591,436,750,000 |
For example, I may have the following on my clipboard:
/Users/matt/widgets/file.txt
And I want to change directory to:
/Users/matt/widgets
cd doesn't work:
$ cd /Users/matt/widgets/file.txt
bash: cd: /Users/matt/widgets/file.txt: Not a directory
What simple (i.e. easy to type) change can I make to make this easy?
|
If you're OK with front-loading the effort in order to make subsequent runs easier, you could create a function (name it whatever makes sense to you):
function cdfile { cd -- "$(dirname -- "$1")"; }
Save such a definition to your ~/.bashrc file.
Then, each time you have a file path that you want to cd to, you can run
cdfile <paste path>
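A slightly more defensive variant (a sketch; the extra directory test is my addition) quotes "$1" so paths containing spaces survive, and cds into the argument itself when it is already a directory:

```shell
# cd to the argument if it is a directory, otherwise to its parent
cdfile() {
  if [ -d "$1" ]; then
    cd -- "$1"
  else
    cd -- "$(dirname -- "$1")"
  fi
}
```

For example, `cdfile /Users/matt/widgets/file.txt` leaves you in /Users/matt/widgets.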
| What's the easiest way to cd to the deepest directory on an absolute path to a file? |
1,591,436,750,000 |
How can I write grep pattern in multiple lines, inside bash script? Like that:
grep -o -P '
(?!<.*?(?<!(href))=")
https?:\/\/(?!(www\.example\.com)).*?
(?=(">))
' input.txt
When I wrote PHP programs, I could do this. Now I have tried it in a bash script and it doesn't work. It is very unhandy to write it all on one line :(
Maybe there is an option that allows formatting inside the pattern, i.e. one that makes grep ignore whitespace (spaces and newlines) when it is enabled.
grep version:
grep -V
grep (GNU grep) 2.25
|
grep treats newlines as separators between different patterns. But you could save the pattern in a variable, and then remove the whitespace before passing it to grep:
$ ws=$' \t\n'
$ pat=$'a b\nc'
$ echo abcd | grep "${pat//[$ws]}"
abcd
(Didn't test with anything more complex.)
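The same whitespace-stripping idea works with tr instead of bash's pattern substitution (the pattern and input here are invented for illustration):

```shell
pat='
a b
c
'
# remove every space, tab and newline from the pattern before grep sees it
clean=$(printf '%s' "$pat" | tr -d ' \t\n')
printf 'abcd\n' | grep "$clean"
```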
| grep - the multiline pattern writing |
1,591,436,750,000 |
I learned about the command line command test and read a few examples.
One of them was obscure to me:
test 100 -gt 99 &amp;&amp; echo "Yes, that's true." || echo "No, that's false."
I understand it to this point:
test 100 -gt 99
evaluates whether 100 is greater than 99
echo "Yes, that's true." || echo "No, that's false."
If the expression turns out to be true do the command on the left of the double pipe symbol, otherwise do the right one.
I could not find any hint on the &amp; expression in the manual.
What is its purpose?
|
&amp; doesn't mean anything to test; it's the HTML entity for the ampersand (&), which has a special meaning in HTML and so cannot be written as-is. Wherever that snippet came from, its presentation is broken.
Decoding that character, the line should be
test 100 -gt 99 && echo "Yes..." || echo "No..."
&& and || still don't have anything to do with test itself; they are conditional constructs of the shell. cmd1 && cmd2 first runs cmd1 and then, if it exits successfully (with status zero, which the shell treats as true), runs cmd2.
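The short-circuit behaviour is easy to try out; note that cmd1 && cmd2 || cmd3 is not a strict if/else, since cmd3 would also run if cmd2 itself failed:

```shell
test 100 -gt 99 && echo "Yes, that's true." || echo "No, that's false."   # prints the "Yes" line
test 99 -gt 100 && echo "Yes, that's true." || echo "No, that's false."   # prints the "No" line
```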
| What is the meaning of &amp; with regard to the test command? |
1,591,436,750,000 |
A very odd situation. If I enter into interactive commands - in this case tasksh and nslookup I cannot type whitespace. For example:
tasksh> add "my new task"
becomes
tasksh> add"mynewtask"
and does not work, naturally. I thought maybe this is something to do with tasksh but then I checked with nslookup in interactive mode - and it is the same thing. This happens on linux 4.6.7-1-MANJARO but does not on OpenSUSE Leap 42.1.
How should I troubleshoot this very irksome thing?
I tried to run different terminals (gnome-terminal, xfce4-terminal, xterm) and it gives the same result.
|
The most likely explanation is that you've accidentally bound Space to a command that has no visible effect in one of your configuration files.
Check your shell initialization files for stty commands. That will directly affect at least programs that rely on the terminal's primitive line editor, and may indirectly affect programs that come with a decent line editor as they try to remain compatible with stty settings.
If the problem only occurs in certain programs, the problematic configuration may be that of the readline library. This library is used by bash and by a number of other programs. If the key works in bash but not in other programs that use readline, it may be because bash overrides it. The configuration file for readline is .inputrc.
If you only have the problem in bash, check your .bashrc (which is where any terminal-related configuration should go) and other bash configuration files (in case the configuration is in the wrong place).
In a comment, you mention that ~/.inputrc contains
Space: magic-space
magic-space is a bash command. Other programs don't understand that. Either make this setting conditional to bash:
$if Bash
Space: magic-space
$endif
or remove this setting from .inputrc and define it in .bashrc instead:
bind 'Space: magic-space'
| I cannot type space in interactive command mode |
1,591,436,750,000 |
I have a logfile with the following format:
Jul 13 21:47:41 192.168.100.254 "user from 192.168.100.101"
I need to remove ALL lines that contain IP's in the 192.168.x.x range but only if they appear in the 4th column.
I also need to exclude 3 IP's from the 192.168.x.x range. Lets call these
192.168.125.100
192.168.126.100
192.168.155.240
How can I finish this command to find all the IP's in the 192.168.x.x range in the 4th column and remove all the lines except the ones that contain 192.168.125.100, 192.168.126.100, and 192.168.155.240.
awk '{print $4}' file | grep '192.168' | "remove all found except" | > save back to original file
|
Try:
awk '{f=1} $4 ~ /^192\.168/{f=0} $4 ~ /192\.168\.(125\.100|126\.100|155\.240)/{f=1} f' file
Example
Consider this test file:
$ cat file
Jul 13 21:47:41 192.168.100.254 "user from 192.168.100.101"
Jul 13 21:47:41 192.168.125.100 "user from 192.168.100.101"
Jul 13 21:47:41 192.168.126.100 "user from 192.168.100.101"
Jul 13 21:47:41 192.168.155.240 "user from 192.168.100.101"
Jul 13 21:47:41 123.456.789.240 "user from 192.168.100.101"
As I understand your rules, you want to keep all but the first line above.
$ awk '{f=1} $4 ~ /^192\.168/{f=0} $4 ~ /192\.168\.(125\.100|126\.100|155\.240)/{f=1} f' file
Jul 13 21:47:41 192.168.125.100 "user from 192.168.100.101"
Jul 13 21:47:41 192.168.126.100 "user from 192.168.100.101"
Jul 13 21:47:41 192.168.155.240 "user from 192.168.100.101"
Jul 13 21:47:41 123.456.789.240 "user from 192.168.100.101"
Multiline version
For those who prefer their code spread over multiple lines:
awk '
{
f=1
}
$4 ~ /^192\.168/ {
f=0
}
$4 ~ /192\.168\.(125\.100|126\.100|155\.240)/ {
f=1
}
f
' file
How it works
The code uses a single variable f. If a line should be kept, we set f=1. Otherwise, f is set to zero.
f=1
To begin, assume that the line should be kept.
$4 ~ /^192\.168/{f=0}
If $4 starts with 192.168, then mark the line as one that we should lose.
$4 ~ /192\.168\.(125\.100|126\.100|155\.240)/{f=1}
For these three special cases, mark the line as a keeper: f=1.
f
This is awk's cryptic shorthand for: if f is true (nonzero), then print the line.
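The whole flag idiom can be tried on trivial input (sample data invented here):

```shell
# keep every line except those matching /drop/
printf 'keep me\ndrop me\nkeep too\n' |
  awk '{f=1} /drop/{f=0} f'
```

This prints only the two "keep" lines.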
Additional testing
As per the comments, we will try file2:
$ cat file2
Jul 13 21:47:41 192.168.100.125 "user from 192.168.100.101"
Jul 13 21:47:41 192.168.202.150 "user from 192.168.100.101"
Jul 13 21:47:41 192.168.101.45 "user from 192.168.100.101"
Now, let's run our command:
$ awk '{f=1} $4 ~ /^192\.168/{f=0} $4 ~ /192\.168\.(125\.100|126\.100|155\.240)/{f=1} f' file2
$
All these lines were removed as they should be.
| remove lines that contain IP range from specific column while making exclusions to range |
1,591,436,750,000 |
I am having issues with the part of the following script where it takes the inputted image and checks its dimensions. If it's not a certain dimension then it resizes the image.
My issue is what would I put inside the if-block to check that using ImageMagick?
My current code:
#Change profile picture f(x)
#Ex. changeProfilePicture username /path/to/image.png
function changeProfilePicture() {
userName="$1"
filePath="$(readlink -f "$2")"
fileName="${filePath##*/}" #baseName + fileExtension
baseName="${fileName%.*}"
fileExtension="${filePath##*.}"
echo "Checking if imagemagick is installed..."
if ! command brew ls --versions imagemagick >/dev/null 2>&1; then
echo "Installing imagemagick..."
brew install imagemagick -y
echo "imagemagick has been installed."
else
echo "imagemagick has already been installed."
fi
# check the file extension. If it's not png, convert to png.
echo "Checking file-extension..."
if ! [[ $fileExtension = "png" ]]; then
echo "Converting to ''.png'..."
convert $fileName "${baseName}".png
fileName=$baseName.png
echo "File conversion was successful."
else
echo "File-extension is already '.png'"
fi
# check the dimensions, if its not 96x96, resize it to 96x96.
#I don't know what to put inside the following if-block:
if ! [[ ]]; then
echo "Resizing image to '96x96'..."
convert $fileName -resize 96x96 "${fileName}"
echo "Image resizing was successful."
else
echo "Image already has the dimensions of '96x96'."
fi
echo "changing profile picture to " "$filePath"
sudo cp "$filePath" /var/lib/AccountsService/icons/
cd /var/lib/AccountsService/icons/
sudo mv $fileName "${userName}"
cd ~/Desktop
}
|
First, it's unlikely that your existing picture is already 96x96, so in most cases you need to convert anyway; you don't have to identify and compare the dimensions at all.
Second, don't trust the filename extension: a .png suffix doesn't mean the file actually is a PNG image.
Third, testing for a command and then installing it is an unnecessary check and not portable (apt-get, dnf, etc.); if the command were missing you would get a "command not found" error anyway. Moreover, this check may slow down your function.
So, why not simply do:
#Ex. changeProfilePicture username /path/to/image.png
function changeProfilePicture () {
sudo mkdir -p -- '/var/lib/AccountsService/icons/'"$1"
sudo convert "$2" -set filename:f '/var/lib/AccountsService/icons/'"$1/%t" -resize 96x96 '%[filename:f].png'
}
[Note]:
If you want to ignore the aspect ratio so that the output is always exactly 96x96, change -resize 96x96 to -resize 96x96\!.
The reason .png is not included in the filename:f setting above is:
Warning, do not include the file suffix in the filename setting! IM
will not see it, and save the image using the original file format,
rather than the one that was included in filename setting. That is
the filename will have the suffix you specify, but the image format
may be different!
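As for the title question itself: reading the dimensions is what ImageMagick's identify is for. A guarded sketch (the sample file is generated on the fly and the path is illustrative; it assumes ImageMagick is installed):

```shell
if command -v identify >/dev/null 2>&1; then
  convert -size 96x96 xc:white /tmp/sample.png     # hypothetical test image
  dims=$(identify -format '%wx%h' /tmp/sample.png) # prints WIDTHxHEIGHT
  if [ "$dims" != "96x96" ]; then
    echo "would resize to 96x96"
  else
    echo "already 96x96"
  fi
fi
```

The `[ "$dims" != "96x96" ]` test is what could go inside the empty if-block in the question's script.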
| How do I grab image dimensions using ImageMagick cli? |
1,591,436,750,000 |
I want to execute the following command over an ssh connection:
tmpValue=$(cat /var/run/jboss-as/jboss-as-standalone8.pid) && top -b -U jboss -n 1 |grep $tmpValue |awk '{print $9}'
This command is working on my target machine.
Now I want to use this command from a different machine and execute it via ssh, so what I have done is this:
ssh jboss@myTargetServer tmpValue=$(cat /var/run/jboss-as/jboss-as-standalone8.pid) && top -b -U jboss -n 1 |grep $tmpValue |awk '{print $9}'
The result is
cat: /var/run/jboss-as/jboss-as-standalone8.pid: No such file or directory
Usage: grep [OPTION]... PATTERN [FILE]...
Try 'grep --help' for more information.
What's wrong with my call?
|
Unquoted, the $(cat …) substitution and the pipeline after && are expanded and run by your local shell before ssh executes anything on the target. It might be easier to simplify the command, quote it as a whole, and have fewer special characters on which the shell might choke:
ssh jboss@myTargetServer 'ps -p $(cat /var/run/jboss-as/jboss-as-standalone8.pid) -o %cpu= 2>/dev/null'
The trailing 2>/dev/null throws away the error text in the event that the PID file either cannot be found or contains a stale PID.
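To see why quoting the whole command matters, sh -c can stand in for ssh jboss@myTargetServer (purely an illustration): single quotes keep $( ) and && intact until the "remote" shell sees them.

```shell
fake_remote() { sh -c "$1"; }   # stand-in for: ssh user@host (assumption)
# single-quoted: the substitution and && are evaluated by the "remote" shell
fake_remote 'x=$(echo remote) && echo "$x"'
```

Without the single quotes, $(echo remote) would already be expanded by the local shell before fake_remote (or ssh) ever ran.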
| execute command over ssh connection [duplicate] |
1,591,436,750,000 |
I have a small test.sh script as follows:
#!/bin/bash
read -e -p "Enter a Directory: " directory
echo $directory
Here's what happens when I run it:
$ ./test.sh
Enter a Directory: ~/A\ Directory/
/Users/<username>/A Directory/
I want to be able to keep the escaped space inside that variable so that the program output would read:
$ ./test.sh
Enter a Directory: ~/A\ Directory/
/Users/<username>/A\ Directory/
Anyone know how to do this? The use for this is to tell the script where to look for files that it needs to read.
Edit: I forgot to put the \ in the second and third code selection.
|
You could try it like this:
Example 1: re-escape the spaces, giving the output you asked for.
#!/bin/bash
read -e -p "Enter a Directory: " directory
new_directory="$(echo "$directory" | sed 's/ /\\ /g')"
echo "$new_directory"
Example 2: If you are going to cd to the script's output, wrap it in double quotes instead.
#!/bin/bash
read -e -p "Enter a Directory: " directory
new_directory=\"$directory\"
echo $new_directory
| How to pass directory with escaped space to variable? |
1,591,436,750,000 |
I have a big file
>fid|29290408|locus|VBIEntCas2262_0001| Phosphoglycolate phosphatase (EC 3.1.3.18) [Enterococcus casseliflavus EC20]
gtgagaaagaaagtactttttgatttagatggaacgatcattgattcgagtgaaggaatc
tatggatcgattcaatatgcgatggaaaaaatgggaaaagagcaattagcgcaagacgta
ctgcggagctttgtggggccgcctttgattgaatccttccgtggcttgggcttcgatgaa
>fid|29290410|locus|VBIEntCas2262_0002| hypothetical protein [Enterococcus casseliflavus EC20]
atgatcggcgaacgttttttgatcacaccgatcgacgaaccgttagacccatacaatgag
ttagtctcaagcaatcagtttactttctttacatcaacctatgatcaaatgttcttgact
ggtcatctgattctagatgttcacccaacttcaggaactttgattttgaaaaacgaaagc
ggctatttggataccaatcttttattggaatcctctccacagttaaaacaaacgaatgcg
>fid|29290414|locus|VBIEntCas2262_0004| FIG00630550: hypothetical protein [Enterococcus casseliflavus EC20]
atgaagcgtgttgcagaaaactatttggttgttttttcgattcttttgctgattatatgg
ctaggcttgatccaagtgaaagaatattcgcaagaagtagccctgtcgatcatttacttt
I need to split each line beginning with ">" based on the space, retaining in the new file only the part before the spaces, with the following lines.
So the file I need should be:
>fid|29290408|locus|VBIEntCas2262_0001|
gtgagaaagaaagtactttttgatttagatggaacgatcattgattcgagtgaaggaatc
tatggatcgattcaatatgcgatggaaaaaatgggaaaagagcaattagcgcaagacgta
ctgcggagctttgtggggccgcctttgattgaatccttccgtggcttgggcttcgatgaa
and so on.
The number of lines following each header (the lines starting with >) is not fixed.
How can I do this?
|
You can use this command:
awk '{print $1}' filename > newfile
where filename is the name of the original big file and newfile is the file that will receive the results. This works because awk prints only the first whitespace-separated field of each line: header lines are truncated at the first space, while sequence lines contain no spaces and are printed unchanged.
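If you'd rather rewrite only the > header lines and leave everything else explicitly untouched, a sed sketch (sample input invented):

```shell
# drop everything after the first space, but only on ">" header lines
printf '>fid|29290408|locus|x| Phosphoglycolate phosphatase\ngtgaga\n' |
  sed 's/^\(>[^ ]*\) .*/\1/'
```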
| split line based on space and delete the second part |
1,591,436,750,000 |
I was reading the manual pages for the package moreutils but I did not understand the page for zrun.
My manual page is almost the same as the die.net one:
ZRUN(1) ZRUN(1)
NAME
zrun - automatically uncompress arguments to command
SYNOPSIS
zrun command file.gz [...]
DESCRIPTION
Prefixing a shell command with "zrun" causes any compressed files that
are arguments of the command to be transparently uncompressed to temp
files (not pipes) and the uncompressed files fed to the command.
This is a quick way to run a command that does not itself support
compressed files, without manually uncompressing the files.
The following compression types are supported: gz bz2 Z xz lzma lzo
If zrun is linked to some name beginning with z, like zprog, and the
link is executed, this is equivalent to executing "zrun prog".
BUGS
Modifications to the uncompressed temporary file are not fed back into
the input file, so using this as a quick way to make an editor support
compressed files won't work.
AUTHOR
Copyright 2006 by Chung-chieh Shan <[email protected]>
moreutils 2010-04-28 ZRUN(1)
Can you provide an example use?
|
Here's an example:
$ cat >afile
This is a file
line2
line3
$ cp afile bfile
$ gzip afile
$ ls -l
total 8
-rw-r--r-- 1 usera usera 48 2014-02-19 13:24 afile.gz
-rw-r--r-- 1 usera usera 27 2014-02-19 13:24 bfile
with grep
$ grep line *
bfile:line2
bfile:line3
with zrun
$ zrun grep line *
/tmp/dpQH01hY51-afile:line2
/tmp/dpQH01hY51-afile:line3
bfile:line2
bfile:line3
You can see that grep doesn't find matching lines in the compressed afile.gz. With zrun, however, afile.gz is first uncompressed into /tmp, and the actual command executed by zrun is:
$ grep line /tmp/dpQH01hY51-afile bfile
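If zrun isn't available, a similar one-off effect (without zrun's automatic temp files) can be sketched by decompressing on the fly with gzip -dc, though grep then loses the filename prefix in its output:

```shell
printf 'line2\nline3\n' > /tmp/afile   # sample file (invented)
gzip -f /tmp/afile                     # creates /tmp/afile.gz
gzip -dc /tmp/afile.gz | grep line     # grep sees the uncompressed text
```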
| Can you provide an example use for `zrun`? |
1,591,436,750,000 |
I have an image Image.png. How do I use the ImageMagick command convert to make this file into a Image.jpg file with the following requirements:
File size is 200~500 kb.
Resolution in not less than 450 dpi
Image size 35 x 45 mm
Edit: These are the exact requirements for making some documents through electronic government. It isn't a problem that the quality will deteriorate, because the image is needed just for the process of making the document (not for the document itself).
I'm not entirely clear in understanding what these requirements mean with respect to a file. I figured out how to get .jpg from .png:
$ convert Image.png Image.jpg
Also, I found out how to set some specific DPI:
$ convert -units PixelsPerInch Image.png -density 450 Image.jpg
However, I'm not sure about millimeters, and how it's determined in an image file.
|
The command
convert Image.png -resize 620x797 -quality 1 Image.jpg
meets your requirements. However, PNG is a lossless format while JPG is lossy, so you can compress as much as you want, but the result may not meet your quality needs.
How to figure it out:
450 dots per inch means 450 pixels per 25.4 mm
to get the width: 450*35/25.4 = 620
to get the height: 450*45/25.4 = 797
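The pixel arithmetic above can be double-checked with a one-liner:

```shell
# 450 dpi over 35 mm x 45 mm, with 25.4 mm per inch
awk 'BEGIN { printf "%dx%d\n", 450*35/25.4, 450*45/25.4 }'
```

This prints 620x797 (fractions truncated to whole pixels).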
| Convert image with imagemagick command line convert tool |
1,591,436,750,000 |
I ran an ls -1 command that displays a long list of values. When the command finished, I could not see the output that had scrolled past the top of the screen. How can I see those previous entries?
Is there a way to see the output progressively, like:
Display the first 15 rows.
The user hits a keystroke.
Then display the next 15 rows.
|
You can pipe the output of ls to less as follows:
$ ls | less
Then you are able to use less to browse the output, for example with Page Up and Page Down. You can exit less by pressing q.
Type man less to find out more ways to scroll the output.
| How can I see output from a command progressively? |
1,591,436,750,000 |
For some reason a while back, the behavior of my command line changed, and I don't know why. Using OSX, now Mountain Lion (although this behavior was present before the switch).
Using standard terminal, I would expect back some results from ps, but I get an error:
$ ps aux |grep 'asdf'
grep: asdf: No such file or directory
This also shows up, for example, here:
ln -s "/Applications/Sublime Text 2.app/Contents/SharedSupport/bin/subl" ~/bin/subl
ln: /Users/peter/bin/subl: No such file or directory
|
Does it happen when you log in as another user?
Check the output of set; env; alias. Post it if you can't find the problem.
| ps aux |grep 'asdf' grep: asdf: No such file or directory |
1,591,436,750,000 |
I intended to delete all the backup files in a directory, so I was going to type rm *~ in the terminal. Unfortunately, I hit Enter before hitting the tilde and unhappy things happened. Although I've recovered all the deleted files, I really don't want it to happen again. Could I forbid the execution of such a command unless granted permission, like that of a superuser?
|
I am not a fan of overriding built-in commands, but in my .bashrc (part of Tilde, my "dot files") I explicitly do this:
alias rm='rm -i';
This makes rm ask for permission before deleting. It has saved me a few times. You can always override with rm -f.
| how to forbid a command without a permission |
1,591,436,750,000 |
I am on a Ubuntu machine.
I have made a directory under the root directory:
$ sudo mkdir /hello
$ sudo mkdir /hello/bye
Then I mount tmpfs with size 1024M to /hello/bye by:
$ sudo echo "tmpfs /hello/bye tmpfs size=1024M,mode=0777 0 0" >> /etc/fstab
$ sudo mount -a
In the future, how do I clear /hello/bye (the tmpfs)?
|
If by clear you mean delete all files in there, it's like any other directory:
rm -rf /hello/bye/*
If you mean unmount the tmpfs partition simply do:
umount /hello/bye
Having put the line
tmpfs /hello/bye tmpfs size=1024M,mode=0777 0 0
in your /etc/fstab, that partition will be automatically mounted at every boot. If you don't want it automounted, use the noauto option:
tmpfs /hello/bye tmpfs size=1024M,mode=0777,noauto 0 0
If you don't need the partition any more, simply delete that line from /etc/fstab and delete the directory /hello/bye.
| clear tmpfs in my case |
1,591,436,750,000 |
Previously, I asked this question about how to suspend Linux after a set amount of time.
I would like to ask a similar question. Supposing I have a USB device attached to my system (OS = Fedora 13), are there commands that can:
detach the USB device and,
after detaching it, shut down the system after a specified interval.
To be more precise, detaching means a command that safely removes the device.
|
I assume by USB you mean a pendrive or external harddisk mounted to your file system.
You "detach" this by unmounting the device. For that you will have to use the umount command. You can use the device or the mountpoint, for example:
umount /dev/sdb1 or umount /mnt/usb
See man umount for more details.
For shutting down your system, you use the shutdown command. -h will "Halt or power off after shutdown". The manpage says:
SYNOPSIS
/sbin/shutdown [-akrhPHfFnc] [-t sec] time [warning message]
So you can use it to shutdown your system after a specific amount of time. The following command will halt your system after 30 minutes:
shutdown -h 30
Now you have one command that should only be executed after the other one was successful. This is done with &&, shorthand for a conditional statement and a feature of your shell (note: || also exists). The second command will only be executed if the first one returned without any errors, which is indicated by a return code of 0. For example:
umount /dev/sdb1 && shutdown -h 15 will detach your USB and halt your system after 15 minutes.
If this doesn't answer your question, please be more specific.
| How can I shutdown after unmounting a USB device from the command line? |
1,591,436,750,000 |
I have been using unix systems the majority of my life. I often find myself teaching others about them. I get a lot of questions like "what is the /etc folder for?" from students, and sometimes I have the same questions myself. I know that all of the information is available with a simple google search, but I was wondering if there are any tools or solutions that are able to add descriptions to folders (and/or files) that could easily be viewed from the command line? This could be basically an option to ls or a program that does something similar.
I would like there to be something like this:
$ ls-alt --show-descriptions /
...
/etc – Configuration Files
/opt – Optional Software
/bin - Binaries
/sbin – System Binaries
/tmp – Temporary Files
...
Could even take this a step further and have a verbose descriptions option:
$ ls-alt --show-descriptions-verbose /
...
/etc – The /etc directory contains the core configuration files of the system, used primarily by the administrator and services, such as the password file and networking files.
/opt – Traditionally, the /opt directory is used for installing/storing the files of third-party applications that are not available from the distribution’s repository.
/bin - The /bin directory contains the executable files of many basic shell commands like ls, cp, cd etc. Mostly the programs here are in binary format and accessible to all users of the Linux system.
/sbin – This is similar to the /bin directory. The only difference is that it contains binaries that can only be run by root or a sudo user. You can think of the ‘s’ in ‘sbin’ as super or sudo.
/tmp – This directory holds temporary files. Many applications use this directory to store temporary files. /tmp directories are deleted when your system restarts.
...
I know that there is no default way to do this with ls, and to add such a feature would probably require a lot of re-writing of kernel code to account for the additional data being stored, so I'm not asking how to do this natively necessarily (unless there is an easy way I am overlooking). I am more asking if there is a tool that already exists for educational purposes that enables this sort of functionality? I guess it would take output from ls and then do a lookup to match directory names to descriptions it has already saved somewhere, but I digress.
|
tree --info will do what you want.
You can create .info text files that contain your remarks about certain files and folders, or about groups of files and folders (using wildcards).
tree --info will then show them in the directory listing.
Multi-line comments are possible.
There is also a global info file in /usr/share/finfo/global_info that contains explanations for the Linux file system. This file also shows you how the .info file syntax looks.
The homepage of the software is https://fossies.org/linux/misc/tree-2.1.1.tgz/.
| Is there a way to add a "description" field / meta-data that could viewed in ls output (or an alternative to ls)? |
1,591,436,750,000 |
In zsh, globbing kicks in when using wildcard characters ? or * like this:
ls file?.txt
However I would like to disable globbing in a case like this:
youtube-dl https://www.youtube.com/watch?v=QIysdjpiLcA
I can work around this by putting the argument in either single or double quotes (' or ").
Can I somehow configure zsh to ignore wildcard characters (i.e. not do any globbing) when they are used in patterns such as URLs? Or for certain commands/executables?
|
If you put noglob in front of a command, no globbing is done.
noglob youtube-dl https://www.youtube.com/watch?v=QIysdjpiLcA
To disable globbing for a particular command, make it an alias.
alias youtube-dl='noglob youtube-dl'
With URLs, this helps with ?, but not with &. There's no way to disable the interpretation of & except quoting.
If you have the URL in the clipboard, instead of pasting it, you can use a command to recall the clipboard content:
youtube-dl "`xsel`" # X11 automatic mouse selection
youtube-dl "`xclip -o`" # X11 automatic mouse selection
youtube-dl "`xsel -b`" # X11 explicitly copied clipboard
youtube-dl "`xclip -o -sc`" # X11 explicitly copied clipboard
youtube-dl "`pbpaste`" # macOS clipboard
You'll have the pasting command in your shell history, not the URL. If you press Tab at the end of the command line before the end of the command, the command substitution will be expanded to the URL. (The required key may be different depending on your completion settings; see also How can I expand all variables at the command line in Zsh? and Shell that tab-completes prefix?.)
Alternatively, if your terminal supports bracketed paste (good modern ones do), press Ctrl+U (or anything else that sets a numeric argument) before pasting. The URL (or whatever you paste) will be pasted with quotes around it. Note that this includes leading and trailing space that browsers would ignore.
| zsh: disable globbing for certain commands or patterns? |
1,591,436,750,000 |
Apologies for any duplication, but most questions I've come across relate to getting a specific value from a field in a row, or using tail to get n tailing lines from a file, where n is known a priori. I'm looking to find a row where a value is matched, and then get all fields in that row AND all following rows. Details below.
I have data files returned from an online database that have a variable number of metadata header rows containing information about the query criteria used to search the database. After these header rows is a tidy dataframe. Example:
Query date: February 3, 2020, 1:34:57 PM
Database: <database name>
\n
Search criteria:
\n
Geographic bounding box coordinates: -130.00 20.00; -130.00 24.00; -120.00 24.00; -120.00 20.00
Sample type: rocks > sediments > dust
\n
SAMPLE ID,REFERENCE,LONGITUDE,LATITUDE,X,Y,Z,A
56,Author (YYYY) Title: Journal,-127.3,22,1.7,2.3,0,0.55
56,Author (YYYY) Title: Journal,-127.34,22.4,1.9,1.3,0.5
I have successfully found the row containing data field names using:
SID=$(awk -F, '{ if ($1 == "SAMPLE ID") print NR }' data.csv)
echo $SID returns 9, as expected
Now I want to take that row of field names and all of the following rows that contain the data and send them to a new file. In other words, I wish to parse the whole input file, and send the rows where NR >= $SID to a new file.
This is the code I've been using, but it instead just returns almost all of the data, except for a few rows. I can't figure out how to get the data I want, or why it's omitting the rows that it is.
awk -F, -v r=$SID '{ if (NR >= $r) print $0}' data.csv > output.csv
Here's my expected output:
SAMPLE ID,REFERENCE,LONGITUDE,LATITUDE,X,Y,Z,A
56,Author (YYYY) Title: Journal,-127.3,22,1.7,2.3,0,0.55
56,Author (YYYY) Title: Journal,-127.34,22.4,1.9,1.3,0.5
Any help would be great! If it wasn't clear, I'm totally new to awk! Meaning I'd also welcome links to any good introductory materials for learning.
|
In awk, $r would refer to the value of the rth field, rather than the value of r itself. Your solution should work if you just replace $r with r:
awk -F, -v r=$SID '{ if (NR >= r) print $0}' data.csv
or (more idiomatically, using the default print action)
awk -F, -v r=$SID 'NR >= r' data.csv
However there's really no need to do it in two steps - either
awk -F, '$1 == "SAMPLE ID" {p=1} p' data.csv
or even (ignoring the CSV structure altogether)
awk '/^SAMPLE ID,/{p=1} p' data.csv
should work as well.
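The one-pass version can be sanity-checked on a trimmed-down mock of the input (metadata and values invented here):

```shell
printf 'Query date: x\nSearch criteria: y\nSAMPLE ID,REF\n56,a\n57,b\n' |
  awk -F, '$1 == "SAMPLE ID" {p=1} p'
```

This prints the header line and everything after it, skipping the metadata rows.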
| Find row containing string, then return that row and all following rows of text file with awk |
1,591,436,750,000 |
How can I display the number of registered users on the system who have their home directory under /home and, at the same time, Bash as their command interpreter (login shell)?
|
You could just grep the /etc/passwd file for lines that contain :/home (i.e. a field that starts with /home), followed by more non-: characters and one more : before the /bin/bash at the end:
$ grep ':/home/[^:]*:/bin/bash' /etc/passwd
terdon:x:1000:1000::/home/terdon:/bin/bash
bib:x:1001:1001::/home/bib:/bin/bash
So, to display the number only:
$ grep -c ':/home/[^:]*:/bin/bash' /etc/passwd
2
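A field-based awk alternative is a bit stricter than the regex, since it compares the home-directory and shell fields ($6 and $7) directly; here it is run against a few invented passwd-style lines:

```shell
awk -F: '$6 ~ /^\/home\// && $7 == "/bin/bash" {n++} END {print n+0}' <<'EOF'
root:x:0:0::/root:/bin/bash
terdon:x:1000:1000::/home/terdon:/bin/bash
bib:x:1001:1001::/home/bib:/bin/sh
EOF
```

Only the terdon line satisfies both conditions, so this prints 1; point it at /etc/passwd for the real count.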
| Display the number of registered users |
1,591,436,750,000 |
I have a small snippet which gives me some ips of my current network:
#!/bin/bash
read -p "network:" network
data=$(nmap -sP $network | awk '/is up/ {print up}; {gsub (/\(|\)/,""); up = $NF}')
it returns ip addresses like this
10.0.2.1
10.0.2.15
and so on.
now I want to make them look like this:
10.0.2.1, 10.0.2.15, ...
I'm a total bash noob, plz help me :)
|
If you need exactly ", " as separator, you could use
echo "$data" | xargs | sed -e 's/ /, /g'
or, if a plain comma as separator is enough:
echo "$data" | paste -sd, -
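The two steps can also be combined to reproduce the exact ", " separator with paste:

```shell
data='10.0.2.1
10.0.2.15'
# join the lines with commas, then add a space after each comma
printf '%s\n' "$data" | paste -sd, - | sed 's/,/, /g'
```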
| separating an array into comma separated values |
1,591,436,750,000 |
Background
CLion's remote project feature currently doesn't support FreeBSD as a remote host OS, but I want to do some hacking and see if it works. By reading the log file, I think I have spotted (one of) the issue.
2019-04-10 00:13:55,850 [2221079] DEBUG - #com.jetbrains.ssh.nio - UnixSshFS:: SshCommandRunner.execute: test -e "/tmp"
2019-04-10 00:13:55,851 [2221080] DEBUG - ellij.ssh.SshConnectionService - Executing SSH command: env "LC_ALL"="C" "JETBRAINS_REMOTE_RUN"="1" test -e "/tmp" within SSH session @3aa57c95 to <user>@<host>::22
2019-04-10 00:13:55,963 [2221192] DEBUG - #com.jetbrains.ssh.nio - UnixSshFS:: SshCommandRunner.execute: stat --printf "%W%i%F%F%F%F%X%Y%s" "/"
2019-04-10 00:13:55,963 [2221192] DEBUG - ellij.ssh.SshConnectionService - Executing SSH command: env "LC_ALL"="C" "JETBRAINS_REMOTE_RUN"="1" stat --printf "%W%i%F%F%F%F%X%Y%s" "/" within SSH session @3aa57c95 to <user>@<host>:22
2019-04-10 00:13:56,071 [2221300] INFO - #com.jetbrains.ssh.nio -
Exit code 1
Basically, stat(1) behaves differently on Linux and on FreeBSD, so the following command fails on FreeBSD-12.0, halting the entire setting up procedure:
$ stat --printf "%W%i%F%F%F%F%X%Y%s" "/"
stat: illegal option -- -
usage: stat [-FLnq] [-f format | -l | -r | -s | -x] [-t timefmt] [file|handle ...]
I thought that the gstat utility in coreutils is the GNU version of stat, but I turned out to be wrong; they are two different commands. I have also tried translating it myself, but I ended up with something weird:
$ stat -f "%B%i%T%T%T%T%a%Y%z" "/"
15006030802////15041781781024
Question
Is it possible to rewrite the command stat --printf "%W%i%F%F%F%F%X%Y%s" "/" for FreeBSD, so that it works the same way as its counterpart does on GNU/Linux?
|
stat -f 0%i%HT%HT%HT%HT%a%m%z /
on FreeBSD should be pretty similar to
stat --printf %W%i%F%F%F%F%X%Y%s /
on Linux, with the exception that %HT will expand to Directory instead of directory, as %F does on Linux.
I just inserted a 0 instead of %W (birth time), since on most Linux systems that will be 0 (unknown). Replace the 0 with %B if you really want the birth time.
That format is quite strange though, and I don't get its purpose; I guess it could be replaced with any "unique" garbage based on file's metadata ;-)
I thought that the gstat utility in coreutils is the GNU version of stat, but I turned out to be wrong; they are two different commands.
gstat on FreeBSD is another program (/usr/sbin/gstat, gstat(8)). You're looking for gnustat:
gnustat --printf %W%i%F%F%F%F%X%Y%s /
Just as with any other package pkg info -l coreutils | grep stat will tell you the files installed by the coreutils package.
| Translating Linux stat(1) command into BSD stat(1) command |
1,591,436,750,000 |
I have a file (comma separated) on a Linux system with 3 columns. I want to start new column after every 4th row.
Input:
col1,col2,col3
1,disease1,high
1,disease2,low
1,disease3,high
col1,col2,col3
2,disease1,low
2,disease2,low
2,disease3,high
col1,col2,col3
3,disease1,low
3,disease2,low
3,disease3,low
Expected output:
col1,col2,col3,col1,col2,col3,col1,col2,col3
1,disease1,high,2,disease1,low,3,disease1,low
1,disease2,low,2,disease2,low,3,disease2,low
1,disease3,high,2,disease3,high,3,disease3,low
i.e. I want exactly 4 lines of output, each line is the result of joining every fourth line of the input with a comma.
|
With awk:
awk '{a[NR%4] = a[NR%4] (NR<=4 ? "" : ",") $0}
END{for (i = 1; i <= 4; i++) print a[i%4]}' < input.txt
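For comparison, the same transposition can be done with GNU sed's first~step addressing - print every fourth line starting at row n, and join each group with paste (a sketch; the n~4 address form is a GNU extension):

```shell
# recreate the sample input from the question
printf '%s\n' 'col1,col2,col3' '1,disease1,high' '1,disease2,low' '1,disease3,high' \
              'col1,col2,col3' '2,disease1,low' '2,disease2,low' '2,disease3,high' \
              'col1,col2,col3' '3,disease1,low' '3,disease2,low' '3,disease3,low' > input.txt

# rows 1,5,9 then 2,6,10 and so on, each group joined with commas
for n in 1 2 3 4; do
  sed -n "${n}~4p" input.txt | paste -sd, -
done
```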
| How to start a new column after every nth row? |
1,591,436,750,000 |
I have a directory with a lot of mp3 files, and I need a simple way to find the accumulated duration for them. I know that I can find the duration for one file with
ffmpeg -i <file> 2>&1 | grep Duration
I also know that I can run this command on all mp3 files in a directory with the command
for file in *.mp3; do ffmpeg -i "$file" 2>&1 | grep Duration; done
This can be somewhat filtered with
for file in *.mp3; do ffmpeg -i "$file" 2>&1 | grep Duration | cut -f4 -d ' '; done
But how do I sum it all up? Using ffmpeg is not necessary. The output format is not so important either. Seconds or mm:ss or something similar will do. I would like it to look something like this:
$ <command>
84:33
|
You can get exactly the duration in seconds, then sum them with bc:
for file in *.mp3;do ffprobe -v error -select_streams a:0 -show_entries stream=duration -of default=noprint_wrappers=1:nokey=1 "$file";done|paste -sd+|bc -l
Convert this number to HH:MM:SS format by yourself. e.g. https://stackoverflow.com/a/12199816/6481121
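A minimal conversion of the summed seconds into HH:MM:SS can be done with shell arithmetic alone (the value of total below is hypothetical - in practice it would come from the pipeline above - and the fractional part is simply truncated):

```shell
total=5073.4                    # hypothetical sum of durations in seconds
s=${total%.*}                   # drop the fractional part
printf '%02d:%02d:%02d\n' "$((s / 3600))" "$((s % 3600 / 60))" "$((s % 60))"
# prints 01:24:33
```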
| How to find accumulated duration on several mp3 with command line? |
1,591,436,750,000 |
I have some logic for executing java projects; it all works in the terminal console when I type it, but not in the cron scheduler:
run 1st microservice and get variable from POST request:
java -jar /root/parser-0.0.1-SNAPSHOT.jar
value=$(curl -d '{"query":"java-middle", "turnOff":true}' -H "Content-Type: application/json" -X POST http://localhost:8080/explorer)
v2=$(echo ${value} | jq '.id')
test:
echo $v2
18
18 is the id from the database, and I use it in the next request (after first running the new microservice):
java -jar parsdescription-0.0.1-SNAPSHOT.jar
value=$(curl -d '{"explorerId":'$v2', "turnOff":true}' -H "Content-Type: application/json" -X POST http://localhost:8080/descriptions) >> /var/log/description3.log 2>&1
So, curl executed normally, the database was filled with some data, and value received the correct result.
But, when I create a crontab schedule:
50 09 * * * java -jar /root/parser-0.0.1-SNAPSHOT.jar
51 09 * * * value=$(curl -d '{"query":"java-middle", "turnOff":true}' -H "Content-Type: application/json" -X POST http://localhost:8080/explorer)
52 09 * * * v2=$(echo ${value} | jq '.id')
53 09 * * * java -jar parsdescription-0.0.1-SNAPSHOT.jar
54 09 * * * value=$(curl -d '{"explorerId":'$v2', "turnOff":true}' -H "Content-Type: application/json" -X POST http://localhost:8080/descriptions) >> /var/log/description3.log 2>&1
Then only the first curl executes normally (a new row is created in the database).
Next, the second microservice runs ( 53 09 * * * java -jar parsdescription-0.0.1-SNAPSHOT.jar ), but the second curl command does nothing, and nothing is saved to the description3.log file - it is empty.
Why does this work in the console but not in crontab?
|
Each cron job is a unique shell instance that does not share state with any other cron job, so
51 09 * * * value=42
sets value only for that job, which then exits, and value is then lost. A shell session, by contrast, maintains state over successive lines. You will need a single cron job that runs all that code, or some other design; a single cron job might look like
51 09 * * * /path/to/your/script
and then the file /path/to/your/script should be executable and contain
#!/bin/bash
java -jar /root/parser-0.0.1-SNAPSHOT.jar
value=$(curl -d '{"query":"java-middle", ...
and so forth.
If you need to share data between different cron jobs that information would need to be shared via some IPC (interprocess communication) method (the filesystem, a database, etc).
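If the jobs really must stay separate, the filesystem variant of that IPC can be as simple as one job writing the value to a file and a later job reading it back (the path /tmp/explorer_id is an arbitrary choice for illustration):

```shell
# in the first cron job: save the id for later runs
printf '%s\n' "18" > /tmp/explorer_id    # "18" stands in for the value extracted with jq

# in a later cron job: read it back
v2=$(cat /tmp/explorer_id)
echo "$v2"
```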
| cron not executing command with variable |
1,591,436,750,000 |
I have read the answer to a post, it suggest to use:
systemctl set-default multi-user.target
to log in command line mode. It works fine, except before login, it seems CentOS has boot into the graphical mode which make the start up process quite slow. See the following picture:
I have minimal installed CentOS before, it boot much faster compare to my situation now. So what's the reason for this, and how to get rid of the pre-boot gui?
|
To get rid of the pre-boot GUI you have to remove the rhgb option from the grub options. If you also want the kernel messages during boot, you also have to remove the quiet option from the kernel append line.
To do so, edit the file /etc/default/grub with a text editor of your choice and adapt GRUB_CMDLINE_LINUX.
If you just want to remove the pre-boot GUI it would look as follows.
GRUB_CMDLINE_LINUX="quiet"
If you also want the kernel messages during boot, just set it as follows.
GRUB_CMDLINE_LINUX=""
You also might want to preserve the default entry in the file and to do so you just comment that line with #.
After you have edited the file you have to generate the grub configuration as follows.
grub2-mkconfig -o /etc/grub2.cfg
| CentOS boot without GUI has actually started gui before login |
1,591,436,750,000 |
I'm curious if anyone can help me: what is the best way to protect potentially destructive command line options in a Linux command line application?
To give a very hypothetical scenario: imagine a command line program that sets the maximum thermal setting for a processor before emergency power off. Let's further pretend that there are two main options, one of which is --max-temperature (in Celsius), which can be set to any integer between 30 & 50. There is also an override flag --melt which would disable the processor from shutting down via software regardless of how hot the processor got, until the system electrically/mechanically failed.
Certainly an option like --melt is dangerous, and could cause physical destruction in the worst case. But again, let's pretend that this type of functionality is a requirement (albeit a strange one). The application has to run as root, but if there was a desire to help ensure the --melt option wasn't accidentally triggered by confused or inexperienced users, how would you do that?
Certainly a very common anti-pattern (IMO) is to hide the option, so that --help or the man page doesn't reveal its existence, but that is security through obscurity and could have the unintended consequence of a user triggering it, but not being able to find out what it means.
Another possibility is to change the flag to a command line argument that requires the user to pass --melt OVERRIDE, or some other token as a signifier that they REALLY mean to do this.
Are there other mechanisms to accomplish the same goal?
|
I'm assuming you're looking at this from the POV of the utility programmer. This is broad enough that there isn't (and can't be) a single right answer, but some things come to mind.
I think most utilities just have a single "force" flag (-f), that overrides most safety checks. On the other hand, e.g. dpkg has a more fine-grained --force-things switch, where things can be a number of different keywords.
And apt-get makes you write a complete sentence to verify in some cases, like removing "essential" packages. See below. (I think it's not just a command line option here, since essential packages are e.g. those that are required to install packages, so undoing a mistaken action may be very hard. Besides, the whole operation may not be known up front, before apt has had a chance to calculate the package dependencies.)
Then, I think cdrecord used to make the user wait a couple of seconds before actually starting the work, so that you had a chance to verify the settings were sane while the numbers were running down.
Here's what you get if you try to apt-get remove bash:
WARNING: The following essential packages will be removed.
This should NOT be done unless you know exactly what you are doing!
bash
0 upgraded, 0 newly installed, 2 to remove and 2 not upgraded.
After this operation, 2,870 kB disk space will be freed.
You are about to do something potentially harmful.
To continue type in the phrase 'Yes, do as I say!'
?] ^C
Which one to choose is up to you as the program author - you'll have to base the decision on the danger level of the action, and on your own level of paranoia. (Be it based on caring about your users, or on the fear of getting blamed for the mess.)
Something that has the potential to cause the processor to literally (halt and) catch fire probably goes in the high end of the "danger" axis and probably warrants something like the "type 'Yes, do what I say'" treatment.
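A minimal shell sketch of that apt-get-style guard, applied to the hypothetical --melt option from the question (the function name and the exact wording of the phrase are illustrative choices):

```shell
confirm_melt() {
    echo 'You are about to do something potentially harmful.'
    printf "To continue type in the phrase 'Yes, do as I say!'\n?] "
    IFS= read -r answer
    [ "$answer" = 'Yes, do as I say!' ]
}

# usage: confirm_melt || exit 1
```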
That said, one thing to realise is that many of the actual kernel-level interfaces are not protected by any means. Instead, there are files under /sys that can change things just by being opened and written to, no questions asked apart from the file access permissions. (i.e. you need to be root.)
This goes for hard drive contents too (as we should know), and, in one case two years back, to the configuration variables of the motherboard firmware. It seems it was possible to "brick" computers with a misplaced rm -rf.
No, really. See lwn.net article and the systemd issue tracker.
So, whatever protections you would implement, you would only protect the actions done using that particular tool.
| How to protect potentially destructive command line options? |
1,517,853,531,000 |
I want to add a tab character to separate the numbers and letters in my file:
71aging
1420anatomical_structure_development
206anatomical_structure_formation_involved_in_morphogenesis
19ATPase_activity
46autophagy
2634biological_process
So now it would look like this:
71 aging
1420 anatomical_structure_development
206 anatomical_structure_formation_involved_in_morphogenesis
19 ATPase_activity
46 autophagy
2634 biological_process
Is there a one liner sed for this?
|
Below one is the sed one liner for your requirement
sed "s/^[0-9]*/&\t/g" filename
output
71 aging
1420 anatomical_structure_development
206 anatomical_structure_formation_involved_in_morphogenesis
19 ATPase_activity
46 autophagy
2634 biological_process
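Note that \t in the replacement is a GNU sed extension; with other sed implementations you can splice in a literal tab, for example via printf:

```shell
printf '%s\n' 71aging 2634biological_process |
  sed "s/^[0-9]*/&$(printf '\t')/"
```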
| Adding a tab between every number and letter |
1,517,853,531,000 |
I followed the posts on stackexchange websites to parse command line arguments. My program only parses long arguments and all arguments are mandatory. Here is what I have done:
getopt --test > /dev/null
if [[ $? -ne 4 ]]; then
echo "getopt --test failed in this environment."
exit 1
fi
function quit {
echo "$1"
exit 1
}
# Option strings
LONG="producer-dir:,consumer-dir:,username:,password:,url:,kafka-host:,checkpoint-dir:"
# read the options
OPTS=$(getopt --longoptions ${LONG} --name "$0" -- "$@")
if [ $? != 0 ]; then
quit "Failed to parse options...exiting."
fi
eval set -- "$OPTS"
# extract options and their arguments into variables.
while true ; do
case "$1" in
--producer-dir )
PRODUCER_DIR="$2"
shift 2
;;
--consumer-dir )
CONSUMER_DIR="$2"
shift 2
;;
--username )
USERNAME="$2"
shift 2
;;
--password )
PASSWORD="$2"
shift 2
;;
--url )
URL="$2"
shift 2
;;
--kafka-host )
KAFKA_HOST="$2"
shift 2
;;
--checkpoint-dir )
CHECKPOINT_DIR="$2"
shift 2
;;
-- )
shift
break
;;
*)
echo "Internal error!"
exit 1
;;
esac
done
No matter in which order I pass the arguments, the first one is ignored and the result is empty. The rest of the arguments are parsed as expected. What am I missing?
|
I think what is happening is that what you intend to be your first
parameter is being interpreted by getopt as an optstring. The
beginning of the getopt man page lists three synopses. You seem to
be using the second:
`getopt [options] [--] optstring parameters`
Notice how after the -- the first item is not parameters, but
optstring.
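Applied to the script in the question, the fix is therefore to pass an (empty) short-option string explicitly, so that getopt uses its third synopsis and does not consume the first real argument as the optstring - a sketch assuming util-linux getopt:

```shell
LONG="producer-dir:,consumer-dir:,username:,password:,url:,kafka-host:,checkpoint-dir:"
OPTS=$(getopt --options '' --longoptions "$LONG" --name "$0" -- "$@")
echo "$OPTS"
```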
While we're at it, I should mention that bash has an internal version
of getopt, called getopts with the trailing s. All other things
being equal, using bash's internal feature should be more efficient.
| util-linux's getopt ignores first argument |
1,517,853,531,000 |
I struggle to understand the effects of the following command:
yes | tee hello | head
On my laptop, the number of lines in 'hello' is of the order of 36000, much higher than the 10 lines displayed on standard output.
My questions are:
When does yes, and, more generally, a command in a pipe, stop?
Why is there a mismatch between the two numbers above. Is it because tee does not pass the lines one by one to the next command in the pipe?
|
:> yes | strace tee output | head
[...]
read(0, "y\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\n"..., 8192) = 8192
write(1, "y\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\n"..., 8192) = 8192
write(3, "y\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\n"..., 8192) = 8192
read(0, "y\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\n"..., 8192) = 8192
write(1, "y\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\n"..., 8192) = -1 EPIPE (Broken pipe)
--- SIGPIPE {si_signo=SIGPIPE, si_code=SI_USER, si_pid=5202, si_uid=1000} ---
+++ killed by SIGPIPE +++
From man 2 write:
EPIPE
fd is connected to a pipe or socket whose reading end is closed. When this happens the writing process will also receive a SIGPIPE signal.
So the processes die right to left. head exits on its own, tee gets killed when it tries to write to the pipeline the first time after head has exited. The same happens with yes after tee has died.
tee can write to the pipeline until the buffers are full. But it can write as much as it likes to a file. It seems that my version of tee writes the same block to stdout and the file.
head has 8K in its (i.e. the kernel's) read buffer. It reads all of it but prints only the first 10 lines because that's its job.
| When does piped command stop? |
1,517,853,531,000 |
I would like to write a batch script that checks used or available memory and allows me to run commands if the available memory is less than X MB.
I googled, but the pages I found didn't work for me. I am using CentOS 7.
basically I would like to do
if availablememory < 26000m
do command=forever stopall
do command=pkill -f checkurl.php
end
BEFORE PROGRAM START
[root@www ~]# free -m
total used free shared buff/cache available
Mem: 32002 3471 802 1121 27728 26529
Swap: 38112 234 37878
[root@www ~]#
AFTER PROGRAM START
[root@www ~]# free -m
total used free shared buff/cache available
Mem: 32002 13913 200 1121 17887 16381
Swap: 38112 234 37878
|
if [ "$(awk '/^MemAvailable:/ { print $2 }' /proc/meminfo)" -lt 123456 ]; then
    : do something
fi
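Note that /proc/meminfo reports values in kB, so the 26000 MB threshold from the question becomes 26000 * 1024 kB. A fuller sketch, with the questioner's commands left commented out so the snippet is safe to run as-is:

```shell
avail_kb=$(awk '/^MemAvailable:/ { print $2 }' /proc/meminfo)
threshold_kb=$((26000 * 1024))

if [ "$avail_kb" -lt "$threshold_kb" ]; then
    echo "available memory ${avail_kb} kB is below the threshold"
    # forever stopall
    # pkill -f checkurl.php
fi
```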
| batch script run command if available memory less than X mb |
1,517,853,531,000 |
I want to open Eclipse through the terminal and I am able to do it, but when Eclipse starts it asks for the workspace directory (see the attached screenshot) and I have to specify it there; I don't want this.
As I pass the eclipse command in the terminal, I want to pass the workspace directory along with it, as if I had already confirmed the prompt with OK.
Thanks in advance.
|
The following should do:
eclipse -data /home/user/path/to/workspace
Here a list of all eclipse command line arguments.
Alternatively, if you anyhow only have one workspace, you could select a workspace in the dialog and check the box "Use this as the default and do not ask again".
| open eclipse from a terminal and pass a workspace to open |
1,517,853,531,000 |
While using applications (such as a database front end, or a programming language) from within a terminal, what's the best way to store the command history (of commands issued within the applications).
E.g. I start python from the shell and issue a bunch of python commands. I'd like those stored somehow.
I know this is possible because I've done it earlier, but despite my best efforts, am unable to recall it now. It was something of the form >XX APP_NAME where APP_NAME was something like python and XX is the program that was prefixed to the application name to store the application's command history.
To clarify, when I go back in the same application the same way (> XX APP_NAME) pressing the up arrow should give previously issued commands.
|
Ok, found it at long last. It is rlwrap
To copy paste from its man page:
rlwrap runs the specified command, intercepting user input in order to provide readline's line editing, persistent history and completion
| Command history in terminal applications |
1,517,853,531,000 |
I want to parallelize splitting many directories into subdirectories, using parallel or another tool or method.
E.g. I have 1 000 000 directories with content, but that's too much for one directory, so I want to create 10 dirs in the main dir and move 100 000 of the original dirs into each of them. I also want to sort by date. I already asked a similar question here, but this isn't a duplicate, because I tried new commands, got new results and have now reformulated the question.
So, I already tried this
ls -tr|parallel -n100000 mkdir "dir_{#}"\;mv {} "dir_{#}"
and this
ls -tr | parallel -j10 -n100000 --no-notice -k 'mkdir -p dir_{#}; mv {} -t dir_{#}'
commands, but they move only ~10 000 dirs into each subdir (sometimes ~6200, sometimes ~12 500) and create too many subdirs - sometimes 10 times more than I need.
I also tried to use this:
ls -dtr * | parallel -j10 -n100000 --no-notice -k 'mkdir -p dir_{#}; mv {} -t dir_{#}'
but it gave bash: /bin/ls: Argument list too long.
Of course, I don't need exactly 100 000 dirs in each subdir, it can be 101 000 or 98 500 dirs, it should be a number in the range of 100 000
How can I execute this task in parallel or using parallel?
|
This problem deals with heavy IO. I doubt that parallel is really useful in this situation.
Anyway I suggest that you consider a "traditional" approach:
mkdir dir_{1..10}
ls -tr | nl | \
awk '$2 !~ /^dir_/ {i=1+int($1/100000); print $2 | "xargs mv -t dir_"i}'
where
ls -tr | nl sorts the directories by date and adds an auxiliary dir-number
$2 !~ /^dir_/ is used to skip the just-created folders.
i=1+int($1/100000) calculates the number of the folder based on the dir-number
print $2 | "xargs mv -t dir_"i moves without process proliferation
If possible compare also the respective times: time .... (and share the results with us ☺)
| Use parallel to split many directories into subdirectories or parallelize this task |
1,517,853,531,000 |
I want to grep stuff out of an output file. For example ab and cd.
The file looks like:
asdf
ab
hgr
cd
ab
asdsda
if I now grep for ab and cd after one another like
for i in $@
do
grep $i file
done
I get
ab
ab
cd
What I want though is
ab
cd
ab
Is there an elegant solution?
|
You get this result because the loop first executes grep ab file, returning all occurrences of ab, and only then executes grep cd file, returning all occurrences of cd.
You do not need a for loop. Try this:
grep -e "^ab$" -e "^cd$" file
Or use -x option to select only those matches that exactly match the whole line (from man grep, thx to Kusalananda):
grep -x -e "ab" -e "cd" file
Output will be:
ab
cd
ab
Or (assuming GNU grep or compatible as \| is not a standard BRE operator):
grep "^\(ab\|cd\)$" file
The same with GNU sed:
sed '/^\(ab\|cd\)$/!d' file
| grep for multiple arguments, but output them in order of appearance |
1,517,853,531,000 |
I'm doing something wrong in the string comparison below. It works if I set a variable and compare that, but not when I compare against a literal string. Does anyone know what's wrong?
$ if [ "$(lsb_release -i)" = "Distributor ID: RedHatEnterpriseClient" ]; then echo yes; else echo no; fi
no
$ lsb_release -i
Distributor ID: RedHatEnterpriseClient
$ var="$(lsb_release -i)"
$ if [ "$(lsb_release -i)" = "$var" ]; then echo yes; else echo no; fi
yes
|
In this case I suggest -is, which prints just the ID itself (the comparison in the question fails because lsb_release separates the fields with a tab character, not a space):
if [ "$(lsb_release -is)" = "Debian" ]; then echo yes; else echo no; fi
| Can't compare lsb_release result to string in Bash |
1,517,853,531,000 |
I'd like to run a shell command on each line taken from STDIN.
In this case, I'd like to run xargs mv. For example, given two lines:
mfoo foo
mbar bar
I'd like to run:
xargs mv mfoo foo
xargs mv mbar bar
I've tried the following strategies with ruby, awk, and xargs. However, I'm doing it wrong:
Just xargs:
$ echo "mbar bar\nmbaz baz" | xargs mv
usage: mv [-f | -i | -n] [-v] source target
mv [-f | -i | -n] [-v] source ... directory
Through awk:
$ echo "mbar bar\nmbaz baz" | awk '{ system("xargs $0") }'
Through ruby:
$ echo "mbar bar\nmbaz baz" | ruby -ne '`xargs mv`'
$ ls
cat foo mbar mbaz
I have some questions:
How do I do what I'm trying to do?
What is wrong with each of my attempts?
Is there a better way to "think about" what I'm trying to do?
I'm especially confused that my xargs attempt isn't working because the following works:
$ echo "foo\nbar" | xargs touch
$ ls
bar foo
|
echo "mbar bar\nmbaz baz" | xargs mv
With xargs you should use the -t option to see what's going on. So in your above case if we were to invoke xargs with -t, what do we see:
mv mbar bar mbaz baz
So obviously it's not correct. What happened was that xargs like a hungry crocodile, ate all the args fed to it via the pipe by echo. So you need a way to limit the release of arguments to the croc. And since you requested on a per-line basis, then what you need is the -l or the -L for POSIX option.
echo "mbar bar\nmbaz baz" | xargs -l -t mv
POSIX-ly way:
echo "mbar bar\nmbaz baz" | xargs -L 1 -t mv
mv mbar bar
mv mbaz baz
And this is what you wanted.
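Since the question also asks for a better way to think about it: a plain while read loop over standard input expresses the same per-line processing without xargs, letting the shell split each line into its two fields:

```shell
printf 'mbar bar\nmbaz baz\n' | while read -r src dst; do
    echo mv "$src" "$dst"    # drop the echo to actually move the files
done
```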
H.T.H
| How do I loop over the lines in STDIN and run a shell command? |
1,517,853,531,000 |
I want to make a python script run on a socket of server in LAN. I wrote a number guessing script and i want to make it run on socket for other clients to play with it by connecting to port (say 1234). I know to achieve this by socket programming from python. But this question is to ask as to why this fails?
ncat 192.168.0.108 -lvp 1234 -e /usr/bin/python3.5 number_game.py
the script:
#!/usr/bin/python3.5
import random
num = random.randint(1, 20)
flag = 0
print("Welcome to the game...")
for i in range(1, 7):
    print("Take a guess")
    guess = int(input())
    if guess > num:
        print("Way too high")
    else:
        if guess < num:
            print("Way too low")
        else:
            flag = 1
            break
if flag == 1:
    print("You made it in " + str(i) + " attempts")
else:
    print("better luck next time")
the error:
Ncat: Version 7.31 ( https://nmap.org/ncat )
Ncat: Got more than one port specification: 1234 number_game.py. QUITTING.
|
Solution
You are trying to listen on port 1234 and to connect to the machine with the IP 192.168.0.108 at the same time.
You can't do that, you either listen for the connection using this :
ncat -lvp 1234 -e "/usr/bin/python3.5 number_game.py"
or you initiate the connection to the desired machine using this :
ncat -v -e "/usr/bin/python3.5 number_game.py" 192.168.0.108 1234
Note
When you use ncat (or nc) to initiate the connection, you have to keep the IP (or hostname) and the port as the last parameters.
Look to the synopsis of ncat in the manual : ncat [OPTIONS...] [hostname] [port]
| Runnning a python script on a socket with ncat? |
1,517,853,531,000 |
I am on a system which deletes files which haven't been modified in 30 days. I need some way to preserve important files by marking them as being recently modifed. What is the best way I can do this? Something like for d in *; do; cat $d > $d ; done
|
cd to that directory, then use this command to mark only the files :
find . -type f -exec touch {} \;
or this command to mark even the directories :
find . -exec touch {} \;
After the execution, the files (and folders if you choose the 2nd command) will be marked that they were just changed, and their content won't be changed.
The advantage of this command that it will go recursive, even the subdirectories and the files under those subdirectories will be marked as changed.
| recursively mark all files in a directory as modified without changing file content |
1,517,853,531,000 |
I have a tabular file in which the first column has IDs and the second one has numeric values. I need to generate a file that contains only the line with the largest score for each ID.
So, I want to take this:
ES.001 2.33
ES.001 1.39
ES.001 119.55
ES.001 14.55
ES.073 0.35
ES.073 17.95
ES.140 1.14
ES.140 53.88
ES.140 18.28
ES.178 150.27
And generate this:
ES.001 119.55
ES.073 17.95
ES.140 53.88
ES.178 150.27
Is there a way of doing this from a bash command-line?
|
Depending on the type of data, sorting may take a long time.
We can get the result without sorting (but using more memory) like this:
awk 'a[$1]<$2{a[$1]=$2}END{for(i in a){print(i,a[i])}}' infile
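One caveat: the order produced by for (i in a) is unspecified in awk, so pipe through sort if a stable order matters. A complete run against a few of the sample lines:

```shell
printf '%s\n' 'ES.001 2.33' 'ES.001 119.55' 'ES.073 0.35' 'ES.073 17.95' |
  awk 'a[$1]<$2{a[$1]=$2}END{for(i in a){print(i,a[i])}}' | sort
# ES.001 119.55
# ES.073 17.95
```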
| Filtering the line with the largest value for a given ID |
1,517,853,531,000 |
I'm trying to pass a windows command into a linux netcat shell and then read back the output.
So far I have:
cat <( printf 'ipconfig\n' )| nc -v 137.148.70.243 443
Which when copied and pasted into my pretty linux terminal gets the ip info from the connected windows machine.
However, when I try to call the same command via bash, I get the following error:
./DumpIP.sh: line n: syntax error near unexpected token `('
What gives?
EDIT
So if I try:
#!/bin/sh
cat <( printf 'ipconfig\n' )| nc -l
I get
./DumpCreds.sh: line 2: syntax error near unexpected token `('
./DumpCreds.sh: line 2: `cat <( printf 'ipconfig\n' )| nc -l'
|
Your problem is that you are invoking sh and not bash for your script in the shebang line. The syntactical convention of <(command) is a bashism that does not exist when invoked via sh, which emulates the POSIX shell (if /bin/sh is a symlink to /bin/bash).
| Command works when copied and pasted but not in a bash script? |
1,517,853,531,000 |
Is there a way to actually execute results from a shell command, instead of using them as arguments to another command?
For instance, I'd like to run '--version' on all executables in a folder, something like:
ls /usr/bin/ | --version
I've found a way using find/exec:
find /usr/ -name valgrind -exec {} --version \;
But I'd like to do it with ls. I've search for over 45 minutes and can't find any help.
|
Try doing this :
printf '%s\n' /usr/bin/* | while IFS= read -r cmd; do "$cmd" --version; done
| Execute stdout results |
1,517,853,531,000 |
I am writing a shell script to auto-deploy a program with Jboss-cli, in linux ubuntu. I need to open the jboss cli interface and execute some commands but I want to do this automatically.
what it looks like
cd /opt/jboss/bin
./jboss-cli.sh --connect
the above line open the jboss command line. I would like to be able to send a command to the open program like:
undeploy FlcErp.ear
I've tried to echo it and give it straight text but nothing will execute until the Jboss program is done running.
I've also tried ./jboss-cli.sh --connect undeploy "FlcErp.ear" but It reads "FlcErp.ear" as a command
|
If jboss-cli.sh reads from standard input, you can pipe the command to it:
echo 'undeploy FlcErp.ear' | ./jboss-cli.sh --connect
To execute multiple commands, you can use multiple echo commands.
{ echo 'undeploy FlcErp.ear'; echo 'other gommands'; echo 'go here'; } | ./jboss-cli.sh --connect
but a here-doc is usually easier:
./jboss-cli.sh --connect <<EOF
undeploy FlcErp.ear
other commands
go here
EOF
| How to pass Command to program open in shell? |
1,517,853,531,000 |
In Windows you can undo an operation by pressing CTRL+Z. E.g. you delete a file in the GUI, then press CTRL+Z, and the file is restored and appears in the folder again.
Is there something similar in Linux, but with commands? E.g. I accidentally unzipped a file, and I want to undo the operation (all the extracted files should disappear again).
|
A short answer is no. There is no "undo command" on GNU/Linux terminals.
Although lots of commands have an inverse operation, like rename, compress, decompress, etc.
| Undo last command possible? [duplicate] |
1,517,853,531,000 |
I want to copy a text file to a directory with multiple names with curly braces:
cp /path/to/file/a.txt /path/to/file/{b,c,d}.txt
But it gives me the error: target '/path/to/directory/d.txt' is not a directory
|
for i in {b,c,d}; do cp /path/to/file/a.txt "/path/to/file/$i.txt"; done
| copy a file to a destination with different names |
1,517,853,531,000 |
What is the meaning of the ? sign in the following command?
find /foo/path -name \?
|
The ? is part of a mechanism called "pathname expansion" in the shell.
Colloquially, the shell mechanism is called "globbing". The basic glob makes use of just three characters: * ? and [ that build "patterns".
An asterisk * means:
Any character in any quantity (any string).
A question mark (?) means:
Any character one time.
The square braces define a character list [ ], and mean:
Only the characters inside the list, matched once. Negated lists also exist.
Those characters are used in a similar way in the command find. In find, they are called "patterns".
That means that there are two entities using the same characters to perform the same task (globbing). One has to be told to ignore those characters. The usual way to tell the shell to avoid interpretation of special characters is to "quote them". Either with 'single quotes', "double quotes" or with a backslash:
'?'
"?"
\?
That is why the "patterns" for find are quoted:
find /path/foo -name \?
What that line means is:
List all files and directories starting from the directory /path/foo that have a name of only one character wide.
about /
Note that ? in find's pattern expansion may match a /.
A pattern in find can match a / as specified by POSIX inside the operands section for the find command:
-path pattern
The primary shall evaluate as true if the current pathname matches pattern using the pattern matching notation described in Pattern Matching Notation. The additional rules in Patterns Used for Filename Expansion do not apply as this is a matching operation, not an expansion.
Again: additional rules ... for Filename Expansion (as in a shell) do not apply as this is a matching operation, not an expansion.
To show that this is true:
$ mkdir test; cd test
$ mkdir -p a/b/c/d
$ find a -path 'a?b'
a/b
$ find . -path './a?b?c?d'
./a/b/c/d
Of course, the -name option of find will match the basename of a file. That, by definition, could not have a / as is not possible to match a / in a basename.
| find: meaning of the \? sign as a value of the name parameter |
1,517,853,531,000 |
I have a directory where there are multiple folders, each of folder contains a .gz file.
How can I unzip all of them at once?
My data looks like this
List of folders
A
B
C
D
In every of them there is file as
A
a.out.gz
B
b.out.gz
C
c.out.gz
D
d.out.gz
|
This uses gunzip to unzip all files in a folder and have the ending .out.gz
gunzip */*.out.gz
This will "loop"* through all folders that have a zipped file in them. Let me add an example:
A
a.out.gz
B
b.out.gz
C
c.out.gz
D
d.out.gz
E
e.out
Using the command above, a.out.gz b.out.gz c.out.gz d.out.gz will all get unzipped, but it won't touch e.out since it isn't zipped.
*this is called globbing or filename expansion. You might like to read some more about it here.
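A minimal sketch reproducing the layout from the question (directory and file contents are illustrative):

```shell
cd "$(mktemp -d)"
mkdir A B
echo alpha | gzip > A/a.out.gz
echo beta  | gzip > B/b.out.gz
gunzip */*.out.gz     # one pass over every folder
ls A B                # A/a.out and B/b.out, now uncompressed
```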
| gunzip multiple files |
1,517,853,531,000 |
I am surprised to see the a2p utility installed by default in my Linux distribution.
a2p is a command line utility that converts an Awk program from standard input to a perl program that it outputs to standard output.
Why would I ever want to convert an Awk program to a Perl program when I have an Awk interpreter installed?
Why is it that Linux distributions include a2p in their default installations?
|
You might want to use these tools to increase the efficiency of perl scripts.
You would want to do this if you had a larger perl program and you wanted to integrate the functionality of an awk script without calling a subprocess. You would call a2p and integrate the generated code into an existing perl script.
There's a similar utility, find2perl which takes a find command line and generates perl code to do the same thing, avoiding the call to a find subprocess.
These are optimization tools for perl scripts.
| Why is a2p (Awk to Perl translator) installed by default? Why would I want to convert Awk to Perl? [closed] |
1,517,853,531,000 |
I want output like this: name size and hash:
myfile.txt 222M 24f4ce42e0bc39ddf7b7e879a
mynewfile.txt 353M a274613df45a94c3c67fe646
For name and size only I have
ll -h | awk '{print $9,$10,$11,$12,$5}'
But how can I get hash for every file? I tried:
ll -h | awk '{print $9,$10,$11,$12,$5}' | md5sum
But I get only one hash at all.
|
You should not parse ls, instead use this:
for f in * .*; do
[ -f "$f" ] && \
printf "%s %s %s\n" "$f" $(du -h -- "$f" | cut -f1) $(md5sum -- "$f" | cut -d' ' -f1)
done
The for loop runs through all files and directories in the current directory.
[ -f "$f" ] checks if it's a regular file
printf "%s %s %s\n" prints the arguments in the desired format.
"$f" the first argument is the filename
du -h -- "$f" | cut -f1 the second is the size (human readable), but not the filename; cut cuts away everything except the first field
md5sum -- "$f" | cut -d' ' -f1 third is the MD5 sum, but without the filename.
| md5sum for every file (with ll) |
1,517,853,531,000 |
I accidentally typed in cd ` into terminal today and terminal did strange things.
It put a "> " sign on the next line followed by my cursor like it wanted some input. No matter what I entered, it continued to do the same thing until I terminated the command.
Out of curiosity what happened? Was this a bug or a feature?
|
Answered here already... essentially
Everything you type between backticks is evaluated (executed) by the shell before the main command
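A quick way to see command substitution in action (assuming a Bourne-like shell):

```shell
# the shell runs the backquoted command first,
# then substitutes its output into the outer command line
printf 'result: %s\n' "`echo hello`"
```

A lone backtick leaves the substitution unterminated, so the shell prints its continuation prompt (>) and waits for the closing backtick.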
| What is the effect of a lone backtick at the end of a command line? |
1,517,853,531,000 |
If I search for multiple search strings in grep : usually just do:
grep "search1\|search2" somefolder/*.txt
but, what if I have 100 or more search strings? Can I say like this:
grep "stringPattern.txt" somefolder/*.txt
where stringPattern.txt is a file containing a list of words I need to search in *.txt.
|
grep has the -f flag exactly for this, use:
grep -f patternfile somefolder/*.txt
Where in the patternfile the search patterns are separated line by line.
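A minimal sketch (file and pattern names are illustrative):

```shell
cd "$(mktemp -d)"
printf '%s\n' search1 search2 > patternfile
printf '%s\n' 'line with search1' 'nothing here' 'search2 at start' > some.txt
grep -f patternfile some.txt   # matches the first and third lines
```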
| How to search multiple search patterns from a file with grep |
1,517,853,531,000 |
I got a git directory with plenty of python files (and some special files like .git).
I'd like to copy only these python files to another directory with the directory structure unchanged.
How to do that?
|
This copies the files into destination_dir, recreating their full path from /:
find /path/git_directory -type f -iname "*.py" \
-exec cp --parents -t /path/destination_dir {} +
Other solution is rsync
rsync -Rr --prune-empty-dirs \
--include="*.py" \
--include="**/" \
--exclude="*" \
/path/git_directory /path/destination_dir
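A small self-contained run of the cp --parents variant (GNU cp; directory and file names are illustrative):

```shell
cd "$(mktemp -d)"
mkdir -p git_directory/pkg dest
touch git_directory/pkg/mod.py git_directory/pkg/notes.txt
# --parents recreates the source path under the target directory
find git_directory -type f -iname '*.py' -exec cp --parents -t dest {} +
find dest -type f   # only dest/git_directory/pkg/mod.py
```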
| How to copy a directory with only specified type of files? |
1,517,853,531,000 |
I'm trying to set up a for loop to run a process on pairs of files. The names of the files look like this
36_002_CGATGT_L001_R1_005.fastq.gz
36_002_CGATGT_L001_R2_005.fastq.gz
36_002_CGATGT_L001_R1_002.fastq.gz
36_002_CGATGT_L001_R2_002.fastq.gz
62_013_AGTCAA_L001_R1_003.fastq.gz
62_013_AGTCAA_L001_R2_003.fastq.gz
I need to use each pair in the following command
sickle pe -f 36_002_CGATGT_L001_R1_005.fastq.gz \
-r 36_002_CGATGT_L001_R2_005.fastq.gz\
-o trimmed_36_002_CGATGT_L001_R1_005.fastq.gz\
-p trimmed_36_002_CGATGT_L001_R2_005.fastq.gz\
-s 36_002_CGATGT_L001_singles_005.fastq.gz
To begin with I'm trying:
for n in *R1*; do m='basename $n R2' ; echo $m; done
but clearly this approach isn't working because both the front and back of the file name are important. Do I need to rename files so the R1 and R2 are the last part of the name? Which would be awkward but not impossible
|
If your shell supports the ksh ${var/search/replace} form of parameter expansion (ksh93, zsh, mksh, yash, bash):
for r1 in *R1*; do
r2=${r1/R1/R2}
singles=${r1/R1/singles}
trimmed1=trimmed$r1
trimmed2=trimmed$r2
sickle pe -f "$r1" \
-r "$r2" \
-o "$trimmed1" \
-p "$trimmed2" \
-s "$singles"
done
POSIXly, you could do
r2=${r1%%R1*}R2${r1#*R1}
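Both forms produce the same substitution on one of the question's file names; the first needs a shell with the ksh-style expansion, the second works in any POSIX shell:

```shell
#!/bin/bash
r1=36_002_CGATGT_L001_R1_005.fastq.gz
echo "${r1/R1/R2}"             # ksh-style substitution (bash/ksh/zsh/mksh/yash)
echo "${r1%%R1*}R2${r1#*R1}"   # POSIX prefix/suffix stripping, same result
```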
| for loop when matching both front and back of file name |
1,517,853,531,000 |
There are two steps, that I would like to be run on one line:
twinkle -c
then
call sip:[email protected]
Here is the output:
I wanted to perform these two steps on one line, I tried twinkle -c && call sip:[email protected] and twinkle -c call sip:[email protected] and twinkle -c ; call sip:[email protected] and twinkle -c --immediate --call sip:[email protected]
But they all give this response:
Is there any way to get them on the same line?
EDIT:
The second command is being performed in bash rather than in Twinkle:
EDIT I tried printf %s\\n 'call sip:[email protected]' |twinkle -c which works for one second and then closes itself (closes Twinkle and returns to bash). It should remain in Twinkle for the duration of the call.
|
I guess twinkle is accepting stdin and executing commands. So...
printf %s\\n 'call sip:[email protected]' | cat - /dev/tty |twinkle -c
...should hopefully do it. If, instead, twinkle is one of those that reads /dev/tty explicitly, you can probably do...
printf %s\\n 'call sip:[email protected]' | cat - /dev/tty |
luit -- twinkle -c
...or use perhaps script or screen in place of luit.
Since the former method apparently works for you, the following shell function might make it more simple to run at the command line. You should note, though, that both of the methods in this answer are kind of hacks - I originally wrote this then deleted it after the other answer was edited to include --call. I only undeleted it hours later when comments on the other indicated it wasn't working and I thought this might yet help. If it were me, though, I would try to find out why the other answer doesn't work.
Still, the shell function:
twinksip() while [ -n "$1" ]
do printf 'call sip:%s\n' "$1" |
cat - /dev/tty | twinkle -c || return
shift;done
...which will prepend the call sip: prefix to all of its arguments and print them at twinkle's stdin. It will process in order as many arguments as you give it, which, as I guess, would do many calls in a row - beginning the next when the last one ends.
You'd call it from the prompt like:
twinksip [email protected]
| Perform multiple commands on one line |
1,517,853,531,000 |
I have a program that writes on the standard output a list of string values, one per line, and I would like to display in real time the list of distinct values along with the number of occurrences for each one.
For example, from this output:
apple
carrot
pear
carrot
apple
I need a command which generates this output (ideally updated in real time):
apple: 2
carrot: 2
pear: 1
Any idea?
|
To expand on what @Bratchley said in the comments, if you have your program's output printing to a file, then you can run the watch command in the terminal to get a near-real-time view of the output by including the -n flag like so:
watch -n 0.1 "cat yourprograms.log | sort | uniq -c | sort -rn"
Note: the -n flag sets the refresh interval. The minimum interval watch accepts is 0.1 seconds (1/10 of a second) and nothing smaller.
Example Output:
Every 0.1s: cat yourprograms.log | sort | uniq -c | sort -rn
6 things
4 mine
3 dot
1 below
Including | sort -rn allows for a better sort view. sort -rn sorts the output of uniq -c in reverse numeric order.
If you only care about say the top 10, you can include the head command like so:
watch -n 0.1 "cat yourprograms.log | sort | uniq -c | sort -rn | head"
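The counting pipeline itself can be tried directly on the sample input from the question:

```shell
printf '%s\n' apple carrot pear carrot apple | sort | uniq -c | sort -rn
# the two count-2 fruits come first; "1 pear" sorts last
```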
| Display distinct values of a list and number of occurrences |
1,517,853,531,000 |
How can this be done?
I don't see any applicable option in the manual.
I have positively checked that indentation breaks after ten million lines.
You can check it like:
$ (for i in `seq 0 10000000`; do echo "$i"; done) | nl
I don't often generate so many lines, but I don't want it to break like it does. How can this be done?
|
If you are suggesting that nl should buffer the entire input simply to measure the maximum required number, that is not in the spirit of stream filters at all. With rare exceptions (sort, for instance), core utilities try to process streams immediately -- especially as they may be used in a virtually infinite pipeline (for instance, a logging stream that is incrementally filtered through nl and redirected into a file could accumulate quite a lot of data).
The standard way to handle padding is to simply specify the maximal expected width as a parameter. In this case, you can either turn off padding (I prefer this anyway, it makes sense to just have space separated column at the front), or set a different width. Compare:
seq 0 10000000 | nl -w12 # default right-justify, 12 character width
seq 0 10000000 | nl -w1 # default right-justify, 1 character width (no padding)
seq 0 10000000 | nl -w1 -s' ' # right-justify, space delimited instead of tab
seq 0 10000000 | nl -nln # left-justify
If you really want to do this automatically, just use wc -l to first measure the length and then set the -w appropriately.
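A sketch of that wc -l approach (it reads the input twice, so it only works on regular files; the input file here is generated for illustration):

```shell
cd "$(mktemp -d)"
seq 0 120 > input.txt
lines=$(wc -l < input.txt)
lines=$((lines))    # strip any padding some wc implementations emit
width=${#lines}     # digits needed for the largest line number
nl -w"$width" input.txt | tail -n1
```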
| `nl` invocation soaking all input before numbering |
1,517,853,531,000 |
I am using the tee command to output the compilation errors of a program into a file along with the terminal.
gcc hello.c | tee file.txt
This is the command I have used. The compilation errors are displayed on the terminal but they are not written to the file. How can I redirect standard error to the file as well?
|
With csh, tcsh, zsh or recent versions of bash, try
gcc hello.c |& tee file.txt
where
|& instruct the shell to redirect standard error to standard output.
In other Bourne-like shells:
gcc hello.c 2>&1 | tee file.txt
In rc-like shells:
gcc hello.c >[2=1] | tee file.txt
In the fish shell:
gcc hello.c ^&1 | tee file.txt
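A quick self-contained check that both streams end up in the file (Bourne-shell syntax, run in a scratch directory):

```shell
cd "$(mktemp -d)"
# 2>&1 merges stderr into stdout before the pipe, so tee sees both
{ echo to-stdout; echo to-stderr >&2; } 2>&1 | tee file.txt
grep to-stderr file.txt   # the stderr line made it into the file too
```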
| Output Redirection |
1,517,853,531,000 |
When I run env it shows 3 times /usr/bin under PATH. Same for every path under PATH title. For example - my scala bin directory shows 3 times. However, in my .bash_profile, it is written just one time. Also its not in .bashrc also. I need to make these 3 occurrences to 1, as even though I remove some path under PATH in .bash_profile, it still shows 2 times, which means that path is still set. echo $PATH shows the same thing.
And, if it matters, I am using Mac OSX.
|
OK, so I found the solution. Here is what I was doing: 1) vi ~/.bash_profile, 2) make changes, 3) source ~/.bash_profile to see those changes take effect. It seems that every edit-and-source cycle appends to the PATH of the current session. So, if I made changes three times with a source command after each, the same path shows up three times when I run echo $PATH or env.
Closing and restarting the terminal puts everything back to normal. So it was just a matter of restarting the terminal! Clarification: different platforms may behave differently; I found that Mac OS X 10.7 works this way.
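If you want to collapse duplicates already present in a session without restarting, one possible helper (not from the answer; plain awk over the colon-separated list, keeping the first occurrence of each entry):

```shell
dedup_path() {
    # split $1 on colons, print each entry only the first time it is seen
    printf '%s' "$1" |
        awk -v RS=: '!seen[$0]++ { printf "%s%s", sep, $0; sep=":" }'
}
dedup_path "/usr/bin:/usr/local/bin:/usr/bin:/usr/bin"
# -> /usr/bin:/usr/local/bin
```

You could then run PATH=$(dedup_path "$PATH") in the affected session.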
| env command shows 3 times same path |
1,517,853,531,000 |
When copying multiple files from one directory to another is there a way to get bash to run through each file and ask for y/n? I vaguely remember adding something to the end of my command like 'ok?' to make it do this, but I can't find it anywhere!
|
I suspect that you’re remembering the -ok option to find.
Try something like
find . -name pattern -ok cp {} other_directory \;
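A sketch of how -ok behaves (directory names are illustrative; piping yes just auto-confirms every prompt for the demonstration, interactively you would answer each one yourself):

```shell
cd "$(mktemp -d)"
mkdir src other_directory
touch src/a.txt src/b.log
# -ok prints "< cp ... > ?" on stderr and reads y/n from stdin per match
yes | find src -name '*.txt' -ok cp {} other_directory \; 2>/dev/null
ls other_directory    # a.txt was copied; b.log never matched
```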
| Prompt when copying multiple files? |
1,517,853,531,000 |
Entering the following command prints duplicates as shown below. Not all lines print twice but some do. What gives?
XXXX:~ XXXX$ man -k pid
pid(ntcl) - Retrieve process identifiers
pidpersec.d(1m) - print new PIDs per sec. Uses DTrace
rwbypid.d(1m) - read/write calls by PID. Uses DTrace
syscallbypid.d(1m) - syscalls by process ID. Uses DTrace
git(1) - the stupid content tracker
Sub::Exporter::Cookbook(3pm) - useful, demonstrative, or stupid Sub::Exporter tricks
Tcl_DetachPids(3tcl), Tcl_ReapDetachedProcs(3tcl), Tcl_WaitPid(3tcl) - manage child processes in background
getpid(2), getppid(2) - get parent or calling process identification
pid(ntcl) - Retrieve process identifiers
pidpersec.d(1m) - print new PIDs per sec. Uses DTrace
pthread_setugid_np(2) - Set the per-thread userid and single groupid
rwbypid.d(1m) - read/write calls by PID. Uses DTrace
syscallbypid.d(1m) - syscalls by process ID. Uses DTrace
wait(2), wait3(2), wait4(2), waitpid(2) - wait for process termination
git(1) - the stupid content tracker
XXXX:~ XXXX$
|
I would assume since it's doing a regex search through the descriptions and the man page names that it's finding multiple hits and showing those pages multiple times.
man -k printf
Search the short descriptions and manual page names for the keyword
printf as regular expression. Print out any matches. Equivalent to
apropos -r printf.
If it's that annoying you can filter the output using sort -u.
$ man -k pid|sort -u
getpid (2) - get process identification
getpid (3p) - get the process ID
getpidcon (3) - get SELinux security context of a process
getpidcon_raw (3) - get SELinux security context of a process
getppid (2) - get process identification
getppid (3p) - get the parent process ID
git (1) - the stupid content tracker
mysql_waitpid (1) - kill process and wait for its termination
pidgin (1) - Instant Messaging client
pid (n) - Retrieve process identifiers
pidof (8) - find the process ID of a running program.
pidstat (1) - Report statistics for Linux tasks.
Proc::Killfam (3pm) - kill a list of pids, and all their sub-children
Sub::Exporter::Cookbook (3pm) - useful, demonstrative, or stupid Sub::Exporter tricks
Tcl_DetachPids (3) - manage child processes in background
Tcl_WaitPid (3) - manage child processes in background
waitpid (2) - wait for process to change state
waitpid (3p) - wait for a child process to stop or terminate
If you need to debug your man environment you can always use the -d switch. This will report back the various paths and configurations of your man setup too.
$ man -d
From the config file /etc/man_db.conf:
Mandatory mandir `/usr/man'.
Mandatory mandir `/usr/share/man'.
Mandatory mandir `/usr/local/share/man'.
Path `/bin' mapped to mandir `/usr/share/man'.
Path `/usr/bin' mapped to mandir `/usr/share/man'.
Path `/sbin' mapped to mandir `/usr/share/man'.
Path `/usr/sbin' mapped to mandir `/usr/share/man'.
Path `/usr/local/bin' mapped to mandir `/usr/local/man'.
Path `/usr/local/bin' mapped to mandir `/usr/local/share/man'.
Path `/usr/local/sbin' mapped to mandir `/usr/local/man'.
Path `/usr/local/sbin' mapped to mandir `/usr/local/share/man'.
Path `/usr/X11R6/bin' mapped to mandir `/usr/X11R6/man'.
Path `/usr/bin/X11' mapped to mandir `/usr/X11R6/man'.
Path `/usr/games' mapped to mandir `/usr/share/man'.
Path `/opt/bin' mapped to mandir `/opt/man'.
Path `/opt/sbin' mapped to mandir `/opt/man'.
....
| Why the duplications in the command line output |
1,517,853,531,000 |
How can I allow the traffic to be sent on the loopback device (lo)? What is the iptables command for it?
|
By your question, I presume that you either have default xtables policies of DROP on your chains, or you have explicit DROP/REJECT rules near the end of your chains. Any ACCEPT rules must come before these.
Rule examples:
-A INPUT -i lo -j ACCEPT # accept any traffic coming from lo.
-A OUTPUT -o lo -j ACCEPT # accept any traffic sent to lo.
If you want to play around with testing this, here is a dump to load into iptables-restore. I've explicitly added the -s/-d 127.0.0.1 so you can see what is normally being matched on (you could match -d in OUTPUT and -s in INPUT if you wanted). Also, I've used non-standard ICMP reject responses, so you can easily tell which rule matched your packets. If you change the order of these rules (they accept for now), you can trigger the rejections. You can also try adding another IP like 127.0.0.2/8 to your loopback interface and testing between that IP and the normal 127.0.0.1/8 IP (you'll want to explicitly specify a source IP in ping).
# Generated by iptables-save v1.4.20 on Sat Dec 7 23:10:52 2013
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -s 127.0.0.1/32 -i lo -j ACCEPT
-A INPUT -s 127.0.0.1/32 -i lo -j REJECT --reject-with icmp-host-prohibited
-A OUTPUT -d 127.0.0.1/32 -o lo -j ACCEPT
-A OUTPUT -d 127.0.0.1/32 -o lo -j REJECT --reject-with icmp-net-prohibited
COMMIT
The FORWARD chain is seldom used with loopback interfaces, unless you're doing things with tunneling traffic (and you might bind to loopback locally).
| How is the loopback device traffic allowed? |
1,517,853,531,000 |
Using a command line in terminal, I want to be displayed 1 if a program (for example Firefox or Chromium) is open and 0 otherwise.
Edit: By "open", I mean "is running on the current machine and has a window open on the X server that I am seeing"
|
xwininfo -root -children | grep -q '"Firefox")'
echo "$(($? == 0))"
Would output 1 if there's a window of class Firefox connected to your X server (by any user from any machine).
To limit to Firefox processes local to the machine where you're running that command:
xwininfo -root -children |
awk '/"Firefox"\)/{print $1}' |
xargs -I% xprop -id % WM_CLIENT_MACHINE |
cut -d\" -f2 |
grep -qFx "$(uname -n)"
Searching by process name gives you no guarantee that the processes are actually displaying their window on your X server.
The method described above is consistent with how firefox checks for a currently running firefox when not passed the --no-remote option.
| How to know if a specific program is open |
1,517,853,531,000 |
Let's say I have this series of commands
mysqldump --opt --databases $dbname1 --host=$dbhost1 --user=$dbuser1 --password=$dbpass1
mysqldump --opt --databases $dbname2 --host=$dbhost1 --user=$dbuser1 --password=$dbpass1
mysqldump --opt --databases $dbname3 --host=$dbhost2 --user=$dbuser2 --password=$dbpass2
mysqldump --opt --databases $dbname4 --host=$dbhost2 --user=$dbuser2 --password=$dbpass2
How do I put all their outputs (assuming that the output name is $dbhost.$dbname.sql) inside one file named backupfile.sql.gz using only one line of code?
Edit: From comments to answers below, @arvinsim actually wants a compressed archive file containing the SQL dumps in separate files, not one compressed SQL file.
|
In your comment to @tink's answer you said you want separate files inside the .gz file:
mysqldump --opt --databases $dbname1 --host=$dbhost1 --user=$dbuser1 --password=$dbpass1 > '/var/tmp/$dbhost1.$dbname1.sql' ; mysqldump --opt --databases $dbname2 --host=$dbhost1 --user=$dbuser1 --password=$dbpass1 > '/var/tmp/$dbhost1.$dbname2.sql' ; mysqldump --opt --databases $dbname3 --host=$dbhost2 --user=$dbuser2 --password=$dbpass2 > '/var/tmp/$dbhost1.$dbname3.sql' ; mysqldump --opt --databases $dbname4 --host=$dbhost2 --user=$dbuser2 --password=$dbpass2 > '/var/tmp/$dbhost1.$dbname4.sql' ; cd /var/tmp; tar cvzf backupfile.sql.gz \$dbhost1.\$dbname*.sql
As an alternative output filename I would use backupfile.sql.tgz, so it is clearer to experienced users that this is a compressed tar file
You can append rm \$dbhost1.\$dbname*.sql to get rid of the intermediate files
You could use zip as alternative to compressed tar.
I am not sure why you want this as a one-liner. If you just want to issue one command you should put the lines in a script and execute it from there.
With the 'normal' tools used for something like this (tar, zip), I am not aware of a way to avoid the intermediate files.
Addendum
If you really do not want intermediate files (and assuming the output fits in memory) you could try something like the following Python program. You can write this as a one-liner (python -c "from subprocess import check_output; from cStr....), but I really do not recommend that.
from subprocess import check_output
from cStringIO import StringIO
import tarfile
outputdata = [
('$dbhost1.$dbname1.sql', '$dbname1'),
('$dbhost1.$dbname2.sql', '$dbname2'),
('$dbhost1.$dbname3.sql', '$dbname3'),
('$dbhost1.$dbname4.sql', '$dbname4'),
]
with tarfile.open('/var/tmp/backupfile.sql.tgz', 'w:gz') as tgz:
for outname, db in outputdata:
cmd = ['mysqldump', '--opt', '--databases']
cmd.append(db)
cmd.extend(['--host=$dbhost1', '--user=$dbuser1', '--password=$dbpass1'])
out = check_output(cmd)
buf = StringIO(out)
buf.seek(0)
tarinfo = tarfile.TarInfo(name=outname)
tarinfo.size = len(out)
tgz.addfile(tarinfo=tarinfo, fileobj=buf)
Depending on how regular your database and 'output' names are you can further improve on that.
| Chaining mysqldumps commands to output a single gzipped file |
1,517,853,531,000 |
I am looking for a website that contains a reference between how to do things on the command line in different unix OSs.
I have seen such site before, I am just unable to find it.
I know such a site would be beneficial for the community.
|
The Unix Rosetta Stone (resource for sysadmins) might be the one you had in mind.
| Cross Unix Command Reference |
1,517,853,531,000 |
I need to read lines from STDIN and process them. I could do something like:
<print lines to STDOUT> | while read line; do
<process $line>
done
But this means that first all lines are printed and piped and then they are processed. I want to process every single line immediately, eventually before the next lines are sent to STDOUT.
This might be useful if the lines should be processed while they are generated (e.g. a log file). My task is to use zbar (warning! the web page crashes Firefox...) to scan several QR codes and to open its URLs in Firefox.
If you run zbarcam --raw, the webcam is used to scan QR codes. If a QR code is found, the URL is printed to STDOUT immediately, but zbarcam does not terminate. So I'd like to read this line and run firefox $url or so (but without waiting for other URLs).
I've found a solution, but I'm wondering if this can't be done easier. Basically he writes the output of zbarcam to a temporary file and waits until something is written there:
zbarcam --raw > "$tmp" &
...
while [[ ! -s $tmp ]] ; do
sleep 1
done
Can this be done with out temporary files and without a loop with sleep 1?
|
Your first attempt does work:
tail -f temp.file | while read LINE; do firefox $LINE; done
Then in another terminal, append to the temp file so that tail -f prints a line to STDOUT:
echo 'google.com' >> temp.file
Every time you do this, firefox will open a new tab.
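The same line-at-a-time behavior works without tail by piping a producer straight into the loop; with zbarcam the pipeline would be zbarcam --raw | while read ... (untested here, since it needs a webcam). A generic demonstration with a slow producer:

```shell
# each line is handled the moment the producer emits it,
# not after the producer terminates
{ echo one; sleep 1; echo two; } | while IFS= read -r line; do
    echo "handled: $line"
done
```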
| Reading from STDIN and using data immediatly (while still reading) or opening a URL with zbar |
1,517,853,531,000 |
I have 4 large USB devices with lots of backups collected over the years.
I want to search for all .Trash folders and delete their contents on Fedora 17. I tried the following, which failed:
# find . -name ".Trash*"-exec rm -rf {} \;
find: paths must precede expression: rm
Usage: find [-H] [-L] [-P] [-Olevel] [-D help|tree|search|stat|rates|opt|exec] [path...] [expression]
Any hints appreciated!
|
To expand on @rkosegi's answer:
# find . -depth -name ".Trash*" -exec rm -rf {} \;
Use -depth so that find doesn't try to descend the now deleted directory.
| Search and delete .Trash |
1,517,853,531,000 |
I know when you are in a man page it will display the file name at the bottom inside the man.
How can I do the same thing manually?
|
I'm not sure it is possible to do everything you ask, because man(1) sends the formatted man page data to your pager program via a pipe. This would prevent showing a file name, for one thing.
You can get a line count at least like so:
Set your MANPAGER or PAGER environment variable to less.
Add -M to your LESS environment variable, to get the "long prompt", which includes the line count.
Instead of -M, you can build your own less prompt with the -P option to get even more details. Again, though, there are some things in what you ask that less simply won't have access to when acting as man's pager program.
| How can I display file info of a man? |
1,517,853,531,000 |
How does one kill a printing job on centOS?
|
There are two command line interfaces to printing:
In the BSD interface, use lpr to print, lpq to view pending jobs, lprm to cancel a job.
In the System V interface, use lp to print, lpstat to view pending jobs, cancel to cancel ongoing jobs.
There are several printing systems available for Linux and other unices. CUPS is the most common one nowadays. It comes with a System V interface by default, and has a BSD interface that may or may not be installed. If you don't have CUPS and are running Linux or *BSD, you have a BSD system.
Different printing systems have different sets of options and other commands, but they are similar enough for simple cases. To cancel a printing job, use lpq or lpstat (whichever is available, or either if both are available) to see the job number, then lprm or cancel to cancel the job.
With CUPS, if you need to cancel a job really fast, cancel -a will cancel all your pending jobs. Most implementations of lprm will cancel the job currently printing on the default printer if called with no argument.
| How to kill a printing job on centOS |
1,517,853,531,000 |
I have a question from this thread How to run a bash script by double clicking by entering the path in sudoers?. Since it was explained to me that it is unsafe to grant sudo privileges to the script.sh file, I decided to find a way to double-click the script.sh file without granting sudo privileges to the file itself. And to make it safe.
I'm using Debian 12 with the KDE Plasma desktop environment. To run the command sudo sh /path/to/script.sh, you need to go to a terminal, type sudo sh /path/to/script.sh, and then enter the password. I want to have a file I can double click in the UI which will then run the sudo sh /path/to/script.sh, so all I need to do after double clicking is just enter the password.
There are commands inside the script that only work with sudo, such as ip tuntap add dev or route add. How can I do this?
Thanks.
|
EDIT
Since you edited your comment to provide more information, I can now give you a complete answer. I verified with my friend who runs KDE plasma, and he says that the default behavior is that double-clicking a script on the desktop will launch it, provided that the executable bit is set. It doesn't launch it in a terminal, so the script has to create its own terminal. With that in mind, here are the instructions:
Add a shebang to your script
Add the line #!/bin/sh to the very start of your script. This tells you system which shell to use to execute your file.
Make the file executable
Right click on the file and under permissions, tick the checkbox that says executable. Alternatively, run chmod +x /path/to/script.sh.
Prefix every command (that requires it) with sudo inside the script
Make sure the script opens in a terminal
You can do this by adding the following snippet at the start of your script (but after the shebang):
test "$IN_TERM" || {
export IN_TERM=1
konsole --hold -e "$0"
exit 0
} && true
This uses Konsole, KDE's default terminal. The --hold flag makes sure that the terminal stays open after the script is finished, so that you can verify that it runs correctly.
That's it, everything should work!
Example
Just to get an overview of what everything should look like, here is a similar script I have on my system. Instead of changing network settings, it changes my laptop's charging behavior, but the overall idea is similar.
#!/bin/sh
set -x # Print all commands before running them
set -e # Exit on first error
test "$IN_TERM" || {
export IN_TERM=1
alacritty --hold -e "$0" # I use alacritty as my terminal.
exit 0
} && true
sudo -v # Ask for password, but don't run any command
# Subsequent invocations of `sudo` won't ask for password
sudo tpacpi-bat -s ST 1 0
sudo tpacpi-bat -s SP 1 0
echo 'Done!'
Security
In the comments, you were mentioning that you were worried about putting sudo inside the script itself. I'm not a security researcher, but let me assure you that this is completely fine and secure. sudo will still prompt the user for the password. Any other user on the system, or a malicious program running with your user's permissions, won't be able to take advantage of your script in any way. Putting sudo in scripts may be frowned upon, but for reasons unrelated to security (e.g. putting sudo in a script may be problematic if you intend to distribute this script to other people, since not everybody has sudo installed).
Old answer
As the comments pointed out, we can't really help you, since you didn't tell us what distro / desktop environment you're using.
However, I can give you some pointers.
Move the sudo inside the script
I think in this situation it would be best for you to move the sudo invocation into the script itself. You can do it by putting something like this at the start of the script:
test "$(id -u)" -eq 0 || {
sudo sh "$0"
exit 0
}
Another option is to create a second script that launches the first with sudo. And a third option is to simply prefix any of the root-only commands in your script with sudo. It will only ask for the password once.
Set the executable bit
In the terminal, run chmod +x /path/to/script.sh. This makes the file executable. You can test that it works by launching the script with /path/to/script.sh as opposed to sh /path/to/script.sh. In some desktop environments, this may also make the script runnable by double-clicking it through the GUI.
Set the correct shebang
Make sure your script starts with #!/bin/sh (or #!/bin/bash (or #!/usr/bin/env bash) if you script uses bash-only features)
Dig around in your distro's settings app
Double-clicking to launch scripts may be disabled for security reasons. Try looking for this option in your file manager's or desktop environment's settings.
Create a .desktop file.
If the above fails, you can create a file called myscript.desktop with the following contents (and edit the Type, Comment, and Exec fields):
[Desktop Entry]
Version=1.0
Type=Application
Name=your_human_friendly_name
Comment=your_human_friendly_comment
Exec=/path/to/script.sh
Terminal=false
StartupNotify=false
Categories=Application;
You can now double-click this file to run your script. Note that you will need to move the sudo into your script. If you need your script to launch inside a terminal, change the Terminal field. Note that your script needs to have the executable bit set for this approach to work.
Another bonus of .desktop files is that if you put it into /usr/share/applications or ~/.local/share/applications, it will also be accessible through your application launcher.
Also, there is a tool called gendesk to automatically make .desktop files for you, but writing them by hand also works.
Use gksudo
If you do figure out how to achieve the behavior that you want, it may still be impossible to run your script if your desktop environment launches it without a terminal, since there would be no way for you to type your password for sudo. Consider installing the package gksudo (it may have a different name depending on the distro) and using it instead of sudo. It works the same way, but creates a GUI password entry box instead of using the terminal.
Make the script launch its own terminal
In case you need the script to launch in a terminal for reasons other than interacting with sudo (e.g. displaying output to the user), you can put this at the beginning of your script (but after the shebang) to make sure it always launches in the terminal:
test "$IN_TERM" || {
export IN_TERM=1
alacritty --hold -e "$0"
exit 0
} && true
Please note that the above snippet uses alacritty, which is my preferred terminal. Change that to whatever terminal you use. Also note the --hold and -e flags, which are also alacritty-specific. -e is actually -c on most other terminals.
Use rofi
This doesn't answer your question, but I think you might find it useful. If you find yourself having a lot of different scripts that you need to be able to launch quickly, consider rofi. It will allow you to have a searchable menu of all your scripts, ready to launch. To use rofi this way, you first need to create a script (let's call it rofi_launch.sh) that looks something like this:
#!/bin/bash
if [ -z "$1" ]; then
ls /path/to/where/you/store/your/scripts
else
FILE="/path/to/where/you/store/your/scripts/$1"
killall rofi
"$FILE" & disown
fi
Then, launch rofi with rofi -show RUN:/path/to/rofi_launch.sh (you can bind this to a keyboard shortcut in your desktop environment's settings) and you should have a menu from where you can launch any of your scripts.
| How to run the command "sudo sh /path/to/script.sh" by double clicking? |
1,517,853,531,000 |
I'm trying to set up a simple script to run on a cron job that runs in the background and notifies any open terminals of the outcome using the wall command. However, when testing, I don't get any output at all. I'm using WSL Ubuntu and zsh via the Terminal app from Microsoft.
Running tty and w returns the below, while who returns nothing at all.
hardya@GBH-HARDYA1 ~ tty
/dev/pts/4
hardya@GBH-HARDYA1 ~ w
09:16:01 up 3 days, 4:30, 0 users, load average: 0.00, 0.00, 0.00
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
Any ideas?
|
@EdgarMagallon is correct on two counts in the comments:
This is related to the who command issue mentioned on Ask Ubuntu
The owner of that answer (me) will ... ;-)
Ok, so yes, the wall command does appear to use the same mechanism as who for determining the logged-in users and their terminals.
So to get it to work under WSL2:
You'll need a recent release of WSL that reports 0.67.6 or higher (preferably 1.0.3 or higher) in wsl --version. If it returns invalid command, make sure that your Windows is fully updated, then install the new Windows Subsystem for Linux from the Microsoft Store.
Enable Systemd as noted in this answer
After restarting WSL and confirming that Systemd is enabled, run su - $USER to force a "login" of your user.
Then, in another terminal, you can run:
sudo wall Wallaby # Or a real message
And it should display in the logged-in terminal. Keep in mind that sudo is required on many distributions. It used to be that wall was setgid on Ubuntu and Debian, but that doesn't appear to be the case any longer.
w and who should also show your logged-in user.
Of course, this doesn't necessarily help you achieve your desired script, since WSL doesn't force logins. Some possibilities for that:
Force a su - $USER login in shells where you want to receive the notifications.
Write a Zsh prompt function to monitor a file (or perhaps fifo?) and update the prompt with the "completion" information (until cleared). Note that I find this easier in Fish, but I feel certain it would be possible in Zsh (or even Bash) as well.
Run tmux in your terminal(s) and use tmux display-message to notify of the completion.
| WSL2 Ubuntu - wall command |
1,665,540,504,000 |
What specific syntax needs to be changed in the aws s3api cli command below in order for the bash shell to interpret the command properly?
The environment is an ubuntu-latest GitHub runner executing a GitHub workflow using a bash shell.
The command that is breaking in the GitHub Ubuntu runner is:
aws s3api put-object-tagging --bucket s3.bucket.name --key filename.tar.gz --tagging TagSet={Key=public,Value=yes}
The error being thrown is:
Unknown options: TagSet=Value=yes
The same identical command works perfectly in a windows laptop using cmd.exe, so the code is a valid aws cli command.
The problem might be related to the GitHub workflow syntax for environment variables in bash which looks like ${envVarName}. Or is something else the problem?
|
You just need to quote your arguments:
aws s3api put-object-tagging --bucket s3.bucket.name --key filename.tar.gz \
--tagging "TagSet={Key=public,Value=yes}"
The syntax {a,b,c} in bash indicates brace expansion:
Brace expansion is a mechanism by which arbitrary strings may be generated. This mechanism is similar to pathname expansion, but the filenames generated need not exist. Patterns to be brace expanded take the form of an optional preamble, followed by either a series of comma-separated strings or a sequence expression between a pair of braces, followed by an optional post‐script. The preamble is prefixed to each string contained within the braces, and the post‐script is then appended to each resulting string, expanding left to right. (from the bash(1) man page)
So if we write:
echo TagSet={Key=public,Value=yes}
We get as output:
TagSet=Key=public TagSet=Value=yes
By quoting the argument we inhibit brace expansion.
| bash unable to interpret cli command that uses curly braces in GitHub Ubuntu |
1,665,540,504,000 |
The issue seems to stem from a misconfigured web server and has affected some domains I've come across in lynx and w3m, though links can access them in at least some instances. Can this be resolved on the user side?
403 Forbidden
-------------------------------------------------------------------------------------
nginx
|
Try changing your text web browsers User Agent to something modern, e.g.
Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:100.0) Gecko/20100101 Firefox/100.0
on lynx this is done using --useragent option like
lynx -useragent="Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:100.0) Gecko/20100101 Firefox/100.0"
| How to get around 403 errors on websites with text-based browsers? |
1,665,540,504,000 |
In a Unix command line context I would like to compare two truly huge files (around 1TB each), preferable with a progress indicator.
I have tried diff and cmp, and they both crashed the system (macOS Mojave), let alone giving me a progress bar.
What's the best way to compare these very large files?
Additional Details:
I just want to check that they are identical.
cmp crashed the system in a way that the system did restart by itself. :-( Maybe the system ran out of memory?
|
You can use pv as a progress indicator, and pipe that into shasum to compare checksums and see if the files are identical.
pv file1 | shasum
1.08MiB 0:00:00 [57.5MiB/s] [====================================>] 100%
303462e848ecbec5f8ab12718fa6239713eda1c6 -
pv file2 | shasum
1.08MiB 0:00:00 [57.5MiB/s] [====================================>] 100%
303462e848ecbec5f8ab12718fa6239713eda1c6 -
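If you only need a yes/no answer, another option is to stream one file through pv into cmp, which stops at the first differing byte instead of hashing both files in full. A minimal sketch (the tiny demo files are just stand-ins for the real ones; it falls back to plain cat when pv isn't installed):

```shell
# Create two identical demo files (stand-ins for the real 1TB files).
printf 'same contents\n' > file1
cp file1 file2

reader=cat
command -v pv >/dev/null 2>&1 && reader=pv   # pv reports progress on stderr

# cmp reads file1 from the pipe and compares it byte-for-byte with file2.
if "$reader" file1 | cmp -s - file2; then
    echo identical
else
    echo different
fi
```

On the real files this reads each file only once, and cmp exits as soon as a difference appears, which can be much faster than hashing when the files differ early.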
| How to compare huge files with progress information |
1,665,540,504,000 |
I can explain my problem with an example, let us take an example of
man command: when we run this command in the terminal, it opens the page on a new screen, and you cannot see what you had done before in your terminal. How is that done?
I am working on a terminal application. When I type the name of the application, I want it to open up on a new page in a similar way.
edit:
Take the example of Vim: when we type vim, Vim opens in the terminal and has its interface on the screen. How can I do that with the application I am making?
|
What you're asking about is called the alternate screen buffer, and applications switch to and from the alternate screen by sending ESC codes to the terminal.
If your app is using an ncurses library, there will be functions to do this. If you want to do it from a shell script, you can use tput to send the appropriate codes.
tput smcup # switch to alt screen
tput rmcup # switch back from alt screen
NOTE: most, but not all, terminal emulators support this. Those that don't (or which have it disabled, which is an option in some terminal emulators) just ignore the codes.
| Adding pages to terminal applications |
1,665,540,504,000 |
I am trying to compile a file using a makefile but for some reason, I am getting an error in /bin/sh, I am getting the following:
nvc FLAGS(LDFLAGS) black_scholes.o gaussian.o main.o parser.o random.o dcmt0.4/lib/random_seed.o timer.o util.o -o hw1.x
/bin/sh: -c: line 0: syntax error near unexpected token `('
/bin/sh: -c: line 0: `nvc FLAGS(LDFLAGS) black_scholes.o gaussian.o main.o parser.o random.o dcmt0.4/lib/random_seed.o timer.o util.o -o hw1.x'
make: *** [Makefile:16: hw1.x] Error 1
Here is the content of my makefile:
LDFLAGS += -Ldcmt0.4/lib -ldcmt
include Makefile.include
HW1_INCS = black_scholes.h gaussian.h parser.h random.h timer.h util.h
HW1_C_SRCS = black_scholes.c gaussian.c main.c parser.c random.c dcmt0.4/lib/random_seed.c timer.c util.c
HW1_C_OBJS = $(HW1_C_SRCS:.c=.o)
HW1_EXE = hw1.x
all: hw1.x
%.o: %.c
$(CC) -c $(CCFLAGS) $(ACCFLAGS) $< -o $@
hw1.x: $(HW1_C_OBJS) dcmt0.4/lib/libdcmt.a
$(CC) $LFLAGS$ (LDFLAGS) $(HW1_C_OBJS) -o $@
dcmt0.4/lib/libdcmt.a:
make -C dcmt0.4/lib
black_scholes.o: black_scholes.c black_scholes.h gaussian.h random.h util.h
gaussian.o: gaussian.c gaussian.h util.h
main.o: main.c black_scholes.h parser.h random.h timer.h
parser.o: parser.c parser.h
random.o: random.c random.h
dcmt0.4/lib/random_seed.o: dcmt0.4/lib/random_seed.c
timer.o: timer.c timer.h
util.o: util.c util.h
clean:
make -C dcmt0.4/lib clean
rm -f $(HW1_C_OBJS) $(HW1_EXE)
and here is the content of my makefile.include
CC = nvc
LINKER = nvc
LDFLAGS = -lm
I don't really understand where the error is in the first place, as errors usually indicate a problem in the makefile itself and not in /bin/sh. Any help understanding or fixing the error would be appreciated.
|
The errors are in this line:
$(CC) $LFLAGS$ (LDFLAGS) $(HW1_C_OBJS) -o $@
$LFLAGS is interpreted as $L followed by FLAGS; then $ (LDFLAGS) is interpreted as $ (the value of the variable whose name is a single space) followed by (LDFLAGS), which is why you get the FLAGS(LDFLAGS) output.
To fix it, use
$(CC) $(LDFLAGS) $(HW1_C_OBJS) -o $@
| Getting error in /bin/sh when trying to use a makefile |
1,665,540,504,000 |
I'm a beginner in Linux world. Today I encountered something really weird. I used zcat command on a .zip file (this one, it's a manual for a motherboard: https://download.msi.com/archive/mnu_exe/E7A70v1.0.zip). It printed output to the terminal as expected. What surprised me the most is the fact that afterwards my printer started printing binary data as text. It printed about half a page and then stopped. Can anyone tell me what happened? How is it possible? I'm using Manjaro, bash and urxvt.
Command that I used:
zcat E7A70v1.0.zip
|
The output contains (among other things) a valid escape sequence telling urxvt to print the current screen: ESC[i
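If you're curious, you can emit the same bytes yourself: the culprit is the ANSI "Media Copy" control sequence, ESC [ i, which asks the terminal to print the current screen (most terminals ignore it unless printing is enabled):

```shell
# ESC [ i is the ANSI "Media Copy" sequence; \033 is the ESC byte.
# A terminal with printing enabled reacts to these three bytes.
printf '\033[i'
```

Any binary file containing those three bytes in a row can trigger the same behavior when dumped to the terminal.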
| Literally printing with zcat |
1,665,540,504,000 |
I'm looking for a way to run a command so that it affects only subdirectories of the current directory, not the current directory itself.
find -name ":RUN" -exec rm -rf {} \;
Say I have a directory named level0 that has a subdirectory named level1. If I run the above example command from level0, then matches directly inside level0 are affected as well as those inside level1.
How can I make the command, run from level0, affect only the contents of level1?
|
When you say
find -something …
you are effectively saying
find . -something …
i.e., searching starting in . (the current directory).
You want to search only in subdirectories, so do
find ./*/ -name ":RUN" -exec rm -rf {} \;
This will not find subdirectories whose names begin with a dot (.).
If you want to include such directories, and you are using bash,
do shopt -s dotglob first.
P.S. Naively, find */ -… is equivalent to find ./*/ -….
It’s safer to use ./*/ in case there are files
whose names begin with -.
| How to handle with command to execute only it's subdirectory? [duplicate] |
1,665,540,504,000 |
Synaptic has the option to search from different sources (Package name, description, name and description, etc)... But if you have a package installed in your system, Synaptic can show you what files are attached to that installation.
So, what command can extract the list of all files installed by every installed package, so that I can search in it?
For example, yesterday I wanted to know which packages have installed ICC profiles on my system, but I had to do it manually (with Synaptic filters), reading through the installed files of each package... and I don't know if there are more .ICC files.
The sample output of the command I'm requesting is:
$ ./search --show-origin --show-package '*.icc'
buster-backports krita /usr/share/colors/icc/krita/cmyk.icc
buster colord /usr/share/colors/icc/colord/sRGB.icc
bullseye ghostscript /usr/share/colors/icc/ghostscript/ps_cmyk.icc
|
You could search for filenames in packages with dpkg -S:
-S, --search filename-search-pattern...
Search for a filename from installed packages.
$ dpkg -S '*.icc'
colord-data: /usr/share/color/icc/colord/x11-colors.icc
libgs9-common: /usr/share/color/icc/ghostscript/lab.icc
libgs9-common: /usr/share/color/icc/ghostscript/scrgb.icc
colord-data: /usr/share/color/icc/colord/Gamma6500K.icc
libgs9-common: /usr/share/color/icc/ghostscript/esrgb.icc
colord-data: /usr/share/color/icc/colord/CIE-RGB.icc
colord-data: /usr/share/color/icc/colord/Gamma5000K.icc
colord-data: /usr/share/color/icc/colord/ProPhotoRGB.icc
colord-data: /usr/share/color/icc/colord/EktaSpacePS5.icc
colord-data: /usr/share/color/icc/colord/ECI-RGBv2.icc
libgs9-common: /usr/share/color/icc/ghostscript/srgb.icc
libgs9-common: /usr/share/color/icc/ghostscript/sgray.icc
colord-data: /usr/share/color/icc/colord/BetaRGB.icc
libgs9-common: /usr/share/color/icc/ghostscript/ps_rgb.icc
colord-data: /usr/share/color/icc/colord/AppleRGB.icc
libgs9-common: /usr/share/color/icc/ghostscript/default_gray.icc
colord-data: /usr/share/color/icc/colord/BruceRGB.icc
colord-data: /usr/share/color/icc/colord/Gamma5500K.icc
libgs9-common: /usr/share/color/icc/ghostscript/a98.icc
libgs9-common: /usr/share/color/icc/ghostscript/ps_gray.icc
colord-data: /usr/share/color/icc/colord/Rec709.icc
libgs9-common: /usr/share/color/icc/ghostscript/default_rgb.icc
colord-data: /usr/share/color/icc/colord/AdobeRGB1998.icc
colord-data: /usr/share/color/icc/colord/WideGamutRGB.icc
libgs9-common: /usr/share/color/icc/ghostscript/default_cmyk.icc
colord-data: /usr/share/color/icc/colord/ECI-RGBv1.icc
colord-data: /usr/share/color/icc/colord/sRGB.icc
colord-data: /usr/share/color/icc/colord/NTSC-RGB.icc
colord-data: /usr/share/color/icc/colord/BestRGB.icc
colord-data: /usr/share/color/icc/colord/DonRGB4.icc
colord-data: /usr/share/color/icc/colord/ColorMatchRGB.icc
colord-data: /usr/share/color/icc/colord/SwappedRedAndGreen.icc
colord-data: /usr/share/color/icc/colord/Bluish.icc
libgs9-common: /usr/share/color/icc/ghostscript/rommrgb.icc
libgs9-common: /usr/share/color/icc/ghostscript/gray_to_k.icc
colord-data: /usr/share/color/icc/colord/PAL-RGB.icc
colord-data: /usr/share/color/icc/colord/Crayons.icc
colord-data: /usr/share/color/icc/colord/SMPTE-C-RGB.icc
libgs9-common: /usr/share/color/icc/ghostscript/ps_cmyk.icc
| How to search for a file from the list of installed files of each package installed in the system |
1,665,540,504,000 |
What command will pipe a man page to Kate text-editor without writing anything to the hard drive?
I've seen examples that create a temp file (on the file system) and then open that tmp file with a graphical text editor.
However, is it possible to accomplish this task in RAM alone, without writing to the file system?
|
kate can read from standard input with option -i or --stdin
man foo | kate -i
source: kwrite -h
-i, --stdin Read the contents of stdin.
Also, -l may be useful (for example, go to line one with -l1):
-l, --line <line> Navigate to this line.
| Pipe a Man Page to Kate Without Writing to Hard Drive |
1,665,540,504,000 |
I have several lists with two fields: the first field contains a URL, the second an email address (an account). The second field is the same for all entries in a given list.
I concatenate the lists into one list and sort it by the first field. Most entries are unique, but some are duplicates or triplicates (i.e. the URL was in the list for multiple accounts).
Is there a command or script that I can use to join the duplicates, so that the second field becomes a list of accounts where required?
For example:
url1 acct2
url2 acct1
url3 acct1
url3 acct2
url4 acct2
url4 acct3
url4 acct5
...
Should become:
url1 acct2
url2 acct1
url3 acct1 acct2
url4 acct2 acct3 acct5
...
|
With sort + awk pipeline:
sort -k1,1 file \
| awk 'url && $1 != url{ print url, acc }
{ acc = ($1 == url? acc FS:"") $2; url = $1 }END{ print url, acc }' OFS='\t'
Sample output:
url1 acct2
url2 acct1
url3 acct1 acct2
url4 acct2 acct3 acct5
| List sorted on 1st field, how can I join 2nd field on lines where 1st field is the same? |
1,665,540,504,000 |
This is a similar question to this one
I want to do the word count but this time, using an array.
For example, I have the following IPs inside a bash array called IPS.
IPS=("1.1.1.1" "5.5.5.5" "3.3.3.3" "1.1.1.1" "2.2.2.2" "5.5.5.5" "1.1.1.1")
If I read its contents:
user@server~$ echo "${IPS[*]}"
1.1.1.1 5.5.5.5 3.3.3.3 1.1.1.1 2.2.2.2 5.5.5.5 1.1.1.1
I would like to have something similar to this:
3 1.1.1.1
2 5.5.5.5
1 3.3.3.3
1 2.2.2.2
|
try:
printf '%s\n' "${IPS[@]}" |sort |uniq -c |sort -rn |sed 's/^ *//'
3 1.1.1.1
2 5.5.5.5
1 3.3.3.3
1 2.2.2.2
related:
Printing an array to a file with each element of the array in a new line in bash
Why is printf better than echo?
| Bash - sort and uniq on array |
1,665,540,504,000 |
I am trying to calculate a PBKDF2 hash, but am getting inconsistent results.
Message: Hello
Salt: 60C100D05C610E8B94A854DFC0789885
Iterations: 1
Key length: 16
Expected hash: 584519EF3E56714E301A4D85F972B6B4
nettle-pbkdf2 gives a951d3cd9014e0c0 527000727c1e928a
https://asecuritysite.com/encryption/PBKDF2z and CryptoJS gives 584519EF3E56714E301A4D85F972B6B4
How can I use nettle-pbkdf2 or any other CLI program to generate the expected hash 584519EF3E56714E301A4D85F972B6B4?
Reproduction steps below:
nettle-pbkdf2
$ printf "Hello" | nettle-pbkdf2 --iterations=1 --length=16 --hex-salt 60C100D05C610E8B94A854DFC0789885
> a951d3cd9014e0c0 527000727c1e928a
https://asecuritysite.com/encryption/PBKDF2z
Message: Hello
Salt: 60C100D05C610E8B94A854DFC0789885
Iterations: 1
Key length: 16
Hash: 584519EF3E56714E301A4D85F972B6B4
CryptoJS
<script>
/*
CryptoJS v3.1.2
code.google.com/p/crypto-js
(c) 2009-2013 by Jeff Mott. All rights reserved.
code.google.com/p/crypto-js/wiki/License
*/
var CryptoJS=CryptoJS||function(g,j){var e={},d=e.lib={},m=function(){},n=d.Base={extend:function(a){m.prototype=this;var c=new m;a&&c.mixIn(a);c.hasOwnProperty("init")||(c.init=function(){c.$super.init.apply(this,arguments)});c.init.prototype=c;c.$super=this;return c},create:function(){var a=this.extend();a.init.apply(a,arguments);return a},init:function(){},mixIn:function(a){for(var c in a)a.hasOwnProperty(c)&&(this[c]=a[c]);a.hasOwnProperty("toString")&&(this.toString=a.toString)},clone:function(){return this.init.prototype.extend(this)}},
q=d.WordArray=n.extend({init:function(a,c){a=this.words=a||[];this.sigBytes=c!=j?c:4*a.length},toString:function(a){return(a||l).stringify(this)},concat:function(a){var c=this.words,p=a.words,f=this.sigBytes;a=a.sigBytes;this.clamp();if(f%4)for(var b=0;b<a;b++)c[f+b>>>2]|=(p[b>>>2]>>>24-8*(b%4)&255)<<24-8*((f+b)%4);else if(65535<p.length)for(b=0;b<a;b+=4)c[f+b>>>2]=p[b>>>2];else c.push.apply(c,p);this.sigBytes+=a;return this},clamp:function(){var a=this.words,c=this.sigBytes;a[c>>>2]&=4294967295<<
32-8*(c%4);a.length=g.ceil(c/4)},clone:function(){var a=n.clone.call(this);a.words=this.words.slice(0);return a},random:function(a){for(var c=[],b=0;b<a;b+=4)c.push(4294967296*g.random()|0);return new q.init(c,a)}}),b=e.enc={},l=b.Hex={stringify:function(a){var c=a.words;a=a.sigBytes;for(var b=[],f=0;f<a;f++){var d=c[f>>>2]>>>24-8*(f%4)&255;b.push((d>>>4).toString(16));b.push((d&15).toString(16))}return b.join("")},parse:function(a){for(var c=a.length,b=[],f=0;f<c;f+=2)b[f>>>3]|=parseInt(a.substr(f,
2),16)<<24-4*(f%8);return new q.init(b,c/2)}},k=b.Latin1={stringify:function(a){var c=a.words;a=a.sigBytes;for(var b=[],f=0;f<a;f++)b.push(String.fromCharCode(c[f>>>2]>>>24-8*(f%4)&255));return b.join("")},parse:function(a){for(var c=a.length,b=[],f=0;f<c;f++)b[f>>>2]|=(a.charCodeAt(f)&255)<<24-8*(f%4);return new q.init(b,c)}},h=b.Utf8={stringify:function(a){try{return decodeURIComponent(escape(k.stringify(a)))}catch(b){throw Error("Malformed UTF-8 data");}},parse:function(a){return k.parse(unescape(encodeURIComponent(a)))}},
u=d.BufferedBlockAlgorithm=n.extend({reset:function(){this._data=new q.init;this._nDataBytes=0},_append:function(a){"string"==typeof a&&(a=h.parse(a));this._data.concat(a);this._nDataBytes+=a.sigBytes},_process:function(a){var b=this._data,d=b.words,f=b.sigBytes,l=this.blockSize,e=f/(4*l),e=a?g.ceil(e):g.max((e|0)-this._minBufferSize,0);a=e*l;f=g.min(4*a,f);if(a){for(var h=0;h<a;h+=l)this._doProcessBlock(d,h);h=d.splice(0,a);b.sigBytes-=f}return new q.init(h,f)},clone:function(){var a=n.clone.call(this);
a._data=this._data.clone();return a},_minBufferSize:0});d.Hasher=u.extend({cfg:n.extend(),init:function(a){this.cfg=this.cfg.extend(a);this.reset()},reset:function(){u.reset.call(this);this._doReset()},update:function(a){this._append(a);this._process();return this},finalize:function(a){a&&this._append(a);return this._doFinalize()},blockSize:16,_createHelper:function(a){return function(b,d){return(new a.init(d)).finalize(b)}},_createHmacHelper:function(a){return function(b,d){return(new w.HMAC.init(a,
d)).finalize(b)}}});var w=e.algo={};return e}(Math);
(function(){var g=CryptoJS,j=g.lib,e=j.WordArray,d=j.Hasher,m=[],j=g.algo.SHA1=d.extend({_doReset:function(){this._hash=new e.init([1732584193,4023233417,2562383102,271733878,3285377520])},_doProcessBlock:function(d,e){for(var b=this._hash.words,l=b[0],k=b[1],h=b[2],g=b[3],j=b[4],a=0;80>a;a++){if(16>a)m[a]=d[e+a]|0;else{var c=m[a-3]^m[a-8]^m[a-14]^m[a-16];m[a]=c<<1|c>>>31}c=(l<<5|l>>>27)+j+m[a];c=20>a?c+((k&h|~k&g)+1518500249):40>a?c+((k^h^g)+1859775393):60>a?c+((k&h|k&g|h&g)-1894007588):c+((k^h^
g)-899497514);j=g;g=h;h=k<<30|k>>>2;k=l;l=c}b[0]=b[0]+l|0;b[1]=b[1]+k|0;b[2]=b[2]+h|0;b[3]=b[3]+g|0;b[4]=b[4]+j|0},_doFinalize:function(){var d=this._data,e=d.words,b=8*this._nDataBytes,l=8*d.sigBytes;e[l>>>5]|=128<<24-l%32;e[(l+64>>>9<<4)+14]=Math.floor(b/4294967296);e[(l+64>>>9<<4)+15]=b;d.sigBytes=4*e.length;this._process();return this._hash},clone:function(){var e=d.clone.call(this);e._hash=this._hash.clone();return e}});g.SHA1=d._createHelper(j);g.HmacSHA1=d._createHmacHelper(j)})();
(function(){var g=CryptoJS,j=g.enc.Utf8;g.algo.HMAC=g.lib.Base.extend({init:function(e,d){e=this._hasher=new e.init;"string"==typeof d&&(d=j.parse(d));var g=e.blockSize,n=4*g;d.sigBytes>n&&(d=e.finalize(d));d.clamp();for(var q=this._oKey=d.clone(),b=this._iKey=d.clone(),l=q.words,k=b.words,h=0;h<g;h++)l[h]^=1549556828,k[h]^=909522486;q.sigBytes=b.sigBytes=n;this.reset()},reset:function(){var e=this._hasher;e.reset();e.update(this._iKey)},update:function(e){this._hasher.update(e);return this},finalize:function(e){var d=
this._hasher;e=d.finalize(e);d.reset();return d.finalize(this._oKey.clone().concat(e))}})})();
(function(){var g=CryptoJS,j=g.lib,e=j.Base,d=j.WordArray,j=g.algo,m=j.HMAC,n=j.PBKDF2=e.extend({cfg:e.extend({keySize:4,hasher:j.SHA1,iterations:1}),init:function(d){this.cfg=this.cfg.extend(d)},compute:function(e,b){for(var g=this.cfg,k=m.create(g.hasher,e),h=d.create(),j=d.create([1]),n=h.words,a=j.words,c=g.keySize,g=g.iterations;n.length<c;){var p=k.update(b).finalize(j);k.reset();for(var f=p.words,v=f.length,s=p,t=1;t<g;t++){s=k.finalize(s);k.reset();for(var x=s.words,r=0;r<v;r++)f[r]^=x[r]}h.concat(p);
a[0]++}h.sigBytes=4*c;return h}});g.PBKDF2=function(d,b,e){return n.create(e).compute(d,b)}})();
</script>
<script>
var salt = CryptoJS.enc.Hex.parse("60c100d05c610e8b94a854dfc0789885");
var message = "Hello";
var key128Bits = CryptoJS.PBKDF2(message, salt, { keySize: 4, iterations: 1 });
// Logs "584519ef3e56714e301a4d85f972b6b4"
console.log(key128Bits.toString());
</script>
|
nettle-pbkdf2 documents it uses HMAC-SHA256 as its pseudo-random function; the other two are using HMAC-SHA1. Nettle has a PBKDF2-HMAC-SHA1 implementation, but I'm not sure if you can easily get it from the command line. (HMAC-SHA256 is generally a better choice if you have the option; SHA1 should be avoided).
(Of course, you also shouldn't be using 1 iteration. I presume that's just for testing.)
| PBKDF2 not the same |
1,665,540,504,000 |
I have a command which looks like this:
cat PGC2.SCZ.1.dat | awk 'NR == 1 || $NF < 0.05/1783'
so this part I guess means skip the first line
awk 'NR == 1
but what does this refer to?
|| $NF < 0.05/1783'
Thanks
|
NR means "number of record" and refers to the record currently streamed to awk, i.e. the current line number. (By default every line is a new record. One can define a different record separator RS; then the term "line" isn't correct anymore.)
NF means "number of fields" and refers to the number of columns in the line. Due to the $ before the NF, we are asking for the value in the last column. (NB: By default any run of spaces or tabs is used as a column delimiter aka field separator FS)
The || means "or".
So, in summary, your command will print the first line of PGC2.SCZ.1.dat and all lines where the value in the last column is less than 0.05/1783.
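Both halves of the condition can be seen on a small made-up sample (the column names here are hypothetical): the header line survives because NR == 1, and of the data rows only the one whose last column is below 0.05/1783 (about 2.8e-05) is printed:

```shell
printf 'SNP P\nrs1 0.9\nrs2 0.00001\nrs3 0.5\n' |
    awk 'NR == 1 || $NF < 0.05/1783'
# prints:
# SNP P
# rs2 0.00001
```

Note that awk compares $NF numerically here because the other operand of < is a number.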
| How to interpret Unix command? |
1,665,540,504,000 |
I need a command that would echo the average CPU usage over the past 10 seconds in Ubuntu 18.
Each of the following conditions must be met:
It must be lightweight with a very small footprint; running a background script constantly writing to the filesystem is a no-no.
The value must account for the number of cores automatically (I don't know the number of cores in advance).
The value must be a number between 0 and 1. There shouldn't be any other output, as it will be read by a script, not a human. Alternatively, suggest a robust algorithm for parsing the output of the suggested command, whatever it is.
A sudo requirement is fine, but the script must be able to be run as a command over SSH and it must have proper process return behavior (0 exit code for success).
Built-in commands and utilities are preferred, but additional software is OK too as long as it's available in the official repos.
|
The sysstat package provides sar, a system activity data collector.
sar -u ALL 10 1
reports the average CPU stats for the 10 seconds after the command starts. The output looks like:
Linux 5(...) 11/05/21 _x86_64_ (1 CPU)
17:22:35 CPU %user %nice %system %iowait %steal %idle
17:22:36 all 8.85 20.75 2.46 0.00 0.00 67.94
Average: all 8.85 20.75 2.46 0.00 0.00 67.94
| How to get the average CPU usage over the past 10 seconds in bash? |
1,665,540,504,000 |
Suppose I want to do a search to find out if I have a file that matches the sha256 generated from the file test1.txt using the command:
sha256sum -b test1.txt
I get as output:
e3d748fdf10adca15c96d77a38aa0447fa87af9c297cb0b75e314cc313367daf * test1.txt
So, I want to find the files that match the checksum generated instead of using the name.
Is this possible?
|
find . -type f -exec sha256sum -b {} + |
grep -F 'e3d748fdf10adca15c96d77a38aa0447fa87af9c297cb0b75e314cc313367daf'
This would calculate the SHA256 checksum for each and every file in or under the current directory. The grep at the end would extract the results of the calculations that match the checksum that you are looking for.
If the result of the find operation was diverted to a file, it could serve as a "database" that you could use for doing multiple lookups on with grep. If some extra logic was added, you could make a cron job that periodically refreshed this file with information from new and updated files and removed old information (this was not really what this question was about, so I'm leaving any code out for the time being). With not so much extra effort, you could even do this against a simple SQLite database.
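That "database" idea can be sketched in a few lines (the directory and file names here are just placeholders):

```shell
workdir=$(mktemp -d)
printf 'hello\n' > "$workdir/greeting.txt"

# Build (or periodically refresh) the checksum database once...
find "$workdir" -type f -exec sha256sum -b {} + > "$workdir.db"

# ...then each lookup is a cheap grep instead of re-hashing every file.
wanted=$(sha256sum -b "$workdir/greeting.txt" | cut -d' ' -f1)
grep -F "$wanted" "$workdir.db"
```

Each line of the database holds a checksum followed by the file name, so the grep prints the path(s) of any matching file.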
Related to the syntax of the find command:
Understanding the -exec option of `find`
| Is it possible to search for a file using the checksum instead of the name? [duplicate] |
1,665,540,504,000 |
When I install a LAMP server environment on Debian I use this command:
apt-get upgrade lamp-server^ php-cli php-curl php-mbstring php-mcrypt php-gd python-certbot-apache -y
If I add more php-X packages it becomes unpleasantly long.
Is there a way to shorten it a bit, like the following?
apt-get upgrade lamp-server^ php-cli|curl|mbstring|mcrypt|gd python-certbot-apache -y
|
Yes, you can use bash brace expansion to generate the arguments with the same prefix.
The correct syntax would be:
apt-get upgrade lamp-server^ php-{cli,curl,mbstring,mcrypt,gd} python-certbot-apache -y
| Debian: Remove redundant package terminology |
1,665,540,504,000 |
I have this quite unusual condition here. I have an old Linux system that has no sudo or su commands. I do not have physical access to this computer, so I cannot log in as another user.
Linux kernel is 2.6.18-498 and the system is a red-hat 4.1.2-55.
I can go to the /bin directory and can say for sure there are no su or sudo binaries there. So this is not the case of PATH variables misconfigaration.
Also this is a web server so maybe it is configured this way. Is there any way to run a command as a different user? Any help would be appreciated.
|
See if you have any remote login services running (in.telnetd, rlogind, sshd) and then run the appropriate login command to the localhost (127.0.0.1). For example if you have sshd then do:
$ ssh [email protected]
With telnet you'd run:
$ telnet -l root 127.0.0.1
And with rlogin you'd run:
$ rlogin -l root 127.0.0.1
| How to run command as a different user when there are no sudo or su commands |
1,665,540,504,000 |
In the command line, I appended a directory to my PATH without exporting it:
$ PATH='$PATH:/home/user/anaconda3/bin'
For some reason this has overwritten the PATH environment variable but I'm not sure why this happened. The PATH above is still a colon separated list of directories like it should be so what's the problem? I usually prepend a new directory to my PATH but this time I tested appending it instead which caused unexpected outcomes.
Now any time I try even the simplest commands like ls I get this error (which I expect) followed by a prompt asking me to install the command I typed:
bash: sed: command not found...
Additionally, since I didn't actually export PATH, subsequent commands were not supposed to inherit the new value, so what caused that to happen?
I know I can open a new terminal window to fix it but I'm interested in knowing why this happened?
|
Single quotes suppress parameter expansion.
$ foo=42
$ echo '$foo' "$foo"
$foo 42
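The fix for the original command is to use double quotes (or no quotes at all), which allow the expansion so the old value of PATH is preserved:

```shell
# Double quotes let $PATH expand, so the existing directories are kept:
old=$PATH
PATH="$PATH:/home/user/anaconda3/bin"
[ "$PATH" = "$old:/home/user/anaconda3/bin" ] && echo preserved
# preserved
```

(As for why the broken assignment affected subsequent commands even without export: PATH is already exported in a login shell, so assigning to it updates the exported value.)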
| Why did PATH='$PATH:/Path/to/bin' overwrite my PATH? |
1,665,540,504,000 |
I am pasting multiple commands into my command line and would like each line to execute and output the results sequentially.
I am pasting my input:
cat type_of_record.txt | grep 'type_of_record_new creating' | wc -l
cat type_of_record.txt | grep 'type_of_record_modification found 1' | wc -l
cat type_of_record.txt | grep 'type_of_record_modification found 0' | wc -l
cat type_of_record.txt | grep 'type_of_record_inactivation found 1' | wc -l
and getting output:
469005
9999
5099
25
But instead, would like each command to execute after each line break and have my output look like:
cat type_of_record.txt | grep 'type_of_record_modification found 1' | wc -l
469005
cat type_of_record.txt | grep 'type_of_record_modification found 0' | wc -l
9999
cat type_of_record.txt | grep 'type_of_record_inactivation found 1' | wc -l
5099
Not sure if this is possible, but it would save me a lot of time if I didn't have to map each result back to the line it came from.
|
Just paste your clipboard into a heredoc:
$ sh -v << EOF
> cat type_of_record.txt | grep 'type_of_record_new creating' | wc -l
> cat type_of_record.txt | grep 'type_of_record_modification found 1' | wc -l
> cat type_of_record.txt | grep 'type_of_record_modification found 0' | wc -l
> cat type_of_record.txt | grep 'type_of_record_inactivation found 1' | wc -l
> EOF
cat type_of_record.txt | grep 'type_of_record_new creating' | wc -l
cat: type_of_record.txt: No such file or directory
0
cat type_of_record.txt | grep 'type_of_record_modification found 1' | wc -l
cat: type_of_record.txt: No such file or directory
0
cat type_of_record.txt | grep 'type_of_record_modification found 0' | wc -l
cat: type_of_record.txt: No such file or directory
0
cat type_of_record.txt | grep 'type_of_record_inactivation found 1' | wc -l
cat: type_of_record.txt: No such file or directory
0
In the above, I typed 'sh -v << EOF', then pasted the code from your question into the terminal, and then hit return and typed 'EOF'. If you do this sort of thing, make sure you carefully review the text you're pasting. You will probably want to quote the delimiter (eg sh << 'EOF') to avoid interpolation of any of the pasted text, but that's not necessary in this case.
But note that in this particular case, it seems better to use awk to count the matching records so that you only need to make one pass through the file.
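A one-pass awk version of the four pipelines above might look like the following sketch (patterns taken from the question; each line of the file increments whichever counters it matches, and the totals are printed at the end):

```shell
awk '
  /type_of_record_new creating/         { new++ }
  /type_of_record_modification found 1/ { mod1++ }
  /type_of_record_modification found 0/ { mod0++ }
  /type_of_record_inactivation found 1/ { inact++ }
  END { print new+0, mod1+0, mod0+0, inact+0 }   # +0 prints 0 for unset counters
' type_of_record.txt
```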
| How do i execute multiple commands sequentially on the command line? |
1,665,540,504,000 |
I'm using the OSX terminal and trying to extract specific text from a log file with a regex.
awk version is
GNU Awk 4.2.1, API: 2.0 (GNU MPFR 4.0.1, GNU MP 6.1.2)
Copyright (C) 1989, 1991-2018 Free Software Foundation.
The command I'm trying is
$gawk '/123/ BEGIN{RS="DEBUG"; FS="\n"}{print $0"\n"}END{}' ./app_108_utf8_T2.log > output.txt
but awk says
gawk: cmd. line:1: /123/ BEGIN{RS="DEBUG"; FS="\n"}{print $0"\n"}END{}
gawk: cmd. line:1: ^ syntax error
Why does awk report a syntax error?
|
I’m guessing you want to run
gawk 'BEGIN{RS="DEBUG"; FS="\n"} /123/{print $0"\n"}' ./app_108_utf8_T2.log > output.txt
BEGIN defines the block of instructions which run at the start of the process, and /123/ defines the block which runs when the “123” regular expression matches the current line. You can’t specify both for a single block.
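In other words, BEGIN{...} is one pattern-action pair and /123/{...} is another; each pattern gets its own action block. A minimal illustration (plain awk is used here; gawk behaves the same):

```shell
# BEGIN sets up the field separator once; the /123/ block runs per matching line:
printf 'foo 123\nbar 456\n' | awk 'BEGIN{FS=" "} /123/{print $2}'
# 123
```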
| How to solve this syntax error of gawk (GNU awk) on OSX terminal? |
1,665,540,504,000 |
In zsh, how do I refer to the grandparent directory with ... rather than
../.., and so forth? I used to have this in oh-my-zsh and prezto.
PS. Ideally, M-3 . should yield ../../...
|
The following code does the trick:
rationalise-dot() {
if [[ $LBUFFER = *.. ]]; then
LBUFFER+=/..
else
LBUFFER+=.
fi
}
zle -N rationalise-dot
bindkey . rationalise-dot
bindkey -M isearch . self-insert
| Changing to ancestor directory without typing all the dots and slashes |
1,665,540,504,000 |
I have a file containing full path names of files and directories.
From this list I would like to filter out any pathnames that reference directories so that I am left with a list containing only file paths.
Can anybody think of an elegant solution?
|
while IFS= read -r file; do
[ -d "$file" ] || printf '%s\n' "$file"
done <input_file
Would print the files that are not determined to be of type directory (or symlink to directory). It would leave all other types of files (regular, symlink (except to directories), sockets, pipes...) and those for which the type cannot be determined (for instance because they don't exist or are in directories for which you don't have search permission).
Some variations depending on what you meant by file and directory (directory is one of many types of files on Unix):
the file exists (after symlink resolution) and is not of type directory:
[ -e "$file" ] && [ ! -d "$file" ] && printf '%s\n' "$file"
file exists and is a regular file (after symlink resolution):
[ -f "$file" ] && printf '%s\n' "$file"
file exists and is a regular file before symlink resolution (excludes symlinks):
[ -f "$file" ] && [ ! -L "$file" ] && printf '%s\n' "$file"
etc.
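A quick way to try the first variant out with a throwaway tree (the directory and file names here are made up for the demonstration):

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/somedir"
touch "$tmp/somefile"
printf '%s\n' "$tmp/somedir" "$tmp/somefile" > "$tmp/paths.txt"
# Only the non-directory path makes it through:
while IFS= read -r file; do
  [ -d "$file" ] || printf '%s\n' "$file"
done < "$tmp/paths.txt"
rm -rf "$tmp"
```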
| Filter directories from list of files and directories |
1,665,540,504,000 |
The command 'ps' gives the current status of processes. Is there any way to find the status of a particular process in the past, say 48 hours ago?
I have a unit crashing and wanted to know the status of different processes during the exact time when the crash occurred.
|
No, commands such as ps and top show only the current status of processes. There's no way to know what a process's status was in the past unless you already set up a monitoring system.
For the future, you can set up atop to log process status. From its manpage:
In order to store system- and process level statistics for long-term analysis (e.g. to check the system load and the active processes running yesterday between 3:00 and 4:00 PM), atop can store the system- and process level statistics in compressed binary format in a raw file with the flag -w followed by the filename. If this file already exists and is recognized as a raw data file, atop will append new samples to the file (starting with a sample which reflects the activity since boot); if the file does not exist, it will be created.
By default only processes which have been active during the interval are stored in the raw file. When the flag -a is specified, all processes will be stored.
The interval (default: 10 seconds) and number of samples (default: infinite) can be passed as last arguments. Instead of the number of samples, the flag -S can be used to indicate that atop should finish anyhow before midnight.
Clearly, and as already said, atop will start recording only from the moment you set it up.
| Process status of the past time |
1,665,540,504,000 |
I like using curl and the command line to process HTML pages.
Relative URLs are a pain.
Is there some easy utility to make all relative URLs absolute?
Ideally this would look something like
curlabsolute $URL | process
|
What you need is the wget utility:
Let's say we need to download a simple web-page given by http://www.littlewebhut.com/articles/simple_web_page/.
The command (the below used url is real, the command can be tested "as is"):
wget -O simple_page -k http://www.littlewebhut.com/articles/simple_web_page/
-O (--output-document=file) - The documents will not be written to the appropriate files, but all will be concatenated together and written to file.
-k (--convert-links) - After the download is complete, convert the links in the document to make them suitable for local viewing
Here is an HTML fragment from the mentioned web-page before downloading (online version):
...
<ul>
<li><a href="/" class="color-menu">Home</a></li>
<li><a href="/html/" class="color-menu">HTML</a></li>
<li><a href="/css/" class="color-menu">CSS</a></li>
<li><a href="/javascript/" class="color-menu">JavaScript/jQuery</a></li>
<li><a href="/inkscape/" class="color-menu">Inkscape</a></li>
<li><a href="/gimp/" class="color-menu">GIMP</a></li>
<li><a href="/blender/" class="color-menu">Blender</a></li>
<li><a href="/articles/" class="color-menu">Articles</a></li>
<li><a href="/contact/" class="color-menu">Contact</a></li>
</ul>
The same fragment after downloading, saved in the file simple_page:
...
<ul>
<li><a href="http://www.littlewebhut.com/" class="color-menu">Home</a></li>
<li><a href="http://www.littlewebhut.com/html/" class="color-menu">HTML</a></li>
<li><a href="http://www.littlewebhut.com/css/" class="color-menu">CSS</a></li>
<li><a href="http://www.littlewebhut.com/javascript/" class="color-menu">JavaScript/jQuery</a></li>
<li><a href="http://www.littlewebhut.com/inkscape/" class="color-menu">Inkscape</a></li>
<li><a href="http://www.littlewebhut.com/gimp/" class="color-menu">GIMP</a></li>
<li><a href="http://www.littlewebhut.com/blender/" class="color-menu">Blender</a></li>
<li><a href="http://www.littlewebhut.com/articles/" class="color-menu">Articles</a></li>
<li><a href="http://www.littlewebhut.com/contact/" class="color-menu">Contact</a></li>
</ul>
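If downloading with wget is not an option, a rough sed rewrite can cover root-relative links. This is a fragile sketch, not a general solution: it assumes plain href="/..." attributes, no base element, and no protocol-relative or ../ links, and it uses a made-up base URL:

```shell
# Prefix root-relative hrefs with the page's origin (| as the sed delimiter
# because the base URL contains slashes):
base='http://www.example.com'
printf '<a href="/css/">CSS</a>\n' | sed "s|href=\"/|href=\"$base/|g"
# <a href="http://www.example.com/css/">CSS</a>
```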
| Make all urls in a page absolute from the command line |
1,665,540,504,000 |
I have content similar to the following in a file, along with a list of line numbers, say 1, 2, 4.
Can feed all the required line #s
Extract the contents between the first occurrence of <book> and the last occurrence of </book>
Data:
</p><p>abc</p></book><book><p style="text-indent:0em;">def</p></book><book><p>ghi</p><p style="text-indent:0em;"></book><book><div><p>
</div><p>123</p></book><book><p style="text-indent:0em;">456</p><p>789</p><p style="text-indent:0em;"></book><book><div><p>
<div><p>nothing !!!</p></div>
</p><p>ABC</p></book><book><p style="text-indent:0em;">DEF</p></book><book><p>GHI</p><p style="text-indent:0em;"></book><book><div><p>JKL</p></div></book><div>
Input Line #s: 1, 2, 4 (Which I want to feed in the command)
Desired Output:
<book><p style="text-indent:0em;">def</p></book><book><p>ghi</p><p style="text-indent:0em;"></book>
<book><p style="text-indent:0em;">456</p><p>789</p><p style="text-indent:0em;"></book>
<book><p style="text-indent:0em;">DEF</p></book><book><p>GHI</p><p style="text-indent:0em;"></book><book><div><p>JKL</p></div></book>
|
1) Extract specific lines
In your four-line example to extract the 1st, 2nd and 4th line would be easy by deleting the 3rd line:
sed 3d file
But your file is probably more complicated, so a more general solution would be to do
sed -e 1b -e 2b -e 4b -e d file
So for each line that should be kept you jump to the end of the script with b and delete all remaining lines.
For a longer list of line numbers you may want to generate the script:
sed $(for i in 1 2 4; do echo "-e ${i}b"; done) -e d file
But it seems that it's not about the line numbers, but whether there are <book>s on that line. If this is true, you better forget about the line numbers and do
sed '/<book>/!d'
2) extracting the contents
Greedy * of regexp is not a friend for tasks like this. That's why my personal version of sed has an option o to the s command to replace only by the matched part:
sed '/<book>/!d;s_<book>.*</book>_&_o'
But this won't work for you, so you need some more regex juggling:
sed '/<book>/!d;s_<book>_\n&_;s_.*\n__;s_\(.*</book>\).*_\1_' file
If your version of sed doesn't support \n in the replacement string, use an actual newline (escaped by a backslash):
sed '/<book>/!d;s_<book>_\
&_;s_.*\n__;s_\(.*</book>\).*_\1_' file
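For example, with GNU sed (which accepts \n in the replacement string), a simplified line in the spirit of the question's data reduces like this:

```shell
# Insert a newline before the first <book>, delete everything up to it,
# then keep everything up to the last </book>:
printf '%s\n' '</p><p>abc</p></book><book><p>def</p></book><book><p>ghi</p></book><book><div><p>' |
  sed '/<book>/!d;s_<book>_\n&_;s_.*\n__;s_\(.*</book>\).*_\1_'
# <book><p>def</p></book><book><p>ghi</p></book>
```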
| For a set of line numbers ...Extract content between first and last occurence of different patterns |
1,665,540,504,000 |
Googling for this question gives a lot of answers based on PCIe. Unfortunately, I'm not looking for PCIe based answers at the moment. I have an older PC that still contains some PCI-X slots (y'know, like good old PCI but longer and faster). At the moment I have an Auzentech X-Meridian 7.1 2G installed in the PCI slot, and an HP Firewire 800 card in one of my PCI-X slots. When I run # lspci -vvv -s 08: I get the following output, showing both cards. Is the Firewire one running at 66+MHz and in 64 bit mode, or?
08:04.0 Multimedia audio controller: C-Media Electronics Inc CMI8788 [Oxygen HD Audio]
Subsystem: AuzenTech, Inc. X-Meridian 7.1 2G
Control: I/O+ Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 32 (500ns min, 6000ns max)
Interrupt: pin A routed to IRQ 19
Region 0: I/O ports at 5000 [size=256]
Capabilities: [c0] Power Management version 2
Flags: PMEClk- DSI- D1+ D2+ AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
Kernel driver in use: snd_oxygen
Kernel modules: snd_oxygen
08:08.0 FireWire (IEEE 1394): Texas Instruments TSB43AB22A IEEE-1394a-2000 Controller (PHY/Link) [iOHCI-Lynx] (prog-if 10 [OHCI])
Subsystem: Super Micro Computer Inc Device b380
Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV+ VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 32 (500ns min, 1000ns max), Cache Line Size: 32 bytes
Interrupt: pin A routed to IRQ 20
Region 0: Memory at d0e04000 (32-bit, non-prefetchable) [size=2K]
Region 1: Memory at d0e00000 (32-bit, non-prefetchable) [size=16K]
Capabilities: [44] Power Management version 2
Flags: PMEClk- DSI- D1+ D2+ AuxCurrent=0mA PME(D0+,D1+,D2+,D3hot+,D3cold-)
Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME+
Kernel driver in use: firewire_ohci
Kernel modules: firewire_ohci
|
Your card is running at 33MHz in 32-bit mode. For a 64-bit, 66MHz PCI-X device you’d see 66MHz+ in the Status line, and you’d also have a 68 capability section like
Capabilities: [68] PCI-X non-bridge device
Command: DPERE- ERO- RBC=512 OST=8
Status: Dev=03:04.1 64bit+ 133MHz+ SCD- USC- DC=simple DMMRBC=2048 DMOST=8 DMCRS=16 RSCEM- 266MHz- 533MHz-
(that’s a 133MHz device; you’d expect at least 64bit+ here).
| How to see speed of PCI-X card? |
1,665,540,504,000 |
I have hundreds of directories, inside each of them is a file with name report.ext and this file can contain a row like
Beta score for best model 95.35
I would like to get a list of the directories where this file exists, where it contains such a row, and where that row's value is greater than 95.
Is this possible with command line tools?
|
The easiest is to look for those files and print their parent directory if their content match. For instance with something like:
find . -name report.ext -type f -exec awk '
/^Beta score for best model [0-9.]+$/ && $NF > 95 {
dir = FILENAME
sub("/[^/]*$", "", dir)
print dir
nextfile
}' {} +
If your awk implementation doesn't support nextfile, that would print the name of the directory for each occurrence of those lines in the file.
| How to list directories with specified file in them has specified content? |
1,665,540,504,000 |
Is there a way to get a list of all commands that match a specific (case insensitive) pattern? So for example, I know the command (which might be an alias) I'm looking for contains "diag" or "Diag" but I'm not sure of the actual command.
I'm currently on Ubuntu with Bash but am asking specifically on this site because I'd love to learn of a way that's usable across various kinds of distros (e.g. I'll need this skill on CentOS and Manjaro later on too).
I've tried man iag hoping it would work the same as Powershell's help iag but that doesn't work.
I've tried my Google-fu but that only seems to lead to explanations on how to find files by partial name of text inside files.
I've tried searching this SE site in various ways (e.g. 1, 2) but didn't find a duplicate of my question.
How do you find the exact name of a command if you can remember only part of it?
|
Use compgen -c to get a list of all commands, you can also use it like:
compgen -c dif
to get a list of all commands started with "dif".
Combine it with grep to get exactly what you are looking for:
compgen -c | grep -i diag
which looks for any commands containing "diag". use regex for more flexible searches:
compgen -c | grep -i ^diag # starting with diag
compgen -c | grep -i diag$ # ending with diag
You can also use apropos to find commands, it searches into the manual page names and descriptions.
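For example, to confirm that a known command shows up (compgen is a bash builtin, so invoke it through bash if your interactive shell differs; head dedupes the output, since a command can appear once per PATH entry):

```shell
bash -c 'compgen -c' | grep -ix 'ls' | head -n 1
# ls
```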
| Find commands by partial name |
1,665,540,504,000 |
Trying to install pass (password manager).
I noticed that in my system (Ubuntu 14.04.4) another program called pass is already installed but I am sure that this is not password manager.
pass --help returns
Input format should be:
./pass inputfile min_window max_window false_num outputfile [-qnorm] [-nop] [-adjust] [-p priorfile]
I managed to find that this is possibly a compiler, comes from the package pass2 and may be a part of binutils, but since the search phrase is so short, finding it in a web search is quite a pain... Any combination of 'pass and linux' ends up with troubleshooting of installing pass in LFS...
Can anyone point me in the right direction?
|
Use dpkg -S to search for what package owns the file:
dpkg -S /usr/bin/pass
My guess, based solely on the names of the expected command line options, is that your pass command is a command line version of Poisson Approximation for Statistical Significance (a bioinformatics tool). This tool has a web interface here (for example, there are others too): http://insilicom.com/root?tool_id=hgv_pass
| /usr/bin/pass and /usr/bin/pass2 |