| date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,421,059,115,000 |
I'm trying to recursively copy a folder of files containing .csv extensions and rename them while copying them into a single folder.
I'm close except for the file renaming which eludes me.
Can anyone assist?
find "/IMPORTS/EFHG2" -iname '*.csv*' -exec cp {} /temp/Template \;
As for the rename, I'm looking for something that will give some indication of the parent folder from which the file came.
Original (file1.csv, file2.csv)
Modified (dir1.file1.csv, dir2.file1.csv)
|
Given the structure below:
├── destdir
└── srcdir
├── dir1
│ └── with space.csv
├── dir2
│ └── infile.csv
└── dir3
└── otherfile.Csv
running the command:
find "/path/to/srcdir" -type f -iname '*.csv' -exec sh -c '
path="${1%/*}"; filename="${1##*/}";
echo cp -nv "${1}" "/path/to/destdir/${path##*/}.${filename}" ' sh_cp {} \;
will produce output like the following (running in dry-run mode):
cp -nv /path/to/srcdir/dir2/infile.csv /path/to/destdir/dir2.infile.csv
cp -nv /path/to/srcdir/dir1/with space.csv /path/to/destdir/dir1.with space.csv
cp -nv /path/to/srcdir/dir3/otherfile.Csv /path/to/destdir/dir3.otherfile.Csv
If we remove the echo in front of the cp command (which was used for the dry run) so the copy and rename take effect, you will get the structure below:
├── destdir
│ ├── dir1.with space.csv
│ ├── dir2.infile.csv
│ └── dir3.otherfile.Csv
└── srcdir
├── dir1
│ └── with space.csv
├── dir2
│ └── infile.csv
└── dir3
└── otherfile.Csv
Note that if the same parent-directory name and filename combination occurred more than once among the sub-directories, the latest file found by the find command would overwrite the earlier copy; that's why I used -n with the cp command, so an existing destination file is never overwritten (the duplicate is simply not copied). Be aware of that.
Explanation:
find "/path/to/srcdir" -type f -iname '*.csv' -exec sh -c '...' sh_cp {} \;
finds files with a .csv suffix (case-insensitively) recursively, and -executes the inline sh script (sh -c '...', which we name sh_cp) for each of them; {} is substituted with each filepath the find command locates, which is passed to our script and accessible as the $1 (or ${1}) parameter.
${1%/*}: removes the shortest suffix matching /* from the ${1} parameter (standard Shell Parameter Expansion); as said above, ${1} is the filepath, so this drops the filename plus the last / and keeps only the directory path, which we store in the path variable.
${1} --> /path/to/srcdir/dir2/infile.csv
${1%/*} --> /path/to/srcdir/dir2
${1##*/}: removes the longest prefix matching */ from the ${1} parameter; this drops the directory path from the filepath and keeps only the filename, which we store in the filename variable.
${1} --> /path/to/srcdir/dir2/infile.csv
${1##*/} --> infile.csv
and accordingly:
path --> /path/to/srcdir/dir2
${path##*/} --> dir2
${filename} --> infile.csv
${path##*/}.${filename} --> dir2.infile.csv
tips:
xYz='to-test/path/to/srcdir/dir2/infile.csv'
${xYz%/*} --> to-test/path/to/srcdir/dir2
${xYz%%/*} --> to-test
${xYz#*/} --> path/to/srcdir/dir2/infile.csv
${xYz##*/} --> infile.csv
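These expansions can be checked directly in any POSIX shell; a minimal sketch using the same sample path:

```shell
# Prefix/suffix removal on a sample path (standard POSIX parameter expansion).
xYz='to-test/path/to/srcdir/dir2/infile.csv'
echo "${xYz%/*}"    # shortest suffix cut  -> to-test/path/to/srcdir/dir2
echo "${xYz%%/*}"   # longest  suffix cut  -> to-test
echo "${xYz#*/}"    # shortest prefix cut  -> path/to/srcdir/dir2/infile.csv
echo "${xYz##*/}"   # longest  prefix cut  -> infile.csv
```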
| Copy and rename files recursively using part of folder name for new name |
1,600,274,121,000 |
I've downloaded this program construct2d and compiled it using GNU Fortran gfortran 9.3.0.
You can compile the program using gnu make:
make
(compilation time: 10 seconds on my PC running Ubuntu 20.04 with GNU bash, version 5.0.17(1)-release (x86_64-pc-linux-gnu)).
This program doesn't accept command-line arguments; instead, I have to enter the options manually. To avoid that tedious workflow, I wrote the options into a file, instructions.txt, to feed to it:
construct2d < instructions.txt
The content of instructions.txt is:
naca0012.dat
SOPT
NSRF
80
RADI
5
NWKE
5
QUIT
VOPT
JMAX
5
YPLS
5
RECD
1E5
QUIT
GRID
SMTH
QUIT
The file naca0012.dat can be found under sample_airfoils directory from uncompressed construct2D archive or can be downloaded from this link.
The problem is that the command:
construct2d < instructions.txt
doesn't give the expected result when I run it only once; I have to run it several times (maybe 4 times) to get the expected results (the expected output is the files naca0012.p3d and naca0012.nmf).
When I run construct2d manually and type the options from instructions.txt one by one, it works as expected. I've tried to use gdb to debug this, but unfortunately it doesn't show anything special.
So it appears that the program ignores some instructions when fed from a file. Why does this happen?
The stdout output when the program runs as expected (in addition, the program will generate the output files: naca0012.p3d and naca0012.nmf): working.log
The stdout output when the program doesn't run as exepcted (without output files): not_working.log
I greatly appreciate your help.
EDIT 1:
On Windows 10, with gfortran 8.1.0, file redirection works just fine, it doesn't fail. This happens only on Linux as I described above.
EDIT 2: I confirm this has nothing to do with line endings, because I created the file instructions.txt itself on Linux and used the dos2unix tool to check it.
EDIT 3:
I have tried compiling the program with older versions of gfortran (gfortran 7.5.0 on Ubuntu server 18.04) and everything works correctly. This might be a bug in newer versions of GNU Fortran.
EDIT 4:
I've solved that weird behaviour in gfortran 9.x and 10.x by adding the flag -Og or -O0 when compiling the program.
|
From contributors to comp.lang.fortran:
One problem seems to be where the main loop:
done = .false.
do while (.not. done)
call main_menu(command)
call run_command(command, surf, options, done, ioerror)
end do
calls 'run_command':
subroutine run_command(command, surf, options, done, ioerror)
...
logical, intent(out) :: done
integer, intent(inout) :: ioerror
gfortran seems to be guessing that since the value of 'done' is never used by 'run_command', there's no point in actually executing the statement 'done = .false.' and since 'run_command' doesn't actually set its argument 'done' to anything unless it sees a 'quit' command, 'done' is left uninitialized when the main loop checks it. Sometimes it's false, and sometimes it contains garbage, in which case it's evaluated as true and the main loop terminates early.
Changing the intent of 'done' to 'inout' seems to fix the problem.
Setting 'done' before the 'select case' statement in run_command also seems to work:
done = .false.
select case (command)
...
My guess is that this is the right way to fix this, and that the compiler's behavior, while surprising to some of us (including me), is in fact correct.
valgrind helped find this.
And from another poster:
Along the same lines, the several instances of
type(options_type), intent(out) :: opt
in file menu.f90 should be changed to
type(options_type), intent(inout) :: opt
or the intent clause should be left out, because an argument with intent(out) becomes undefined when the subprogram is entered and stays undefined unless it acquires a value in the subprogram before returning.
Other suggestions included compiling and running with options to check array bounds, etc.
| Feeding a command using file redirection or pipe doesn't always work |
1,600,274,121,000 |
I'd like to make a hot-key for this task I sometimes need to perform:
%> cp file.txt.1 file.txt.1.bak
Where I've repeated the file name but with a .bak on the end. I'd like to instead just type:
%> cp file.txt.1
Hit the quick key and have it add the file name with a .bak extension. Which would turn the second code snippet into the first.
Is this possible? And if so how can I have the readline add this second parameter? (I think it's readline I'd have to program here).
|
For this case, in either bash or zsh, you can enter the command as
cp file.txt.1{,.bak}
This is brace expansion.
For cases where brace expansion isn't convenient because you want to do more editing on the second argument, in zsh, there's a command copy-prev-word which is bound to Ctrl+Alt+_ out of the box. It inserts a copy of the word immediately preceding the cursor. Make sure to type a space before pressing Ctrl+Alt+_. You may prefer to bind copy-prev-shell-word, which is generally more useful:
bindkey '^[^_' copy-prev-shell-word
In either bash or zsh, to replicate the last word on the command line, starting from the end of the line, make sure the line ends with a space and press Alt+B Ctrl+K Ctrl+Y Ctrl+Y. This cuts the last word plus the trailing space and pastes it back twice. Alternatively, if the line does not end with a space, press Alt+B Left Ctrl+K Ctrl+Y Ctrl+Y. This only works if the last argument doesn't contain whitespace; if it does, you need to go back a bit further. You can replace Alt+B with Ctrl+Left if that works on your setup.
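To see what the brace-expansion form actually expands to before committing to it, prefix the command with echo:

```shell
# The shell expands the braces before cp would ever run; echo makes that visible.
echo cp file.txt.1{,.bak}
# → cp file.txt.1 file.txt.1.bak
```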
| How can I add a hot-key to readline for zsh or bash that takes fills in the 2nd parameter to copy but with .bak? |
1,600,274,121,000 |
How do I download and install packages from the command line in OpenBSD?
As an example, in Fedora, to download the php pecl-memcached package from the command line, I just do this:
dnf install php php-pecl-memcached
I have searched through the net but found no answer relating to it ...
|
Install a package with:
# pkg_add packageName
To install PHP packages:
# pkg_add php
# pkg_add php-fpm
# pkg_add php-mysql
More information can be found in the OpenBSD FAQ.
| OpenBSD: downloading and installing php packages |
1,600,274,121,000 |
I need to go through folders and count files in TARs with same name.
I tried this:
find -name example.tar -exec tar -tf {} + | wc -l
But it fails:
tar: ./rajce/rajce/example.tar: Not found in archive
tar: Exiting with failure status due to previous errors
0
It works when there is only one example.tar.
I need separate number for each file.
Thanks!
|
You need tar -tf {} \; instead of tar -tf {} + to run tar on each tarball individually. The GNU find man page says:
-exec command {} +
This variant of the -exec action runs the specified
command on the selected files, but the command line is
built by appending each selected file name at the end;
the total number of invocations of the command will be
much less than the number of matched files. The command
line is built in much the same way that xargs builds its
command lines. Only one instance of `{}' is allowed
within the command. The command is executed in the
starting directory.
Your command is equivalent to tar tf example.tar example.tar. You're also missing the [path...] argument - some implementations of find, for example BSD find, will return a find: illegal option -- n error without it. All in all it should be:
find . -name example.tar -exec tar -tf {} \; | wc -l
And notice that in that case wc -l will count the number of files in all example.tar files found. You can use -maxdepth 1 to search for example.tar files only in the current directory. If you want to search for all example.tar files recursively and print the result for each one individually (notice that $ here is a command-line prompt indicating the start of a new line, not part of the command):
$ find . -name example.tar -exec sh -c 'tar -tf "$1" | wc -l' sh {} \;
3
3
and with directory names prepended:
$ find . -name example.tar -exec sh -c 'printf "%s: " "$1" && tar -tf "$1" | wc -l' sh {} \;
./example.tar: 3
./other/example.tar: 3
| Loop through folders and count files in TARs |
1,600,274,121,000 |
I tried the command
setxkbmap cn but nothing happened.
I'd like to write characters in pinyin, so that it would automatically produce
Chinese characters, and then have the possibility to switch back to English
with
setxkbmap us
I'd like to do it from command line, because I'm using i3 window manager.
|
Normally, you will need an IME (Input Method Editor) to enter languages that make use of Chinese characters (e.g. Mandarin, Japanese, etc.).
Some of the more popular IMEs (in no particular order) include Fcitx, IBus and SouGou PinYin.
Installing an IME is generally straightforward on the mainstream distros: you just install the package from the official repo with the distro's package manager. This link, for example, describes this in detail for two input methods.
After installing an IME, you should be able to switch between English and Pinyin with the IME-specific shortcut, typically it's Ctrl+Space by default.
| Set chinese in xkb |
1,600,274,121,000 |
How can an original command issued at the command line be acquired without using proc or any other non standard tool?
When printing a process list using ps, the arguments passed in to initiate the command are shown without quotes, which is not how the original command was issued. It also appears that no combination of ps options can achieve this either.
After searching for quite some time, and even reviewing the hard-to-find ps source code, I found no simple answer. There are other posts mentioning the same issue, but they are either unanswered or the proposed solutions are unsatisfactory. The ps source code could probably be edited to achieve the necessary result, but this approach is not preferred, as ps is a non-standard package whose code differs significantly between operating systems: for example, the source code for ps on macOS is drastically different from the source code for ps on Ubuntu.
|
There is no portable way to get an unambiguous representation of the command line of another process. You didn't find the answer you're looking for because it's impossible. You either need to cope with an ambiguous representation, or to use a different method depending on which OS your code is running on, or to find someone who's done that work for you.
The POSIX specification of ps does not specify exactly how the args field is formatted, merely that it contains “the command with all its arguments as a string” and that it may be truncated. In practice, all the implementations I've seen concatenate the arguments with a space in between. That's the best you can do with POSIX.
If you want an unambiguous representation of a process's command line argument, you can find it in /proc/PID/cmdline on Linux and other Unix variants with a Solaris-like proc filesystem (so not BSD systems such as macOS). The arguments are separated by null bytes, which can't appear in an argument, so the representation is unambiguous. I don't think there's a way to get this information with the Linux procps ps utility, and even if there was it would be specific to Linux so it wouldn't be any more portable (less portable, in fact, since looking inside /proc works even on Linux kernels that have a different ps utility such as the one from BusyBox).
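As a minimal illustration on Linux (reading /proc directly, not via ps): a shell can inspect its own argv, where each argument is terminated by a NUL byte:

```shell
# Print the current shell's arguments one per line; $$ is the shell's PID.
# Linux-only: relies on the /proc filesystem.
tr '\0' '\n' < /proc/$$/cmdline
```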
I don't know how to get an unambiguous representation of the command line arguments on macOS.
I know of one program that's done the non-portable part: the Python library psutil. I can confirm that it lets you obtain an unambiguous representation of the command line of a process on Linux. It should also work on macOS since the documentation doesn't mention any restriction, but I haven't tested.
python3 -c 'import os, psutil; print(psutil.Process(os.getpid()).cmdline())'
| process list with quoted arguments, portably |
1,600,274,121,000 |
There are many times I get single page PDFs that I want to convert to JPEGs and crop the excess whitespace off of.
Here is the current set of commands I have which accomplishes what I want:
gm convert -density 300 -trim INPUT.PDF TMP.PNG
gm convert -bordercolor white -border 10 TMP.PNG OUTPUT.JPG
rm TMP.PNG
I am trying to figure out how to condense these commands into a single command, and avoid creating the temporary TMP.PNG for processing.
This is my current attempt at consolidating the above commands:
gm convert INPUT.PDF -density 300 -trim -bordercolor white -border 10 OUTPUT.JPG
The problem I have with this command is that it generates a very blurry JPEG. Below, the first image (on the left) is a sample of the undesired result generated by my single-command attempt. The second image (on the right) is a sample of the crisp, high-quality result I am looking for that I currently have to use multiple commands to achieve. What is the correct way to consolidate the commands at the beginning of my post?
|
One of the few things I've learned the hard way about ImageMagick is that the order of arguments can be vital. In particular, you are providing an input PDF file and then suggesting a density to use when converting it to an image, whereas you need to set the density before reading the PDF. Simply swap those two items and you should get the same output resolution as before:
gm convert -density 300 INPUT.PDF -trim -bordercolor white -border 10 OUTPUT.JPG
| Consolidate several GraphicsMagick (ImageMagick) commands into one |
1,600,274,121,000 |
When I run the ls -laGp command in macOS, some of the folders are highlighted with this color. What does that mean?
I researched but couldn't find any documentation on it. If you could also let me know which sources to check first in situations like this, that would be awesome.
|
From the BSD ls man page:
LSCOLORS The value of this variable describes what color to use for which attribute when colors are enabled with CLICOLOR. This string is a concatenation of pairs of the format fb, where f is the foreground color and b is the background color.
The color designators are as follows:
a black
b red
c green
d brown
e blue
f magenta
g cyan
h light grey
A bold black, usually shows up as dark grey
B bold red
C bold green
D bold brown, usually shows up as yellow
E bold blue
F bold magenta
G bold cyan
H bold light grey; looks like bright white
x default foreground or background
Note that the above are standard ANSI colors. The actual display may differ depending on the color capabilities of the terminal in use.
The order of the attributes are as follows:
1. directory
2. symbolic link
3. socket
4. pipe
5. executable
6. block special
7. character special
8. executable with setuid bit set
9. executable with setgid bit set
10. directory writable to others, with sticky bit
11. directory writable to others, without sticky bit
The default is "exfxcxdxbxegedabagacad", i.e. blue foreground and default background for regular directories, black foreground and red background for setuid executables, etc.
The default is "exfxcxdxbxegedabagacad"
The above means that a "11. directory writable to others, without sticky bit" gets the pair ad, i.e. black foreground with a brown background (which commonly renders as the mustard color).
Note that the directories highlighted mustard in your example are all writable to others while the non-highlighted directories are not.
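Since every attribute occupies exactly two characters, you can index into the string to check a given attribute; a small sketch using bash substring expansion (attribute n starts at offset 2*(n-1), so attribute 11 starts at offset 20):

```shell
# Extract the color pair for attribute 11 from the default LSCOLORS value.
lscolors="exfxcxdxbxegedabagacad"
printf '%s\n' "${lscolors:20:2}"
# → ad
```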
| What does 'mustard color' highlighted directories mean? |
1,600,274,121,000 |
I have a set of lines (item + description) which I want to run through fzf -m. For example:
item1: Some description
item1: Another description
item2: Yet another description
After selection I would like fzf to return the line numbers (e.g. 1 3) instead of the lines themselves because: 1) I don't want to include the description; 2) The items are not unique.
True, I can just prefix the lines with the line numbers first:
1: item1: Some description
2: item1: Another description
3: item2: Yet another description
then extract it later. But I think it would be great if fzf could be instructed to do this. It would make some things easier and open up more possibilities for the tool.
|
fzf can already do this with --with-nth to change the presented (and searched-for) line to only some fields of the original line. So we start with:
1: item1: Some description
2: item1: Another description
3: item2: Yet another description
then use:
fzf -d: --with-nth 2..
which means to skip showing the first field (the fields are separated by colon). fzf will then return something like this:
1: item1: Some description
3: item2: Yet another description
from which you can extract the line numbers.
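Putting the round trip together, a sketch (items.txt is a hypothetical input file; fzf itself is interactive, so this is illustrative rather than something to run unattended):

```shell
# Number each line (width 1, ': ' separator), let fzf display only the text,
# then keep just the leading line numbers of the selected lines.
nl -ba -w1 -s': ' items.txt | fzf -m -d: --with-nth 2.. | cut -d: -f1
```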
| fzf: How to return "ID" / line numbers? |
1,600,274,121,000 |
I'd like to save the output from running a specific command on the command line. However, after running the command, I am prompted to type in the name of my account to confirm that I would indeed like to run this command.
If I simply use command > file.txt, the prompt itself is saved to the file and I can't type the confirmation. What command can I use to save this output?
|
You can use tee, which writes both to stdout and a file.
command | tee file.txt
| Redirect output to file when there is additional prompt |
1,600,274,121,000 |
System:
Red Hat Enterprise Linux Server release 7.6 (Maipo), 3.10.0-957.el7.x86_64
GOAL:
Collect configurations for data from multiple servers to validate they are the same.
What Works:
ssh $SERVERNAME 'yum list installed | grep -E "krb|java|libkadm|realmd|oddjob|sssd|adcli"' >> $FILENAME
What Doesn't Work:
ssh $SERVERNAME 'adcli info domain.name' >> $FILENAME
ssh $SERVERNAME 'realm list' >> $FILENAME
Error Received:
bash: adcli: command not found
bash: realm: command not found
Full Script:
#!/bin/bash
DATE=`date '+%Y%m%d'`
SERVERLIST=(
#"server1.com"
"server2.com"
"server3.com"
#"server4.com"
"server5.com"
)
for SERVERNAME in ${SERVERLIST[*]}
do
FILENAME=${SERVERNAME}-config.${DATE}
ssh $SERVERNAME 'yum list installed | grep -E "krb|java|libkadm|realmd|oddjob|sssd|adcli"' >> $FILENAME
ssh $SERVERNAME 'adcli info domain.name' >> $FILENAME
ssh $SERVERNAME 'realm list' >> $FILENAME
ssh $SERVERNAME 'cat /etc/sssd/sssd.conf' >> $FILENAME
done
|
GracefulRestart is almost certainly correct.
Now verify: compare the output of $PATH between executing on the server directly and executing via ssh:
[server2.com]# echo $PATH
[jumpbox]# ssh server2.com 'echo $PATH'
If the directories holding adcli and realm are missing from the $PATH environment variable seen over ssh, then the simplest fix is to simply use the full path to each command.
| Commands Not Found when Passed through SSH |
1,600,274,121,000 |
How can I use printf to print a row of minus symbols?
when I try: printf "-----------\\n"
I get:
bash: printf: - : invalid option
printf: usage: printf [-v var] format [arguments]
when I try: printf "\-\-\-\-\-\-\-\-\-\-\-\\n"
I get: \-\-\-\-\-\-\-\-\-\-\-
|
It is an inefficient (and error-prone) way to use printf without format specifiers. You generally define them to let printf know what type of output is being formatted. It should have been written as:
printf '%s\n' "-----------"
This way printf treats ----------- as a string, consumed by the format specifier that takes a string keyword (%s). The \n after the specifier adds a newline after the string is printed.
With your attempt, after quote removal printf interprets the leading dashes as one of its command-line flags, which it does not understand.
Another hacky way of doing it is to tell printf that its command-line options are complete and that everything following should be treated as operands. Most shell built-ins and external commands support this with a -- after the command name, i.e.:
printf -- "-----------\n"
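A related idiom, if the goal is a row of N dashes without typing them all out: reuse a zero-width string conversion once per argument (the -- is still needed here, since the format itself starts with a dash):

```shell
# %.0s consumes one argument and prints nothing, so each argument yields '-'.
printf -- '-%.0s' {1..11}; printf '\n'
# → -----------
```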
| Printf - print repeating minus symbols [duplicate] |
1,600,274,121,000 |
I've read this thread https://unix.stackexchange.com/a/7718/256195, but it only works if the variable doesn't contain any tabs/spaces; in my case it does contain spaces, as in the example below:
"this is a test" this_is_a_solid_line_that_doesnot_contain_tab_or_spaces
The column command will split this is a test into separate columns too, but I want something that acts only on the two fields "this is a test" and this_is_a_solid_line_that_doesnot_contain_tab_or_spaces.
Purpose: I have a bunch of lines like the above in a file that aren't aligned properly.
|
Assuming the input doesn't contain | characters, you could convert those sequences of whitespace that are not inside quotes to | (or any other character that doesn't occur in the input) and then pipe to column -ts'|':
<input.txt perl -lpe 's/(".*?")|\s+/$1||"|"/ge' | column -ts'|'
| Layout tab/spaces [closed] |
1,600,274,121,000 |
I have a list of names and I have a binary file. I want to copy that binary file so that there is one copy for each member of the list. The list is a text file with one name in each row. I keep coming back to
for i in $(cat ../dir/file); do cp binaryfile.docx "$i_binaryfile.docx"; done
There is no error. Only one file titled _binaryfile.docx is created.
I have looked at this [Copy-a-file-to-a-destination-with-different-names]
and [duplicate-file-x-times-in-command-shell] but I cannot see how they are different.
|
It should be:
for i in $(cat file); do cp binaryfile.docx "${i}_binaryfile.docx"; done
EDIT:
You can reproduce it with this example:
$ i=1
$ echo $i
1
$ echo $i_7
$ echo ${i}_7
1_7
The point is that _ (underscore) character is allowed in variable
name. You can read about it in man bash but keep in mind that it's
written in a very technical, succinct language:
name A word consisting only of alphanumeric characters and underscores, and
beginning with an alphabetic character or an underscore. Also referred to
as an identifier.
And later on:
A variable is a parameter denoted by a name.
And:
${parameter}
The value of parameter is substituted. The braces are required when
parameter is a positional parameter with more than one digit, or when
parameter is followed by a character which is not to be interpreted as
part of its name. The parameter is a shell parameter as described above
PARAMETERS) or an array reference (Arrays).
So if we have a variable named i and we want to print its
value next to adjacent _ we have to enclose it in {} to tell
Bash that the name of the variable ends before _.
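As a side note, a sketch of a variant that also copes with names containing spaces (same file names as in the question), reading one name per line instead of word-splitting $(cat ...):

```shell
# Quoting "$name" keeps any embedded spaces in a single destination name.
while IFS= read -r name; do
    cp binaryfile.docx "${name}_binaryfile.docx"
done < file
```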
| Copy a file a number of times with names that come from a list |
1,600,274,121,000 |
I need to do simple financial calculations on Linux and had been using wcalc for this until I stumbled upon a wrong result caused by floating-point number issues. Is there a calculator (I really would prefer the command-line) that one can rely on for this task? One that doesn't use floats internally?
|
As pointed out in the comments to my question, bc doesn't have this problem. Another solution is the more powerful command line of Qalculate!, which seems to fulfill my needs.
| Financial calculations without floating point numbers in console? |
1,600,274,121,000 |
If you put this link in a browser:
https://unix.stackexchange.com/q/453740#453743
it returns this:
https://unix.stackexchange.com/questions/453740/installing-busybox-for-ubuntu#453743
However cURL drops the Hash:
$ curl -I https://unix.stackexchange.com/q/453740#453743
HTTP/2 302
cache-control: no-cache, no-store, must-revalidate
content-type: text/html; charset=utf-8
location: /questions/453740/installing-busybox-for-ubuntu
Does cURL have an option to keep the Hash with the resultant URL? Essentially I
am trying to write a script that will resolve URLs like a browser - this is what
I have so far but it breaks if the URL contains a Hash:
$ set https://unix.stackexchange.com/q/453740#453743
$ curl -L -s -o /dev/null -w %{url_effective} "$1"
https://unix.stackexchange.com/questions/453740/installing-busybox-for-ubuntu
|
curl downloads whole pages.
A # points to a fragment.
The two are not compatible.
hash
The symbol # is used at the end of a web page link to mark a position inside a whole web page.
Fragment URLs
...convention called "fragment URLs" to refer to anchors within an HTML document.
What is it when a link has a pound "#" sign in it
It's a "fragment" or "named anchor". You can use it to link to part of a document.
Wikipedia: Uniform Resource Locator (URL)
An optional fragment component preceded by a hash (#). The fragment contains a fragment identifier providing direction to a secondary resource, such as a section heading in an article identified by the remainder of the URI. When the primary resource is an HTML document, the fragment is often an id attribute of a specific element, and web browsers will scroll this element into view.
Its main use is to move the "presentation layer" (what is viewed) to the start of an item.
curl
There is no "presentation layer" in curl, its goal is to download whole pages, not parts or fragments of pages. Therefore, there is no use for a "fragment" marker in curl. It is simply ignored by curl.
Workaround
Re-append the tag to the (redirected) link:
originallink='https://unix.stackexchange.com/q/453740#453743'
wholepage=$(curl -Lso /dev/null -w %{url_effective} "$originallink")
if [ "$originallink" != "${originallink##*#}" ]; then
newlink=$wholepage#${originallink##*#}
else
echo "link contains no fragment"
newlink="$wholepage"
fi
echo "$newlink"
Will print:
https://unix.stackexchange.com/questions/453740/installing-busybox-for-ubuntu#453743
A much faster solution is to not download the page at all; it is being redirected to /dev/null anyway. Remove the -L option and ask curl what the link would be if the (first) redirect were followed. The first redirect is enough in this case and most others.
wholepage=$(curl -so /dev/null -w %{redirect_url} "$originallink")
| cURL url_effective with Hash |
1,600,274,121,000 |
I used netstat -anlptu to check for open ports.
This command is now somewhat deprecated, so I started using ss -anptu, but each entry takes 2 lines. The result is not practical.
I use Debian.
netstat -anlptu:
tcp 0 0 0.0.0.0:6001 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:53 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN -
tcp 0 0 192.168.0.106:xxxxx 192.0.x.y:443 ESTABLISHED 5081/firefox
Easy to read and clear.
ss -anptu:
tcp LISTEN 0 20 127.0.0.1:25 *:*
users:(("exim4",pid=823,fd=3))
tcp LISTEN 0 128 *:22 *:*
users:(("sshd",pid=807,fd=3))
tcp ESTAB 0 272 192.168.1.200:22 78.224.x.y:36028
users:(("sshd",pid=849,fd=3),("sshd",pid=840,fd=3))
tcp LISTEN 0 20 ::1:25 :::*
users:(("exim4",pid=823,fd=4))
tcp LISTEN 0 128 :::22 :::*
users:(("sshd",pid=807,fd=4))
This is clearly not easy to read.
Some columns are not aligned.
If I redirect to less or more:
tcp LISTEN 0 20 127.0.0.1:25 *:* users:(("exim4",pid=823,fd=3))
tcp LISTEN 0 128 *:22 *:* users:(("sshd",pid=807,fd=3))
tcp ESTAB 0 40 192.168.1.200:22 78.224.x.y:36028 users:(("sshd",pid=849,fd=3),("sshd",pid=840,fd=3))
tcp LISTEN 0 20 ::1:25 :::* users:(("exim4",pid=823,fd=4))
tcp LISTEN 0 128 :::22 :::* users:(("sshd",pid=807,fd=4))
Each entry takes one line, but the columns are not aligned. Once again, not easy to read.
How can I get readable output from ss?
|
Use column. For example:
ss -anpt | column -t -x | less
I get output like:
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 100 127.0.0.1:143 0.0.0.0:*
LISTEN 0 100 0.0.0.0:465 0.0.0.0:*
LISTEN 0 128 0.0.0.0:42449 0.0.0.0:*
LISTEN 0 10 0.0.0.0:5298 0.0.0.0:* users:(("pidgin",pid=30003,fd=19))
Note: I run a terminal that is 282 columns wide (a maximum-width terminal on my 1440p screen with Liberation Mono Regular 11). YMMV on narrower terminals.
column performs a similar, but not identical, job to the columns program from GNU AutoGen. It does a good job of auto-formatting text into tabulated columns.
I'm not sure where the original source for column came from, but on debian systems, it's in the bsdmainutil package. It may be in a similarly named package on other Linux distributions.
Package: bsdmainutils
Version: 11.1.2
Description-en: collection of more utilities from FreeBSD
This package contains lots of small programs many people expect to find when
they use a BSD-style Unix system.
.
It provides banner (as printerbanner), calendar, col, colcrt, colrm, column,
from (as bsd-from), hexdump (or hd), look, lorder, ncal (or cal), ul, and
write (as bsd-write).
.
This package used to contain whois and vacation, which are now distributed in
their own packages. Also here was tsort, which is now in the "coreutils"
package.
The /usr/share/doc/bsdmainutils/copyright file in the package says:
This is a collection of programs from 4.4BSD-Lite that have not (yet)
been re-written by FSF as GNU. It was constructed for inclusion in
Debian Linux. As programs found here become available from GNU sources,
they will be replaced.
and
This package may be redistributed under the terms of the UCB BSD
license:
Copyright (C) 1980 -1998 The Regents of the University of California.
All rights reserved.
| ss command take 2 lines for each result |
1,600,274,121,000 |
I want to link the ssh command to autossh command, so for example when I type ssh [email protected] it will execute autossh [email protected].
I tried ln -s autossh ssh, but it doesn't work.
|
The command you are looking for is alias.
The general syntax for the alias command varies somewhat according to the shell. In the case of the bash shell (or any sh-like shell) it is
alias [-p] [name="value"]
So it seems you want:
alias ssh="autossh"
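Note that an alias only takes effect in interactive shells (put it in ~/.bashrc to persist it across sessions). The symlink approach failed because a symlink just runs the same binary under another name. If you need the replacement to work in scripts too, a small wrapper script earlier in PATH is one option. This is a sketch under assumptions: ~/bin is on your PATH ahead of the real ssh, and the real ssh lives at /usr/bin/ssh; since autossh itself invokes ssh, point it at the real binary to avoid recursion:

```shell
# Create a wrapper named "ssh" that hands everything to autossh
mkdir -p ~/bin
cat > ~/bin/ssh <<'EOF'
#!/bin/sh
# AUTOSSH_PATH tells autossh which ssh binary to run,
# so it does not re-invoke this wrapper in a loop
AUTOSSH_PATH=/usr/bin/ssh exec autossh "$@"
EOF
chmod +x ~/bin/ssh
```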
| Link command to another command |
1,600,274,121,000 |
I'm using a piped command to migrate a big production DB from one host to another using this command:
mysqldump <someparams> | pv | mysql <someparams>
And I need to extract the line 23 (or let's say the first X lines) (saved as file or simply in bash output) from the SQL passing from one server to another.
What I've tried:
Piping the output into less, at least to see the output scrolling, but no luck
mysqldump <someparams> | pv | mysql <someparams> | less
Read about sed, but it's not useful to me
Using head to write to a file, but it is empty
mysqldump <someparams> | pv | mysql <someparams> | head -n 25 > somefile.txt
The only requirement I have is that I cannot save this .sql file.
Any idea?
Thanks
|
With zsh
mysqldump <someparams> |
pv > >(sed '22,24!d' > saved-lines-22-to-24.txt) |
mysql <someparams>
With bash (or zsh):
mysqldump <someparams> |
pv |
tee >(sed '22,24!d' > saved-lines-22-to-24.txt) |
mysql <someparams>
(though beware that as bash doesn't wait for that sed process, it's not guaranteed that saved-lines-22-to-24.txt will be complete by the time you run the next command in the script).
Or you could have sed to the writing:
mysqldump <someparams> |
pv |
sed '22,24 w saved-lines-22-to-24.txt' |
mysql <someparams>
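The sed w variant is easy to sanity-check with toy commands (seq standing in for mysqldump, wc -l for mysql):

```shell
# All 100 lines still flow through the pipe; lines 22-24 are also
# copied to the side file as sed processes them.
seq 100 | sed '22,24 w saved-lines-22-to-24.txt' | wc -l   # prints 100
cat saved-lines-22-to-24.txt                               # prints 22 23 24
```

Because sed has exited by the time the pipeline finishes, the side file is guaranteed complete here, unlike the bash tee >(...) version.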
To have it as output, with zsh:
{mysqldump <someparams> |
pv > >(sed '22,24!d' >&3) |
mysql <someparams>} 3>&1
or bash/zsh:
{ mysqldump <someparams> |
pv |
tee >(sed '22,24!d' >&3) |
mysql <someparams>
} 3>&1
| Get first N lines of output from a pipe operation |
1,600,274,121,000 |
I'm looking to create a script that, when executed, will look at a directory, search for all files, automatically discover the filename patterns, and then move the files based on the additional logic stated below.
Say I have the following files in a folder:
aaa.txt
temp-203981.log
temp-098723.log
temp-123197.log
temp-734692.log
test1.sh
test2.sh
test3.sh
The script should automatically be able to search the directory and it should find that there are 4 files (temp-XXX.log) and 3 files (testXXX.sh) that have a matching prefix in their name. Then once having found the number of files it should compare it to a defined limit, say 3.
If the number of files matching the specified name is greater than the limit it should then move the found files into a folder named after the part of the file names that matched.
So the parent folder from above should now look like:
aaa.txt
temp.log (This would be the folder that contains temp-734692.log, temp-123197.log, temp-098723.log, temp-203981.log)
test.sh (This would be the folder that contains test1.sh, test2.sh, test3.sh)
Hope this makes sense.
P.S. I am using ASH for this script so it will need to be able to run without many of the fancy bash abilities, otherwise this would be easier.
Thanks!
EDIT: Clarity changes in the beginning. Also, it might be easier if I supply a predetermined delimiter, say "&", which all the file names will have. The script will still need to create variable folder names based on the file names before the delimiter, but I think this will make things more clear and easier.
|
Check whether it works, and I will add an explanation of how it works. I tested it in dash.
Note: file names should not contain spaces or newlines.
#!/bin/dash
limit=1
printf "%s\n" * |
sed 's/[-0-9]*\..*$//' |
uniq -c |
awk -v lim=${limit} '$1 >= lim {print $2}' |
sort -r |
while read -r i; do
for j in "${i}"*; do
[ -f "$j" ] || continue
dir=${i}.${j#*.}
[ -d "$dir" ] || mkdir "$dir"
mv -v "$j" "$dir"
done
done
There is one problem here - the case when the file name equals the future directory name, as with aaa.txt. In that case the file name has no extra characters, so nothing is removed from it; therefore the new directory name would be the same as the file name, which causes these errors:
mkdir: cannot create directory ‘aaa.txt’: File exists
mv: 'aaa.txt' and 'aaa.txt' are the same file
One workaround for this problem is to check whether the intended directory name equals the file name and, if so, add a number to the directory name, e.g. aaa1.txt.
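The prefix-extraction step can be checked in isolation (sample names from the question):

```shell
# sed strips the trailing digits/hyphens plus the extension,
# leaving only the shared prefix for uniq -c to count
printf '%s\n' aaa.txt temp-203981.log test1.sh | sed 's/[-0-9]*\..*$//'
# prints: aaa temp test (one per line)
```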
Demonstration
Before the script execution.
$ tree
.
├── aaa.txt
├── temp-098723.log
├── temp-123197.log
├── temp-203981.log
├── temp-734692.log
├── temp-new-file123.log
├── temp-new-file-2323-12.log
├── temp-new-file-342.log
├── test1.sh
├── test2.sh
└── test3.sh
0 directories, 11 files
After the script execution: script.sh
$ tree
.
├── aaa.txt
├── temp.log
│ ├── temp-098723.log
│ ├── temp-123197.log
│ ├── temp-203981.log
│ └── temp-734692.log
├── temp-new-file.log
│ ├── temp-new-file123.log
│ ├── temp-new-file-2323-12.log
│ └── temp-new-file-342.log
└── test.sh
├── test1.sh
├── test2.sh
└── test3.sh
3 directories, 11 files
| How to move all files matching a certain name to a new folder if the number of matching files is greater than 10? |
1,600,274,121,000 |
I accidentally executed man ls > info.txt and now I don't know how to recover the contents of the file.
|
You wrote the output of the "man ls" command into a file that you called "info.txt", overwriting whatever it contained.
If your info.txt file was empty before, you can easily delete the file and create a new, empty one with these commands:
# rm -f info.txt
# touch info.txt
Or:
you can open the info.txt file and delete its content.
For example, if you use the "nano" editor, you can follow these steps:
# nano info.txt
# ctrl+k
(press ctrl+k on each line to cut it)
# ctrl+x
(then confirm to save your edits)
But if your info.txt file contained something before it was overwritten, unfortunately you can't retrieve it: the > redirection truncates the file before anything is written.
| How do I recover the contents of a file? [duplicate] |
1,600,274,121,000 |
I'm using this command, which searches pacman.log for packages updated today and converts them into a conky string:
tail -500 /var/log/pacman.log | grep -e "\[$(date +"%Y-%m-%d") [0-5][0-9]:[0-9][0-9]\] \[ALPM\] upgraded" | sed 's/^.*\([0-2][0-9]:[0-5][0-9]\).*upgraded \([^ ]*\).*/${color2}\2${goto 250}${color1}\1/' | tail -18
With tail -18 the maximum number of lines is 18.
What is the best way to append new lines so that the stream always has 18 lines?
|
You can do (with a shell with support for zsh's {x..y} form of brace expansion like zsh, bash, ksh93 or yash -o braceexpand):
{
printf '%.0s\n' {1..18}
your-command
} | tail -n 18
Note that it prepends newline as opposed to appending them. To append, you could do:
your-command | tail -n 18 | awk '{print};END{while (NR++ < 18) print ""}'
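A quick sanity check of the appending variant with toy input (seq stands in for your command):

```shell
# 5 input lines are padded with empty lines up to 18 total
seq 5 | tail -n 18 | awk '{print};END{while (NR++ < 18) print ""}' | wc -l   # prints 18
```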
| Append new lines to stream, until certain number is reached |
1,600,274,121,000 |
I have three commands in one shell script: wget, 7z, and rm.
Inside list.txt
URL1 archive_01.zip
URL2 archive_02.zip
URL2 archive_03.zip
Download archives:
while read -r url filename; do
wget -O "$filename" "$url";
done < list.txt
$ ls
archive_01.zip
archive_02.zip
archive_03.zip
Extract archives:
for file in *zip; do
7z x $file;
done
$ ls
archive_01.zip
archive_02.zip
archive_03.zip
folder_01
folder_02
folder_03
Delete archives:
rm *zip
Is it possible to make these three commands into a nested loops? If so, how?
I'm also curious, because the commands above will be executed like this (if I'm not wrong):
Download URL1 > Download URL2 > Download URL3
Extract Archive1 > Extract Archive2 > Extract Archive3
Delete Archive1 > Delete Archive2 > Delete Archive3
how to make the execution be like this?
Download URL1 > Extract Archive1 > Delete Archive1
Download URL2 > Extract Archive2 > Delete Archive2
Download URL3 > Extract Archive3 > Delete Archive3
|
Something like this?
#! /bin/bash -
while read -r url filename; do
wget -O "$filename" "$url"
7z x "$filename"
rm -- "$filename"
done < list.txt
| How to combine multiple commands into a nested loops? |
1,600,274,121,000 |
I'm working in the same directory of the files.
I have files with three different extensions.
I want to perform each one of the five commands on a file with the specific extension by passing them as arguments to the for loop.
example:
I want when I run the code like: $my_code.sh *.zap *.F *.T
I want the script to perform each command on the files with the matching extension, prepare a list of commands at the end, and append them as output.
When I run the code as is, it just takes the first argument (the *.zap files) and performs all the commands on it, but what I want is to apply each command to the files with the matching extension.
here is my code:
#!/bin/bash
frequ=$1
tim=$2
zap=$3
ls -1 * |
for i in "$@"; do
echo pav -g \""$frequ"_"avprof.ps/cps"\" -DT $frequ
echo pav -g \""$tim"_"fprof.ps/cps"\" -Gd $tim
echo pav -g \""$tim"_"ds.ps/cps"\" -j $tim
echo pav -g \""$frequ"_"stack.ps/cps"\" -R $frequ
echo psrplot -D \""$zap"_"bp.ps/cps"\" -p freq+ $zap
done >> ps_files.txt
|
It makes no sense to put all the commands into a single for loop in your case. You don't have actions common to all files - each extension has its own commands, and they don't overlap. Thus, you would need if or case to distinguish one extension from another. Why do that? It is easier to create a separate loop for each extension.
I decided not to pass the extensions to the script, but to write them into the code directly. Also, I picked printf - it is more suitable for this task.
Usage: ./my_script.sh > ps_files.txt
#!/bin/bash
for i in *.zap; do
printf 'psrplot -D "%s_bp.ps/cps" -p freq+ "%s"\n' "$i" "$i"
done
for i in *.T; do
printf 'pav -g "%s_fprof.ps/cps" -Gd "%s"\n' "$i" "$i"
printf 'pav -g "%s_ds.ps/cps" -j "%s"\n' "$i" "$i"
done
for i in *.F; do
printf 'pav -g "%s_avprof.ps/cps" -DT "%s"\n' "$i" "$i"
printf 'pav -g "%s_stack.ps/cps" -R "%s"\n' "$i" "$i"
done
Testing
I created six files:
$ ls -1
1.F
1.T
1.zap
2.F
2.T
2.zap
Output
# run my script
$ ./my_script.sh > ps_files.txt
# and look at the ps_files.txt content
$ cat ps_files.txt
psrplot -D "1.zap_bp.ps/cps" -p freq+ "1.zap"
psrplot -D "2.zap_bp.ps/cps" -p freq+ "2.zap"
pav -g "1.T_fprof.ps/cps" -Gd "1.T"
pav -g "1.T_ds.ps/cps" -j "1.T"
pav -g "2.T_fprof.ps/cps" -Gd "2.T"
pav -g "2.T_ds.ps/cps" -j "2.T"
pav -g "1.F_avprof.ps/cps" -DT "1.F"
pav -g "1.F_stack.ps/cps" -R "1.F"
pav -g "2.F_avprof.ps/cps" -DT "2.F"
pav -g "2.F_stack.ps/cps" -R "2.F"
| how to pass many arguments to for loop? |
1,600,274,121,000 |
I have a large list of IP addresses (most are IPv4, but a few are IPv6), followed by a space and then a domain name, followed by another space and the same domain name with "www." in front of it. Each instance is on its own line. The list looks like this (but is much larger):
23.212.109.137 at.ask.com www.at.ask.com
216.58.206.74 maps.googleapis.com www.maps.googleapis.com
2400:cb00:2048:1::6812:32a5 litscape.com www.litscape.com
104.16.244.35 loc.gov www.loc.gov
216.70.104.235 mbu.edu www.mbu.edu
I would like to know two find and replace commands; each to generate another text file after the last.
1) The first command should find and replace everything before the "www." with "http://" so that the lines of the second text file will look like this:
http://www.at.ask.com
http://www.maps.googleapis.com
http://www.litscape.com
http://www.loc.gov
http://www.mbu.edu
2) The second command should find and replace all instances of "http://www." in the second text file so that the lines of the third text file will look like this:
at.ask.com
maps.googleapis.com
litscape.com
loc.gov
mbu.edu
Thank you.
|
With single awk command:
awk '{ print $2 > "domains.txt"; print "http://"$3 > "domains_http.txt" }' file
Results:
> cat domains_http.txt
http://www.at.ask.com
http://www.maps.googleapis.com
http://www.litscape.com
http://www.loc.gov
http://www.mbu.edu
> cat domains.txt
at.ask.com
maps.googleapis.com
litscape.com
loc.gov
mbu.edu
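If you'd rather have the two literal find-and-replace commands as asked, sed can do each step (the file names here are placeholders for your actual files):

```shell
# Step 1: replace everything before "www." with "http://"
# (greedy ^.* matches up to the last " www." on each line)
sed 's|^.* www\.|http://www.|' list.txt > step2.txt

# Step 2: strip the "http://www." prefix
sed 's|^http://www\.||' step2.txt > step3.txt
```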
| Find and Replace Everything Before a String of Text |
1,600,274,121,000 |
Let's say I was told to make the following sudoers file change... What does that mean, and how do I actually do it?
www-data ALL = NOPASSWD: /bin/rm /etc/vsftpd/vusers/[a-zA-Z0-9]*
I believe that it's setting the permissions for those folders, and I think I use the visudo command to do it... but I'm not sure what the www-data means or anything like that. Can anyone shed some light on this for me?
|
The first word in the line indicates who this line applies to. www-data is a user; you can find it in /etc/passwd.
NOPASSWD means this user doesn't have to authenticate when calling sudo. It is mostly used when a process, rather than a human, will be calling sudo.
The next part is what www-data has access to.
So this line means that the user www-data can execute /bin/rm on the files found in /etc/vsftpd/vusers/[a-zA-Z0-9]* as root without supplying a password.
| How to interpret line in sudoers |
1,600,274,121,000 |
This is what I get
$ echo $((5/2))
2
How to make $((5/2)) giving me 2.5 ?
|
You can't. Bash will only work with integers. For more precision, use something like bc.
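A couple of common substitutes (assuming bc and awk are installed, which they are on most systems):

```shell
# awk does floating-point arithmetic by default
awk 'BEGIN { print 5/2 }'        # prints 2.5

# bc needs a "scale" (number of decimal places) for division
echo "scale=2; 5/2" | bc         # prints 2.50
```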
| How to make $((5/2)) deliver a floating point number? [duplicate] |
1,600,274,121,000 |
When you browse a webpage with a browser, after the page's code is downloaded, the browser downloads all the assets (CSS, JS, and images).
Is there a way I can list all the URLs of the assets of a page (internal and external assets)?
The idea is to monitor changes to the external and internal assets.
|
I wrote a Python script that might do what you want:
#!/usr/bin/env python2
# -*- coding: ascii -*-
"""list_assets.py"""
from bs4 import BeautifulSoup
import urllib
import sys
import jsbeautifier
import copy
# Define a function to format script and style elements
def formatted_element(element):
# Copy the element
formatted_element = copy.copy(element)
# Get beautified element text
formatted_text = jsbeautifier.beautify(formatted_element.text)
# Indent all of the text
formatted_text = "\n".join([" " + line for line in formatted_text.splitlines()])
# Update the script body
formatted_element.string = "\n" + formatted_text + "\n "
# Return the beautified element
return(formatted_element)
# Load HTML from a web page
html = urllib.urlopen(sys.argv[1]).read()
# Parse the HTML
soup = BeautifulSoup(html, "html.parser")
# Extract the list of external image URLs
image_urls = [image['src'] for image in soup.findAll('img') if image.has_attr('src')]
# Extract the list of external CSS URLs
css_urls = [link['href'] for link in soup.findAll('link') if link.has_attr('href')]
# Extract the list of external JavaScript URLs
script_urls = [script['src'] for script in soup.findAll('script') if script.has_attr('src')]
# Extract the list of internal CSS elements
styles = [formatted_element(style) for style in soup.findAll('style')]
# Extract the list of internal scripts
scripts = [formatted_element(script) for script in soup.findAll('script') if not script.has_attr('src')]
# Print the results
print("Images:\n")
for image_url in image_urls:
print(" %s\n" % image_url)
print("")
print("External Style-Sheets:\n")
for css_url in css_urls:
print(" %s\n" % css_url)
print("")
print("External Scripts:\n")
for script_url in script_urls:
print(" %s\n" % script_url)
print("")
print("Internal Style-Sheets:\n")
for style in styles:
print(" %s\n" % style)
print("")
print("Internal Scripts:\n")
for script in scripts:
print(" %s\n" % script)
These are the (note-worthy) packages I used:
urllib to load the HTML from a web-page
BeautifulSoup to parse the HTML
jsbeautifier to beautify/format embedded JavaScript and CSS code
The script prints the URLs for external resources and the element tags themselves (prettified/beautified) for internal resources. I made some off-the-cuff stylistic choices about how to format the results which can easily be modified or improved upon.
To see it in action, here is an example HTML file (assets.html):
<!doctype html>
<html lang=en>
<head>
<meta charset=utf-8>
<title>assets.html</title>
<link rel="stylesheet" type="text/css" href="mystyle.css">
<style>
body {
background-color: linen;
}
h1 {
color: maroon;
margin-left: 40px;
}
</style>
<script src="myscripts.js"></script>
</head>
<body>
<script>alert( 'Hello, world!' );</script>
<img src="https://www.python.org/static/community_logos/python-logo.png">
<p>I'm the content</p>
</body>
</html>
Here is how we might execute the script (on a local file):
python list_assets.py assets.html
And here is the output:
Images:
https://www.python.org/static/community_logos/python-logo.png
External Style-Sheets:
mystyle.css
External Scripts:
myscripts.js
Internal Style-Sheets:
<style>
body {
background - color: linen;
}
h1 {
color: maroon;
margin - left: 40 px;
}
</style>
Internal Scripts:
<script>
alert('Hello, world!');
</script>
Finally, here are some posts that I used as references and which you might find useful:
Web Scraping with Beautiful Soup
Beautifulsoup to extract all external resources from html
Checking for attributes in BeautifulSoup?
clone element with beautifulsoup
BeautifulSoup4: change text inside xml tag
| Listing webpage assets via CLI |
1,600,274,121,000 |
I have two large files ~9GB. CSV File 1 has columns A, B, C, D, E and CSV File 2 has columns B, C, F, G. The desired output is A, B, C, D, E, F, G. All I have been able to find is joining on similar columns and concatenating with the same columns, however here some match, and some do not. A sample output would look something along these lines:
A B C D E F G
1 2 3 4 5 6 7
NaN 1 2 NaN 1 2 1
So if the value doesn't exist for that column, I just want it to have a NaN value. I hope I have explained the problem well enough. Thanks!
Edit: Normally I would do this in Python but these massive files make it considerably more annoying iterating over chunks and then concatenating at the end. There appears to be a more straightforward way using bash that I am unaware of. Thanks!
|
This works based on the following facts:
(a) All fields are strictly tab separated
(b) Common columns in both files (B and C) have the same value
$ join --nocheck-order -eNaN -13 -22 -t$'\t' -o 1.1 1.2 1.3 1.4 1.5 2.3 2.4 b.txt c.txt
A B C D E F G
1 2 3 4 5 6 7
NaN 1 2 NaN 1 2 1
Files Sample:
$ cat b.txt
A B C D E
1 2 3 4 5
1 2 1
$ cat c.txt
B C F G
2 3 6 7
1 2 2 1
Join Options:
-13 -22 : Join based on file1 column3 (C) = file2 column2 (C)
-t$'\t' : tab delimiter for input and output
-o : Output format. 1.1 means file1, column1, and so on.
-e : Fill empty values with NaN
For more info see man join and even better info join
Alternative Solution with AWK
PS: Bear with me on the awk; I'm still learning it.
$ awk -F"\t" '{a[1]="";{for (i=1;i<=NF;i++) if (i==6 ||i==7) continue;else \
if ($i!="") a[1]=a[1]FS$i;else a[1]=a[1]FS"NaN";print a[1]}}' <(paste b.txt c.txt)
Update for comma separated input fields
As advised in your comments, since the csv files are comma-separated, this solution separates input fields by comma and outputs the results using tabs to be more readable.
awk 'BEGIN {FS=",";OFS="\t"}{a[1]="";{for (i=1;i<=NF;i++) if (i==6 ||i==7) continue;else \
if ($i!="") a[1]=a[1]OFS$i;else a[1]=a[1]OFS"NaN";print a[1]}}' <(paste b.txt c.txt)
If you need the output to be printed with commas as well, just replace the BEGIN section with {FS=OFS=","}
Though it is still unclear what you intend to do with common columns that have different values.
You can remove the part if (i==6 ||i==7) continue;else to see if the results fit your needs. This condition check skips field 6 (column B of file2) and field 7 (column C of file2), since those two columns of file 2 have so far been considered identical to the corresponding columns of file 1.
For the join solution:
Replace -t$'\t' with -t',' to read comma separated fields
For the common columns you can play with this output format:
join --nocheck-order -eNaN -13 -22 -t',' -o 1.1 1.2 2.1 1.3 2.2 1.4 1.5 2.3 2.4 b.txt c.txt
| Concatenate CSV with some shared columns |
1,600,274,121,000 |
I used the below command to list down all the daemons that are in a machine
/sbin/initctl list | awk '{ if ($1 == "tty") print $1" "$2; else print $1; }'
Now my next requirement is to get each daemon's command line, i.e. its running path.
For instance vmsd /usr/sbin/vmsd
I gave it a couple of tries using the ps aux command followed by grep, but I am not getting the results I expected.
Can someone help me out with this?
|
As DopeGhoti answered in the comments:
If you can find the PID, use cat /proc/$pid/cmdline
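A couple of ways to read it, using the current shell's PID ($$) as a stand-in for the daemon's PID:

```shell
# /proc/PID/cmdline is NUL-separated; translate the NULs for readability
tr '\0' ' ' < "/proc/$$/cmdline"; echo

# ps can report the same thing without touching /proc directly
ps -o args= -p "$$"
```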
| Need to get the command line of all the daemons that are running |
1,600,274,121,000 |
In all the different Linux desktop environments there is usually a list of all the (xorg) programs that can be run.
For example in my most recent Linux install (Arch running the Deepin Desktop Environment) if you press the Windows/Mac key it brings up a list of all the applications that use xorg, and shows what ones where installed recently.
How do I get that list of the installed xorg applications/packages from the command line?
|
Desktop entries for applications, or .desktop files, are generally a combination of meta information resources and a shortcut of an application. These files usually reside in /usr/share/applications or /usr/local/share/applications for applications installed system-wide, or ~/.local/share/applications for user-specific applications. User entries take precedence over system entries.
source :https://wiki.archlinux.org/index.php/Desktop_entries
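A sketch for listing the installed entries and their launch commands from the command line (field names per the Desktop Entry format; localized Name[xx]= lines are deliberately not matched):

```shell
# Pull the human-readable name and the Exec command out of each entry;
# only the first Name=/Exec= per file is kept
for f in /usr/share/applications/*.desktop; do
    [ -f "$f" ] || continue   # glob may not match on headless systems
    awk -F= '/^Name=/ && !n {n=$2} /^Exec=/ && !e {e=$2} END {print n " -> " e}' "$f"
done
```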
| Get list of all installed X applications |
1,600,274,121,000 |
I'm creating an animated gif using mogrify's convert. However, at the moment I run in on a folder with dozens of images in it, and I just instruct it to use all of the images it finds. However, I'd like it to only use files that were created on a particular date. Can I do such a thing?
Current command used:
convert -delay 10 -loop 0 images/* animation.gif
All my filenames are timestamps so alternatively I'd like to be able to specify a range, something like:
convert -delay 10 -loop 0 --start="images/147615000.jpg" --end="images/1476162527.jpg" animation.gif
I've tried the convert man pages but no luck. Is any of this possible?
|
This small shell script will loop through every file in the current directory and compare its timestamp to the range that is built by the start and end timestamps (here, October 10th). Matching files are added to the files array, and if there are any files in the array, it calls convert on them. Adjust the -gt 0 to -gt 1 if you want to require at least two files (or more).
Note that the creation time is not usually preserved in a file's (Unix) attributes, so this method could be fooled by a simple touch 1476158400.jpg, which would make an old file appear new. See below for a second option.
#!/usr/bin/env bash
start=$(date +%s -d 'Oct 10 2016')
end=$(date +%s -d 'Oct 11 2016')
files=()
for f in *
do
d=$(stat -c%Y "$f")   # %Y: last-modified (mtime)
[[ $d -ge $start ]] && [[ $d -le $end ]] && files+=("$f")
done
[[ ${#files[*]} -gt 0 ]] && convert -delay 10 -loop 0 "${files[@]}" animation.gif
Alternatively, if the filenames themselves encode the creation timestamp, then you could use a brute-force loop to find them:
start=$(date +%s -d 'Oct 10 2016')
end=$(date +%s -d 'Oct 11 2016')
files=()
for((i=start;i<=end;i++)); do [[ -f "${i}.jpg" ]] && files+=("${i}.jpg"); done
[[ ${#files[*]} -gt 0 ]] && convert -delay 10 -loop 0 "${files[@]}" animation.gif
| How do I use mogrify convert only on files with a certain creation date? |
1,600,274,121,000 |
If I start a terminal (any terminal, for example urxvt) like urxvt -e sleep 5, then a new terminal is launched but after 5 seconds the terminal closes, because the sleep program has ended. How can I start a terminal with a program on the command line, but have the terminal stay alive after that process has ended?
In practice, what I'd actually like to do is urxvt -e tmux new-session top, which opens urxvt with a tmux session that is running top. But when I press q, top ends which also causes tmux and urxvt to end as well. I'd like when I exit top for me to be taken to a shell within tmux.
|
The terminal (tmux) closes when it's executed the command you told it to execute. If you want to execute top and then an interactive shell, you need to tell it to do that. Combining commands is the job of the shell, so run an intermediate shell (which isn't interactive) and tell it to run the two commands in succession.
urxvt -e tmux new-session sh -c 'top; "$SHELL"'
| How to prevent the terminal from closing when the program it was started with ends? [duplicate] |
1,600,274,121,000 |
Why does this work when I type it direct on the commandline:
oldversion=12345
newversion=67890
sed -i "s/${oldversion}/${newversion}/g" "/home/user/MyDir_${newversion}/MyDir_${newversion}.reg"
But when I put it in a script it doesn't:
#!/bin/bash
oldversion=12345
newversion=67890
sed -i "s/${oldversion}/${newversion}/g" "/home/user/MyDir_${newversion}/MyDir_${newversion}.reg"
It is executed as ./myscript.sh
It generates the error
sed: -e expression #1, char 16: unknown option to `s'
|
Character 16 in the sed script should not exist: with these values, the expression s/12345/67890/g is only 15 characters long. This means your editor or input method replaced a plain ASCII " with some non-ASCII rendition of ", so the quote characters ended up inside the expression. My guess would be either “ or ” or ¨.
Use file on your file in order to get some guess on the encoding. It should be "ASCII". Anything else hints that the file contains characters you did not intend to be there.
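You can also locate the offending bytes directly. This sketch creates a file with curly quotes via printf escapes, then flags any non-ASCII byte with its line number:

```shell
# A curly quote sneaks in (written here with explicit UTF-8 byte escapes)
printf 'sed -i \342\200\234s/a/b/\342\200\235 file\n' > bad.sh

# In the C locale, [^ -~] matches any byte outside printable ASCII
LC_ALL=C grep -n '[^ -~]' bad.sh

# Or make every byte visible
cat -v bad.sh
```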
| sed works on commandline but not in script [closed] |
1,468,321,036,000 |
Copying and pasting from a website into the terminal can be very harmful; there is an example here.
You paste the following line: ls /some/thing/much/too/long/to/type/, and by pressing Enter the following command will be executed without confirmation:
ls /dev/null; clear; echo -n "Hello ";whoami|tr -d '\n';echo -e '!\nGotcha!!!\nThis is the first line of your /etc/passwd: ';head -n1 /etc/passwd
ls /some/thing/much/too/long/to/type/
A text editor like vim, nano, etc. can easily display the hidden command.
On a multi-user operating system, is it possible to find an additional option, package, or configuration file to display the hidden command in the terminal before it is executed?
|
The easiest and most portable way to see "hidden commands" is probably using
cat -v
For instance, I might run "cat -v" and paste into that terminal to see the nonprinting characters.
Further reading:
How can I see what my keyboard sends? (ncurses FAQ)
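For example, an ESC byte (0x1b) is invisible when printed raw, but cat -v renders it as ^[ so escape-sequence tricks become obvious:

```shell
# The "hidden" portion between the escape sequences is just text here,
# never executed; cat -v exposes the sequences themselves
printf 'harmless\033[8m; hidden part\033[0m\n' | cat -v
# prints: harmless^[[8m; hidden part^[[0m
```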
| Is there an additional option to display hidden command on the terminal? |
1,468,321,036,000 |
For example, there are two files, a.ppt and b.jpg.
And I can call a magic method to open them appropriately, just like:
magic_method a.ppt
magic_method b.jpg
And it opens LibreOffice, an image viewer, or whatever else fits the file type.
Is there any command or script for that?
|
You might be thinking of xdg-open:
xdg-open opens a file or URL in the user's preferred application. If a URL is provided the URL will be opened in the user's preferred web browser. If a file is provided the file will be opened in the preferred application for files of that type. xdg-open supports file, ftp, http and https URLs.
The Arch Wiki lists some alternative tools.
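xdg-open picks the handler based on the file's MIME type, and you can inspect that detection yourself (the filename below is just a throwaway example):

```shell
# file reports the MIME type that determines which application opens it
printf 'hello\n' > note.txt
file --mime-type -b note.txt      # text/plain
```

From there, xdg-mime query default <mime-type> shows which .desktop entry is associated with a given type.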
| Is there any way to detect file type and open it with GUI in terminal in Fedora? [duplicate] |
1,468,321,036,000 |
I'm looking for something like Zenity or Yad, except I want something that behaves like a menu, namely: it opens right next to the cursor; it takes one click to select things; it's possible to have multiple levels.
The closest thing I've found is actually Autokey's folders, but Autokey needs to always be running (even if I call autokey-run), which I'd prefer to avoid.
The key requirement is that I be able to single click on something that appears near my cursor.
Any ideas?
|
Sawfish manages its menus with a companion program sawfish-menu. You can use that program even if you aren't running Sawfish as your window manager. The protocol between sawfish and sawfish-menu doesn't seem to be documented anywhere; it's inspired from the menu specification format in Sawfish itself.
echo '(popup-menu (("_toplevel" 0) ("_submenu" ("_foo" 1) () ("_bar" 2))))' |
/usr/lib/sawfish/1.5.3/x86_64-pc-linux-gnu/sawfish-menu
sawfish-menu prints 0 if the user selects “toplevel”, etc. You can specify strings (in double quotes, or even without quotes if they're valid Lisp identifiers) instead of numbers for the entries. If the user aborts (e.g. by pressing Esc) then the output is ().
Here's a summary of the input syntax of sawfish-menu.
Start with (popup-menu and end with ).
For a clickable menu entry, use ("TEXT" OUTPUT) where TEXT is the text of the entry and OUTPUT is what the program prints if this menu entry is selected.
If there is an underscore in TEXT, the next character is the accelerator for that entry.
You can put a check mark in front of a menu entry by adding (check . t), e.g. ("Foo" 42 (check . t)).
You can put a bullet (radio button) in front of a menu entry by adding (group . SOMETHING) (check . t). Only one entry within a given group can have the button.
You can make an entry be greyed out and non-selectable by adding (insensitive . t).
For a submenu, use ("TEXT" ENTRY…).
For a separator, use ().
Obviously, don't expect people to have this utility installed. It's typically not packaged separately from Sawfish, but it doesn't actually need anything from Sawfish itself; it's a rep script, rep being the Lisp dialect that Sawfish (and basically nothing else) is written in. On Debian, you need the rep-gtk package to run sawfish-menu, plus the script itself.
| Is there a program that will launch a configurable context menu |
1,468,321,036,000 |
How might I more easily use a filename found by grep as an argument to vim:
$ grep -r foo *
lib/beatles/john.c
lib/pantera/phil.c
lib/pinkfloyd/roger.c
lib/pinkfloyd/richard.c
lib/whitestripes/elephant/jack.c
$ vim lib/pinkfloyd/roger.c
To autocomplete with Bash, I need to type " l \t p \t i \t r \t o \t " because many other files match. Is there an easer way to say "Give me the third found file?" Maybe something like one of these:
$ vim $3 // Third line
$ vim 'roger' // Only line with string 'roger'
$ vim TabTabTab // Each Tab completes a different filename from grep
I do use Tmux and I know that I can go up and select the filename and then paste it. But that is still a bit clumsy, even as a VIM user. There must be a better way!
Note that I am not looking for a solution to script the grep output, rather this is something that I will be using manually as I grep for and open files with VIM.
EDIT: I am looking for a general solution, not one that will always target 'roger' or always target the third item.
|
I've made a script to grep recursively for a pattern, and then I can select one of the matches so vim will open that file at that line. I call it vgrep (vim grep, although it also uses awk). The following is its code:
#!/bin/bash
my_grep() {
grep -Rn -I --color=auto --exclude-dir={.svn,.git} --exclude={tags,cscope.out} "$@" . 2>/dev/null
}
my_grep_color() {
grep -Rn -I --color=always --exclude-dir={.svn,.git} --exclude={tags,cscope.out} "$@" . 2>/dev/null
}
awk '{
print NR, $0
}' <(my_grep_color "$@")
awk '{
lines[NR]=$1;
}
END{
printf "Enter index: "
getline num < "-";
split(lines[num], s, ":")
system("vim "s[1]" +"s[2])
}' <(my_grep "$@")
I'm calling grep twice just to highlight the matches; you can change it so it is called only once, but you will lose the highlighting.
Usage:
Suppose you want open vim in the line where there is a word "Foo" in it, you can use it like this: vgrep Foo
If there is a file foobar like:
a b c
Bar
e f Foo g
h i
vgrep will output this:
1 ./foobar:3 e f Foo g
Enter index:
So you can type 1 + Enter and it will open that file at the third line. It will output one indexed line for each match it finds.
I'm pretty sure this script can be improved, but for my uses it works just fine.
Also, consider using ctags and cscope if you are working with C/C++.
| Autocomplete from grep output |
1,468,321,036,000 |
I get Python. I don't get shell script. I could learn shell script, but I would rather not if I can use Python in its place.
A good place for me to start would be the .profile script. Currently, for me it is:
# ~/.profile: executed by the command interpreter for login shells.
# This file is not read by bash(1), if ~/.bash_profile or ~/.bash_login
# exists.
# see /usr/share/doc/bash/examples/startup-files for examples.
# the files are located in the bash-doc package.
# the default umask is set in /etc/profile; for setting the umask
# for ssh logins, install and configure the libpam-umask package.
#umask 022
# if running bash
if [ -n "$BASH_VERSION" ]; then
# include .bashrc if it exists
if [ -f "$HOME/.bashrc" ]; then
. "$HOME/.bashrc"
fi
fi
# set PATH so it includes user's private bin if it exists
if [ -d "$HOME/bin" ] ; then
PATH="$HOME/bin:$PATH"
fi
# added by Anaconda2 2.4.0 installer
export PATH="/home/alien/anaconda2/bin:$PATH"
# ===== Added manually.
# texlive
export PATH="/home/alien/texlive/2015/bin/x86_64-linux:$PATH"
export INFOPATH="/home/alien/texlive/2015/texmf-dist/doc/info:$INFOPATH"
export MANPATH="/home/alien/texlive/2015/texmf-dist/doc/man:$MANPATH"
# petsc
export PETSC_DIR="/home/alien/petsc"
# PYTHONPATH
export PYTHONPATH="/home/alien/cncell:$PYTHONPATH"
export PYTHONPATH="/home/alien/csound:$PYTHONPATH"
Instead, I'd like to write something like this:
import os
import subprocess
# if running bash
HOME = os.environ["HOME"]
if os.environ["BASH_VERSION"]: #not sure how to complete this line
bashrc_path = os.path.join(HOME, ".bashrc")
if os.isfile(bashrc_path):
subprocess.call([bashrc_path])
user_bin_dir = os.path.join(HOME, "bin")
if os.isdir(user_bin_dir):
os.environ["PATH"] += ":" + user_bin_dir
user_added_vars = [("PATH", "/home/alien/anaconda2/bin"),\
("PATH", "/home/alien/texlive/2015/bin/x86_64-linux"),\
("INFOPATH", "/home/alien/texlive/2015/texmf-dist/doc/info"),\
("MANPATH", "/home/alien/texlive/2015/texmf-dist/doc/man")]
for var_name, addition in user_added_vars:
os.environ[var_name] += ":" + addition
This is just more readable/familiar to me.
Is it possible to somehow write Python scripts where bash scripts are expected? I think an answer to an earlier question of mine might be useful, perhaps we just stick #!/usr/bin/env python at the top of the script to designate it as a "Python script"? But then, why isn't there a #!/bin/bash line at the top of the current .profile?
|
Not really. The .profile and .bashrc (and .bash_logout and .bash_profile) are specific to the shell. That is, the shell programs and only the shell programs read these files. It (the shell) does not execute these as a separate process, but rather source them, in a way similar to how Python does an import, but far less elegantly. If you want something similar, you need to find a python-based shell. An answer to that related question is found here.
The closest you can get is a python script that does its work and then exports its shell-compatible KEY=VALUE pairs, prints them to standard out, and then in the .profile or whatever, you have (for instance):
set -a
eval `python $HOME/.profile.py`
set +a
You must, however, take care of several things. First, all these VALUEs must be appropriately quoted. Usually, you want single-quotes, unless the VALUE contains single-quotes. Second, certain shell variables should not be overwritten (unless you know what you're doing): SECONDS, RANDOM come to mind.
By the way: The set pair turn on and off automatic exporting, so that whatever variables you send from python to the shell, then get exported by the shell to sub-processes. This isn't necessary if your python script precedes each KEY with the term export. (However, strictly speaking, that's incompatible with the original Bourne shell.)
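For what it's worth, the mechanics of that pattern can be sketched entirely in shell. Here, emit_env is a made-up stand-in for the Python script; it prints shell-parseable KEY='VALUE' lines on stdout:

```shell
# emit_env stands in for the hypothetical Python script: it prints
# shell-compatible KEY='VALUE' pairs, one per line.
emit_env() {
    printf "EXTRA_PATH='/home/alien/anaconda2/bin'\n"
    printf "GREETING='hello world'\n"
}

set -a                      # auto-export every variable assigned below
eval "$(emit_env)"
set +a

echo "$EXTRA_PATH"          # the variables are now set in this shell...
sh -c 'echo "$GREETING"'    # ...and exported to child processes
```

Because of the set -a/set +a pair, both variables reach sub-processes without an explicit export.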
| .profile is written in shell script — can I instead make my system understand that I want it to execute a Python script instead? |
1,468,321,036,000 |
Background
I think most of us agree that filters and pipes are the very foundation of Unix systems.
Pipes and filters are very powerful tools. Almost all Unix utilities use the standard input and output streams by default to make it possible to use them within pipelines.
It is a convention for utilities within Unix to operate on standard input and standard output if no other input/output files have been specified.
grep, as, sed, tr, perl, sort, uniq, bash, cmp, cat and many others are all utilities that follow this convention.
But many programming utilities have abandoned this convention.
Reading input
The most obvious example of this is cc (the C compiler).
If you invoke cc with no arguments you get this message:
ryvnf:~$ cc
cc: fatal error: no input files
compilation terminated.
This is not the only example of this:
ryvnf:~$ yacc
/usr/bin/bison: -y: missing operand
Try '/usr/bin/bison --help' for more information.
Lower-level utilities like as read standard input by default. I wonder why that is.
Writing output
This also applies to output.
cc outputs its executable code into a.out by default. The parser generator yacc outputs its generated parser to y.tab.c.
To me using standard input/output streams by default is advantageous because then you can easily connect various utilities. Like this pipe which compiles a yacc parser to executable code in one go without generating intermediate files like y.tab.c:
yacc parser.y | cc -o parser
My question
Why is it that utilities for programming don't use the standard streams by default as many other Unix utilities do?
What is the motivation for not using standard input streams by default for these utilities?
Note that I am aware that you can get cc to read standard input by using cc -x c -. This works but my question remains why it doesn't do this by default.
|
Pipelines don't work for source code because you can't process input as it comes in. You need the entire file loaded before processing begins. It gets even worse when you need multiple files for compilation (.h files for example). If you were reading from stdin you would need to stream in all of the needed files with some method of specifying file breaks between the files you piped in. The problems just grow from there.
The idea behind the pipeline was that it would be a series of simple tasks. Compiling code is NOT a simple task and so it was never designed to be a part of a pipeline. Also pipeline theory said that all communication between processes in the pipeline should be in plain text to facilitate portability of individual components. By definition the output of cc or yacc or ld or anything else involved in compiling code is binary data which doesn't fit the model.
| Why doesn't cc (the C compiler) and similar utilities use standard streams by default? [closed] |
1,468,321,036,000 |
When I run my favorite app, why the arguments look different when viewed by ps?
$ redshift -l 12.94:43.75 2>/dev/null 1>&2 &
[1] 8637
$ ps -o cmd= -C redshift
redshift -l 12.94 43.75
Notice the missing colon.
|
Though the details are operating-system specific, most systems allow you to alter the command-line arguments as they are reported by ps (or in the /proc file system). For example, on some systems you can directly edit argv.
Many systems ship with a library function called setproctitle that allows you to do this. So a good place to look would be the man page and source to setproctitle if you want to see how this works on your system.
| Why are arguments of a command altered when viewed by ps? |
1,468,321,036,000 |
I am fairly new to Linux and have been trying to move some files around on my external hard drive with the terminal, but I can't seem to get it to work. I am using a generic external hard drive with an ext4 format, but no matter what I try I can't do anything with it through my terminal. The drive's name has spaces in it, so whenever I do something in the terminal it tries to separate the external's name and then spits out "no directory found". Is there a way to make it recognize the name without removing the spaces? Any help would be greatly appreciated.
|
Welcome to Linux! A trick that will get you started here (and will save you from getting carpal tunnel in the future) is "tab completion":
$ ls /med
then press Tab to see
$ ls /media/
If you press Tab again, you might see a list of possible options to continue the path,
$ ls /media/
MyBigExternalDrive/ My Example Hard Drive/
or (if there is only one path) the entire path will be completed:
$ ls /media/My\ Example\ Hard\ Drive/
Tricks like this are nice because you can learn seemingly unrelated syntax. In this case, you can write out paths with spaces by putting a \ in front of the space.
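To make the generalization concrete, here is a small sandbox rehearsal (the path under /tmp is invented) showing three equivalent spellings of a path with spaces:

```shell
# Build a throwaway directory whose name contains spaces.
dir="/tmp/My Example Hard Drive"
mkdir -p "$dir"
touch "$dir/photo.jpg"

ls /tmp/My\ Example\ Hard\ Drive    # backslash-escaped spaces
ls "/tmp/My Example Hard Drive"     # double quotes
ls "$dir"                           # quoted variable expansion
```

All three list the same photo.jpg; use whichever spelling fits the situation.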
| Can not access my external hard drive. |
1,468,321,036,000 |
I have:
$ find 1 2 -printf '%i %p\n'
40011805 1
40011450 1/t
40011923 1/a
40014006 1/a/e
40011217 1/a/q
40011806 2
40011458 2/y
40011924 2/a
40013989 2/a/e
40013945 2/a/w
I want:
<inode> <path>
any 2
40011450 2/t
40011458 2/y
any 2/a
40014006 2/a/e
40011217 2/a/q
40013945 2/a/w
How do I do it?
|
Already answered.
Here is a version adapted to this task:
D=$(readlink -f "2"); (cd "1" && find . -type f -print0 | cpio --pass-through --null --link --make-directories "$D") && rm -Rf 1
After this command I have exactly what I wanted:
$ find 1 2 -printf '%i %p\n'
find: `1': No such file or directory
40011806 2
40011450 2/t
40011458 2/y
40011924 2/a
40011217 2/a/q
40014006 2/a/e
40013945 2/a/w
Read notes about usage in the original answer (linked above).
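To see the mechanics without risking real data, the merge can be rehearsed in a scratch directory (file and directory names invented; this sketch assumes GNU cpio is available):

```shell
# Recreate a miniature version of the 1/ and 2/ trees under a temp dir.
base=$(mktemp -d); cd "$base"
mkdir -p 1/a 2/a
touch 1/t 1/a/q 1/a/e 2/y 2/a/w

# Hardlink every file from 1/ into 2/, creating directories as needed.
D=$(readlink -f 2)
(cd 1 && find . -type f -print0 |
    cpio --pass-through --null --link --make-directories "$D" 2>/dev/null)

rm -rf 1        # the merged content now lives (only) under 2/
find 2 -type f | sort
```

After this, 2/ holds the union of both trees without any file data having been copied.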
| How do I merge (without copying) two directories? [duplicate] |
1,468,321,036,000 |
I need to remove a default gateway. For example, there is an IP 192.168.4.15 with default gateway 192.168.4.14. I connect to WLAN with gw 10.0.0.1 and after that I would like to remove the previous gw.
IFS='.' read -ra IPARR <<< "$IP"
Gateway="${IPARR[0]}.${IPARR[1]}.${IPARR[2]}.14"
ssh blah@$IP '/sbin/route -v del default gw $Gateway;'
#ssh blah@$IP '/sbin/ip route delete $Gateway dev rndis0;'
#ssh blah@$IP '/sbin/route -n'
Neither way works. However, it is possible to remove the gateway if I ssh into the machine manually. My guess is that something is wrong with passing the $Gateway variable. Any suggestions?
|
So, the answer is to use double quotes when ssh'ing into the machine, so that $Gateway is expanded by the local shell before the command is sent:
ssh blah@$IP "/sbin/route -v del default gw $Gateway;"
| Concatenate and pass as parameter, bash |
1,468,321,036,000 |
Is there any way I can save a list in an output.txt, without using a text editor, so that:
before each output there should be the command used that made the list (ls -l, for example; or, if I use two ls commands with different arguments, two separate groups of output, each preceded by the command used to make it)
between the outputs there should be an empty line.
|
Well, you can always create a function like
f () { echo "$@" >> output.txt; "$@" >> output.txt; echo >> output.txt;}
and then write
f ls /
f ls /tmp
f do_something_that_produces_list
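A quick worked example of the function, run in a scratch directory so output.txt starts empty (echo stands in for a real list-producing command):

```shell
cd "$(mktemp -d)"

# Log the command line, then its output, then a blank separator line.
f () { echo "$@" >> output.txt; "$@" >> output.txt; echo >> output.txt; }

f echo first list
f echo second list
cat output.txt
```

Each invocation appends three things: the command itself, whatever it printed, and an empty line between groups.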
| Saving a list in a specific format |
1,468,321,036,000 |
I'm looking for a way to view the timestamp of a job in CUPS. I've searched the man pages and can't seem to find it.
The long term goal is to have a script that parses the time from the jobID and will automatically delete any job that is over a certain age - to avoid overloading the server. My CUPS server has over 2000 print queues.
|
I found the following 2 questions within the U&L site that would seem to give hints as a possible way to do this. These 2 questions:
View all user's printing jobs from the command line
How to show the CUPS printer jobs history?
Would seem to imply that you could use lpstat to get what you want. I noticed that I could run this command:
$ sudo lpstat -W completed
mfc-8480dn-1652 root 1024 Tue 28 Jan 2014 01:19:34 AM EST
And this one:
$ sudo lpstat -W completed -u saml | head -2
mfc-8480dn-1524 saml 23552 Thu 28 Nov 2013 10:45:44 AM EST
mfc-8480dn-1526 saml 699392 Sat 30 Nov 2013 10:34:34 AM EST
But the -u all did nothing for me.
$ sudo lpstat -W completed -u all | head -2
$
Curiously I could do this:
$ sudo lpstat -W completed -u saml,root | head -3
mfc-8480dn-1524 saml 23552 Thu 28 Nov 2013 10:45:44 AM EST
mfc-8480dn-1526 saml 699392 Sat 30 Nov 2013 10:34:34 AM EST
mfc-8480dn-1652 root 1024 Tue 28 Jan 2014 01:19:34 AM EST
So one hackish way to do this would be to formalize a list of the users on your system and then add that as a subcommand to the -u argument like so:
$ sudo lpstat -W completed -u $(getent passwd | \
awk -F: '{print $1}' | paste -sd ',')
Just to show that this sees all the users locally you can get a unique list of your users like so:
$ sudo lpstat -W completed -u $(getent passwd | \
awk -F: '{print $1}' | paste -sd ',') | awk '{print $2}' | sort -u
ethan
root
sam
tammy
Issues?
One problem with this is if the user printing to CUPS does not have an account locally then they won't get displayed.
But if you have a directory that contains your LPD control files, typically /var/spool/cups, you'll notice a bunch of control files in there. These files are kept as a result of the `MaxJobs` setting, which defaults to 500 when unset.
$ sudo ls -l /var/spool/cups/ | wc -l
502
Another source of usernames?
If you look through these files you'll notice that they contain usernames, and not just ones for accounts that are present on the system.
$ strings /var/spool/cups/* | grep -A 1 job-originating-user-name | head -5
job-originating-user-name
tammyB
--
job-originating-user-name
tammyB
So we could select all the entries that contain the username followed by the B.
$ sudo strings /var/spool/cups/* | grep -A 1 job-originating-user-name | \
grep -oP '.*(?=B)' | sort -u
ethan
guest-AO22e7
root
sam
saml
slm
tammy
This list can then be adapted in the same way as we were originally using to take the list of users from getent passwd, like so:
$ sudo lpstat -W completed -u $(strings /var/spool/cups/* | \
grep -A 1 job-originating-user-name | \
grep -oP '.*(?=B)' |sort -u | paste -sd ',')
mfc-8480dn-1525 tammy 545792 Thu 28 Nov 2013 01:36:59 PM EST
mfc-8480dn-1526 saml 699392 Sat 30 Nov 2013 10:34:34 AM EST
mfc-8480dn-1652 root 1024 Tue 28 Jan 2014 01:19:34 AM EST
mfc-8480dn-1672 saml 1024 Sun 09 Feb 2014 01:56:26 PM EST
References
why is /var/spool/cups so huge?
| View timestamp for CUPS print jobs |
1,468,321,036,000 |
So today I found that one of my PHP files was outdated, so I've got to overwrite the phpthumb directory across the entire server.
Multiple websites use this folder in multiple unknown locations, so how can I overwrite all these directories from one source path (i.e., copy /home/test/testuser/phpthumb/ to every /home/*/*/phpthumb/)?
|
This should work:
echo /home/*/*/phpthumb | xargs -t -n 1 cp -r /home/test/testuser/phpthumb/*
You have to work with xargs. Unfortunately, cp cannot copy to multiple targets; it can only handle multiple sources.
Explanation:
echo /home/*/*/phpthumb: lists all phpthumb directories
xargs -t -n 1: xargs should call cp for every line separately
cp -r /home/test/testuser/phpthumb/* the command xargs should call. Note the target directory will be appended at the end by xargs.
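A sandbox rehearsal of the same idea (all directory names invented) avoids touching the real /home tree:

```shell
# Fake layout: one source phpthumb, two per-site copies to overwrite.
base=$(mktemp -d)
mkdir -p "$base/src/phpthumb" \
         "$base/siteA/user1/phpthumb" \
         "$base/siteB/user2/phpthumb"
echo 'v2' > "$base/src/phpthumb/thumb.php"

# xargs runs one cp per target directory, appending the target at the end.
echo "$base"/*/*/phpthumb |
    xargs -t -n 1 cp -r "$base/src/phpthumb/thumb.php" 2>/dev/null

cat "$base/siteA/user1/phpthumb/thumb.php"
cat "$base/siteB/user2/phpthumb/thumb.php"
```

Both site copies now contain the new version. Note that echo-splitting breaks on paths containing spaces; under a conventional /home layout that is usually not an issue.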
| How-to overwrite directory on multiple places with 1 source directory |
1,468,321,036,000 |
Typing xdotool getwindowfocus windowkill currently terminates the active window and bypasses any safeguards like "would you like to save your work?". Is there a weaker command than windowkill I can use here that won't bypass such safeguards?
|
The soft way to request an X11 application to close its window and possibly then exit is to send it a WM_DELETE_WINDOW message.
Xdotool doesn't appear to have a way to do this. You can do it in Perl with X11::Protocol::WM. Untested:
perl -MX11::Protocol -MX11::Protocol::WM -e '$X = X11::Protocol->new(); X11::Protocol::WM::set_wm_protocol($X, ($X->GetInputFocus())[0], "WM_DELETE_WINDOW")'
Alternatively, wmctrl can do that:
wmctrl -c :ACTIVE:
| A quit command weaker than windowkill? |
1,468,321,036,000 |
I'm running a command on the command line which generates certain data indefinitely, and I'm looking for a particular bit of that data, so I used grep to find it. As soon as I get the data, I want the command to kill itself. How do I achieve this?
NOTE1: AND'ing kill $$ with grep is not terminating the command.
NOTE2: I saw this question, which does not seem to work in my case.
|
Try this
loud_program | grep --max-count=NUM
then, according to my limited knowledge, loud_program receives SIGPIPE because it is writing to a disconnected end, which in turn could terminate loud_program. Try it with your program, not sure if this works for all programs.
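A concrete way to see this, with yes standing in for the noisy program:

```shell
# 'yes' writes the same line forever; grep exits after the first match,
# the pipe's read end closes, and the next write by 'yes' dies of
# SIGPIPE -- so the pipeline terminates on its own.
yes 'needle in the stream' | grep --max-count=1 needle
```

The pipeline prints the matching line once and returns to the prompt instead of running forever.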
| How to make an infinitely executing command kill itself when certain conditions are met? |
1,468,321,036,000 |
I've downloaded a list of videos via youtube-dl, and each file has a corresponding .json file containing certain properties. I would like to sort the files by a selected JSON property inside the corresponding .json file (e.g. by view count, property: view_count).
What tools do I need, and how can this be achieved?
|
You would need to use some command-line JSON parser, extract the specific value for each file by printing it and sort it by the printed value.
Here is an example script you can use:
ls -1 *.json | tr \\n \\0 | xargs -0 -L1 -I% sh -c "cat '%' | jshon -e view_count | awk '{print \$1\" %\"}'" | sort -k 1 -nr
Where view_count is your json property name.
The script will list the .json files and for each file will print the JSON view_count property value and numerically sort by the 1st column.
In this example, you need the jshon tool, which can be easily installed from the package manager, or installed from the GitHub source.
Then you can freely modify the above script on your needs. Some examples:
To print top 20, add: | head -n20
To print corresponding videos instead of json files, add: | sed s/info.json$/mkv/
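If jshon isn't available, a far more fragile fallback is to scrape the value with sed; this sketch assumes the flat numeric "view_count" field that youtube-dl writes (file contents invented for the demo):

```shell
# Fabricate three .info.json files with different view counts.
dir=$(mktemp -d); cd "$dir"
printf '{"title": "a", "view_count": 300}\n'   > a.info.json
printf '{"title": "b", "view_count": 12000}\n' > b.info.json
printf '{"title": "c", "view_count": 7}\n'     > c.info.json

# Print "<views> <file>" per file, then sort numerically, highest first.
for f in *.json; do
    views=$(sed -n 's/.*"view_count": *\([0-9]*\).*/\1/p' "$f")
    printf '%s %s\n' "$views" "$f"
done | sort -nr
```

A regex scrape like this breaks on nested or reformatted JSON; prefer a real parser (jshon, or jq) when you can.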
Links:
Unix command-line JSON parser?
How to parse JSON with shell scripting in Linux?
Sort a file based on 1 column
| How to sort files based on the json property value inside the file? |
1,468,321,036,000 |
I want to convert a particular number of seconds to a date. In my case it's the number of seconds elapsed since the 1st of January 0001.
If it were for seconds elapsed from epoch it would be easy: $ date -r nr_of_seconds. It would be awesome if there was a way of telling date to start at a particular date. Is there such an option (the -v option almost does what I need, ... I think)?
I'm on a Mac.
|
date -r almost does the job. All you need to do is shift the origin, which is an addition.
date -r $((number_of_seconds - epoch))
where epoch is the number of seconds between 1 January 1 and 1 January 1970. The value of epoch depends on your calendar.
In the Gregorian calendar, there are 477 leap years between 1 and 1970, so 365 * 1969 + 477 = 719162 days = 62135596800 seconds. Note that this number is greater than 2^32, so you'll need a shell capable of 64-bit arithmetic to handle it. Your number_of_seconds will be more than 2^32 anyway if it represents dates beyond the second century AD. I think bash supports 64-bit arithmetic even on older, 32-bit OSX but I'm not sure.
date -r $((number_of_seconds - 62135596800))
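The offset can be re-derived from those counts in the shell itself:

```shell
# 1969 full years of 365 days, plus 477 leap days, times 86400 s/day.
days=$(( 365 * 1969 + 477 ))
echo "$days"                 # days between 0001-01-01 and 1970-01-01
echo $(( days * 86400 ))     # the epoch offset in seconds
```

This prints 719162 and 62135596800, matching the value used above (and quietly confirming your shell does 64-bit arithmetic).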
| Convert a number of seconds elapsed to date from arbitrary start date |
1,468,321,036,000 |
I have 11 files with spaces in their names in a folder and I want to copy the newest 10. I used ls -t | head -n 10 to get only the newest 10 files. When I want to use that expression in a cp statement, I get an error that the files could not be found because of the spaces in the names.
E.g.: cp: cannot stat ‘10’: No such file or directory for a file named 10 abc.pdf
The cp statement: cp $(ls -t | head -n 10) ~/...
How do I get this working?
|
If you're using Bash and you want a 100% safe method (which, I guess, you want, now that you've learned the hard way that you must handle filenames seriously), here you go:
shopt -s nullglob
while IFS= read -r file; do
file=${file#* }
eval "file=$file"
cp "$file" Destination
done < <(
for f in *; do
printf '%s %q\n' "$(date -r "$f" +'%s')" "$f"
done | sort -rn | head -n10
)
Let's have a look at its several parts:
for f in *; do
printf '%s %q\n' "$(date -r "$f" +'%s')" "$f"
done
This will print to stdout terms of the form
Timestamp Filename
where timestamp is the modification date (obtained with -r's option to date) in seconds since Epoch and Filename is a quoted version of the filename that can be reused as shell input (see help printf and the %q format specifier). These lines are then sorted (numerically, in reverse order) with sort, and only the first 10 ones are kept.
This is then fed to the while loop. The timestamps are removed with the file=${file#* } assignment (this gets rid of everything up to and including the first space), then the apparently dangerous line eval "file=$file" gets rid of the escape characters introduced by printf %q, and finally we can safely copy the file.
Probably not the best approach or implementation, but 100% guaranteed safe regarding any possible filenames, and gets the job done. Though, this will treat regular files, directories, etc. all the same. If you want to restrict to regular files, add [[ -f $f ]] || continue just after the for f in *; do line. Also, it will not consider hidden files. If you want hidden files (but not . nor .., of course), add shopt -s dotglob.
Another 100% Bash solution is to use Bash directly to sort the files. Here's an approach using a quicksort:
quicksort() {
# quicksorts the filenames passed as positional parameters
# wrt modification time, newest first
# return array is quicksort_ret
if (($#==0)); then
quicksort_ret=()
return
fi
local pivot=$1 oldest=() newest=() f
shift
for f; do
if [[ $f -nt $pivot ]]; then
newest+=( "$f" )
else
oldest+=( "$f" )
fi
done
quicksort "${oldest[@]}"
oldest=( "${quicksort_ret[@]}" )
quicksort "${newest[@]}"
quicksort_ret+=( "$pivot" "${oldest[@]}" )
}
Then, sort them out, keep the first 10 ones, and copy them to your destination:
$ shopt -s nullglob
$ quicksort *
$ cp -- "${quicksort_ret[@]:0:10}" Destination
Same as the previous method, this will treat regular files, directories, etc. all the same and skip hidden files.
For another approach: if your ls has the i and q arguments, you can use something along these lines:
ls -tiq | head -n10 | cut -d ' ' -f1 | xargs -I X find -inum X -exec cp {} Destination \; -quit
This will show the file's inode, and find can perform commands on files refered to by their inode.
Same thing, this will also treat directories, etc., not just regular files. I don't really like this one, as it relies too much on ls's output format…
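Yet another sketch, assuming GNU find, sort, head, sed and cp: NUL separators make the whole pipeline safe for arbitrary filenames, demonstrated here in a scratch directory with invented names:

```shell
src=$(mktemp -d); dest=$(mktemp -d)
cd "$src"
for i in $(seq 1 12); do
    touch -d "@$((1500000000 + i))" "file $i.pdf"   # distinct mtimes
done

# Newest 10 regular files: sort by mtime, strip the timestamp prefix,
# copy NUL-separated names into the destination.
find . -maxdepth 1 -type f -printf '%T@ %p\0' |
    sort -zrn |
    head -z -n 10 |
    sed -z 's/^[^ ]* //' |
    xargs -0 cp -t "$dest" --

ls "$dest" | wc -l
```

The two oldest files are left behind; everything else, spaces and all, is copied.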
| Use output from head to copy files with spaces |
1,468,321,036,000 |
When I run, say, cp, I get output like the following:
# cp -v Foo Bar
âFooâ -> âBarâ
What's up with the weird â characters? Why is the shell doing this? It looks like some kind of strange encoding issue.
When I use PuTTY, I get â. When I log into the actual machine locally, I get ? in inverse-video. If I redirect stdout to a file, copy that to my Windows machine, and open it, I get some random combination of characters until I tell my text editor to pretend the file is UTF-8. And then I get proper open- and close-quotes.
|
It's an encoding issue.
Set your Putty character set translation to "UTF-8":
Window -> Translation -> Remote character set
| Incorrect output from cp, rm, and so on |
1,468,321,036,000 |
Sometimes I wonder how Linux programs achieve certain results, knowing that they internally run other commands (via system() or exec() in C programs). Given a working binary, I wonder if it is possible to easily find out which commands were executed.
In a concrete example, I use the genealogy tool gramps to build a family tree. I assume that it generates certain graphs using graphviz (using command-line tools rather than libraries). Now I want to reproduce the same manually on the command line. This task would be simplified a lot if I knew the commands used by gramps.
I could download the sources and look there to find out, but I wonder if I can obtain this information more easily using debugging tools such as strace, something like strace gramps 2>/dev/stdout | grep system. Note that I would not like to search the binary using a hex editor unless this can be automated. Looking at the sources directly should be easier in this case.
|
strace -o dumpfile.strace -f -e trace=process $your $app $with $parameters
You find the result in the file dumpfile.strace.
| Retrieve system commands without reading sources |
1,468,321,036,000 |
I am running Ubuntu 13.10 on a ThinkPad X1 Carbon. My WiFi card is a Centrino Advanced-N 6205 [Taylor Peak].
I normally connect to open WiFi networks using the commands
sudo iw dev wlan0 connect <ESSID> <Frequency> <BSSID>
dhclient wlan0
This method works for me everywhere except in one room on my campus, where the signal is very strong (iwlist reports a signal strength of under -30dB). I am fairly certain there is nothing wrong with the Access Point, because my phone connects to it just fine.
However, when I do the first command above, I am never able to associate with the Access Point, and dmesg always shows a message containing something about ipv6 and Wifi not ready.
I have tried various version of the iwlwifi driver but none of them are able to work for this Access Point.
I am wondering if anyone has suggestions for debugging, such as "turning down the radio power", "turning up the radio power", other obscure settings that can be set on WiFi cards that might help?
I am also interested in knowing what sorts of root causes this kind of problem might have, since my wireless card seems to be able to other networks and AP's just fine.
|
You can use wpa_supplicant to connect to an open AP. Add the following section to /etc/wpa_supplicant.conf:
ap_scan=1
# no encryption
network={
ssid="TEST"
key_mgmt=NONE
}
You'll find more details in the sample /etc/wpa_supplicant.conf
| Tips for debugging WiFi on the command line? |
1,468,321,036,000 |
I am using a MacBook Pro.
I enter: new-host-2:~ Justin$ hostname
And it returns: new-host-2.home
Why is this, when Settings/Sharing says my computer's name is "Justin's MacBook Pro" and computers on my local network can access my computer at "Justins-MacBook-Pro.local"?
The tutorial I am reading says that the command should return one of the "many" names of your computer. I am assuming this is one of them, but if so, where else on my computer can I find this name, or a list of names for my computer? And why did it not return "Justins-MacBook-Pro.local", the format the tutorial's computer returned?
|
Mac OS X maintains at least three different names for different usages (ComputerName, HostName and LocalHostName).
You can set the command line hostname to a different value with this command:
scutil --set HostName "justins"
| Why, when I enter the command "hostname", does it return something other than my computer's name? |
1,468,321,036,000 |
I've experienced an issue where some of my scripts run perfectly fine when I call them manually, but these same scripts when called as cron jobs via cron don't seem to work at all.
So my question: are there restrictions that apply to the use of commands and/or scripts (and execution privileges) in a script scheduled to run with cron?
|
The most common reason why commands that work fine from the command line fail under cron is that they run in a stripped-down environment with only a handful of variables defined.
In particular PATH is set to its default value.
Any customization done in dot files (.profile, /etc/profile and the like) is not applied to cron scripts, but of course this can be fixed by modifying the cron entry or the called script itself.
The fact that the script is non-interactive and lacks a graphical environment (no DISPLAY variable) might also prevent scripts from running as expected.
| Does cron impose some limitations to types of commands and privilege of execution? [duplicate] |
1,468,321,036,000 |
I'm trying to remove a file I made by mistake using --exclude with tar. I ended up making a tar file named --exclude=*.tar Now I want to delete it but I'd like to rename it first. How do I escape it correctly?
|
There are two straightforward ways to do this using just rm:
rm -- --exclude=*.tar
or
rm ./--exclude=*.tar
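Both spellings can be rehearsed safely in a scratch directory (file names invented):

```shell
dir=$(mktemp -d); cd "$dir"
touch -- '--exclude=foo.tar' '--exclude=bar.tar'
ls

rm ./--exclude=foo.tar        # './' keeps the dashes out of option position
rm -- '--exclude=bar.tar'     # '--' ends option parsing for rm
ls | wc -l
```

Either form works; the same two tricks apply to mv if you want to rename the file first.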
| Proper escape sequence for a non-standard file name [duplicate] |
1,468,321,036,000 |
I want to send fragments of an article through various commands, possibly with modifiers, using escape sequences such as $$$ as delimiters. Each fragment on stdin would be replaced by the corresponding stdout. (Deleting the very special modifier should be simple enough with sed, if necessary.)
I believe I can do it with python... but I was wondering if there is that special UNIX tool or technique that I don't know about that might do 90% of the trick elegantly.
To be more concrete, consider the following document:
$$$ figlet -f script
Title
$$$
Now it's time to visualize the cosine function:
$$$ gnuplot
set terminal dumb
plot cos(x)
$$$
Finally, a simple directed graph:
$$$ graph-easy
digraph G {
A -> B;
}
$$$
After running my program I would get:
______ _
(_) |o | |
| _|_ | | _
_ || | |/ |/
(_/ |_/|_/|__/|__/
Now it's time to visualize the cosine function:
1 ++----------***--+---------------****---------------+--***----------++
+ * * + * +* + *cos(x) ****** +
0.8 ++ * * * * * * ++
| * * * * * * |
0.6 ++ * * * * * * ++
| * * * * * * |
0.4 ++ * * * * * * ++
| * * * * * * |
0.2 ++ * * * * * * ++
| * * * * * * |
0 ++ * * * * * * ++
-0.2 ++ * * * * * * ++
| * * * * * * |
-0.4 ++ * * * * * * ++
| * * * * * * |
-0.6 ++ * * * * * * ++
| * * * * * * |
-0.8 *+ * * * * * * +*
+* * + * * + * * + * *+
-1 +***-------------+-----**----------+---------**-----+-------------***+
-10 -5 0 5 10
Finally, a simple directed graph:
+---+
| A |
+---+
|
|
v
+---+
| B |
+---+
Any thoughts will be appreciated. Thanks a lot, and cheers!
|
With awk & bash (for the here-string <<< feature):
awk '
/^\$\$\$/{
sub(/\$\$\$/, "")
cmd=$0
next
}
{ arr[cmd]=arr[cmd] $0";"}
END{
for (a in arr) {
if (a ~ ".") {
exe=sprintf("%s", a " <<< \042" arr[a] "\042" )
system(exe)
}
else{
print arr[a]
}
}
}' file
Output:
1 ++----------***--+---------------****---------------+--***----------++
+ * * + * +* + *cos(x) ****** +
0.8 ++ * * * * * * ++
| * * * * * * |
0.6 ++ * * * * * * ++
| * * * * * * |
0.4 ++ * * * * * * ++
| * * * * * * |
0.2 ++ * * * * * * ++
0 ++ * * * * * * ++
| * * * * * * |
-0.2 ++ * * * * * * ++
| * * * * * * |
-0.4 ++ * * * * * * ++
| * * * * * * |
-0.6 ++ * * * * * * ++
| * * * * * * |
-0.8 *+ * * * * * * +*
+* * + * * + * * + * *+
-1 +***-------------+-----**----------+---------**-----+-------------***+
-10 -5 0 5 10
;;;Now it's time to visualize the cosine function:;;;Finally, a simple directed graph:;;
+---+
| A |
+---+
|
|
v
+---+
| B |
+---+
______ _
(_) |o | | o
| _|_ | | _
_ || | |/ |/
(_/ |_/|_/|__/|__/o
/
It's not a big deal to modify it a bit to be 100% compliant with your demand.
| Piping fragments of a document through various commands |
1,468,321,036,000 |
I have about 40 files in my folder. I selected all the files and pressed Command+I. Instead of opening one Get Info window, my Mac opens 40 windows! Is there a terminal command to hide the file extensions from being shown when I open this folder in Finder the next time?
|
I found the answer on the other Stack Exchange site, Super User. It looks like this is not possible from the Terminal unless Xcode is installed, or alternatively via AppleScript.
Show/hide extension of a file through OS X command line
| Hide a file extension using Terminal |
1,468,321,036,000 |
I have 2 files: d and t. I would like to be able to combine these files so that the first line of file t is followed by a tab and then the first line of d. For shorter lines, paste t d seems to work fine.
$ cat d t
Highly reactive metals in group 1A of the periodic table.
Fairly reactive metals in group 2A of the periodic table.
alkali metals
alkaline earth metals
$ paste d t
Highly reactive metals in group 1A of the periodic table. alkali metals
Fairly reactive metals in group 2A of the periodic table. alkaline earth metals
$ paste t d
alkali mHighly reactive metals in group 1A of the periodic table.
alkalineFairly reactive metals in group 2A of the periodic table.
Trying to paste full sentences seems to act strangely. As seen above, the terms get trimmed down to the first 8 characters.
$ paste t d > temp
$ gedit temp &
$ vim temp
Opening gedit shows line breaks following each term. Vim shows this:
alkali metals^M Highly reactive metals in group 1A of the periodic table.
alkaline earth metals^M Fairly reactive metals in group 2A of the periodic table.
Well, that seems to be easy enough to fix. :%s/^V^M//g removes all of the carriage returns, and everything shows up correctly. But how did those carriage returns end up there in the first place?
While my question does involve carriage returns in a text file from Windows making things act strangely in a Unix-like environment, it is not a duplicate of this question. The problems are similar, but the manifestations are very different. It took me around an hour to figure out that carriage returns were even the culprit because I could not find an instance of a similar enough problem through web searches. That is why I posted this after having solved it myself.
|
The file which was the source for t was created using notepad on Windows 8 and copied by Ubuntu 13.04 into my home directory. The source for d was created on Ubuntu in gedit. Thus, the carriage returns were in the file all along. It seems that moving files back and forth between different operating systems results in problems like this fairly often.
newline differences
converting line endings
| Why does the paste command add line breaks? [duplicate] |
1,468,321,036,000 |
For example, the following works fine:
/usr/bin/program
It produces some output, and gets to result.
But if I invoke it like this:
echo -n | /usr/bin/program
or this
echo -n | bash -c "/usr/bin/program"
or even this:
echo -n | bash -c "wc -c; /usr/bin/program"
It produces some lines of output, then fails. I have no access to the source of the program, so I can't even look at what could cause this behavior.
And when I try to invoke it from the python script, I get the same stuff:
echo | python -c 'from subprocess import call; call("/usr/bin/program", shell=True)'
(version without "echo" prepended works fine)
I don't even have the faintest idea why that could be happening. Stdin is going to be open even if I don't explicitly specify where the program should read from, so that shouldn't be the cause.
Is there any way to work around this issue?
EDIT:
The last four lines from strace output - the only ones that differ:
# without echo
select(1, [0], NULL, NULL, {0, 0}) = 0 (Timeout)
select(1, [0], NULL, NULL, {0, 0}) = 0 (Timeout)
select(1, [0], NULL, NULL, {0, 0}) = 0 (Timeout)
select(1, [0], NULL, NULL, {0, 0}) = 0 (Timeout)
...
# with echo
select(1, [0], NULL, NULL, {0, 0}) = 1 (in [0], left {0, 0})
write(4, "\0\0\0j\0\0\0\3\0\0\0\4\0\0\0\0\377\377\377\377", 20) = 20
write(3, "\0\0\0j\0\0\0\3\0\0\0\4\0\0\0\0\377\377\377\377", 20) = 20
exit_group(1) = ?
PARTIAL SOLUTION:
sleep 20 | /usr/bin/program
It seems that the program waits for something to happen on stdin, and exits if it encounters a newline or EOF (we can see this from the select call in the strace output: it times out when input comes from a "real" user). So we needed a program that writes nothing to the program's stdin while still keeping it open; sleep does the job.
|
Here's a Perl one-liner that will do what you want:
perl -e '$SIG{CHLD} = sub{exit 0}; open $fh, "|-", @ARGV or die; sleep 20 while 1;' /usr/bin/program
It's essentially the same as a mythical* sleep forever | /usr/bin/program, except it also watches for the program to finish, and will quit immediately when it does. If /usr/bin/program needs any args, you can tack them onto the end of the line.
*sleep forever doesn't work, but GNU sleep will sleep forever if you sleep inf!
| How can I work around the program failing if there is *any* stdin? |
1,468,321,036,000 |
I was just introduced to xxd today. See link. I am very familiar with the QNX hd and od commands, both of which will take input and create a hex dump (or octal dump if you like). See hd and od.
What I am looking for is the xxd -r capability to go backwards from a hexdump to a binary file, but apparently QNX doesn't have that, or I'm not reading the description appropriately. Can someone point me in the right direction?
This is regarding QNX Neutrino 6.4.1 or newer.
|
I would try to compile xxd for QNX instead of trying to find an alternative: ftp://ftp.uni-erlangen.de/pub/utilities/etc/xxd-1.10.tar.gz
The source is small (less than 1000 lines) and it has defines for windows and amiga so I expect it's fairly portable.
| Alternative to xxd for QNX |
1,468,321,036,000 |
If I install a daemon service in Arch Linux, what will be the paths of the files that get installed? Also, which files will be installed where?
|
Have you read pacman(8)?
To list all the files being installed by a particular package, run:
$ pacman -Ql <package_name>
Daemons are usually systemd services in Arch Linux, hence you could run:
$ pacman -Ql <package_name> | grep service
to see a list of service files installed by that package.
| How to get the Installation path of binary and logic file in Daemon in Arch Linux |
1,468,321,036,000 |
I am able to use ImageMagick to create a thumbnail of the first page of a PDF using:
convert -thumbnail x80 95.pdf[0] thumb_95.png
This works fine and generates a thumb_95.png file.
I have tried several permutations of "find" using xargs but I can't get a combo working that will create the thumbnails in the folders along with the source PDFs.
The PDFs are in folders named with UUIDs, e.g.:
/511017a7-67fc-4897-80c1-0d42ac100b68/415.pdf
/511015bc-e0a8-4ab7-ba29-0ce9ac100b68/122.pdf
My required result would be:
/511017a7-67fc-4897-80c1-0d42ac100b68/415.pdf
/511017a7-67fc-4897-80c1-0d42ac100b68/thumb_415.png
/511015bc-e0a8-4ab7-ba29-0ce9ac100b68/122.pdf
/511015bc-e0a8-4ab7-ba29-0ce9ac100b68/thumb_122.png
Any help on the best ways to get this conversion to happen for all *.pdf recursively would be much appreciated!
|
Try this:
find /source/directory -name "*.pdf" -exec \
sh -c 'convert -thumbnail x80 {} $(dirname {})/thumb_$(basename {})' \;
I had to modify it slightly to:
find /source/directory -name "*.pdf" -exec \
sh -c 'convert -thumbnail x80 {} $(dirname {})/thumb_$(basename {} .pdf).png' \;
to have basename strip the file extension and then append .png.
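As a sanity check, the same name transformation can be tried on a sample path with pure shell parameter expansion (the equivalent of the dirname/basename calls above; the UUID path below is the hypothetical one from the question):

```shell
f='/511017a7-67fc-4897-80c1-0d42ac100b68/415.pdf'
dir=${f%/*}                          # like dirname:  /511017a7-67fc-4897-80c1-0d42ac100b68
base=${f##*/}                        # like basename: 415.pdf
thumb="$dir/thumb_${base%.pdf}.png"  # strip .pdf, prefix thumb_, append .png
echo "$thumb"                        # prints: /511017a7-67fc-4897-80c1-0d42ac100b68/thumb_415.png
```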
Thanks to all answers. This one worked well for me!
| Howto recursively create PDF thumnbails on Linux command line |
1,356,894,572,000 |
I have one system that I would like to clean up a little, so I would like to get all user accounts and the last date they accessed their mail. It is a Debian system.
So far I got to this:
cut -d: -f1 /etc/passwd | xargs -n1 finger | grep "Mail last read"
But I don't know how to print each username in front of
Mail last read Sun Aug 12 03:06 2012 (CEST)
|
You can try something like:
for USER in $(cut -d: -f1 /etc/passwd); do
    MAILINFO=$(finger "$USER" | grep "Mail last read")
    echo "$USER - $MAILINFO"
done
I think you should get the gist ... you need to manipulate the return from the grep "Mail last read" a bit.
| List all users and last time they read mail, pipeing to multiple output |
1,356,894,572,000 |
Currently I'm repeatedly doing a 'find' that's too slow. I'm searching for non-hidden executable files within "$root", excluding "$root/bin":
find "$root" -type f -perm -o+x -not -path "$root/bin/*" \( ! -regex '.*/\..*' \)
I'd like to restrict find to only look in directories with mtimes older than a certain time. I still want it to recurse into old directories' subdirectories, but I don't want it to check the regular files inside unless the directory passes my mtime check. Is it possible to do this with GNU find or do I need two invocations, one to find the directories and another to check the files inside?
|
Try this one:
find "$root" -type d -mtime -1 ! -path "$root/bin*" -exec find "{}" -maxdepth 1 -type f -executable \;
It is not just one find run, but -maxdepth 1 should keep each inner run fast.
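Here is a reproducible toy run of that two-level find (GNU find and touch are assumed for -mtime, -executable and -d; the directory names are made up). Only the file in the recently changed directory is printed, yet the outer find still descends into the unchanged directory's subdirectory:

```shell
tmp=$(mktemp -d) && cd "$tmp"
mkdir -p root/old/sub root/new root/bin
touch root/old/f root/new/f root/bin/f
chmod +x root/old/f root/new/f root/bin/f
touch -d '2 days ago' root/old     # make 'old' itself fail -mtime -1
find root -type d -mtime -1 ! -path "root/bin*" \
    -exec find {} -maxdepth 1 -type f -executable \;
# prints only: root/new/f
```

root/bin is pruned by -path, root/old by its mtime, while root/old/sub (created just now) is still visited.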
| How to have find only search for files in changed directories? |
1,356,894,572,000 |
From what I know, parameters you pass to a command go to its STDIN stream.
so this:
cut -d. -f2 'some.text'
should be perfectly identical to this:
echo 'some.text' | cut -d. -f2
as we send some.text to STDIN: in the first case through a parameter, in the second case via a pipe.
Where does the parameter some.text from the first example go, if not to STDIN?
|
STDIN and the program command ( including arguments ) are completely different things. STDIN is a file that the program can read from. It may be connected to a terminal, a disk file, a device, a socket, etc. The program command is simply a set of strings, the first of which is the name of the program. They are passed as arguments to the program's main() function.
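The difference is easy to see with cut itself: a non-option argument names a file for the program to open, while piped text arrives as data on STDIN (the file name below is made up for the demonstration):

```shell
printf 'a.b.c\n' > some.text      # create a sample file
cut -d. -f2 some.text             # argument = file name; cut reads the file, prints: b
echo 'some.text' | cut -d. -f2    # stdin = data; cut splits the string itself, prints: text
```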
| Where are command line arguments (e.g. 'some.text') actually passed to? |
1,356,894,572,000 |
I have used oh_my_zsh (and tinkered with bash_it) on multiple systems and have generally been happy with it, though I hate its auto-correction feature and generally turn it off.
My usual shell is zsh and I really want just three things from my prompt:
Current directory/or pwd.
Git status and branch.
Color output from ls (on the ls command, not in the prompt).
The rest is just bling and is often irritating.
By using these shell scripts I am paying too much in cpu cycles for what I want.
Any suggestions, either with using these scripts or as a separate shell script. I am OK with either zsh or bash.
Thanks.
|
To have colored output from ls, define an alias:
alias ls='ls --color=always'
As for having your current directory in your prompt:
PROMPT='%~'
To add git status to your prompt, take a look at this.
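Putting the three wishes together, a minimal ~/.zshrc sketch might look like this. It leans on zsh's bundled vcs_info module for the branch name; treat it as a starting point rather than a drop-in:

```shell
# minimal ~/.zshrc: cwd + git branch in the prompt, colored ls, nothing else
autoload -Uz vcs_info
precmd() { vcs_info; }
zstyle ':vcs_info:git:*' formats ' (%b)'
setopt PROMPT_SUBST
PROMPT='%~${vcs_info_msg_0_} %# '
alias ls='ls --color=always'
```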
| Shell Prompt Customization? [closed] |
1,356,894,572,000 |
Is the IDE interface also a bus? Is there a command like lspci or lsusb to find out what devices are on the IDE bus? Sources seem to contradict each other. I'm asking this question here because I need to know how to explore the bus in Linux, but I would also like to make sure I understand what it is to begin with. Is there a command like lssata as well?
|
According to wikipedia, IDE is a bus: http://en.wikipedia.org/wiki/Parallel_ATA
As far as I know there's no tool to scan the IDE bus apart from letting the kernel do it. I think it might interfere with regular I/O.
| ide and pci bus commands |
1,356,894,572,000 |
Is there a command-line utility which would allow me to extract pitch information from an audio file and store it in a numerical form (i.e. as a list of comma-separated values)?
|
You could have a look at pitchtrack but it only supports two output formats, its own (.pt) and PostScript if you use the included "pt2ps" tool.
| Extract pitch information from audio file |
1,356,894,572,000 |
I've managed to get flvstreamer to read a radio station's RTMP stream with the options --live -r [url], and it outputs what I guess is the raw audio data + stream info to stdout.
Can I make it play the stream through my speakers, from the command-line?
Possibly by sending the raw audio data to mplayer or something else. Thanks.
I got it to work with the command ./flvstreamer_x86 --live --quiet -r [url] --buffer 3000 | mplayer -vo null -, but it quits after a couple of seconds to a couple of minutes. I added --buffer 3000 to imitate what I saw when tracing the original Flash player with Wireshark. These are the last rows of output.
[pulse] working around probably broken pause functionality,
see http://www.pulseaudio.org/ticket/440
AO: [pulse] Init failed: Connection refused
Failed to initialize audio driver 'pulse'
AO: [alsa] 48000Hz 2ch s16le (2 bytes per sample)
Video: no video
Starting playback...
FAAD: Failed to decode frame: Maximum number of bitstream elements exceeded
A:16866.9 ( 4:41:06.9) of 0.0 (unknown) 8.1%
Exiting... (End of file)
This was with flvstreamer 1.81. I couldn't get it to work with any later version; they just output
FLVStreamer v2.1c1
(c) 2010 Andrej Stepanchuk, Howard Chu, The Flvstreamer Team; license: GPL
Connecting ...
ERROR: rtmp server sent error
Starting Live Stream
FLV☺♣ Metadata:
audiodatarate 48.00
audiosamplerate 44100.00
audiocodecid 10.00
[stripped]
How can I make it play continuously? Thanks.
|
I think I've made it work with 1.81 now :)
./flvstreamer_x86 --live --quiet --buffer 3000 -r [url] | mplayer -vo null -idle -
I added the -idle to stop it from exiting, I guess the problem was that flvstreamer needed to buffer and mplayer didn't receive more data, so it quit.
| Play RTMP stream from command-line |
1,356,894,572,000 |
Is there a command like
mv --preserve-structure src src/1 src/2/3 dst
which creates dst/1 and dst/2/3? It should work similar to mv src/* dst, but move only the subtrees listed.
|
Under Linux, using rename from the Linux utilities (rename.ul under Debian and Ubuntu):
rename src dst src/1 src/2/3 # dst/2 must exist
With the rename Perl script that Debian and Ubuntu install as prename or rename:
rename 's!^src!dst!' src/1 src/2/3 # dst/2 must exist
rename 'use File::Basename; use File::Path;
s!^src!dst! && mkpath(dirname($_))' src/1 src/2/3
Here's a shell function that does what you're asking except for the argument order:
mv-preserving-structure () {
s=${1%/} t=${2%/}; shift 2
for x; do
case $x in
$s/*)
y=$t${x#$s}
mkdir -p -- "${y%/*}"
mv -- "$x" "$t${x#$s}";;
esac
done
}
mv-preserving-structure src dst src/1 src/2/3
| Selective recursive move? |
1,356,894,572,000 |
I wanted to know what I should do to restore configuration files if I've modified or accidentally deleted one.
In my case, I'm talking about /etc/snmp/snmpd.conf, what command should I use to reinstall it?
|
For restoring the configuration files you can use:
sudo apt-get -o Dpkg::Options::="--force-confmiss" install --reinstall packagename
So in this case the command (for snmpd) would be:
sudo apt-get -o Dpkg::Options::="--force-confmiss" install --reinstall snmpd
Credits to this site
| How to restore configurations files ? (SNMP) |
1,356,894,572,000 |
I'm using Pandoc to convert my markdown files to HTML files, HTML pages which are Github-styled using Github.html5 from the Pandoc-goodies repo, which is based on this other repo.
I'm extremely pleased to see that  inserts the complete inline pdf in the html file output, however it's more or less at the minimal size, making it impossible to really read the pdf.
How do I tell Pandoc to resize this inline PDF? I've tried the <div> trick but no luck.
|
Pandoc allows adding attributes to images, which also works for <embed> elements. The attributes must be given in curly braces. E.g.
{width=100% height=80%}
The advantage is that this syntax, contrary to raw HTML, will work in many output formats supported by pandoc.
| How to change the size of inline pdf in Pandoc generated html files? |
1,356,894,572,000 |
I have a tar.gz called "first.tar.gz". Inside it I have only one folder called "first" (no other folders or files). I want to decompress the tar.gz, so the folder "first" renames to "second".
I tried this:
tar -zxf first.tar.gz --transform s/first/second/
but it didn't work for me. I didn't get any errors / response, it just extracted the "first" folder without renaming.
The version of tar is 1.26
|
When you use --transform with GNU tar and ask for verbose output with -v, the pathnames you see outputted are the un-transformed pathnames.
GNU tar will transform the pathnames according to your --transform expression but will not report these in the output unless you use the option --show-transformed-names.
Example:
$ tree
.
`-- archive.tar
0 directories, 1 file
$ gtar -t -f archive.tar
first/
first/dir/
first/dir/first.txt
first/dir/file
$ gtar -xv -f archive.tar --transform='s/first/second/'
first/
first/dir/
first/dir/first.txt
first/dir/file
Note how the above command reports the pathnames stored in the archive. Below, we see that the pathnames were transformed appropriately.
$ tree
.
|-- archive.tar
`-- second
`-- dir
|-- file
`-- first.txt
2 directories, 3 files
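A self-contained reproduction of this behaviour (GNU tar assumed; the archive is built on the spot rather than taken from the question):

```shell
tmp=$(mktemp -d) && cd "$tmp"
mkdir -p first/dir && touch first/dir/file
tar -cf archive.tar first
rm -r first
# with --show-transformed-names the verbose listing shows second/... instead of first/...
tar -xvf archive.tar --transform='s/first/second/' --show-transformed-names
ls    # archive.tar  second  (the extracted tree really was renamed)
```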
| How to change name of a folder inside tar.gz before decompressing? |
1,356,894,572,000 |
When using tab completion for mkdir I found an existing binary called mkdict. But I can't find a man page or other details. A Google search only yields info on a Python library with this name, but I don't think this command can be that. What is it?
I am running Oracle Linux 8 in a VM, without GUI (just cli). Here is some results of commands I tried to find info:
Location:
$ which mkdict
/usr/sbin/mkdict
Trying to find something:
$whatis mkdict
mkdict: nothing appropriate.
$man mkdict
No manual entry for mkdict
$help mkdict
-bash: help: no help topics match 'mkdict'
$mkdict --help
-d, --decompress    decompress
$dnf info mkdict*
Error: No matching Packages to list
If I try to run it, then nothing really seems to happen. Perhaps it is waiting for input. The terminal just sits trying to run it until I do a Ctrl-Break.
A Google search for mkdict + linux only gives results about a Python package of the same name, at least in the results I reviewed. But it seems unlikely this is that package, which apparently has few downloads from PyPI.
What is mkdict and what does it do or is for?
|
Googling for /usr/sbin/mkdict (because it's really interesting that it's in sbin and not bin) finds this bugreport
No man pages found for /usr/sbin/mkdict & /usr/sbin/packer. These binaries are part of the cracklib-dicts RPM package, but no man pages are included in RPM package.
from Red Hat, which fits Oracle Linux.
Cracklib does seem to have a Python port, but was originally C, and apparently can be used to validate passwords by rejecting those that can be easily cracked. The beginning of the original README goes
CrackLib is a library containing a C function (well, lots of functions
really, but you only need to use one of them) which may be used in a
"passwd"-like program.
The idea is simple: try to prevent users from choosing passwords that
could be guessed by "Crack" by filtering them out, at source.
CrackLib is an offshoot of the version 5 "Crack" software, and
contains a considerable number of ideas nicked from the new software.
At the time of writing, Crack 5 is incomplete (still awaiting purchase
of my home box) - but I thought I could share this with you.
[ Incidentally, if Dell or anyone would like to "donate" a Linuxable
486DX2-66MHz box (EISA/16Mb RAM/640MB HD/AHA1740) as a development
platform for Crack, I'd be more than grateful to hear from you. 8-) ]
It should be easy to find out by looking at the "binaries" if Oracle Linux is using the Python port, or the original.
| What is the mkdict command and what does it do? (It came in distro but has no man page or help) |
1,356,894,572,000 |
I've several files containg POST body requests.
I'd like to send those requests in parallel.
Related curl command is like:
curl -s -X POST $FHIR_SERVER/ -H "Content-Type: application/fhir+json" --data "@patient-bundle-01.json"
Request bodies are files like patient-bundle-xx, where xx is a number. Currently, I'd like to send up to 1500 requests using this incremental pattern.
How could I send the above requests using this incremental pattern?
How could I do this in parallel?
|
With GNU Parallel:
doit() {
bundle="$1"
curl -s -X POST $FHIR_SERVER/ -H "Content-Type: application/fhir+json" --data "@patient-bundle-$bundle.json"
}
export -f doit
export FHIR_SERVER
seq -w 99 | parallel -j77 doit
Adjust -j77 if you do not want 77 jobs in parallel.
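If GNU Parallel isn't available, plain bash job control can approximate it. The sketch below is a dry run (echo in front of curl prints the commands instead of sending them); it assumes bash ≥ 4.3 for wait -n and four-digit file names like patient-bundle-0001.json, so adjust the seq range and padding to your actual files:

```shell
#!/usr/bin/env bash
max_jobs=77
for i in $(seq -w 1 1500); do            # -w zero-pads: 0001 .. 1500
    echo curl -s -X POST "$FHIR_SERVER/" \
        -H "Content-Type: application/fhir+json" \
        --data "@patient-bundle-$i.json" &
    # throttle: never more than max_jobs background jobs at once
    while (( $(jobs -rp | wc -l) >= max_jobs )); do wait -n; done
done
wait
```

Remove the echo once the printed commands look right.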
| Make parallel http requests using raw data files |
1,356,894,572,000 |
I read in a Stack Overflow post that on Linux, when we delete a file, it is not actually deleted; only the link from the inode table to that file is removed. If that is the case, then why isn't delete a constant-time operation?
I also tried an experiment:
I created a folder with 1500 images and created a tar object of these images. Both the directory and tar file are of same size. Timings for deleting the tar object and the directory of 1500 images are as follows
Deleting tar file time rm test.tar:
real 0m0.024s
user 0m0.001s
sys 0m0.024s
Deleting directory: time rm -r test
real 0m0.219s
user 0m0.024s
sys 0m0.191s
As per my understanding, this difference in time is because of
unlinking 1 file vs unlinking 1500 files. But shouldn't the tar object deletion be 1500x faster?
|
Because it's not a simple "mark a single inode deleted" operation: https://www.slashroot.in/how-does-file-deletion-work-linux
At least on ext4 file deletion is a whole lot faster than on ext2/ext3 partitions due to the use of extents.
In the case of SSDs, file deletion can be slower than necessary due to the "discard" option, which tells your SSD to physically discard all the blocks belonging to a file in order to extend the SSD's lifespan. Disabling it is highly inadvisable.
| Why is deleting files in Ubuntu slow? |
1,356,894,572,000 |
I'm studying the env command and trying to understand how it works. Here's the command synopsis:
env [-iv] [-P altpath] [-S string] [-u name] [name=value ...] [utility [argument ...]]
I decided to play around with it and tried:
env cd /home/username
I get: env: ‘cd’: No such file or directory
The result is the same with either env cd ~ or env cd.
So why do I get an error when using cd as env's utility argument?
|
Because cd is not a "utility", it's a shell "builtin", handled by env's parent shell.
Read man $SHELL.
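You can confirm this from the shell itself: type reports how a name would be resolved (typical bash output shown in the comments):

```shell
type cd              # -> cd is a shell builtin
type env             # -> env is /usr/bin/env (an external program)
env cd /tmp || true  # typically fails: there is no /usr/bin/cd binary for env to exec
```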
| Why do I get an error when using cd as env's utility argument? [duplicate] |
1,356,894,572,000 |
I'm trying to perform docker save -o on all images in a docker-compose.yaml file in one command.
what I've managed to do is:
cat docker-compose.yaml | grep image and this will give:
image: hub.myproject.com/dev/frontend:prod_v_1_2
image: hub.myproject.com/dev/backend:prod_v_1_2
image: hub.myproject.com/dev/abd:v_2_3
image: hub.myproject.com/dev/xyz:v_4_6
I need to perform the following command for each image:
docker save -o frontend_prod_v_1_2.tar hub.myproject.com/dev/frontend:prod_v_1_2
What I have managed to achieve is :
cat docker-compose.yml | grep image | cut -d ':' -f 2,3 which gives:
hub.myproject.com/dev/frontend:prod_v_1_2
hub.myproject.com/dev/backend:prod_v_1_2
hub.myproject.com/dev/abd:v_2_3
hub.myproject.com/dev/xyz:v_4_6
I can furthur do:
cat docker-compose.yml | grep image | cut -d ':' -f 2,3 | cut -d '/' -f3 | cut -d ':' -f1,2
which gives:
frontend:prod_v_1_2
backend:prod_v_1_2
abd:v_2_3
xyz:v_4_6
Then I'm not sure what to do. I have tried to use xargs to pass as a variable but I don't know how to change xargs from frontend:prod_v_1_2 to frontend_prod_v_1_2.tar on the fly in the command line. Also, I still need the full image name and tag at the end.
I'm looking for something like:
cat docker-compose.yml | grep image | cut -d ':' -f 2,3 | xargs -I {} docker save -o ``{} | cut -d '/' -f3 | cut -d ':' -f1,2 | xargs -I {} {}.tar`` {}
any bash magicians can offer a hint?
|
Your pipeline approach will get convoluted as you add more and more commands. Just use the power of your native shell, in this case bash, for operations like this. Pipe the output of grep image docker-compose.yml into a while..read loop and perform substitutions there.
In a proper shell script this could be done as
#!/usr/bin/env bash
# '<(..)' is a bash construct i.e process substitution to run the command and make
# its output appear as if it were from a file
# https://mywiki.wooledge.org/ProcessSubstitution
while IFS=: read -r _ image; do
# Strip off the part up the last '/'
iname="${image##*/}"
# Remove ':' from the image name and append '.tar' to the string and
# replace spaces in the image name
docker save -o "${iname//:/_}.tar" "${image// }"
done < <(grep image docker-compose.yml)
On the command line, instead of xargs I would use awk directly to run the docker save operation
awk '/image/ {
n = split($NF, arr, "/");
iname = arr[n]".tar";
sub(":", "_", iname); fname = $2;
cmd = "docker save -o " iname " " fname;
system(cmd);
}' docker-compose.yml
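The string surgery that turns frontend:prod_v_1_2 into frontend_prod_v_1_2.tar can be checked in isolation with two bash parameter expansions (the image string is the sample from the question):

```shell
image='hub.myproject.com/dev/frontend:prod_v_1_2'   # sample line from the compose file
iname=${image##*/}            # strip up to the last '/': frontend:prod_v_1_2
tarname="${iname//:/_}.tar"   # replace every ':' with '_' and append .tar
echo "$tarname"               # prints: frontend_prod_v_1_2.tar
```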
| How to use xargs to set and change variables on the fly? |
1,356,894,572,000 |
set complete = enhance is put in .cshrc, and we have two files, test_ab_dd.c and test_abc_dd.c.
If I type test_ab_<TAB> in the command line, csh DOES NOT autocomplete to test_ab_dd.c. It suggests both test_ab_dd.c and test_abc_dd.c. I have to type all the way to the end. Shouldn't this no longer be ambiguous? It completes just fine in bash.
This only happens with complete set to enhance. I wanted it that way since it allows case-insensitivity.
Is there any way to keep case-insensitivity while resolving this issue?
|
When you set complete to enhance, it considers periods, hyphens and underscores as word separators, not as ordinary characters like you expected.
So basically the answer is no since this is a "feature" of setting complete to enhance.
| csh/tcsh Tab Completion with "complete = enhance" Strange Behavior |
1,356,894,572,000 |
I can log in to the server terminal with a username and password, but I can't run the screen command. It says Must be connected to a terminal when I run screen -R newNodeServer.
I found an answer at Ask Ubuntu: What does the script /dev/null do?, but if I run the command script /dev/null according to that answer, will the running server, websites, and applications be affected?
I am going to run some artisan commands for Laravel in screens. Such as php artisan queue and php artisan websockets:serve
|
My best guess here is:
@testteam is trying to run the screen command and getting a complaint from screen about not being able to open a terminal.
In the answer linked, solution mentioned is to use script /dev/null and that should resolve the issue.
@testteam is wondering if that's going to affect any servers running on the machine (probably in the screen session).
This answer has very good explanation what script command is going to do and why screen is complaining:
Why this works is invoking script has a side effect of creating a pseudo-terminal for you at /dev/pts/X. This way you don't have to do it yourself, and screen won't have permission issues - if you su from user A to user B, by directly invoking screen you try to grab possession of user A's pseudo-terminal. This won't succeed unless you're root.
So the fact that script solves the issue with screen is basically a side effect. I'm guessing that you want to run some node.js app in the screen session. The fact that you created a pts device and used script for that should not affect servers or (web)apps running inside the screen session.
| Will the command `script /dev/null` stop or affect the running applications and server? [duplicate] |
1,356,894,572,000 |
Is there a nice way to combine:
ls -R | grep "^[.]/" | sed -e "s/:$//" -e "s/[^\/]*\//--/g" -e "s/^/ |/"
(displays directory as tree)
with
du -h ... (for each listed dir)
Without installing any extra packages, like tree or dutree?
|
I think you should reverse the order of the du -h output with tac and then apply some formatting with sed. This one should work for "normal" directory names (without control characters):
du -h | tac | sed 's_[^/[:cntrl:]]*/_--_g;s/-/|/'
Or use find -exec:
find . -type d -exec du -sm {} \; | sed 's_[^/[:cntrl:]]*/_--_g;s/-/|/'
| du + tree command (without tree installation) |
1,356,894,572,000 |
I know this is a Very Bad Idea™️ with regards to security, I'm ok with that. (Everything is over SSL, so meh.)
I have a small wandering Raspberry Pi powered bot, that I'd like to keep in touch with. Which means the bot should try its darnedest to connect to my server. It has Wifi, but no cell chip.
I live in a techie area, so there are plenty "Free WiFi Just Enter Your Email or accept our TOS" access points, so even an open SSID doesn't mean a usable SSID.
Is there a way (small app or script or whatever) that will cycle through all open access points until it finds one that allows a connection to the server - AND will resume scanning if it gets cut off or if the throughput gets too weak?
I'm fine running python, bash, whatever raspbian (Debian) supports. And the "outages" when it doesn't have any connection are the cost of doing business.
|
I do not know of any ready-made tool that can be used for your purpose, but you could build one yourself.
The necessary tools and programs are available on Raspbian.
Luckily, wpa-supplicant allows for interactive control of scanning for networks and manually connecting to networks.
To be specific, you can either use the program wpa_cli or use the wpa_ctrl API in C, which is also the basis for wpa_cli.
wpa_cli allows you to
scan for SSIDs and show results:
wpa_cli -i wlan0 SCAN
wpa_cli -I wlan0 SCAN_RESULTS
get detailed information about the SSIDs (including signal strength, encryption, etc.):
wpa_cli -i wlan0 BSS 0
Hint: replace 0 with the idx of the SSID from the SCAN_RESULTS output you want to know more about.
connect to a specific SSID that fits your criteria
sudo wpa_cli -i wlan0 ADD_NETWORK
sudo wpa_cli -i wlan0 SET_NETWORK 0 ssid "SSID"
sudo wpa_cli -i wlan0 SET_NETWORK 0 psk "passphrase"
sudo wpa_cli -i wlan0 ENABLE_NETWORK 0
Note: replace 0 with the number that is printed to stdout after ADD_NETWORK
disconnect from SSID
sudo wpa_cli -i wlan0 DISCONNECT
You will have to do some parsing of the output of those commands obviously.
In order to check, if the connection to your server is possible after connecting to the SSID, you could simply evaluate the results of a ping call.
And now you simply have to put everything together.
| Connect to fastest open WiFi (that can reach the Internet) and keep looking if cut off? |
1,356,894,572,000 |
I have a bunch of directory, sub-directory, and filenames that were created in Linux with the following pattern: YYYY - MM - DD T HH : MM : SS (I added spaces for clarity but no spaces are in the directory/sub/file names; YYYY, MM, DD... are integers and '-', 'T', ':' are constants of the expression).
These directories/files were copied to Windows and then back to Linux, and the ':' got corrupted. Each place where there should be ':' there is '\357\200\242' which shows up as ??? when I do ls.
I know that fixing this should not be too complicated using a combination of mv and sed, but I'm very rusty on my piping, regex, and sed usage.
So far I have this
for a in *T*???*???*; do mv "$(echo "$a" | sed [***])"; done
The [***] should be a regex that changes *T*???*???* to *T*:*:* where the middle two * are each two digits. And this should rename both files and directories, recursively. I also suspect that ??? is not the correct input pattern to use here.
Alternate approach
I've seen a bunch of posts offering a combination of find and rename, but again, I am a bit rusty on the use of regex, and could not arrive at a good solution for this situation.
|
Assuming \357\200\242 are octal numbers. Try:
rename -n 's/\o{357}\o{200}\o{242}/:/g' 2018-*
The command rename works with a Perl regular expression replace. Here it replaces three characters given as octal byte values with a colon.
Because of -n this just prints what it would do. So you are able to test without destroying something.
When you are sure that this does what you want, execute without -n.
If you need to traverse an entire directory tree, combine it with find:
find . -depth -exec rename -n 's/\o{357}\o{200}\o{242}/:/g' {} \;
Don't worry if the directory tree contains files that don't need to be renamed. If the regex replacement doesn't change the file name, the file is not renamed.
| Replacing bad file, directory and subdirectory names using a regex pattern |
1,356,894,572,000 |
I couldn't find an example online but I'm sure I've seen shell coders use ${1:--} to accept user input. Here's an example:
#!/bin/bash
var="${1:--}"
echo "$var"
Then, run it:
$ ./test.sh "this is a test"
My question is: how is using "${1:--}" to accept user input different from "$1"?
|
${1:--} will expand to the string "-" if there is no parameter one or if the parameter is empty.
So ./test.sh "" will return the string "-" as will the command ./test.sh This is considered to be a useful default in many circumstances where an argument of "-" can mean stdin or stdout. Also it makes sure scripts don't break when a parameter is not explicitly set.
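A tiny illustration (the check function is hypothetical, just to show the expansion):

```shell
# ${1:--} falls back to "-" when $1 is unset OR empty;
# plain $1 would expand to nothing in those cases.
check() { printf '%s\n' "${1:--}"; }

check                  # prints "-"  (no argument at all)
check ""               # prints "-"  (empty argument)
check "this is a test" # prints "this is a test"
```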
| Bash: how to properly accept user input? |
1,356,894,572,000 |
I'm not sure if this would be the right place to ask this question.
I've created a sample website using the Metasploitable OS, and since this is a terminal-based OS I've been having trouble coding my HTML site with the vi editor. I was wondering if anyone knows how I can access my root directory (and have control, adding files into the root directory) from a different machine.
Right now I'm running Metasploitable on VirtualBox and I'm able to access the root directory and the files from my host machine (Windows 10).
All my files for the server are located in the directory /var/www, and when I browse to http://server_ip_addr, I get a directory index listing where I see the few files that I've created, etc.
I've also noticed a phpMyAdmin entry; when I went to that section of the site, http://my_ip_addr:80/myPHPAdmin,
it required a username and password. Would I need access to this in order to be able to upload files into my server directory? If so, does anyone know the default username and password? I did some research, and from my understanding (which might be completely off) I would need to install a database in order to even log into that section.
So I would greatly appreciate any ideas on how I can load files into that directory from my host machine, or, if there is no way to do that, a recommendation for a good text editor where editing isn't such a drag!
I'm running Apache/2.2.8 (Ubuntu) DAV/2 Server
Thank you!
|
The PHPMyAdmin login credentials should be these:
User: root
Password: (leave it blank)
Source: Metasploitable 3 Wiki - Vulnerabilities
You should not need to install anything (let alone a database server) to access the system.
If you have relatively little experience with vi, I would recommend nano, which is a bit more friendly.
| Metasploitable server |
1,533,849,977,000 |
I'm writing a small perl wrapper to setup environment variables, etc., then invoke a command by the same name.
./foo.pl -a one -b two -c "1 2 3" -d done
When I output @ARGV, the "" around 1 2 3 have been stripped. Does someone know how to have perl keep these quotes?
|
It's not that perl doesn't keep the quotes, perl never gets them. The quotes are just one way to prevent the shell from splitting the text into multiple arguments. The same effect can be achieved with backslash:
./foo.pl -a one -b two -c 1\ 2\ 3 -d done
The effect is in both cases a string of 1, space, 2, space, 3. You can also put quotes around the other arguments that don't contain spaces; the quotes are still not part of the arguments passed to the program.
If you want to pass arguments to the shell, you can just put quotes around all. Or you can put a backslash before every special shell character.
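You can verify that the called program sees identical arguments either way. Here is a shell stand-in that prints each argument it receives; a Perl script would see exactly the same values in @ARGV:

```shell
# show_args is a hypothetical stand-in for foo.pl: it brackets
# each argument so you can see where the boundaries are.
show_args() { for a in "$@"; do printf '<%s>\n' "$a"; done; }

show_args -c "1 2 3"    # quoted
show_args -c 1\ 2\ 3    # backslash-escaped
# Both calls print:
# <-c>
# <1 2 3>
```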
| Keeping quotes passed to a perl wrapper script |
1,533,849,977,000 |
file1.txt (tab delimiter, with the second column containing a string with spaces):
A Golden fog
B Vibrant rainbow and sunny
C Jumping, bold, and bright
D Chilly/cold/brisk air
file2.txt (tab delimiter):
D01 Ti600 A
D02 Ti500 B
D16 Ti700 C
D20 Ti800 B
desired output for file3.txt (having a tab delimiter):
D01 Ti600 A Golden fog
D02 Ti500 B Vibrant rainbow and sunny
D16 Ti700 C Jumping, bold, and bright
D20 Ti800 B Vibrant rainbow and sunny
or at least this for file3.txt:
D01 Ti600 Golden fog
D02 Ti500 Vibrant rainbow and sunny
D16 Ti700 Jumping, bold, and bright
D20 Ti800 Vibrant rainbow and sunny
I have tried
awk 'NR==FNR{a[$1]=$2;next}{$3=a[$1];}1' file1.txt file2.txt > file3.txt
But I only get:
D01 Ti600
D02 Ti500
D16 Ti700
D20 Ti800
Which has a space delimiter instead of tabs, as well as a space after column 2, but no value in column 3.
Thanks so much for any help with getting the desired output.
|
Although you noted that the files are tab delimited, you did not actually make use of that. Also the common key A, B etc. is in the third field of file2.txt. So:
$ awk 'BEGIN{OFS=FS="\t"} NR==FNR{a[$1]=$2;next}{$4=a[$3];}1' file1.txt file2.txt
D01 Ti600 A Golden fog
D02 Ti500 B Vibrant rainbow and sunny
D16 Ti700 C Jumping, bold, and bright
D20 Ti800 B Vibrant rainbow and sunny
or (slightly shorter)
$ awk -F'\t' 'NR==FNR{a[$1]=$2;next}{print $0"\t"a[$3]}' file1.txt file2.txt
D01 Ti600 A Golden fog
D02 Ti500 B Vibrant rainbow and sunny
D16 Ti700 C Jumping, bold, and bright
D20 Ti800 B Vibrant rainbow and sunny
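If you want to try the join without creating files, the same command runs on inline data via bash process substitution (shortened sample):

```shell
awk 'BEGIN{OFS=FS="\t"} NR==FNR{a[$1]=$2; next}{$4=a[$3]}1' \
  <(printf 'A\tGolden fog\nB\tVibrant rainbow and sunny\n') \
  <(printf 'D01\tTi600\tA\nD02\tTi500\tB\n')
# prints the two joined rows, tab-separated:
# D01 Ti600 A Golden fog
# D02 Ti500 B Vibrant rainbow and sunny
```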
| Adding a new column in file1 that outputs a string in a reference file file2 that matches the value of another column in file1 |
1,533,849,977,000 |
Is it possible to make lines in the info command wider?
An example :
When running info awk I get the following output, although the terminal size is much wider.
2 Running 'awk' and 'gawk'
**************************
This major node covers how to run 'awk', both POSIX-standard and
'gawk'-specific command-line options, and what 'awk' and 'gawk' do with
nonoption arguments. It then proceeds to cover how 'gawk' searches for
source files, reading standard input along with other files, 'gawk''s
environment variables, 'gawk''s exit status, using include files, and
obsolete and undocumented options and/or features.
I tried to set COLUMNS=200 but it didn't change the output, interestingly the output of pinfo did change according to the COLUMNS variable.
|
Unlike man pages, info pages have the line width set when they are created using makeinfo(1) or texi2any(1) (the --fill-column option). The default is 72 characters, which is why you'll usually see line breaks there.
As far as I can tell, to reflow an info page you would have to regenerate the file from its original texi source.
| How to change line width in "info" command |
1,533,849,977,000 |
I'm using Debian 8 and want to reinstall my machine with Debian 9. I intend to do a minimal install with just the right drivers and the necessary X modules. Everything will be done from the CLI. So, how can I discover the drivers that I'm currently using and find them in Debian 9 (maybe the names have changed?)?
I have found on the web how to discover my video and card drivers, but is there something more?
|
You can get the list of drivers in use through the lspci command, then find the package name that provides each driver.
e,g:
Get the list of kernel-module drivers:
lspci -knn
A sample output for the wifi driver:
08:00.0 Network controller [0280]: Qualcomm Atheros AR9485 Wireless Network Adapter [168c:0032] (rev 01)
Subsystem: Lite-On Communications Inc AR9485 Wireless Network Adapter [11ad:6617]
Kernel driver in use: ath9k
Kernel modules: ath9k
To get the package name which provides the ath9k module:
apt-file search ath9k | less
The apt-file can be installed and updated through:
apt install apt-file
apt-file update
sample output:
firmware-atheros: /lib/firmware/ath9k_htc/htc_7010-1.4.0.fw
firmware-atheros: /lib/firmware/ath9k_htc/htc_9271-1.4.0.fw
...
From this example, the ath9k module belongs to the firmware-atheros package. Using the package name you can check on the official website whether the package is available in Debian Stretch.
| How discover (and find) all drivers that I'm using for a new minimal SO install? |
1,533,849,977,000 |
This answered question explains how to search and sort within a specific file, but how would you accomplish this for an entire directory? I have 1 million text files I need to search for the ten most frequently used words.
database= /data/000/0000000/s##_date/*.txt - /data/999/0999999/s##_data/*txt
Everything I have attempted results in sorting filenames, paths, or directory errors.
I have made some progress with grep, but parts of filenames seem to appear in my results.
grep -r . * | tr -c '[:alnum:]' '[\n*]' | sort | uniq -c | sort -nr | head -10
output:
1145
253 txt
190 s01
132 is
126 of
116 the
108 and
104 test
92 with
84 in
The 'txt' and 's01' come from file names and not from the text inside the text file. I know there are ways of excluding common words like "the" but would rather not sort and count file names at all.
|
grep will show the filename of each file that matches the pattern along with the line that contains the match if more than one file is searched, which is what's happening in your case.
Instead of using grep (which is an inspired but slow solution to not being able to cat all files on the command line in one go) you may actually cat all the text files together and process it as one big document like this:
find /data -type f -name '*.txt' -exec cat {} + |
tr -cs '[:alnum:]' '\n' | sort | uniq -c | sort -nr | head
I've added -s to tr so that multiple consecutive newlines are compressed into one, and I change all non-alphanumerics to newlines ([\n*] made little sense to me). The head command produces ten lines of output by default, so -10 (or -n 10) is not needed.
The find command finds all regular files (-type f) anywhere under /data whose filenames matches the pattern *.txt. For as many as possible of those files at a time, cat is invoked to concatenate them (this is what -exec cat {} + does). cat is possibly invoked many times if you have a huge number of files, but that does not affect the rest of the pipeline as it just reads the output stream from find+cat.
To avoid counting empty lines, you may want to insert sed '/^ *$/d' just before or just after the first sort in the pipeline.
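On a toy input the core of the pipeline behaves like this; exact column widths come from uniq -c, and ordering among equal counts can vary by sort implementation:

```shell
# toy input: "the" appears 3 times, "and" twice
printf 'the cat and the dog and the bird\n' |
  tr -cs '[:alnum:]' '\n' | sort | uniq -c | sort -nr | head -2
# top two lines: "the" (count 3), then "and" (count 2)
```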
| Using a single command-line command, how would I search every text file in a database to find the 10 most used words? |
1,533,849,977,000 |
So, I have a file of insertion hits with some feature marks, with the following layout (chr, start, end, chr, start, end, number of overlapping base pairs):
chr1 69744110 69793325 . -1 -1 0
chr1 82791976 82831348 chr1 82792114 82792615 501
chr1 82791976 82831348 chr1 82816285 82817077 792
chr1 82791976 82831348 chr1 82828015 82829891 1876
chr1 88599340 88658398 . -1 -1 0
chr1 137772945 137830035 . -1 -1 0
chr1 137875312 137920590 . -1 -1 0
chr1 193433080 193446861 . -1 -1 0
chr10 26483800 26501370 chr10 26484794 26485295 501
chr10 68069913 68089436 . -1 -1 0
chr10 95098349 95113967 . -1 -1 0
chr10 97310211 97335589 . -1 -1 0
chr10 111083097 111118237 chr10 111088928 111090274 1346
chr10 117904141 117947090 chr10 117905334 117906320 986
chr10 117904141 117947090 chr10 117918966 117919852 886
chr10 117904141 117947090 chr10 117926867 117927368 501
chr11 11521339 11587607 chr11 11523970 11524747 777
chr11 11521339 11587607 chr11 11555497 11559868 4371
chr11 11521339 11587607 chr11 11560639 11562128 1489
chr11 11521339 11587607 chr11 11564617 11565370 753
So what I need is to concatenate the values in column 5 (column5/column5...), column 6 (column6/column6...) and column 7(column/column7)... IF I have a match in the first 3 columns. I would also like to keep column 4, but it is okay if I miss it.
The output should look like:
chr1 69744110 69793325 . -1 -1 0
chr1 82791976 82831348 chr1 82792114/82816285/82828015 82792615/82817077/82829891 501/792/1876
chr1 88599340 88658398 . -1 -1 0
chr1 137772945 137830035 . -1 -1 0
chr1 137875312 137920590 . -1 -1 0
chr1 193433080 193446861 . -1 -1 0
chr10 26483800 26501370 chr10 26484794 26485295 501 (...)
chr10 117904141 117947090 chr10 117905334/117918966/117926867 117906320/117919852/117927368 986/886/501
(...)
I have been in several trials and the best I could do was this:
awk '{ k=$1 FS $2 FS $3; a[k]=(k in a)? a[k]"/"$5 : $5 }
END{ for(i in a) {
split(i,b,FS); b[5]=a[i]"\t"b[5]; r="";
for(j=1;j<=NF;j++) {
r=(r!="")? r"\t"b[j] : b[j]
}
print r}
}' input.bed > output.bed
But with this I am missing values and I am not able to concatenate more than one column.
Can you help me please?
EDIT:
new attempt:
awk -F'\t' -v OFS='\t' '{
if ($2 in a) {
a[$2] = a[$2]";"$5;
b[$2] = b[$2]";"$6;
} else {
a[$2] = $5;
b[$2] = $6;
}
}
END { for (i in a) print i, a[i], b[i] }' input.bed > output.bed
But I continue to lose the fields that are not being evaluated.
|
Question:
Concatenate the values in column 5,6 and 7 IF I have a match in the first 3 columns
Answer:
perl -lane '
  if ($. == 1) { @a = @F; next }
  if ($F[0] eq $a[0] && $F[1] eq $a[1] && $F[2] eq $a[2]) {
    $a[4] .= "/$F[4]"; $a[5] .= "/$F[5]"; $a[6] .= "/$F[6]";
  } else {
    for ($i = 0; $i < @a; $i++) { printf "\t%s", $a[$i] }
    print "";
    @a = @F;
  }
  END {
    for ($i = 0; $i < @a; $i++) { printf "\t%s", $a[$i] }
    print "";
  }
' input.bed
Output:
chr1 69744110 69793325 . -1 -1 0
chr1 82791976 82831348 chr1 82792114/82816285/82828015 82792615/82817077/82829891 501/792/1876
chr1 88599340 88658398 . -1 -1 0
chr1 137772945 137830035 . -1 -1 0
chr1 137875312 137920590 . -1 -1 0
chr1 193433080 193446861 . -1 -1 0
chr10 26483800 26501370 chr10 26484794 26485295 501
chr10 68069913 68089436 . -1 -1 0
chr10 95098349 95113967 . -1 -1 0
chr10 97310211 97335589 . -1 -1 0
chr10 111083097 111118237 chr10 111088928 111090274 1346
chr10 117904141 117947090 chr10 117905334/117918966/117926867 117906320/117919852/117927368 986/886/501
chr11 11521339 11587607 chr11 11523970/11555497/11560639/11564617 11524747/11559868/11562128/11565370 777/4371/1489/753
Note:
There might be a shorter or more elegant solution
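For comparison, here is a hedged awk sketch of the same idea. Like the Perl answer, it assumes rows sharing a key are adjacent in the input:

```shell
# rows sharing columns 1-3 are merged; e.g. the three chr1 82791976 rows
# become one row with 82792114/82816285/82828015 in column 5
awk 'BEGIN{FS=OFS="\t"}
{
  k = $1 FS $2 FS $3
  if (k == prev) {              # same first three columns: append
    c5 = c5 "/" $5; c6 = c6 "/" $6; c7 = c7 "/" $7
  } else {
    if (NR > 1) print p1, p2, p3, p4, c5, c6, c7
    p1=$1; p2=$2; p3=$3; p4=$4; c5=$5; c6=$6; c7=$7
    prev = k
  }
}
END{ print p1, p2, p3, p4, c5, c6, c7 }' input.bed
```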
| concatenate columns horizontally, if matches in previous field. Multiple columns to concatenate |
1,533,849,977,000 |
I want to redirect stdin and stdout and stderr at the same time in bash, is this how it's done:
someProgram < stdinFile.txt > stdoutFile.txt 2> stderrFile.txt
|
Yes, your syntax is correct although the following equivalent one is closer to what the shell actually does:
< stdinFile.txt > stdoutFile.txt 2> stderrFile.txt command arguments
The files used for redirection are open before the command is launched, and if there is a failure in this first step, the command is not launched.
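A self-contained way to convince yourself all three redirections work (the inner sh -c is a stand-in for someProgram; only temporary files are touched):

```shell
tmp=$(mktemp -d)
printf 'line from stdin\n' > "$tmp/stdinFile.txt"

# stand-in for someProgram: copies stdin to stdout, warns on stderr
sh -c 'cat; echo warning >&2' \
  < "$tmp/stdinFile.txt" > "$tmp/stdoutFile.txt" 2> "$tmp/stderrFile.txt"

cat "$tmp/stdoutFile.txt"   # line from stdin
cat "$tmp/stderrFile.txt"   # warning
rm -r "$tmp"
```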
| How to redirect stdin and stdout and stderr at the same time in bash? [duplicate] |
1,533,849,977,000 |
I'm running non-GUI Arch Linux on VMware 14.0. I installed an SSH server on it (openssh) and connected to my virtual machine using Kitty 0.70 on Windows 10 [Version 10.0.15063].
My problem is: when I use a multiline command, the output of the command in Kitty is really weird.
For example:
On Kitty ssh client:
[ddk@mylinux:~]
14:23:08 $ if [[ -o interactive ]]
if> then
then> echo 'inter'
then> fi
then # not my typing
echo 'inter' # not my typing
fi)inter # not my typing
[ddk@mylinux:~]
14:23:34 $
On terminal in my virtual machine:
[ddk@mylinux:~]
14:23:54 $ if [[ -o interactive ]]
if > then
then > echo interactive
then > fi
interactive
[ddk@mylinux:~]
14:24:37 $
So how do I fix improper output on my Kitty ssh client?
P/S: I am running zsh without any preconfigure scripts like oh-my-zsh. This is my .zshrc.
|
As Stéphane Chazelas remarked, the problem is in your preexec function. When you set the terminal title, you use the command without protecting its special characters. The first newline in the command terminates the escape sequence to set the title, and the other lines get printed.
You would also have a problem with backslashes and percent characters in the command, since print performs backslash expansion and you're also performing prompt percent expansion on the command.
The solution is to remove or encode control characters, and to perform backslash expansion to get control characters separately from the characters in the prompt. For example:
set_title () { print -rn $'\e]0;'${${:-${(%):-$1}$2}//[^[:print:]]/_}$'\a' }
precmd () { set_title '[%n@%M:%~]' '' }
preexec () { set_title '[%n@%M:%~]' " ($1)" }
| Weird output on multiline command in Kitty? |
1,533,849,977,000 |
I have a few thousand subdirectories in a directory, each containing one config.ini file and one JPEG image. The ini file contains (including but not limited to) a section that encodes the time, when the image was taken.
[Acquisition]
Name=coating_filtered_001
Comment=Image acquisition
Year=2017
Month=3
Day=21
Hour=13
Minute=2
Second=34
Milliseconds=567
The image files always have the same exact name, for the sake of this question image.jpg.
I would like to copy all image files to some other (single) directory, and rename them to something like yyyy-mm-ddThh:mm:ss:NNN.jpg or similar, i.e. the filename consisting of the timestamp from the ini file.
Can this be achieved on the command line?
|
It can be achieved on the command line, but a script that would run on the command line would be an easier solution (I think).
Basic steps:
Get a list of directories to iterate over:
find ${directory} -mindepth 1 -type d
Check each directory for the presence of config.ini, and image.jpg.
if [ -f ${subdir}/config.ini -a -f ${subdir}/image.jpg ]; then ...
Check the config.ini for all the right parts of the timestamp.
various grep ^Year= ${subdir}/config.ini or ^Month, etc...
Make a copy of the image.jpg file, using the timestamp.
cp ${subdir}/image.jpg ${copydir}/${timestamp}.jpg
I think it's easier, and potentially safer to put these sequences into a script, where you can more easily put in readable output, error handling, etc.
Here's an example script to do those steps:
#!/bin/bash
imagepath="/path/to/images"
copydir="/path/to/copies"
# step 1: find all the directories
for dir in $(find ${imagepath} -mindepth 1 -type d); do
echo "Processing directory $dir:"
ci=${dir}/config.ini
jp=${dir}/image.jpg
# step 2: check for config.ini and image.jpg
if [ -f ${ci} -a -f ${jp} ]; then
# step 3: get the parts of the timestamp
year=$(grep ^Year= ${ci} | cut -d= -f2)
month=$(grep ^Month= ${ci} | cut -d= -f2)
day=$(grep ^Day= ${ci} | cut -d= -f2)
hour=$(grep ^Hour= ${ci} | cut -d= -f2)
min=$(grep ^Minute= ${ci} | cut -d= -f2)
sec=$(grep ^Second= ${ci} | cut -d= -f2)
ms=$(grep ^Milliseconds= ${ci} | cut -d= -f2)
# if any timestamp part is empty, don't copy the file
# instead, write a note, and we can check it manually
if [[ -z ${year} || -z ${month} || -z ${day} || -z ${hour} || -z ${min} || -z ${sec} || -z ${ms} ]]; then
echo "Date variables not as expected in ${ci}!"
else
# step 4: copy file
# if we got here, all the files are there, and the config.ini
# had all the timestamp parts.
tsfile="${year}-${month}-${day}T${hour}:${min}:${sec}:${ms}.jpg"
target="${copydir}/${tsfile}"
echo -n "Archiving ${jp} to ${target}: "
st=$(cp ${jp} ${target} 2>&1)
# capture the status and alert if there's an error
if (( $? == 0 )); then
echo "[ ok ]"
else
echo "[ err ]"
fi
[ ! -z $st ] && echo $st
fi
else
# other side of step2... some file is missing...
# manual check recommended, no action taken
echo "No config.ini or image.jpeg in ${dir}!"
fi
echo "---------------------"
done
It's always good to be somewhat conservative with scripts like this, so you don't accidentally delete files. This script only does 1 copy action, so that's pretty conservative, and it shouldn't harm your source files. But you may want to change specific actions or output messages to better suit your needs.
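As a variation on step 3, the seven grep/cut calls could be collapsed into a single awk pass that also zero-pads the fields (the config.ini name and keys are taken from the question's sample file):

```shell
# Prints e.g. 2017-03-21T13:02:34:567 for the question's sample file
awk -F= '
  /^Year=/        { y  = $2 }
  /^Month=/       { mo = $2 }
  /^Day=/         { d  = $2 }
  /^Hour=/        { h  = $2 }
  /^Minute=/      { mi = $2 }
  /^Second=/      { s  = $2 }
  /^Milliseconds=/{ ms = $2 }
  END { printf "%04d-%02d-%02dT%02d:%02d:%02d:%03d\n", y, mo, d, h, mi, s, ms }
' config.ini
```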
| Copy file to destination based on ini file |
1,533,849,977,000 |
I'm trying to create a dialog menu based on the results of the lsblk command. My goal is to read the first word as the menu item and the remaining words as the menu item description.
For example:
$ dialog --menu "Choose one:" 0 0 0 $(lsblk -lno name,size | grep sda)
This command works, because the dialog --menu option expects two arguments per menu item, in this case "name" and "size".
However, if I try the same command with a multi-word description (using the "read" built-in to write to stdout as two variables), word-splitting occurs in the second variable, even if quoted.
#command 1: unquoted variable
$ dialog --menu "Choose one:" 0 0 0 $(lsblk -lno name,type,size | grep sda |
> while read name desc; do
> echo $name $desc
> done) #results in output like below without quotes
#command 2: quoted variable
$ dialog --menu "Choose one:" 0 0 0 $(lsblk -lno name,type,size | grep sda |
> while read name desc; do
> echo $name "$desc"
> done) #results in output like below without quotes
#command 3: escape quoted variable
$ dialog --menu "Choose one:" 0 0 0 $(lsblk -lno name,type,size | grep sda |
> while read name desc; do
> echo $name \"$desc\"
> done) #results in output below
I don't understand why word splitting is occurring in the quoted variable. Can anyone explain and/or suggest a workaround? I've tried writing the output of lsblk to a file and reading the file with the same results. The desired output is:
EDIT: I looked at command quoting as a possible solution, but it results in the lsblk command output being passed to dialog as one argument when two are required.
Thanks.
|
You need to do this in 2 parts:
# 1. read the output of lsblk, 2 words per line, into an array
parts=()
while read -r disk data; do
parts+=("$disk" "$data")
done < <(lsblk -lno name,type,size | grep sda)
# 2. send the elements of the array to the dialog command
dialog --menu "Choose one:" 0 0 0 "${parts[@]}"
The read command will take the first whitespace-separated word into the disk variable, and then the rest of the line into data. It's very important to quote all the variables to avoid word splitting.
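The loop itself can be exercised without dialog or lsblk by feeding it sample lines (bash, since it uses arrays):

```shell
# Self-contained demo of the array-building loop, using sample
# lsblk-like lines instead of running lsblk:
parts=()
while read -r disk data; do
  parts+=("$disk" "$data")
done <<'EOF'
sda  disk 931.5G
sda1 part 500M
EOF

# Each element stays intact, spaces and all:
printf '<%s>' "${parts[@]}"; echo
# <sda><disk 931.5G><sda1><part 500M>
```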
| Wordsplitting occurring in quoted variable |
1,533,849,977,000 |
I have the following command to clear the last entry of my bash history (terminal/command-line history). I'm on Ubuntu 14.04 Trusty Tahr.
sed -i '$d' ~/.bash_history
But I want to keep the last 1, 2, ..., n entries and delete the rest. How can I achieve that?
Can be with sed/history/awk or any other command, no problem as far as the requirements are met.
|
If you want to keep the last N lines, use tail (for example, the last 20 lines):
tail -n 20 "$HISTFILE" > ff && mv ff "$HISTFILE"
I'm using the HISTFILE variable since this will always point to your history file, even if you've changed its name.
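If you want to try the tail-and-replace pattern safely first, run it on a scratch file rather than your real history:

```shell
# Throwaway demo on a scratch file (not your real history):
f=$(mktemp)
printf 'cmd1\ncmd2\ncmd3\ncmd4\ncmd5\n' > "$f"
tail -n 2 "$f" > "$f.new" && mv "$f.new" "$f"
cat "$f"
# cmd4
# cmd5
rm -f "$f"
```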
| Clear bash history except last n lines |