date int64 1,220B 1,719B | question_description stringlengths 28 29.9k | accepted_answer stringlengths 12 26.4k | question_title stringlengths 14 159 |
|---|---|---|---|
1,383,332,844,000 |
Can a command line utility save sub-strings conditionally in different files? I have a file (file.txt) with several lines like the following.
1/1_ABCD4.txt:20020711
1/1_ABCD10.txt:20020731
2/2_ABCD2.txt:20071103
2/2_ABCD5.txt:20071107
3/3_ABCD1.txt:20090225
3/3_ABCD3.txt:20090230
My goal is to save 20020711 together with 20020731 in file 1, 20071103 with 20071107 in file 2, and 20090225 with 20090230 in file 3.
I could extract the desired sub-strings after : with the following command, but would lose the reference digit by doing so:
$ grep -oP 'txt\:\K[A-Z0-9-]+' 'path/to/file.txt'
20020711
20020731
20071103
20071107
20090225
20090230
Is it possible to build three separate files, using the first digit before / as the target reference, from the command line? The destination can be the same directory as the original text file.
File:
20020711
20020731
File:
20071103
20071107
File:
20090225
20090230
Thank you.
|
With awk:
awk -F'[:/]' '{print $NF > $1}' file
We split the row using both / and : as separators. The last field ($NF) is what to print, and the first field ($1) is the output filename.
After running for your test input file:
$ head 1 2 3
==> 1 <==
20020711
20020731
==> 2 <==
20071103
20071107
==> 3 <==
20090225
20090230
Also, depending on your data, it is a good idea to add a condition before this action, so awk does not print to a file with an arbitrary name: if the input contains lines with a different structure, writing blindly to $1 could be dangerous.
A simple example, if we want to print only when the first field (the filename) has only digits:
awk -F'[:/]' '$1 ~ /^[0-9]+$/ {print $NF > $1}' file
| Command line - save sub-strings conditionally |
1,383,332,844,000 |
I've recently switched to btrfs for better compatibility with Windows 10 (for which a third-party driver is available). The btrfs partition (on a SATA SSD) is mounted only at /home, and was formatted and mounted as a system partition when my operating system (Pop!_OS 20.04) was installed.
Since then I frequently see my Chromium browser freeze for a few seconds; sometimes the OS even perceives it as an unresponsive process and prompts me to kill it.
I wonder if this is caused by the btrfs partition not being mounted with optimal options (such as -o ssd and autodefrag). But I don't want to risk remounting it and losing access to my home directory.
Is there a way to list all the mount options currently in effect, using the btrfs command-line tool?
|
The modern and arguably much nicer way is findmnt:
$ findmnt /
# TARGET SOURCE FSTYPE OPTIONS
# / /dev/mapper/cryptroot btrfs rw,relatime,compress-force=zstd:3,ssd,space_cache,autodefrag,subvolid=5,subvol=/
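If findmnt is not available, the same information can be read straight from /proc/self/mounts, which lists every mount with the options actually in effect (findmnt is essentially a pretty-printer for it). A sketch, assuming a Linux system that may or may not have btrfs mounted:

```shell
# Show the options in effect for any btrfs mounts (or say so if there are none)
grep btrfs /proc/self/mounts || echo "no btrfs mounts"
```

With findmnt itself, `findmnt -t btrfs` similarly restricts the output to btrfs filesystems.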
| Show mounted btrfs mounting options? |
1,383,332,844,000 |
cat /dev/urandom generates a random sequence of all possible "values".
cat /dev/urandom | padsp tee /dev/audio > /dev/null directs these "values" to your audio device, turning them into "random noise" or "random tones" (see: Generating random noise for fun in /dev/snd/)
But how can I do the same thing, except that instead of random noise/tones I pick a single value out of all possible values and cat that to the audio device indefinitely (creating a sequence of the same value rather than random values)?
This should produce a single consistent tone.
Interface
You can manually experiment with different values, but one imaginary "interface" to make it easier to pick/hit the value you want could be:
Frequency (Hz) e.g. 440
Amplitude (0 - 1) e.g. 0.8
I'd rather not use an audio file, e.g. file.wav, file.mp3, file.ogg, etc. Just a bash script and defaultish cli applications (e.g. cat, padsp, etc).
|
You can play around with anything that can do maths (sin especially) and write a number as a character to stdout. For example:
awk --characters-as-bytes 'BEGIN { freq=2200; amp=0.3; for (i=0; i>=0; i++) { printf "%c", 127+ amp*(127.0*sin(2*3.14159265/44100*i*freq)); } }' | padsp tee /dev/audio > /dev/null
Depending on how you set freq it sounds more like a siren... perhaps that's something to play with, depending on your use case.
The amplitude is adjusted with amp, max 1.0.
Please note that I am using GNU awk, therefore --characters-as-bytes works. You do not want characters to be UTF-8 encoded when written to stdout!
Also, depending on your system you may want to replace 44100 by 48000 or a different number if the default sample rate differs.
| How to cat a specific tone to /dev/audio? |
1,383,332,844,000 |
I want to redirect the output of this command firefox &. I know that adding & means that we will run the command in background and when we use it we receive [number of process in background] [PID]. This is what I have done:
firefox & > firefoxFile
But when I open firefoxFile, I find it empty; the [number of process in background] [PID] line is not there.
|
If your shell is bash[1], you can try:
exec 3>&2 2>firefoxFile; firefox & exec 2>&3-
It's your shell (eg. bash) which prints that [jobnum] pid background job notification to stderr, not firefox. This kludge temporarily redirects the stderr to the firefoxFile file, capturing into it that notification and whatever firefox will write to stderr during its lifetime.
It will NOT capture the [jobnum] Done firefox bash will print when the background job has terminated.
Your firefox & > file will be parsed as two commands 1. firefox & (which will run firefox in the background) and 2. > file (which will truncate file without writing anything to it). That's most certainly not what you intended.
[1] you can read here why this trick doesn't work in other shells; while it's possible to redirect the stderr in zsh, zsh does not write its prompts and job notifications to stderr, but to another fd pointing to the current terminal, opened especially for this purpose.
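If what you actually need in the file is just the PID (rather than the shell's full notification text), the special parameter $! expands to the PID of the most recent background job, and you can write it yourself. A sketch, with sleep standing in for firefox:

```shell
sleep 2 &                  # start the background job (firefox & in your case)
echo "$!" > firefoxFile    # $! is the PID of the most recent background job
wait                       # only for this demo; not needed interactively
```

This does not capture the [jobnum] part, but for scripting purposes the PID is usually what matters.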
| In shell, when I run process in background, how can I get the "[job number] [PID]", redirected to a file? |
1,383,332,844,000 |
Given a pipe of the form C1 | C2, if C2 takes more than one positional argument, is it possible to choose where the output of C1 is going?
Consider the following example.
$ cat myscript
#!/bin/bash
cat $1
cat $2
$ cat world.txt
World
$ echo "Hello" | ./myscript world.txt
World
Hello
I want the final output to be in correct order (Hello World) by altering only the right side of the pipeline.
|
You might want to try this:
echo "Hello" | ./myscript /dev/stdin world.txt
So that standard input of ./myscript feeds into the first "cat"
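A self-contained way to reproduce the example and check the order, on a system where /dev/stdin is available:

```shell
# Recreate the question's script and data, then pipe through /dev/stdin
cat > myscript <<'EOF'
#!/bin/sh
cat "$1"
cat "$2"
EOF
chmod +x myscript
echo World > world.txt

echo Hello | ./myscript /dev/stdin world.txt   # prints Hello, then World
```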
| Choosing the output of a pipe |
1,383,332,844,000 |
I want to sync 2 directories (/src & /dst) to mirror all the files in both of them.
Here are the steps:
sudo rsync -vaP --stats /src /dst -> completed without errors
sudo rsync -vaP --stats /dst /src -> completed without errors
diff -rq /src /dst -> doesn't show any diffs.
du -s /src && du -s /dst shows different sizes (10% diff).
How is this possible? I'm completely stuck on this dilemma.
|
Sparse files may be expanded on copy when the -S flag is not used. (Will make the destination take more space)
Hard links within the tree may be expanded to separate files on copy when the -H flag is not used. (Will make the destination take more space)
Filesystems may have different allocation sizes. A one-byte file may take up 512 bytes of disk allocation on one filesystem and may take up 4096 bytes (or even more) on another. If your tree has a lot of small files, this will make a large difference. (Destination may take more or less space depending on the particulars)
Directories may be much larger than necessary to hold the current contents on some filesystems. When the contents are copied, the directory will be smaller on the destination. Not normally a big deal, but some pathological directories can be enormous. (Will make the destination take less space)
It's also possible for filesystems to have different compression/deduplication/redundancy settings leading to different storage requirements for the data. But that's less common, and even when present the differences aren't always visible via du.
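One way to see whether allocation (rather than content) explains the difference is to compare apparent sizes; a small sketch using GNU du's --apparent-size flag on throwaway directories:

```shell
# Two trees with identical contents can still allocate differently;
# --apparent-size sums file contents instead of allocated blocks (GNU du)
mkdir -p src dst
printf 'data' > src/f
cp src/f dst/f

du -s src dst                   # allocated size: may differ between filesystems
du -s --apparent-size src dst   # content size: matches when the data matches
```

If the apparent sizes of your real /src and /dst match while plain du disagrees, allocation granularity or sparse/hard-link expansion is the likely culprit.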
| rsync completed in both directions, but size of directories is different. How it's possible? |
1,383,332,844,000 |
Let's say we have a file having following text:
hello hel-
lo world wor-
ld test test he-
lo words words
If we just use the space as the delimiter, we would have
hello: 1
world: 1
wor: 1
ld: 1
he: 1
hel-: 1
test: 2
lo: 2
words: 2
In other words, how do we process the word separated by 2 lines using a hyphen and treat it as one word?
|
This should do it:
sed ':1;/-$/{N;b1};s/-\n//g;y/ /\n/' file | sort | uniq -c
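To unpack the sed script: `:1;/-$/{N;b1}` keeps appending the next line (N) while the pattern space ends in `-`; `s/-\n//g` then joins the hyphenated halves; and `y/ /\n/` transliterates spaces to newlines so each word sits on its own line for `sort | uniq -c`. A quick check with the sample text (assuming GNU sed):

```shell
printf 'hello hel-\nlo world wor-\nld test test he-\nlo words words\n' |
  sed ':1;/-$/{N;b1};s/-\n//g;y/ /\n/' |
  sort | uniq -c
```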
| Find N Most Frequent Words in a File and How to Handle Hyphen? |
1,383,332,844,000 |
The school here wants to teach basic Linux and Unix things like terminal and CLI.
The problem is, installing things is not allowed. So, dual boot is out of the question. Just running windows 8.
Next, the systems aren't powerful enough for any VM. Running ancient systems on 4GB RAM. Currently, the school is using Cygwin. But we can't properly use commands like chmod and the like.
I am thinking of
Using git bash terminal. Don't know if permission commands will work properly on Windows 8 or not.
Using live Ubuntu on USB. The system will be read-only, so mkdir and chmod are out. Or is there a way to do this? Please suggest.
Using Slackware or Puppy OS.
As suggested in comments, running a distro online.
As a student what can I suggest to the teachers?
Option 3 seems the most viable choice. Any suggestion is welcome.
EDIT:
persistent storage is not a requirement. On the contrary, its absence is welcome, since it cleans up after all the experiments.
Thank you
|
Any distro with a live image should work.
A word of caution, though: one COULD use a live Linux to mount the Windows system disk and then cat /dev/zero > /dev/windowsdisk, and thus destroy the Windows installation.
A more secure setup would be to boot the PCs from the network and start an already preconfigured system. That would offer teachers more control over what is happening.
Debian-Edu might be helpful.
And you probably should have a look at https://serverfault.com/questions/27454/tips-on-setting-up-a-linux-classroom-environment
| Running Linux on Windows at school without actually installing linux |
1,383,332,844,000 |
The Arch Linux git package installs git-gui under /usr/lib/git-core/.
This means git-gui cannot be launched directly from the terminal without specifying the full path:
$ git-gui
bash: git-gui: command not found
$ which git-gui
which: no git-gui in (/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl)
I'm in doubt as to what would be the proper way to solve this.
Add /usr/lib/git-core/ to system-wide $PATH?
Create symlink to /usr/lib/git-core/git-gui under /usr/local/bin?
Report a bug in the Arch Linux package? Or upstream?
Do nothing - this is not a bug?
Thank you.
|
This is expected behaviour. All git sub-commands are installed there — you will also find git-commit there, though probably as a link to the main binary for efficiency these days — and the main git command knows where to find them.
Any executable git-X there becomes available as git X automatically, and that's the expected way to access them rather than by path or the hyphenated name. git gui is the normal way to access the git-gui executable, and is also what man git-gui will suggest.
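Incidentally, git can report that helper directory itself; assuming git is installed:

```shell
git --exec-path   # on Arch this is /usr/lib/git-core
```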
| How to properly solve "git-gui: command not found" on Arch Linux? |
1,383,332,844,000 |
I am wondering about the way the redirection <<< works in bash.
I understand that it feeds the string after it to the command before it, as if the content were in a virtual file. Examples:
$ cut -d. -f1 <<< A.B
A
$ cut -d. -f1 <<< 'A.B
> C.D'
A
C
But I don't understand what it does when used multiple times. Example
$ cut -d. -f1 <<< A.B <<< C.D
C
I would have expected the following output
A
C
Why does the shell only take into account the last redirection?
How could I add a virtual line to a file ? I would like to do something
like the following example, such that command takes the virtual line
and then processes the file my_file.
command <<< "virtual line" my_file
Note : I'm using bash version 4.4.12(1)-release (x86_64-pc-linux-gnu).
|
<<< redirects the stdin. If you redirect stdin and then redirect it again, the first redirection gets lost.
If the command has a way of saying "process stdin", which e.g. for cat is a dash, you can prepend a line in this way:
cat - input_file <<< 'virtual line'
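If the command has no `-` convention for reading stdin alongside a file, another option is to build the combined stream with a group command and pipe it in; a sketch, with `cat` standing in for the real command:

```shell
printf 'line1\nline2\n' > my_file

# The group's combined output becomes the pipeline's stdin
{ printf 'virtual line\n'; cat my_file; } | cat
```

This prints the virtual line first, followed by the file's contents.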
| Use of multiple <<< |
1,383,332,844,000 |
Extract of lspci -tvv on a server with GPUs:
-+-[0000:b2]-+-00.0-[b3-b8]----00.0-[b4-b8]----08.0-[b5-b8]----00.0-[b6-b8]--+-05.0-[b7]----00.0 NVIDIA Corporation GP104GL
\-0d.0-[b8]----00.0 NVIDIA Corporation GP104GL
What is the format relating to / how can I understand the output?
From what I've found online the tree output is usually:
[domain number:bus number]-+-device number.function number Device Description
|
Buses can be connected to other buses, cascading; in your case, you have a root device (no longer visible in the diagram), to which a succession of bridges (with device numbers 00) connect successive buses (b2 to b8), to which the two GPUs are connected.
You can get more information on the devices by dropping the -t; you’ll then see the bridges, and be able to map the connections using the primary, secondary and subordinate bus identifiers.
| How to interpret lspci -tvv output |
1,383,332,844,000 |
I run both regular Chrome and Chrome Canary (from now on, Canary). Sometimes I want to kill all subprocesses, Google Chrome Helper, of Canary. The problem is that they have the same name as the subprocesses of regular Chrome so killall "Google Chrome Helper" would kill both Canary's and Chromes' subprocesses.
How can I, with a "oneliner" or similar, kill all subprocesses of Canary without killing the subprocesses of Chrome that have the same name?
Mac OS X
|
Try using the -P option of pkill:
-P ppid Restrict matches to processes with a parent process ID in the
comma-separated list ppid.
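The mechanics can be demonstrated portably with sleep processes standing in for the helper processes (the macOS process names further below are assumptions; check the real ones with ps):

```shell
# A parent with two children, mimicking Canary and its helper processes
sh -c 'sleep 30 & sleep 30 & wait' &
parent=$!
sleep 1                     # give the children time to start
pkill -P "$parent" sleep    # kills only the children of that parent
```

Against Canary itself, the one-liner might then look like `pkill -P "$(pgrep -x 'Google Chrome Canary')" 'Google Chrome Helper'` (both process names assumed).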
| How can I kill all child processes of a certain process from the command line? |
1,383,332,844,000 |
The sequence of commands
mkdir -p BASE/a/b/c && cp -a --target-directory=BASE/a/b/c /a/b/c/d
creates a subdirectory a/b/c under BASE, and then copies the directory tree at /a/b/c/d to BASE/a/b/c.
One problem with it is that it entails a lot of repetition, which invites errors.
I can roll a shell function that encapsulates this operation; for example, here's a sketch of it without any error checking/handling:
copytwig () {
local source=$1
local base=$2
local target=$base/$( dirname $source )
mkdir -p $target && cp -a --target-directory=$target $source
}
...but I was wondering if this could already be done with a "standard" Unix utility (where I'm using "standard" as shorthand for "likely to be found in a Unix system, or at least likely to be found in a Linux system").
|
With pax (a mandatory POSIX utility, though not installed by default in some GNU/Linux distributions yet):
pax -s':^:BASE/:' -pe -rw /a/b/c/d .
(note that neither --target-directory nor -a are standard cp options. Those are GNU extensions).
Note that with -pe (similar to GNU's -a), pax will try and copy the metadata (times, owner, permissions...) of the directory components as well (while BASE's metadata will be as if created with mkdir BASE).
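If pax is not at hand, GNU cp's --parents flag (also nonstandard) reproduces the source path under the destination in a single command; a sketch with relative paths on a throwaway tree:

```shell
# Build a sample tree, then copy it under BASE in one command
mkdir -p a/b/c/d BASE
touch a/b/c/d/file

cp -a --parents a/b/c/d BASE   # creates BASE/a/b/c/d, contents included
```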
| "standard" single-command alternative for `mkdir -p BASE/a/b/c && cp -a -t BASE/a/b/c /a/b/c/d`? |
1,383,332,844,000 |
I copied a file with a backslash in the name to my Ubuntu VM, and I cannot figure out how to rename it, e.g. mv .\Dockerfile Dockerfile -- the command line does not like that syntax for the filename, and I'm not sure how to escape the .\Dockerfile
|
You need to escape the backslash:
mv \\Dockerfile Dockerfile
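Single quotes work too, and sidestep backslash-counting entirely. Assuming the copied file is actually named .\Dockerfile (a literal dot and backslash before the name):

```shell
touch '.\Dockerfile'             # recreate the awkward name for the demo
mv -- '.\Dockerfile' Dockerfile  # single quotes make the backslash literal
```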
| mv - how to escape backslash in file name? |
1,383,332,844,000 |
Background
I'm running a larger one-line command. It is unexpectedly outputting (twice per iteration) the following:
__bp_preexec_invoke_exec "$_"
Here is the pared down command (removed other activity in loop):
for i in `seq 1 3`; do sleep .1 ; done
Note: after I have played with this a few times, it inexplicably stops printing the unexpected output.
What I've tried
If I remove sleep .5 I do not get the unexpected output
If I simply run sleep .5 the prompt returns but there is no output
I have googled around for __bp_preexec_invoke_exec, but I am unable to determine how it applies to what I'm doing
Question
What is __bp_preexec_invoke_exec "$_"?
How can I run this without the unwanted output?
More info on solution thanks to @gina2x:
Here is the output of declare -f | grep preexec
preexec_functions+=(preexec);
__bp_preexec_interactive_mode="on"
__bp_preexec_invoke_exec ()
if [[ -z "$__bp_preexec_interactive_mode" ]]; then
__bp_preexec_interactive_mode="";
__bp_preexec_interactive_mode="";
local preexec_function;
local preexec_ret_value=0;
for preexec_function in "${preexec_functions[@]}";
if type -t "$preexec_function" > /dev/null; then
$preexec_function "$this_command";
preexec_ret_value="$?";
__bp_set_ret_value "$preexec_ret_value" "$__bp_last_argument_prev_command"
if [[ -z "${iterm2_ran_preexec:-}" ]]; then
__iterm2_preexec "";
iterm2_ran_preexec="";
__iterm2_preexec ()
iterm2_ran_preexec="yes";
I see in there a lot of "iterm2" information (I'm on a Mac and using iTerm2.app).
In fact, when I try to reproduce using Terminal.app, I am unable to reproduce the unexpected output.
Excellent sleuthing with declare -f - thank you!
|
Seems like __bp_preexec_invoke_exec is part of https://github.com/rcaloras/bash-preexec/blob/master/bash-preexec.sh, and it seems there is a bug in that script.
That project adds 'preexec' functionality to bash by installing a DEBUG trap. I did not test it, but I can imagine that it might not work properly in the way you are seeing. Check whether it is installed in your environment; you can do so with declare -f. With newer bash you can use PS0 instead of that project, which probably achieves the same thing without the problems you see.
| Bash `sleep` outputs __bp_preexec_invoke_exec |
1,383,332,844,000 |
When I run a command like tail ~/SOMEFILE I get, for example:
testenv@vps_1:~# tail ~/SOMEFILE
This is the content of SOMEFILE.
But what if I want a carriage return between testenv@vps_1:~# and the output This is the content of SOMEFILE.
So the final result would be like this:
testenv@vps_1:~# tail ~/SOMEFILE

This is the content of SOMEFILE.
Or this:
testenv@vps_1:~# tail ~/SOMEFILE


This is the content of SOMEFILE.
Or this:
testenv@vps_1:~# tail ~/SOMEFILE



This is the content of SOMEFILE.
Note: The first example shows one line of spacing between the two parts, the second example shows two lines, and the third three lines.
Is there a way to make sure the tail output (or any other output) for that matter would be spaced down as I've shown in the examples, just for this particular command (not for all commands of course), in Bash?
|
The simplest option would be printing manually those extra newlines, something like:
printf '\n\n\n'; tail ~/SOMEFILE
But if you want to:
Do this just for tail
Not write extra commands with every tail invocation
Have a simple yet full control over the quantity of newlines
then I recommend you to add a function to your aliases/rc file.
For example:
# Bash version
# In Bash we can override commands with functions
# thanks to the `command` builtin
tail() {
# `local` limit the scope of variables,
# so we don't accidentally override global variables (if any).
local i lines
# `lines` gets the value of the first positional parameter.
lines="$1"
# A C-like iterator to print newlines.
for ((i=1; i<=lines; i++)); do
printf '\n'
done
# - `command` is a bash builtin, we can use it to run a command.
# whose name is the same as our function, so we don't trigger
# a fork bomb: <https://en.wikipedia.org/wiki/Fork_bomb>
#
# - "${@:2}" is to get the rest of the positional parameters.
# If you want, you can rewrite this as:
#
# # `shift` literally shifts the positional parameters
# shift
# command "${@}"
#
# to avoid using "${@:2}"
command tail "${@:2}"
}
#===============================================================================
# POSIX version
# To keep this portable across minimal shells, we avoid overriding
# `tail` itself and define the function under a new name instead.
new_tail() {
# `lines` gets the value of the first positional parameter.
lines="$1"
# `i=1`, `[ "$i" -le "$lines" ]` and `i=$((i + 1))` are the POSIX-compliant
# equivalents to our C-like iterator in Bash
i=1
while [ "$i" -le "$lines" ]; do
printf '\n'
i=$((i + 1))
done
# Basically the same as Bash version
shift
tail "${@}"
}
So you can call it as:
tail 3 ~/SOMEFILE
| Display console output 1 or more lines below |
1,383,332,844,000 |
I'm using a grep regex search in a bash script, that does contain quite a lot of search terms.
some commands \
| grep -E 'search1|search2|search3|search4|search5|search6|search7|search8|search9|search10'
Is it possible to break this command to make it more readable?
so it would look somehow like this:
some commands \
| grep -E 'search1|search2|search3|search4|search5|\
search6|search7|search8|search9|search10'
|
Refer to the -e option:
some commands \
| grep -E -e 'search1|search2|search3|search4|search5' \
-e 'search6|search7|search8|search9|search10' \
-e ...\
-e ...
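Another readable option is to build the pattern up in a variable first (POSIX-safe string appending), then pass it once:

```shell
pattern='search1|search2|search3|search4|search5'
pattern="$pattern|search6|search7|search8|search9|search10"

printf 'xx search7 xx\nnothing here\n' | grep -E "$pattern"
```

Only the line containing one of the alternatives is printed.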
| How to break a grep regex search pattern |
1,383,332,844,000 |
I have a directory of files with filenames of the form <num1>v<num2>.txt. I'd like to find all files for which <num1> is a duplicate. When duplicates are found, we should delete the ones with smaller <num2>.
Is this possible? I could easily write a python script to handle this, but thought it might be a nice application of built-in zsh features.
Example
In the following list of files, the first three have duplicate <num1> parts. Likewise, the fourth and fifth are duplicates.
012345v1.txt
012345v2.txt
012345v3.txt
3333v4.txt
3333v7.txt
11111v11.txt
I would like to end up with directory containing
012345v3.txt
3333v7.txt
11111v11.txt
|
You could do something like:
files=(<->v<->.txt(n))
typeset -A h
for f ($files) h[${f%%v*}]=$f
keep=($h)
echo rm ${files:|keep}
(remove echo if happy)
<->: any sequence of digits (<x-y> glob operator with no bound specified)
(n): numeric sort
${f%%v*}: standard/ksh greedy pattern stripping from the end.
${files:|keep}: array subtraction.
| zsh globbing - Find files with duplicate filename strings |
1,383,332,844,000 |
I'm trying to use setfattr, but always get Operation not supported
In my home directory, I'm doing the following:
touch delete.me
setfattr -n naomi -v washere delete.me
This returns setfattr: delete.me: Operation not supported.
My home directory is ext4 and delete.me definitely exists. I'm on Fedora 25. Any idea why this is not working?
|
You can't just use any name. You need to select a namespace. For arbitrary attribute name, you'd need to use the user namespace:
setfattr -n user.naomi -v washere delete.me
(see man 5 attr for details).
For ext4, the ext_attr feature must be enabled (on by default). Check with:
sudo debugfs -R stats /dev/block/device | grep -w ext_attr
And to be able to use attributes in the user namespace, the filesystem should be mounted with the user_xattr option enabled (also on by default). Check with:
grep user_xattr /proc/self/mountinfo
If it returns nothing, also check the default mount options in the debugfs output above.
| Cannot set file attribute |
1,383,332,844,000 |
I'm downloading a video as follows
$ youtube-dl url_to_video
The download of the file is very slow
33.1% of 301.31MiB at 19.75KiB/s ETA 02:54:03
It was usually faster.
Could you identify with command-line tools where the bottleneck is (the hop where the speed drops sharply)? The command-line tool should be able to display the slow-down in speed at hop X in contrast to hop X+1.
|
If you have the tools timeout, traceroute and bing installed this script may help.
What this does is to iterate down a traceroute listing, and compare the packet speed to the "current" host with the packet speed of the previous host. This difference (if any) is then reported to the user.
It requires a target hostname. Since you're using youtube-dl you need to get that to tell you the hostname of the server delivering the video. Here is an example of usage to derive the hostname:
youtube-dl --get-url --simulate 'https://www.youtube.com/watch?v=OQZlqeUBbck' 2>/dev/null |
cut -d/ -f3
For me, that gets me the hostname r7---sn-aiglln76.googlevideo.com. With that you can run the script (below). It takes a little while to run, and for the first minute or so you may get no output at all.
#!/bin/bash
#
target="$1" # Hostname
test -z "$target" && read -p "Target destination: " target
phop=0 first=y
timeout 90 traceroute "$target" |
awk '$1+0' |
while read hop rhost rip junk
do
test "$rhost" == '*' && continue
rip="${rip//[()]/}"
# Is the host reachable?
ping -q -c1 -w4 "$rip" >/dev/null 2>&1 || continue
if test -n "$rhost" -a -n "$phost"
then
test -n "$first" && { printf "Hops\tRoute\n"; first=; }
# Test the link speed between these two hosts
bing=$(
bing -c1 -e20 "$pip" "$rip" 2>/dev/null |
tail -1 |
awk '!/zero/ {print $2}'
)
# Report any useful result
printf "%2d-%2d\t%s (%s) to %s (%s): %s\n" "$phop" "$rhop" "$phost" "$pip" "$rhost" "$rip" "${bing:-no measured difference}"
fi
# Save the current host for the next hop comparison
phop="$rhop" phost="$rhost" pip="$rip"
done
Some output from a test run in the UK to a remote office:
Hops Route
1- 4 10.20.1.254 (10.20.1.254) to aaa.obscured (): no measured difference
4- 5 be200.asr01.thn.as20860.net (62.128.207.218) to 195.66.227.42 (195.66.227.42): no measured difference
5- 6 195.66.227.42 (195.66.227.42) to core3-hu0-1-0-5.faraday.ukcore.bt.net (62.172.103.132): no measured difference
6- 7 core3-hu0-1-0-5.faraday.ukcore.bt.net (62.172.103.132) to 195.99.127.60 (195.99.127.60): no measured difference
7- 8 195.99.127.60 (195.99.127.60) to acc1-10gige-0-2-0-0.bm.21cn-ipp.bt.net (109.159.248.25): 512.000Mbps
8- 9 acc1-10gige-0-2-0-0.bm.21cn-ipp.bt.net (109.159.248.25) to 109.159.248.99 (109.159.248.99): no measured difference
9-14 109.159.248.99 (109.159.248.99) to bbb.obscured (): 717.589Kbps
From this is can be seen that my traffic appears to have a big slow down between 9 and 14, where we're dropped down to a typical ADSL upstream.
I should point out that bing cannot measure a speed difference between two remote points if their connection speed exceeds your available connection to those points. My connection is 512Mbps so I can't measure most of the carrier links.
| Identify the slow hop when downloading video |
1,383,332,844,000 |
I am trying to use an awk command to find the line(s) in which the third column is not a digit/date. Suppose there is a comma (",") field-separated file with three columns: code, measure, and a dd/mm/yyyy date:
97xx574,26.7,12/30/1997,
97xy575,18,12/30/1997,
code,meas,EXAMDATE,
B529ui,28.2,12/30/1997,
B530sx,26.4,12/30/1997,
J487sxv,21.5,12/30/1997,
I tried:
awk -F "," '/$3[^0-9].*/ {print $0}' <filename>
... but apparently it is not correct!
|
How about this. Where the 3rd field does not consist of 0-9 or /, print the line (which is the default action: no need for a print $0).
$3 = third field
!~ = where does not (!) match regular expression
/ = mark start of regular expression
^ = match start of field
[0-9/]+ = match any of the 0123456789/ characters at least once
$ = match end of field
/ = mark end of regular expression
So code, with output:
awk -F, '$3!~/^[0-9/]+$/' filename
code,meas,EXAMDATE,
Introducing a bit more checking, so has to consist of nn/nn/nnnn, try this.
awk -F, '$3!~/^[0-9][0-9]\/[0-9][0-9]\/[0-9][0-9][0-9][0-9]$/' filename
code,meas,EXAMDATE,
Could even use grep if you wanted.
grep -vE '^.*,.*,[0-9/]+,$' filename
code,meas,EXAMDATE,
| How I can find line(s) which the third columns is not digit/date? |
1,383,332,844,000 |
I have folders setup like this:
/path/to/directory/SLUG_1/SLUG_1 - SLUG_2 - SLUG_3
SLUG_2 is a year, and it may have a letter after the year, like "1994" or "2003a".
I would like to rename those files to:
/path/to/directory/SLUG_1/SLUG_2 - SLUG_3
I'm getting pretty close with this command:
find $root -mindepth 2 -maxdepth 2 -type d | sed "s#\(\(/path/to/directory/[^/]*/\).* - \([0-9]\{4\}[a-bA-B]\? - .*\)\)#mv "\1" "\2\3"#"
This prints:
mv "/path/to/directory/SLUG_1/SLUG_1 - SLUG_2 - SLUG_3" "/path/to/directory/SLUG_1/SLUG_2 - SLUG_3"
Which is exactly the command I want to execute. But I can't execute it.
I tried assigning the output to a variable and executing it by calling the variable. That didn't work. I tried a few variations on that idea, and I got errors.
It feels like I'm missing some tool here. Some iterating tool that makes this job easy. Any suggestions?
|
Another answer, using GNU find to handle the renaming. This method is robust regardless of what characters may be in the filename.
If I understand your use case rightly, you want to rename directories that start with the full name of their parent directory. In other words, if your directory is named like so:
/some/path/abcdefghi/abcdefghi - something - else/
You want to rename it like so:
/some/path/abcdefghi/something - else/
Since you specify GNU as a tag on this question, you can use the GNU extensions to the find command and handle this like so:
find . -mindepth 2 -maxdepth 2 -type d -regextype posix-egrep -regex '.*/([^/]+)/\1[^/]+' -exec sh -c 'new="$(sed -r "s:/([^/]+)/\\1 ?-? ?([^/]+)\$:/\\1/\\2:" <<<$1)"; mv "$1" "$new"' find-sh {} \;
Test results:
[vagrant@localhost test]$ mkdir -p SLUG_1/SLUG_{1\ -\ SLUG_{2..4}\ -\ SLUG_5,7\ -\ something}
[vagrant@localhost test]$ find . -type d
.
./SLUG_1
./SLUG_1/SLUG_1 - SLUG_2 - SLUG_5
./SLUG_1/SLUG_1 - SLUG_4 - SLUG_5
./SLUG_1/SLUG_1 - SLUG_3 - SLUG_5
./SLUG_1/SLUG_7 - something
[vagrant@localhost test]$ find . -mindepth 2 -maxdepth 2 -type d -regextype posix-egrep -regex '.*/([^/]+)/\1[^/]+' -exec sh -c 'new="$(sed -r "s:/([^/]+)/\\1 ?-? ?([^/]+)\$:/\\1/\\2:" <<<$1)"; mv "$1" "$new"' find-sh {} \;
[vagrant@localhost test]$ find . -type d
.
./SLUG_1
./SLUG_1/SLUG_3 - SLUG_5
./SLUG_1/SLUG_4 - SLUG_5
./SLUG_1/SLUG_2 - SLUG_5
./SLUG_1/SLUG_7 - something
[vagrant@localhost test]$
| Batch rename folders with a single bash command |
1,383,332,844,000 |
I am facing a challenging issue in Linux where I need to print a number that is divisible by 4.
The following helps me print an even number, but not one divisible by 4:
echo 5 | awk -F, '$0%2{$0++}1'
Output:
6
I am not sure how to go about this, but perhaps I could use a while loop to increment the initial value until it is divisible by four, then store that value in a variable, which I can use.
For example:
val=5
5/4= 1.25 not divisible
val=val+1
val=6
6/4= 1.5 not divisible
val=val+1
val=7
7/4= 1.75
val=val+1
val=8
8/4=2 is divisible
Once the value is divisible by 4, echo that value (8 in this example) into an output variable; I need the value itself, not the quotient.
|
With awk:
echo 5 | awk '{$0=int($0/4+1)*4}1'
Explanation:
$0/4+1 the value is divided by 4 and the result incremented by 1.
int(n) this is then rounded down by awk's int().
n*4 now we only have to multiply that with 4 to get the next higher number divisible by 4.
{...}1 the 1 at the end will just print the value.
This will print 16 for the value 12.
If you want the value to stay when it is divisible by 4, then use this awk instead:
awk '{$0=int($0/4+.75)*4}1'
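A quick check of the second variant over a few values:

```shell
for n in 5 8 12 13; do
  printf '%s -> %s\n' "$n" "$(echo "$n" | awk '{$0=int($0/4+.75)*4}1')"
done
# 5 -> 8, 8 -> 8, 12 -> 12, 13 -> 16
```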
| How to print a number that is divisble by 4 working upwards (while loop) or awk |
1,383,332,844,000 |
I have a setup with a computer running ssh and a display where a user is logged in to a terminal. What I want to do is send commands as if I were typing at that local session's keyboard. I tried to echo to /dev/tty1, but it just shows what I typed instead of executing it. Which makes sense. The system only has bash, so no GUI or anything like that.
|
The TIOCSTI ioctl can inject characters into a terminal; alternatively, see uinput on Linux to generate keyboard (or mouse!) input.
ttywrite.c - sample C implementation
Term::TtyWrite - Perl implementation
$ sudo perl -MTerm::TtyWrite \
-e 'Term::TtyWrite->new("/dev/pts/2")->write("echo hi\n")'
| Send commands to another terminal |
1,383,332,844,000 |
I've noticed that some Linux configuration files (e.g. /etc/samba/smb.conf) expect you to enter the actual settings (key value pairs) in a particular "section" of the file such as [global].
I'm looking for a terminal tool/command which allows you to append lines to a specific section of a specific configuration file.
Example:
configadd FILE SECTION LINE
configadd /etc/samba/smb.conf '[global]' "my new line"
|
You can do this with sed directly, for example:
sed '/^\[global\]/a\my new line' /etc/samba/smb.conf
NOTE: This on its own is not a complete solution, because the line may already be present in the config, so first you should test whether the line is present.
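A self-contained sketch of that check-then-append approach, using a temporary file (this assumes GNU sed for -i and the one-line a text form; the section and line are taken from the question):

```shell
conf=$(mktemp)
printf '[global]\nworkgroup = WORKGROUP\n' > "$conf"

line='my new line'
# Append after [global] only if the exact line is not already in the file;
# a second run is therefore a no-op.
grep -qxF "$line" "$conf" || sed -i "/^\[global\]/a $line" "$conf"
grep -qxF "$line" "$conf" || sed -i "/^\[global\]/a $line" "$conf"

result=$(cat "$conf"); rm -f "$conf"
printf '%s\n' "$result"
```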
| Appending a line to a [section] of a config file |
1,383,332,844,000 |
I'd like to prefix my files (.dat) like this:
CLY_BIZ_COM_PERD.dat -> 20160622CLY_BIZ_COM_PERD.dat
I have tried the following:
key=`date "+%Y%m%d"`
for i in $(ls /Path/*.dat); do mv ${i} "${key}${i}" ;done
But this command suffixes my files instead of prefixing them.
How can I do this?
|
Two changes to your current script:
don't parse ls; instead rely on the shell's globbing
because the files are in a subdirectory, either cd there first and run the loop, or use basename and dirname to pull out the directory and filename portions of the file before adding the prefix.
(Note: I also changed your "/Path" to "./Path" as I didn't want to create a root-level /Path directory. The same principles apply, though.)
To set up some sample files:
mkdir Path && cd Path
touch CLY_BIZ_COM_PERD.dat jeff.dat a.dat c\ d.dat
cd ..
Here's a dry run:
for f in ./Path/*.dat
do
printf "mv '%s' '%s'\n" "$f" "$(dirname "$f")/${key}$(basename "$f")"
done
Output of the dry run:
mv './Path/a.dat' './Path/20160622a.dat'
mv './Path/c d.dat' './Path/20160622c d.dat'
mv './Path/CLY_BIZ_COM_PERD.dat' './Path/20160622CLY_BIZ_COM_PERD.dat'
mv './Path/jeff.dat' './Path/20160622jeff.dat'
Once you're content, do it for real:
for f in ./Path/*.dat
do
mv "$f" "$(dirname "$f")/${key}$(basename "$f")"
done
... and the result:
$ ls -1 Path
20160622a.dat
20160622c d.dat
20160622CLY_BIZ_COM_PERD.dat
20160622jeff.dat
| Rename file (Prefix) with full path? |
1,383,332,844,000 |
Say I have a "Quick&Dirty" Perl script in a GUI text editor, and I start the Perl interpreter in a terminal window which is running Bash. I can copy-paste the Perl script into the terminal and press Ctrl-D to execute it with Perl. Perl will interpret the script and execute it.
Sometimes there is a typo in the script, which makes Perl print a FATAL ERROR and exit, but the remaining copy-pasted text is given to the Bash shell, which tries to execute it; either that fails (the lucky case) or it executes and does something different (the unlucky case).
E.g.: start Perl at a Bash prompt and copy-paste a script of 5 lines: lines 1 & 2 are fine, but line 3 makes Perl exit abnormally, so lines 4 & 5 are executed by Bash.
I am using Perl and Bash only as examples; the problem can happen with many other interactive tools (e.g. Python, fdisk) and other shells (e.g. zsh, csh).
Is there any way to inform the shell that this text is a copy-paste input and can be ignored ?
Something like "If faster than user can normally type, then Ignore" ?
Or something like "If Process finished, then flush input buffer before reading next shell input" ?
[[ Some tools do not have this problem, Eg MySQL cli will never exit on improper input. ]]
|
This is not an answer, but maybe it's an acceptable work-around:
alias p='perl; echo hit control-d again; cat > /dev/null'
Then, if your perl script exits prematurely, you'll harmlessly paste the remainder to /dev/null; if the perl script succeeds, you'll see your friendly reminder and hit control-d to exit the cat catcher.
| Can the Bash shell "Ignore" Excess copy-paste text? |
1,383,332,844,000 |
Folks, I want to know if there is a command that just highlights some portions of the input text, rather than filtering it like grep does.
To give an example, suppose the following input text:
foo bar
gaz das
xar
grep "bar\|gaz" input would print the first two lines, highlighting bar and gaz, but would not display xar.
I'm aware that I could simply set a big constant to the -C argument, so it would "always" show the context, like: grep -C1000 "bar\|gaz" input, but I'm not sure if that is efficient, or if there is a better tool for that.
|
USE:
egrep --color 'pattern|$' file
or if you want using grep
grep --color -E 'pattern|$' file
"pattern|$" will match lines that have the pattern you're searching for AND lines that have an end -- that is, all of them. Because the end of a line isn't actually any characters, the colorized portion of the output will just be your pattern
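A quick sanity check that the |$ trick keeps every input line (non-matching lines are printed uncolored), here just counting them:

```shell
# All three lines from the example survive, including the non-matching "xar".
kept=$(printf 'foo bar\ngaz das\nxar\n' |
  grep --color=always -E 'bar|gaz|$' | wc -l)
echo "lines kept: $kept"
```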
| Pattern highlighting command [duplicate] |
1,450,025,459,000 |
Why can't I use if($l =~ $ARGV[0]) but I can use if($l =~ /$ARGV[0]/g)?
first case
$ perl script.pl '/^[\w]/g'
second case
$ perl script.pl '^[\w]'
|
Strings and regexes are different primitive types in perl, and all variables placed in the @ARGV array are simply strings given to the program by the kernel at startup; $ARGV[0] is a scalar string, and not a regex.
When you do if($l =~ $ARGV[0]) and $ARGV[0] is '/^[\w]/g', this is equivalent to if($l =~ '/^[\w]/g') instead of if($l =~ /^[\w]/g). In the former case the slashes are simply characters in a string, while in the latter they are part of the Perl syntax that delimits a regular expression.
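A small shell demonstration of the difference (the pattern and test string here are made up): when the argument string contains slashes, they are matched as literal characters.

```shell
# The string in $ARGV[0] is compiled as the pattern itself, so literal
# slashes in the argument become part of the pattern:
m1=$(perl -e 'print "yes" if "abc" =~ $ARGV[0]' '^a')     # pattern ^a: matches
m2=$(perl -e 'print "yes" if "abc" =~ $ARGV[0]' '/^a/')   # pattern /^a/: no match
echo "without slashes: $m1, with slashes: $m2"
```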
| Why can't I pass a regex in @ARGV on the command line? |
1,450,025,459,000 |
I have process which created multiple PID's. I want to kill all those PID's. I have tried
pkill <process_name>.
But the PIDs are not getting killed, as they were waiting for resources to be released.
I have managed to get PID list with
ps -ef | grep <process_name> | awk '{print $2}'
which gives the process ID list, but how can I kill all those listed PIDs?
Thank you.
|
You could pipe the output to xargs e.g.
ps -ef | grep <process_name> | awk '{print $2}' | xargs /bin/kill
But why doesn't your pkill command work?
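One common reason pkill <name> matches nothing is that pkill matches only the process name by default; pkill -f matches against the full command line instead. A disposable check using sleep (the anchored pattern keeps it from touching unrelated processes):

```shell
sleep 300 &
pid=$!
sleep 1                                  # give the job a moment to start
pkill -f '^sleep 300$' || kill "$pid"    # fall back to plain kill if pkill is unavailable
wait "$pid"
status=$?
echo "wait status: $status"   # >128 means the process died from a signal
```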
| How to kill line of PID? |
1,450,025,459,000 |
I am wondering if there is a way to read the full output of a command when it uses more than the screen. I am currently having to output the command into a file and then use nano to scroll through it.
E.g. $ ls -Al /etc/ only displays the end of the output and cuts off the rest.
|
You need to use less. less is a pager; it allows you to view a page at a time, e.g.
command | less
ls -Al /etc | less
The most common commands while in less are:
enter advance one line
space advance one page
q quit (or exit help)
h help
see man less for more info, like how to search.
| Scroll through command output without a temporary file |
1,450,025,459,000 |
I want to execute these two timeout commands on the same command but with a different time and signal for each. So
timeout --signal=SIGINT 5s command
timeout --signal=SIGKILL 10s command
How can I combine them into one line?
|
timeout --signal=SIGKILL 10s timeout --kill-after=5 --signal=SIGINT 5s command
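The escalation behaviour can be checked with a disposable sleep; this assumes GNU coreutils timeout, which exits with status 124 when it had to terminate the command:

```shell
# SIGINT after 2 seconds; escalate to SIGKILL 3 seconds later if still alive.
timeout --kill-after=3 --signal=INT 2 sleep 10
status=$?
echo "status: $status"   # 124: the command timed out and was signalled
```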
| timeout pipeline |
1,450,025,459,000 |
I really don't know what the advantages of running applications in the background are.
Something like application & on the command line.
Why exactly do we run applications in the background, and when should I decide to do so?
|
Generally, applications that take too long to execute and do not require user interaction are sent to the background so that we can continue our work in the terminal.
Jobs running in the background are treated the same as jobs running in the foreground, except that their STDOUT, STDIN and STDERR differ.
If you have a job that takes too long, like file compression or a backup, you can send it to the background.
You can list the jobs that are running in background using jobs command.
$ ./job1.sh &
[1] 9747
$ ./job2.sh &
[2] 9749
$ ./job3.sh &
[3] 9751
$ jobs
[1] Running ./job1.sh &
[2]- Running ./job2.sh &
[3]+ Running ./job3.sh &
Whenever a job is sent to the background, the shell displays the job id and PID of the process. If we want to bring the process back to the foreground, we can use the fg command.
$ fg 1
./job1.sh
But be aware that when you close the terminal, the shell sends SIGHUP to the background processes spawned from it, causing those processes to die. To prevent this you can use the disown command to remove those processes from the job table, thus preventing them from being killed.
One good way is to start the background process with the nohup command, so that SIGHUP signals will not kill the process and it will run safely in the background.
Whether bash also sends SIGHUP to its jobs when an interactive login shell exits normally is controlled by the huponexit shell option:
$ shopt -u huponexit
This option is unset by default in Bash; if it has been enabled, unsetting it as above (e.g. in ~/.bashrc) restores the default behaviour.
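A minimal sketch of both approaches in bash, with disposable sleep jobs standing in for real work:

```shell
# nohup: start a job that ignores SIGHUP from the outset.
nohup sleep 30 >/dev/null 2>&1 &
p1=$!

# disown: retroactively remove an already-running job from the job table.
sleep 30 & p2=$!
disown "$p2"

table=$(jobs)                                  # only the nohup'd job is still listed
count=$(printf '%s\n' "$table" | grep -c sleep)
echo "jobs still in the table: $count"
kill "$p1" "$p2" 2>/dev/null                   # clean up the demo processes
```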
| What is/are the advantage(s) of running applications in backgound? |
1,450,025,459,000 |
For example, I have the following output from command:
loom@loom:$ history | grep MAKE
219 ../build.sh -DCMAKE_BUILD_TYPE=Debug ..
909 history | grep MAKE
How do I write a command that runs the first command from the list? Also, I'd like to know how to run the n-th command from the output of history | grep something.
|
See those numbers on the left of the output? You can use them to refer to that command with shell history expansion; ![number] in most shells.
This works both in bash and zsh:
$ echo "hello"
hello
$ history | grep hello
5057 echo "hello"
$ !5057
echo "hello"
hello
$
| How to start first command from the list printed by command 'history | grep something' |
1,450,025,459,000 |
Whenever I type in a bash command longer than about half the width of the shell window I'm in, the command breaks like it would if I filled the whole screen
3rd command in image - typed a few xs and got the expected result.
4th command - typed a load more xs, and the command broke back to the start as though it had filled the whole line.
I'm connecting through Putty.
I'm running Raspbian (a distro based on Debian)
If I'm not being clear enough please say, it's not easy to explain.
|
I think that your tty is reporting the wrong tty size. Try running
pi@raspberrypi$ stty -aF /dev/ttyO0
There you will see how many rows and columns the tty thinks it has. This size should match the size set in PuTTY. You can also change parameters, such as the number of columns, using stty. The command would be something like
pi@raspberrypi$ stty -F /dev/ttyO0 cols 80
You can check more parameters at http://unixhelp.ed.ac.uk/CGI/man-cgi?stty
| When a command is over half the terminal size it breaks |
1,450,025,459,000 |
I've got some large mailboxes, and since I'm using Thunderbird, that means I have several mbox files. These single files contain all e-mails in a particular folder. Now, I would like to get some data on the senders in a particular folder. My ideal statistic would be to get all unique senders, and the number of times their e-mail appears in that folder. E.g.:
John A: 10x
Maria B: 5x
etc.
I've tried some grep options, but I also get X- headers if I grep only for 'From:', and I'm not sure how to exclude these other headers. Does anybody have an idea if this can be done from the command line?
|
First, we need to reliably get the From header, which can be done with a restrictive grep regular expression.
% grep --no-filename --ignore-case '^From:' test.eml
From: [email protected]
Next we need to count the number of occurrences, which can be done with uniq -c (which requires a sorted list).
% grep --no-filename --ignore-case '^From:' *.eml | sort | uniq --count
1 From: [email protected]
3 From: [email protected]
We can then sort the output by occurrence, to get the most frequent at the top.
% grep --no-filename --ignore-case '^From:' *.eml | sort | uniq --count | sort --general-numeric-sort --reverse
3 From: [email protected]
1 From: [email protected]
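A self-contained run of the pipeline, using short flags and made-up messages in a temporary directory (-h = --no-filename, -i = --ignore-case, -rn = reverse numeric sort):

```shell
dir=$(mktemp -d)
printf 'From: alice@example.com\nSubject: one\n' > "$dir/1.eml"
printf 'From: bob@example.com\nSubject: two\n'   > "$dir/2.eml"
printf 'From: bob@example.com\nSubject: three\n' > "$dir/3.eml"

out=$(grep -hi '^From:' "$dir"/*.eml | sort | uniq -c | sort -rn)
printf '%s\n' "$out"
rm -rf "$dir"
```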
| Creating a list of unique senders from Thunderbird mail files through the command-line |
1,450,025,459,000 |
I've got this directory full of images, and I can do this:
echo *.jpg
image1.jpg image2.jpg image3.jpg # and so on
How can I get the output in a plain text file in this format?
image1.jpg
image2.jpg
image3.jpg
|
Avoid using ls; bash globs can do it better:
printf '%s\n' *.jpg >output_file
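A self-contained demonstration in a temporary directory: the glob expands in sorted order and printf writes one name per line.

```shell
dir=$(mktemp -d)
touch "$dir/image1.jpg" "$dir/image2.jpg" "$dir/image3.jpg"
out=$(cd "$dir" && printf '%s\n' *.jpg)
printf '%s\n' "$out" > "$dir/output_file"
cat "$dir/output_file"
rm -rf "$dir"
```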
| Index names of all the files in a plain text file |
1,450,025,459,000 |
I am attempting to force powertop to run when I log in by using the Startup Applications wizard in Ubuntu. Under the 'Command' entry, I've placed gnome-terminal -x /home/***/Documents/programming/scripts/powertop.sh. It's using a one-line bash shell script:
echo "******" | sudo -S powertop
This inputs my superuser password correctly but doesn't allow me to interact with powertop after it's initialized because it's continuously running echo.
How can I get privileges to run powertop using a shell script, yet still be able to use it in gnome-terminal?
|
Assuming your powertop is in /usr/sbin, you can use sudo /usr/sbin/powertop with no password. To do this you need to run visudo and append the following line, substituting yourusername with the real one:
yourusername ALL=(root) NOPASSWD: /usr/sbin/powertop
| How can I adjust this short shell script to fit my needs? |
1,450,025,459,000 |
I want to try setting up a headless (terminal-only) Ubuntu Linux server, and am trying to find resources to get started. I've been a GUI Linux/Windows user for a while now, and have run through a tutorial to set up a server on an Ubuntu desktop (with the GUI), but the biggest hurdle I found was when I tried to use only the terminal. Ultimately I want to set up a web server and host personal content (a personal website, or possibly a personal Confluence site). On the server I'd also like to set up a database (Postgres/MySQL), and I wouldn't shy away from some experience with Samba as well. I've gone far enough to enable ssh on a server so I can ssh in.
Problems I encountered with my only attempt at running a headless server: I think I installed mysql (with apt-get), but didn't grasp almost anything after that, such as how to get the database service to start with the server on server restart, how to check that it did install correctly, or even how to make sure it was running without access to a visual process manager.
Is there a tutorial someone would recommend specifically aimed at people with discomfort with a Linux server, and particularly using a terminal-only interface?
|
First, I'm going to define some things for you so you get a feel for what application is doing what when it comes to web servers.
Apache is an HTTP web server and allows you to serve static HTML and text files "like the Internet". Your web server will take care of inbound requests and all the other stuff you don't really want to have to take care of. Usually, once it is installed, you can go into the htdocs directory and place some files. These files will be available to you if you point your browser to localhost (assuming you've used Apache defaults for which port to run off of, default is 80). This is all you need for a basic website.
You might also want to consider building Apache with support for PHP. PHP is a scripting language used heavily in websites to deliver dynamic content and "spice" up otherwise static html files.
Once you have PHP and Apache working together, consider using a database (MySQL for example) to help store your data. Databases are required by most web software (Wordpress and forums come to mind) and isn't too hard to set up. MySQL has a nice interface called PHPMyAdmin which can be installed on your server and allow you to browse your database from your browser (don't worry, there is a login). The only time you will probably have to interface with MySQL using the command line, is if you want to restart it using kill.
Put all this on a Linux box, and you've created a LAMP server (Linux, Apache, MySQL, PHP).
Forgive me if you already knew all this stuff, I just figured I'd lay down a good foundation of terms for you seeing as you seem new to a Linux/server stuff.
To answer you actual question, I see two possible options.
1- Stream X11 to your workstation. If you're on Windows, you'll need something like Exceed and on your server you'll need to set your DISPLAY environment variable to your computer's hostname like this:
export DISPLAY=your_host_name_here:0.0
This way you'll be able to open up GUI applications on your server but have the windows display on your workstation. However, I don't think this is what you'll benefit from most.
2- Get comfortable with the terminal. This is something you will have to do in a Linux world whether you like it or not :) It's good that you know your way around the filesystem (cd, ls, mkdir, etc). But for serving web content (.html or .php files), you'll need to be comfortable with a terminal-based text editor. I use vi, which has a steep learning curve but is very powerful. Alternatives are nano, emacs and pico (there are of course others). Being able to use an editor will allow you to make changes to files without needing to deal with X-forwarded GUIs and stuff like that.
One final note, you've installed a lot of applications using a package manager. Have you ever tried building an application from source? I tend to build things myself because I feel I have more control on where things are installed and with what compatibilities. In Apache's case, you'll need to compile with support for PHP and MySQL. This may seem like a lot to chew, but I assure you it's really quite simple and it's a great tool to have when working with Linux or UNIX.
| Recommended Resources To Get Started With Terminal-Only Linux Server? |
1,450,025,459,000 |
The file contains:
dateutkfilename25012009
I want to swap characters 16-17 with characters 18-19, and then swap characters 16-19 with characters 20-23... so it will be:
dateutkfilename20090125
I've tried to swap characters 16-17 with 18-19 using the code below, but I don't understand why it doesn't work:
'/dateutkfilename/s/\(.\{16\}\)\([0-9]\{2\}\)\(.*\)/\(.\{18\}\)\([0-9]\{2\}\)\(.*\)/g'
|
Here's the answer to your question:
s/^\(.\{15\}\)\(.\{2\}\)\(.\{2\}\)\(.\{4\}\)/\1\4\3\2/
But if you can anchor to the end instead, it gets simpler:
s/\(.\{2\}\)\(.\{2\}\)\(.\{4\}\)$/\3\2\1/
Personally, I'd probably do [0-9] instead of . as well:
s/\([0-9]\{2\}\)\([0-9]\{2\}\)\([0-9]\{4\}\)$/\3\2\1/
As usual, there's more than one way to do it.
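Checking the end-anchored version against the example line from the question:

```shell
out=$(echo 'dateutkfilename25012009' |
  sed 's/\([0-9]\{2\}\)\([0-9]\{2\}\)\([0-9]\{4\}\)$/\3\2\1/')
echo "$out"   # dateutkfilename20090125
```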
| Change character position with sed |
1,450,025,459,000 |
I have...
me@computer:~/gutenberg/euclid$ ls
book01.html book04.html book07.html
book10.html book13.html book02.html
book05.html book08.html book11.html
book03.html book06.html book09.html
book12.html
and I want to join all these .html files into the same big file, in order. What command or command sequence can I use?
|
In this particular case cat book??.html > book.html will work fine, if you don't care about proper HTML format.
For a more general case, say you had "book1.html" instead of "book01.html", "book2.html" instead of "book02.html" and so forth. The file names don't sort lexically the same as logically. You can do something like this:
(echo book?.html | sort; echo book??.html | sort) | xargs cat > book.html
So in general: script_generating_file_names_in_order | xargs cat > all_one_file
That idiom can go a long way.
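A throwaway check of the idiom, with made-up single- and double-digit file names, showing the single-digit group is concatenated first:

```shell
dir=$(mktemp -d)
cd "$dir"
for i in 1 2 10 11; do printf 'part %s\n' "$i" > "book$i.html"; done
(echo book?.html | sort; echo book??.html | sort) | xargs cat > book.html
result=$(cat book.html)
printf '%s\n' "$result"
cd / && rm -rf "$dir"
```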
| command for joining a series of files together |
1,450,025,459,000 |
I would like to check the Base64 value for an integer. There is a base64 linux command but I don't understand how I can apply it on integers.
I have tried with base64 10 but then I get the error message base64: 10: No such file or directory
I think the problem may be that Base64 is used for binary-to-text conversion, so there is no point in passing a textual argument to it. So my argument 10 is read as a textual string and not as a binary number. Is there any way I can turn a textual argument into binary?
How can I use the base64 command to get the Base64 value for an integer?
|
Convert the number into hex, then use echo to print the corresponding byte sequence and pipe that into base64. So to encode the integer 10 with base64, you can use:
echo -en '\xA' | base64
To explain the result. The byte 10 has the following binary representation:
00001010
What base64 does is chunk those into groups of 6 bits. So with padding we get the following two sextets:
000010 100000
Which in decimal are 2 and 32, which correspond to the letters C and g.
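The same byte can be produced with printf and an octal escape, which is more portable than echo -e:

```shell
# '\012' is octal for the byte 0x0A (decimal 10).
enc=$(printf '\012' | base64)
echo "$enc"   # Cg== : 'C' (index 2), 'g' (index 32), plus padding
```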
| How can I check the Base64 value for an integer? |
1,450,025,459,000 |
How to stop stdin while command is running in ash (not bash)?
For example:
sleep 10
type echo hello while sleep is still running
observe hello is in stdout after sleep finishes
Desired:
sleep 10
type echo hello while sleep is still running
observe shell as if nothing was typed in while sleep was running
I expect the solution would be something like: tput <stop stdin>; sleep 10; tput <restart stdin>; <enter>
This will not be in a shell script (only needs to work with interactive shell).
|
stty -echo disables the echo of your terminal, but if you type something, it will be buffered and the shell will get the typed keys.
Then, before leaving, run stty echo (to revert the echo mode), and drain the terminal input with this program:
#include <stdio.h>
#include <unistd.h>
#include <termios.h>
int main(void)
{
/* Discard any input typed but not yet read on stdin (fd 0). */
tcflush(0, TCIFLUSH);
return 0;
}
The whole thing would be stty raw -echo; sleep 10; stty sane; ./my_program.
Using raw avoids Ctrl-C and Ctrl-Z interfering. sane implies echo and reverts the terminal to a sane mode.
| Stop stdin while command is running |
1,450,025,459,000 |
(This may be a XY problem, more on the context at the end)
Is there any way of programmatically splitting a pdf based on the section titles? That is
from this pdf, produce 2 pdfs, where one contains everything up to the section called "XY", and the second contains all the rest.
I know how to split a pdf based on the page number, but is there anything more "semantical" available?
(In short, the original problem: the NSF wants to have the list of references in one document, and the description in another, and since I use LaTeX / pandoc to produce my document, it's simpler to just have everything in 1 document and then split it. Links don't matter, obviously.)
|
Stewart's answer gave me pretty much all the tools, but I made two important edits to their solution:
Use Coherent PDF (cpdf) to preserve the table of contents (pdftk just deletes it),
Make the command split the file in two, instead of splitting the file into as many files as there are chapters.
Re-using the parser.awk shared by Stewart, and assuming that the source is called source.pdf and that the chapter we want to use to split the document is called "Appendix", this gives:
pdftk source.pdf dump_data | \
awk -f parser.awk | \
grep Appendix | \
{
IFS=";" ; \
read -r title start end; \
./cpdf source.pdf 1-"`expr $start - 1`" -o A.pdf; \
./cpdf source.pdf "$start"-"$end" -o B.pdf;
}
This
Dump the metadata of the file,
Extract the relevant bits (title ; page start ; page ends),
Grab the relevant bit for the Appendix,
Set the separator for read to be ;
Store the title in a title variable, and the start page in start and end page in end,
output all the pages from 1 to start - 1 (which corresponds to the page before the appendix begins) into a file called A.pdf,
output the rest of the document into a file called B.pdf.
| Split pdf document based on section |
1,450,025,459,000 |
There's a way in browsers to query if the user prefers a dark or light theme so that a website developer could adapt the website's colours according to user preference.
Is there also a way to detect that on the command line? Is there a command that outputs light or dark (or some equivalent boolean-valued output indicating light or dark?)
|
You can try this command (tested on gnome desktop environment, ubuntu 22.04):
gsettings get org.gnome.desktop.interface color-scheme
It outputs :
'prefer-dark'
OR
'prefer-light'
| Command for detecting whether the system is using a dark or light desktop theme? |
1,450,025,459,000 |
I have two external drives. One of the drives is named "drive2" and it contains a folder named "Music." This folder has the following structure:
drive2/
    Music/
        Pink Floyd/
            1982 - Album Name/
                01 - Track.flac
                02 - Track2.flac
                and so on...
So, because of the folder hierarchy I guess I need a recursive sync.
I need the entire Music folder from drive2 to be copied to drive1. I think I can use something like this:
rsync -av drive2/Music/ drive1/Music/
However, there are situations when I modify the metadata of certain songs. Those metadata modifications are small, like changing the title of the album, and they don't necessarily change the size of the FLAC files, but their md5 fingerprint definitely changes because of the altered metadata. Right?
I noticed that when I use rsync -a, the rsync utility notices the metadata changes of FLAC files on drive2 and updates the files on drive1 as well to keep them in sync. Only the changed files get transferred, which is exactly the behaviour I want. It seems to me that the -a (archive) flag implies -u, which tells it to only update changed files.
However, I'm curious when rsync transfers the files that got their metadata updated, does the old files get overwritten completely on drive1? I mean they are replaced completely? Are the old files removed before the new ones are copied?
|
when rsync transfers the files that got their metadata updated, does the old files get overwritten completely on drive1?
Yes. The default behavior is that the file currently on the destination is copied to a temporary location. Then any necessary updates are done on that copy. Once the update is complete, the copy is renamed to be the correct file and the previous one is deleted.
Are the old files removed before the new ones are copied?
For an individual file that is being updated, the default is that the file is overwritten only after the copy/update is completed.
For files that are removed from the source, you can control whether such files are deleted before, during, or after the updates/transfers of data.
| Keeping two folders in sync with rsync |
1,450,025,459,000 |
Let's say I have a file with the following content:
var1='a random text'
var2='another random text'
var3='a third random text'
I know that if I use the command eval like the following I'll store all those variables directly on my shell:
$ eval $(cat file)
Doing that, my shell will create $var1, $var2 and $var3 with their respective contents. Knowing that, I could generate a JSON manually like the following:
$ JSON="{ \"var1\" : \"$var1\", \"var2\" : \"$var2\", \"var3\" : \"$var3\"}"
And that would result in a valid JSON:
$ echo $JSON
{ "var1" : "a random text", "var2" : "another random text", "var3" : "a third random text"}
The problem here is that I'm hardcoding the keys var1, var2 and var3... In my case, the file could be bigger and with more variables stored in it (not just var1, var2 and var3). I was thinking if there is an easy way of achieving that using the command line, just like eval does for storing file variables on the shell, but instead of storing the variables, generating a JSON output. Is it possible? Can I directly convert a file structured like that to JSON using the command line?
My alternative solution here would be to develop some code (not using shell) that goes char by char through this file, separating everything dynamically in a loop. But I'm asking this question because I want to avoid overcomplicating the solution.
|
Using a combination of jo (from here) and jq (from here), without creating shell variables or letting the shell interpret the file at all:
jo <file |
jq --arg sq "'" '.[] |= ( ltrimstr($sq) | rtrimstr($sq) )'
This first uses jo to create the JSON document
{
"var1": "'a random text'",
"var2": "'another random text'",
"var3": "'a third random text'"
}
(but on a single line). It does this by interpreting the variable assignments in your file as key-value pairs.
The jq tool is then used to delete the single quotes from the start and end of each value.
The final result is
{
"var1": "a random text",
"var2": "another random text",
"var3": "a third random text"
}
This will not cope with newlines being embedded in the values. Other special characters will however be automatically JSON-encoded by jo.
| Can I directly convert a file that lists multiple variables to JSON using the command line? |
1,450,025,459,000 |
I'm converting images to a single PDF-file using convert utility:
$ convert "document-*.tiff" -compress jpeg -quality 60 "output.pdf"
Resulting document has the following tags set up:
Title: output
Producer: file:///usr/share/doc/imagemagick-6-common/html/index.html
CreationDate: Fri May 21 19:12:24 2021 +04
ModDate: Fri May 21 19:12:24 2021 +04
Tagged: no
UserProperties: no
Suspects: no
Form: none
JavaScript: no
Pages: 1
Encrypted: no
Page size: 419.52 x 595.2 pts
Page rot: 0
File size: 226476 bytes
Optimized: no
PDF version: 1.3
Is it possible to override default values for tags like Title and Producer?
|
You can do it. You need to change the image registry with a -define
For example:
$ magick -compress jpeg -quality 60 -define pdf:Producer="Stackoverflow" -define pdf:Title="Change tags" "*tiff" "output.pdf"
$ pdfinfo output.pdf
Title: Change tags
Author: https://imagemagick.org
Producer: Stackoverflow
CreationDate: Fri May 21 10:49:33 2021 -03
ModDate: Fri May 21 10:49:33 2021 -03
Tagged: no
[...]
But for that you need at least version 7 (that's why I asked you in the comments). If you don't have it, you can build it yourself, it is surprisingly simple.
For Debian 10, where I just tested, you need to:
$ sudo apt-get install build-essential
$ cd /some/path
$ wget https://www.imagemagick.org/download/ImageMagick.tar.gz
$ tar xzf ImageMagick.tar.gz
$ cd ImageMagick-7.0.11-13 # that's today's, the version might change
$ ./configure
$ make
You don't need to do a system-wide install. Just run the command like this:
$ /some/path/ImageMagick-7.0.11-13/utilities/magick -compress jpeg -quality 60 -define pdf:Producer="Stackoverflow" -define pdf:Title="Change tags" "*tiff" "output.pdf"
| Set PDF tags while converting images to PDF with ImageMagick |
1,450,025,459,000 |
I used the $(()) command and I saw this error:
bash: 0: command not found
Why did this error occur?
|
The $(( )) is an arithmetic substitution or arithmetic expansion. Within it, you may do (integer) arithmetic operations, and the shell would carry them out and replace the whole expression with the result of those operations.
You often see it used like in
count=$(( count + 1 ))
Since there is nothing for the shell to do here (the arithmetic substitution is empty), your bash shell decides that the result is zero.
You are using this as a command, which means the shell would try to run the result, 0, as a command.
It fails, and tells you why ("0: command not found").
This, an empty arithmetic substitution, seems to be a corner case that is treated differently in different shells. The bash shell, along with zsh and pdksh (ksh on OpenBSD), tries to execute 0, while dash and yash complain:
$ dash -c '$(( ))'
dash: 1: arithmetic expression: expecting primary: " "
$ yash -c '$(( ))'
yash: arithmetic: a value is missing
The POSIX standard says
As an extension, the shell may recognize arithmetic expressions beyond those listed.
... which may be what bash, zsh and pdksh do (i.e., they recognize an empty expression as "zero").
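This corner case is easy to reproduce from the command line; in bash the empty expression expands to 0:

```shell
out=$(bash -c 'echo $(( ))')   # single quotes keep $(( )) for the inner bash
echo "empty expression gives: $out"
```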
| Why is the output of the "$(())" command 0? |
1,450,025,459,000 |
I have two files.
The first (emails) should be cleaned according to the second (domains).
The first is 15 GB, the second is 160 MB.
dom=`cat file2.txt | xargs | sed -e "s/ /|/g"` ; sed -r "/$dom/d" file1.txt >> final_file.txt
This command gives me bash: /bin/sed: Argument list too long.
|
Sounds like you just want:
grep -Fvf file2.txt file1.txt > final_file.txt
That is store in final_file.txt the lines of file1.txt that contain none of the lines of file2.txt.
Add the -x option if you want the lines of file1.txt that are not in file2.txt. Or -w to match on words (where bar.com would not match in foobar.com or bar.common, but would still match in foo.bar.com.us for instance).
But if we're talking gigabytes of data and megabytes of different strings to look for, even that is going to take ages.
A faster approach with a shell like ksh, zsh or bash with support for process substitution would be:
export LC_ALL=C
comm -23 <(sort file1.txt) <(sort file2.txt) > final_file.txt
Now if as you clarified in comments, file2.txt is meant to be a list of domains and you mean to filter out of file1.txt the lines that end in @ followed by any of those domains, then a more efficient approach would be to use a hash table:
awk -F@ '
! domains_processed {excluded[$0]; next}
! ($NF in excluded)
' file2.txt domains_processed=1 file1.txt > final_file.txt
Problems with your approach:
useless use of cat (UUOC). cat is to concatenate files. It makes little sense for a single file. You can use xargs < file or < file xargs for xargs stdin to be directly the file instead of a pipe from a cat process which just shoves the contents of the file.
xargs calls echo by default. While echo joins its arguments with space characters, which you want here, it also performs other things the list of which depends on the implementation. Also xargs expects the input in a very specific format. Here I'd expect you want each of the line of file2.txt to be passed as a separate argument to echo for which you'd need the GNU-specific xargs -rd '\n'. Also xargs will run echo as many times as necessary to avoid the limit of the size of arguments. So the output of xargs will have several lines for a 160MB input.
To join the lines of a file with a specific character, the command is paste:
paste -sd '|' file2.txt
Here, you're building a regex for sed -r (-r being a GNU extension) by joining those words with |, but you're not escaping the regexp operators found in those lines. If those are meant to be domain names, then note that . is a regexp operator which matches any character. You'd have bigger problems with other characters. That sed "/$dom/d" would be an arbitrary command execution vulnerability if you didn't have full control over the contents of file2.txt.
If file2.txt is 160MB large, then so will be $dom (more or less). Sizes of command lines is limited. On Linux, the size of a single argument is also limited (to 128KiB), so you can't pass the sed script via arguments. It would have to be passed with -f.
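For completeness, if you did want to build such an alternation safely, the regexp operators can be escaped first (a sketch; the sample lines stand in for file2.txt):

```shell
# escape ERE operators in each pattern line before joining with "|"
f=$(mktemp)
printf '%s\n' 'a.com' 'b|c.org' > "$f"   # stand-in for file2.txt
dom=$(sed 's/[.[\*^$+?(){}|\\]/\\&/g' "$f" | paste -sd '|' -)
echo "$dom"    # -> a\.com|b\|c\.org
```

This still would not solve the argument-size limit for a 160MB input, but it removes the injection problem.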
| bash: /bin/sed: Argument list too long |
1,450,025,459,000 |
For example, youtube-dl can be installed onto a machine in different ways. I inadvertently downloaded/installed it multiple times in different ways, which resulted in several youtube-dl executables in my $PATH directories, such as /usr/local/bin/youtube-dl and /home/username/.local/bin/youtube-dl. So even though I installed the upgraded version "2021.01.03", youtube-dl --version shows me "2020.07.28", because a youtube-dl in another directory in $PATH somehow overrode it.
So here, I want to check all installed same-named files in $PATH so I can check their versions at once and see which one is the latest, current one and which ones should be deleted. Is there a way or a CLI tool to do that? Thanks.
|
Interactively, I would use:
ls -l $(type -ap youtube-dl)
to find the locations and timestamps of all the youtube-dl programs in my $PATH.
Of course, this doesn't work for executables that have spaces in their names, but youtube-dl isn't one of them.
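A variant that does cope with spaces in the paths is to read them into a bash array first (a sketch; "sh" is used as a stand-in command name purely so the example runs on any system - substitute youtube-dl):

```shell
# read one path per line into an array, so word splitting never happens
readarray -t matches < <(type -ap sh)
ls -l -- "${matches[@]}"
```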
| How to check duplicate commands in $PATH? |
1,450,025,459,000 |
I'm having a really hard time understanding this behaviour:
stackExchange@test:~$ if [[ "two words" =~ \bwords ]]; then echo hi; fi; #(I'd expect this one worked)
stackExchange@test:~$ if [[ "two words" =~ \\bwords ]]; then echo hi; fi; #(or at least this one...)
stackExchange@test:~$ if [[ "two words" =~ \\\bwords ]]; then echo hi; fi;
stackExchange@test:~$ if [[ "two words" =~ \\\\bwords ]]; then echo hi; fi;
stackExchange@test:~$ put_in_a_variable=\\bwords
stackExchange@test:~$ if [[ "two words" =~ $put_in_a_variable ]]; then echo hi; fi;
hi
stackExchange@test:~$
I understand that my variable contains \bword and this got expanded in the pattern section of the conditional expression, but I really cannot understand why seems impossible to achieve the same behaviour using inline shell escaping.
I don't want to do something like if [[ "two words" =~ $(echo \\bwords) ]]; then echo hi; fi;; too weird...
Thanks,
Francesco
|
The effect of a backslash in the regular expression part of [[ str =~ rex ]] is to quote the following character (exactly like putting it in single quotes), and in bash and since version 3.2, that directs it to do a literal match for it (1). Since b is not special, \b will turn into just b, but '\', "\\" or \\ will turn into \\ in order to match a literal backslash:
[[ abwords =~ \bwords ]] && echo "<$BASH_REMATCH>"
<bwords>
[[ 'a\bwords' =~ \\bwords ]] && echo "<$BASH_REMATCH>"
<\bwords>
# conversely, '|' is just like \|
[[ 'a|words' =~ a'|'words ]] && echo "<$BASH_REMATCH>"
<a|words>
Your idea of putting the regex in a variable is fine. An alternative would be to use a wrapper function:
rematch() [[ $1 =~ $2 ]]
if rematch 'two words' '\bwords\b'; then
echo "<$BASH_REMATCH>"
fi
<words>
In any case, with those work-arounds applied, since \b is a non-standard extended regexp operator (from perl), whether that will work or not will depend on whether the system's regexp library supports it or not. Depending on the system, you may have more luck with some alternative syntaxes for those word-boundary operators such as \</\> or [[:<:]]/[[:>:]].
(1): as documented in its manual:
Any part of the pattern may be quoted to force the quoted portion to be matched as a string
Notice that in the shell, characters which are quoted are actually marked specially, so any subsequent processing by the parser could base decisions on whether a part of a string was quoted or unquoted.
| bash conditional expression and backslash escaping [duplicate] |
1,450,025,459,000 |
I used this snippet of code to start a new column after every 20th row and each of the columns is separated by tabs. I took the code from this post and then tweaked it a bit: How to start a new column after every nth row?
awk '{a[NR%20] = a[NR%20] (NR<=20 ? "" : "\t") $0} END {for (i = 1; i <= 20; i++) print a[i%20]}'
It does exactly what I want it to do. However, I don't really understand how it works. Can someone please explain it to me?
I know that $0 will read in the entire record (line) of a file, and that the condition before the question mark is evaluated; if true, the first statement is executed, and if false, the second. So in this case, if NR<=20 then nothing is printed because we're on the first column, but if NR>20 then a tab is printed to start a new column. I also know that the for loop prints out the elements of an array, starting from a[1%20] which is a[1] and so on, to a[19%20] which is a[19], and finally a[20%20] which is a[0].
But what does a[NR%20] = a[NR%20] do? Why is it set equal to itself? I see that when I omit a[NR%20] = a[NR%20], 20 blank lines are printed out.
|
In awk, expressions that are separated by spaces get joined together. This concatenation is described in the POSIX awk manual in a table of expressions (the formatting on that page isn't very clear, it's easier to read via man 1p awk). a[NR%20] is being joined together with its current value + ""/"\t" + the current record. For the first twenty records, both the array value and the ternary ?: expression will be empty strings. Brackets might make it more clear:
a[NR%n] = (a[NR%n] (NR<=n ? "" : "\t") $0)
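The concatenation rule itself is easy to see in isolation (a minimal demonstration, unrelated to the original data):

```shell
# adjacent string expressions in awk are joined with no operator at all
awk 'BEGIN { a = "foo"; b = "bar"; print a b, a "-" b }'
# -> foobar foo-bar
```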
| Explanation of awk statement |
1,450,025,459,000 |
ls -d .* lists only hidden "items" (files & directories). (I think) technically it lists every item beginning with ., which includes the current (.) and parent (..) directories.
I also know that ls -A lists "almost all" of the items, listing both hidden and un-hidden items, but excluding . and ... However, combining these as ls -dA .* doesn't list "almost all" of my hidden items.
How can I exclude . and .. when listing only hidden items?
|
This has been answered over at Ask Ubuntu, which I will reproduce here:
ls -d .!(|.) with Bash's extended globs (shopt -s extglob to enable)
ls -d .[!.]* ..?* if not
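A quick demonstration in a scratch directory (the second form needs no bash extensions, so it also works in plain sh):

```shell
# together, the two patterns match every dot-entry except . and ..
cd "$(mktemp -d)"
touch .hidden ..weird visible
ls -d .[!.]* ..?* 2>/dev/null
```

The 2>/dev/null silences the error ls would print if one of the patterns matched nothing and were passed through literally.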
| How can I exclude . and .. when listing only hidden items? |
1,450,025,459,000 |
Whenever I try to use the sqlcmd command in my terminal, I get an error like
"sqlcmd: error while loading shared libraries: libodbc.so.2: cannot open shared object file: No such file or directory"
|
It seems you didn't install the unixodbc-dev package which depends on the libodbc1 package and contains the missing shared library.
You can install it with
sudo apt update
sudo apt install unixodbc-dev
Related:
Install the Microsoft ODBC driver for SQL Server (Linux)
| Unable to use sqlcmd command shows libodbc.so.2 |
1,450,025,459,000 |
How can I terminate a process upon specific output from that process? For example, running a Java program with java -jar xyz.jar, I want to terminate the process once the line "Started server on port 8000" appears on stdout.
|
That can be accomplished with the following script considering that grep -m1 doesn't work for you:
#!/bin/bash
java -jar xyz.jar &> "/tmp/yourscriptlog.txt" &
processnumber=$!
tail -F "/tmp/yourscriptlog.txt" | awk -v pid="$processnumber" '/Started server on port 8000/ { system("kill " pid); exit }'
Basically, this script redirects the stdout and stderr of your java program to a file with &> "/tmp/yourscriptlog.txt"; the trailing & on the same line makes it run as a background process, and on the next line we capture its process ID with $!. Having the process ID and a log file to tail, we can finally kill the process when the desired line is printed.
| Terminate process upon specific output |
1,450,025,459,000 |
I have a Kali VM that is seized up, during booting it says "resuming from hibernation" and does not progress. There is no disk i/o. I am wondering if it is possible to completely delete the files related to hibernation in attempt to force a normal boot. If so where are these files located, or is there another way to keep the system from trying to resume from a hibernated state?
|
If your VM includes a Linux swap partition, it might contain the hibernation data, so there will not be a file to delete.
Anyway, if you can access the GRUB bootloader of the VM, add the boot option noresume to avoid any attempts to resume from hibernation and execute a full normal start-up instead.
(Some virtualization methods that use paravirtualization may skip GRUB entirely and instead start the VM's operating system in some other way. In that case, you would have to use some other method to enter boot options, and that would be specific to the virtualization system you are using. Since you did not say what virtualization system you are using, this may or may not apply to you.)
| How to delete hibernation files on deb based system |
1,450,025,459,000 |
Preface
Not sure if this question is within scope of the Unix Stack exchange since it is theoretical in nature. I am willing to move it to a different stack exchange.
Context
In the Unix command prompt, a user can type ; to execute multiple commands in order. If one fails, it will not stop the execution of the next command.
Question
What is the theoretical limit to the number of commands a user can chain together in one prompt execution with ;?
|
The theoretical limit on the number of commands that the shell (assuming sh here) can take on a single line is defined in the POSIX standard:
The input file shall be a text file, except that line lengths shall be unlimited. If the input file consists solely of zero or more blank lines and comments, sh shall exit with a zero exit status.
This means that the shell should be able to accept any number of commands on a single line, as long as each individual command is short enough not to be longer than what the execve() function accepts (the length of a single command, with arguments, and the current environment's environment variables and their values, in total, must be less than ARG_MAX bytes).
In practice, this is restricted by the memory resource limits imposed on the shell process.
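The value of that per-command limit can be queried with getconf (the exact number varies by system):

```shell
# ARG_MAX bounds argv + environment for a single execve() call,
# i.e. each individual command, not the number of ';'-separated commands
getconf ARG_MAX
```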
| What is the theoretical upper limit of commands a user can execute in one line? |
1,450,025,459,000 |
Is there a backup tool which is "intelligent" enough to notice that a folder or large files may have been renamed between two backups? Maybe even if their location changed (not too complicated)?
Is it clear what I try to ask for?
My backup methods for now have all added the new dirs to the existing backup. How to "copy the difference" without touching the unchanged parts?
|
Yes, deduplicating backup tools like restic and borgbackup would do this.
These would detect that a given chunk of data (not necessarily a whole file) was already present in the older backup and would not store it again. It would also detect the same chunk in other files, so your fifteen copies of the same MP3 file would only be stored once.
I use it on a machine where I have two sets of JPEG files, one in macOS's "Photos" album archive and then the same photos as originals in a structured directory hierarchy based on dates. This is 2 * 60 GB of data, but restic only stores 60 GB since it's deduplicating it.
Another example is another machine (OpenBSD this time), where I have two or three different checkouts of the same Git repository (don't ask why). These, too, are deduplicated to the degree possible and will only use approximately the size taken by the files that are different (the 270 MB .git directory is mostly the same and will be stored only once in the backup).
Moving a directory would likewise only ever result in a few kilobytes or so of data being written to the backup (depending on the size of the directory structure). I renamed one of these 270 MB Git repositories as a test and ran a backup. This wrote just over 500 KB to the backup (this data would be information about the locations of files and their metadata such as ownership and timestamps etc.)
A deduplicating backup tool would also allow you to backup data from multiple machines to the same location and have that data be deduplicated across the machines so that, for example, your Dropbox folder on three machines does not get stored three times (this is at least possible with restic).
The downside of using a deduplicating backup tool is that you can't browse the backups as files (borgbackup may allow mounting a snapshot as a directory somehow, but I haven't investigated it because it uses Fuse, which is unsupported by OpenBSD). One would have to use the backup tool to restore a snapshot or the wanted files from a snapshot.
I'm using restic because that allows me to back up over SFTP to a server where restic itself is not installed. Another way of doing this would be to run a restic REST server using rclone (rclone serve restic ...) on the backup server and let the restic clients talk to that.
borgbackup allows for compression of the data chunks, but I think it requires that borgbackup is installed on the machine where the backups live. borgbackup is also (IMHO) slightly harder to configure.
The most recent versions of restic also support compression (using zstd).
| CLI backup tool |
1,450,025,459,000 |
I am running Debian stable with the Cinnamon graphical interface 3.6.7, and my computer is connected to a multimedia projector. I have an Intel graphics card.
The projected image is too big, and I can change neither the position of my multimedia projector nor the position of my wall to reduce the size of the projected image.
Thus I would like to find a command line so that the resolution of the projected image stays the same but with a black band at the border of my screen (see figures below). I expect that the projected image will then be smaller.
Solution (@Ipor Sircer)
xrandr --output HDMI1 --fb 1620x880
Current configuration:
Expected configuration
|
Use xrandr to detect the default output. Then you can make a black border:
xrandr --output LVDS --set underscan on --set "underscan vborder" 100 --set "underscan hborder" 100
(not working with intel graphic card)
| Reduce size of my screen with a command line |
1,450,025,459,000 |
I am trying to pass standard input into multiple commands and compare their outputs. My current attempt seems close, but doesn't quite work - plus it relies on temporary files which I feel would not be necessary.
An example of what I would want my script to do:
$ echo '
> Line 1
> Line B
> Line iii' | ./myscript.sh 'sed s/B/b/g' 'sed s/iii/III/' 'cat'
1:Line B 2:Line b
1:Line iii 3:Line III
So far I have this:
i=0
SOURCES=()
TARGETS=()
for c in "$@"; do
SOURCES+=(">($c > tmp-$i)")
TARGETS+=("tmp-$i")
i=$((i+1))
done
eval tee ${SOURCES[@]} >/dev/null <&0
comm ${TARGETS[@]}
The issues are:
There seems to be a race condition. By the end of execution comm tmp-0 tmp-1 has the desired output (more-or-less) but when executed from the script the output seems non-deterministic.
This is limited to just 2 inputs, but I need at least 3 (ideally any number)
This creates temporary files that I would have to keep track of and delete afterwards, an ideal solution would only use redirection
The constraints are:
The input may not be ending. In particular the input could be something like /dev/zero or /dev/urandom, so merely copying the input to a file won't work.
The commands may have spaces in them and be fairly complicated themselves
I want a line-by-line, in-order comparison.
Any idea how I could go about implementing this? I basically want something like echo $input | tee >(A >?) >(B >?) >(C >?) ?(compare-all-files) if only such a syntax existed.
|
Since the accepted answer is using perl, you can just as well do the whole thing in perl, without other non-standard tools and non-standard shell features, and without loading unpredictably long chunks of data in the memory, or other such horrible misfeatures.
The ytee script from the end of this answer, when used in this manner:
ytee command filter1 filter2 filter3 ...
will work just like
command <(filter1) <(filter2) <(filter3) ...
with its standard input piped to filter1, filter2, filter3, ... in parallel, as if it were with
tee >(filter1) >(filter2) >(filter3) ...
Example:
echo 'Line 1
Line B
Line iii' | ytee 'paste' 'sed s/B/b/g | nl' 'sed s/iii/III/ | nl'
1 Line 1 1 Line 1
2 Line b 2 Line B
3 Line iii 3 Line III
This is also an answer for the two very similar questions: here and here.
ytee:
#! /usr/bin/perl
# usage: ytee [-r irs] { command | - } [filter ..]
use strict;
if($ARGV[0] =~ /^-r(.+)?/){ shift; $/ = eval($1 // shift); die $@ if $@ }
elsif(! -t STDIN){ $/ = \0x8000 }
my $cmd = shift;
my @cl;
for(@ARGV){
use IPC::Open2;
my $pid = open2 my $from, my $to, $_;
push @cl, [$from, $to, $pid];
}
defined(my $pid = fork) or die "fork: $!";
if($pid){
delete $$_[0] for @cl;
$SIG{PIPE} = 'IGNORE';
my ($s, $n);
while(<STDIN>){
for my $c (@cl){
next unless exists $$c[1];
syswrite($$c[1], $_) ? $n++ : delete $$c[1]
}
last unless $n;
}
delete $$_[1] for @cl;
while((my $p = wait) > 0){ $s += !!$? << ($p != $pid) }
exit $s;
}
delete $$_[1] for @cl;
if($cmd eq '-'){
my $n; do {
$n = 0; for my $c (@cl){
next unless exists $$c[0];
if(my $d = readline $$c[0]){ print $d; $n++ }
else{ delete $$c[0] }
}
} while $n;
}else{
exec join ' ', $cmd, map {
use Fcntl;
fcntl $$_[0], F_SETFD, fcntl($$_[0], F_GETFD, 0) & ~FD_CLOEXEC;
'/dev/fd/'.fileno $$_[0]
} @cl;
die "exec $cmd: $!";
}
notes:
code like delete $$_[1] for @cl will not only remove the file handles from the array, but will also close them immediately, because there's no other reference pointing to them; this is different from (properly) garbage collected languages like javascript.
the exit status of ytee will reflect the exit statuses of the command and filters; this could be changed/simplified.
| Pass input to multiple commands and compare their outputs |
1,518,190,471,000 |
What if we run commands like rm -rf / or mv / /dev/null or dd if=/dev/random of=/dev/hda on the virtual machine? Will it affect the host machine? And what would the results of running these commands be?
|
Virtualization provides a relatively strong separation between the virtual machine and the host. This is provided by kernel features backed up by CPU features. The recent "Spectre" CPU flaw is particularly concerning because it potentially provides a way for attackers to break down some of this separation — but that doesn't change what happens with "normal" operations in the guest.
The virtual machine presents the operating system with a virtual disk — a "fake" block device constructed from underlying storage. Short of a bug in the virtualization itself, nothing you do to the virtual disk will "break out" and affect other parts of the underlying storage, or the underlying OS.
From the point of view of the host, the virtual machine is just a process like any other program, and operations on the underlying storage done by that process are constrained just like any other program's. If the guest (the virtual machine) is very busy, it can cause the CPU to heat up and the fans will come on and so on, just like with any other busy program. Writing to disk in the guest is the same - it causes IO operations, and many busy VMs can get into a conflict for resources. (This is often the cause of high load on what seems to be an idle machine.)
From the point of view of the guest, the example commands you give would execute just as you'd expect them to on a physical machine. They will render the guest OS inoperable. But back to the host machine — they're nothing special. You'll just have a VM disk image that you need to restore from backup.
| What if we run commands like `dd if=/dev/random of=/dev/hda` on virtual machine? |
1,518,190,471,000 |
I need to run a lot of similar commands in the quickest possible time, using all available resources.
For example, my case is processing images. When I'm using the following command:
for INPUT in *.jpg; do some_command; done the commands are executed one by one and do not use all the available resources.
But on the other side, executing for INPUT in *.jpg; do some_command & done makes the machine run out of resources in a very short time.
I know about at's batch command, but I'm not sure if I can use that in my case. Correct me if I am wrong.
So I was thinking about putting the commands in some kind of queue and executing just a part of them at once. I don't know how to do that in a quick way, and that's the problem. I'm sure someone has run into a similar problem before.
Please advise.
|
GNU Parallel is made for exactly this:
parallel some_command {} ::: *.jpg
It defaults to one job per CPU core. In your case you might want to run one more job than you have cores:
parallel -j+1 some_command {} ::: *.jpg
GNU Parallel is a general parallelizer and makes it easy to run jobs in parallel on the same machine or on multiple machines you have ssh access to.
If you have 32 different jobs you want to run on 4 CPUs, a straightforward way to parallelize is to run 8 jobs on each CPU.
GNU Parallel instead spawns a new process when one finishes - keeping the CPUs active and thus saving time.
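If GNU Parallel is not available, a rough equivalent can be sketched with xargs (here echo stands in for some_command, and sample files are created so the sketch is self-contained):

```shell
# -P sets the number of parallel jobs, -n 1 passes one file per job;
# NUL-delimiting the names keeps odd filenames safe
cd "$(mktemp -d)"
touch 1.jpg 2.jpg 3.jpg
printf '%s\0' *.jpg | xargs -0 -n 1 -P "$(nproc)" echo
```

Unlike GNU Parallel, xargs does not keep each job's output separate, so lines from concurrent jobs can interleave.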
Installation
For security reasons you should install GNU Parallel with your package manager, but if GNU Parallel is not packaged for your distribution, you can do a personal installation, which does not require root access. It can be done in 10 seconds by doing this:
(wget -O - pi.dk/3 || curl pi.dk/3/ || fetch -o - http://pi.dk/3) | bash
For other installation options see http://git.savannah.gnu.org/cgit/parallel.git/tree/README
Learn more
See more examples: http://www.gnu.org/software/parallel/man.html
Watch the intro videos: https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
Walk through the tutorial: http://www.gnu.org/software/parallel/parallel_tutorial.html
Sign up for the email list to get support: https://lists.gnu.org/mailman/listinfo/parallel
| run multiple commands at once |
1,518,190,471,000 |
If I run the following command, the command line is cut:
user@host:~$ ps -eo pid,cmd,lstart
Output:
6382 /home/user/bin/pyt Sun Oct 22 18:51:39 2017
How to get the whole command (including all arguments)?
Version: procps-ng version 3.3.5
|
swap columns:
ps -eo pid,lstart,cmd
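This works because every column except the last is padded to a fixed width; moving the variable-width cmd column to the end lets it use the remainder of the line. For example:

```shell
# cmd is now the final column, so it is no longer truncated
ps -eo pid,lstart,cmd | head -n 3
```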
| ps cuts command, how to get full |
1,518,190,471,000 |
wget can be fed a list of URLs with -i inputFileURLList, and it can also write to a custom file name with -O customArbitraryFileName.
How can I combine these two capabilities so that I feed it a file containing a URL list together with a custom file name for each URL?
|
Write your own trivial shell script and use it as ./get-them.sh < get-them.list
shell script get-them.sh
#!/bin/sh
while read -r FILE URL; do
wget -O "$FILE" -- "$URL"
done
input file get-them.list
file1 https://unix.stackexchange.com/
file2 https://stackexchange.com/
| How to combine '-i file' & '-O filename' options of wget? |
1,518,190,471,000 |
I was trying to use negation to exclude directories from globbing, but directories still appear in pattern match:
bash-4.3$ ls
file_1.txt testdir
bash-4.3$ shopt extglob
extglob on
bash-4.3$ echo !(*/)
file_1.txt testdir
bash-4.3$
What exactly am I doing wrong?
Note: I know I can use a for loop with [ or the find command, but I'm trying to figure out extglob specifically.
|
You can't have a / in the @(...), !(...), *(...)...
The / can only appear between globs, even a[x/y]b is treated as @(a\[x)/@(y\]b). globs are first split on / and each part matched against the content of a directory. When there are x(...) ksh glob extensions, however, there's no splitting on the / that are inside the (...), but each glob part is still matched against file names. In !(*/*), */* is matched against each file name in the current directory. Obviously, no file name may ever contain a /, so it matches nothing, so !(*/*) matches every file.
Here, you'd want to use zsh and its glob qualifiers:
echo *(^/)
For the files of any type except directory. Or to be the opposite of bash's */ (which is any file of type directory after symlink resolution):
echo *(-^/)
(files that are neither directories nor symlinks to directories).
| extglob negation not working as expected |
1,518,190,471,000 |
All of the tons of articles I have found so far seemed to be focused on obtaining a resulting date, which is still useful, but not what I want in this case.
Example link: Unix & Linux SE -
Quickly calculate date differences.
With datediff in that other Q&A, this is more of what I want referring to a date/time duration result, and it essentially uses Unix timestamp math with those preceding conversions, but I want my granularity down to the second, which is why I don't divide by 86400.
With the duration of seconds available, I now want to format it to something like using the GNU date command this way: date -d "@69600" "+ %Y years %m months %e days %H:%M:%S"
I realize everything is essentially a date/time duration from the Unix epoch, but now I have the problem with the day, month, and year numbers not starting from 0, which would not correctly represent the duration value I want.
I could get into cutting this format and applying more expr commands to essentially subtract 1 or 1970 from each value, but I'm wondering if there is any other easier and simpler way to deal with date/time calculation to achieve a duration result besides chaining math and formatting steps together. Maybe there is some other option in GNU date that I could take advantage of, or another tool that would accept 2 date/time arguments and immediately give me the result I am looking for.
Having it be available in Homebrew for Macs would be a plus :).
Simple date math link: Walker News - Date Arithmetic In Linux Shell Scripts
Random date formatting links:
lifewire - How To Display The Date And Time Using Linux Command Line
ma.ttias.be - Linux Date Format: change the date output for scripts or commands
|
With dateutils's datediff (not GNU sorry), (formerly ddiff, dateutils.ddiff on Debian):
$ dateutils.ddiff -f '%Y years, %m months, %d days, %H:%0M:%0S' \
'2012-01-23 15:23:01' '2017-06-01 09:24:00'
5 years, 4 months, 8 days, 18:00:59
(dates taken as UTC, add something like --from-zone=Europe/London for the dates to be taken as local time in the corresponding time zone. --from-zone=localtime may work for the default timezone in the system; --from-zone="${TZ#:}" would work as long as $TZ specifies the path of a IANA timezone file (like TZ=:Europe/London, not TZ=GMT0BST,M3.5.0/1:00:00,M10.5.0/2:00:00 POSIX-style TZ specification)).
With ast-open date, so still not with GNU tools sorry (if using ksh93 as your shell, that date may be available as a builtin though) you can use date -E to get the difference between two dates in a format that gives 2 numbers with units:
$ date -E 1970-01-01 2017-06-01
47Y04M
$ date -E 2017-01-01 2017-06-01
4M29d
$ date -E 12:12:01 23:01:43
10h49m
Note that anything above hour is ambiguous, as days have a varying number of hours in locales with DST (23, 24 or 25) and months and years have a varying number of days, so it may not make much sense to have more precision than those two units above.
For instance, there's exactly one year in between 2015-01-01 00:00:00 and 2016-01-01 00:00:00 or between 2016-01-01 00:00:00 and 2017-01-01 00:00:00, but that's not the same duration (365*24 hours in one case, 366*24 hours in the other)
There's exactly one day between 2017-03-24 12:00:00 and 2017-03-25 12:00:00 or between 2017-03-25 12:00:00 and 2017-03-26 12:00:00, but when those are local time in European countries (that switched to summer time on 2017-03-26), that one day is 24 hours in one case and 23 hours in the other.
In other words a duration that mentions years, months, weeks or days is only meaningful when associated to one of the boundaries it's meant to be applied to (and the timezone). So you cannot convert a number of seconds to such a duration without knowing what time (from or until) it's meant to be applied to (unless you want to use approximate definitions of day (24 hour), month (30*24 hours) or year (365*24 hours)).
To some extent, even a duration in seconds is ambiguous. For simplification, the seconds in the Unix epoch time are defined as the 86400th part of a given Earth day1. Those seconds are getting longer and longer as Earth spins down.
So something like (with GNU date):
d1='2016-01-01 00:00:00' d2='2017-01-01 00:00:00'
eval "$(date -d "@$(($(date -d "$d2" +%s) - $(date -d "$d1" +%s)))" +'
delta="$((%-Y - 1970)) years, $((%-m - 1)) months, $((%-d - 1)) days, %T"')"
echo "$delta"
Or in fish:
date -d@(expr (date -d $d2 +%s) - (date -d $d1 +%s)) +'%Y %-m %-d %T' | \
awk '{printf "%s years, %s months, %s days, %s\n", $1-1970, $2-1, $3-1, $4}'
Would generally not give you something that is correct as the time delta is applied to 1970 instead of the actual start date 2016.
For instance, above, that gives me (in a Europe/London timezone):
1 years, 0 months, 1 days, 01:00:00
instead of
1 years, 0 months, 0 days, 00:00:00
(that may still be a good enough approximation for you).
1 Technically, while a day is 86400 Unix seconds long, network-connected systems typically synchronise their clock to atomic clocks using SI seconds. When an atomic clock says 12:00:00.00, those Unix systems also say 12:00:00.00. The exception is only upon the introduction of leap seconds, where either there's one Unix second that lasts 2 seconds or a few seconds that last a bit longer to smear that extra second upon a longer period.
So, it is possible to know the exact duration (in terms of atomic seconds) in between two Unix time stamps (issued by systems synchronised to atomic clock): get the difference of Unix epoch time between the two timestamps and add the number of leap seconds that have been added in between.
datediff also has a -f %rS to get the number of real seconds in between two dates:
$ dateutils.ddiff -f %S '1990-01-01 00:00:00' '2010-01-01 00:00:00'
631152000
$ dateutils.ddiff -f %rS '1990-01-01 00:00:00' '2010-01-01 00:00:00'
631152009
The difference is for the 9 leap seconds that have been added in between 1990 and 2010.
Now, that number of real seconds is not going to help you to calculate the number of days, as days don't have a constant number of real seconds. There are exactly 20 years between those 2 dates and those 9 leap seconds rather get in the way of arriving at that value. Also note that ddiff's %rS only works when not combined with other format specifiers. Also, future leap seconds are not known long in advance, but even information about recent leap seconds may not be available. For instance, my system doesn't know about the ones after 2012-07-01.
| How can I calculate and format a date duration using GNU tools, not a result date? |
1,518,190,471,000 |
I've tried to find the existing topics about this theme and I found something but it's not the 100% what I'm looking for and my internet connection is bad last few days so I needed to quit searching and post a new thread...
So my problem is I have a .txt file with many lines (over 50000), every line has 5 letter string like this:
KKIUB
SDCVG
KJUTT
NGTHH
WWLEE
XGHTP
NJFRT
PPSFF
ZZZLP
XDRFX
JJJJJ
KIEYW
...
I want all lines in a file that contain two (or more) of the same letter to be deleted. The order of the duplicates isn't important, so any line in which a letter occurs twice must be deleted. Please note that sometimes all 5 letters are the same, sometimes 3, and sometimes only 2 that are not next to each other, e.g. "GOHIG".
KKIUB ---> delete
SDCVG ---> stays
KJUTT ---> delete
NGTHH ---> delete
WWLEE ---> delete
XGHTP ---> stays
NJFRT ---> stays
PPSFF ---> delete
ZZZLP ---> delete
XDRFX ---> delete
JJJJJ ---> delete
KIEYW ---> stays
I'm trying with sed function but was not able to have good results. Also I would like to export it into another .txt file. Any help?
|
sed -e '/\(.\).*\1/d' yourfile > youroutputfile
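The BRE /\(.\).*\1/ matches any line where some character recurs later (\1 is a backreference to the captured character), and d deletes those lines. Running it against the sample data:

```shell
# Lines with any repeated letter match the backreference and are
# deleted; lines whose 5 letters are all distinct pass through.
printf '%s\n' KKIUB SDCVG KJUTT NGTHH WWLEE XGHTP NJFRT PPSFF ZZZLP XDRFX JJJJJ KIEYW |
  sed -e '/\(.\).*\1/d'
# Prints: SDCVG, XGHTP, NJFRT, KIEYW (one per line)
```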
| Delete all lines that contain duplicate letters |
1,518,190,471,000 |
For a project, I would like to be able to use arecord to do both at the same time :
Recording what is passed to the microphone.
Playing it at the same time in the speakers.
In order to do this, I thought about starting with :
arecord -f cd -d numberofseconds -t raw | lame -x - out.mp3
but I don't know how to redirect at the same time the sound to the speakers. Do you have any idea about how I could do this ? Thank you in advance.
|
This is what I have found:
First, enable audio forwarding to speakers with pactl load-module module-loopback latency_msec=1
Then I record all I want using arecord -f cd -t raw | oggenc - -r -o out.ogg (using the mp3 format didn't work)
To finish, I stop audio forwarding using pactl unload-module module-loopback
If you find a way to correctly record as mp3 (using lame) from microphone, don't hesitate to answer this question and tell me. Thank you.
| Record & Play what comes from the microphone at the same time |
1,518,190,471,000 |
I have a C executable that takes in 4 command line arguments.
program <arg1> <arg2> <arg3> <arg4>
I'd like to create a shell script that continually runs the executable with arguments supplied by text files. The idea would be something like this:
./program "$(< arg1.txt)" "$(< arg2.txt)" "$(< arg3.txt)" "$(< arg4.txt)"
Where the arguments supplied for run n would be on line n of each of the files. When I tried doing this, the printf() calls were interfering with each other or some other funny business was going on. I would also be open to a script that takes only one file where the arguments are delimited in some way.
|
while
IFS= read -r a1 <&3 &&
IFS= read -r a2 <&4 &&
IFS= read -r a3 <&5 &&
IFS= read -r a4 <&6
do
./program "$a1" "$a2" "$a3" "$a4" 3<&- 4<&- 5<&- 6<&-
done 3< arg1.txt 4< arg2.txt 5< arg3.txt 6< arg4.txt
That runs the loop until one of the files is exhausted. Replace the &&s with ||s to run it until all the files are exhausted instead (using empty arguments for shorter files).
With GNU xargs, you could also do:
paste -d '\n' arg[1-4].txt | xargs -n 4 -r -d '\n' ./program
(though beware ./program's stdin would then be /dev/null)
| Pass multiple command line arguments to an executable with text files |
1,518,190,471,000 |
I need to remotely install a program on a Linux computer. I do:
./configure
make
make install
However I seem to get issues when I run ./configure (it's a separate problem) where the configuration screen essentially freezes; it doesn't move past a certain check. I need to stop the configuration so I do Ctrl+z, and that lets me use the terminal again.
However, it seems to me the process does not stop. I see the config.log file continue to grow in bytes (gets to be 40+MB). This is a problem since now the process is ongoing and creating this log file that I don't know to what size it will grow to.
I need to reboot the computer in order to stop the configure script now working in the background. I can't find its PID when I use the top command to view the processes.
How can I stop ./configure script through the terminal successfully?
|
Ctrl+Z suspends the current foreground process (by sending it the SIGTSTP signal), which returns you to your shell. From the shell, the bg command resumes the suspended process in the background while the fg command brings it back to the foreground. Try Ctrl+C instead, which sends SIGINT, killing the process. SIGINT (unlike SIGKILL) can be caught, so some software reacts to it in other ways, like cleaning up before exiting.
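If you've already suspended it with Ctrl+Z, you can still kill the stopped job from the shell instead of rebooting (assuming it's job 1; check with jobs). Note that a stopped process doesn't act on SIGTERM until it's continued:

```shell
jobs              # list suspended jobs, e.g. "[1]+ Stopped ./configure"
kill %1           # send SIGTERM (stays pending while the job is stopped)
kill -CONT %1     # continue the job so the pending SIGTERM is delivered
# Last resort: kill -KILL %1  (SIGKILL acts even on stopped processes)
```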
| How to stop ./configure script? |
1,518,190,471,000 |
I currently have 2000 user directories and each of these directories have sub directories.
user_1
---> child1
---> child2
user_29
---> child37
---> child56
etc
I need to loop through all of the user folders and then through each of the child folders and rename the child folders with a prefix 'album_'. My end structure should be:
user_1
---> album_child1
---> album_child2
user_29
---> album_child37
---> album_child56
I was able to rename the user folders with the following command:
for f in * ; do mv "$f" user_"$f" ; done;
I have been trying several different approaches to rename the sub directories such as:
find . -maxdepth 2 -mindepth 2 -type d -exec mv '{}' album_'{}'
The first part of the above query returns all the directories that I need to rename ('find . -maxdepth 2 -mindepth 2 -type d').
How do I access the directory name in the -exec function and then append a prefix to it?
|
Try:
find . -maxdepth 2 -mindepth 2 -type d -execdir bash -c 'mv "$1" "./album_${1#./}"' mover {} \;
Notes:
To form the name for the target directory, we need to remove the initial ./ that will be in the directory name. To accomplish that, we use the shell's prefix removal: ${1#./}.
We use -execdir rather than -exec because it is more robust in case directories are renamed while the command runs.
In the expression bash -c '...' mover {}, bash runs the command in single quotes with $0 assigned to mover and $1 assigned to the file name, {}. $0 is unused unless the shell needs to write an error message.
We don't need any bash features for this code. Any POSIX shell could be used.
If you want to test the command before you run it to make sure it does what want, add echo like this:
find . -maxdepth 2 -mindepth 2 -type d -execdir bash -c 'echo mv "$1" "./album_${1#./}"' mover {} \;
Notes regarding -exec mv '{}' album_'{}'
Don't quote the {}. find handles that.
Because the file name provided by find will start with ./, the form album_{} will not succeed.
Example
Let's consider these directories:
$ ls *
user1:
child1 child2
user2:
child1 child2
Now. let's run our command and see the new directory names:
$ find . -maxdepth 2 -mindepth 2 -type d -execdir bash -c 'mv "$1" "./album_${1#./}"' mover {} \;
$ ls *
user1:
album_child1 album_child2
user2:
album_child1 album_child2
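Since the hierarchy is exactly two levels deep, a plain shell loop is an alternative sketch (note it would double-prefix the directories if run twice):

```shell
# Glob the second-level directories, then split each path into its
# parent and child components with parameter expansion.
for d in */*/; do
  d=${d%/}                              # e.g. user1/child1
  mv -- "$d" "${d%/*}/album_${d##*/}"   # -> user1/album_child1
done
```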
| For all directories - rename all subdirectories with a prefix |
1,518,190,471,000 |
I have a long list of data files that I need to copy over to my server, they have the names
data_1.dat
data_2.dat
data_3.dat
...
data_100.dat
Starting from data_1.dat, I would like to get all the files where the number is increased by 3, i.e. data_4.dat, data_7.dat, data_10.dat, ...
Is there a way to specify this? Right now I am doing in manually using get data_4.dat, but there must be a way to automatize this.
|
On Linux:
printf -- '-get data_%d.dat\n' $(seq 1 3 100) | sftp -b - [email protected]
On BSD (with no seq(1) in sight):
printf -- '-get data_%d.dat\n' $(jot 100 1 100 3) | sftp -b - [email protected]
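Equivalently you can generate a batch file first, inspect it, and then hand it to sftp (user@host is a placeholder):

```shell
# Build the list of get commands for data_1.dat, data_4.dat, ...
seq 1 3 100 | awk '{ printf "get data_%d.dat\n", $1 }' > batch.txt
head -3 batch.txt             # sanity-check the generated commands
sftp -b batch.txt user@host   # run them in one session
```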
| sftp: command to select desired files to copy |
1,518,190,471,000 |
I am using an ssh command executor in Java which runs the command and gets the output in stderr, stdout and an integer exit value. I am trying to run a command with a timeout like,
timeout 5s COMMAND
Is there a way to get a response in the stderr or the stdout so that I can know whether the command was timed out or not?
|
From man timeout:
If the command times out, and --preserve-status is not set, then exit
with status 124. Otherwise, exit with the status of COMMAND. If no
signal is specified, send the TERM signal upon timeout. The TERM sig‐
nal kills any process that does not block or catch that signal. It may
be necessary to use the KILL (9) signal, since this signal cannot be
caught, in which case the exit status is 128+9 rather than 124.
So...
timeout 5s command; [ "$?" -eq 124 ] && echo 'timed out'

(Beware that timeout 5s command || [ $? -eq 124 ] && echo ... would also print when command succeeds, since A || B && C groups as (A || B) && C.)
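So from your Java code you don't need to parse stderr at all; the integer exit value alone tells you whether the command timed out. A quick check of the convention (GNU coreutils timeout assumed):

```shell
# sleep 5 is cut off after 1 second, so timeout reports 124.
timeout 1 sleep 5
echo "exit status: $?"   # prints "exit status: 124"
```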
| How to get the output of timeout command without using a shell script |
1,518,190,471,000 |
I am trying to get libnotify (notify-send) to pop-up a notification once a certain character is found while I tail a log file.
Without grep it works fine ...
Here is my code:
tail -f /var/log/mylogfile | grep ">" | while read line; do notify-send "CURRENT LOGIN" "$line" -t 3000; done
When I include grep it passes nothing to notify-send. The code above I modified from https://ubuntuforums.org/showthread.php?t=1411620
Also, how can I change the font size?
|
This page explains grep and output buffering, in short you want to use the --line-buffered flag:
tail -f /var/log/mylogfile | grep --line-buffered ">" | while read line; do notify-send "CURRENT LOGIN" "$line" -t 3000; done
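If the filter you need has no --line-buffered option, GNU coreutils' stdbuf -oL can force line buffering on it instead (e.g. stdbuf -oL sed ...). Note that buffering only bites on endless input like tail -f; on finite input everything is flushed at EOF, which is why a quick test like this works either way:

```shell
# Finite input is flushed at EOF, so the match appears immediately;
# with tail -f there is no EOF, which is why --line-buffered matters.
printf '%s\n' '> alice logged in' 'kernel: noise' |
  grep --line-buffered '>'
# Prints: > alice logged in
```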
About the font, this AskUbuntu question mentions it's not officially possible, but describes a tool notifyosdconfig that allows some modifications.
| libnotify with bash and grep |
1,518,190,471,000 |
I have a bash script for copying two files from a remote machine (that I cannot control), stored in a path that needs root access. Here it is:
ssh administrator@host "mkdir ${DIR}"
ssh -t administrator@host "sudo su - root -c 'cp /path/for-root-only/data1/${FILENAME} ${DIR}/'"
ssh administrator@host "mv ${DIR}/${FILENAME} ${DIR}/data1-${FILENAME}"
ssh -t administrator@host "sudo su - root -c 'cp /path/for-root-only/data2/${FILENAME} ${DIR}/'"
scp administrator@host:$DIR/{${FILENAME},data1-${FILENAME}} .
ssh administrator@host "rm -r ${DIR}"
The script prompt for the same password a lot of time. I tried to merge all commands through here document like this for ssh -t:
ssh -t administrator@host << EOF
mkdir ${DIR}
sudo su - root -c 'cp /path/for-root-only/data1/${FILENAME} ${DIR}/'
mv ${DIR}/${FILENAME} ${DIR}/data1-${FILENAME}
sudo su - root -c 'cp /path/for-root-only/data2/${FILENAME} ${DIR}/'
EOF
scp administrator@host:$DIR/{${FILENAME},data1-${FILENAME}} .
ssh administrator@host "rm -r ${DIR}"
but there is this warning:
Pseudo-terminal will not be allocated because stdin is not a terminal.
I would like to ask if there is a proper way to write that script to minimize the number of password prompting
|
You don't need to do it as a HERE document (which is what the << stuff does).
You can simply do ssh remotehost "command1; command2 ; command3"
e.g.
% ssh localhost "date ; uptime ; echo hello"
sweh@localhost's password:
Tue Jul 19 08:07:48 EDT 2016
08:07:48 up 15 days, 31 min, 3 users, load average: 0.33, 0.33, 0.40
hello
The scp however, won't easily merge that way.
So you may want to look into using public key authentication instead of password authentication (also known as "ssh keys"). That's normally how things like this are automated.
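Another option that keeps your commands unchanged is OpenSSH connection multiplexing: the first ssh authenticates, and every later ssh/scp to that host reuses the open connection, so you type the password once. A sketch for ~/.ssh/config (the Host alias and timeout are placeholders):

```
Host host
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m
```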
| Execute multiple ssh commands with different switch |
1,518,190,471,000 |
I would like to download and run a script in the background so the task is independent of the shell and its exit. Moreover this script should be run as sudo, using:
echo MY_PWD | sudo -u MY_USER -S ...
So it just needs a single line of code in my SSH-Session, which does the authentication and creates the background task including multiple lines. Furthermore the setup-script should be removed after the successful or unsuccessful execution of the NEW_SCRIPT_NAME.sh
The lines of code are:
wget https://URL_TO_SCRIPT.sh -O NEW_SCRIPT_NAME.sh
sed -i 's/\\r$//' NEW_SCRIPT_NAME.sh
chmod +x NEW_SCRIPT_NAME.sh
sudo nohup NEW_SCRIPT_NAME.sh &
rm NEW_SCRIPT_NAME.sh
I tried to this:
echo raspberry | sudo -u pi -S wget https://URL_TO_SCRIPT.sh -O NEW_SCRIPT_NAME.sh && sed -i 's/\\r$//' NEW_SCRIPT_NAME.sh && chmod +x NEW_SCRIPT_NAME.sh && nohup ./NEW_SCRIPT_NAME.sh & && rm NEW_SCRIPT_NAME.sh
This causes the following error:
-bash: syntax error near unexpected token `&&'
Which can be fixed by removing the & in the nohup line, but this will cause the script to be executed in foreground not in background, which is necessary for my purposes.
|
This is wrong syntax in bash:
nohup ./NEW_SCRIPT_NAME.sh & && rm NEW_SCRIPT_NAME.sh
From Shellcheck:
Line 1:
nohup ./NEW_SCRIPT_NAME.sh & && rm NEW_SCRIPT_NAME.sh
^-- SC1070: Parsing stopped here. Mismatched keywords or invalid parentheses?
You can not simply run something & && something, because putting the first command into the background prevents you from waiting for the script return code.
You can either run the whole pipeline under nohup in the background (preferably as a script):
nohup ./run.sh &
or if you really want to have everything in one command, you need to wait for the result:
nohup ./NEW_SCRIPT_NAME.sh & wait && rm NEW_SCRIPT_NAME.sh
which will wait for the script to finish, before it will remove it.
| Submit password with sudo and execute script with nohup |
1,518,190,471,000 |
I have a question about groupadd, specifically with password (-p). It says it is not recommended, "This option is not recommended because the password (or encrypted password) will be visible by users listing the processes." Can someone give me a broader explanation? How will a user see the password when viewing the processes, and if that is the case why is this used?
|
It's possible for a user on the system (or a monitoring program that captures ps output) to see the password as a parameter to the groupadd process -- if the user or monitor "happens" to run ps while the groupadd process is running. The risk of that happening is small (the groupadd process will likely finish running fairly quickly), but non-zero.
See an example for yourself with this contrived example; execute these two lines within 10 seconds of each other:
$ sh -c "echo groupadd -p password-here > /dev/null; sleep 10" &
$ ps -ef | grep password
| groupadd -p Not Recommended? |
1,518,190,471,000 |
I'm investigating the relationship between bash and emacs shorcuts. Someone told me that the reason why they're similar is that bash uses emacs as its command line interpreter. However, I haven't found any evidence that supports this thesis.
I know there are "edits modes" in bash and one of them is emacs. But, is it true that the command line interpreter is implemented on emacs?
Please note I'm referring to the actual implementation and not to the similarities between them.
|
The short answer is "no". bash's command-line processing is implemented mostly in bashline.c and its copy of readline, which supports vi-like and Emacs-like behaviours. Emacs itself is written mostly in Emacs Lisp; using it to implement bash would be quite involved since Emacs Lisp isn't designed to be used without Emacs.
| Is bash command line interpreter implemented on emacs? |
1,518,190,471,000 |
I have a directory (let's call it "Movies") which contains many files and folders. I have a long list of file names in a .csv file (around 4000 entries) which refer to files which are located somewhere within the Movies directory sub-folders.
How can I search the Movies directory recursively for the files listed in the .csv and copy them to a separate directory ("Sorted_Media")?
EDIT: Hi, I have attached an example section of the csv. There are two columns of data (from a spreadsheet), which are separated by a comma delimiter in the .csv. The first column of file names are the ones that I need to search (i.e. NOT the KA* file names). Some of the file names do have spaces so this is something which needs to be considered as someone else pointed out.
preservation stocklshots - 16ln916-963.mp4,KA0003773-002.mp4
Preservation Stockshots_ 16LN916-963.mp4,KA0003773-001.mp4
Preservation Stockshots_16LN679-738.mp4,KA0003775-002.mp4
PreservationStockshots_16LN679_738.mp4,KA0003775-001.mp4
Preservation Stockshots_16LN01-52.mp4,KA0003776-002.mp4
Preservation_Stockshots_16LN01_52.mp4,KA0003776-001.mp4
Preservation Stockshots_LN566-LN624.mp4,KA0004507-001.mp4
PreservationStockShots_LN566_LN624.mp4,KA0004507-002.mp4
Preservation Stockshots_LN675-LN705.mp4,KA0004508-001.mp4
PreservationStockshots_LN675_LN705.mp4,KA0004508-002.mp4
Preservation Stockshots_LN706-752.mp4,KA0004509-001.mp4
PreservationStockshots_LN706_LN752.mp4,KA0004509-002.mp4
Preservation Stockshots_LN930-LN972.mp4,KA0004511-001.mp4
PreservationStockShots_LN930_LN972.mp4,KA0004511-002.mp4
Preservation Stockshots_LN1023-LN1059.mp4,KA0004513-001.mp4
PreservationStockShots_LN1023_LN1059.mp4,KA0004513-002.mp4
Preservation Stockshots_LN1152-LN1220.mp4,KA0004515-001.mp4
PreservationStockShots_LN1152_LN1220.mp4,KA0004515-002.mp4
Preservation Stockshots_16LN320-379.mp4,KA0004517-001.mp4
Preservation_Stockshots_16LN320_379.mp4,KA0004517-002.mp4
|
while IFS=, read -r file rest
do
find /path/to/movies_dir -name "${file}" -exec cp '{}' /path/to/Sorted_Media/ \;
done < mylist.csv
That assumes file names don't contain wildcard characters (?, [, * or backslash).
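With ~4000 names, running find once per name means 4000 full traversals of the Movies tree. A one-traversal sketch instead extracts the first CSV column once and checks each file's basename against it (grep -qxF matches the whole line exactly; watch for CRLF line endings if the CSV was made on Windows):

```shell
# Build the list of wanted basenames from column 1 of the CSV.
cut -d, -f1 mylist.csv > names.txt
# Walk the tree once, copying any file whose basename is in the list.
find /path/to/movies_dir -type f -exec sh -c '
  for f do
    base=${f##*/}
    if grep -qxF -- "$base" names.txt; then
      cp -- "$f" /path/to/Sorted_Media/
    fi
  done' sh {} +
```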
| Search a directory recursively for files listed in a csv, and copy them to another location |
1,518,190,471,000 |
I am trying to make a shell script to print the amount of time user was logged into the system but I encountered a too many arguments error. I tried many methods from the internet but none worked. Can someone spot the mistake?
#!/bin/bash
lt=`who | grep "jeevansai" | cut -c 35-39`
lh=`echo $lt | cut -c 1-2`
lm=`echo $lt | cut -c 4-5`
ld=`who | grep "jeevansai" | cut -c 32-34`
ch=`date +%H`
cm=`date +%M`
cd=`date +%d`
fun()
{
if [ $cm -gt $lm ]
then
{
sm=`expr $cm - $lm`
sh=`expr $ch - $lh`
}
else
{
sm=`expr 60 - $lm - $cm`
sh=`expr $ch - $lh - 1`
}
fi
exit 1
}
if [ $ld -gt $cd ]
then
{
if [ $ch -gt $lh ]
then
{
fun
}
else
{
sh=`expr 24 - $lh + $ch`
sm=`expr 60 - $lm + $cm`
}
fi
}
else
fun
fi
echo "$sh hr $sm min"
Output of bash -x c.sh:
++ who
++ grep jeevansai
++ cut -c 35-39
+ lt='22:27
23:18'
++ echo 22:27 23:18
++ cut -c 1-2
+ lh=22
++ echo 22:27 23:18
++ cut -c 4-5
+ lm=27
++ who
++ grep jeevansai
++ cut -c 32-34
+ ld='31
31 '
++ date +%H
+ ch=23
++ date +%M
+ cm=24
++ date +%d
+ cd=31
+ '[' 31 31 -gt 31 ']'
c.sh: line 24: [: too many arguments
+ fun
+ '[' 24 -gt 27 ']'
++ expr 60 - 27 - 24
+ sm=9
++ expr 23 - 22 - 1
+ sh=0
+ exit 1
|
You are assuming that the output of who | grep jeevansai will be a single line, which is wrong.
++ who
++ grep jeevansai
++ cut -c 32-34
+ ld='31
31 '
This is telling you that the command
ld=`who | grep "jeevansai" | cut -c 32-34`
set the variable ld to "31 31", rather than to a single number as you were expecting. Later, you try to do math on it ...
if [ $ld -gt $cd ]
... expands to ...
if [ 31 31 -gt 31 ]
and that's too many arguments to [.
What you need to do is take the minimum of all the dates that come back from who. Unfortunately, date arithmetic is not easy in shell (note that your program currently ignores the month field altogether). I'd personally reach for Perl or Python instead.
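If you only care about the first login line who reports, a minimal patch is to stop after one line — a sketch reusing the script's own column positions (32-34 and 35-39, which depend on who's output format on your system):

```shell
# Keep only the first matching line before cutting columns, so the
# later arithmetic sees a single value instead of "31 31".
lt=$(who | grep "jeevansai" | head -n 1 | cut -c 35-39)
ld=$(who | grep "jeevansai" | head -n 1 | cut -c 32-34)
```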
EDIT: For anyone who might be wondering why who prints more than one line of output for a single user: on a modern Unix workstation (OSX included) every shell window you have open will usually get its own entry, because each shell window allocates a pseudoterminal, and the utmp database that who uses, counts each active terminal (pseudo- or otherwise) as its own login. Similarly, screen and tmux will allocate one pseudoterminal for each pane. You might also have a utmp entry for the entire graphical session. All this stuff was designed in the 1970s and hasn't been changed much since. For example, here's what it looks like on my Mac when I have two shell windows open:
$ who
zwol console Aug 18 09:59
zwol ttys000 Aug 19 09:49
zwol ttys001 Aug 19 10:35
| error ./c.sh: line 24: [: too many arguments in shell program |
1,518,190,471,000 |
How can I make the openssl s_server reply to every HTTP(S) request directly from the command line or the server itself (the server is running CentOS)? Is that possible?
from the -help command for the openssl s_server I see that there is the -HTTP flag which should be used for :
-WWW - Respond to a 'GET /<path> HTTP/1.0' with file ./<path>
-HTTP - Respond to a 'GET /<path> HTTP/1.0' with file ./<path>
with the assumption it contains a complete HTTP response.
Could this be the answer to my problem? If so, since I couldn't find any example of how to use this flag, I would be really glad to know exactly how to use it.
the command that I'm trying to run is as simple as:
openssl s_server -key key.pem -cert cert.pem -msg
Thanks in Advance!
|
With -WWW or -HTTP, s_server acts as a static content HTTPS server using files in the current directory. Here's my full set up for demonstration.
$ openssl req -x509 -nodes -newkey rsa -keyout key.pem -out cert.pem -subj /CN=localhost
$ echo 'hello, world.' >index.txt
$ openssl s_server -key key.pem -cert cert.pem -WWW
Using default temp DH parameters
Using default temp ECDH parameters
ACCEPT
s_server is now waiting for HTTPS requests on port 4433. From another shell you can make a request to s_server using curl.
$ curl -kv https://localhost:4433/index.txt
* Hostname was NOT found in DNS cache
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 4433 (#0)
* successfully set certificate verify locations:
* CAfile: none
CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
* SSLv3, TLS handshake, Server hello (2):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS handshake, Server key exchange (12):
* SSLv3, TLS handshake, Server finished (14):
* SSLv3, TLS handshake, Client key exchange (16):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* Server certificate:
* subject: CN=localhost
* start date: 2015-06-01 15:29:02 GMT
* expire date: 2015-07-01 15:29:02 GMT
* issuer: CN=localhost
* SSL certificate verify result: self signed certificate (18), continuing anyway.
> GET /index.txt HTTP/1.1
> User-Agent: curl/7.38.0
> Host: localhost:4433
> Accept: */*
>
* HTTP 1.0, assume close after body
< HTTP/1.0 200 ok
< Content-type: text/plain
<
hello, world.
* Closing connection 0
* SSLv3, TLS alert, Client hello (1):
$ curl -k https://localhost:4433/not-existence
Error opening 'not-existence'
140226499298960:error:02001002:system library:fopen:No such file or directory:bss_file.c:169:fopen('not-existence','r')
140226499298960:error:2006D080:BIO routines:BIO_new_file:no such file:bss_file.c:172:
On each request s_server prints the requested path and waits for the next request again.
FILE:index.txt
ACCEPT
If you want to run a script on requests (as CGI does), you might have to use another tool like socat.
$ echo 'echo "Your request is $(head -1)"' > server.sh
$ socat openssl-listen:4433,cert=cert.pem,key=key.pem,verify=0,reuseaddr,fork exec:"bash server.sh"
The result is:
$ curl -k https://localhost:4433/index.txt
Your request is GET /index.txt HTTP/1.1
| How to let openssl respond to http/s get directly from command line while listenning |
1,518,190,471,000 |
I created a text file and put some email addresses in it. Then I used grep to find them. Indeed it worked:
# pattern="^[a-zA-Z0-9]+@[a-zA-Z0-9]+\.[a-z]{2,}"
# grep -E $pattern regexfile
but only as long I kept the -E option for an extended regular expression. How do I need to change the above regex in order to use grep without -E option?
|
Be aware that matching email addresses is a LOT harder than what you have. See
an excerpt from the Mastering Regular Expressions book
However, to answer your question, for a basic regular expression, your quantifiers need to be one of *, \+ or \{m,n\} (with the backslashes)
pattern='^[a-zA-Z0-9]\+@[a-zA-Z0-9]\+\.[a-z]\{2,\}'
grep "$pattern" regexfile
You need to quote the pattern variable
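A quick sanity check with sample addresses:

```shell
pattern='^[a-zA-Z0-9]\+@[a-zA-Z0-9]\+\.[a-z]\{2,\}'
# Only the first line satisfies alnum+, @, alnum+, dot, 2+ lowercase.
printf '%s\n' 'user1@example.com' 'not an address' '@nope.com' |
  grep "$pattern"
# Prints: user1@example.com
```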
| What is the email matching regex in basic regex for grep? |
1,518,190,471,000 |
I would please like some help with this command, because I didn't find anything in the documentation that covers everything I want.
I have some variables that are global, so I would prefer to keep them out of awk.
chr="chr10"
inpfile="exome.bed"
outfile="exons_chr.bed" -> which should be composed according to chr,
so: "exons_" $chr ".bed"
and I want to apply awk in a general form, so that for any "chr" input by the user and any "infile", I could have a one-line command to just create an output according to the following condition:
awk '$1=="$chr" $infile > "exons_"$chr".bed"
So, I also want to compose the output filename each time.
When I run it with specific values it works. How can I make it work with variables, to be more general, so that I can use it in a script?
Is there a way to do it in more lines maybe, like :
awk ' { if ($1=="$chr") -> copy lines to outfile }' infile
|
You have various options...
To pass shell variables to awk and use them in string comparison and let the shell create the file:
awk -v chr="$chr" '$1==chr' "$infile" > "exons_${chr}.bed"
To additionally let awk do the output into the file:
awk -v chr="$chr" '$1==chr { print > "exons_" chr ".bed" }' "$infile"
| awk with variables in condition and in output redirection file |
1,518,190,471,000 |
Is there functionality in Linux that allows you to reference multiple files as one?
For example:
linux.txt.0, linux.txt.1, linux.txt.2, linux.txt.3 can be seen as separate files or as linux.txt
A useful example:
echo linux.txt
should print the contents of linux.txt.0, linux.txt.1, linux.txt.2, linux.txt.3 in that particular order.
I don't want to use cat with multiple arguments because it loads the files in memory and usually my files are huge.
|
The easiest way to do that kind of thing is to use the shell's globbing and/or brace expansion features:
cat linux.txt.{0..3}

or

cat linux.txt.*
As others have explained, cat does not load the whole file into memory, it will just read a few bytes from it, print them to screen and repeat until everything has been read.
| Linux split separate files on disk but see as one |
1,518,190,471,000 |
I have a log file that gets updated when my script runs. The script will insert "Start of script" text when it starts and "End of script" text when it finishes running. I am trying to capture the text between the "Start of script" and "End of script". The most recent entries are at the bottom of the log.
Using the following is close, but it is giving me all the occurrences in the log as shown below.
tac /opt/novell/JDBCFanout/activemqstatus.log |awk '/End of script/,/Start of script/'|tac
2014-09-09 12:30:42 - Start of script
2014-09-09 12:30:42 - Monitoring Reset script for the ActiveMQ.
2014-09-09 12:30:42 - The ActiveMQ value is not 1, the ActiveMQ services will not be restarted. The current value is 0.
2014-09-09 12:31:35 - The Fanout driver state is: 0
2014-09-09 12:33:32 - Sleeping for 10 seconds before checking the status of the Fanout driver.
2014-09-09 12:35:05 - The Fanout driver state is: 1
2014-09-09 12:35:05 - ERROR: The Fanout driver failed to start. The Fanout driver needs to be manually restarted.
2014-09-09 12:35:05 - End of script
2014-09-09 13:17:17 - Start of script
2014-09-09 13:17:17 - Reset script for the ActiveMQ.
2014-09-09 13:17:17 - The ActiveMQ flag is 1, shutting down the ActiveMQ services and the Fanout driver.
2014-09-09 13:17:17 - The ActiveMQ flag is now set to 0.
2014-09-09 13:17:17 - Stopping the Fanout driver.
2014-09-09 13:17:27 - The script is now cleaning up the pid's.
2014-09-09 13:17:37 - The script is now archiving the ActiveMQ Logs.
2014-09-09 13:17:37 - No files older than 60 days.
2014-09-09 13:17:47 - The script is now starting the ActiveMQ services.
2014-09-09 13:19:57 - The ActiveMQ service is running,
2014-09-09 13:19:57 - The ActiveMQ Oracle service is running.
2014-09-09 13:19:57 - The ActiveMQ MSSQL service is running.
2014-09-09 13:19:57 - The ActiveMQ Queue Manager service is running.
2014-09-09 13:19:58 - Sleeping for 10 seconds before checking the status of the Fanout driver.
2014-09-09 13:20:09 - The Fanout driver successfully restarted.
2014-09-09 13:20:09 - End of script
Specifically I would like for the output to look like this, and not the all occurences as shown above.
2014-09-09 13:17:17 - Start of script
2014-09-09 13:17:17 - Reset script for the ActiveMQ.
2014-09-09 13:17:17 - The ActiveMQ flag is 1, shutting down the ActiveMQ services and the Fanout driver.
2014-09-09 13:17:17 - The ActiveMQ flag is now set to 0.
2014-09-09 13:17:17 - Stopping the Fanout driver.
2014-09-09 13:17:27 - The script is now cleaning up the pid's.
2014-09-09 13:17:37 - The script is now archiving the ActiveMQ Logs.
2014-09-09 13:17:37 - No files older than 60 days.
2014-09-09 13:17:47 - The script is now starting the ActiveMQ services.
2014-09-09 13:19:57 - The ActiveMQ service is running,
2014-09-09 13:19:57 - The ActiveMQ Oracle service is running.
2014-09-09 13:19:57 - The ActiveMQ MSSQL service is running.
2014-09-09 13:19:57 - The ActiveMQ Queue Manager service is running.
2014-09-09 13:19:58 - Sleeping for 10 seconds before checking the status of the Fanout driver.
2014-09-09 13:20:09 - The Fanout driver successfully restarted.
2014-09-09 13:20:09 - End of script
Thank you in advance for any help you can share!
|
Perhaps a little state machine:
tac file |
awk '/End of script/ {p=1} p {print} p && /Start of script/ {exit}' |
tac
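If you'd rather avoid the two tac passes (e.g. on systems without GNU tac), the same result can be obtained in a single forward pass by keeping only the most recent complete block — a sketch:

```shell
# Reset the buffer at each "Start of script", accumulate every line,
# and snapshot the buffer at each "End of script"; END prints the
# last complete block seen.
awk '/Start of script/ { buf = "" }
     { buf = buf $0 ORS }
     /End of script/ { last = buf }
     END { printf "%s", last }' /opt/novell/JDBCFanout/activemqstatus.log
```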
| Use awk to find first occurrence |
1,518,190,471,000 |
What's wrong with this command:
nmcli c up uuid "$nmcli -t -f uuid c"
How can I fix it?
"$nmcli -t -f uuid c" is a uuid needed after nmcli c up uuid.
|
nmcli c up uuid "$(nmcli -t -f uuid c)"
Use backticks or $(cmd) for command substitution
Note that nmcli -t -f uuid c can print out more than one uuid. I didn't test it yet, but the command above might not work then. If so, you should make sure that you are using the right uuid like that:
nmcli c up uuid `nmcli -t -f name,uuid c | awk -F':' '/^YOURWIFINAME:/{print $2}'`
| Using a string parameter in terminal |
1,518,190,471,000 |
I have observed that there are often subtle and often annoying differences between the default versions of certain command line tools like paste and sed on modern distributions of Linux versus OSX. This leads to answers that do not work on OS X even though they will work on almost any Linux distribution.
So I wonder: is there a standard way to get the full set of OS X versions of the standard Unix tools on modern Linux distributions, so we can test solutions for correct functionality on OS X without having a copy of it?
|
As mentioned in a comment, Apple makes the source code of its versions of these tools available. Many common commands are in the "shell_cmds" package, while paste and sed are in "text_cmds". You can get the source code and compile.
Virtually all of them work fine on Linux systems, although you often have to jump into the source and remove explicitly FreeBSD-specific tests, particularly the __FBSDID("..."); line which is in almost every file. I don't know of any distribution that packages those up. Many distributions do package BSD versions of some common tools, which are very close (often identical) to the OS X versions, although they don't usually include things like paste and sed. For example, Debian includes eight packages called bsdX (none of which have either of those commands). For Apple's versions specifically, or for the many tools that aren't included in those, they're not hard to build yourself when you need them from the instructions above.
All of that said, the common subset of GNU and BSD tools is the POSIX specification, where all the basic commands are defined. You can look up the paste command or sed and see which arguments it is guaranteed to accept everywhere. This is the "morally correct" way of writing portable code: both GNU tools and BSD tools have extensions, but if you stay within POSIX your code will work on both. So check that you're always using only POSIX arguments and then what you say should work on OS X and other systems equally.
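As a concrete example, GNU sed's -i (in-place) option is one of those extensions: it is not in POSIX, and the OS X version needs a mandatory suffix argument (-i ''). A portable sketch sticks to plain redirection; config.txt is a placeholder file:

```shell
# Portable "in-place" edit: POSIX sed has no -i, so write to a temp file and rename.
sed 's/foo/bar/g' config.txt > config.txt.tmp && mv config.txt.tmp config.txt
```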
| Is it possible to obtain OS X versions of Unix tools on modern Linux distributions (like Ubuntu or RedHat)? |
1,518,190,471,000 |
I am trying to pass arguments to a bash script and then to a php script, I have literally looked at 30+ links, and tried over a dozen examples, and I for whatever reason have not been able to get the following to work, I am seriously frustrated, any help is so very much appreciated.
For the sake of this question, lets say I have the following bash script (test.sh)
#!/usr/bin/env bash
/usr/bin/php test.php $@
and I have the following PHP script (test.php)
<?php
print_r($argv);
and I am trying to execute the bash script with the following arguments
./test.sh hello world "how are you"
the results of the above is the following
Array
(
[0] => test.php
[1] => hello
[2] => world
[3] => how
[4] => are
[5] => you
)
and I am looking for the results to be
Array
(
[0] => test.php
[1] => hello
[2] => world
[3] => how are you
)
Any ideas are greatly appreciated... I am banging my head against the desk....
|
It would have been enough to look at the QUOTING section of bash's man page (which points to the PARAMETERS section, where this is explained):
/usr/bin/php test.php "$@"
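A quick way to see the difference is to count the arguments each form actually passes along; printf '%s\n' prints one line per argument it receives:

```shell
# "$@" preserves each original argument; bare $@ is re-split on whitespace.
set -- hello world "how are you"
printf '%s\n' "$@" | wc -l   # prints 3: "how are you" stays one argument
printf '%s\n' $@   | wc -l   # prints 5: it was split into three words
```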
| Pass bash script arguments in double quotes to php cli script |
1,518,190,471,000 |
Linux supports sparse files, aka 'files with holes'.
Note the following commands:
alias mystat='stat -c "%n: %B*%b blocks %s bytes"'
dd if=/dev/zero bs=1024k seek=4096 count=0 of=file-with-holes
ls -l file-with-holes
-rw-r--r-- 1 root root 4294967296 Feb 25 18:33 file-with-holes
mystat file-with-holes
file-with-holes: 512*0 blocks 4294967296 bytes
cp file-with-holes other-file
mystat other-file
other-file: 512*0 blocks 4294967296 bytes
cat file-with-holes > file-without-holes
mystat file-without-holes
file-without-holes: 512*8388608 blocks 4294967296 bytes
The cat command removed all holes from the file.
Is there a way to get the holes back?
|
cp --sparse=always file-without-holes another-file-with-holes
Example:
$ cp --sparse=always file-without-holes another-file-with-holes
$ du --apparent-size another-file-with-holes
16384 another-file-with-holes
$ du another-file-with-holes
0 another-file-with-holes
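On Linux there is also an in-place option, assuming the filesystem supports hole punching: util-linux's fallocate can deallocate the zero-filled ranges without making a copy:

```shell
# Dig holes in place: zeroed blocks are deallocated, the apparent size is unchanged.
fallocate --dig-holes file-without-holes
```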
| Looking for a "undo" for file with holes (GNU) |
1,518,190,471,000 |
I'm using the default terminal on Ubuntu 12.04. When I'm doing something at the prompt (as opposed to editing in vi), scrolling starts when the text reaches the bottom of the screen. I don't like that, because I have to keep my eyes at the bottom of the screen. I would prefer an option to start scrolling things up once the text reaches a particular fraction of the screen height. Currently I just clear the screen (Ctrl+L), but it's tedious to do that every time. I don't like resizing the terminal because the background distracts me, and I'd have to resize it in every new session. Is there some way to start scrolling when the prompt fills a particular fraction of the screen (say 70%)?
|
I'm using gnome-terminal as my console, and it respects the VT100 "set scrolling region" control sequence (DECSTBM).
$ cat setscroll.sh
function min(){
if [[ $1 -le $2 ]]; then echo $1; else echo $2; fi
}
function max(){
if [[ $1 -ge $2 ]]; then echo $1; else echo $2; fi
}
function setscrollregion(){
CLR="\033[2J"
SRGN="\033[1;"$1"r"
echo -ne $CLR$SRGN
}
function calcline(){
set `stty size` $1 # ;echo height=$1 width=$2 perc=$3
bline=$(( ($1 * $3 ) / 100 )) # calculate bottom line
bline=$( min $bline $1) # max is screen height
bline=$( max 5 $bline) # min is 5 lines customise as you wish
echo $bline
}
setscrollregion $(calcline $1)
Then,
$ bash ./setscroll.sh 50
will set the scroll region to 50% of height
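For reference, the escape sequence the script relies on (DECSTBM, "set top and bottom margins") can be tried by hand: ESC [ top ; bottom r sets the region, and ESC [ r with no parameters resets it:

```shell
# Restrict scrolling to the top 12 lines of the terminal:
printf '\033[2J\033[1;12r'   # clear the screen, then set scroll region to lines 1-12
# When done, reset the scroll region to the whole screen:
printf '\033[r'
```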
| Start scrolling command prompt when filled until a particular fraction |
1,518,190,471,000 |
I have a largish directory with filenames formatted like
Some_Folder-(FOL001)-clean
what I'm trying to do is display the pattern between the brackets at the start like
FOL001 Some_Folder-(FOL001)-clean
so it can be piped into sort
So far what I have is
ls | sed -n -e 's/.*\(\-([A-Z]\{3,4\}[0-9]\{3,4\})-\)\(.*\)/\1 \t \0/p' | sort
which produces
-(FOL001)- Some_Folder-(FOL001)-clean
However, I don't want the "-(" and ")-" on the output of the first pattern, just the contents. I know I can run this through sed again to replace those characters, but wondered if it's possible to include it in the original expression.
|
You can move the grouping parentheses inside the literal (backslash-escaped) parentheses.
This would do:
ls | sed -nre 's/.*-\(([A-Z]{3,4}[0-9]{3,4})\)-.*/\1 \t \0/p'|sort
I also use -r for extended regular expressions, which makes braces and parentheses easier to write.
With this option, grouping parentheses are ( and ), and literal parentheses are \( and \).
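Note that \0 for the whole match is a non-standard convenience; the portable way to reference the entire match in the replacement is &, which works in both GNU and BSD sed. A sketch on a sample name:

```shell
# & in the replacement stands for the entire matched line.
echo 'Some_Folder-(FOL001)-clean' |
    sed -nE 's/.*-\(([A-Z]{3,4}[0-9]{3,4})\)-.*/\1 &/p'
# prints: FOL001 Some_Folder-(FOL001)-clean
```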
| Sed - string substitution with groupings |
1,518,190,471,000 |
I am running Ubuntu 12.04, which came with CMake v2.8.7.
I needed a more current CMake, so I downloaded the source for 2.8.12.1, built it, and installed it per the directions. The last step, make install, I ran with sudo.
./bootstrap
make
sudo make install
Now I want to run it, but I find that the old version is still invoked when I execute cmake from the command line:
jdibling@hurricane:/$ cd /; cmake --version; which cmake
cmake version 2.8.7
/usr/local/bin/cmake
jdibling@hurricane:/$
Odd, I think. So I su and try it from there:
root@hurricane:~# cd /; cmake --version; which cmake
cmake version 2.8.12.1
/usr/local/bin/cmake
root@hurricane:/#
Why does which report the same directory, but cmake --version reports different versions? How can I find where the new cmake was actually installed?
As suggested, I ran type:
jdibling@hurricane:/tmp/cmake-2.8.12.1$ type cmake
cmake is hashed (/usr/bin/cmake)
jdibling@hurricane:/tmp/cmake-2.8.12.1$ sudo su -
root@hurricane:~# type cmake
cmake is /usr/local/bin/cmake
root@hurricane:~#
|
You should use the type command to find out what is really behind that name:
type cmake
It might be an alias that runs a different version of cmake, a function with similar behavior, or, as you experienced, a previously hashed command that is no longer the first match in your PATH.
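Once you know the stale entry lives in bash's command hash table, you can clear it without opening a new shell (these are bash builtins):

```shell
# Forget every remembered command location:
hash -r
# Or forget just the one stale entry:
hash -d cmake
```

After that, type cmake should resolve to the first match in $PATH again.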
| 'which' reports one thing, actual command is another [duplicate] |
1,518,190,471,000 |
Both give the output of the command, so what is the semantic difference between the two? Some reading led me to suspect that $(command) is Bash syntax, and the backquotes are integrated into Unix somehow; is there any truth to this?
|
The two have identical semantics. Backquotes were the earlier form of command substitution, but they are difficult to nest since the opening and closing delimiters are identical, requiring lots of escaping. $(...) solves that problem, as well as being more readable in certain fonts.
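Nesting is where the difference shows up in practice; the paths here are arbitrary examples:

```shell
# $(...) nests without any escaping:
modern=$(basename "$(dirname /usr/local/bin)")   # -> local
# Backticks need a backslash per inner level:
legacy=`basename \`dirname /usr/local/bin\``     # -> local
echo "$modern $legacy"
```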
| What's the practical difference between `command` and $(command)? [duplicate] |
1,518,190,471,000 |
I needed to display a list of directories sorted by their creation date and I came up with this code snippet that I thought was kind of clever. Is there a more obvious way to do this that I'm missing?
printf "%-20s\t\t\t%-s\n" "DATE" "CASE"
printf "%-20s\t\t\t%-s\n" "----" "----"
find $idir -mindepth 3 -maxdepth 3 -type d -printf "%T@ %Tc\t\t%p\n" \
| sed "s#${idir}/##" \
| sort -h \
| sed "s#^[^ ]* ##"
Sample output
DATE CASE
---- ----
Fri 06 Sep 2013 11:17:41 AM EDT dir1/Baseline/2013_09_06_11_16_10
Fri 06 Sep 2013 01:44:38 PM EDT dir2/Baseline/2013_09_06_13_42_48
Sun 08 Sep 2013 05:02:46 PM EDT dir3/6 Month/2013_09_08_17_02_05
Fri 13 Sep 2013 02:28:30 PM EDT dir4/Baseline/2013_09_13_14_25_09
Details
The above will look exactly 3 directories deep from the directory $idir. It then prints the creation time of any directories that are found at this depth. I then strip the $idir portion of the path off of the output (don't need it), sort it, and then chop off the %T@ portion of the output. This was only so that I could more easily sort the output.
Without the trailing sed the output looks like this:
DATE CASE
---- ----
1378480661.2612650000 Fri 06 Sep 2013 11:17:41 AM EDT dir1/Baseline/2013_09_06_11_16_10
1378489478.3223970000 Fri 06 Sep 2013 01:44:38 PM EDT dir2/Baseline/2013_09_06_13_42_48
1378674166.7427782000 Sun 08 Sep 2013 05:02:46 PM EDT dir3/6 Month/2013_09_08_17_02_05
1379096910.4431689000 Fri 13 Sep 2013 02:28:30 PM EDT dir4/Baseline/2013_09_13_14_25_09
Showing me a cleaner method!
|
There is no good, portable method to sort files by their time. The most portable methods are:
If you can assume that file names contain only printable characters, call ls -ltd.
Otherwise, use perl.
This is the classical method to sort files by date with GNU tools. You're assuming that the file names contain no newline; this is easily fixed by changing \n to \0 and calling sort with the -z option. Oh, and drop the roundabout sed calls; note that your script won't work if $idir contains any of #*^$\[ because sed will interpret them as special characters.
cd "$idir" &&
find -mindepth 3 -maxdepth 3 -type d -printf '%T@ %Tc\t\t%p\0' |
sort -hz |
tr '\0\n' '\n\0' |
sed 's/^[^ ]* //'
By the way, you are sorting files by their modification time, not by their creation time. Most unix variants other than OSX do not support creation times all the way through (from the filesystem through the kernel to userland tools).
The easy, clean way to sort files by their modification time is to use zsh. The glob qualifier om sorts files in increasing age (use Om to get the oldest file first).
files=(*(om))
echo "The oldest file is $files[-1]"
To get some output like what you show, you can use zstat.
zmodload zsh/stat
describe () {
typeset -A st
zstat -s -H st "$REPLY"
echo "$st[mtime] $REPLY"
}
: */*/*(/+describe)
| Any better method than this for sorting files by their creation date? |
1,518,190,471,000 |
Of course I know the command:
iwconfig
which lists devices (and gives info about whether they have a wireless connection). For the purposes of a shell script, I'm really wondering, is there any way to list only the device names that DO have a wireless connection?
Essentially any other iw command (or something similar)...
Specifically looking for solutions besides iwconfig | grep ...
|
Try ls /sys/class/net | grep w
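Since an interface name is not guaranteed to contain a w, a sturdier sketch checks sysfs directly: the kernel exposes a wireless subdirectory for every wireless interface:

```shell
# Print only the interfaces the kernel reports as wireless.
for dev in /sys/class/net/*; do
    if [ -d "$dev/wireless" ]; then
        basename "$dev"
    fi
done
```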
| Best way to find the active wireless device |
1,518,190,471,000 |
I'm trying to wget all the images from a website. I thought I knew how to use it, but I guess I don't.
The images I'm trying to get come from here.
The command I'm using is:
wget -prA.png http://gameinfo.euw.leagueoflegends.com/en/game-info/champions/
But I only get a index.html back?
Could somebody explain this?
|
The web page uses dynamic HTML to display the champion grid content (just look at the HTML source of the page and search for "Champion Grid"; the only thing under it is some empty divs). Since wget doesn't execute JavaScript, it won't run the code that would generate the grid HTML (and link the images).
| Get all images from website |
1,518,190,471,000 |
In Terminal, one can enter 'xkill' as a command which lets you select a window whose client you wish to kill. In fact, it can even be used to kill panels.
How can I restore an xkilled panel in Linux Mint without a reboot?
Something that I generally do in Windows is Ctrl+Alt+Delete and then restart the explorer.exe service.
|
Pressing Alt+F2 and then entering cinnamon --replace might fix it.
| How can I restore an xkilled panel in Linux Mint without a reboot? |
1,375,280,881,000 |
CTRL+R allows me to reverse-search through the command history, which is great, but can I also find out from which directory a command was run? I am using C shell on Linux.
|
If you type history you will get a history of the commands you issued. You can locate the command you want and look at the entries above to see what cd commands are there. This may give you the information you need.
| Can I see in history output from which directory I had actually issued a command? |
1,375,280,881,000 |
I'm trying to send email with an html file as the body (it's actually a cucumber results report if that matters) or an attachment (if sending it as the body does not work) via the command line
I've tried the following based on the mutt example in this answer to another question, but it is resulting in an error.
cat <<'EOF' Audit_Results.html | mutt -H -
To: [email protected]
Subject: "test sending html mail"
Content-Type: text/html
EOF
when I do this I get the following error
No recipients were specified.
Mutt was installed using brew install mutt and it looks like that installed version 1.5.21. I am able to send mail via the 'interactive' interface but just tested that with simple text mail, nothing html or with an attachment.
My objective is to send the cucumber results file Audit_Results.html out as an email. The file includes some screenshots that are created with webdriver's .screenshot_as(:base64) method and embedded using cucumber's embed("data:image/png;base64,#{encoded_img}",'image/png') function, which seems to pose a bit of a problem. The one time I managed to create an HTML mail from the report (using sendmail), it did not display properly in Gmail, although if sent as an attachment it formats properly when downloaded and opened in a browser. It seems Gmail, at least, does not like HTML email with images embedded in that format, so I may end up needing to send the HTML report as an attachment.
|
If you just want to send Audit_Results.html verbatim, use this syntax:
mutt -e "set content_type=text/html" -s "Your audit results" [email protected] < Audit_Results.html
You won't need to pre-edit Audit_Results.html with mail headers, you can just send it directly.
| Trying to send HTML mail on mac OSX Mountain Lion |