date int64 1,220B 1,719B | question_description stringlengths 28 29.9k | accepted_answer stringlengths 12 26.4k | question_title stringlengths 14 159 |
|---|---|---|---|
1,375,280,881,000 |
How can I make the next script to run when I'm using sh shell:
#!/bin/bash
bind '^[[3'=prefix-2
bind '^[[3~'=delete-char-forward
bind '^[[1'=prefix-2
bind '^[[1~'=beginning-of-line
bind '^[[4'=prefix-2
bind '^[[4~'=end-of-line
|
bind is a bash command, not an sh command. If you aren't using bash, the bind command won't be available.
Depending on the platform, sh may be one of several shells. They all provide a common core for scripting. Plain sh hardly has any convenient features for interactive use; in particular, plain sh has no notion of key bindings.
On some systems, sh is bash (which runs in a compatibility mode when invoked as sh); it uses the readline library for command line editing and supports key bindings through the bind builtin. Other systems use leaner shells such as dash or ksh, which are faster for executing scripts. If you want a decent command line interface, don't call sh.
Note that the script you wrote has no effect when run (except to print warnings that the bind command is pointless in a non-interactive shell). A file containing only key binding definitions cannot be a useful standalone executable, only a shell snippet. So the shebang at the top is not useful.
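For the record, if the goal were to have such bindings in an interactive bash session, the conventional place for them is ~/.inputrc, in readline syntax (a sketch; note that the readline function names differ from the ones used in the script above):

```
# ~/.inputrc: read by readline, and thus by interactive bash
"\e[3~": delete-char        # Delete
"\e[1~": beginning-of-line  # Home
"\e[4~": end-of-line        # End
```

The same bindings can also be fed to bash directly with its bind builtin, e.g. bind '"\e[3~": delete-char' in ~/.bashrc.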
| How can I make "bind" command to work in sh shell |
1,375,280,881,000 |
I need to convert an RGB raw file into 3 files containing each the red, green and blue channel.
|
Have you tried the netpbm tools? This will work with R8G8B8 and other 8-bit RGB orders.
For a width 100 height 200 RGB order raw file:
rawtoppm -rgb 100 200 input.rgb > image.ppm
ppmtorgb3 image.ppm
You will now have 3 pgm format grey-map files, each suffixed .red .grn and .blu. These .pgm files are almost a raw binary format, except for the short header, so:
tail -n +4 image.red > image_r.raw
tail -n +4 image.grn > image_g.raw
tail -n +4 image.blu > image_b.raw
if you really want the raw channels like that. Or, for further processing:
pgmtoppm red image.red > image_red.ppm
pgmtoppm green image.grn > image_grn.ppm
pgmtoppm blue image.blu > image_blue.ppm
You now have three ppm format files which are the separated RGB channels (see also rgb3toppm). These can be converted to other formats using the ppmtoX tools, e.g. ppmtopng.
Use "white" instead of the colour as the 2nd parameter to leave each as a grey scale.
ImageMagick's convert may also be useful; it handles RGB, RGBA and 16-bit raw formats too, and it has a -separate option to split channels.
for ch in R G B; do
convert -set colorspace RGB -size 100x200 -depth 8 rgb:image.rgb \
-channel ${ch} -separate -depth 8 gray:image_${ch}.raw
done
Check that the -set colorspace option is appropriate for your input.
Newer versions let you do this in a single command, see http://www.imagemagick.org/Usage/color_basics
convert ... -channel RGB -separate gray:image_%d.raw
and R/G/B will be written to image_0.raw image_1.raw image_2.raw files
Note: the convert command was updated based on help and feedback from Stephane Chazelas; there were several changes to colorspace behaviour from ImageMagick-6.7.7 onward which cause problems due to (I believe) sRGB being used instead of RGB.
# colorspace changes mean this works differently after ImageMagick-6.7.6
convert -size 100x200 -depth 8 rgb:image.rgb \
-channel ${ch} -separate -depth 8 gray:image_${ch}.raw
| Splitting RGB raw file into 3 files, one for each channel? |
1,375,280,881,000 |
What tool in Linux can I use to merge multiple flv files? The files are already in order; I just want to join them together seamlessly.
|
I didn't test the following yet, so treat it just as a set of hints.
You could use ffmpeg. From the manual
* You can put many streams of the same type in the output:
ffmpeg -i test1.avi -i test2.avi -vcodec copy -acodec copy -vcodec copy -acodec copy test12.avi -newvideo -newaudio
In addition to the first video and audio streams, the resulting output file
test12.avi will contain the second video and the second audio stream found in the
input streams list.
The "-newvideo", "-newaudio" and "-newsubtitle" options have to be specified
immediately after the name of the output file to which you want to add them.
You can also use mencoder for example:
mencoder -forceidx -ovc copy -oac pcm -o output.flv input1.flv input2.flv
Another option is maybe flvcat.
You could also try avidemux-cli.
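For what it's worth, newer ffmpeg releases also ship a concat demuxer that can join compatible files without re-encoding; a sketch (the file names are examples, and the ffmpeg invocation itself is shown commented out):

```shell
# build a list file for ffmpeg's concat demuxer
cd "$(mktemp -d)"
printf "file '%s'\n" input1.flv input2.flv > list.txt
cat list.txt
# then join losslessly, where ffmpeg is installed:
# ffmpeg -f concat -safe 0 -i list.txt -c copy output.flv
```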
| Merge multiple flv files? |
1,375,280,881,000 |
I have several video files in a directory and I want to convert all of them into other video formats.
Is there any way I can convert all of them in just one go using FFmpeg? I mean, without having to write a shell script to do so.
|
The easiest way is to use a for loop in your shell of choice. This task is so simple that you can just type it at the prompt; there's no need to create a shell script.
Here is the one-liner as an example for the widely-used bash (and compatible):
for i in *.mkv; do ffmpeg -i "$i" … ;done
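To give each output a different extension, the shell's suffix-removal expansion helps; here is a dry run in a scratch directory (echo just prints the commands a real run would execute; codec options are omitted):

```shell
cd "$(mktemp -d)"
touch one.mkv two.mkv           # stand-in input files
for i in *.mkv; do
  # ${i%.mkv} strips the suffix so the output gets a new extension
  echo ffmpeg -i "$i" "${i%.mkv}.mp4"
done
```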
| How to convert a group of video files using FFMPEG? |
1,375,280,881,000 |
I am trying to create a file in my /tmp directory for each file containing a specific string.
The problem is that the call to basename {} does not seem to work. Neither does echo basename {}.
grep -R 'mystring' . | cut -d: -f 1 | uniq | xargs -n 1 -I {} touch /tmp/`basename {}`
Does anybody know how I could get the basename function executed on the xargs parameter?
|
Try this:
grep -R 'mystring' . | cut -d: -f 1 | uniq | xargs -n 1 -I {} sh -c 'touch "/tmp/$(basename {})"'
The single quotes prevent your interactive shell from expanding $(basename {}) before xargs has substituted the file name.
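A toy run of the basename-via-inner-shell pattern, with echo standing in for touch so it is safe to try anywhere:

```shell
# single quotes delay $(basename {}) until after xargs substitutes the name;
# echo stands in for touch, so nothing is actually created
printf '%s\n' ./a/b.txt ./c/d.txt |
    xargs -n 1 -I {} sh -c 'echo "/tmp/$(basename {})"'
```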
| Inner function call with xargs parameters |
1,375,280,881,000 |
QUESTION: Is there a Kerberos-friendly web browser usable via an SSH console?
I have tried links, but it does not seem to work with Kerberos (the webapp asks me for a login/password even though I have a valid Kerberos ticket, which I got with kinit).
CONTEXT:
From my laptop I have command-line access to a server, and need to browse an intranet webapp hosted on that server, using Kerberos authentication.
The server cannot ping the laptop, so I cannot run Firefox on the server and redirect the DISPLAY to my laptop.
|
If you are connected to this server using SSH, you may use ssh -X; then X will be automatically forwarded (and secured). Note the capital X: lowercase -x disables X11 forwarding.
| Command-line web browser with Kerberos authentication? |
1,375,280,881,000 |
While I can find some scattered info on the topic, I can't seem to find a straight walkthrough (or even a definite answer)...
Basically I'm too low on the totem pole to start sending plain-text e-mails. My company requires an HTML signature that includes the logo image.
The problem is that I'm fed up with GUI mail clients. Is it possible to configure Mutt in a way that would allow me to have all the messages I write injected into an HTML template (with an embedded image)? Or will I just have to deal with Thunderbird's chaos?
Thanks for any help!
|
This is a huge hack, but somebody has already done the work for you.
Edit: What if you attached your signature as an HTML file?
mutt -e "set content_type=text/html" [email protected] -s "Hello" < mysig.html
| Sending HTML with Mutt (or another terminal mail client) |
1,375,280,881,000 |
I'm using the OSX application Terminal to work remotely on Amazon EC2 servers through SSH.
Occasionally, and seemingly at random, my keystrokes are delayed. I've tried reconnecting, restarting Terminal, restarting my computer, etc. Nothing seems to solve the problem, and it comes and goes. I've tried other terminal emulators and they seem to have the same problem whenever I'm experiencing it. There is never a delay in my keystrokes when I'm working locally on my own machine through Terminal.
What are some causes of this keystroke delay? Is there anything I can do about this?
I'm using Terminal in a location I don't usually work from, so maybe the internet connection has something to do with this. Does Terminal mimic the connection speed to the server as you type?
|
It is your connection. SSH only displays what the remote server tells it to, so it will only echo your keystrokes once the remote server has received and processed them and the remote shell has echoed them back.
| What causes delay in my keystrokes on OSX's Terminal application? |
1,375,280,881,000 |
I'm submitting to my website using cURL, and I make cURL post in a timely fashion.
In the code below, the first line is the curl submission; the second line is for queuing:
curl -b cookies.txt \
-d title="$(sed '1,/sblmtitle/d;/slpstitle/,$d' sedut.html)" \
-d taxonomy%5Btags%5D%5B1%5D="$(
sed '1,/sblmkategori/d;/slpskategori/,$d' sedut.html
)" \
-d teaser_include=1 \
-d body="$(sed '1,/sblmkonten/d;/slpskonten/,$d' sedut.html)" \
-d field_source%5B0%5D%5Burl%5D="$(
sed '1,/sblmurl/d;/slpsurl/,$d' sedut.html
)" \
-d changed= \
-d form_build_id=form-424f851ad50bd4781c8c25ab7efd5c4c \
-d form_token=0e7cc7437faf816f1ecd96087286bda9 \
-d form_id=post_node_form \
-d op=Save http://www.web.org/submit/post &&
for file in $(ls *.html | sort -r | tail -1); do
mv $file sedut.html
done
If cURL fails to submit, it prints out the sedut.html contents. If cURL succeeds in submitting, it prints nothing. In my case, cURL sometimes fails to submit.
What I want is to not execute the second line if the first line fails. But using && as in the code above just doesn't work. I've tested it: the cURL submit fails and prints out the sedut.html contents, but the second line is still executed.
|
curl has no idea that it failed - it had well-defined input, it communicated with the server and got back a 200 OK response. Because of this, you can't rely on the exit code of curl (which is what && does).
Instead, we have to use something else to determine success or failure, such as the output generated by curl.
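The branch-on-output idea can be sketched with a stand-in for curl; the submit function and its response text below are hypothetical, since the real marker to grep for depends on what your site returns on failure:

```shell
# submit is a pretend curl call that returns the server's response body
submit() { printf '%s' "error: form token rejected"; }
response=$(submit)
# branch on the response text rather than on the exit status
if printf '%s' "$response" | grep -q 'error'; then
    echo "submit failed, keeping queue"
else
    echo "submit ok, rotating queue"
fi
```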
As a side note, it is generally not a good idea to use ls in a script. Here, it will fail if a file name happens to contain a newline or a space character. It may be unlikely, but it's possible. See http://mywiki.wooledge.org/ParsingLs for more details. As a rule of thumb, I only use ls in scripts when displaying the results for human eyes. If I'm processing the results in any way, I use find . -print0 ...
| Using "&&" is not effective for cURL submit form (don't execute second line if first line fails) |
1,375,280,881,000 |
How do I clear the contents of a file before writing to it? For example:
echo one > filename.tmp
# filename.tmp now contains "one"
echo two > filename.tmp
# filename.tmp should now contain "two", not "one" and "two"
My goal is:
Start a listener
$ nc -l 7007 > /var/tmp/test.log
Send some data
$ telnet localhost 7007
hi
second_word
Test the file
$ cat /var/tmp/test.log
second_word
I don't want "hi" to show up in the log; I want "second_word" to have replaced it
|
You need a separate program to clear and write the new file since nc doesn't offer that option.
nc -l 7007 | while true; do
    while read -r line; do
        echo "$line" > /tmp/test
    done
done
You can save everything after the pipe in a separate script that accepts a file path.
save-last-line.sh
#!/bin/sh
while true; do
    while read -r line; do
        echo "$line" > "$1"
    done
done
Then it's simply:
nc -l 7007 | save-last-line.sh /var/tmp/test.log
You'll want to add checks to make sure $1 is writable and show usage when $1 isn't specified.
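The overwrite behaviour itself can be checked without nc by feeding the same loop from a pipe:

```shell
tmp=$(mktemp)
printf 'hi\nsecond_word\n' | while read -r line; do
    echo "$line" > "$tmp"        # each line replaces the previous one
done
cat "$tmp"
```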
| How to keep overwriting the contents of a file instead of appending to it |
1,375,280,881,000 |
I would like to view files exactly like I can view processes using pstree.
So is there something like:
treeview --maxdepth 2
that would give me what I need?
|
You can use the tree utility for this:
tree -L 2
| How do I view files as a tree structure? |
1,375,280,881,000 |
I'm struggling to create a FAT-formatted disk image that can store a file of known size. In this case, a 1 GiB file.
For example:
# Create a file that's 1 GiB in size.
dd if=/dev/zero iflag=count_bytes of=./large-file bs=1M count=1G
# Measure file size in KiB.
LARGE_FILE_SIZE_KIB="$(du --summarize --block-size=1024 large-file | cut --fields 1)"
# Create a FAT-formatted disk image.
mkfs.vfat -vv -C ./disk.img "${LARGE_FILE_SIZE_KIB}"
# Mount disk image using a loopback device.
mount -o loop ./disk.img /mnt
# Copy the large file to the disk image.
cp --archive ./large-file /mnt
The script fails with the following output:
++ dd if=/dev/zero iflag=count_bytes of=./large-file bs=1M count=1G
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 39.6962 s, 27.0 MB/s
+++ du --summarize --block-size=1024 large-file
+++ cut --fields 1
++ LARGE_FILE_SIZE_KIB=1048580
++ mkfs.vfat -vv -C ./disk.img 1048580
mkfs.fat 4.2 (2021-01-31)
Auto-selecting FAT32 for large filesystem
Boot jump code is eb 58
Using 32 reserved sectors
Trying with 8 sectors/cluster:
Trying FAT32: #clu=261627, fatlen=2048, maxclu=262144, limit=65525/268435446
Using sector 6 as backup boot sector (0 = none)
./disk.img has 64 heads and 63 sectors per track,
hidden sectors 0x0000;
logical sector size is 512,
using 0xf8 media descriptor, with 2097144 sectors;
drive number 0x80;
filesystem has 2 32-bit FATs and 8 sectors per cluster.
FAT size is 2048 sectors, and provides 261627 clusters.
There are 32 reserved sectors.
Volume ID is f0de10c3, no volume label.
++ mount -o loop ./disk.img /mnt
++ cp --archive ./large-file /mnt
cp: error writing '/mnt/large-file': No space left on device
How do I create a FAT-formatted disk image that's large enough to store a file of known size?
Resources:
https://linux.die.net/man/1/dd
https://linux.die.net/man/8/mkfs.vfat
https://linux.die.net/man/8/mount
https://linux.die.net/man/1/cp
https://en.wikipedia.org/wiki/Design_of_the_FAT_file_system#Size_limits
EDIT 1
My assumption was that mkfs.vfat -C ./disk.img N would create an image that has N KiB of usable space, but I guess that's not the case.
EDIT 2
It seems like a dead end to try to calculate exactly how big the disk image needs to be to store a file of known size because of the complexities around FAT sector/cluster size limits.
As suggested in the answers, I've settled for adding 20% extra space to the disk image to allow for FAT overhead.
|
It would seem to me you're trying to fit a file on a file system that doesn't have enough space – by design! You're basically saying "For a file that takes N kB, make a disk image exactly N kB in size". Where exactly is FAT going to store the file metadata, directory tables and volume descriptor?
With FAT32, the standard duplicate superblock, and the usual directory table + long file name table, plus 32 reserved sectors somewhere on the disk: off the top of my head, you'll want a couple of MB of extra space, around 4 MB, I guess, for file systems this huge¹.
You're also using mkfs.vfat with defaults, which was a sensible configuration for PCs made in the 80s and early 1990s; that means smaller sectors, hence more sectors to keep track of, and hence more space consumed by the FAT to store a single file. Maximize the sector size; on a file system with more than 1 GB of available space (so, a disk image significantly larger than 1 GB!), 16 kB should work (the minimum "FAT32-legal" cluster count is 65525; divide 1 GB by that, assuming 4 kB sectors, so -S 4096, and maximize the cluster size to 16 sectors, so -s 16).
Also note: with 1 GB you're already approaching the maximum file size of FAT32, which is 4 GiB (2 GiB with some older implementations). So, if this is intended to be storage for e.g. backup or file system images, you might quickly find yourself in a situation where FAT32 doesn't suffice. It's a very old filesystem.
Addendum: How to figure how much larger to make the image than the content
As sketched above, FAT is a bit annoying, because it has arbitrary (and surprisingly low!) limits on the number of sectors, which makes it a bit hard to predict how much overhead you'll incur.
However, the good thing is that Linux supports sparse files, meaning that you can make an "empty" image that doesn't "cost" any storage space. That image can be larger than you need it to be!
You then fill it with the data you need, and shrink it back to only the size you want.
Generally, the script in your question does a few questionable things, and there are more sensible ways to achieve the same; I'll comment in the code on what my commands are equivalent to. The idea is simple: first make a much larger-than-necessary, but "free" in terms of storage, image file; fill it with the file or files you need; check how much free space you have left. Subtract that space from your image size, make a new image, and be done.
# File(s) we want to store
files=( *.data ) # whatever you want to store.
img_file="fat32.img"
# First, figure out how much size we'll need
# use `stat` to get the size in bytes instead of parsing `du`'s output
# Replace the new line after each file size with a "+" and add the initial overhead.
initial_overhead=$(( 5 * 2**20 )) # 5 MiB
# Then use your shell's arithmetic evaluation $(( … )) to execute that sum
totalsize=$(( $(stat -c '%s' -- "${files[@]}" | tr '\n' '+') initial_overhead ))
# give an extra 20% (no floating point math in bash…), then round to 1 kiB blocks
img_size=$(( ( totalsize * 120 / 100 + 1023 ) / 1024 * 1024 ))
# Create a file of that size
fallocate -l ${img_size} -- "${img_file}"
mkfs.vfat -vv -- "${img_file}"
# set up loopback device as regular user, and extract loopback
# device name from result
loopback=$(udisksctl loop-setup -f "${img_file}" | sed 's/.* \([^ ]*\)\.$/\1/')
# mount loopback device as regular user, and get mount path
mounted=$(udisksctl mount -b "${loopback}" | sed 's/^.* \([^ ]*\)$/\1/')
# make sure we're good so far
[[ -d "${mounted}" ]] || { echo "couldn't get mount"; exit 1; }
# copy over files…
cp -- "${files[@]}" "${mounted}"
# use df to get the amount of free space (in 1 kiB blocks) while the image is still mounted
free_space=$(df --block-size=1K --output=avail -- "${mounted}" | tail -n1)
# … and unmount our file system image
udisksctl unmount -b "${loopback}"
udisksctl loop-delete -b "${loopback}"
# We no longer need our temporary image
rm -- "${img_file}"
# subtract the unused space; add 2 kiB of slack to be on the safe side
new_img_size=$(( img_size - free_space * 1024 + 2048 ))
# Make a new image, copy over files
fallocate -l ${new_img_size} -- "${img_file}"
mkfs.vfat -vv -- "${img_file}"
loopback=$(udisksctl loop-setup -f "${img_file}" | sed 's/.* \([^ ]*\)\.$/\1/')
mounted=$(udisksctl mount -b "${loopback}" | sed 's/^.* \([^ ]*\)$/\1/')
[[ -d "${mounted}" ]] || { echo "final copy: couldn't get mount"; exit 1; }
cp -- "${files[@]}" "${mounted}"
udisksctl unmount -b "${loopback}"
udisksctl loop-delete -b "${loopback}"
# Done!
¹: when FAT32 was introduced, a 1 GB hard drive was still not a small one, and the file system structures hail down from FAT12, designed in 1981 for 360 kB floppy disks; the number of blocks you would have to keep track of on a 1 GB hard drive was simply not going to materialize for another 15 years or so. In essence, smart phones formatting SD cards to FAT32 carry around a time capsule: a file system invented around 1997, itself a relatively slight modification of a file system invented in 1980; so, yay, solving modern storage problems with compromise solutions from 44 years ago.
| Create FAT-formatted disk image that can fit 1G file |
1,375,280,881,000 |
Every now and then I want to recycle a command sequence I recently used, after I adapted it.
Let's imagine, yesterday I executed
foo 42
bar with some strange arguments
baz /my/most/beloved/folder
Now today I need to something similar like
foo -x 42
bar with some more strange arguments
baz /my/most/hated/folder
I can use Ctrl+R and foo to find yesterday's command (maybe multiple times, because I use foo in different contexts), change it, and execute it. No problem so far. But now I'd love to quickly jump to the following line in history without repeating the lengthy search again (in reality, it's often more than ten commands).
The fc command of ksh comes close to what I want, but it is not interactive: If I issue a dozen git commands I may need to react to one of them responding other than expected (or executing the following commands will cause a mess).
The cmd shell of MS Windows has a (for me usually annoying) behaviour: after executing a command from history, the up arrow doesn't take me to the last command in history, but to the last executed command. So after executing the foo line again, arrow up and then arrow down would take me to the bar line. This exact behaviour would not help me, because it only works when executing an unmodified line; if I modify the line, the »history pointer« is reset to the end. But it would be the perfect solution to have such a bookmark in history: set it at the foo line and have a key combination to take me back there, or even better to the following line, moving the bookmark along.
I guess I'm not the only one with such a need, but I did not find any solution to it. Currently I solve it by printing the relevant part of the history and using the mouse to paste the line I need next, but this is clumsy, and I don't like switching between keyboard and mouse. And please don't point me at history expansion; I typically need to interactively edit the line, including tab expansion.
Do you know any solution, preferably for zsh?
|
Foolish me! I got the answer elsewhere. What I described as the perfect solution has existed for a long time; I simply always overlooked it and didn't find the right search terms:
Ctrl+O executes the current command and brings you to the following line of the history.
What a great feature. Gladly, I'm not alone: apparently, none of the more than twenty people who read the request knew it either.
| How to edit and execute a couple of commands from shell history? |
1,375,280,881,000 |
I'm writing an interactive shell utility (it's a bash script), and I would like to provide a man page for it. It's a script that I expect the user to download and put in their PATH, not something that should require superuser privileges. Any idea where I should put the manual page?
For the moment I just put it next to the script, and from the script I can use something like dirname $0 to locate and display it from inside the utility. However, it would be nice to also make it available via man. For now I recommend man -l /path/to/script/help.1, but that's awkward and non-intuitive.
Note: I found this previous question, but I don't think the accepted answer actually answers the question correctly.
|
As far as I know, there is no standard recommendation for a user-writable man page directory. The Filesystem Hierarchy Standard 3.0 refers to the XDG Base Directory Specification and the xdg-user-dirs specification regarding things within users' home directories, but neither of those has anything specific to say about man page directories.
Classically, this is because adding your own man page directory tree to the MANPATH environment variable used to be trivial. However, modern implementations of man often leave the MANPATH variable empty, and specify the man page directory hierarchies in e.g. /etc/manpath.config or a similar configuration file.
For backwards compatibility, if the MANPATH variable is set for these modern man implementations, it tends to override the man page search path completely, rather than provide a way to add to it, which can be inconvenient: as a user, you would have to read and understand the appropriate configuration file, and first create a MANPATH value that completely covers all the system man pages you wish to keep available to you, and then add your own custom man page directory to a suitable spot within that ordered list. However, you would generally only need to do this once, unless there are major updates to the system software and/or directory structures, so it should not be that hard to do.
Overall, the classic convention seemed to be that if a directory of executables (suitable for adding to PATH) were located at <some path>/bin/ or <some path>/sbin/, the root of the corresponding man page hierarchy would be at <some path>/man/ respectively. However, FHS 3.0 already deviates from that by specifying /usr/share/man/ as the primary man page hierarchy for the system, corresponding to both /[s]bin/ and /usr/[s]bin/ executable directories.
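Putting that together, a sketch for systems using the man-db implementation of man: an empty component in MANPATH (here, the trailing colon) is replaced by the system search path, so the personal tree is added rather than hiding the system pages. The directory name below is just a plausible choice, not a standard:

```shell
manroot="$HOME/.local/share/man"   # any directory you control works
mkdir -p "$manroot/man1"
# a minimal stand-in page; you'd install your real help.1 here instead
printf '.TH MYSCRIPT 1\n.SH NAME\nmyscript \\- demo page\n' \
    > "$manroot/man1/myscript.1"
export MANPATH="$manroot:"
# man myscript   # should now find the page
```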
| What is the recommended user writable manpath? |
1,375,280,881,000 |
With $ as my bash prompt and ⏎ symbolizing me hitting the enter key in the following example, how could I construct a command/alias foo so that
$ foo bar⏎
would enter/input/pre-fill/type "bar" (or any other string I pass to the command) to the command line, so that I can modify "bar" before hitting Enter? E.g.
$ bar
…
$ barbaz⏎
I've unsuccessfully tried echo bar > /dev/pts/123 and would like to do without xdotool. Is this possible?
EDIT: Example use case, a "greppy autocomplete":
I often need long commands with many arguments that are hard to remember. I keep examples in a file:
commands.txt:
ffmpeg -i infile.mp4 -c:v copy -c:a copy -y transcoded.mp4
sox in.wav out.wav remix 1 0
now, if I had the command as specified above, let's call it inject, I could have an alias
grepcomplete () {
    inject $(grep "$1" commands.txt)
}
so when I remember that I need to remix something, but don't remember sox and its arguments, I can type grepcomplete remix and then have sox in.wav out.wav remix 1 0 sitting on my command line, as if I had typed it out, ready for me to edit and adapt before I execute it by hitting Enter.
Without the need to select, copy, paste anything.
As Kamil suggests in the comments, I could use bash's history search (Ctrl-R), and provide my own "history" by doing something like history -r commands.txt in my bashrc.
Still, my approach has the benefit that I can easily hack it, e.g. by displaying all matches with syntax highlighting.
Please note that I've answered this question myself, where I provide an implementation of this inject command.
|
Here's a small and ugly solution, inspired by a perl solution to insert a string after each prompt, using TIOCSTI adapted to python:
A bash function, e.g. for .bashrc
inject () {
(python -c "import fcntl; import termios; import sys
with open('/dev/stdout', 'w') as fd:
for c in ' '.join(sys.argv[1:]): fcntl.ioctl(fd, termios.TIOCSTI, c)" "$@" &)
}
Usage:
inject foo bar
Notes:
can't handle Unicode at the moment! I suspect the for char in str is to blame?
The background-in-subshell construct ( foo &) prevents double-echo'ing the string before the prompt
It will eat whitespace between arguments: inject foo   bar = inject foo bar. Use inject "foo   bar".
It looks like it depends on timing, so race conditions might arise?
I had other versions with Python's threads/multiprocessing that can do without the bash subshell and might work better for piping, xargs etc. This was the cleanest and simplest solution for my use case.
| Command to inject string into command line, like a pre-filled command to edit before execution |
1,375,280,881,000 |
I'd like to know if it is possible to repeat part of a command (which hasn't been executed yet) on the same line (i.e., chained).
Let's say I want to execute this command
mkdir -p /some/long/dest/path && rsync -azP /some/long/src/path /some/long/dest/path
Is it possible to type only something similar to
mkdir -p /some/long/dest/path && rsync -azP /some/long/src/path /path/at/x:2
Obviously, x:2 is the index in the array of the last executed command, so I'd like to know if it's possible to get away with chaining strings together and re-using the command at x position of the current string.
|
You could use the line editing function to grab the mkdir argument and paste it to the end of the line. (I use vi-style line editing so that would be very straightforward: ESC to enter editing mode, 0WW to jump to the mkdir path, yW to yank the path, and p to paste it afterwards. I assume there's an equivalent in the default emacs-style line editing.)
Alternatively,
p=/some/long/dest/path; mkdir -p "$p" && rsync -azP /some/long/src/path/ "$p"
Or possibly, since this is an interactive session, view the mkdir result by inspection:
mkdir -p /some/long/dest/path
rsync -azP /some/long/src/path/ !$
In case you've not come across it before, !$ is a substitution for the last argument ($) of the last command. It's in man bash (amongst others). Try this to see how it carries through:
date --date tomorrow
echo !$
date --date !$
Items are referenced from zero, so echo !!:0 etc.
I know of no way to reference an argument in the current command line using such history-style operators.
| Is it possible to repeat part of a string that hasn't been executed yet in bash? |
1,375,280,881,000 |
Suppose I have a command like:
foo() {
echo a
echo b >&2
echo c
echo d >&2
}
I use the following command to process stdout in the terminal but send stderr via notify-send:
foo 1> >(cat) 2> >(ifne notify-send "Error")
The issue I am having is that I want to view the stderr (in this case b and d) as the notify-send body.
I have tried:
foo 1> >(cat) 2> >(ifne notify-send "Error" "$(cat)")
foo 1> >(cat) 2> >(ifne notify-send "Error" "$(</dev/stdin)")
Nothing is working. What can be the solution here?
|
With your "$(cat)" try you're almost there, but you need this cat to read from ifne, not alongside ifne.
In the case of ifne notify-send "Error" "$(cat)", cat reads from the same stream ifne does, but not simultaneously. The shell handling this part of the code can run ifne only after cat exits (because only then does it know what $(cat) should expand to, i.e. what arguments ifne should get). After cat exits, the stream is already depleted and ifne sees its input as empty.
This is a way to make a similarly used cat read from ifne:
foo 2> >(ifne sh -c 'exec notify-send "Error" "$(cat)"')
(I'm not sure what the purpose of your 1> >(cat) was. I skipped it.)
Here ifne relays its input to the stdin of whatever it (conditionally) runs. It's sh, but everything this sh runs shares its stdin. Effectively cat reads from ifne. And similarly to your try, exec notify-send can be executed only after cat exits; so even if notify-send tried to read from its stdin, cat would consume everything first.
This method may fail if there is too much data passing through cat: an argument list cannot be arbitrarily long. And because cat will exit only after foo exits, the method works only for a foo that eventually exits and generates not too many messages on its stderr.
Using xargs instead of $(cat) may be a good idea for a long-running foo that occasionally generates a line of error. This is an example of such a foo:
foo() {
echo a
echo b >&2
sleep 10
echo c
echo d >&2
sleep 20
}
The above solution is not necessarily good in the case of this foo (try it). With xargs it's different: foo may even run indefinitely, and you will be notified of errors (one line at a time) immediately. If your xargs supports --no-run-if-empty (-r), then you don't even need ifne. This is an example command with xargs:
foo 2> >(xargs -r -I{} notify-send "Error" {})
(Note this xargs still interprets quotes and backslashes.)
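The xargs part can be tried in isolation by piping sample stderr lines into it, with echo standing in for notify-send:

```shell
# each input line becomes one would-be notification
printf 'b\nd\n' | xargs -r -I{} echo "Error:" {}
```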
| notify-send with stderr in notification body if there is stderr |
1,375,280,881,000 |
I am trying to add a new application shortcut from the command line which will load the terminal when you press Ctrl + Alt + T.
I am using the xfconf-query utility to monitor xfce4-keyboard-shortcuts and the output I get when setting my shortcut via the Keyboard GUI is: /commands/custom/<Primary><Alt>t.
I've been able to set other settings from the command line, for example changing the theme using the following command:
xfconf-query -c xsettings -p /Net/ThemeName -s "Adwaita"
However, I am not sure how to apply similar logic to my application shortcut; I just keep getting errors. Would anyone happen to have any ideas?
I tried the following command:
xfconf-query -c xfce4-keyboard-shortcuts -p '/commands/custom/<Primary><Alt>t' -s xfce4-terminal
but I got the following error message:
Property "/commands/custom/<Primary><Alt>t" does not exist on channel "xfce4-keyboard-shortcuts". If a new property should be created, use the --create option.
Thank you in advance.
|
If the property doesn't exist, it's necessary to create it using the --create option (or the synonymous -n) as indicated in the error. The following worked for me...
xfconf-query -c xfce4-keyboard-shortcuts -n -t 'string' -p '/commands/custom/<Primary><Alt>t' -s xfce4-terminal
Note that it was also necessary to add the type of the value, although 'String' as found for a type in the Xfce Settings Editor wouldn't work; it had to be 'string'.
| Creating XFCE4 Application Shortcuts from the terminal (CentOS) |
1,375,280,881,000 |
Is it possible to use awk to calculate the average of each row (with different columns in each row). I have a file like the following, the first column is the names, and I like to calculate the average of each row and print the result in the last column of the input file:
Input-file (data1.csv):
EMPLOYEE1,0.395314,0.384513,,
EMPLOYEE2,5.4908,5.2921,,
EMPLOYEE3,0.0002323,0.00022945,0.00023238,0.00022931
EMPLOYEE4,0.00335516,0.00328432,0.00340309,0.00327163
EMPLOYEE5,1.4816,1.4367,1.4854,1.4353
EMPLOYEE6,7.89E-06,7.93E-06,7.95E-06,7.87E-06
EMPLOYEE7,3.724E-06,3.8745E-06,3.9428E-06,3.7227E-06
EMPLOYEE8,0.699498,0.688892,0.704256,0.683486
EMPLOYEE9,33.5195,31.9736,33.6779,31.742
Desired output:
EMPLOYEE1,0.395314,0.384513,,,0.3899135
EMPLOYEE2,5.4908,5.2921,,,5.39145
EMPLOYEE3,0.0002323,0.00022945,0.00023238,0.00022931,0.00023086
EMPLOYEE4,0.00335516,0.00328432,0.00340309,0.00327163,0.00332855
EMPLOYEE5,1.4816,1.4367,1.4854,1.4353,1.45975
EMPLOYEE6,7.89E-06,7.93E-06,7.95E-06,7.87E-06,7.91E-06
EMPLOYEE7,3.72E-06,3.87E-06,3.94E-06,3.72E-06,3.82E-06
EMPLOYEE8,0.699498,0.688892,0.704256,0.683486,0.694033
EMPLOYEE9,33.5195,31.9736,33.6779,31.742,32.7282
I tried awk like the following, but it doesn't calculate the average correctly for rows that have fewer filled columns than the maximum NF.
awk -F',' '{ s = 0; for (i = 2; i <= NF; i++) s += $i; print $1, (NF > 1) ? s / (NF - 1) : 0; }' data1.csv
and
awk -F',' '{sum=0; for (i=2;i<=NF;i++)sum+=$i; print $0,sum/(NF-1)}' data1.csv
But my code doesn't adjust NF per row. Is it possible to account for the actual number of fields in each row and get the desired output?
|
Here's one way:
$ awk -F',' -v OFS=',' '{
s=0;
numFields=0;
for(i=2; i<=NF;i++){
if(length($i)){
s+=$i;
numFields++
}
}
print $0, (numFields ? s/numFields : 0)}' data1.csv
EMPLOYEE1,0.395314,0.384513,,,0.389914
EMPLOYEE2,5.4908,5.2921,,,5.39145
EMPLOYEE3,0.0002323,0.00022945,0.00023238,0.00022931,0.00023086
EMPLOYEE4,0.00335516,0.00328432,0.00340309,0.00327163,0.00332855
EMPLOYEE5,1.4816,1.4367,1.4854,1.4353,1.45975
EMPLOYEE6,7.89E-06,7.93E-06,7.95E-06,7.87E-06,7.91e-06
EMPLOYEE7,3.724E-06,3.8745E-06,3.9428E-06,3.7227E-06,3.816e-06
EMPLOYEE8,0.699498,0.688892,0.704256,0.683486,0.694033
EMPLOYEE9,33.5195,31.9736,33.6779,31.742,32.7282
Note that awk prints 0.389914 as the result of 0.779827/2 which means that the average on the first line will be 0.389914 and not 0.389915. This is because awk will round to the nearest even number and its default print mode (controlled by the OFMT variable) is %0.6g. If you require more accuracy, you can do:
$ awk -F',' -v OFS=',' -v OFMT='%0.7g' '{
s=0;
numFields=0;
for(i=2; i<=NF;i++){
if(length($i)){
s+=$i;
numFields++
}
}
print $0, (numFields ? s/numFields : 0)}' data1.csv
EMPLOYEE1,0.395314,0.384513,,,0.3899135
EMPLOYEE2,5.4908,5.2921,,,5.39145
EMPLOYEE3,0.0002323,0.00022945,0.00023238,0.00022931,0.00023086
EMPLOYEE4,0.00335516,0.00328432,0.00340309,0.00327163,0.00332855
EMPLOYEE5,1.4816,1.4367,1.4854,1.4353,1.45975
EMPLOYEE6,7.89E-06,7.93E-06,7.95E-06,7.87E-06,7.91e-06
EMPLOYEE7,3.724E-06,3.8745E-06,3.9428E-06,3.7227E-06,3.816e-06
EMPLOYEE8,0.699498,0.688892,0.704256,0.683486,0.694033
EMPLOYEE9,33.5195,31.9736,33.6779,31.742,32.72825
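To see OFMT's effect in isolation, here is a minimal sketch using just the first row's two values:

```shell
# default OFMT is %.6g, so print rounds the mean to six significant digits
printf '0.395314,0.384513\n' | awk -F, '{print ($1+$2)/2}'
# -> 0.389914
# raising the precision keeps the seventh digit
printf '0.395314,0.384513\n' | awk -F, -v OFMT='%.7g' '{print ($1+$2)/2}'
# -> 0.3899135
```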
| Using awk to calculate average of each row with different number of columns |
1,375,280,881,000 |
My use-case is that I want that whenever I copy something to CLIPBOARD, it is also saved in PRIMARY. It's mostly assumed that to copy something you need to select it so most of the time this is not needed.
However, sometimes I just click the classic "copy to clipboard" button and get something to CLIPBOARD that it's not in PRIMARY. I use Shift+Insert a lot for pasting and having to track which selection I'm using makes me confused.
I know there are tools like clipit or parcellite that do something like this, but I want something without a GUI, something like a simple systemd service I can launch and forget.
I tried using a systemd service for autocutsel configured like
ExecStartPre=autocutsel -f
ExecStart=autocutsel -f --selection PRIMARY
However this also synchronizes PRIMARY -> CLIPBOARD, which breaks some very usual workflow like selecting text and then replacing it with the contents of the clipboard.
I've looked for this option in the manpage of autocutsel, but I find it kind of confusing, with a lot of mentions of cutbuffer (which I don't think is used anymore) and Windows, which I don't use. So I don't even know if this is possible with autocutsel.
|
Here's a quick Python program to do so, using the PyGObject bindings for GTK. I'm not an expert in this, so this is just an example that works for me, using rpm pygobject2 on an old Fedora release. You will have to find the equivalent packages yourself.
#!/usr/bin/python3
# copy clipboard to primary every time it changes
# https://unix.stackexchange.com/a/660344/119298
import signal, gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk, Gdk
# callback with args (Gtk.Clipboard, Gdk.EventOwnerChange)
def onchange(cb, event):
text = clipboard.wait_for_text() # convert contents to text in utf8
primary.set_text(text, -1) # -1 to auto set length
signal.signal(signal.SIGINT, signal.SIG_DFL) # allow ^C to kill
primary = Gtk.Clipboard.get(Gdk.SELECTION_PRIMARY)
clipboard = Gtk.Clipboard.get(Gdk.SELECTION_CLIPBOARD)
clipboard.connect('owner-change', onchange) # ask for events
Gtk.main() # loop forever
| What's a lightweight way to synchronize CLIPBOARD -> PRIMARY selections? |
1,375,280,881,000 |
I use this command to get the name of my network interfaces and their mac address
ip -o link | awk '$2 != "lo:" {print $2, $(NF-2)}' | sed 's_: _ _'
out:
enp2s0 XX:XX:XX:XX:XX:XX
wlp1s0 YY:YY:YY:YY:YY
and this one to get the IP:
ip addr show $lan | grep 'inet ' | cut -f2 | awk '{ print $2}'
out:
127.0.0.1/8
192.168.1.23/24
or this one:
ifconfig | grep "inet " | grep -Fv 127.0.0.1 | awk '{print $2}'
or another:
ifconfig | grep -E "([0-9]{1,3}\.){3}[0-9]{1,3}" | grep -v 127.0.0.1 | awk '{ print $2 }' | cut -f2 -d: | head -n1
out:
192.168.1.23
What command can I use to know (the order is not relevant):
interface name | IPv4 address | MAC address
example:
enp2s0 192.168.1.23 XX:XX:XX:XX:XX:XX
in a single line, but only from active interfaces (except lo) (for ubuntu 20.04)?
I have tried this solution but it did not work for me
|
In bash this works:
paste <(ip -o -br link) <(ip -o -br addr) | awk '$2=="UP" {print $1,$7,$3}'
but it relies on the ip output being in the same order for link and addr. To be sure, you could use join with sort instead:
join <(ip -o -br link | sort) <(ip -o -br addr | sort) | awk '$2=="UP" {print $1,$6,$3}'
In plain sh, process substitution isn't available, so it couldn't be quite as concise as this.
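Here is a self-contained sketch of the join approach, with the ip output replaced by static sample lines; note the awk field numbers ($5, $3) reflect this simplified data, not the real ip output above:

```shell
# fake `ip -o -br link` output: name, state, MAC address
printf '%s\n' 'eth0 UP aa:bb:cc:dd:ee:ff' 'lo UNKNOWN 00:00:00:00:00:00' | sort > /tmp/link.txt
# fake `ip -o -br addr` output: name, state, IPv4 address/prefix
printf '%s\n' 'eth0 UP 192.168.1.23/24' 'lo UNKNOWN 127.0.0.1/8' | sort > /tmp/addr.txt
# join the two tables on the interface name, keep only interfaces that are UP
join /tmp/link.txt /tmp/addr.txt | awk '$2=="UP" {print $1, $5, $3}'
# -> eth0 192.168.1.23/24 aa:bb:cc:dd:ee:ff
```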
| how do i get interface name, ip and mac from active interface only (except lo) |
1,375,280,881,000 |
In Terminal, how can set up a custom warning so that when I type a specific command like
git pull origin master
the command doesn't go through and I get a warning output like
Did you mean git rebase origin/master?
I've considered creating a bash script or simply using an alias in my bash profile but I'm not sure what the best method would be.
Thanks.
|
There aren't any hooks for pre-pull, but you might find this one useful: https://git-scm.com/docs/githooks#_post_merge
Hooks in general are nice, if you did not know about them.
As for an alias, you would have to create one for git, i.e. create a function in .bashrc or .bash_aliases and check whether the arguments are pull origin master; if not, invoke git, else print the warning. Or make an alias named git pointing to a custom script, in effect a wrapper script for git.
In short: because an alias name cannot contain spaces, the alias would have to be for the first "word", i.e. the command itself.
In .bash_aliases or the like:
mygit()
{
if [ "$1" = "pull" ] && \
[ "$2" = "origin" ] && \
[ "$3" = "master" ]; then
printf 'Did you mean git rebase origin/master?\n' >&2
return 1
else
git "$@"
fi
}
alias git=mygit
Or name the function git itself, but call the real git internally with:
command git "$@"
instead of a plain git "$@" (which would recurse into the function).
Also note:
https://www.gnu.org/software/bash/manual/bash.html#Aliases
For almost every purpose, shell functions are preferred over aliases.
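To see the function's behavior without touching a real repository, the real git call can be stubbed out (the echo below stands in for command git "$@"):

```shell
mygit()
{
    if [ "$1" = "pull" ] && \
       [ "$2" = "origin" ] && \
       [ "$3" = "master" ]; then
        printf 'Did you mean git rebase origin/master?\n' >&2
        return 1
    else
        echo "would run: git $*"   # stand-in for `git "$@"`
    fi
}
mygit pull origin master 2>&1 || true   # -> Did you mean git rebase origin/master?
mygit status                            # -> would run: git status
```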
| How to create a custom CLI warning |
1,375,280,881,000 |
I have checked all the previous posts related to this, but I cannot find the way that I wanted to do.
I have a file with some exponential numbers as below. I do not know which columns have exponential numbers.
file1.txt
1 499 5e-29 0.33 1.35 46.65
5 999 0.4444 3e-6 0.556 89.444
many more lines
I want to convert all the exponential numbers to decimal numbers.
If I want to convert only one number, I could do as below.
echo 12.34567E-3 | awk '{printf "%5.10f\n", $1}'
But does anybody know how to do this for a whole file, where the exponential numbers may be in any column?
Thank you in advance!
|
You can loop over all fields of each line. Testing whether the field is a numerical value ($i+0 == $i) and (&&) whether it contains the character e seems to work well. So we modify only these fields to decimals.
Here using awk's sprintf function:
awk '{
for (i=1;i<=NF;i++) if ($i+0 == $i && $i ~ /e/) $i = sprintf("%.10f", $i)
} 1' file
You can use any format instead of .10f into there.
1 at the end means the default action, to print the line.
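Applied to the second sample line it gives the following (note that a fixed format like %.10f flattens very small numbers such as 5e-29 to all zeros, so pick a width that suits your data):

```shell
printf '5 999 0.4444 3e-6 0.556 89.444\n' |
  awk '{
    for (i=1;i<=NF;i++) if ($i+0 == $i && $i ~ /e/) $i = sprintf("%.10f", $i)
  } 1'
# -> 5 999 0.4444 0.0000030000 0.556 89.444
```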
| Convert all exponential numbers to decimal numbers in a file in Linux? |
1,375,280,881,000 |
Reading the documentation about the Linux file command. I have found this in the magic test description:
Any file with some invariant identifier at a small fixed offset into the file
can usually be described in this way. The information identifying these files
is read from /etc/magic and the compiled magic file /usr/share/misc/magic.mgc,
or the files in the directory /usr/share/misc/magic if the compiled file
does not exist. In addition, if $HOME/.magic.mgc or $HOME/.magic exists,
it will be used in preference to the system magic files.
I can imagine that the magic file is a collection of "magic number -> kind of file" relations, which the file program uses to determine the kind of file from its header.
But I don't understand:
Why does this file have to be compiled?
How can you create a compiled magic file to use with the command?
What is the usage of the command options related to this kind of test and file, e.g. -C or -m?
|
Why does this file have to be compiled?
I suspect it's for performance reasons. The magic database is not small. File would need to parse every human-readable magic source file, construct the structures it uses to detect file formats, compute the strength of every pattern and sort everything by that. This process could be unbearably slow, especially decades ago (file has been around since the '70s).
I guess they could let file build the database when it's executed and cache the results, but given the use case it seems completely unnecessary, complicated and would have a number of other issues.
How can you create a compiled magic file to use with the command?
file -m MAGIC_SOURCE -C
This creates a compiled magic file with the same base name as MAGIC_SOURCE and extension .mgc in your current working directory.
You can use that on a file or directory. Avoid the trailing slash for directories.
For instance, file -m ~/.magic -C compiles all the ~/.magic/* sources into a single .magic.mgc compiled file.
What is the usage of the command options related to this kind of test and file, e.g. -C or -m?
-C tells file to compile some source files.
-m tells file which magic file (source or compiled) to use. You can use it without -C too, for instance to use any file, including source ones, to detect a file type.
See man file for more information on the command and man magic for more information on the magic source file syntax.
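As an end-to-end sketch (the magic pattern, file names, and the string MYMAGIC are made up for illustration):

```shell
cd /tmp
# one-line magic source: at offset 0, the string MYMAGIC means "My custom format"
printf '0\tstring\tMYMAGIC\tMy custom format\n' > mymagic
file -C -m mymagic                   # compiles to ./mymagic.mgc
printf 'MYMAGIC payload' > sample.bin
file -m mymagic.mgc sample.bin       # identifies it as "My custom format"
```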
| What is the usage of the "compiled magic file" mentioned in the "file" command man documentation? |
1,375,280,881,000 |
I would like to use a perl compatible regex engine in the less command line utility. Is that possible?
|
Not out of the box. What you can do as a substitute is to send the input to grep --perl-regexp (or -P) before piping it to less, for example:
some_command | grep -P … | less
If you want to see the rest of the file as well, with the matches highlighted, you can use this trick and pipe the result to less --raw-control-chars (or -r).
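For example, a negative lookahead is a PCRE-only feature that plain grep -E cannot express (this assumes your grep is GNU grep built with PCRE support):

```shell
# match "error" + digits only when not immediately followed by an x
printf 'error42\nwarning\nerror7x\n' | grep -P 'error\d+(?!x)'
# -> error42
```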
| Can I change the regex engine used to search in `less`? |
1,375,280,881,000 |
I installed the beep tool (from normal rep) on my ubuntu 18.04
After modprobe pcspkr everything was OK, but the sound came only from the very small beeper inside the PC's case.
Is there a way to hear it through the normal loudspeakers, like any mp3, YouTube audio, etc.?
I note that the echo -e '\007' bell command from my shell is heard through the normal loudspeakers, which is exactly what I want.
Using the -e option of beep didn't help.
|
Welcome to Unix & Linux StackExchange!
The beep tool is explicitly designed to use the beeper only.
The echo -e '\007' uses whatever bell effect your terminal emulator is configured to do: on the text-mode console, it typically uses the beeper. In a GUI desktop terminal window, the terminal emulator might have its own configurable beep sound, or it might use the desktop environment's default beep effect, which is usually configurable in the sound settings of the desktop environment.
Having said that, it is sometimes possible to redirect the beeper's sound into the speakers. In order to do that, you need an integrated sound chip (as most modern systems have), with a specific ability to reroute the beeper sound (which is included in some but not all sound chips). To enable that feature in Linux, you'll need to access the sound mixer functions of the actual sound card, instead of the simplified version that you'll get normally if your system includes the PulseAudio subsystem.
For example, if you use the alsamixer command with no options in a PulseAudio-enabled system, you may see the volume control of PulseAudio only. But if you explicitly specify the sound card to use with e.g. alsamixer -c 0 to specify the first sound card/chip of the system, you'll get the full audio mixer settings.
If the list of available sound channels includes "Beep", "Digital Beep" or something like that, it would be the PC beeper redirection channel. It might be just an on/off switch setting, or an adjustable level setting, or both. If you enable/unmute the channel, the beeper's signal should be redirected to the sound chip and output through the normal speakers.
Note: if your system has this feature and you decide to use it, check the volume level of the beep before you reboot the system. If you enable the redirection and set it to full volume, the standard beep at reboot time may come through the speakers and get amplified to unexpected loudness.
(Yes, this happened to me once. Apparently the sound chip is not always fully reset at reboot, at least not until the OS startup has proceeded to load the sound drivers again.)
| Is there a way to switch the output sound of the beep program to the normal loud speaker instead of beeper? |
1,375,280,881,000 |
I am using this command chain to filter out bot/crawler traffic and ban the IP addresses. Is there any way I can make this command chain more efficient?
sudo awk -F' - |\\"' '{print $1, $7}' access.log |
grep -i -E 'bot|crawler' |
grep -i -v -E 'google|yahoo|bing|msn|ask|aol|duckduckgo' |
awk '{system("sudo ufw deny from "$1" to any")}'
Here is a sample log file i am parsing. The default apache2 access.log
173.239.53.9 - - [09/Oct/2019:01:52:39 +0000] "GET /robots.txt HTTP/1.1" 200 3955 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; FSL 7.0.6.01001)"
46.229.168.143 - - [09/Oct/2019:01:54:56 +0000] "GET /robots.txt HTTP/1.1" 200 4084 "-" "Mozilla/5.0 (compatible; SemrushBot/6~bl; +http://www.semrush.com/bot.html)"
157.55.39.20 - - [09/Oct/2019:01:56:10 +0000] "GET /robots.txt HTTP/1.1" 200 3918 "-" "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)"
65.132.59.34 - - [09/Oct/2019:01:56:53 +0000] "GET /robots.txt HTTP/1.1" 200 4150 "-" "Gigabot (1.1 1.2)"
198.204.244.90 - - [09/Oct/2019:01:58:23 +0000] "GET /robots.txt HTTP/1.1" 200 4480 "-" "Mozilla/5.0 (compatible; MJ12bot/v1.4.8; http://mj12bot.com/)"
192.151.157.210 - - [09/Oct/2019:02:03:41 +0000] "GET /robots.txt HTTP/1.1" 200 4480 "-" "Mozilla/5.0 (compatible; MJ12bot/v1.4.8; http://mj12bot.com/)"
93.158.161.112 - - [09/Oct/2019:02:09:35 +0000] "GET /neighborhood/ballard/robots.txt HTTP/1.1" 404 31379 "-" "Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)"
203.133.169.54 - - [09/Oct/2019:02:09:43 +0000] "GET /robots.txt HTTP/1.1" 200 4281 "-" "Mozilla/5.0 (compatible; Daum/4.1; +http://cs.daum.net/faq/15/4118.html?faqId=28966)"
Thanks
|
Using a single awk command:
awk -F' - |\"' 'tolower($7) ~ /bot|crawler/ && tolower($7) !~ /google|yahoo|bing|msn|ask|aol|duckduckgo/{system("sudo ufw deny from "$1" to any")}' access.log
This filters only entries that have bot or crawler in the 7th column (what your first grep command does), and only if the 7th column does not contain google|yahoo|bing|msn|ask|aol|duckduckgo (what your second grep command does). Any matching line will have sudo ufw deny from "$1" to any executed on its first column (what your final awk command does).
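To dry-run the filtering logic before letting it call ufw, swap the system() call for a print; the sample below uses two lines from the log above:

```shell
cat > /tmp/access.log <<'EOF'
46.229.168.143 - - [09/Oct/2019:01:54:56 +0000] "GET /robots.txt HTTP/1.1" 200 4084 "-" "Mozilla/5.0 (compatible; SemrushBot/6~bl; +http://www.semrush.com/bot.html)"
157.55.39.20 - - [09/Oct/2019:01:56:10 +0000] "GET /robots.txt HTTP/1.1" 200 3918 "-" "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)"
EOF
# SemrushBot is not whitelisted, bingbot is, so only one IP survives
awk -F' - |\"' 'tolower($7) ~ /bot|crawler/ &&
                tolower($7) !~ /google|yahoo|bing|msn|ask|aol|duckduckgo/ {
                  print "would ban", $1
                }' /tmp/access.log
# -> would ban 46.229.168.143
```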
| any way to make this command chain shorter or better |
1,375,280,881,000 |
I was restoring a backup of my Raspberry Pi on a new micro SD card.
The original card was 16Gb and the destination card was 16Gb, too. However, during the transfer dd complained that there isn't any space left.
Now, I know that every card has a different real size, but how can I fix that? Is it possible to "chop" off few bytes and make the disk image fit in the card?
|
Yes, you can "chop" bytes off a raw disk image file using truncate.
truncate -s 15G image.raw
Obviously, this will affect the data within the disk image. You probably want to shrink the contained filesystems so they are not truncated along the way. gparted is a tool with nice UI to achieve this.
gparted image.raw
Just shrink and move partitions until there is enough "unallocated space" at the end of the disk. If your disk's partitions are defined in the MBR, you are done at this point. If a GPT is used, you need to leave a few bytes more and re-generate the secondary GPT after truncating.
How do I resize a disk image device? is somewhat related.
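The truncation step itself is easy to try on a throwaway image file:

```shell
truncate -s 16M /tmp/image.raw    # create a sparse 16 MiB stand-in "card image"
truncate -s 15M /tmp/image.raw    # chop it down to 15 MiB
stat -c %s /tmp/image.raw         # -> 15728640
```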
| dd: No space left on device |
1,375,280,881,000 |
in the context of automating the installation of a machine, I would like to configure firefox, specifically the proxy settings, from the command line, either by executing commands or by editing configuration files, for example.
Is this possible, and if yes how?
Edit: I forgot to mention that I would like to configure the proxy for all users.
|
You have basically two choices (that i can think of)
Launch firefox, and update your profile with the correct settings (proxy ones for example).
Then close it and retrieve your configuration from ~myusername/.mozilla/firefox/xxxxxxx.default/prefs.js. xxxxx is a dynamic string. You can then use these user preferences for your deployment.
Directly update that file, after you've deployed / installed a machine, with the proxy settings.
When you will launch firefox with that user, the settings will be directly applied.
According to the comment of @Sparhawk, the second option would fit better. In that case we keep the original prefs.js as intact as possible, just changing the proxy settings:
user_pref("network.proxy.http", "IPADDRESS OR URL");
user_pref("network.proxy.http_port", 8080);
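For unattended setups, those lines can be appended to each profile's prefs.js while Firefox is closed. A sketch, with the profile path and proxy host as placeholders (network.proxy.type 1 selects manual proxy configuration):

```shell
# placeholder profile dir; the real one looks like ~/.mozilla/firefox/xxxxxxxx.default
profile=/tmp/demo.default
mkdir -p "$profile"
cat >> "$profile/prefs.js" <<'EOF'
user_pref("network.proxy.type", 1);
user_pref("network.proxy.http", "proxy.example.com");
user_pref("network.proxy.http_port", 8080);
EOF
```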
| Configure firefox without using the gui |
1,375,280,881,000 |
Hello Unix & Linux Community,
I am here to ask how I can translate a Windows Batch File (.bat) into a Linux Bash file (.sh), because I want to make a program I wrote available on Linux. But I have no idea how to get it to work in Linux. I understand that some things, like EXEs, are "non-existent" in Linux.
So, the code I want to translate is too long to fit here, so I have posted it elsewhere, and here is a link to it. If there is any way to do this, I would greatly appreciate it!
|
I'm afraid there is no automated way to convert a BAT script into Bash. This leaves you with two options:
Option 1. Convert the script manually.
The script that you linked looks simple enough, which means it shouldn't take much time to convert it once you're familiar with the basics of Bash scripting. This book should be a good starting point in your studies. Appendix N of the book contains a nice glossary that could help you replace your old Batch idioms with Bash ones.
Option 2. Use wineconsole.
wineconsole is part of the Wine compatibility layer that allows executing BAT files on Linux systems:
$ wineconsole MyCode.bat
See the following question for details on how to do that. Although appealing, this may be a dead end if you want run other Linux programs from your script. Furthermore, not all users will have wine installed or consider it an acceptable tradeoff.
| Translate BATCH to Bash |
1,375,280,881,000 |
Goal: Play music on a server, preferably using cmus, using SSH for player control.
First try: cmus
I run cmus in a terminal and literally nothing happens; it just loads (I guess). Trying cmus -vvvvv also just loads. I tried this and this, with no change. But: running it from a physical terminal on the computer works! (Both starting cmus and playing audio successfully.)
Second try: MOC
Running mocp opens it up, I can add files too. When trying to play a song, this message appears: can't open audio: device or resource busy MOC - No possible solution found.
Third try: mp3blaster
It starts. I can add files. It doesn't play: Failed to open sound device
Tried several suggestions (very outdated) from google, nothing helped. The one that seemed to help many others with this was padsc mp3blaster - But again, no help for me.
I am using ALSA with PulseAudio. Audio itself works fine. play or mplayer both work fine, but they don't offer libraries and playlists. Both work fine over SSH too.
I'm really clueless to what to do here since cmus doesn't print anything and mplayer works fine. I checked alsamixer and nothing is muted or disabled. There's only one single soundcard.
Not trying to have a broad suggest-me-something question here, I just added the other players since it might help find the issue, but the question aims to focus on getting cmus to work.
I've tried DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus cmus too (found here), same result.
Edit: TO BE CLEAR: The music is on the machine I SSH into and I want to play the music locally from the machine, controlling the CLI music player via SSH. Sorry for the confusion. I'm not trying to stream audio over SSH. I just want to use cmus in an SSH Terminal to play music that already is on the server I'm connecting to.
Edit: mplayer doesn't work anymore either, it used to all the time until I just tried:
AO: [pulse] Init failed: Connection refused
Failed to initialize audio driver 'pulse'
[AO_ALSA] alsa-lib: pcm_hw.c:1602:(snd_pcm_hw_open) open '/dev/snd/pcmC0D0p' failed (-16): Device or resource busy
[AO_ALSA] alsa-lib: pcm_dmix.c:1052:(snd_pcm_dmix_open) unable to open slave
[AO_ALSA] Playback open error: Device or resource busy
Failed to initialize audio driver 'alsa'
[AO SDL] Samplerate: 32000Hz Channels: Stereo Format s16le
[AO SDL] using aalib audio driver.
[AO SDL] Unable to open audio: No available audio device
Failed to initialize audio driver 'sdl:aalib'
Could not open/initialize audio device -> no sound.
Audio: no sound
Video: no video
Same for play:
ALSA lib pcm_dmix.c:1052:(snd_pcm_dmix_open) unable to open slave
play FAIL formats: can't open output file `default': snd_pcm_open error: Device or resource busy
I've tried it as root too, same result. But: If I run either as the user which is logged into the X session (even over SSH), it works.
|
I solved the issue with something rather obvious that I missed this whole time. I had to allow other users (not the user logged into the X session under which the pulseaudio daemon runs) access to PA.
As the user under which the PA daemon runs:
# create pulse config dir in $HOME if it doesn't exist yet
mkdir .pulse
# copy the default PA config file
cp /etc/pulse/default.pa .pulse/default.pa
# edit the file
nano .pulse/default.pa
Then add onto the end of the file:
# make PA accessible by all users
load-module module-native-protocol-unix auth-anonymous=1 socket=/tmp/acpulsesocket
Then, logged in as the user from which you want to play audio:
# create pulse config dir in $HOME of the controlling user
mkdir .pulse
# create client configuration file
nano .pulse/client.conf
And paste the following into the file:
default-server = unix:/tmp/acpulsesocket
Save, restart pulseaudio: pulseaudio -k
Now cmus and every other player is working just fine.
| Almost every CLI music player doesn't work (in an SSH terminal) |
1,375,280,881,000 |
Why are files listed alphabetically, ignoring file name length in the terminal?
Perhaps I shouldn't say "ignoring" file name length, but rather, why is there a difference in displaying files in the terminal vs. a GUI.
This is certainly a trivial question, but I've been a bit curious about this one for a while.
In the terminal, a vanilla ls command using the -l option (with no other sorting options specified) lists files in alphabetical order starting from the top line of the list moving down. Say I have a directory full of files created with the following:
$ touch file1{1..16}
ls in that same directory would display the following:
-rw-r--r--. 1 user user 0 May 24 11:14 file1
-rw-r--r--. 1 user user 0 May 24 11:14 file10
-rw-r--r--. 1 user user 0 May 24 11:14 file11
-rw-r--r--. 1 user user 0 May 24 11:14 file12
-rw-r--r--. 1 user user 0 May 24 11:14 file13
-rw-r--r--. 1 user user 0 May 24 11:14 file14
-rw-r--r--. 1 user user 0 May 24 11:14 file15
-rw-r--r--. 1 user user 0 May 24 11:14 file16
-rw-r--r--. 1 user user 0 May 24 11:14 file2
-rw-r--r--. 1 user user 0 May 24 11:14 file3
-rw-r--r--. 1 user user 0 May 24 11:14 file4
-rw-r--r--. 1 user user 0 May 24 11:14 file5
-rw-r--r--. 1 user user 0 May 24 11:14 file6
-rw-r--r--. 1 user user 0 May 24 11:14 file7
-rw-r--r--. 1 user user 0 May 24 11:14 file8
-rw-r--r--. 1 user user 0 May 24 11:14 file9
My question is why does "file10" follow "file1" in this way in the terminal? When viewing files in a details or list view ordered by name or type in a GUI environment, those same files are listed as "file1", "file2", "file3", etc.
Lists of files in a GUI seem to prioritize alphabetical order by file name length, listing files from smallest length to largest. Is this correct? Is there a more technical reason for this? Is the ls command "going out of its way" to order files the way it does, or likewise with a GUI?
|
The default ordering for ls is alphabetical (lexicographic). In this scenario, digits are not numbers, just characters. So file1 is a shorter name than file10, but otherwise identical, and therefore comes before it in the list.
If you want natural versioned order you could try ls -l --sort=version (or ls -lv)
-rw-r--r--+ 1 roaima 0 May 24 18:50 file0
-rw-r--r--+ 1 roaima 0 May 24 18:50 file1
-rw-r--r--+ 1 roaima 0 May 24 18:50 file2
...
-rw-r--r--+ 1 roaima 0 May 24 18:50 file9
-rw-r--r--+ 1 roaima 0 May 24 18:50 file10
-rw-r--r--+ 1 roaima 0 May 24 18:50 file11
-rw-r--r--+ 1 roaima 0 May 24 18:50 file12
-rw-r--r--+ 1 roaima 0 May 24 18:50 file13
There are a number of other sorting options available in ls; see man ls for details.
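The same two orderings are easy to compare with sort, since GNU sort's -V is the same version sort that ls -v uses:

```shell
printf 'file1\nfile10\nfile2\n' | sort      # lexicographic: file1 file10 file2
printf 'file1\nfile10\nfile2\n' | sort -V   # version order:  file1 file2 file10
```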
| Why are files listed alphabetically, ignoring file name length in the terminal? |
1,375,280,881,000 |
On my local box "machineA", I have two folders "/primary" and "/secondary". These two folders have some files in it. Now on the remote server "machineB" I have one folder "/bat/snap/" which contains lot of files.
All the files in the "/primary" and "/secondary" folders on "machineA" should also be present on the remote server "machineB" in the directory "/bat/snap/". Now I need to compare the checksums of all files in the "/primary" and "/secondary" folders on the local box "machineA" against the remote server's "/bat/snap/" directory. If there is any checksum mismatch, I want to report all the problem files on "machineA".
Do I need to use md5checksum here?
Update
This is the command I am running on "machineA":
find /primary/ /secondary/ -type f | xargs md5sum | ssh machineB '(cd /bat/snap/ && md5sum -c)' | egrep -v 'OK$'
Below is the error I am getting, after which I stopped the above command. I checked both servers and I can see this file is present, so what's wrong?
md5sum: /primary/abc_monthly_134_proc_7.data: No such file or directory
/primary/abc_monthly_134_proc_7.data: FAILED open or read
|
This is what the various md*sum files are written for.
On machine A:
find primary secondary -type f | xargs md5sum > checksum.md5
(copy file to machine B)
Machine B:
md5sum -c checksum.md5
Edit: Combined into a single command: find primary secondary -type f | xargs md5sum | ssh machineB '(cd /location_on_B/ && md5sum -c)' | egrep -v 'OK$'
(Another option is to tell rsync to run in dry-run mode with --checksum.)
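A local round trip shows the mechanics; in real use the checksum.md5 file travels to machine B over ssh as above:

```shell
mkdir -p /tmp/demo/primary && cd /tmp/demo
echo hello > primary/a.txt
find primary -type f | xargs md5sum > checksum.md5
md5sum -c checksum.md5                                # -> primary/a.txt: OK
echo changed > primary/a.txt                          # simulate a mismatch
md5sum -c checksum.md5 2>/dev/null | grep -v 'OK$'    # -> primary/a.txt: FAILED
```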
| compare checksum of files between two remote servers |
1,375,280,881,000 |
I have the following script to monitor some running processes:
$ cat ./print_running.sh
#!/bin/bash
ps aux | grep 'python energydata.py' | sed -r 's/\{"spectral": 0//;s/, "pp_enable.*y": 0.9,//;s/ "consumer.*ressure": 0.15,//; s/ "gamma.*//'
When I run it from the terminal I get output like below:
user 27994 37.6 1.0 706712 125148 pts/19 R 15:46 0:46 python energydata.py , "l_max": 0.25 "level_flexibility": 5, "pi": 0.6489865, "z_input": 0.4450386,
user 28138 37.9 1.0 706716 124852 pts/19 R 15:47 0:32 python energydata.py , "l_max": 0.75 "level_flexibility": 5, "pi": 0.3928358, "z_input": 0.4836933,
user 28255 38.2 1.0 706716 125004 pts/19 R 15:47 0:12 python energydata.py , "l_max": 0.5 "level_flexibility": 5, "pi": 0.4088101, "z_input": 0.5011101,
user 28267 38.5 1.0 706712 125152 pts/19 R 15:47 0:13 python energydata.py , "l_max": 0.75 "level_flexibility": 5, "pi": 0.5223562, "z_input": 0.4416432,
user 28287 39.3 1.0 706712 125084 pts/19 R 15:48 0:10 python energydata.py , "l_max": 0.5 "level_flexibility": 5, "pi": 0.1073945, "z_input": 0.6377535,
user 28291 39.1 1.0 706708 124824 pts/19 R 15:48 0:10 python energydata.py , "l_max": 0.5 "level_flexibility": 5, "pi": -0.0572559, "z_input": 0.746638,
user 28305 38.9 1.0 706708 124812 pts/19 R 15:48 0:07 python energydata.py , "l_max": 0.5 "level_flexibility": 5, "pi": 0.2459162, "z_input": 0.5575399,
user 28328 38.9 1.0 706716 124836 pts/19 R 15:48 0:03 python energydata.py , "l_max": 0.5 "level_flexibility": 5, "pi": 0.7709965, "z_input": 0.3985181,
user 28329 41.1 1.0 706712 125020 pts/19 R 15:48 0:04 python energydata.py , "l_max": 0.75 "level_flexibility": 5, "pi": 0.6900067, "z_input": 0.3973544,
user 28365 0.0 0.0 14232 1092 pts/2 S+ 15:48 0:00 grep python energydata.py
But when I run the command through watch to monitor the output in real time, the sed filtering doesn't seem to be working properly. The output is like below:
$ watch ./print_running.sh
user 28427 38.2 1.0 706712 125128 pts/19 R 15:49 0:38 python energydata.py , "l_max": 0.75, "pp_enable": 0.0, "number_of_runs": 1, "profit": 0.0, "consumers_data": {"24": [[63,
user 28445 37.7 1.0 706716 124844 pts/19 R 15:49 0:35 python energydata.py , "l_max": 0.5, "pp_enable": 0.0, "number_of_runs": 1, "profit": 0.0, "consumers_data": {"24": [[63, 7
user 28511 38.0 1.0 706712 124820 pts/19 R 15:50 0:18 python energydata.py , "l_max": 0.5, "pp_enable": 0.0, "number_of_runs": 1, "profit": 0.0, "consumers_data": {"24": [[63, 7
user 28519 38.5 1.0 706716 124528 pts/19 R 15:50 0:18 python energydata.py , "l_max": 0.75, "pp_enable": 0.0, "number_of_runs": 1, "profit": 0.0, "consumers_data": {"24": [[63,
user 28533 39.2 1.0 706712 125348 pts/19 R 15:50 0:15 python energydata.py , "l_max": 0.5, "pp_enable": 0.0, "number_of_runs": 1, "profit": 0.0, "consumers_data": {"24": [[63, 7
user 28546 38.0 1.0 706708 125392 pts/19 R 15:50 0:14 python energydata.py , "l_max": 0.5, "pp_enable": 0.0, "number_of_runs": 1, "profit": 0.0, "consumers_data": {"24": [[63, 7
user 28576 38.4 1.0 706716 125100 pts/19 R 15:50 0:09 python energydata.py , "l_max": 0.5, "pp_enable": 0.0, "number_of_runs": 1, "profit": 0.0, "consumers_data": {"24": [[63, 7
user 28605 39.4 1.0 706712 125152 pts/19 R 15:50 0:07 python energydata.py , "l_max": 0.75, "pp_enable": 0.0, "number_of_runs": 1, "profit": 0.0, "consumers_data": {"24": [[63,
user 28609 37.4 1.0 706716 124868 pts/19 R 15:50 0:07 python energydata.py , "l_max": 0.5, "pp_enable": 0.0, "number_of_runs": 1, "profit": 0.0, "consumers_data": {"24": [[63, 7
user 28675 40.2 1.0 706456 124760 pts/19 R 15:50 0:01 python energydata.py , "l_max": 0.25, "pp_enable": 0.0, "number_of_runs": 1, "profit": 0.0, "consumers_data": {"24": [[63,
user 28689 0.0 0.0 14232 972 pts/2 S+ 15:50 0:00 grep python energydata.py
The problem is that when using watch the argument list doesn't appear filtered, so I cannot monitor the parameters that interest me.
Questions:
What causes the difference in output when the script is executed directly or through watch?
How can I update the script so that has the desired output in all cases?
Update:
The full output of ps aux without the sed command is lines like below:
user 32381 24.5 0.7 523028 90816 pts/19 R 16:19 0:00 python energydata.py {"spectral": 0, "l_max": 0.75, "pp_enable": 0.0, "number_of_runs": 1, "profit": 0.0, "consumers_data": {"24": [[63, 73]], "25": [[64, 74]], "26": [[65, 75]], "27": [[66, 76]], "20": [[59, 69]], "21": [[60, 70]], "22": [[61, 71]], "23": [[62, 72]], "28": [[67, 77]], "29": [[68, 78]], "4": [[43, 53]], "8": [[47, 57]], "59": [[66, 76]], "58": [[65, 75]], "55": [[62, 72]], "54": [[61, 71]], "57": [[64, 74]], "56": [[63, 73]], "51": [[58, 68]], "50": [[57, 67]], "53": [[60, 70]], "52": [[59, 69]], "88": [[64, 74]], "89": [[65, 75]], "82": [[57, 67]], "83": [[58, 68]], "80": [[55, 65]], "81": [[56, 66]], "86": [[61, 71]], "87": [[63, 73]], "84": [[59, 69]], "85": [[60, 70]], "3": [[42, 52]], "7": [[46, 56]], "100": [[62, 72]], "39": [[46, 56]], "38": [[45, 55]], "33": [[40, 50]], "32": [[71, 81]], "31": [[70, 80]], "30": [[69, 79]], "37": [[44, 54]], "36": [[43, 53]], "35": [[42, 52]], "34": [[41, 51]], "60": [[67, 77]], "61": [[68, 78]], "62": [[69, 79]], "63": [[70, 80]], "64": [[71, 81]], "65": [[40, 50]], "66": [[41, 51]], "67": [[42, 52]], "68": [[43, 53]], "69": [[44, 54]], "2": [[41, 51]], "6": [[45, 55]], "99": [[43, 53]], "98": [[42, 52]], "91": [[67, 77]], "90": [[66, 76]], "93": [[69, 79]], "92": [[68, 78]], "95": [[71, 81]], "94": [[70, 80]], "97": [[41, 51]], "96": [[40, 50]], "11": [[50, 60]], "10": [[49, 59]], "13": [[52, 62]], "12": [[51, 61]], "15": [[54, 64]], "14": [[53, 63]], "17": [[56, 66]], "16": [[55, 65]], "19": [[58, 68]], "18": [[57, 67]], "48": [[55, 65]], "49": [[56, 66]], "46": [[53, 63]], "47": [[54, 64]], "44": [[51, 61]], "45": [[52, 62]], "42": [[49, 59]], "43": [[50, 60]], "40": [[47, 57]], "41": [[48, 58]], "1": [[40, 50]], "5": [[44, 54]], "9": [[48, 58]], "77": [[52, 62]], "76": [[51, 61]], "75": [[50, 60]], "74": [[49, 59]], "73": [[48, 58]], "72": [[47, 57]], "71": [[46, 56]], "70": [[45, 55]], "79": [[54, 64]], "78": [[53, 63]]}, "weight_flexibility": 0.9, 
"level_flexibility": 5, "consumer_ids": ["100", "99", "98", "97", "96", "95", "94", "93", "92", "91", "90", "89", "88", "87", "86", "85", "84", "83", "82", "81", "80", "79", "78", "77", "76", "75", "74", "73", "72", "71", "70", "69", "68", "67", "66", "65", "64", "63", "62", "61", "60", "59", "58", "57", "56", "55", "54", "53", "52", "51", "50", "49", "48", "47", "46", "45", "44", "43", "42", "41", "40", "39", "38", "37", "36", "35", "34", "33", "32", "31", "30", "29", "28", "27", "26", "25", "24", "23", "22", "21", "20", "19", "18", "17", "16", "15", "14", "13", "12", "11", "10", "9", "8", "7", "6", "5", "4", "3", "2", "1"], "weight_friendship": 0.1, "select_algorithm": [4], "number_of_clusters": 100, "max_pressure": 0.15, "pi": 0.3606668, "z_input": 0.4976814, "gamma": 1.0, "cost": 0.02} {"nodes_dictionary":[{"24":["4","8","13","23","27","29","54","65","66","86","88","99"],"25":["3","8","15","19","21","23","28","33","41","58","64","71","73","75","80","84","87","90","100"],"26":["2","4","9","18","19","38","39","45","52","59","64","69","72","73","74","80","83","91","92","100"],"27":["14","24","30","43","48","61","72","77","78","89"],"20":["16","31","33","38","45","48","49","55","61","62","66","73","86","98"],"21":["11","12","25","33","35","36","40","49","53","71","84","87","88","98"],"22":["16","18","39","40","43","46","51","62","63","86","94"],"23":["1","4","8","9","15","18","24","25","34","35","51","52","61","70","72","78","86","87","97"],"28":["1","14","25","30","33","38","41","48","63","64","65","67","89","92"],"29":["2","8","16","24","39","40","46","56","59","63","73","81","99"],"4":["11","15","23","24","26","38","55","76","81","83","85","86","92"],"8":["1","2","17","23","24","25","29","36","51","69","78","87","95","96"],"59":["5","15","26","29","38","48","54","57","61","62","63","74","79","83","85","89","90","92"],"58":["9","11","12","18","25","44","47","51","86","93"],"55":["4","6","14","18","20","31","32","38","44","47","60","61","62","66","68","84","94","96
"],"54":["3","17","24","30","40","42","45","48","52","57","59","65","68","73","74","80","81"],"57":["14","15","34","36","37","54","59","62","67","86","93","94","95","96","98"],"56":["2","14","17","18","29","46","53","60","64","74","82","85","97","98"],"51":["8","13","15","16","17","22","23","30","36","37","40","47","58","62","63","68","70","79"],"50":["7","10","17","18","85","93"],"53":["12","21","37","47","56","60","61","62","67","73","74","78","81","90","97"],"52":["11","18","23","26","36","44","47","54","69","71","74","76","78","84","100"],"88":["6","7","18","21","24","32","61","62","82","87","94"],"89":["13","15","18","27","28","31","33","46","59","60","65","66","78","87","97"],"82":["9","11","13","14","32","35","36","47","56","64","67","72","76","84","88","92"],"83":["4","6","26","30","32","37","44","49","59","62","74","90"],"80":["3","25","26","36","54","61","62","73","84","99"],"81":["4","11","15","19","29","35","38","47","53","54","61","72","73","78","79","91"],"86":["2","4","11","19","20","22","23","24","34","37","47","57","58","60","68","75","97","99"],"87":["1","8","12","15","21","23","25","34","37","47","61","62","66","75","88","89","98"],"84":["2","7","13","21","25","32","40","48","49","52","55","61","62","63","64","71","80","82","94","100"],"85":["4","5","7","9","36","38","47","49","50","56","59","93","98"],"3":["6","14","17","25","43","48","54","62","63","73","80","94"],"7":["46","50","67","72","84","85","88","100"],"100":["7","9","13","25","26","39","40","42","44","47","48","52","77","79","84"],"39":["2","5","6","12","22","26","29","31","35","38","43","46","49","65","98","100"],"38":["4","20","26","28","39","41","42","45","55","59","66","68","74","81","85","90","97","99"],"33":["5","10","14","20","21","25","28","32","35","40","64","66","89","96","98"],"32":["9","10","16","17","18","33","42","55","73","76","82","83","84","88","91"],"31":["9","20","39","42","48","55","66","72","73","89","98","99"],"30":["9","17","27","28","43","46","51","54","60","62",
"71","83"],"37":["14","51","53","57","65","66","68","83","86","87","93","94","96","97"],"36":["8","11","16","18","21","34","51","52","57","67","68","80","82","85","99"],"35":["1","9","13","19","21","23","33","39","60","65","66","68","69","73","76","81","82","94","98"],"34":["19","23","36","42","43","57","64","75","86","87"],"60":["6","9","11","13","17","30","35","42","45","47","49","53","55","56","72","86","89"],"61":["15","20","23","27","53","55","59","71","76","79","80","81","84","87","88"],"62":["3","20","22","30","51","53","55","57","59","77","80","83","84","87","88","93","96","97"],"63":["3","5","10","16","18","22","28","29","47","49","51","59","70","72","75","84","91","92","94","95"],"64":["6","11","13","19","25","26","28","33","34","41","46","56","69","79","82","84","90","91","92"],"65":["9","12","14","24","28","35","37","39","54","89"],"66":["1","6","13","20","24","31","33","35","37","38","40","55","67","79","87","89"],"67":["7","13","14","15","17","19","28","36","41","44","45","48","53","57","66","74","82","90","92","95","97"],"68":["10","15","35","36","37","38","44","47","48","51","54","55","70","75","86","91","95"],"69":["2","8","9","11","26","35","43","46","48","52","64","97","98"],"2":["5","8","10","19","26","29","39","56","69","73","84","86","98"],"6":["3","5","12","15","17","39","43","45","55","60","64","66","73","83","88","92"],"99":["13","24","29","31","36","38","72","80","86","97","98"],"98":["2","9","20","21","31","33","35","39","42","44","45","56","57","69","73","85","87","97","99"],"91":["12","13","18","19","26","32","49","63","64","68","72","81"],"90":["10","15","25","38","48","53","59","64","67","79","83","94","95"],"93":["18","37","50","57","58","62","78","85"],"92":["4","6","10","15","26","28","59","63","64","67","73","79","82"],"95":["8","15","40","44","46","57","63","67","68","70","90"],"94":["3","5","11","22","35","37","40","55","57","63","71","72","75","77","84","88","90"],"97":["14","17","19","23","37","38","47","53","56","62","67","69"
,"78","86","89","98","99"],"96":["5","8","33","37","46","55","57","62","79"],"11":["4","9","10","19","21","36","44","46","48","52","58","60","64","69","72","76","78","81","82","86","94"],"10":["2","11","32","33","44","50","63","68","71","75","77","78","90","92"],"13":["18","24","35","40","51","60","64","66","67","76","78","82","84","89","91","99","100"],"12":["6","17","21","39","41","43","53","58","65","76","78","79","87","91"],"15":["4","6","23","25","40","44","51","57","59","61","67","68","70","71","81","87","89","90","92","95"],"14":["1","3","17","18","27","28","33","37","45","48","55","56","57","65","67","73","75","82","97"],"17":["3","6","8","9","12","14","30","32","50","51","54","56","60","67","72","75","76","97"],"16":["20","22","29","32","36","43","48","51","63"],"19":["2","9","11","25","26","34","35","64","67","79","81","86","91","97"],"18":["13","14","22","23","26","32","36","50","52","55","56","58","63","79","88","89","91","93"],"48":["1","3","11","14","16","20","27","28","31","44","54","59","67","68","69","72","76","84","90","100"],"49":["5","20","21","39","42","60","63","83","84","85","91"],"46":["1","7","11","22","29","30","39","40","41","47","56","64","69","73","89","95","96"],"47":["5","9","40","44","45","46","51","52","53","55","58","60","63","68","78","81","82","85","86","87","97","100"],"44":["10","11","15","47","48","52","55","58","67","68","74","78","83","95","98","100"],"45":["6","14","20","26","38","47","54","60","67","79","98"],"42":["31","32","34","38","41","49","54","60","98","100"],"43":["3","6","12","16","22","27","30","34","39","40","69","79"],"40":["13","15","21","22","29","33","43","46","47","51","54","66","76","84","94","95","100"],"41":["12","25","28","38","42","46","64","67","73"],"1":["8","14","23","28","35","46","48","66","87"],"5":["2","6","33","39","47","49","59","63","76","85","94","96"],"9":["11","17","19","23","26","30","31","32","35","47","58","60","65","69","82","85","98","100"],"77":["10","27","62","94","100"],"76":["4","5
","11","12","13","17","32","35","40","48","52","61","82"],"75":["10","14","17","25","34","63","68","71","73","74","86","87","94"],"74":["26","38","44","52","53","54","56","59","67","75","83"],"73":["2","3","6","14","20","25","26","29","31","32","35","41","46","53","54","75","80","81","92","98"],"72":["7","11","17","23","26","27","31","48","60","63","78","81","82","91","94","99"],"71":["10","15","21","25","30","52","61","75","84","94"],"70":["15","23","51","63","68","95"],"79":["12","18","19","43","45","51","59","61","64","66","81","90","92","96","100"],"78":["8","10","11","12","13","23","27","44","47","52","53","72","81","89","93","97"]}]}
|
It looks like it has to do with ps truncating its output when run under watch; see the question "watch cuts off ps aux output when piped" for a detailed explanation.
The solution is to manually specify the column width for ps, and then it works even inside watch:
#!/bin/bash
COLUMNS=2000000 ps aux | grep 'python energydata.py' | sed -r 's/\{"spectral": 0//;s/, "pp_enable.*y": 0.9,//;s/ "consumer.*ressure": 0.15,//; s/ "gamma.*//'
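Independently of watch, you can sanity-check the sed filter itself by feeding it a captured sample line. The line below is a shortened, hypothetical stand-in for a real ps line (real ones carry the full JSON blob):

```shell
# A shortened, hypothetical stand-in for one line of `ps aux` output:
line='user 28138 37.9 1.0 706716 124852 pts/19 R 15:47 0:32 python energydata.py {"spectral": 0, "l_max": 0.75, "pp_enable": 0.0, "weight_flexibility": 0.9, "level_flexibility": 5, "consumer_ids": [], "max_pressure": 0.15, "pi": 0.39, "z_input": 0.48, "gamma": 1.0, "cost": 0.02}'

# Apply exactly the same filter as in the script (GNU sed, because of -r):
printf '%s\n' "$line" |
    sed -r 's/\{"spectral": 0//;s/, "pp_enable.*y": 0.9,//;s/ "consumer.*ressure": 0.15,//; s/ "gamma.*//'
```

If this prints the trimmed form you expect while watch still shows the untrimmed one, the sed expression itself is fine and the difference must come from what ps feeds it under watch.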
| Bash (sed) script works directly from command line but not through watch |
1,375,280,881,000 |
I have an Android app that has a lot of Log.d(.....); type messages that I would simply like to remove.
I just want a command that removes them all (should go through all directories recursively)
One challenge, however, is that some Log.d calls continue onto the next line, so some are like this:
Log.d("I can be easily deleted", "");
Others are like this
Log.d("I span a new line, "hel"
+ "lo");
Also, the spacing is not always the same: one Log.d might be 30 characters from the start of the line, another could be 14, etc.
I guess something that could be useful is that they all start with Log.d, and end with );
This should just be run on all .java files (*.java)
What is the command to do this? Thank you
|
Basic script
I've used perl because it's simpler to match over multiple lines (unlike, say, sed). The basic script is as follows.
perl -0777 -pe 's/^Log\.d\(.*?(\n.*?)*?\);\n//gm' input
Explanation
perl -0777 -pe: invoke perl using -0777 to slurp the entire file, i.e. allowing multi-line processing.
's/foo/bar/gm': replace foo with bar, even if there are multiple matches (g); this is a multiline expression (m).
^Log\.d\(.*?(\n.*?)*?\);\n: look for lines starting with Log.d(; this expression needs to be escaped (^Log\.d\(). There may be more characters after this (.*?), possibly a newline with more characters ((\n.*?)), and possibly multiple iterations of this last expression (*?). After this, look for a closing ); followed by a newline (escaped as \);\n). All of these wildcards are non-greedy (*? not *). Hence, they will attempt to match the minimum number of characters possible, instead of deleting from the first ^Log\.d\( to the final \);\n of the entire file.
Test
input.txt:
Keep me A
Log.d("I can be easily deleted", "");
Keep me B
Keep me C
Log.d("I span a new line, "hel"
+ "lo");
Keep me D
Run script:
$ perl -0777 -pe 's/^Log\.d\(.*?(\n.*?)*?\);\n//gm' input.txt
Keep me A
Keep me B
Keep me C
Keep me D
Iterate over multiple files
After testing the script as above, apply it to multiple files. Use perl's "in-place" option (-i) to modify the original files. Make a backup of the directory first. If the files are all directly in the same directory, you can just send multiple arguments to the script using shell wildcards.
perl -i -0777 -pe 's/^Log\.d\(.*?(\n.*?)*?\);\n//gm' *.java
However, given you may have nested directories (and I don't know which shell you are using), you can use find here.
find /path/to/dir -name '*.java' -execdir perl -i -0777 -pe 's/^Log\.d\(.*?(\n.*?)*?\);\n//gm' {} \;
Explanation
find /path/to/dir: look in /path/to/dir.
-name '*.java': only find files matching this expression.
-execdir perl -i -0777 -pe 's/^Log\.d\(.*?(\n.*?)*?\);\n//gm' {} \;: run the script above in-place (-i) on the matching file {}.
Look at man find for more information on this format.
sed version benchmarking
As don_crissti suggests in the comments, there is a sed command that will do this as well.
sed -e '/Log/{' -e ': do' -e '/);/d;N' -e 'b do' -e '}' input.txt
I tested both commands using the following file as input.
Keep me A
Log.d("I can be easily deleted", "");
Keep me B
Keep me C
Log.d("I span a new line, "hel"
+ "lo");
Keep me D
Log.d("I span a new line, "hel"
+ "lo" +
+ "there");
Keep me E
I did some benchmarking comparing commands. The perl version is marginally faster for this file on my system.
$ time (for i in {1..1000}; do perl -0777 -pe 's/^Log\.d\(.*?(\n.*?)*?\);\n//gm' input.txt > /dev/null; done)
================
CPU 101%
user 1.484
system 3.372
total 4.793
$ time (for i in {1..1000}; do sed -e '/Log/{' -e ': do' -e '/);/d;N' -e 'b do' -e '}' input.txt > /dev/null; done)
================
CPU 101%
user 2.647
system 2.847
total 5.429
I also created another test file that was 1000 repeats of the input.txt above. In this case, the sed version was faster.
$ time (for i in {1..100}; do perl -0777 -pe 's/^Log\.d\(.*?(\n.*?)*?\);\n//gm' input1000 > /dev/null; done)
================
CPU 100%
user 1.132
system 0.409
total 1.535
$ time (for i in {1..100}; do sed -e '/Log/{' -e ': do' -e '/);/d;N' -e 'b do' -e '}' input1000 > /dev/null; done)
================
CPU 100%
user 0.560
system 0.298
total 0.853
| Delete all lines that start with Log.d |
1,509,615,539,000 |
We add the following users to the group white_house:
kuku, trump, karter
usermod -a -G white_house kuku
usermod -a -G white_house trump
usermod -a -G white_house karter
As everyone knows, we can verify that the users were added to the group with
grep white_house /etc/group
but is it possible to verify it with some other command, or in a more elegant way?
|
You could use getent
getent group white_house
or if you want to check a specific users groups you can use groups
groups karter
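If a script needs a yes/no answer rather than a listing, membership can also be tested with id; the sketch below wraps that in a helper (the in_group name is made up):

```shell
# Succeeds if user $1 belongs to group $2 (primary or supplementary):
in_group() {
    id -nG "$1" | tr ' ' '\n' | grep -qx "$2"
}

# Every user belongs to their own primary group, so this prints "member":
if in_group "$(id -un)" "$(id -gn)"; then
    echo "member"
fi
```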
| how to verify users are in group |
1,509,615,539,000 |
I would like to setup a paste functionality allowing to paste copied text (like scripts and configuration files) to any kind of application, including virtual guests and remote sessions running graphical clients such as VNC (where a standard copy-paste is not possible).
To achieve this, I associated a shortcut in my desktop manager to the following command:
sh -c 'sleep 1; xdotool type -- "$(xsel -bo)"'
This works... but only for certain apps (and sadly VNC is not part of them, cruel world!).
If I use this to paste the text into a vi running in xterm on the local host, then this works perfectly: the content of the file is preserved and written as expected. This also seem to work flawlessly in gnome-terminal.
If vi runs in xfce4-terminal for instance still on the local host, all carriage returns are mangled.
Similarly, if I try to paste the text into any application (including xterm) through VNC, the text is correctly typed but, here again, all on a single line.
Where it becomes weird, it's that if I attach the following command to another keyboard shortcut:
sh -c 'sleep 1; xdotool key Return'
Here xdotool manages to input the carriage return in any application, so it is technically possible.
I tried to build on this as an ugly workaround to enforce the carriage return:
sh -c 'sleep 1; xsel -bo | { while read -r LINE; do xdotool type -- "$LINE"; xdotool key Return; done; }'
Now carriage returns "work", but this workaround mangles the tabs, and in any case I don't like it because I won't always want the final carriage return (for instance when filling a web form field without immediately submitting it).
I think I have the same problem as this guy, but sadly the thread has no explanation.
Where is the problem? How can I make this work? Or if for some reason this is not possible is there another lightweight alternative for my initial need?
|
For historical reasons, there are two characters that represent line breaks: line feed (commonly represented as LF, \n, \012, Ctrl+J, …) and carriage return (CR, \r, \015, Ctrl+M). Unix uses LF as the line terminator character, but keyboards send CR when you press Return. Some applications recognize a Linefeed key (which exists on some rare keyboards that weren't made for the PC market), but that's rare.
Experimentally, when there's a line break in the string, xdotool sends a Linefeed key. I'm not surprised that some applications don't recognize that. You can make it send Return instead by replacing the newlines with carriage returns.
sleep 1; xdotool type -- "$(xsel -bo | tr \\n \\r | sed s/\\r*\$//)"
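The effect of the tr/sed part can be checked without involving X at all (assuming GNU sed, which understands \r in patterns):

```shell
# Inner newlines become carriage returns; trailing ones are stripped:
converted=$(printf 'line1\nline2\n' | tr '\n' '\r' | sed 's/\r*$//')

# od -c makes the control characters visible: expect a \r between
# the two words and no trailing terminator.
printf '%s' "$converted" | od -c
```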
Your workaround can also be made to work. Set IFS to an empty value, otherwise read strips off the leading and trailing whitespace on each line (that's why the tabs are disappearing). And don't send a Return after the last line. (This isn't strictly equivalent to the command substitution method: with a command substitution, all trailing empty lines are removed; with the following method, only the one final newline, if any, is ignored.)
sleep 1
xsel -bo | {
IFS= read -r LINE;
xdotool type -- "$LINE";
while IFS= read -r LINE; do
xdotool key Return;
xdotool type -- "$LINE";
done;
}
Note: I haven't tried anything in VNC, so your mileage may vary.
| `xdotools type` mangles carriage returns |
1,509,615,539,000 |
How can I find all groups of images which have the same exif timestamp in a given directory from the command line in linux?
|
Well, let's say you are using exiftool and a command like
exiftool -sep $'\t' -T -filename -createdate dir
This prints one line per image in directory dir with the filename and its creation timestamp. I don't know if this is the timestamp you had in mind but you can always change that field.
Pipe output of that command to this awk command
awk 'BEGIN { OFS = "\t" }{ datetime = $2 " " $3 } { files[datetime] = files[datetime] " " $1 } END { for (time in files) print time ":" files[time] }'
...like so...
exiftool -sep $'\t' -T -filename -createdate dir | awk 'BEGIN { OFS = "\t" }{ datetime = $2 " " $3 } { files[datetime] = files[datetime] " " $1 } END { for (time in files) print time ":" files[time] }'
And you'll get output of the form
2016:05:05 00:52:03: IMG_0990.JPG IMG_0962.JPG
2016:05:05 00:51:23: IMG_0965.JPG
2016:05:05 00:48:36: IMG_0956.JPG IMG_0966.JPG IMG_0969.JPG
Note: For the sake of simplicity/sanity I am assuming that the image filenames don't have spaces in them or any other funkiness.
Disclaimer: I'm not an awk expert. There may be more elegant ways to do the same thing.
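The awk grouping can also be tested without exiftool by piping in hypothetical filename/timestamp lines of the same shape:

```shell
# Hypothetical sample of `exiftool -T -filename -createdate` output:
printf '%s\n' \
    'IMG_0990.JPG 2016:05:05 00:52:03' \
    'IMG_0962.JPG 2016:05:05 00:52:03' \
    'IMG_0965.JPG 2016:05:05 00:51:23' |
awk '{ datetime = $2 " " $3 } { files[datetime] = files[datetime] " " $1 }
     END { for (time in files) print time ":" files[time] }'
```

This prints one line per timestamp (the order of a for (time in files) loop is unspecified), grouping IMG_0990.JPG and IMG_0962.JPG together.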
| Find images with same exif timestamp |
1,509,615,539,000 |
I have many separate list files, and from each of them I want the lines that start with "@". I am now using sh list_of_commands to process the files. I am wondering if there is a smarter method to do the job.
1) How could I batch process all the files without writing all the command lines one by one?
2) Any method that names the output files by adding a suffix to the names of input files?
Thank you very much!
|
The following script crawls through all the files in the directory in which it is executed and applies commands to them one by one. Is that what you want?
#! /bin/sh
for file in *
do
if [ -f "$file" ]; then
# process the file with name in "$file" here
fi
done
You can perform the renaming of a file as requested by using the following statement inside the above loop:
mv "$file" "$(printf '%s\n' "$file" | sed -e 's|^\(.*\)\(\.[0-9a-zA-Z]*\)$|\1_su\2|' -e t -e 's|$|_su|')"
It places the suffix "_su" in front of the file extension suffix but at the end of the filename. If the files have no extension then it places the suffix at the end of the filename.
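You can dry-run the renaming expression on plain strings before touching any files; suffixed is just a hypothetical helper name, and the extra t branch (branch to end of script if a substitution was made) handles names without an extension:

```shell
suffixed() {
    printf '%s\n' "$1" |
        sed -e 's|^\(.*\)\(\.[0-9a-zA-Z]*\)$|\1_su\2|' -e t -e 's|$|_su|'
}

suffixed report.txt      # report_su.txt
suffixed archive.tar.gz  # archive.tar_su.gz  (greedy .* keeps only the last dot as extension)
suffixed README          # README_su
```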
| Batch processing many files [closed] |
1,509,615,539,000 |
My teacher lists the following chaining operators:
& && ( ) ; ;; | || new line
How is ;; a chaining operator and what does it do?
|
Your teacher will probably talk later on about case statements in bash, where ;; closes each pattern's list of commands.
As in:
case $space in
[1-6]*)
Message="All is quiet."
;;
[7-8]*)
Message="Start thinking about cleaning out some stuff. There's a partition that is $space % full."
;;
9[1-8])
Message="Better hurry with that new disk... One partition is $space % full."
;;
99)
Message="I'm drowning here! There's a partition at $space %!"
;;
*)
Message="I seem to be running with an nonexistent amount of disk space..."
;;
esac
See TLDP Bash Guide for Beginners - 7.3. Using case statements
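A minimal runnable sketch makes the role of ;; visible: it terminates each pattern's command list, so only the first matching branch runs:

```shell
classify() {
    case "$1" in
        [0-9]) echo "digit" ;;
        [a-z]) echo "lower" ;;
        *)     echo "other" ;;
    esac
}

classify 7    # prints: digit
classify q    # prints: lower
classify %    # prints: other
```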
| Is ';;' a chaining operator in Unix? How does it work? [duplicate] |
1,509,615,539,000 |
I have a program, let us call it xcommand. While it is running it writes nothing to stdout; instead it writes its output to a file.
What I want to achieve is this: after starting xcommand, I want to see its output in real time while it runs, and if I spot something abnormal, I want to turn off the output display and at the same time bring xcommand back to the foreground.
I tried several ways, for example in the bash script I write
xcommand &
tail -f outputfile
fg
However, the problem is that as soon as I press Ctrl+C, the whole thing stops, and fg never runs at all.
So is it possible to achieve what I want?
|
When non-interactive (like in scripts), shells don't do job control. They don't put asynchronous jobs in background. Those remain in the same process group as the rest of the script, so Ctrl-C will cause a SIGINT to be sent to them as well (assuming the script itself is started in foreground).
You can issue a set -m for the shell to do job control when non-interactive:
#! /bin/bash -
set -m
xcommand & # start as a background job, so won't receive SIGINT upon ^C
tail -f outputfile # start in foreground. Will receive SIGINT.
fg # bring the xcommand job in foreground and wait for it. Will receive SIGINT
For that to work, that script has to be started in foreground. Also beware that if you press Ctrl-Z or if the background job tries to read from the terminal, you'll get into trouble.
In my experience, this kind of trick only "works" in bash or yash, generally not other Bourne-like shells.
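One observable effect of set -m in a non-interactive script is that each background job is put into its own process group, which is why a ^C (delivered to the foreground process group) no longer reaches it. A rough check, assuming bash on Linux (field 5 of /proc/PID/stat is the process group ID):

```shell
#!/bin/bash
set -m
sleep 2 &
bgpid=$!
# With job control on, the background job should lead its own group,
# so its PGID equals its PID.
bgpgid=$(cut -d' ' -f5 "/proc/$bgpid/stat")
if [ "$bgpgid" = "$bgpid" ]; then
    echo "background job runs in its own process group"
fi
kill "$bgpid" 2>/dev/null
```

Without set -m the same check shows the background job sharing the script's process group.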
| Is it possible to keep a command sequence running after sending ctrl+c in the intermediate step? |
1,509,615,539,000 |
I need to show the total number of installed packages, but I have only found the command to list them:
ls -l /var/log/packages/
|
Using wc -l will print the total line count. Pipe your ls content into it using
ls /var/log/packages | wc -l
This will give you the total number of packages installed in /var/log/packages.
The reason I left out -l in my command is that ls -l prints a "total" block-count line at the top of the directory listing, which would inflate your final line count.
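The same idiom works on any directory. A self-contained check in a throwaway directory (note that wc -l counts lines, so a filename containing a newline would be counted twice):

```shell
dir=$(mktemp -d)
touch "$dir/aaa-1.0-x86_64-1" "$dir/bbb-2.3-noarch-1" "$dir/ccc-0.1-x86_64-2"
ls "$dir" | wc -l    # prints: 3
rm -r "$dir"
```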
| command to see the total installed packages in Slackware? |
1,509,615,539,000 |
I'm running a Raspberry Pi 3 on Raspbian Jessie Lite and using a flat Mac keyboard with the number pad on the right side. I do not have a GUI.
I went to change my keyboard layout because it was not correct (I couldn't type |). To do this, I installed console-common, which let me select my layout from a list.
sudo apt-get install console-common
I selected mac / Unknown / US american / Standard / Standard. I think I should've chosen extended, but I'm still unsure about this part.
This was the incorrect layout, and my keyboard is now completely bonked: it seems like every key is mapped randomly. Does anyone have a suggestion of how I can revert this, other than manually writing out a new mapping, which is my last resort?
|
To reconfigure the keyboard in Debian, run (as root, or using sudo):
dpkg-reconfigure keyboard-configuration
Link to official Debian documentation here.
If your keys are "random" (I have been there, not fun!), try to find the characters needed to execute the command above.
| Revert setting the keyboard layout |
1,509,615,539,000 |
One way to find the names is to look at /home/ and see which entries exist on the system.
To look at current users one can use
#users
to see how many users are there.
If a single user has spawned many sessions you will get something like -
root@debian:~# users
shirish shirish shirish shirish shirish shirish shirish
Is there any other way to know about users on the system other than the two shared above ?
|
There are several ways. last, who and ps are all relevant here. last is the most thorough for tracking current and past logins.
From the man page for last (emphasis added):
Last will list the sessions of specified users, ttys, and hosts, in
reverse time order. Each line of output contains the user name, the tty
from which the session was conducted, any hostname, the start and stop
times for the session, and the duration of the session. If the session
is still continuing or was cut short by a crash or shutdown, last will so
indicate.
...
If no users, hostnames or terminals are
specified, last prints a record of all logins and logouts.
So rather than only reporting on the sessions currently in progress, last reports on all logins and logouts.
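If you want to enumerate accounts rather than login sessions, you can also query the account database directly; unlike looking in /home, getent passwd covers system accounts and any LDAP/NIS users as well:

```shell
# All account names known to the system:
getent passwd | cut -d: -f1

# Only "regular" users, assuming the common UID >= 1000 convention:
getent passwd | awk -F: '$3 >= 1000 { print $1 }'
```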
| How to find out names and numbers of users on your system? |
1,509,615,539,000 |
Recently, I dumped my memory strings (just because I could) using sudo cat /dev/mem | strings. Upon reviewing this dump, I noticed some very interesting things:
.symtab
.strtab
.shstrtab
.note.gnu.build-id
.rela.text
.rela.init.text
.rela.text.unlikely
.rela.exit.text
.rela__ksymtab
.rela__ksymtab_gpl
.rela__kcrctab
.rela__kcrctab_gpl
.rela.rodata
.rodata.str1.8
.rela__mcount_loc
.rodata.str1.1
.rela__bug_table
.rela.smp_locks
.modinfo
__ksymtab_strings
.rela__tracepoints_ptrs
__tracepoints_strings
__versions
.rela.data
.data.unlikely
.rela__verbose
.rela__jump_table
.rela_ftrace_events
.rela.ref.data
.rela__tracepoints
.rela.gnu.linkonce.t6
These lines all seem to be related in some way: they are all (very) near each other in the memory, they all have similar .<name> prefixes, and they all seem to refer to each other.
What would cause these strings to appear, and why?
|
These look very much like section names from the Linux kernel. The ones prefixed by .rela contain relocation information for the named section, e.g. .rela.text is the relocation information for the text section (where kernel object code is stored).
Other sections of interest are:
.modinfo - kernel module information
.rela.__ksymtab - kernel symbol table relocation table
.rela.data - kernel data section relocation table
rodata.str1.1 - read only data section for strings
etcetera.
Running strings on /dev/mem will just find interesting strings in the system's physical memory; hence you managed to find some strings from the uncompressed Linux kernel image (and its modules) resident in RAM.
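What strings does is mechanical: it prints runs of printable characters (four or more by default) found between arbitrary bytes. A rough approximation with tr and grep, run on a tiny file laid out like an ELF section-name string table, shows why NUL-separated section names surface so cleanly:

```shell
# A tiny file mixing NUL/control bytes with printable runs:
printf '.text\000\001\002.rela.text\000xy\000.modinfo\000' > sample.bin

# Turn every non-printable byte into a newline, keep runs of 4+ chars:
tr -c '[:print:]' '\n' < sample.bin | grep -E '^.{4,}$'
# prints: .text, .rela.text, .modinfo  ("xy" is too short)

rm sample.bin
```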
| What are these memory strings? What do they do? [duplicate] |
1,509,615,539,000 |
Actually I am using
sed -e s/Perro-A//g -i *-a.log
to delete the string Perro-A from many files ending with *-a.log.
But sometimes in the files I might not have Perro-A; I might have strings like Perro-B, Perro-C, Perro-14, Perro-X , Perro-DHFN, etc.
I need to update the previous command to delete any string starting with "Perro". How can I achieve this?
|
Assuming you just want to delete every string starting with "Perro", you can use:
sed 's/Perro[^ ]*//g' *-a.log
If you want to edit the files in place, you can use the -i option with sed, like:
sed -i 's/Perro[^ ]*//g' *-a.log
Update
If you don't want to be left with multiple consecutive spaces, squeeze them with tr (note that -i cannot be combined with this pipe, since sed then writes to the files rather than to stdout):
sed 's/Perro[^ ]*//g' *-a.log | tr -s " "
Sample data:
rahul@rahul: cat a.log
Foo Perro-B Perro-C Bar
Perro-14 cmd Perro-X
Perro-DHFN abc xyz
aBcD Perro-14
rahul@rahul: sed 's/Perro[^ ]*//g' a.log | tr -s " "
Foo Bar
cmd
abc xyz
aBcD
| How to delete a variable string in file |
1,509,615,539,000 |
I am currently having a hard time understanding the following find command:
find / -o -group `id -g` -perm \
-g=w -perm -u=s -o -perm -o=w\
-perm -u=s -o -perm -o=w \
-perm -g=s -ls
Specifically this portion:
find / -o -group `id -g` -perm -g=w -perm -u=s
I understand that -o works just like an OR operator. If that were the case, wouldn't that particular line mean: find all files in /, or files with group write permission and the setuid bit set that belong to the same group as mine? That would still basically mean all files under /. Can someone explain what I am missing?
|
From the find(1) manpage:
The -H, -L and -P options control the treatment of symbolic links.
Command-line arguments following these are taken to be names of files
or directories to be examined, up to the first argument that begins
with -, or the argument ( or !. That argument and any
following arguments are taken to be the expression describing what is
to be searched for. If no paths are given, the current directory is
used. If no expression is given, the expression -print is used (but
you should probably consider using -print0 instead, anyway).
The starting point, / in your case, isn't processed in the same way as expressions. The latter,
-o -group `id -g` -perm \
-g=w -perm -u=s -o -perm -o=w\
-perm -u=s -o -perm -o=w \
-perm -g=s -ls
in your case, are applied to all the files found from the starting point. -o is a binary operator which requires expressions on both sides, so this command actually fails:
find: invalid expression; you have used a binary operator '-o' with nothing before it.
If you remove the first -o, it becomes equivalent to
( -group `id -g` -perm -g=w -perm -u=s )
-o ( -perm -o=w -perm -u=s )
-o ( -perm -o=w -perm -g=s -ls )
which only lists files which are setgid and writable by others. The first two groups of expressions have no action so they are applied but don't have any visible effect.
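The implicit AND between consecutive -perm tests is easy to verify with a small reproducible demo (the /tmp paths and file names are arbitrary):

```shell
# create one file that matches (setgid + world-writable) and one that doesn't
mkdir -p /tmp/permdemo
touch /tmp/permdemo/a /tmp/permdemo/b
chmod 2777 /tmp/permdemo/a    # setgid + rwxrwxrwx
chmod 0644 /tmp/permdemo/b
find /tmp/permdemo -type f -perm -o=w -perm -g=s
```

Only /tmp/permdemo/a is printed, since both -perm tests must hold for the same file.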
| Understanding the following find command |
1,509,615,539,000 |
I would like to know how to find all files in the /etc file beginning with P, which I will then store the results of in a new file.
So far I have
$find /etc -name
Unsure of what comes next.
|
If you want to find all regular files beginning with P, you can use:
find /etc -type f -name 'P*'
If you want to no recurse into subdirectories:
find /etc -maxdepth 1 -type f -name 'P*'
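Since the question also asks to store the results in a new file, redirect find's output. A small self-contained sketch using a throwaway directory instead of /etc:

```shell
# set up a sample directory with one matching and one non-matching file
mkdir -p /tmp/etcdemo
touch /tmp/etcdemo/Pfile /tmp/etcdemo/other
# save the matches into a file
find /tmp/etcdemo -maxdepth 1 -type f -name 'P*' > /tmp/results.txt
cat /tmp/results.txt
```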
| How to find all files in /etc beginning with P |
1,509,615,539,000 |
I am working on a project in which the user can upload videos. Is there any way with FFmpeg I can take some images from the video and create a GIF out of it?
As the project is in Java, I have a way to get an image from a video, but to create a GIF requires multiple images, and it's proving costly.
The server is running a Debian X64 system, so if FFMpeg is not suitable, I am open to other tools on Linux which can do this efficiently.
|
I do scene extracting from videos using vlc for linux. If you don't have it, use
apt-get install vlc
to install it. Once installed, you can use a variant of the following command line to extract frame(s) from your video. The default image format is png and it is good for my purpose. If you insist on gif images, I suggest installing imagemagick for image format conversions. Here is the command that extracts the frames:
cvlc ${videofile} \
--video-filter=scene \
--vout=dummy \
--start-time=${start_sec} --stop-time=${end_sec} \
--scene-ratio=1 --scene-prefix=${prefix} \
--scene-path=${MyStorePath} \
vlc://quit
where
videofile is an mp4 format video. Other formats might be possible but I didn't test them.
start_sec is where you want your frame grab to start, in seconds from the beginning (note that shell variable names cannot contain hyphens, so ${start-sec} would not work as intended)
end_sec is where you want your frame grab to end, in seconds from the beginning. Must be greater than start_sec
prefix is the prefix of the file names for captured images.
MyStorePath is the path where you want to store captured images.
And this command helps you figure out the video length:
ffmpeg -i ${vidfile} 2>&1 | grep Duration | cut -d ' ' -f 4 | sed s/,//
output is in HH:MM:SS.xx format. to convert this into video length in seconds, I use
l=$(ffmpeg -i ${vidfile} 2>&1 | grep Duration | cut -d ' ' -f 4 | sed s/,//)
h=$(echo $l|cut -d: -f1)
m=$(echo $l|cut -d: -f2)
s=$(echo $l|cut -d: -f3|cut -d"." -f1)
(( hh = 10#$h * 3600 ))
(( mm = 10#$m * 60 ))
(( LengthSeconds = hh + mm + 10#$s ))
(The 10# prefix forces base 10, so hours or minutes like "08" are not rejected as invalid octal numbers.)
at this point you can manipulate the LengthSeconds variable to automatically determine start and end times. Unfortunately, for my vlc command to work, you have to specify a time slice to extract frames from.
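The HH:MM:SS.xx parsing above can also be done with parameter expansion alone, avoiding the extra cut processes; a sketch on a fixed sample string (stripping the leading zero avoids octal interpretation in portable shells):

```shell
l="01:02:03.45"             # sample Duration value
h=${l%%:*}                  # "01"
rest=${l#*:}
m=${rest%%:*}               # "02"
s=${rest##*:}; s=${s%%.*}   # "03"
LengthSeconds=$(( ${h#0} * 3600 + ${m#0} * 60 + ${s#0} ))
echo "$LengthSeconds"       # 3723
```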
| FFMpeg : Converting a video file to a gif with multiple images from video |
1,509,615,539,000 |
I'm new to Linux. I wonder if it is possible to make terminal display directory contents automatically while I typing a command with directory arguments?
For example, if I want to do
cp ./fileA ~/folderA/folderB/folderC/fileA
Sometimes I can't memorise the destination directory correctly, as a result I need to use ls repeatedly to find the correct directory before finally using cp command, which is not convenient.
If I cannot remember what's in folderA beforehand, it'll be nice if the contents in folderA is automatically displayed while I typing:
cp ./fileA ~/folderA
Thanks!
|
The usual action in case you don't remember the name is to press Tab. Most shells (including bash, zsh, ksh) will guess as many characters as they could on the first keystroke, then display a list of matching files and directories on the second.
For example, if you have dir1, dir2 and dir3 in your home directory, then typing cp file ~/d and hitting Tab twice would produce
dir1 dir2 dir3
$ cp file ~/dir
Here, your shell could guess from the letter "d" you have typed that you want one of the three directories mentioned above, and filled the common part ("dir") in your command for you. All you have to do is to type "1", "2" or "3", and hit Enter.
Tab can be used multiple times while typing the same command. If your target directory is buried deep in the directory tree, or if there are many files/directories to choose from, it is convenient to type a few characters, hit Tab, check how much the shell could guess, type a few more, hit Tab again, etc. Thanks @EightBitTony for the remark.
Note that command line completion using Tab also works with command names. cp is short enough to type entirely, but if you need something longer like wpa_supplicant then typing wpa_s and hitting Tab will save you a fair amount of keystrokes. Personally, I use zsh which can be configured to complete command line options, e.g. typing service sshd r and hitting Tab is automatically expanded to service sshd restart.
| Is it possible to display directory contents automatically while typing command? |
1,509,615,539,000 |
Currently my computer is not allowing me to start it up properly (guessing my hard drive is malfunctional), but I've figured out that Ctrl+Alt+(F1,F2,F3,F4,F5,F6,OR F7) allows me to access command-line. So I was hoping that being able to do that would allow me to get all the necessary information from a specific file. If there is a way, and you know it, please let me know here. I'll try anything and post the results.
|
Depending on the user you are using, you may need to type su - or sudo first to get root privileges.
Type lsblk:
> lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 111,8G 0 disk
├─sda1 8:1 0 1020K 0 part
├─sda2 8:2 0 41G 0 part
├─sda3 8:3 0 11G 0 part
├─sda4 8:4 0 19G 0 part /
├─sda5 8:5 0 33,6G 0 part
├─sda6 8:6 0 2G 0 part
├─sda7 8:7 0 2G 0 part
└─sda8 8:8 0 2G 0 part
Insert an usb drive and type lsblk again.
> lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 111,8G 0 disk
├─sda1 8:1 0 1020K 0 part
├─sda2 8:2 0 41G 0 part
├─sda3 8:3 0 11G 0 part
├─sda4 8:4 0 19G 0 part /
├─sda5 8:5 0 33,6G 0 part
├─sda6 8:6 0 2G 0 part
├─sda7 8:7 0 2G 0 part
└─sda8 8:8 0 2G 0 part
sdb 8:16 0 931,5G 0 disk
├─sdb1 8:17 0 4G 0 part
├─sdb2 8:18 0 4G 0 part
├─sdb3 8:19 0 4G 0 part
└─sdb4 8:20 0 919,5G 0 part
Identify one partition inside the usb drive, on this case /dev/sdb4
Create a directory, mount /dev/sdb4, copy the .odt file, umount /dev/sdb4 and remove the directory:
> mkdir dir
> mount /dev/sdb4 dir
> cp file.odt dir/
> umount dir
> rmdir dir
| How do I transfer (to a usb drive) or read a .odt file in the command-line? |
1,509,615,539,000 |
I am trying to build a one-line command to monitor when my Internet connectivity drops.
I'd like it so that if I run ping www.google.com indefinitely (one ping per second), if the word "timeout" exists in an output line, another command runs, but the ping command continues to run indefinitely.
I'm running this on OS X, so the other command is say fail so I can audibly hear that there is a problem if I'm not looking at the terminal window.
If it can't be a one-line command, then a bash script would be fine.
I tried this:
ping www.google.com | grep timeout ; say fail
but that only executes the say command after the user manually terminates the ping command.
Then I tried this:
if ping www.google.com | grep timeout ; then say fail ; fi
But that never executes the say command.
|
Your problem comes from the ping command that never exits.
You should make a loop that call ping for one test -c 1:
while true ; do if ping -c 1 www.google.com | grep timeout ; then say fail ; fi ; sleep 1 ; done
edit
I wrote a bash while loop; you may need to adapt it to your shell (it's been a long time since I last played with Mac OS X).
I added a short pause (one second) to prevent the loop from consuming too much CPU...
| Execute a command after each output line from ping command |
1,509,615,539,000 |
I am running this command:
ssh -i key user@domainname 'bash -l -c "command"'
The command script calls on another script that asks for input at some point. The problem is that the prompt for the command asking for input comes after I type in the answer to the prompt, which obviously is bad because I don't know what I am responding to. How should I fix this?
|
Try allocating a terminal on the ssh host:
ssh -t -i key user@domainname 'bash -l -c "command"'
| Showing prompt from ssh command |
1,509,615,539,000 |
I have been looking for lesser-known commands that read their standard input in special cases (missing arguments, for example).
I'm thinking "cat" or maybe other command would fit here.
|
There are a few cases that come to mind:
missing arguments,
the special argument "-",
the program detects that the standard input is not a terminal, and
an option (or environment variable) overrides the behavior.
For missing arguments, cat is a useful example. Likewise grep, sed.
The special argument "-" is used in several programs to tell it explicitly to read from the standard input. You can find discussion (with examples) in these:
What's the difference between STDIN and arguments passed to command?
How to read stdin when no arguments are passed?
Pipe, standard input and command line arguments in Bash
For the case where the standard input is not a terminal—offhand, the cases I'm familiar with are less known:
dialog checks on startup if its input is a terminal, and if not, opens the terminal device. This is part of a larger scheme where it can read data from a pipe, e.g., for the gauge widget.
diffstat handles missing arguments by reading its input from the standard input, but in addition, its -v (verbose) option when doing this shows progress, e.g., a "." for each file
piping to vi-like-emacs makes it read the input as a file. vim's comparable feature (implemented later) uses the explicit "-" argument.
For special arguments:
dialog has an option --gauge which reads data from the standard input. Also --input-fd tells it which file descriptor to use as its input for pipes.
lynx has an option -stdin telling it to interpret the standard input as html. Otherwise, it accepts configuration options on the standard input, e.g., using -get_data or -post_data.
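A quick illustration of the special "-" argument next to the missing-argument case:

```shell
printf 'one\ntwo\n' | cat -      # "-" tells cat to read standard input explicitly
printf 'one\ntwo\n' | wc -l      # many filters fall back to stdin with no argument
```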
| Is there any command that read their standard input in special cases? [closed] |
1,509,615,539,000 |
When I run grep -m 1 -Fnxvf file1 file2, for some couple of files I get a different line number than running grep -m 1 -Fnxvf file2 file1 (swapped files).
Why?
I've reduced the files to a minimal example.
file1
Pp: 1 Id pezzo 193 posIn = { x = 132, y = 1432 }
Pp: 1 Id pezzo 193 posIn = { x = 136, y = 1432 }
Pp: 1 Id pezzo 193 posIn = { x = 84, y = 1436 }
Pp: 1 Id pezzo 193 posIn = { x = 88, y = 1436 }
file2
Pp: 1 Id pezzo 193 posIn = { x = 132, y = 1432 }
Pp: 1 Id pezzo 193 posIn = { x = 84, y = 1436 }
Pp: 1 Id pezzo 193 posIn = { x = 88, y = 1436 }
Pp: 1 Id pezzo 193 posIn = { x = 92, y = 1436 }
Results I get:
$ grep -m 1 -Fnaxvf file2 file1
2:Pp: 1 Id pezzo 193 posIn = { x = 136, y = 1432 }
$ grep -m 1 -Fnaxvf file1 file2
4:Pp: 1 Id pezzo 193 posIn = { x = 92, y = 1436 }
The first result is exactly what I'm expecting, but in the second case I expected to see (and usually it is so), the second line of file2.
Long explanation
I'm trying to find (and show) the first difference between two files. I want to show only the first difference, and the line where it happens.
I've found this answer on SO (have a look at my comment to the answer) and it seems to work, but for some couple of files I've noticed the strange behavior showed above.
|
TLDR: you have no guarantee grep will use your pattern in order.
Suppose you have two files with the following contents (one letter per line; I fold them here for readability)
File 1
A B D E
and
File 2
A B C D
The first excluded (since you use -v) letter from set 2 (A B C D) in file 1 is E.
The first excluded letter from set 1 in file 2 is C.
Comparison of files is usually done with:
cmp file1 file2 for binary files, when you don't care about the actual differences (you can even use cmp -s (silent))
diff file1 file2 which shows pseudo ed commands to go from file1 to file2 (diff file2 file1 is quite symmetric)
comm -12 file1 file2 to show only the lines common to both sorted files (each flag suppresses a column: -1 suppresses lines unique to file1, -2 lines unique to file2, -3 the common lines)
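The asymmetry is easy to reproduce with the letter files above (the /tmp paths are arbitrary):

```shell
printf 'A\nB\nD\nE\n' > /tmp/f1   # file 1
printf 'A\nB\nC\nD\n' > /tmp/f2   # file 2
grep -Fxvf /tmp/f2 /tmp/f1        # lines of file 1 not in file 2 -> E
grep -Fxvf /tmp/f1 /tmp/f2        # lines of file 2 not in file 1 -> C
comm -12 /tmp/f1 /tmp/f2          # lines common to both (input must be sorted) -> A B D
```

Swapping the files swaps which set is the pattern and which is searched, so the "first difference" is genuinely a different line in each direction.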
| Why grep shows different results when I use file1 as a pattern on file2 and viceversa? |
1,509,615,539,000 |
I need to figure it out how many times a particular string shows up in column 4.
This is my data:
25 48656721 48656734 FAM132B ENSCAFT00000019683 4 0.51
X 53969937 53969950 FAM155B ENSCAFT00000026508 5 0.57
3 42203721 42203906 FAM169B ENSCAFT00000017307 5 0.54
36 28947780 28947831 FAM171B ENSCAFT00000046981 5 0.51
10 45080519 45080773 FAM171B ENSCAFT00000003744 9 -0.53
3 61627122 61627446 FAM193A ENSCAFT00000023571 13 0.64
3 61626373 61626466 FAM193A ENSCAFT00000023571 6 0.51
15 55348822 55349196 FAM193A ENSCAFT00000045012 5 0.52
This is a portion of my data. So, I'd want the output to be:
1 FAM132B
1 FAM155B
1 FAM169B
2 FAM171B
3 FAM193A
And so on - for the rest of my data. What's a command that would work?
|
One simplistic solution would be to use awk to pull column 4; sort to bring identical values together (uniq only merges adjacent lines); uniq -c to count them; and another sort to put them in order by the second column (the old column 4 data):
awk '{print $4}' < data | sort | uniq -c | sort -k2
On your (updated) sample input, this gives:
1 FAM132B
1 FAM155B
1 FAM169B
2 FAM171B
3 FAM193A
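Because uniq -c only merges adjacent duplicate lines, the sort before it matters whenever identical keys are not contiguous; a minimal sketch (sample file path and rows are made up):

```shell
# FAM193A appears twice but not on adjacent lines
printf '1 2 3 FAM193A\n1 2 3 FAM171B\n1 2 3 FAM193A\n' > /tmp/t
awk '{print $4}' /tmp/t | sort | uniq -c | awk '{print $1, $2}'
# 1 FAM171B
# 2 FAM193A
```

Without the sort, uniq -c would report FAM193A twice with a count of 1 each.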
| Listed Frequency of Different Strings in a Particular Column |
1,509,615,539,000 |
I'm designing a terminal-based application, and I want to implement a --silent flag to suppress noise if they don't want it.
In this application, errors are most commonly logged when the application cannot perform a necessary task, warnings are logged when something couldn't be performed, but the application can still operate.
So, that stated, should a --silent flag suppress warnings and errors, or just warnings? What is the general convention on that?
In ruby, ruby -w0 turns off warnings for your script (information via ruby --help)
But in curl, curl --silent suppresses all output.
|
As you see with curl / ruby, there is no general convention. It greatly depends on your application and what can go wrong with it. It also depends on how it is used. For some applications it makes sense to have --quiet and --really-quiet flags, for some it's just overkill. Also a --really-quiet flag is usually not required technically, as you can throw away all messages with 2>/dev/null. As general guidelines I suggest the following:
Have a meaningful return code. If your application can distinguish different error classes (like user error, external error), have different return codes and document them.
If your application can produce lots of warnings, have a flag to filter only warnings. If your application has different loglevels (like INFO, NOTICE, WARNING, ERROR), have a flag to filter them. (Like: -q, -qq, -qqq.)
If your application is used mostly interactively, suppress warnings but not errors. Especially if the application does not stop after the error.
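A stackable -q flag (one level quieter per occurrence) can be sketched with getopts; the level names and defaults here are illustrative, not a fixed convention:

```shell
# verbosity levels: 2 = info+warnings+errors, 1 = warnings+errors, 0 = errors only
set_verbosity() {
    verbosity=2
    OPTIND=1                      # reset so the function can be called repeatedly
    while getopts q opt "$@"; do
        case $opt in
        q) [ "$verbosity" -gt 0 ] && verbosity=$((verbosity - 1)) ;;
        esac
    done
    echo "$verbosity"
}
set_verbosity           # -> 2
set_verbosity -q        # -> 1
set_verbosity -q -q     # -> 0
```

The application would then compare each message's level against $verbosity before printing it.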
| Is a "--silent" flag supposed to suppress warnings and errors, or just warnings? |
1,509,615,539,000 |
On Mate systems, many programs correspond to (and are sometimes forks of) other programs with more well-known names:
nautilus -> caja
gnome-terminal -> mate-terminal
ark -> engrampa
gedit -> pluma
Sometimes I get lucky and just replace gnome with mate. Sometimes I have to resort to opening a file through a GUI and play games with xprop or rely on the help menu to find which command was actually called. Other times I get lucky on Google and get some random list.
Is there some command line trick I don't know about to get the name of these Mate programs?
|
I'm afraid not. Most Linux distributions provide some set of basic utility programs - graphical text editor, image viewer, file manager, window manager, terminal emulator, and so on. While some of these programs' names can be found out rather easily, if you are checking this on somebody's system and not a fresh install, you can never be sure, since they might have changed default applications.
Well, nevertheless, you might check default applications:
cat /usr/share/applications/defaults.list
I guess that Mate should have something like this too. You could then grep or sed to find out what programs are being used.
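For example, to pull one entry out of such a list with sed (using a fabricated sample file here, since the real defaults.list varies per system):

```shell
# a made-up defaults.list for demonstration
cat > /tmp/defaults.list <<'EOF'
[Default Applications]
inode/directory=caja-folder-handler.desktop
text/plain=pluma.desktop
EOF
# extract the program handling directories (i.e. the file manager)
sed -n 's|^inode/directory=||p' /tmp/defaults.list   # -> caja-folder-handler.desktop
```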
| How to find what the "Mate version" of programs are called |
1,509,615,539,000 |
I executed a command like this: nohup some_command &. Now this command is in the background. I can see it with the command jobs.
Example output:
[1]+ Running nohup some_command &
Is it somehow possible to have a live representation of the status that comes from the jobs command, similar to how top works? So that when nohup some_command & completes, it immediately disappears from the list?
|
If all you want is for job completion notifications to be printed immediately, even if you're typing at a prompt or if some other job is in the foreground, then just run set -o notify.
If you want a foreground command that displays the status of background jobs from the current shell, you can run jobs in a loop. It's easy to do it in full screen:
tput clear
jobs
while sleep 1; do
tput clear
jobs
done
If you want to display the list below the prompt without clearing the screen, save the cursor position at the beginning and restore it on each run:
tput sc
jobs
while sleep 1; do
tput rc
jobs
done
| Live monitoring of background jobs |
1,509,615,539,000 |
For these commands (in both bash and fish):
sudo emerge eix
emerge eix
I get this error:
usage: emerge [-h] [--version] [input [input ...]]
emerge: error: argument input: can't open 'eix': [Errno 2] No such file or directory: 'eix'
Same thing with livestreamer (and "pip install"):
#~/temp> livestreamer http://www.twitch.tv/totalbiscuit
usage: livestreamer [-h] [--version] [input [input ...]]
livestreamer: error: argument input: can't open 'http://www.twitch.tv/totalbiscuit': [Errno 2] No such file or directory: 'http://www.twitch.tv/totalbiscuit'
If a file with the name of the first argument exists, I get the same error for the second argument:
#~/temp> emerge test eix
usage: emerge [-h] [--version] [input [input ...]]
emerge: error: argument input: can't open 'test': [Errno 2] No such file or directory: 'test'
#~/temp> touch test
#~/temp> emerge test eix
usage: emerge [-h] [--version] [input [input ...]]
emerge: error: argument input: can't open 'eix': [Errno 2] No such file or directory: 'eix'
How to reproduce (not really):
Be me, happily coding on a dying keyboard (broken cable, sometimes results in me creating weird files in ~/).
(maybe unrelated) Do sudo pip3 uninstall aiohttp_jinja2 in the process, because I don't need it anymore (wrapper for Jinja2 templating engine for aiohttp.web AsyncIO webserver).
Find out that pip, emerge and livestreamer don't work.
Find a weird empty directory /home/username/~/ (it was an actual directory ~/~/, not a pointer to ~/.), remove it out of frustration with rm -r \~/
Go to sleep after 10 hours of work.
Wake up, tools using Python still don't work after boot, find that ~/~/ directory is there again, remove it again.
Try to change primary Python version to 2.7 from 3.3 (sudo eselect python set 1), doesn't help.
Download https://pypi.python.org/packages/source/a/aiohttp_jinja2/aiohttp_jinja2-0.4.1.tar.gz and install it manually with sudo python3 setup.py install; that doesn't help (probably something still broken in core Python modules, maybe os or configparser, not sure).
Ask a question on http://superuser.com, realize it's too technical and Linux-related, ask here.
iPython is also dead in an interesting way (both ipython and ipython3):
#~> ipython
You are running chardetect interactively. Press CTRL-D twice at the start of a blank line to signal the end of your input. If you want help, run chardetect --help
Any suggestions?
Update: Getting closer.
So /usr/bin/python2.7 /usr/lib/python-exec/python2.7/emerge -av eix works just fine, I think the problem is related to python-exec2 somehow:
#~> file /usr/bin/livestreamer
/usr/bin/livestreamer: symbolic link to ../lib/python-exec/python-exec2
#~> file /usr/bin/emerge
/usr/bin/emerge: symbolic link to ../lib/python-exec/python-exec2
#~> file /usr/bin/pip
/usr/bin/pip: symbolic link to ../lib/python-exec/python-exec2
#~> file /usr/bin/pip3
/usr/bin/pip3: symbolic link to ../lib/python-exec/python-exec2
|
The programs you're having trouble with are all run using the dev-lang/python-exec script wrapper, which appears to have somehow become corrupted.
To attempt to re-install that package, assuming nothing else was severely harmed, you can try (adjust the version number to match your installed packages):
/usr/bin/python2.7 /usr/lib/python-exec/python2.7/emerge -1a dev-lang/python-exec
If your python installation is also broken (or some other critical system package), you should be able to recover by using binary packages. You can download some from Tinderbox.
Depending on how badly the installation is broken, you might have to boot into a Live CD to download the packages and manually mount your filesystems to install the binary packages.
| Python now thinks arguments are files: Broken emerge, pip, livestreamer and most tools using Python |
1,509,615,539,000 |
I'd like to rip the titles from a bunch of word documents. All the CLI tools I've tried for converting .doc to text lose the title... but Abiword's conversion to RTF preserves it, eg:
$ abiword --to=rtf something.doc
gives something.rtf, a text-encoded file that includes the title.
So far so good, but I need only one line of the file, so writing the whole thing to disk seems wasteful. (E.g. if I could get the output to go to stdout, I'd run this with Python's subprocess, capture it and apply a regex to get a list of titles.)
But, unless I'm missing something, Abiword CLI tool doesn't seem to be set up to output to standard out. You can either:
specify output format, giving original file name + new extension, or
specify filename; Abiword infers file type from the extension.
Is there a way to get around this, and just get the output via stdout?
|
There's an example in the abiword man page (fd://1 refers to file descriptor 1, i.e. standard output):
abiword --to=rtf --to-name=fd://1 something.doc
| How to redirect output from file to stdout? |
1,509,615,539,000 |
I am reading up on several commands, some of which are privileged and some that may or may not be installed. My system (Gentoo) will sometimes respond with command not found even when the program is on the system. How do I match the behavior of something like emerge?
Example of behavior I would like:
$ emerge -av mypackage
This action requires superuser access...
What I currently have:
$ lspci
bash: lspci: command not found
$ sudo lspci
00:00.0 Host bridge: ...
I would even prefer a "permission denied" message so I know that I should try to use sudo. Of course I don't want to be experimenting around running as root.
|
The directory containing lspci is likely not in your PATH.
You can find its location using sudo -i which lspci and add the directory to your path.
The likely locations are /sbin or /usr/sbin
To add them you your current PATH, you can run (in a Bourne-based shell) export PATH="$PATH:/usr/sbin:/sbin"
To make the change permanent, add the export command to your .bashrc or .bash_profile (assuming you are using bash as a shell)
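The effect is easy to demonstrate with a throwaway directory (the paths and the mycmd name below are arbitrary):

```shell
# a command that exists on disk but is not yet in PATH
mkdir -p /tmp/bindemo
printf '#!/bin/sh\necho hello\n' > /tmp/bindemo/mycmd
chmod +x /tmp/bindemo/mycmd
mycmd 2>/dev/null || echo 'not found'   # the shell cannot find it yet
PATH="$PATH:/tmp/bindemo"
mycmd                                    # now it runs -> hello
```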
| How do I allow privileged commands to fail but respond? |
1,509,615,539,000 |
I currently have 2 bash command line strings I use to gather data needed for a certain task. I was trying to simplify and have only one command used to gather the data without using a script.
First I cat a file and pipe into a grep command and only display the integer value. I then copy and paste that value into an equation which will always be constant except for the grepped value from the first command.
1st command: cat /proc/zem0 |grep -i isrs|grep -Eo '[0-9]+$'
2nd command: echo $(( (2147483633-"**grep value**")/5184000 ))
I'm stumped as to how I can accomplish this. Any guidance on this would be greatly appreciated!
|
Here it is as one command:
echo $(( (2147483633 - $(grep -i isrs /proc/zem0 | grep -Eo '[0-9]+$') )/5184000 ))
How the simplification was done
First consider this pipeline:
cat /proc/zem0 | grep -i isrs
This can be simplified to:
grep -i isrs /proc/zem0
Thus, the whole of the first command becomes:
grep -i isrs /proc/zem0 | grep -Eo '[0-9]+$'
The last change is to substitute the first command into the second using command substitution: $(...). Thus, we replace:
echo $(( (2147483633-"**grep value**")/5184000 ))
with:
echo $(( (2147483633-$(grep -i isrs /proc/zem0 | grep -Eo '[0-9]+$'))/5184000 ))
One more simplification
If your grep supports perl-style regular expressions, such as GNU grep, then, as suggested by jimmij in the comments, one more simplification is possible:
echo $(( (2147483633-$(grep -Pio 'isrs.*?\K[0-9]+$' /proc/zem0))/5184000 ))
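To sanity-check the combined command, a stand-in file can replace the system-specific /proc/zem0 (the sample numbers are chosen so the result is exactly 1):

```shell
# fake /proc/zem0 contents with a trailing integer on the isrs line
printf 'some ISRS counter 2142299633\n' > /tmp/zem0
echo $(( (2147483633 - $(grep -i isrs /tmp/zem0 | grep -Eo '[0-9]+$')) / 5184000 ))   # -> 1
```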
| Using the output of `grep` as variable in second command |
1,509,615,539,000 |
I stumbled across a useful command line recorder that allows the user to copy the text when watching the 'recording'. I just cannot remember the name of the project. I just remember you can play and pause the video and the user can copy and paste the text in the video.
Anyone know the name of the project?
|
Might it be asciinema, showterm or PLAYTERM/ttyrec? Coincidentally a colleague of mine is right now trying to remember something like this as well.
| Cannot remember the name of a CLI recorder software |
1,509,615,539,000 |
I am using OSX 10.9.5
Whenever I open a terminal, it says something like:
Last login: Wed Jan 21 10:29:13 on ttys002
You have mail.
What mail is it talking about ? If I open the mail app, I have no new mail. This is a laptop, and I have not setup a mail server, so how do I find out why it is reporting I have mail ? And how do I get rid of the message ?
|
Well, you probably have mail. ;)
It talks about your local inbox. Use mail or mutt or from to see your local mails. I'm not sure what mail client is installed per default on OSX, but I would expect to find mail on pretty much any unix system.
OS X, in the end, is just another Unix, and Unix is designed to be a multi-user system, i.e. multiple different people can use the same system at the same time. - In the time Unix was designed it was common to have one big server, with people logging in remotely from terminals.
Therefore they should be able to communicate with each other. - One way is using mail. You can send someone locally a mail by typing mail otheruser and read your own mail by typing mail.
Even if you are the only user on your computer, system daemons might send you some mails to inform you about what is going on. (They can be configured what to send.)
| Why does terminal say I have mail every time I open it? |
1,421,059,115,000 |
I have recently installed sdcv(console version of Stardict offline dictionary) and have installed 5-6 dictionary files as shown here
Whenever i search any word, using sdcv hello or sdcv cat , it gives the meaning of the word from all the dictionary files, on the console which clutters up the screen.
How can I search the word from a specific dictionary file? E.g., if I wish to see cat in the usual English context, how do I search in stardict-oald-2.4.2 (the file for the Oxford English dictionary), and if I wish to search cat in the Linux context, how do I search using the Linux dictionary file alone (stardict-xfardic-gnu-linux-2.4.2)?
|
Use -u option:
-u --use-dict filename
for search use only dictionary with this bookname
To search for a keyword in a specific dictionary create several aliases, for example def for general English, defl for Linux etc. Like this:
$ alias def="sdcv -u WordNet"
$ alias defl="sdcv -u 'GNU/Linux English-English Dictionary'"
Alias usage:
$ def sudo
Found 10 items, similar to sudo.
0)WordNet-->sudor
1)WordNet-->judo
2)WordNet-->kudos
3)WordNet-->ludo
4)WordNet-->Sidon
5)WordNet-->sodom
6)WordNet-->Sudan
7)WordNet-->Sudra
8)WordNet-->suds
9)WordNet-->sudsy
Your choice[-1 to abort]: ^C
$ defl sudo
Found 1 items, similar to sudo.
-->GNU/Linux English-English Dictionary
-->sudo
Provides limited super user privileges to specific users. Sudo is a program designed to allow a sysadmin to give limited root privileges to users and log root activity. The basic philosophy is to give as few privileges as possible but still allow people to get their work done. From Debian 3.0r0 APT http://www.tldp.org/LDP/Linux-Dictionary/
You don't pass a path to the dictionary as an argument to -u but a bookname that is written in the .ifo file. For example:
$ cat /usr/share/stardict/dic/stardict-xfardic-gnu-linux-2.4.2/xfardic-gnu-linux.ifo
StarDict's dict ifo file
version=2.4.2
wordcount=16694
idxfilesize=256945
bookname=GNU/Linux English-English Dictionary
author=Binh Nguyen
website=http://www.xfardic.org
description=Made by Hu Zheng.
date=2007.6.5
sametypesequence=m
| Searching a word from a specific dictionary file in sdcv(console version of Stardict offline dictionary) |
1,421,059,115,000 |
Consider a file with key=value pairs, and each key is optionally a concatenation of multiple keys. In other words, many keys can map to one value. The reason behind this is that each key is a relatively short word compared to the length of the value, hence the data is being 'compressed' into lesser lines.
Illustration (i.e. not the real values):
$ cat testfile
AA,BB,CC=a-lengthy-value
A,B,C=a-very-long-value
D,E,F=another-very-long-value
K1,K2,K3=many-many-more
Z=more-long-value
It is valid to assume that all keys are unique, and will not contain the following characters:
key delimiter: ,
key-value delimiter: =
whitespace character:
keys may come in any form in the future (with the above constraints), they currently adhere to the following regex coincidentally: [[:upper:]]{2}[[:upper:]0-9]. Likewise, values will not contain =, so = can be safely used to split each line. There are no multi-line keys or values, so it is also safe to process line-by-line.
In order to facilitate data extraction from this file, a function getval() is defined as such:
getval() {
sed -n "/^\([^,]*,\)*$1\(,[^=]*\)*=\(.*\)$/{s//\3/p;q}" testfile
}
As such, calling getval A will return the value a-very-long-value, not a-lengthy-value. It should also return nothing for a non-existent key.
Questions:
Is the current definition of getval() robust enough?
Are there alternative ways of performing the data extraction that are possibly shorter/more expressive/more restrictive?
For what it's worth, this script will run with cygwin's bash and coreutils that comes with it. Portability is not required here as a result (i.e. only brownie points will be given). Thanks!
edit:
Corrected function, added clarification about the keys.
edit 2:
Added clarification about the format (no multi-lines) and portability (not a requirement).
|
You can write it in much more readable form using awk:
getval() {
awk -F'=' '$1~/\<'"$1"'\>/{print $2}' testfile
}
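Note that the `\<` and `\>` word-boundary operators used here are a GNU awk extension. A portable variant (a sketch keeping the same getval interface) could split the key field on commas and compare each key exactly, which also avoids surprises if a key ever contains regex metacharacters:

```shell
# portable sketch: exact key matching with any POSIX awk
getval() {
    awk -F'=' -v key="$1" '{
        n = split($1, keys, ",")            # "A,B,C" -> individual keys
        for (i = 1; i <= n; i++)
            if (keys[i] == key) { print $2; exit }
    }' testfile
}
```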
| Extracting values from a file keyed by multiple keys |
1,421,059,115,000 |
When I execute
echo `man ls` > tem.txt
I get unformatted output in the text file, I mean output without any new lines, just continues sentences. How do I get formatted output ?
For example, unformatted output looks like:
LS(1) User Commands LS(1) NAME ls - list directory contents SYNOPSIS ls
[OPTION]... [FILE]... DESCRIPTION List information about the FILEs (the
current directory by default). Sort entries alphabetically if none of -
cftuvSUX nor --sort is speci‐ fied. Mandatory arguments to long options are
mandatory for short options too. -a.................
|
You don't need to force man's output via process substitution. Redirection works fine for it:
man ls > tem.txt
Even if you so use process substitution, remember to use quotes, otherwise the output will undergo splitting + globbing from the shell:
echo "$(man ls)" > tem.txt
| how to `echo` 'formatted' man page of some command to text file [duplicate] |
1,421,059,115,000 |
I have a script, run.sh, that looks like this:
#!/bin/bash
FILES=$(find corpus/ -type f)
for i in $FILES
do
./individual.sh $i
done
It runs without problem.
I want to do away with the run script by piping each file from find to ./individual.
I would think that I could just do:
find corpus/ -type f | ./individual.sh
But in trying that it just pipes in a file with file name "" (leading to an error).
What is the proper syntax for this?
|
You'll want to use find's -exec option:
find corpus/ -type f -exec ./individual.sh {} \;
For each match that find finds, it'll execute individual.sh, replacing {} with the name of the file it found. \; is how you end an exec with find.
The reason your pipe doesn't work is that the output from find is being provided to individual.sh via STDIN, not as an argument. Your script would need to read file names from STDIN itself to make use of this.
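Alternatively, if you do want to feed find's output through a pipe, a while-read loop turns each line back into an argument — a sketch that assumes file names contain no newlines (the first lines just fabricate a tiny corpus and stub script for illustration):

```shell
# hypothetical setup for the demo
mkdir -p corpus && touch corpus/a corpus/b
printf '#!/bin/sh\necho "processing $1"\n' > individual.sh && chmod +x individual.sh

# the actual pipe: one line of find output -> one invocation
find corpus/ -type f | while IFS= read -r file; do
    ./individual.sh "$file"
done
```

For arbitrary file names (spaces, newlines), `-exec` as above, or `find corpus/ -type f -print0 | xargs -0 -n1 ./individual.sh`, is safer.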
| Confused about piping commands from find to commandX? |
1,421,059,115,000 |
I am using cpufreq to scale my CPU frequency. But I do that by clicking cpufreq icon on the panel of Ubuntu 12.04.
If without a mouse, how can I show and scale CPU frequency by running commands in terminal?
|
cpufreq-info - Utility to retrieve cpufreq kernel information. It will list available frequency steps, available governors, current policy etc.
cpufreq-set - A tool which allows to modify cpufreq settings (try e.g. cpufreq-set -g performance or cpufreq-set -f 2 GHz once you know what frequencies your CPU can be set to)
You can also retrieve information about you cpufreq state directly from /sys/devices/system/cpu/cpu*/cpufreq directory. For example available frequencies are stored in /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_frequencies.
| Scale cpu frequency in CLI? |
1,421,059,115,000 |
How to display contents of mounted /boot and '/' root partitions of Debian on SSD drive from Linux Live CD? I know ls -1 to list directory contents, but what is exact steps to get this?
|
Mounting a HDD
To mount a HDD that's physically connected to your system, you first need to identify the device handle that's been assigned to it. I typically use the command line tools blkid or lsblk to find out this information.
blkid
$ sudo blkid
/dev/sda1: UUID="XXXXXX" TYPE="ext4"
/dev/sda2: UUID="XXXXXX" TYPE="LVM2_member"
/dev/mapper/fedora_greeneggs-swap: UUID="XXXXXX" TYPE="swap"
/dev/mapper/fedora_greeneggs-root: UUID="XXXXXX" TYPE="ext4"
/dev/mapper/fedora_greeneggs-home: UUID="XXXXXX" TYPE="ext4"
lsblk
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 465.8G 0 disk
├─sda1 8:1 0 500M 0 part /boot
└─sda2 8:2 0 465.3G 0 part
├─fedora_greeneggs-swap 253:0 0 7.7G 0 lvm [SWAP]
├─fedora_greeneggs-root 253:1 0 50G 0 lvm /
└─fedora_greeneggs-home 253:2 0 407.6G 0 lvm /home
sr0 11:0 1 1024M 0 rom
So we can see from the above that I've got a ext4 partition on /dev/sda1, and a LVM partition on /dev/sda2. Since you're interested in your /boot device, that's typically formatted as a ext4 partition, so to mount it:
$ sudo mount -r /dev/sda1 /mnt
And it should be accessible to you under /mnt as a read only directory.
Mounting an ISO
If on the other hand you'd like to mount an ISO, you can do so, by using the mount command, along with the loop option.
$ sudo mount -o loop <some.iso> <mount point>
Example
$ sudo mount -o loop VBoxGuestAdditions_4.3.10.iso /mnt/
mount: /dev/loop0 is write-protected, mounting read-only
And you can now see the ISO's contents:
$ ls -l /mnt/
total 57016
dr-xr-xr-x. 2 root root 2048 Mar 26 14:04 32Bit
dr-xr-xr-x. 2 root root 2048 Mar 26 14:04 64Bit
-r-xr-xr-x. 1 root root 647 Oct 8 2013 AUTORUN.INF
-r-xr-xr-x. 1 root root 6966 Mar 26 13:56 autorun.sh
dr-xr-xr-x. 2 root root 2048 Mar 26 14:04 cert
dr-xr-xr-x. 2 root root 2048 Mar 26 14:04 OS2
-r-xr-xr-x. 1 root root 5523 Mar 26 13:56 runasroot.sh
-r-xr-xr-x. 1 root root 9901516 Mar 26 14:01 VBoxLinuxAdditions.run
-r-xr-xr-x. 1 root root 20784640 Mar 26 14:14 VBoxSolarisAdditions.pkg
-r-xr-xr-x. 1 root root 16900432 Mar 26 13:55 VBoxWindowsAdditions-amd64.exe
-r-xr-xr-x. 1 root root 311584 Mar 26 13:46 VBoxWindowsAdditions.exe
-r-xr-xr-x. 1 root root 10463320 Mar 26 13:47 VBoxWindowsAdditions-x86.exe
| How to display contents of mounted /boot and '/' root partitions? |
1,421,059,115,000 |
When I list processes with ps auxf I often see some that are stuck and I need to manually kill them. How can I do it with one command?
Example ps result:
$ ps auxf
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
tommass 7971 62.3 1.1 316428 45844 ? R Aug08 29133:14 xxxxxxxx
tommass 7978 0.0 2.6 455072 105964 ? S Aug08 8:56 xxxxxxxx
tommass 7979 0.0 2.6 454436 105360 ? S Aug08 8:57 xxxxxxxx
tommass 15034 67.8 1.1 51828 43760 ? R Aug14 26411:38 xxxxxxxx
tommass 7982 0.0 2.6 455012 105904 ? S Aug08 8:28 xxxxxxxx
How can I kill all processes for a given user "tommass" that take longer then 1 hour
How can I kill all processes for a given user "tommass" whose STAT is "R"
|
to answer 1), start with
ps -u tommass -o pid,time
(depending on your context, you may wish to select time (CPU time) or etime (elapsed time))
to answer 2), try
ps -u tommass -o state,pid | awk '$1 == "R" { printf "kill %d\n",$2 ;}' | ksh
Do you really want to kill running processes?
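For part 1, if ps is the procps version found on most Linux systems, its etimes field reports elapsed time in seconds, which makes the one-hour comparison trivial. A sketch — review the PID list before piping it to kill:

```shell
# list PIDs of a user's processes that have run longer than a limit in seconds
# (etimes is a procps extension; not available on every Unix)
long_running_pids() {
    ps -u "$1" -o pid=,etimes= | awk -v limit="$2" '$2 > limit { print $1 }'
}
# e.g.: long_running_pids tommass 3600 | xargs -r kill
```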
| How to kill all processes for a given user that take longer then X time |
1,421,059,115,000 |
I am interested to know if a single command line that would allow me to recursively copy a folder to all of our NGINX Virtual Host htdocs folders:
I need to copy that folder to all hosts located in vhosts :
/var/www/vhosts/*/htdocs/
|
With all due respect, I don't think the above code/answer is correct.
if [ -d dir] is probably an attempt at if [[ -d "$dir" ]].
The following code should work and do what you want.
vhostdirs=( /var/www/vhosts/* )
for dir in "${vhostdirs[@]}"
do
    cp -r "folder_to_be_copied" "$dir/htdocs/"
done
| Copy a folder and its content to all Nginx vhosts host |
1,421,059,115,000 |
How do I install software using yum?
Can you tell me another way to install package in Fedora?
|
sudo yum install foo will look for foo in the package repositories and install it if it exists. Sometimes the name of packages is not obvious, so you may want to use yum search foo to see if there are any packages available pertaining to "foo". man yum will give you some details about the packaging program.
| How to install package via yum in Fedora |
1,421,059,115,000 |
What I'm trying to do
I'm using a Logitech MX Revolution mouse, which has a button below the scroll wheel mapped to search, which I remapped to middle click. For doing so, I had to remap XF86Search to middle click using xbindkeys, which works fine – when I change the "Search" hotkey to something else, for example Ctrl+XF86Search in the Gnome Settings.
Now I want another mouse button to invoke the Gnome Activity screen (the one with the overview of open windows). Alt+F1 also opens this view (or Ctrl+XF86Search now would do it, and even just pressing the super key).
Invoking the Gnome Activity screen
I try to send Alt+F1 using
/usr/bin/xvkbd -text "\[Alt_l]\[F1]"
but it seems Gnome 3 does not fetch this key (which is not totally unexpected, as xvkbd -text sends it to the focussed window).
What choices do I have do invoke the Gnome Activity screen?
|
I found this AskUbuntu Q&A titled: Bind a mouse button to show the Gnome Shell Activities overview. The OP from that Q&A posted that this solution worked for him/her using xbindkeys:
"xte 'keydown Alt_L' 'key F1' 'keyup Alt_L'"
release + b:10
There were other suggestions in that Q&A as well, so if the accepted answer doesn't work, then perhaps one of the others would suit your needs.
| How to invoke Gnome 3 activity screen via mouse button? |
1,421,059,115,000 |
I need to replace all unwanted spaces in all files from the current directory and from directories from the current directory (recursive search).
I define the unwanted spaces the spaces and tabs that are at the end of the line and are not followed by any other character than \n (the new line character).
e.g.:
This is a line with unwanted spaces
// this line is supposed to contain only spaces
Another line without unwanted spaces
Another line with unwanted spaces
This snippet will become:
This is a line with unwanted spaces
Another line without unwanted spaces
Another line with unwanted spaces
How can I remove all unwanted spaces from all files (recursive search) from the current directory?
|
Something like the following should suit your needs:
find -type f -exec sed -i 's/[[:blank:]]\+$//' {} \;
Note that sed's -i option is not safe with symlinks -- it will break them. If this is a problem, consider using ed or ex.
| Replace spaces and tabs at the end of the line in all files |
1,421,059,115,000 |
What does ; mean in single line scripts like this:
while true; do sudo -n true; sleep 60; kill -0 '$$' || exit; done 2>/dev/null &
Does it mean new line, or "next command"?
|
It's a separator of commands. Though in the first instance, it might be better to think of it as ending the while statement.
For example, if you wanted to do a loop while some command returns success, you would do something like
while test -f /foo; do some_command; done
The semicolon is used to indicate the end of the arguments to test. Otherwise it would think that do is another argument to test.
However you can use newlines instead of the semicolon. The following would be exactly equivalent to the example above, just without any semicolons
while test -f /foo
do
some_command
done
In fact with bash, if you run the above command, and then after it finishes (or you CTRL+C it), if you go back in history (up arrow keys or whatnot), you'll notice it replaces the multi-line command with one using semicolons instead.
So yes, the syntax for things like if and while break normal shell behavior.
I've personally always thought the syntax is weird, as the do looks strange. But you get used to it.
| What is the use of ; in a single line command? |
1,421,059,115,000 |
Is there a command that can tell me which my graphic card is and its pixel-depth? I am running vncserver and I would like to learn which is the best parameter for pixeldepth (-depth).
|
xdpyinfo gives you this information. A display can support multiple depths.
xdpyinfo | awk '$1=="depth" && sub(/,$/, "", $2) {print $2}'
If your display consists of multiple screens, they may support multiple depths.
xdpyinfo | awk '
$1=="screen" {screen = substr($2, 2, length($2)-2)}
$1=="depths" {$1=$2=""; print screen, $0}
'
You won't get any benefit from telling VNC to use a larger value, and some applications might misbehave if VNC isn't using a value in the list.
| How do I find out the pixel depth of my graphic card? |
1,421,059,115,000 |
Is there a command where, when entered outputs information such as:
If the network connection is wired or wireless
if it is a wireless network, the name of the wireless network
How strong the signal is
|
Just type in terminal 'iw' and then press Tab and you will see something like
iw
iw iwconfig iwevent iwgetid iwlist iwpriv iwspy
all those are related to wireless internet; try iwconfig to show statistics about the signal and network interface.
| Get network information through Ubuntu Terminal |
1,421,059,115,000 |
I have many HTML files containing mixed unicode strings like \303\243 and printable characters like %s. What I'd like to do is converting the unicode strings into printable characters in a safe way.
I found that printf from GNU coreutils converts them automatically, but I also learned the hard way some time ago that printf is not trustworthy if you don't know what is going to be printed:
$ env printf "$(cat filename.htm)"
printf: %20M: invalid conversion specification
I also know that bash can do conversions like this, but I'd have to loop through the file with a regex, which is not safe at all:
$ echo $'\303\243'
ã
Is there such a command-line tool that can process these files in a clean and safe way?
|
If the files don't have other backslashes:
$ printf %b\\n 'aa\303\243'
aaã
If they do, you could double backslashes that aren't followed by integers:
$ printf %b\\n "$(sed -E 's/\\/\\\\/g;s/\\(\\[0-7])/\1/g' <<< '\\a\na\303\243')"
\\a\naã
| Safely convert unicode strings to printable characters |
1,421,059,115,000 |
I have this command
$ cut -f2,3 AIS2F1 | grep [2-9][0-9]* | cut -f1
So my second and third fields are something like this
Ben 434
Me 12
you 56
So, I thought that the logic should be to cut the second and third field then grep numbers that are bigger than 20 and then cut the first field. That should give me the name that is on the same line with number that is bigger than 20, but it gives me this output:
Ben
Me
you
instead of
Ben
you
which is what I want, how can I fix this command?
|
The * in grep means zero or more occurrences of the preceding item will be matched. Thus, your grep command matches every line containing a digit in [2-9]. Replace the * with a \+, which means: match one or more occurrences (and quote the pattern so the shell cannot expand it as a glob): grep '[2-9][0-9]\+'.
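For an exact numeric comparison rather than a character-class pattern, awk can test the third field directly; since it compares numerically, 20 itself is excluded, matching "bigger than 20". A sketch with fabricated tab-separated sample data standing in for AIS2F1:

```shell
# fabricated 3-column sample: id, name, value
printf 'x\tBen\t434\nx\tMe\t12\nx\tyou\t56\n' > AIS2F1
awk '$3 > 20 { print $2 }' AIS2F1
```

With this sample it prints Ben and you.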
| Grepping number in a file |
1,421,059,115,000 |
I am looking for a tool which takes a file in input and a word to search. It should display the file with color the words if it corresponds to the search.
Like grep --colors but displays all the file.
Is there something already exists ?
Example : cat /etc/passwd | colors root
Display all /etc/passwd file and color the words "root"
If I can change the color easily it would be great !
|
A little trick with grep will do the job:
grep --color "^\|root" /etc/passwd
Otherwise look here.
| Display words in color |
1,421,059,115,000 |
How can I run the most recent command again from history in AIX Server? And how to edit the most recent command and run it again in AIX?
|
What shell are you using?
If ksh:
r will run the previous command.
If bash:
Ctrl-P, the up arrow, or !! will re-run it.
To edit the command, try using fc - it will use the $EDITOR env variable and open up the editor. For example if it's vi then it'll open vi with the command and when you save and exit (ZZ or :wq) it'll run it.
| How to run the most recent command on AIX? |
1,421,059,115,000 |
There is a Linux program I like which is Conky, when I run conky -c mycustomconf.txt it runs fine.
I want this program to run automatically when I start my computer, without having to type in the command again to start it.
How can I do this?
I am using Ubuntu with Xfce4.
|
You can add programs that you wish to start up alongside Xfce to your startup items using xfce4-autostart-editor, which is accessible at Settings, and then "Xfce 4 Autostarted Applications".
| Running a program automatically? |
1,421,059,115,000 |
I'm trying to set up my Arch Linux so that, through non-graphical means, I can connect to a wireless network through profiles on my system.
I have tried net-auto-wireless, and it does everything I have specified, but I want it to be able to connect once it has the ability. For instance, if the first attempt was unsuccessful or if a network becomes within range after the daemon has been started.
Is there a way to do this easily? Is there something with netcfg, net-profiles or similar that I missed?
EDIT:
I read here [ https://bbs.archlinux.org/viewtopic.php?id=110253 ] that netcfg operates in this non-reconnecting way by design (it is said in post #3 that it is in the ArchLinux wiki page for netcfg, but I could not find anything saying this).
If this is the case, is there any way I can seamlessly reconnect? Perhaps through CLI means other than netcfg?
Also, I would rather not use NetworkManager, because the manual for nm-cli (NM's CLI counterpart) stated that polkit-gnome is required to query for non-existing connection credentials, and I just would like a universally-applicable solution (One that will work on an ArchLinux setup that may not have a graphical setup, or headless Linux distributions in general)
|
There is a helpful comparison of wireless management methods on the Arch Wiki.
If you are looking for a combination of automation, ie., you do not want to manually issue commands every time you connect to a network, and are looking for a lightweight solution that can be run both in X and in a TTY, then wicd-curses fulfills the criteria.
It has few dependencies and is also able to manage your wired connection.
For an even simpler approach, there is also a Bash Wifi Connector script that will provide the base functionality with no additional dependencies.1
1. Read the thread on the Arch boards.
| How to Connect to Wireless Automatically? (Non-graphical) |
1,421,059,115,000 |
Lets say in file.php, there is lots of php print text: <?php print t('Blabla'); ?>, <?php print t('Text Here'); ?>, etc.
What I need is to remove <?php print t(' and '); ?> of the php print text.
So, <?php print t('Blabla'); ?> will become Blabla, <?php print t('Text Here'); ?> will become Text Here, etc.
If one php print text in one line, I think I know how to use sed to replace, but how about if one long line contains several php print text
I just wonder how to replace it?
|
I suppose your intention is to remove an old internationalization system from your PHP scripts.
perl -e 'undef$/;$s=<>;$s=~s/<\?php\s+(?:print|echo)\s+t\((['"'"'"])(.*?)\1\);\s+\?>/$2/gs;print$s' file.php
This has some improvements not asked in the question:
Works for both print or echo.
Works for both single and double quotes.
Allows the <?php .. ?> tags to be in separate lines.
Allows t()'s parameter to span over several lines.
But there are still enough situations in which it will fail.
| Replace "<?php print t('Blabla'); ?>" to be "Blabla" |
1,421,059,115,000 |
I have an XML file and I would like to replace everything that is between the open and closing tag within multiple instances of the g:gtin node with nothing.
Is this possible from the command line, using sed or something similar?
<g:gtin>31806831001</g:gtin>
|
A simple solution for simple cases - see my comment:
echo "<g:gtin>31806831001</g:gtin>" | sed 's|<g:gtin>.*</g:gtin>|<g:gtin></g:gtin>|'
Result:
<g:gtin></g:gtin>
It depends on the assumption that start and endtag are on the same line, and not more than one tag is on that line.
Since xml files are often generated the same way, over and over again, the assumption might hold.
| regex replace text in xml file within node from the command line |
1,421,059,115,000 |
I'm using a wonderful program called ExifTool to recursively rename a large batch of files.
Here is example usage:
$ exiftool -r -ext JPG '-FileName<CreateDate' -d %Y%m%d_%H%M%S.jpg .
Error: './folder1/110310_135433.jpg' already exists - ./folder1/source.jpg
Warning: No writable tags found - ./folder2/110404_095111.jpg
68 directories scanned
1650 image files updated
5 image files unchanged
2 files weren't updated due to errors
When processing very large batches of images, the number of files not updated due to errors is often in the hundreds, therefore moving each file individually with mv is out of the question.
I'd like to simultaneously move the files with errors/warnings to a separate directory for further processing.
I need to extract the paths of the problem files from the terminal output and move them together, but I'm not sure how to achieve this.
How would I go about this?
For what it's worth, I am using Ubuntu 11.10.
|
This will extract the filenames from the errors/warnings of exiftool and create a replica directory tree under the folder `unprocessed' with only those files. Didn't try to just move them in a single directory to avoid the risk of overwriting files with the same name but different source dirs.
exiftool ... 2>&1 | tee exiftool.log | egrep '^(Error|Warning)' | \
sed 's/^Error: .* already exists - //;s/^Warning: .* - //' | \
while IFS= read -r img; do
rsync -vR -- "$img" unprocessed/
#rm -v -- "$img"
done
The sed part takes into account only the output of the example provided, I'm not familiar with the tool and its possible output messages.
EDIT: The rm part is commented out cause you better try this out first.
| Simultaneously move long list files to new location |
1,421,059,115,000 |
I've been trying to find a decent way to monitor elapsed time in the shell. Basically, I'm asking for a stopwatch. The end result is easy enough to accomplish; I've seen examples of using time read, and then hitting enter when you're done, but that method makes the current terminal unusable.
Ideally, I'd like something like:
timer start
(do some work, vim, python, etc.)
timer stop
It also would be nice if there was a way to pause the timer. For reference, I was planning on using it to time how long it takes to finish programming assignments from MIT's OCW. As such, since the assignments might be completed in more than one sitting, a way to pause and continue the timer would be beneficial.
Is there anything like this?
|
You may try a shell script (call it timer.sh) like this:
#!/bin/sh
[ -f timer.val ] && . ./timer.val
case "$1" in
start)
    TIMERSTART=$(date +%s)
    echo "TIMERSTART=$TIMERSTART" > timer.val
    echo "started at $TIMERSTART"
    ;;
stop)
    TIMEREND=$(date +%s)
    RESULT=$((TIMEREND - TIMERSTART))
    echo "$RESULT seconds elapsed"
    ;;
*)
    echo "no command"
esac
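Since the question also asks for pausing, here is a sketch of a pausable variant; the state file timer.val holds two numbers — the total elapsed seconds so far, and the epoch second of the last (re)start, or 0 while paused:

```shell
# pausable stopwatch; state file: "total_elapsed running_since"
timer() {
    statefile=timer.val
    { read -r elapsed since < "$statefile"; } 2>/dev/null || { elapsed=0; since=0; }
    now=$(date +%s)
    if [ "$since" -ne 0 ]; then
        # fold the currently running interval into the total
        elapsed=$((elapsed + now - since))
    fi
    case "$1" in
        start|resume) echo "$elapsed $now" > "$statefile" ;;
        pause)        echo "$elapsed 0"    > "$statefile" ;;
        stop)         echo "$elapsed seconds"; rm -f "$statefile" ;;
    esac
}
# usage: timer start; ...; timer pause; ...; timer resume; ...; timer stop
```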
| Command to monitor elapsed time in background? |
1,421,059,115,000 |
I was wanting to know how to remove newline characters in paragraphs in this book and others for use in a kindle. The desired effect is to have each block that is separated by a blank line to be turned into continuous lines of text. I got the job done for this book with a series of complicated vim substitute commands but I'd rather try and find a better way to get this done for the future.
My hope was to get a vim, perl, sed or awk script I could use for this but I'm open to whatever you people have in mind.
The solution has been found but here is an example input output for those googling this in the future.
Input with newline characters:
Letter 1
_To Mrs. Saville, England._
St. Petersburgh, Dec. 11th, 17—.
You will rejoice to hear that no disaster has accompanied the
commencement of an enterprise which you have regarded with such evil
forebodings. I arrived here yesterday, and my first task is to assure
my dear sister of my welfare and increasing confidence in the success
of my undertaking.
I am already far north of London, and as I walk in the streets of
Petersburgh, I feel a cold northern breeze play upon my cheeks, which
braces my nerves and fills me with delight. Do you understand this
feeling? This breeze, which has travelled from the regions towards
which I am advancing, gives me a foretaste of those icy climes.
Inspirited by this wind of promise, my daydreams become more fervent
and vivid. I try in vain to be persuaded that the pole is the seat of
frost and desolation; it ever presents itself to my imagination as the
region of beauty and delight. There, Margaret, the sun is for ever
visible, its broad disk just skirting the horizon and diffusing a...
Output without newlines in paragraphs:
_To Mrs. Saville, England._
St. Petersburgh, Dec. 11th, 17--.
You will rejoice to hear that no disaster has accompanied the commencement of an enterprise which you have regarded with such evil forebodings. I arrived here yesterday; and my first task is to assure my dear sister of my welfare, and increasing confidence in the success of my undertaking.
I am already far north of London; and as I walk in the streets of Petersburgh, I feel a cold northern breeze play upon my cheeks, which braces my nerves, and fills me with delight. Do you understand this feeling? This breeze, which has travelled from the regions towards which I am advancing, gives me a foretaste of those icy climes. Inspirited by this wind of promise, my day dreams become more fervent and vivid. I try in vain to be persuaded that the pole is the seat of frost and desolation; it ever presents itself to my imagination as the region of beauty and delight. There, Margaret, the sun is for ever visible; its broad disk just skirting the horizon, and diffusing a...
Now the vim commands I used initially for the curious:
ggVG:norm A<space> -- adds a space to the end of each line
:%s/\v^\s*$/<++> -- swaps all blank lines with a unique temporary string
ggVGgJ -- joins all lines without adding a space
:%s/<++>/\r\r/g -- replaces all occurrences of my unique string with two newline characters
|
If the paragraphs already separated by two or more newlines and you only want to remove the newlines inside each paragraph (or better yet, replace newlines with a space), then:
perl -00 -lpe 's/\n/ /g' pg42324.txt > pg42324-new.txt
-00 tells perl to read & process the input one paragraph at a time (a paragraph boundary is two or more newlines)
-l turns on perl's automatic processing of line-endings (or, in this case, paragraph-endings)
-p makes perl run similarly to sed - i.e. read and print the input after any modifications by the script.
-e tells perl that the next argument is the script to run
For more details on these options, see man perlrun.
Or, to do an in-place edit (with the original backed up with a .bak extension):
perl -i.bak -00 -lpe 's/\n/ /g' pg42324.txt
If there are leading or trailing spaces on any lines within paragraphs, you may want to replace multiple spaces with a single space - add ; s/ +/ /g to the perl script:
perl -00 -lpe 's/\n/ /g; s/ +/ /g'
IMO, though, you're probably better off just treating the entire file as if it's markdown (maybe even going so far as to add markdown formatting for bold, italics, chapter headers, etc) and using pandoc or something to convert it from markdown to epub. After all, markdown is just plain text with optional formatting characters in it. e.g.
pandoc pg42324.txt -o pg42324.epub
A minimal edit would be to just open the file in vim (or whatever) and make sure that there's a blank line between each paragraph.
BTW, Creating an ebook with pandoc is a short but good general introduction to creating .epub books from text or markdown files.
Or, even better, just download the .epub or .mobi version of the book rather than the plain text version - Project Gutenberg provides books in multiple formats.
There are links to download Mary Shelley's Frankenstein in various formats at:
https://www.gutenberg.org/ebooks/42324
| How to remove newline characters inside paragraphs with sed or awk |
1,421,059,115,000 |
What flags do I need to add to
$ curl -o a URI1 -o b URI2 -o c URI3
to make it say
Getting URI1
Getting URI2
Getting URI3
sort of like wget?
No I don't want to need to pipe the output of --verbose etc. through grep, awk, perl, etc. (Yes, --silent gets rid of the timing info. That gets us a little closer to our desired result.)
|
The closest you'll get with only curl seems to be the -w flag:
-w, --write-out Use output FORMAT after completion
$ curl --silent --show-error -w "Download of %{url} finished\n" -o a URI1 -o b URI2 -o c URI3
(the %{url} write-out variable requires curl 7.75.0 or later)
To see all of the output control options you can do:
curl --help verbose
| How to have curl print the URL that it is fetching? |
1,421,059,115,000 |
I want to print the lines of updates via this command
dnf check-update --refresh --q --downloadonly | wc -l
However during the output there occurs a blank line which means the true update number is will be less than 1 from the output of the above command.
How can I subtract 1 from the above command, in a one line command ?
|
Just change wc -l by grep -c . to skip the blank line:
dnf check-update --refresh --q --downloadonly | grep -c .
or
dnf check-update --refresh --q --downloadonly | sed '/^$/d' | wc -l
or if you insist to do arithmetic:
printf '%s\n' $(( $(dnf check-update --refresh --q --downloadonly | wc -l) -1))
$((...)) is an arithmetic substitution. After doing the arithmetic, the whole thing is replaced by the value of the expression. See http://mywiki.wooledge.org/ArithmeticExpression.
| Arithmetic operation in terminal from an output |
1,421,059,115,000 |
On Linux I can do:
echo ${ANDROID_KEYSTORE} | base64 -di > android/keystores/staging.keystore
But on macOS, the same commands give:
base64: option requires an argument -- i
Usage: base64 [-hvDd] [-b num] [-i in_file] [-o out_file]
-h, --help display this message
-Dd, --decode decodes input
-b, --break break encoded string into num character lines
-i, --input input file (default: "-" for stdin)
-o, --output output file (default: "-" for stdout)
I have tried to replace -di with --decode --input, but it didn't help.
How do I fix the macOS command?
Is there a command that works both on Linux (Debian/Ubuntu) and macOS?
|
If you want portability, you'll have to implement the linux-flavour's -i yourself
# don't forget to quote the variable!
echo "${ANDROID_KEYSTORE}" \
| sed 's/[^A-Za-z0-9+/=]//g' \
| base64 -d
The sed command drops invalid characters
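Another option that behaves identically on macOS and GNU/Linux is openssl's base64 mode; -A tells it to treat the input as a single long line instead of 64/76-column wrapped text. The demo string below is fabricated — with the real data you'd pipe "${ANDROID_KEYSTORE}" instead and redirect into the keystore file:

```shell
# demo: decode a one-line base64 string portably
printf '%s' 'aGVsbG8=' | openssl base64 -d -A
```

For the original task: `printf '%s' "${ANDROID_KEYSTORE}" | openssl base64 -d -A > android/keystores/staging.keystore`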
| How to decode base64 for both Linux and macOS? |
1,421,059,115,000 |
I'm using syncthing cli command to update settings in its config.xml file.
I have found that it is working only for some parameters, eg gui.user and gui.password:
$ syncthing cli --gui-address=localhost:8384 --gui-apikey=<KEY> config gui user set <VALUE>
$ syncthing cli --gui-address=localhost:8384 --gui-apikey=<KEY> config gui password set <VALUE>
But it is failing for almost everything else, eg:
$ syncthing cli --gui-address=localhost:8384 --gui-apikey=<KEY> config options minHomeDiskFree set 10
No help topic for 'minHomeDiskFree'
Is it possible to update other parameters using syncthing cli (and I'm doing something wrong with command syntax) or is there a list of supported parameters for this command (can't find anything in help/man)?
|
If you run syncthing like so:
syncthing cli config options
... then it will spit out a rather helpful text explaining how to use the cli config options sub-command.
In the text, you will see all the available options, one of these being min-home-disk-free. Note the spelling.
You may then drill further down to discover that you can get the currently configured setting like so:
$ syncthing cli config options min-home-disk-free value get
1
$ syncthing cli config options min-home-disk-free unit get
%
This means my currently running syncthing instance uses 1% as the value and unit of the min-home-disk-free setting.
You set the value and unit with set rather than get, followed by the appropriate argument.
$ syncthing cli config options min-home-disk-free value set 2
$ syncthing cli config options min-home-disk-free value get
2
| Using Syncthing cli to update its config.xml |
1,421,059,115,000 |
I'm using gfind running on MacOS, to find text files.
I am trying to see the filenames only and the birthdate of my gfind results, and then sorting them by date, and paging them. Is this achievable?
For the moment I am trying gfind . -name "*.txt" -printf "%f \t\t %Bc\n"
But the results are the following:
todo.txt Fri Mar 4 17:47:41 2022
99AC1EF5-6BE3-556B-8254-84A8764819E0.txt Fri Mar 4 17:49:08 2022
chrome_shutdown_ms.txt Fri Mar 4 17:48:07 2022
index.txt Fri Mar 4 17:48:05 2022
index.txt Fri Mar 4 17:48:05 2022
index.txt Fri Mar 4 17:48:06 2022
index.txt Fri Mar 4 17:47:46 2022
index.txt Fri Mar 4 17:48:01 2022
index.txt Fri Mar 4 17:48:01 2022
index.txt Fri Mar 4 17:48:05 2022
index.txt Fri Mar 4 17:48:05 2022
index.txt Fri Mar 4 17:48:06 2022
index.txt Fri Mar 4 17:48:06 2022
index.txt Fri Mar 4 17:47:46 2022
index.txt Fri Mar 4 17:48:06 2022
LICENSE.txt Fri Mar 4 17:48:07 2022
english_wikipedia.txt Fri Mar 4 17:48:07 2022
female_names.txt Fri Mar 4 17:48:07 2022
male_names.txt Fri Mar 4 17:48:07 2022
Is there a way to tabulate the output so that it looks consistent? I would like to show only the filenames and the birthdate in a more elegant way.
Thanks a lot in advance!
|
Here, you can use:
gfind . -name '*.txt' -printf '%-40f %Bc\n'
or
gfind . -name '*.txt' -printf '%40f %Bc\n'
To print the file name left-aligned or right-aligned padded with spaces to a length of 40 bytes (not characters, nor columns unfortunately).
That would align them as long as file names don't contain control, multi-byte, zero-width or double-width characters, and are not longer than 40 bytes.
Note that if you put the date first (here using the mtime (%T), not the Btime (%B) which I doubt is what you want as it doesn't reflect anything useful in the life of the file), and use a more useful and unambiguous timestamp format like the standard YYYY-MM-DDTHH:MM:SS[.subsecond]+ZZZZ, then you don't have to worry about alignment and it makes the sorting easier:
find . -name '*.txt' -printf '%TFT%TT%Tz %f\n'
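Because the date-first format sorts lexically in chronological order, the sorting-and-paging part of the question becomes a plain pipeline. A sketch (shown with GNU find; substitute gfind on macOS — the demo directory and timestamps below are made up for illustration):

```shell
# Create a hypothetical demo directory with two files of known mtimes
# (GNU touch -d assumed).
demo=$(mktemp -d)
touch -d '2022-03-04 17:47:41' "$demo/todo.txt"
touch -d '2022-03-05 09:00:00' "$demo/index.txt"

# The date-first output sorts chronologically with plain sort.
find "$demo" -name '*.txt' -printf '%TFT%TT%Tz %f\n' | sort
```

Interactively, append `| less` to the pipeline to page through long listings.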
| Properly tabulating 'find's output with printf and sorting them by date |
1,421,059,115,000 |
I have around 200 folders, each with 1000 or more jpegs inside, that all need zero padding to 4 digits. Some of these folders have further subdirectories containing deeper images. The photos are all named differently (ie in one folder they may be called Image_1.jpg, Image_11.jpg, etc, while another photo may contain files called Photo01.jpg, Photo02.jpg)
.
├── folderA
│ ├── subfolder1
│ │ ├── Photo_1.jpg
│ │ └── Photo_11.jpg
│ └── subfolder2
│ ├── image001.jpg
│ ├── image002.jpg
│ └── image003.jpg
└── folderB
├── subfolder1
│ ├── foto_01.jpg
│ └── foto_01.jpg
└── subfolder2
├── foto_01.jpg
├── foto_02.jpg
└── foto_03.jpg
Can anyone tell me how to run a command that will go into all subfolders and zero pad the numbers in a filename to 4 characters?
|
Use perl rename:
rename -n --filename 's/\d+/sprintf("%04d",$&)/e' *.jpg
or recursive:
find . -type f -name "*.jpg" -exec rename -n --filename 's/\d+/sprintf("%04d",$&)/e' {} +
the --filename flag makes sure to rename the filename only, not the path, otherwise you will end up with subfolder0001, etc.
Remove the -n if you're happy with the output.
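If the Perl rename utility is not available, the same padding can be approximated in plain bash. This is only a sketch: like the `\d+` match above it pads the first digit run in each name, and it assumes bash-specific features (`[[ =~ ]]`, `BASH_REMATCH`, `printf -v`); the demo files are invented:

```shell
# Hypothetical demo directory; the loop pads the first digit run in each name.
shopt -s nullglob
demo=$(mktemp -d)
touch "$demo/Photo_1.jpg" "$demo/Photo_11.jpg" "$demo/image001.jpg"
cd "$demo"
for f in *.jpg; do
  if [[ $f =~ [0-9]+ ]]; then
    num=${BASH_REMATCH[0]}
    printf -v padded '%04d' "$((10#$num))"   # 10# avoids octal on leading zeros
    [ "$num" = "$padded" ] || mv -- "$f" "${f/$num/$padded}"
  fi
done
ls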
| How to zero pad multiple files in multiple subdirectories using command line on linux? |
1,421,059,115,000 |
Example:
cat < test.txt
Is the content of the file test.txt written/passed to stdin of the cat, and then the cat reads its stdin?
OR
Does the file test.txt itself become the stdin of the cat? In other words, is the stdin of the cat changed to test.txt by setting the file descriptor (fd) of the text file to 0?
|
Option number 2: test.txt is opened, and cat is set up with its standard input pointing to that file (the file descriptor is duplicated so that it’s 0 in the process which ends up running cat).
On Linux, you can see this by running
$ touch /tmp/foo
$ sleep 120 < /tmp/foo &
[1] 3006118
$ ls -l /proc/3006118/fd
total 0
lr-x------ 1 steve steve 64 May 4 16:11 0 -> /tmp/foo
lrwx------ 1 steve steve 64 May 4 16:11 1 -> /dev/pts/3
lrwx------ 1 steve steve 64 May 4 16:11 2 -> /dev/pts/3
The process’ standard input is /tmp/foo directly.
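The same thing can be seen without a background job: any command started with `< file` has the file itself on file descriptor 0, which `/proc/self/fd/0` reveals directly (a Linux-specific sketch):

```shell
# stdin of readlink *is* the file, so it prints the file's own path.
tmp=$(mktemp)
readlink /proc/self/fd/0 < "$tmp"
rm -f "$tmp"
```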
| How does Input/Output Redirection work in Linux/Unix? |
1,421,059,115,000 |
Let's say I have a file listing the path of multiple files like the following:
/home/user/file1.txt
/home/user/file2.txt
/home/user/file3.txt
/home/user/file4.txt
/home/user/file5.txt
/home/user/file6.txt
/home/user/file7.txt
Let's also say that I want to copy those files in parallel 3 by 3. I know that with the command parallel I can execute a specific command in parallel as the following:
parallel bash -c "echo hello world" -- 1 2 3
However, this way of running parallel is hardcoded because even if I use a variable inside the quotes, it will only have a fixed parameter. I'd like to execute the parallel command getting parameters dynamically from a file. As an example, let's say I'd like to copy all files from my file running three parallel processes (something like cp "$file" /home/user/samplefolder/). How can I do it? Is there any parameter I can use with parallel to accomplish that and get parameters dynamically from a file?
|
If you use GNU Parallel you can do one of these:
parallel cp {} destination/folder/ :::: filelist
parallel -a filelist cp {} destination/folder/
cat filelist | parallel cp {} destination/folder/
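If GNU Parallel is not installed, `xargs -P` gives a similar N-at-a-time copy. A sketch with made-up temporary paths (the GNU xargs `-a` and `-d` options are assumed):

```shell
# Build a hypothetical filelist of three files, then copy them 3 at a time.
src=$(mktemp -d); dest=$(mktemp -d)
touch "$src/file1.txt" "$src/file2.txt" "$src/file3.txt"
printf '%s\n' "$src"/file?.txt > "$src/filelist"

# -P 3 runs up to three cp processes in parallel, one file name per line.
xargs -a "$src/filelist" -d '\n' -P 3 -I{} cp -- {} "$dest"/
```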
Consider spending 20 minutes on reading chapter 1+2 of the GNU Parallel 2018 book (print: http://www.lulu.com/shop/ole-tange/gnu-parallel-2018/paperback/product-23558902.html online: https://doi.org/10.5281/zenodo.1146014). Your command line will love you for it.
| How can I use the parallel command while getting parameters dynamically from a file? |
1,421,059,115,000 |
I have files like these :
- REPORT_100_COMPLETED.csv
- REPORT_100_FAILED.csv
- REPORT_101_COMPLETED.csv
- REPORT_101_FAILED.csv
- REPORT_102_COMPLETED.csv
- REPORT_102_FAILED.csv
I want all of them to be put inside subfolder according to the related id :
100
| REPORT_100_COMPLETED.csv
| REPORT_100_FAILED.csv
101
| REPORT_101_COMPLETED.csv
| REPORT_101_FAILED.csv
102
| REPORT_102_COMPLETED.csv
| REPORT_102_FAILED.csv
and so on, anyone can help? Thank you in advance!
|
for i in REPORT_*_*.csv; do
    dir=$(cut -d'_' -f2 <<<"$i")
    mkdir -p "$dir" && mv -- "$i" "$dir"/
done
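A variant that avoids forking `cut` for every file by extracting the id with parameter expansion (sketch with hypothetical demo files):

```shell
# Demo setup with invented file names matching the question's pattern.
demo=$(mktemp -d) && cd "$demo"
touch REPORT_100_COMPLETED.csv REPORT_100_FAILED.csv REPORT_101_COMPLETED.csv

for f in REPORT_*_*.csv; do
  rest=${f#*_}        # strip up to the first "_"  -> 100_COMPLETED.csv
  id=${rest%%_*}      # strip from the next "_"    -> 100
  mkdir -p "$id" && mv -- "$f" "$id"/
done
```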
| How can I move all files matching a pattern into a new folder? |
1,421,059,115,000 |
For example, to install a package with pacman one would use:
pacman -S <package>
While somebody using dnf would type:
dnf install <package>
While pacman uses the -S option, dnf uses the subcommand install.
Some other examples are nmcli and tar, with nmcli connection up <connection> (uses subcommands) and tar -xzvf <file> (uses options).
What are the pros and cons of each, or is it just personal preference?
|
A more technical term for what you call a "word" is "subcommand". It is quite common to design a command line interface as a command with a number of subcommands each with its own set of options. Sometimes subcommands have their own subcommands. git is one such example:
git remote add -f -t "$BRANCH_NAME" "$REMOTE_NAME" "git://example.com/repo"
One reason to split a CLI into subcommands is to emphasize that it can do several different related things. dnf can install packages, remove packages, upgrade packages, search for packages, show info about a package etc. All quite different actions. Many of those actions have extra knobs you can tweak and choices you can make via options.
Another reason to use subcommands is that a subcommand provides a namespace for options: --all in dnf list --all means a completely different thing from --all in dnf search --all
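That namespacing can be sketched as a tiny shell dispatcher, where each subcommand interprets --all on its own terms (mytool and its messages are invented purely for illustration):

```shell
# Minimal subcommand dispatcher: the same --all flag means something
# different under "list" than under "search".
mytool() {
  cmd=$1; shift
  case $cmd in
    list)   if [ "${1-}" = "--all" ]; then echo "listing all packages"
            else echo "listing installed packages"; fi ;;
    search) if [ "${1-}" = "--all" ]; then echo "searching all fields"
            else echo "searching names only"; fi ;;
    *)      echo "usage: mytool {list|search} [--all]" >&2; return 2 ;;
  esac
}

mytool list --all
mytool search --all
```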
Yet another reason to split a big CLI into subcommands is documentation. Imagine if git did not have subcommands. The entire git manual would be one looooooooong page detailing every option that git has and the relationships between them. Also: run git commit --help - you'll see a help page specific to the commit subcommand. Also: install tldr and run tldr git commit - you'll get a cheat sheet that is specific to the commit subcommand.
One more consideration is that you should not use subcommands if you want to allow the modes of operation they enable to be combined. You can specify multiple options in a single command invocation, but you can only specify one subcommand.
When designing a CLI is there a preference/rule of thumb for using an option or a subcommand? [closed]