date int64 1,220B 1,719B | question_description stringlengths 28 29.9k | accepted_answer stringlengths 12 26.4k | question_title stringlengths 14 159 |
|---|---|---|---|
1,499,006,042,000 |
According to this article
I successfully used the command
$ ffmpeg -vf "select='eq(pict_type,I)'" -i somevideo.mp4 -vsync 0 -f image2 /tmp/thumbnails-%02d.jpg
I tried the second command:
$ ffmpeg -vf "select='gt(scene\,0.9)'" -i somevideo.mp4 -vsync 0 -f image2 /tmp/thumbnails-%02d.jpg
but ended with error:
Undefined constant or missing '(' in 'scene'
Because
$ ffmpeg -version
ffmpeg version 0.8.17-4:0.8.17-0ubuntu0.12.04.1, Copyright (c) 2000-2014 the Libav developers
built on Mar 16 2015 13:28:23 with gcc 4.6.3
The ffmpeg program is only provided for script compatibility and will be removed
in a future release. It has been deprecated in the Libav project to allow for
incompatible command line syntax improvements in its replacement called avconv
(see Changelog for details). Please use avconv instead.
ffmpeg 0.8.17-4:0.8.17-0ubuntu0.12.04.1
libavutil 51. 22. 3 / 51. 22. 3
libavcodec 53. 35. 0 / 53. 35. 0
libavformat 53. 21. 1 / 53. 21. 1
libavdevice 53. 2. 0 / 53. 2. 0
libavfilter 2. 15. 0 / 2. 15. 0
libswscale 2. 1. 0 / 2. 1. 0
libpostproc 52. 0. 0 / 52. 0. 0
I tried to use rather avconv. It runs both commands successfully, but in both cases it generates incorrect results (too many frames, so seemingly ignoring the video filter expression).
How can I correct my ffmpeg or avconv to give the right results?
|
First of all, -vf needs to be specified after the input in order to affect it, and that seems to be the only reason avconv "worked" for the second command: it must have discarded your filter without even parsing it. If you move the argument after -i, you get the same error ffmpeg gave you. Newer versions of ffmpeg actually treat that as an error.
Now the reason neither command works is simple: neither of the versions you are using supports the scene variable in the select filter.
What's more, scene still appears to be missing from the master branch of avconv as of now, so avconv simply does not support it.
As for ffmpeg, the filter was introduced in r7286814 which didn't make it into your build.
Hence, you need to obtain an up-to-date version if you want to use the filter.
Once installed, move -vf after -i, and run your command to get your results.
$ ffmpeg -i somevideo.mp4 -vf "select='gt(scene,0.9)'" -vsync 0 -f image2 /tmp/thumbnails-%02d.jpg
| Incorrect scene change detection with avconv |
1,499,006,042,000 |
I have some pictures that I want in black and white.
I'm in the right folder:
-rw-r--r-- 1 alex alex 1027 Jan 21 13:07 target-0.jpg
-rw-r--r-- 1 alex alex 1001 Jan 21 12:17 target-1.jpg
-rw-r--r-- 1 alex alex 957 Jan 21 12:17 target-2.jpg
-rw-r--r-- 1 alex alex 982 Jan 21 12:17 target-4.jpg
Why does this not work?
for i in *.jpg ; do mogrify -monochrome ; done
No errors, but no black-and-white pictures. When I convert them one at a time (mogrify -monochrome target-0.jpg) it works as expected. Version of ImageMagick:
apt-cache policy imagemagick
imagemagick:
Installiert: 8:6.8.9.9-5+deb8u6
Installationskandidat: 8:6.8.9.9-5+deb8u6
Versionstabelle:
*** 8:6.8.9.9-5+deb8u6 0
500 http://security.debian.org/ jessie/updates/main amd64 Packages
500 http://http.us.debian.org/debian/ jessie/main amd64 Packages
100 /var/lib/dpkg/status
And
env | grep -i shell
SHELL=/bin/bash
|
You do not pass the variable i to your mogrify command in the for loop. It should be as follows.
for i in *.jpg ; do mogrify -monochrome "$i"; done
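To see the difference, you can prefix the command with echo as a dry run; the fixed loop prints one mogrify invocation per file (a sketch with made-up filenames):

```shell
# dry run: print the command the loop would execute for each file
for i in a.jpg b.jpg; do
  echo mogrify -monochrome "$i"
done
```

Remove the echo once the printed commands look right.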
| mogrify -monochrome to several Picture |
1,499,006,042,000 |
How to input text into a new text file using nano from command line?
I would like the same as with the following, but using nano:
echo 'Hello, world.' >foo.txt
Result:
nano is not capable of handling non-interactive text input.
echo is available on every Linux/Unix system, while nano is not installed by default on every Linux/Unix system. echo can also be used in shell scripts.
Conclusion:
The most compatible solution is to use
echo 'Hello, world.' >foo.txt
as solution to create a file and fill with input text non-interactively.
|
You can use a here document, but this way it is not possible to specify an output file.
$ cat | nano <<-EOF
one
two
three
EOF
Received SIGHUP or SIGTERM
Buffer written to nano.save
This behaviour is mentioned in the man page under NOTES:
In some cases nano will try to dump the buffer into an emergency file. This will happen mainly if nano receives a SIGHUP or SIGTERM or runs out of memory. It will write the buffer into a file named nano.save if the buffer didn't have a name already, or will add a ".save" suffix to the current filename. If an emergency file with that name already exists in the current directory, it will add ".save" plus a number (e.g. ".save.1") to the current filename in order to make it unique. In multibuffer mode, nano will write all the open buffers to their respective emergency files.
So I think nano is not the best choice for non-interactive text input. If you only want to write multi-line text to a file, you can use a here document without nano as well.
cat > foo.txt <<-EOF
> one
> two
> three
>
> EOF
cme@itp-nb-1-prod-01 ~ $ cat foo.txt
one
two
three
Maybe this is what you need.
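If the goal is just multi-line text in a file without interaction, printf is an even shorter alternative to the here document (a sketch):

```shell
# printf with '%s\n' writes each argument on its own line
printf '%s\n' one two three > foo.txt
cat foo.txt
```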
| How to input text into a new text file using nano from command line? |
1,499,006,042,000 |
I'd like to print command output along with its input. For example for such call as
echo "Hello world" | wc -c
I want the following output:
12,Hello world
Is there any way to do this using standard Unix (or GNU) tools?
|
tee and paste solution:
echo "Hello world" | tee >(wc -c) | tac | paste -s -d, -
12,Hello world
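An equivalent without process substitution is a single awk call; note the + 1, which accounts for the trailing newline that wc -c also counts (a sketch):

```shell
# length($0) excludes the newline, so add 1 to match wc -c
echo "Hello world" | awk '{ printf "%d,%s\n", length($0) + 1, $0 }'
```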
| Combine command output along with the input [duplicate] |
1,499,006,042,000 |
I've installed some utilities from the CLI and got quite a long verbose output describing what was installed directly, what needed some dependencies, what is no longer needed, etc.
Is there a way to grep something from this last command's output? There is a certain word I need.
Thanks,
|
I needed something that does that after I had already run the installation command, not for future installation commands.
While I don't know a command to do it after the installation command has been executed, what I did was copy the output from the terminal into a text editor like Vi or Nano, and then search for all instances of the desired phrase.
| Grep something specific of the results of last execution? |
1,499,006,042,000 |
I have a file containing a set of information as below:
cat filename
1 S121
2 M121
3 MS121
4 SM154
5 SM91
I am trying to change only those entries which have [mM] to MS while keeping the rest of the pattern. The following sed script was tried:
sed -r 's/ms?([0-9])/MS\1/Ig' filename
but it is not specific to a leading [mM] and also changes lines 4 and 5, as in the output below:
4 SMS154
5 SMS91
any help is appreciated. tnx!
|
You are matching a substring, starting with "m", optionally followed by "s", and then followed by a digit [0-9].
The text on lines 4,5 does contain this substring too:
4 SM154
5 SM91
so they are replaced.
Try prefixing your pattern with "\s" (a whitespace character) so that you only match at the start of column #2, like this:
sed -r 's/(\s)ms?([0-9])/\1XX\2/Ig'
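Applied to the sample data from the question, the anchored pattern changes only lines 2 and 3 and leaves lines 4 and 5 alone (shown here with the XX placeholder from the command above; substitute MS for the desired result):

```shell
# only the [mM] at the start of column 2 is replaced; SM154 and SM91 survive
printf '1 S121\n2 M121\n3 MS121\n4 SM154\n5 SM91\n' |
  sed -r 's/(\s)ms?([0-9])/\1XX\2/Ig'
```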
| How to use appropriate regex to find a pattern in sed? |
1,499,006,042,000 |
I'm not going to lie: this is for an assignment. I'm stuck and it's kind of frustrating, so I came here as my last resort; please help me out.
I need to make a script that finds and prints whether a given path is a relative or an absolute path. I'm stuck on the last part, where the prof wants me to use command substitution, which I have no idea how to do. This is what I have so far:
if [ "$#" -ne 1 ]; then
echo 1>&2 "$0: please insert one valid file name;found $# ($*) "
echo 1>&2 "Usage: $0 [Filename..]"
exit 2
fi
if [ -z "$1" ] ; then
echo 1>&2 "$0: file name cannot be empty; found $# ($*) "
echo 1>&2 "Usage: $0 [filename...] "
exit 2
fi
if [ ! -L "$1" ] ; then
echo 1>&2 "$0: The pathname '$1' is not a symlink"
echo 1>&2 "Usage: '$0' [symlink] "
exit 2
fi
a=ls "$1" | awk '{ print $NF }'
if [ -z "$a" ] ; then
echo 1>&2 "$0: Pathname is empty "
exit 3
fi
type=$(a)
case "$b" in
/* ) type='an Absolute Pathname' ;;
* ) type='a Relative Pathname in the current directory' ;; # the "default" match
echo "pathname'$a' is $type"
esac
This is a screenshot of what he wants us to do.
Please ask any questions if anything isn't clear enough.
Thank you
|
The script is far from ready, but you're on the right track now.
if [ "$#" -ne 1 ]; then
echo 1>&2 "$0: please insert one valid file name;found $# ($*) "
echo 1>&2 "Usage: $0 [Filename..]"
exit 2
fi
if [ -z "$1" ] ; then
echo 1>&2 "$0: file name cannot be empty; found $# ($*) "
echo 1>&2 "Usage: $0 [filename...] "
exit 2
fi
if [ ! -L "$1" ] ; then
echo 1>&2 "$0: The pathname '$1' is not a symlink"
echo 1>&2 "Usage: '$0' [symlink] "
exit 2
fi
a=$(ls -l "$1" | awk '{ print $NF }')
if [ -z "$a" ] ; then
echo 1>&2 "$0: A Really Good Error Message."
exit 3
fi
# type=$a
case "$a" in
/*) type='an Absolute Pathname' ;;
*) type='a Relative Pathname in the current directory' ;; # the "default" match
esac
echo "pathname'$a' is $type"
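The case logic at the end can be exercised on its own; a leading / is all that distinguishes an absolute pathname (a standalone sketch with made-up paths):

```shell
# classify a couple of sample pathnames the same way the script does
for p in /etc/passwd docs/file.txt; do
  case "$p" in
    /*) echo "$p is an Absolute Pathname" ;;
    *)  echo "$p is a Relative Pathname" ;;
  esac
done
```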
| a command substitute |
1,499,006,042,000 |
I have some text in a file copied to the clipboard via command line and I then wish to paste this content to a website.
cat file | pbcopy
/usr/bin/open -a "/Applications/Google Chrome.app" 'http://google.com/'
(1) How do I paste to say google (if it's even possible)?
(2) Is it possible to open a different website, tab x amount of times and then paste?
|
For sending keystrokes to a graphical application from a command-line program you can use xdotool (you may need to install it first). See the answer by Gilles to the question "How to send keystrokes (F5) from terminal to a process?" on Unix & Linux.
| Is it possible for command line to open a website and paste the clip board to a text box? |
1,499,006,042,000 |
The output I get from a capture of the file browser:
xwd -name "CVandXdo - File Browser" -out capture.xwd
Doesn't match the specification defined for xwd files.
I plan to parse the output with an image-recognition program, but I cannot locate the xwd header. I need to know where the pixels start and how many rows and columns there are.
Here is the beginning of the xwd file viewed in a hex editor. The xwd command seems to have put another header in front, but I'm unable to find its documentation. I assume there is one header from 0x00 to 0x7c, but the actual xwd format header doesn't appear to begin after it.
00000000: 0000 007c 0000 0007 0000 0002 0000 0018 ...|............
00000010: 0000 01f1 0000 01b5 0000 0000 0000 0000 ................
00000020: 0000 0020 0000 0000 0000 0020 0000 0020 ... ....... ...
00000030: 0000 07c4 0000 0004 00ff 0000 0000 ff00 ................
00000040: 0000 00ff 0000 0008 0000 0100 0000 0100 ................
00000050: 0000 01f1 0000 01b5 0000 055e 0000 007a ...........^...z
00000060: 0000 0000 4356 616e 6458 646f 202d 2046 ....CVandXdo - F
00000070: 696c 6520 4272 6f77 7365 7200 0000 0000 ile Browser.....
00000080: 0000 0000 0000 0701 0001 0101 0101 0101 ................
This is the same file, after I opened it in GIMP and saved it again.
00000000: 0000 0064 0000 0007 0000 0002 0000 0018 ...d............
00000010: 0000 01f1 0000 01b5 0000 0000 0000 0001 ................
00000020: 0000 0020 0000 0001 0000 0020 0000 0018 ... ....... ....
00000030: 0000 05d4 0000 0005 00ff 0000 0000 ff00 ................
00000040: 0000 00ff 0000 0008 0000 0000 0000 0000 ................
00000050: 0000 01f1 0000 01b5 0000 0040 0000 0040 ...........@...@
00000060: 0000 0000 edec ebed eceb edec ebed eceb ................
Can someone find me this arcane xwd documentation, or perhaps its "output implementation", that explains its behaviour? All my Google searching has only turned up tutorials on how to take screenshots using xwd.
|
The include file /usr/include/X11/XWDFile.h, which is part of X11, holds more information. I found this file in the rpm xorg-x11-proto-devel on my system. In particular, the claim in your link that HeaderSize is always 40 is incorrect. The header file says header_size = SIZEOF(XWDheader) + length of null-terminated window name. Further useful comments in the file are:
Null-terminated window name follows the above structure.
Next comes XWDColor structures, at offset XWDFileHeader.header_size in
the file. XWDFileHeader.ncolors tells how many XWDColor structures
there are.
Here's a bit of python to read the start of an xwd file and print some of this information. It calculates the offset to the first image pixels:
#!/usr/bin/python
import sys, struct
XWDColorlen = 4*3*2+2*1
MSBFirst = 1
class Xwd:
def __init__(self,data):
(self.header_size,
self.file_version,
self.pixmap_format,
self.pixmap_depth,
self.pixmap_width,
self.pixmap_height,
self.xoffset,
self.byte_order,
self.bitmap_unit,
self.bitmap_bit_order,
self.bitmap_pad,
self.bits_per_pixel,
self.bytes_per_line,
self.visual_class,
self.red_mask,
self.green_mask,
self.blue_mask,
self.bits_per_rgb,
self.colormap_entries,
self.ncolors,
self.window_width,
self.window_height,
self.window_x,
self.window_y,
self.window_bdrwidth) = struct.unpack(">25I",data[:100])
f = open(sys.argv[1], "rb")  # binary mode so struct sees raw bytes
data = f.read()
xwd = Xwd(data)
print("header_size %d ncolors %d" % (xwd.header_size, xwd.ncolors))
offset = xwd.header_size+xwd.ncolors*XWDColorlen
print("offset %d 0x%x" % (offset,offset))
print("bits_per_pixel %d" % xwd.bits_per_pixel)
if xwd.bits_per_pixel==32:
if xwd.byte_order==MSBFirst:
fmt = ">I"
else:
fmt = "<I"
for i in range(20):
print("%08x" % struct.unpack(fmt,data[offset:offset+4]))
offset += 4
Applied to the data example you provided, it says
header_size 124 ncolors 256
offset 6780 0x1a7c
bits_per_pixel 32
I see there is also a perl pod to investigate xwd images.
| xwd output - unknown header |
1,499,006,042,000 |
I have thousands of csv files in some directories. Out of these I would like to copy one csv file to a remote machine at the same path; if the remote machine doesn't have the directory, it should create the directory and copy the file to that path.
Let me elaborate with an example. Say I have a file called foo.csv in some directories:
test/
├── 201512
└── foo.csv
└── bar.csv
├── 201601
└── foo.csv
└── abc.csv
├── 201602
└── foo.csv
└── xyz.csv
.
.
├── 201612
└── foo.csv
└── asd.csv
I would like to copy foo.csv to the same path on the remote machine as on the source machine. So /test/201512/foo.csv should be copied to /test/201512/ on the remote. If the remote machine doesn't have that directory path, it should create it. Does rsync or scp have any facility to accomplish this?
(Note : Content of foo.csv could be different in all directories)
|
Setup, done on my Vagrant box:
$ mkdir -p test/20{1512,16{01..12}}
$ for d in !$; do printf 'I am a csv file in %s\n' "$d" > "$d"/foo.csv; printf 'I am a different file; do not copy me!\n' > "$d"/abc.csv; done
Directory structure after setup:
[vagrant@localhost ~]$ tree test
test
├── 201512
│ ├── abc.csv
│ └── foo.csv
├── 201601
│ ├── abc.csv
│ └── foo.csv
├── 201602
│ ├── abc.csv
│ └── foo.csv
├── 201603
│ ├── abc.csv
│ └── foo.csv
├── 201604
│ ├── abc.csv
│ └── foo.csv
├── 201605
│ ├── abc.csv
│ └── foo.csv
├── 201606
│ ├── abc.csv
│ └── foo.csv
├── 201607
│ ├── abc.csv
│ └── foo.csv
├── 201608
│ ├── abc.csv
│ └── foo.csv
├── 201609
│ ├── abc.csv
│ └── foo.csv
├── 201610
│ ├── abc.csv
│ └── foo.csv
├── 201611
│ ├── abc.csv
│ └── foo.csv
└── 201612
├── abc.csv
└── foo.csv
13 directories, 26 files
[vagrant@localhost ~]$ cat test/201609/foo.csv
I am a csv file in test/201609
[vagrant@localhost ~]$
Next, from my own box (not the vagrant box):
rsync -ame 'ssh -p 2222' -f '+ */' -f '+ foo.csv' -f '- *' [email protected]:/home/vagrant/test .
Result:
$ find test
test
test/201512
test/201512/foo.csv
test/201601
test/201601/foo.csv
test/201602
test/201602/foo.csv
test/201603
test/201603/foo.csv
test/201604
test/201604/foo.csv
test/201605
test/201605/foo.csv
test/201606
test/201606/foo.csv
test/201607
test/201607/foo.csv
test/201608
test/201608/foo.csv
test/201609
test/201609/foo.csv
test/201610
test/201610/foo.csv
test/201611
test/201611/foo.csv
test/201612
test/201612/foo.csv
Notes on rsync options:
Here again is the command used:
rsync -ame 'ssh -p 2222' -f '+ */' -f '+ foo.csv' -f '- *' [email protected]:/home/vagrant/test .
-a is the "archive" switch, meaning the directory is copied recursively, permissions are preserved, etc.
-m means any empty directories will not be copied (e.g. if one of the date directories is missing foo.csv we don't create that directory).
-e 'ssh -p 2222' is just because I'm using a Vagrant box which has SSH on a different port than 22; you can omit this part.
-f introduces "filter" rules. You can include or exclude files. The filters I've used should be fairly self-explanatory, but to clarify the '+ */' filter: we need to include all directories so that the foo.csv files will be reached.
Read more about this in the man page at:
LESS='+/INCLUDE\/EXCLUDE PATTERN RULES' man rsync
| copy files remotely on same path |
1,499,006,042,000 |
I wish to modify a Mac OS X sandbox file via a one-line (copy-and-paste) command, by inserting a new line (containing a regex) after a line that contains a specific string (also a regex pattern).
The file to edit requires root rights and is located at /usr/share/sandbox/clamd.sb.
Both the search and append lines contain loads of characters that usually need escaping, because they are regexes containing paths.
Search for line containing
(regex #"^/private/var/clamav/")
Note: the string is preceded by tabs in one case.
Insert this line before the match
(regex #"^/System/Library/PrivateFrameworks/TrustEvaluationAgent.framework/Versions/A/TrustEvaluationAgent\$")
Note: the inserted line should be prepended with one tab (\t).
My failing try
sudo sed -i '' -e $'/(regex #"\^\/private\/var\/clamav\/")/a \t(regex #"\^\/System\/Library\/PrivateFrameworks\/TrustEvaluationAgent\.framework\/Versions\/A\/TrustEvaluationAgent\\\$")' /usr/share/sandbox/clamd.sb
sed: 1: "/(regex #"\^\/private\/ ...": command a expects \ followed by text
Question
How to fix the above sed command
or
supply a better readable and working alternative that can be used to copy from a website and paste into the Mac OS X terminal (bash) to extend this sandbox configuration file?
|
You can't do this with macOS Sed, because it strips leading whitespace from the lines that you are inserting.
Is it portable to indent the argument to sed's 'i\' command?
Using Awk:
awk '/\(regex #"\^\/private\/var\/clamav\/"\)/ {print "\t(regex #\"^/System/Library/PrivateFrameworks/TrustEvaluationAgent.framework/Versions/A/TrustEvaluationAgent\$\")"}; {print}' /usr/share/sandbox/clamd.sb > ~/temp-clamd.sb
Note that I've redirected the output to ~/temp-clamd.sb rather than editing the file in place (which is tricky or impossible with BSD Awk).
Next you can check that the changes are as you expect with:
diff /usr/share/sandbox/clamd.sb ~/temp-clamd.sb
If everything is correct, overwrite the contents of the original file with the modified copy (don't use mv, which would change the inode, permissions, owner):
cat ~/temp-clamd.sb | sudo tee /usr/share/sandbox/clamd.sb
| One-liner to insert a new line of text (literally a regex, thus many to be escaped characters) in a configuration file before a specific string? |
1,499,006,042,000 |
I'm using macOS Sierra and I would like to log a process with the top command and store all the information in a file. I'm using the following command:
top | grep --line-buffered "PROCESS" > test.txt
This works perfectly, but I would like to select only certain columns in the results:
PID
Memory Usage
CPU Usage
Network Usage
Disk Usage
Is there a way to filter the top result and select only the columns of my interest?
|
You can run this command in a loop.
top -l 1 | grep "PROCESS" | awk '{print $1,$2}' >> test.txt
Use awk to select the columns you want to include in your logs: $1 is the first column, $2 is the second, and so on.
| filter top result |
1,499,006,042,000 |
I was using an at command in my bash profile to echo the time of day every hour to remind me of the time, but now when I open the command line it flashes a lot of annoying warnings like these:
warning: commands will be executed using /bin/sh
job 241 at Thu Sep 1 00:00:00 2016
warning: commands will be executed using /bin/sh
job 242 at Thu Sep 1 01:00:00 2016
warning: commands will be executed using /bin/sh
job 243 at Thu Sep 1 02:00:00 2016
warning: commands will be executed using /bin/sh
job 244 at Thu Sep 1 03:00:00 2016
warning: commands will be executed using /bin/sh
job 245 at Thu Sep 1 04:00:00 2016
warning: commands will be executed using /bin/sh
job 246 at Thu Sep 1 05:00:00 2016
warning: commands will be executed using /bin/sh
job 247 at Thu Sep 1 06:00:00 2016
warning: commands will be executed using /bin/sh
job 248 at Thu Sep 1 07:00:00 2016
warning: commands will be executed using /bin/sh
job 249 at Thu Sep 1 08:00:00 2016
warning: commands will be executed using /bin/sh
job 250 at Thu Sep 1 09:00:00 2016
warning: commands will be executed using /bin/sh
job 251 at Wed Aug 31 10:00:00 2016
warning: commands will be executed using /bin/sh
job 252 at Wed Aug 31 11:00:00 2016
warning: commands will be executed using /bin/sh
job 253 at Wed Aug 31 12:00:00 2016
warning: commands will be executed using /bin/sh
job 254 at Wed Aug 31 13:00:00 2016
warning: commands will be executed using /bin/sh
job 255 at Wed Aug 31 14:00:00 2016
warning: commands will be executed using /bin/sh
job 256 at Wed Aug 31 15:00:00 2016
warning: commands will be executed using /bin/sh
job 257 at Wed Aug 31 16:00:00 2016
warning: commands will be executed using /bin/sh
job 258 at Wed Aug 31 17:00:00 2016
warning: commands will be executed using /bin/sh
job 259 at Wed Aug 31 18:00:00 2016
warning: commands will be executed using /bin/sh
job 260 at Wed Aug 31 19:00:00 2016
warning: commands will be executed using /bin/sh
job 261 at Wed Aug 31 20:00:00 2016
warning: commands will be executed using /bin/sh
job 262 at Wed Aug 31 21:00:00 2016
warning: commands will be executed using /bin/sh
job 263 at Wed Aug 31 22:00:00 2016
warning: commands will be executed using /bin/sh
job 264 at Wed Aug 31 23:00:00 2016
An example of the command I am trying to run:
echo "midnight" | at 00:00
How can I remove these warnings?
Note: this is not like the question "Why does at warn me that commands will be executed using /bin/sh? What if I want a different shell?", because I don't want to know why; I want to know how to get rid of the warning, and I don't care about the shell.
|
echo "midnight" | at 00:00 2>/dev/null
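The 2>/dev/null redirection works because at writes its warning to standard error; discarding file descriptor 2 leaves standard output untouched (a generic sketch):

```shell
# 'warn' goes to stderr and is discarded; 'out' survives on stdout
{ echo out; echo warn >&2; } 2>/dev/null
```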
| Removing the warning message that pops up when using an at command [duplicate] |
1,499,006,042,000 |
I am trying to batch rename the following files:
art-faculty-3_29060055362_o.jpeg
fine-arts-division-faculty-2016-2017-5_29165851925_o.jpeg
theatre-faculty-2016-2017-1_29132529356_o.jpeg
art-history-faculty-2016-2017-1_29060057642_o.jpeg
music-faculty-2016-2017-1_29132523816_o.jpeg
I would like to rename them to:
art-faculty.jpeg
fine-arts-division-faculty.jpeg
theatre-faculty.jpeg
art-history-faculty.jpeg
music-faculty.jpeg
Here is what I have so far:
rename -n -D '/faculty(.*)/g' -X -v *
This returns:
Using expression: sub { use feature ':5.18'; s/\/faculty\(\.\*\)\/g//g; s/\. ([^.]+)\z//x and do { push @EXT, $1; $EXT = join ".", reverse @EXT } }
'art-faculty-3_29060055362_o.jpeg' unchanged
'art-history-faculty-2016-2017-1_29060057642_o.jpeg' unchanged
'fine-arts-division-faculty-2016-2017-5_29165851925_o.jpeg' unchanged
'music-faculty-2016-2017-1_29132523816_o.jpeg' unchanged
'theatre-faculty-2016-2017-1_29132529356_o.jpeg' unchanged
Is it possible to use a regex with the delete (-D) transformation? If so, how would I use it to make the transformation shown above? If not, please point me in the right direction for performing regex transformations with rename.
|
for i in *.jpeg; do echo mv "$i" "${i%faculty*}faculty.jpeg" ; done
If the output looks okay per your requirements, remove the echo to actually rename the files.
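The ${i%faculty*} expansion removes the shortest suffix matching faculty*, i.e. everything from the last faculty onward, which is why appending faculty.jpeg rebuilds the wanted name (a sketch on one of the sample names):

```shell
# show the expansion on one of the filenames from the question
i='fine-arts-division-faculty-2016-2017-5_29165851925_o.jpeg'
echo "${i%faculty*}faculty.jpeg"
```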
The Perl rename command on my system has only the options -v, -f and -n:
$ rename -n 's/faculty\K.*(?=\.jpeg)//' *.jpeg
art-faculty-3_29060055362_o.jpeg renamed as art-faculty.jpeg
art-history-faculty-2016-2017-1_29060057642_o.jpeg renamed as art-history-faculty.jpeg
fine-arts-division-faculty-2016-2017-5_29165851925_o.jpeg renamed as fine-arts-division-faculty.jpeg
music-faculty-2016-2017-1_29132523816_o.jpeg renamed as music-faculty.jpeg
theatre-faculty-2016-2017-1_29132529356_o.jpeg renamed as theatre-faculty.jpeg
| Rename command to delete substring |
1,499,006,042,000 |
Given an input file which consists of lines of IP addresses and strings, how can I loop through each line, and execute a command using the IP address and string? An example of the command that I want to run for each line is:
ssh [email protected] cat /etc/component10/version | grep 'Version\|Project' >> /tmp/component_ver.txt
Assume that a password is not required. I would like the script to be robust enough to answer yes if the "...(yes/no)?" prompt is encountered during login.
Sample INPUT file:
192.168.0.10 component10
192.168.0.20 component20
192.168.0.30 component30
|
The following code will do what you want.
while read -r server _; do
  ssh -n -o StrictHostKeyChecking=no root@"$server" \
    "grep -E 'Version|Project' /etc/component10/version" >> /tmp/component_ver.txt
done < serverfile
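The read -r server _ part splits each input line on whitespace: the first field goes into server and the remainder is discarded into _. A dry run without SSH shows the iteration (sketch):

```shell
# print the host each iteration would connect to
while read -r server _; do
  echo "would ssh to root@$server"
done <<'EOF'
192.168.0.10 component10
192.168.0.20 component20
EOF
```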
| How to read input file and act on each line |
1,499,006,042,000 |
I've been trying to figure out an ffmpeg command with the following requirements while converting avi to mp4 with the H264 video codec. One command I tried was the generic one recommended on most forums:
ffmpeg -I input.avi -acodec copy -vcodec copy output.mp4
But this copies the same video codec and doesn't convert to H264. Can any of you help me compose a command that does the task with the following requirements?
=> Video Options
Codec: H264
Video Aspect Ratio: No Change
Video Resolution: No Change
Video FPS: No Change
=> Audio Options
Codec: AC
Audio Channels: No Change
Audio Frequency: No Change
Audio Normalization: No Change
Thanks in advance!
|
Let us enumerate the parameters to ffmpeg then.
-acodec is better written -c:a (mnemonic: codec for audio)
-vcodec is better written -c:v (same mnemonic)
-i is the input file (not -I)
ffmpeg does a pretty good guesswork based on file extensions, therefore doing:
ffmpeg -i file.webm file.mp4
Will convert things, but probably in a pretty poor quality.
For H264 you are after the libx264 codec therefore it should go:
ffmpeg -i file.avi -c:v libx264 -c:a copy file.mp4
As a test let's use the classic webm example:
$ wget http://video.webmfiles.org/big-buck-bunny_trailer.webm
...
$ ffmpeg -i big-buck-bunny_trailer.webm -c:a copy -c:v libx264 bbb.mp4
...
$ ffprobe bbb.mp4
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'bbb.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf57.41.100
Duration: 00:00:32.50, start: 0.000000, bitrate: 414 kb/s
Stream #0:0(eng): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 640x360 [SAR 1:1 DAR 16:9], 341 kb/s, 25 fps, 25 tbr, 12800 tbn, 50 tbc (default)
Metadata:
handler_name : VideoHandler
Stream #0:1(eng): Audio: vorbis (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 64 kb/s (default)
Metadata:
handler_name : SoundHandler
And that looks promising, stream #0:0 is a H264 encoded video stream.
| Building Requirements-Specific Command For 'ffmpeg' Tool |
1,499,006,042,000 |
Trying to get powerline/airline symbols to show in vim running in a Debian container created with sudo systemd-nspawn -D ~/debian-tree/ on a Fedora host.
Right now it just shows question marks in diamonds (��). I'm pretty sure I need to set the locale, but I can't find a straightforward answer on how to do this properly.
Output of locale
LANG=
LANGUAGE=
LC_CTYPE="POSIX"
LC_NUMERIC="POSIX"
LC_TIME="POSIX"
LC_COLLATE="POSIX"
LC_MONETARY="POSIX"
LC_MESSAGES="POSIX"
LC_PAPER="POSIX"
LC_NAME="POSIX"
LC_ADDRESS="POSIX"
LC_TELEPHONE="POSIX"
LC_MEASUREMENT="POSIX"
LC_IDENTIFICATION="POSIX"
LC_ALL=
output of locale -a
C
C.UTF-8
POSIX
|
Setting the locale is documented in the Debian install guide - there's an appendix which provides some hints on installing directly with debootstrap and configuring the system yourself.
To configure your locale settings to use a language other than English, install the locales support package and configure it. Currently the use of UTF-8 locales is recommended.
# aptitude install locales
# dpkg-reconfigure locales
The appendix as a whole has a disclaimer that it is not comprehensive, but it is official documentation and this specific method is perfectly correct. There are other alternatives which may be preferred for scripting - this method prompts the user to choose which locale(s).
There is a second issue which the appendix also mentions in passing. I am not sure if it affects your specific character issue, but it can cause issues with similar sophisticated output. You need to make sure that TERM is set correctly. Run echo $TERM outside the container. Inside the container, run e.g. export TERM=xterm-256color to set the terminal type for this session.
I don't think machinectl login handles this for you either, which is sad given how it talks to systemd inside the container.
If you run an SSH server inside the container, then just use that, SSH will forward TERM correctly and you don't have to do anything.
| Setting locale in a systemd-nspawn container (debian jessie) |
1,499,006,042,000 |
I have a file with 21 tab-separated fields (columns). Fields 14 and 15 are sets of data which are repeated several times for the values in field 10 (up to the ":"), and field 11 has numeric descriptive data for field 10.
Here's an example of the input:
399 3 0 0 0 0 0 0 - chromosome_1_Contig0.1980:10701-11103 402 0 402 gi|952977790|ref|NM_001317128.1| 849 447 849 1 402 0 447
281 0 0 0 0 0 0 0 - chromosome_1_Contig0.1980:11209-11490 281 0 281 gi|952977790|ref|NM_001317128.1| 849 166 447 1 281 0 166
166 0 0 0 0 0 0 0 - chromosome_1_Contig0.1980:11588-11754 166 0 166 gi|952977790|ref|NM_001317128.1| 849 0 166 1 166 0 0
51 0 0 0 0 0 0 0 + chromosome_1_Contig0.3916:1547-1598 51 0 51 gi|733214878|ref|NM_001303082.1| 708 0 51 1 51 0 0
132 0 0 0 0 0 0 0 + chromosome_1_Contig0.3916:3201-3333 132 0 132 gi|733214878|ref|NM_001303082.1| 708 282 414 1 132 0 282
294 0 0 0 0 0 0 0 + chromosome_1_Contig0.3916:3412-3706 294 0 294 gi|733214878|ref|NM_001303082.1| 708 414 708 1 294 0 414
103 4 0 0 0 0 0 0 + chromosome_1_unplaced_Contig0.3951:379-486 107 0 107 gi|526117967|ref|NM_001281232.1| 1518 1236 1343 1 107 0 1236
212 1 0 0 0 0 0 0 - chromosome_1_unplaced_Contig0.12366:214-427 213 0 213 gi|526117831|ref|NM_001281196.1| 1025 738 951 1 213 0 738
178 2 0 0 0 0 0 0 - chromosome_1_unplaced_Contig0.12366:633-813 180 0 180 gi|526117831|ref|NM_001281196.1| 1025 558 738 1 180 0 558
243 1 0 0 0 0 0 0 - chromosome_1_unplaced_Contig0.12366:909-1153 244 0 244 gi|526117831|ref|NM_001281196.1| 1025 314 558 1 244 0 314
313 1 0 0 0 0 0 0 - chromosome_1_unplaced_Contig0.12504:1668-1887 314 0 314 gi|526117831|ref|NM_001281196.1| 1025 0 314 1 314 0 0
I would like to get a new summarized tabular file from this in which, for lines where the values in field 10 (up to the ":") and field 14 are the same, field 11 is summed for that combination on a new line. I would like to keep the lines in which these combinations appear only once. This gives me 3 new summarized fields.
Then I would like to include the previous field 15 and, in a new field, the difference between the new field 3 and the old field 15. The output should look like this:
example output:
old_tab_10 old_tab_14 sumof_old_tab11 old_tab15 (old_tab15)-(sumof_old_tab11)
chromosome_1_Contig0.1980 gi|952977790|ref|NM_001317128.1| 849 849 0
chromosome_1_Contig0.3916 gi|733214878|ref|NM_001303082.1| 477 708 231
chromosome_1_unplaced_Contig0.3951 gi|526117967|ref|NM_001281232.1| 107 1518 1411
chromosome_1_unplaced_Contig0.12366 gi|526117831|ref|NM_001281196.1| 637 1025 388
chromosome_1_unplaced_Contig0.12504 gi|526117831|ref|NM_001281196.1| 314 1025 711
I started messing around with something on the lines of
awk '{S[$14]+=$11;N[$14]+} END{for(i in S){print i, N[i]}}'
then realized this is way out of my capabilities. I'm not even sure how to separate the fields for both the tabs and the ":", whether that's a good idea, or if it would be better to use a different approach to separate the ":".
|
You can use split to extract the two parts of field 10 into an array (here called arr10) like this:
split($10, arr10, ":")
Then you can build an index out of a combination of the first element of that array and the whole of element 14. Using that index, you can build two new arrays, e.g. sum_of_11 and old_15:
sum_of_11[arr10[1]"\t"$14] += $11 # sum of all rows that have this index
old_15[arr10[1]"\t"$14] = $15 # just the value in the single most recent row
Putting it together (and setting OFS = "\t"):
awk '{ split($10, arr10, ":");
sum_of_11[arr10[1]"\t"$14] += $11;
old_15[arr10[1]"\t"$14] = $15
} END {
OFS = "\t";
for (i in sum_of_11) {
print i, sum_of_11[i], old_15[i], old_15[i] - sum_of_11[i]
}
}' file
Result:
chromosome_1_Contig0.3916 gi|733214878|ref|NM_001303082.1| 477 708 231
chromosome_1_unplaced_Contig0.12366 gi|526117831|ref|NM_001281196.1| 637 1025 388
chromosome_1_unplaced_Contig0.3951 gi|526117967|ref|NM_001281232.1| 107 1518 1411
chromosome_1_unplaced_Contig0.12504 gi|526117831|ref|NM_001281196.1| 314 1025 711
chromosome_1_Contig0.1980 gi|952977790|ref|NM_001317128.1| 849 849 0
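Note that for (i in sum_of_11) visits keys in an unspecified order, which is why the rows above come out shuffled; piping through sort makes the output deterministic. A self-contained check of the same program with two of the sample rows:

```shell
printf '%s\n' \
  '51 0 0 0 0 0 0 0 + chromosome_1_Contig0.3916:1547-1598 51 0 51 gi|733214878|ref|NM_001303082.1| 708 0 51 1 51 0 0' \
  '132 0 0 0 0 0 0 0 + chromosome_1_Contig0.3916:3201-3333 132 0 132 gi|733214878|ref|NM_001303082.1| 708 282 414 1 132 0 282' |
awk '{ split($10, arr10, ":")
       sum_of_11[arr10[1]"\t"$14] += $11
       old_15[arr10[1]"\t"$14] = $15
     } END {
       OFS = "\t"
       for (i in sum_of_11)
         print i, sum_of_11[i], old_15[i], old_15[i] - sum_of_11[i]
     }' | sort
```

The single summarized row shows 51 + 132 = 183 against the old total of 708.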
| awk to consolidate large tabular file? |
1,465,383,627,000 |
My files
masi1.jpg
masi2.jpg
masi3-1.jpg
masi4.jpg
...
masi10.jpg
masi11.jpg
...
Command: pdfjam *.jpg. Output: random order. Expected output: the order of the list above.
There are no parameters for this in man pdfjam, only the synopsis:
pdfjam [OPTION [OPTION] ...] [SRC [PAGESPEC] [SRC [PAGESPEC]] ...]
System: Ubuntu 16.04.
Pdfjam: 2.08.
|
Wildcard matches are sorted in lexicographic order, so 10 is between 1 and 2, not after 9.
To sort matches with numbers in numeric order, use zsh and its n glob qualifier
pdfjam *.jpg(on)
Or (still zsh-only) set the numeric_glob_sort option:
setopt numeric_glob_sort # this can go in your ~/.zshrc
pdfjam *.jpg
If all your files have a number of the same format, you can enumerate the number of digits:
pdfjam masi?.jpg masi??.jpg
But with fancier file names mixed in like masi3-1.pdf, there's no easy solution in bash.
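In bash, one workaround is GNU sort's -V (version sort) option, which orders embedded numbers numerically; this is a sketch and assumes filenames without whitespace, since a command substitution splits on it:

```shell
printf '%s\n' masi1.jpg masi10.jpg masi2.jpg masi3-1.jpg | sort -V
```

Then, for example: pdfjam $(printf '%s\n' *.jpg | sort -V).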
| How to pdfjam by Filename Numbering? |
1,465,383,627,000 |
I want to test my connection speed with a terminal command:
./speedtest-cli
And it returns This:
Retrieving speedtest.net configuration...
Retrieving speedtest.net server list...
Testing from Moscow, Russia (77.50.8.74)...
Selecting best server based on latency...
Hosted by East Telecom Ltd (Mytishchi) [10.81 km]: 7.433 ms
Testing download speed........................................
Download: 38.06 Mbit/s
Testing upload speed..................................................
Upload: 23.52 Mbit/s
I want to transform this to csv row like :
12-12-2016 12:01 ; 38.06 ; 23.52 ;
12-12-2016 12:11 ; 39.00 ; 21.12 ;
12-12-2016 12:21 ; 37.06 ; 25.00 ;
I'm trying to use simple grep function for this:
grep 'Upload:' test.txt | sed 's/^.*: //' >> test_res.txt
But this only extracts the speed number from the file, and captures only one of the parameters. How do I write the exact transformation to the needed format? I'm rather new to bash scripting.
|
awk -v date="$(date +%d-%m-%Y\ %H:%M)" -v OFS=';' '/Download:/ { d=$2; }
/Upload:/ { print date, d, $2, ""; d="" }' speedtest
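Put end to end it could look like this; the speedtest output is simulated here with printf, so swap in ./speedtest-cli for the printf and append with >> test_res.txt for real runs:

```shell
printf 'Download: 38.06 Mbit/s\nUpload: 23.52 Mbit/s\n' |
awk -v date="$(date '+%d-%m-%Y %H:%M')" -v OFS=';' \
    '/Download:/ { d=$2 } /Upload:/ { print date, d, $2, ""; d="" }'
```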
| Convert command output to csv, with time stamp |
1,465,383,627,000 |
I have a directory with the following layout (the layout in Directory 1 is repeated in every other Directory <num>):
Parent directory
Directory 1
some directory
another directory
<many files>
Directory 2
︙
Directory 3
Directory 4
I'd like to rename the files by prefixing them with the Directory <num> and moving them up 3 directories so that they are under the Parent directory and have the original (now empty) directories deleted like:
Parent directory
Directory 1_<many files>
Directory 2
︙
Directory 3
Directory 4
How could I do that?
The following from a similar question
find . -mindepth 2 -name '*.jpg' -exec rename -n 's!/([^/]+)$!_$1!' {} +
renames the files to the 1st parent directory:
Parent directory
Directory 1
another directory_<many files>
Directory 2
︙
|
First of all,
I can’t reproduce the results you claim for the command you showed.
I got the files being renamed to another directory_file1.jpg,
another directory_file2.jpg, etc.,
but still under the some directory directories.
Secondly, because of the depth of your directory structure,
you should be using -mindepth 4 instead of 2.
Thirdly, I strongly encourage you to use -type f.
As long as you’re using -name '*.jpg',
you probably won’t find any directories.
But six to eight weeks from now,
you’ll look at this and think “I want this to apply to all my files
— I don’t need to say -name '*.jpg',” and you’ll take it out.
And then, if you don’t have -type f,
the find command might start finding directories and renaming them.
And modifying a directory tree while you’re scanning it
is a recipe for disaster
(like the proverbial “flying the airplane while we’re still building it”).
Fourthly, the rename command to move a file up three levels
and prefix its name with Directory <num>_ is
rename -n 's!/[^/]+/[^/]+/([^/]+)$!_$1!'
because /[^/]+ represents a directory level,
so I just added two more copies of that.
Warning: This will, as I said, move a file up three levels.
If you have any files deeper in the directory tree than that,
they will not be renamed to the top level;
they may be renamed to something like Directory 3/some directory_file3.jpg.
But it turns out that that’s easy to fix.
To move a file to the top level from whatever depth it is at,
and prefix its name with Directory <num>_,
just use
s!(\./[^/]+)/.*/([^/]+)$!$1_$2!
TL;DR
So,
The final command is
find . -mindepth 4 -type f -exec rename -n 's!/[^/]+/[^/]+/([^/]+)$!_$1!' {} +
Add -name '*.jpg' if you want.
Delete -n when the trace output shows you that it’s doing the right thing.
Beware that this has the potential to have filename collisions.
If you have files
Directory 1/some directory/another directory/sunrise.jpg
and
Directory 1/some directory/another directory/mountains/sunrise.jpg
this command will try to rename them both to Directory 1_sunrise.jpg.
Luckily the rename program seems to be smart/friendly enough
to do no harm in this case.
The first such file that it sees will (probably) be renamed;
the others will be left where they are (and you will be notified).
(You can use -f or --force to force overwriting existing files.)
But remember that some other programs (e.g., mv)
have less friendly default actions.
For the updated question
("How can I include all directory names in the file names?"):
this is a little trickier.
I'll explicitly assume that we are searching .,
so all the pathnames will begin with ./.
And I've discovered that the perlexpr argument to rename
can actually contain multiple string-modification commands.
So we can move all files to the top level
and prefix their names with all the directory names in the path with
s!^\./!!;s!/!_!g
This also has the possibility of filename collisions.
Directory 1/abc/def_ghi/sunrise.jpg
and
Directory 1/abc_def/ghi/sunrise.jpg
will (potentially) both be renamed to Directory 1_abc_def_ghi_sunrise.jpg.
Disclaimer: I have tested this, but not at all thoroughly.
I do not guarantee that this will work perfectly.
You should make a complete backup before trying the above commands.
| Move files several directories up for several directories with similar layout |
1,465,383,627,000 |
When I am typing in a path at the bash prompt I sometimes do not remember what the directories are so I cannot incrementally search for them.
Is there a way in readline to cycle through the possibilities or list them?
|
Completion does this. Press Tab to list the files starting with the part of the word containing the cursor up to the cursor. That is, if the cursor is at | in xdg-open fo|.pdf, then pressing Tab lists all the files beginning with fo, whether they have the .pdf extension or not. This makes completion most useful when you've only typed a prefix of the file you want.
What exactly happens when you press Tab depends on your completion settings. By default, you need to press it twice to list all the possibilities unless the word at the cursor is an unambiguous prefix. You may want to tweak the readline settings in ~/.inputrc, in particular set show-all-if-ambiguous on to get a list of completions immediately instead of having to press Tab twice.
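For example, a ~/.inputrc along these lines makes Tab cycle through the candidates one by one instead of listing them (a sketch; adjust to taste):

```
# ~/.inputrc
set show-all-if-ambiguous on
# cycle through completions on repeated Tab presses:
"\t": menu-complete
```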
By default bash's completion is fairly dumb and only ever completes file names as arguments of commands. Install the bash-completion package (provided by most distributions) and put . /etc/bash_completion in your ~/.bashrc to get context-aware completion.
If you don't find bash's completion mechanism fully satisfactory, try zsh, which has a much fancier system, including the possibility to complete based on parts of words (and not just a prefix) or on wildcard patterns, to select completions in a menu, etc.
| View path options when using readline? |
1,465,383,627,000 |
I was wondering if any of you knew why my terminal doesn't keep displaying colored output after I use the command ls -l -a --color=always.
I would like it to stay colored, so when the next time I type ls -l -a it is colored.
Just to make it clear, I'm using Windows 10, and then with putty I SSH into a server, where I have an account, and it runs Linux.
|
Once the output of ls is on the terminal, it stays colored. But if you run ls again, whether the output is colored depends on the options you pass to ls this time. The ls command doesn't remember settings from one time to the next.
If you want to have default settings for a command, define an alias for it. For bash, the file where aliases are defined is .bashrc. So add the following line to your .bashrc:
alias ls='ls --color=auto'
In addition, bash doesn't read .bashrc if it's a login shell, only if it's an interactive non-login shell. To get the same interactive configuration in both cases, put the following line in your .bash_profile:
if [ -e ~/.profile ]; then . ~/.profile; fi
case $- in *i*) . ~/.bashrc;; esac # Load .bashrc if this login shell is interactive
For future customizations, use .profile or .bash_profile for session startup things like environment variables, and .bashrc for interactive customizations such as aliases and shopt settings.
If you ever want to run the ls program and bypass your alias, run \ls instead of ls.
| Terminal color usage does not stay |
1,465,383,627,000 |
Of all the screenshot tools I have seen in Linux, the KDE one (ksnapshot) looks the most powerful.
ksnapshot --region is a command that I can associate with a shortcut to capture a selected region without opening the Ksnapshot GUI.
The GUI, on the other hand, has a supplementary option of setting a delay for capturing the region:
Can that be done with a command too? I don't see a delay argument mentioned in ksnapshot --help-all.
Can ksnapshot or another tool do that, namely allow a CLI command to capture a rectangular region with a delay?
|
There are several ways; the simplest is probably sleep(1):
sleep 1m && ksnapshot --region ...
Using && instead of ; has the added benefit that the command can still be cancelled with Ctrl+C.
| CLI command to capture region with delay? |
1,465,383,627,000 |
I'm using an AWS EC2 instance and I'm trying to switch from an Ubuntu instance to a Linux (Amazon Linux AMI) instance and in doing so I need to figure out the equivalent apt-get to yum commands and packages to install.
In essence how would you translate the following from apt-get to yum?
Side note: assume I have already assumed the root user role via sudo su, which is why sudo is not used in conjunction with the following commands.
apt-get commands:
apt-get update (assuming this would be equivalent to yum update -y)
apt-get upgrade -y
apt-get dist-upgrade -y
apt-get autoremove -y
apt-get install apache2 php5 php5-cli php5-fpm php5-gd libssh2-php libapache2-mod-php5 php5-mcrypt php5-mysql git unzip zip postfix php5-curl mailutils php5-json -y
a2enmod rewrite headers
php5enmod mcrypt
|
Various distros name their packages slightly differently and there is no automated way to map one to the other. You've probably quoted the best example already with Apache, which is apache2 on Debian/Ubuntu systems and httpd on CentOS/RedHat/Fedora systems, apache on Arch, apache2 on openSuse, www-servers/apache on Gentoo etc.
The best way to find the packages is to search for them with yum search:
yum search apache
...
httpd.x86_64 : Apache HTTP Server
...
which finds Apache in the description. You'll get about 200 lines of packages here, so maybe pipe it into less to read or into grep to look for keywords.
If you know the file name and want to find the package which provides this, then run yum provides:
yum provides *bin/httpd
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: mirror.simwood.com
* epel: epel.mirrors.ovh.net
* extras: mirror.simwood.com
* updates: mirror.mhd.uk.as44574.net
httpd-2.4.6-40.el7.centos.x86_64 : Apache HTTP Server
Repo : base
Matched from:
Filename : /usr/sbin/httpd
httpd-2.4.6-40.el7.centos.x86_64 : Apache HTTP Server
Repo : @base
Matched from:
Filename : /usr/sbin/httpd
Simply running yum provides httpd will list any package that has files which end in httpd, not just the executable, so it's best to narrow the search by prefixing it with *bin/.
Google can also help you find common packages.
| Linux yum Commands that are Equivalent to these Ubuntu apt-get Commands [closed] |
1,465,383,627,000 |
I am trying to print a tshark command output using awk below is my command:
tshark -r "test.pcap" -odiameter.tcp.ports:"1234" -R 'diameter.cmd.code == 272 and diameter.flags.request==0 and !tcp.analysis.retransmission and diameter.flags.T == 0' -Tpdml -Tfields -ediameter.Session-Id -ediameter.CC-Request-Type -ediameter.Result-Code -ediameter.Validity-Time -ediameter.Unit-Quota-Threshold -ediameter-Value-Digits | awk '{print $1":"$2":"$3":"$4":"$5":"$6}'
Output:
xyz.test.com:1:2001:300:400:1234
If any of the six fields is empty i want to print " " or NULL value. For example if no output is coming for 4th field, i need output as below:
xyz.test.com:1:2001::400:1234
But i am getting output as:
xyz.test.com:1:2001:400:1234:
Any suggestions would be very useful. I am open to using any Linux command to get the expected output mentioned above.
|
Based on your tshark parameters, I'm guessing you are trying to output 6 specific fields, and one of them is empty.
tshark by default uses a TAB character as separator, so the output will contain two consecutive TAB characters (indicating a missing value).
awk however, by default treats multiple tab/spaces as one field separator - thus it doesn't do what you expect.
The solution is to specify a single field-separator character in awk.
See example, the value "4" is missing from the simulated output:
$ printf "1\t2\t3\t\t5\t6\n"
1 2 3 5 6
By default, AWK treats the two tabs as one field-separator, resulting in:
$ printf "1\t2\t3\t\t5\t6\n" \
| awk '{print $1":"$2":"$3":"$4":"$5":"$6}'
1:2:3:5:6:
What you want is likely this:
$ printf "1\t2\t3\t\t5\t6\n" \
| awk -v FS='\t' '{print $1":"$2":"$3":"$4":"$5":"$6}'
1:2:3::5:6
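An equivalent shortcut when you want every field is to set both FS and OFS and force awk to rebuild the record with $1=$1:

```shell
printf '1\t2\t3\t\t5\t6\n' | awk -F'\t' -v OFS=':' '{$1=$1; print}'
```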
| Want to print NULL if value is not present as awk output |
1,465,383,627,000 |
I have a number of files that are saved as
Year -> Month -> Day -> bunch of .nc files
I would like to generate a list of all of the directories that contain the nc files. I can list the path of each nc file with:
find /Year/ -name *.nc | sort > directory_list.txt
which finds each .nc file in the subdirectories of this main folder. These results are then saved in the text file directory_list.txt:
/2000/01/01/nc_file1.nc
/2000/01/01/nc_file2.nc
/2000/01/01/nc_file3.nc
/2000/01/02/nc_file3.nc
/2000/01/03/nc_file3.nc
and so on...
How is it possible to slightly modify this so that I have a list of each 'Day' directory? This would be similar to the results I get with the command above, but without the information on the nc file included.
I have tried:
find /Year/ | sort > directory_list.txt
but this returns each path
/Year/
/Year/Month
/Year/Month/Day
I would like the outcome to be:
/2000/01/01/
/2000/01/02/
/2000/01/03/
and so on... without the directory name being repeated
I think this is the same as trying to get the directory of the third level folder within the Year directory? Any advice would be appreciated.
|
find /Year/ -name '*.nc' | sed -e 's:/[^/]*$:/:' | sort -u
will give you the list of directories which contain at least one file whose name matches '*.nc'.
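With GNU find, -printf '%h' emits just the directory part, so the sed step isn't needed. A self-contained sketch with a throwaway tree:

```shell
top=$(mktemp -d)                          # stand-in for your Year directory
mkdir -p "$top/2000/01/01" "$top/2000/01/02"
touch "$top/2000/01/01/a.nc" "$top/2000/01/01/b.nc" "$top/2000/01/02/c.nc"
find "$top" -name '*.nc' -printf '%h/\n' | sort -u
```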
| path of folder within a folder in terminal |
1,465,383,627,000 |
I am attempting to create a Custom Action in Thunar (file manager) that will extract a gzip archive into a subdirectory of the same name (e.g. abc.tar.gz to abc/). I created the command below, which works, although it puts single quotes around the directory name (e.g. 'abc'/ instead of abc/). I ran the equivalent command manually and it doesn't produce single quotes. How can I remove them, and where are they coming from? Is there a better method of doing this?
tar -xzvf %n -C "$(f="%n"; g=${f%%.tar.gz}; mkdir -p $g; echo $g)"
|
I would try removing the quotation marks around %n. It appears that Thunar adds its own quoting there, which is why you end up with quotes in the folder name.
Also, when you check Thunar's examples, they never put quotation marks around expanded variables.
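So the custom action command would become something like the following; this is a sketch, on the assumption that Thunar supplies its own quoting around %n:

```
tar -xzvf %n -C "$(f=%n; g=${f%.tar.gz}; mkdir -p "$g"; echo "$g")"
```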
| Thunar custom action: Extraction to subdirectories |
1,465,383,627,000 |
Consider the following setup: My laptop is normally located at my desk connected to power, external keyboard and mouse and an external monitor. Since the laptop's screen is pretty small and my monitor is pretty big, I work with the laptop screen turned off using only the external monitor. Recently I wanted to take the laptop to another place to continue working there, hence I disconnected all the externals and took the device now running on battery with me.
Before doing so, I had taken a break, and during that break the desktop had locked itself and switched off the monitors. So I forgot that I had to change the display settings to use the internal screen as I removed the external monitor, and when I arrived at my new location I was looking at a black screen. I could not change the screen settings using the laptop's quick-access keys since the desktop was locked. However, I still had access to the ttys using the Ctrl+Alt+F keys, and I was wondering if there was some way to access and change the respective settings from a command line without rebooting the computer, restarting the display manager, or any other action that forcefully terminates the user's current session.
Of course, I could just have gone back to my desk, connect the external monitor, change the settings and disconnect again. Or I could have unlocked my desktop by entering my password blindly and then use the screen setting hotkeys, but these options might not be available in some cases.
|
xrandr should be what you are looking for.
Also, you may want to review the multihead setup instructions for your distribution.
I have had good luck with xrandr on gnome 3 but it should work fine with KDE.
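From a text console, the rough sequence is shown below; the output names LVDS1 and VGA1 are assumptions, so check xrandr --query first, and you may also need the session's XAUTHORITY for access:

```
export DISPLAY=:0
xrandr --query                 # list outputs and their current status
xrandr --output LVDS1 --auto   # re-enable the internal panel
xrandr --output VGA1 --off     # disable the disconnected external output
```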
| Is there a way to change KDE4 display settings from the command line? |
1,465,383,627,000 |
I have a folder that I want to zip without deflating the sound files (since I'll create an expansion file for Android).
To achieve that one can use the -n flag, that is:
zip -n .mp3 main_expansion thaisounds
Then a new zip archive is created in which the mp3 sound files are stored but not deflated.
The problem is that I also have two other sound file formats there:
.wav
.3ga
If I add those as follows
zip -n .mp3,.wav,.3ga main_expansion thaisounds
Then the program starts to deflate all files even though I use the -n flag.
So, my question: how should I use the zip command so that it does not deflate the media files when there are several formats?
|
According to my version of the zip man-page, you need to use colons to separate the suffixes:
-n suffixes
--suffixes suffixes
Do not attempt to compress files named with the given suffixes. Such files are simply stored (0% compres-
sion) in the output zip file, so that zip doesn’t waste its time trying to compress them. The suffixes
are separated by either colons or semicolons. For example:
zip -rn .Z:.zip:.tiff:.gif:.snd foo foo
So you'd end up with:
zip -n .mp3:.wav:.3ga main_expansion thaisounds
| Zip several soundfile-formats without deflate |
1,465,383,627,000 |
I'm trying to use the output of a command as arguments:
The command:
/home/alexandre/dropbox.py exclude add ls | grep -v photos
I have to add a list of files, for example:
/home/alexandre/dropbox.py exclude add a.txt b.txt c.txt
ls | grep -v photos will give me a list of all files except the folder photos.
But with this command, an exclusion is added for a file named ls (which does not exist; I want to run the command ls).
Does anybody know how to do that?
|
What you are looking for is command substitution, which replaces $( … ) with the output of the command inside, like:
/home/alexandre/dropbox.py exclude add $(ls | grep -v photos)
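Note that the unquoted $( … ) splits on whitespace, so filenames containing spaces will break. A common alternative is the xargs pattern, shown here with echo standing in for the hypothetical dropbox.py path:

```shell
printf '%s\n' a.txt photos b.txt | grep -v photos | xargs echo exclude add
```

With the real command: ls | grep -v photos | xargs /home/alexandre/dropbox.py exclude add. xargs has the same whitespace caveat unless you switch to find -print0 | xargs -0.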
| pipe as arguments |
1,465,383,627,000 |
Earlier, in my shell, on pressing TAB I would directly see the directory path completed as far as possible. But now, after upgrading to CentOS 6, I see that it also prints all possible names from the current directory and then completes the command, which unnecessarily takes up space in my shell, as some of the directories I work with have hundreds of subdirectories starting with the same name.
How can I revert it so that I only get completion and not see all directories printed?
|
Have you tried unset autolist? Test it on the command-line and if it works, add it to your ~/.tcshrc
see man tcsh and search for Completion and listing for more details on completion and on what autolist does.
| Disable printing all possibilities in tcsh on TAB |
1,465,383,627,000 |
I want to know which executable gets executed for any command in bash.
Example:
I have firefox installed here /usr/bin/firefox, it is in the $PATH
alias browser=firefox
alias br=browser
Now I want to type something like getexecutable "br" and it should display /usr/bin/firefox
|
Here's a quick script I wrote further to my comment, that in the SIMPLE case of aliases will work. For anything with arguments/etc., though, it will fail miserably.
cmd="$1"
type=aliased
while [ "$type" = "aliased" ]; do
output="$(type "$cmd")"
type="$(cut -d ' ' -f 3 <<< "$output")"
cmd="$(cut -d '`' -f 2 <<< "$output" | tr -d \')"
done
echo "$output"
You will have to (ironically!) alias something to source this, as spawning a subshell will likely remove your local aliases.
| Get executable for any command [duplicate] |
1,465,383,627,000 |
I created mp3 files in Linux with the mp3wrap application and then I applied ID3 tags on it as follows:
$ for i in *.mp3; do mp3info -t ${i%.*} -l yes1 -a yes $i; done
When I look at a particular mp3 file (e.g. ddi.mp3) it looks as follows:
$ mp3info ddi.mp3
File: ddi.mp3
Title: ddi Track:
Artist: yes
Album: yes1 Year:
Comment: Genre: [255]
I copied the mp3 files to my Android device. No audio player app there shows or sorts them correctly according to my ID3 tags.
Instead, the title is shown as File wrapped with Mp3Wrap and the artist/album as
http://mp3splt.sourceforge.net
Is it possible to correct this ID3 information and enforce the correct ID3 tags?
|
AFAIK mp3info writes only ID3v1 tags. You can check this with something like eyeD3 (a tool written in Python): eyeD3 -1 file.mp3 (to check ID3v1) and eyeD3 -2 file.mp3 (to check ID3v2, which is what recent players read).
You can also use eyeD3 to write v1 or v2 tags.
For instance, you can edit v2 tags with:
eyeD3 -2 -a "The Artist" -A "Album Name" -t "Title of the song" /tmp/song.mp3
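Applied to all files, the original loop could be adapted like this; a sketch using the same old-style eyeD3 options as above and the same filename-derived titles:

```
for i in *.mp3; do eyeD3 -2 -t "${i%.*}" -a yes -A yes1 "$i"; done
```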
| Bad ID3 tags when transferring mp3 to another device |
1,465,383,627,000 |
I have a program that runs from the command line. As soon as it runs, it asks for a text value and expects the return key to be pressed after that.
Is it possible to create a bash script that runs that program, waits a little for the prompt to appear (let's say 2 seconds), and then provides the text and the Enter key?
EDIT:
I have created this script with expect but it is not working:
#!/usr/bin/expect -f
set timeout 15
set user "myusername"
set server "x.x.x.x"
spawn ssh -l $user -p AAAA $server
expect "$user@$server's password: "
send "the password\r"
where AAAA is the port and x.x.x.x the IP.
when I run this script it finishes almost immediately and nothing happens.
|
It is working now. Two changes were needed: the send is attached directly to the expect pattern, and interact hands control of the ssh session over to you; without it, expect exits as soon as the script ends, which is why the original version finished almost immediately. The script is now:
#!/usr/bin/expect -f
set timeout 15
set user "myusername"
set server "x.x.x.x"
spawn ssh -l $user -p AAAA $server
expect "$user@$server's password: " { send "the password\r" }
interact
| Running a program and providing input |
1,465,383,627,000 |
I saw this video on Youtube: Run Kali Linux on Android
The phone uses the app Linux Deploy, which uses the phone to host the distro, but there is no GUI. The only way to connect to Kali is by using a VNC viewer.
My question is:
Is it possible to host a Linux distro on e.g. a laptop or desktop machine, without a GUI, only VNC? So that you need another PC with a VNC client to use the desktop itself.
|
Yes, this is easy. Install the system with the GUI libraries for X (drivers optional) and the desktop environment you want. Then run something like the TigerVNC server, and you're done.
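As a sketch of the server side (Debian-style package names assumed; adjust for your distro):

```
apt-get install xfce4 tigervnc-standalone-server
vncserver :1 -geometry 1280x800    # starts Xvnc, listening for VNC clients
# from the other machine: vncviewer <host>:1
```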
| Host linux distro without GUI? |
1,465,383,627,000 |
Does anyone know which command I need to enter on the terminal, so I can check whether all allocated memory was successfully freed?
|
In Linux you can use free to see the amount of memory used. By running free before and after a process executes, you might be able to see whether all memory was released. Keep in mind, though, that other applications might have allocated or released memory in the meantime. If you want to monitor a process while it is allocating and/or releasing memory, try pmap -x <pid>.
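For example (Linux, procps tools assumed); run the free snapshots before and after your program, and point pmap at its PID while it runs:

```shell
free -m                    # system-wide usage in MiB
pmap -x $$ | tail -n 2     # per-process detail, here for the current shell ($$)
```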
| How to check if allocated memory was freed? |
1,465,383,627,000 |
I was wondering if there is an easy way to do the following without writing a script.
Transform
1234,"a;b;d"
2345,"e;f;g;h"
to
1234,a
1234,b
1234,d
2345,e
2345,f
2345,g
2345,h
|
Should be easy with awk; with -F'[";]' the line splits on both the double quote and the semicolon, leaving the comma attached to $1 so each inner value can be appended directly:
$ awk -F'[";]' -vOFS='' '{for(i=2;i<NF;i++)print $1,$i}' file
1234,a
1234,b
1234,d
2345,e
2345,f
2345,g
2345,h
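The same program, runnable without a file by feeding the two sample lines through printf:

```shell
printf '1234,"a;b;d"\n2345,"e;f;g;h"\n' |
awk -F'[";]' -v OFS='' '{for(i=2;i<NF;i++)print $1,$i}'
```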
| Transform values in a line by first field |
1,465,383,627,000 |
..
Script Run Complete.
You have new mail in /var/spool/mail/<user-name>
-bash-3.2$
I see the above message at my prompt most of the time, usually while it's idle, as soon as a script returns, or just upon hitting Return. I don't need it at all.
Is there any way or tweak to control what is being output at the prompt? How would I know what's going to show up there?
|
You have to do unset MAILCHECK. From the bash manual:
MAILCHECK
Specifies how often (in seconds) bash checks for mail. The
default is 60 seconds. When it is time to check for mail, the
shell does so before displaying the primary prompt. If this
variable is unset, or set to a value that is not a number
greater than or equal to zero, the shell disables mail checking.
You can e.g. put it at the end of your ~/.bashrc, or search that file and/or /etc/bash.bashrc to see if MAILCHECK is set anywhere and remove it.
The alternative is to read the mail and make sure no new mail arrives...
| How do I control output after executing a command? |
1,465,383,627,000 |
Recently git branch <tab> started showing me the following error:
$ git branch bash: -c: line 0: syntax error near unexpected token `('
bash: -c: line 0: `/usr/bin/git --git-dir=.git for-each-ref --format=%(refname:short) refs/tags refs/heads refs/remotes'
HEAD bash: -c: line 0: syntax error near unexpected token `('
bash: -c: line 0: `/usr/bin/git --git-dir=.git for-each-ref --format=%(refname:short) refs/tags refs/heads refs/remotes'
HEAD ^C
How can I fix it?
I have the following lines in my ~/.bashrc:
git() {
cmd=$1
shift
extra=""
quoted_args=""
whitespace="[[:space:]]"
for i in "$@"
do
if [[ $i =~ $whitespace ]]
then
i=\"$i\"
fi
quoted_args="$quoted_args $i"
done
cmdToRun="`which git` "$cmd" $quoted_args"
cmdToRun=`echo $cmdToRun | sed -e 's/^ *//' -e 's/ *$//'`
bash -c "$cmdToRun"
# Some mad science here
}
|
Your script does not preserve quotes. The original line executed by completion is:
git --git-dir=.git for-each-ref '--format=%(refname:short)' refs/tags refs/heads refs/remotes
by your script you get:
bash -c '/usr/bin/git --git-dir=.git for-each-ref --format=%(refname:short) refs/tags refs/heads refs/remotes'
Note the missing quotes around:
--format=%(refname:short)
I haven't looked closely at what you actually do, but this:
quoted_args="$quoted_args \"$i\""
# | |
# +--+------- Extra quotes.
should result in something like:
bash -c '/usr/bin/git --git-dir=.git "for-each-ref" "--format=%(refname:short)" "refs/tags" "refs/heads" "refs/remotes"'
or:
quoted_args="$quoted_args '$i'"
# | |
# +--+------- Extra quotes.
bash -c '/usr/bin/git --git-dir=.git '\''for-each-ref'\'' '\''--format=%(refname:short)'\'' '\''refs/tags'\'' '\''refs/heads'\'' '\''refs/remotes'\'''
You might want to look into the %q format for printf.
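For example, printf %q (a bash builtin) produces a shell-quoted word that survives a round trip through bash -c:

```shell
arg='--format=%(refname:short)'
quoted=$(printf '%q' "$arg")       # e.g. --format=%\(refname:short\)
bash -c "printf '%s\n' $quoted"    # the parentheses survive the round trip
```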
| Broken git autocompletion after I have overridden the git command |
1,465,383,627,000 |
I have finally come up with a favourite PS1 format but I find it takes too long to load.
The part that is slowing it down is the call to external commands in the prompt. I simply want to show the number of entries and the number of hidden files in the directory.
I followed these 2 pages as a guide to make the prompt: "External command in prompt" and "customizing bash command prompt blog". I could not get the blog's method to work any faster than what I came up with. Why would he use "pwd" instead of \w anyway? Plus I don't get why he made a variable and echoed it ($OUT). Oh well, here's what I did...
I sort of combined both methods and came up with the below, which works, but not as fast as I would like...
export PS1="\[\e[2;37m\]\d \[\e[2;37m\] @ \[\e[2;37m\] \t \[\e[2;33m\]> Currently in: \[\e[0;33m\]\w [\$(ls -A | wc -l) entries and \$[\$(ls -A | wc -l) - \$(ls | wc -l)] are hidden] \[\e[0m\]
\[\e[2;36m\]\u\[\e[0;37m\]@\[\e[1;32m\]\h\[\e[0;33m\] \$ \[\e[0m\]"
Newly edited command in bashrc, as per @mikeserv's suggestions:
export PS1="\[\e[2;37m\]\d \[\e[2;37m\] @ \[\e[2;37m\] \t \[\e[2;33m\]>Currently in: \[\e[0;33m\] $(($(count_glob c * count_glob h .*)0)) entries and $h are hidden \[\e[0m\]
\[\e[3;36m\]\u\[\e[0;37m\]@\[\e[1;93m\]\h\[\e[0;33m\] \$\[\e[0m\]"
The results of which are below:
Tue Jan 20 @ 18:37:58 >Currently in: 24 entries and are hidden
|
count_glob() {
[ -e "$1" ]
echo "($v=$((!$?*$#)))+"
}
You could declare a function like the above. Then instead of ls and the rest you could just do...
...Currently in: $(($(
v=c count_glob *
v=h count_glob .*
)-2)) entries and $((h-2)) are hidden...
I only removed the escape sequences because they're not relevant here - it will work as well with them.
So all together now...
export PS1='\[\e[2;37m\]\d \[\e[2;37m\] @ \[\e[2;37m\] \t \[\e[2;33m\]>'\
'Currently in: \[\e[0;33m\] $(($(
v=c count_glob *
v=h count_glob .*
)-2)) entries and $((h-2)) are hidden '\
'\[\e[3;36m\]\u\[\e[0;37m\]@\[\e[1;93m\]\h\[\e[0;33m\] \$\[\e[0m\]'
Ok, so what's going on here is the count_glob function is provided an argument list of all of the (hidden or not) files in the current directory. The special parameter $# represents the total count of a shell's positional parameters - its arguments - and every shell function gets its own set of those.
[ -e "$1" ]
... is a check to verify that the first argument actually exists. This is not strictly necessary in the .* case, because there are always the two . and .. entries to resolve, but for * there is a chance that, if the directory is empty, the glob will not resolve and the literal * will still be passed as the argument. So the function does the check, and the boolean NOT of the test's return status is multiplied by the argument count. This works because the test returns 0 if true and non-zero if false, so multiplying by the inverted status works out to get your count right.
The last factor to consider here is the way the shell handles arithmetic. In most cases you cannot just pass a variable definition out of a subshell in this way so easily - but with an arithmetic eval you can - because it really is an eval in the truest sense. The two calls to count_glob wind up printing a statement that looks like:
$(((c=[num])+(h=[num])+-2))
...and the shell honors and assigns those figures - even for subsequent calls. You can test this at your prompt - do echo "$h" "$c" and you'll get the same values as your prompt reports every time. I suppose that might be useful for other things.
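To see the mechanism outside the prompt, here is the same function exercised in a throwaway directory where the counts are known in advance (3 visible files, 1 hidden one):

```shell
# Same count_glob as above, run against a scratch directory with known contents.
count_glob() {
    [ -e "$1" ]
    echo "($v=$((!$?*$#)))+"
}

cd "$(mktemp -d)"
touch a b c .hidden

# c gets the visible count (3), h the dot-entry count (3, including . and ..).
total=$(($( v=c count_glob * ; v=h count_glob .* )-2))
echo "$total entries, $((h-2)) are hidden"
```

Here v=c count_glob * runs with v temporarily set, so the function's echo emits (c=3)+; the outer $(( )) then evaluates the concatenated string and the assignments land in the calling shell, exactly as described above.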
| Command prompt (PS1) including the number of files in directory (both hidden and regular entries) |
1,465,383,627,000 |
The below script is not working.
cd desktop/quoteUpdate
while true
do
curl-o quotes.txt -s "http://download.finance.yahoo.com/d/quotes.csv?s=goog,aapl&f=sl1"
sed -i '.bak' 's/,/ /g' quotes.txt
echo UPDATED:
date
sleep 10
done
When I try and run the executable I get this error and no txt file is created in my desktop folder.
UPDATED:
Wed 14 Jan 2015 15:33:30 GMT
/Users/chrisdorman/Desktop/quoteUpdate/runNow: line 4: curl-o: command not found
sed: quotes.txt: No such file or directory
|
You need to have spaces in the curl command. You should have curl -o instead of curl-o. If I include the spaces and run the command, I get the quotes.txt file as expected.
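As for the other goal in the title (getting the data back without commas), here is what the script's sed step does, sketched on a fabricated quote line so no network is needed. Note the asker's sed -i '.bak' is the BSD/macOS form; GNU sed fuses the suffix onto -i as shown below:

```shell
# Fabricated stand-in for one line of Yahoo's CSV output.
printf '"AAPL",95.22\n' > /tmp/quotes_demo.txt

sed -i.bak 's/,/ /g' /tmp/quotes_demo.txt   # GNU sed; on macOS: sed -i '.bak' ...
cat /tmp/quotes_demo.txt
```

The original line is kept in /tmp/quotes_demo.txt.bak, and the edited file has the commas replaced by spaces.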
| Using cURL command-line tool on Mac, how does one fetch stock data which comes back *without* commas? [duplicate] |
1,465,383,627,000 |
I have a very important process that maintains a queue in which I store important data. If someone terminates that process, it must first finish processing the data left in the queue and only then terminate itself.
So if a user performs a shutdown, that process will be killed and my data will be lost...
So my question is: is there a command that will make the machine wait, or not allow it to shut down? Or please give an alternative suggestion for handling this situation.
|
If your software does a critical job, you should write a service script and put it under /etc/init.d. When the computer goes down, the system runs that script's stop function, which is where you do the clean-up so you don't mess up your program. This is like Oracle DB: when the system goes down it closes its ports so no further transactions come in, and then saves the information to disk (clean up!).
NOTE: Well-behaved software cleans up its data itself after the stop function in the service script sends it a specific signal (usually 15, SIGTERM).
NOTE: Normal users can't execute the shutdown procedure with commands like reboot -p or shutdown -h now or init 0.
NOTE: Normal users can't kill processes owned by root.
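For completeness, the clean-up pattern those notes describe (catch signal 15, drain, then exit) looks roughly like this inside the program. This is a minimal sketch with a fabricated queue file, not a real init.d script:

```shell
# Worker sketch: on SIGTERM (signal 15), drain the queue before exiting.
queue=$(mktemp)
printf 'job1\njob2\n' > "$queue"

drain() {
    while read -r job; do
        echo "processed $job"
    done < "$queue"
}

trap 'drain; exit 0' TERM

# For this demo we invoke the handler directly instead of sending the signal:
drain
```

An init.d stop function would then just send that signal to the worker's PID (e.g. kill -TERM "$(cat /var/run/myapp.pid)", where the pid file path is hypothetical).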
| Is there any command in linux which will force linux machine to not shut-down |
1,465,383,627,000 |
I use aliases a lot but right now only for use cases like alias i='sudo apt-get install -y'. I often would like to add an alias in the following form:
alias cmd='echo [something] >> /path/to/file' where I would like to substitute [something] with what I enter after the cmd.
I can obviously create a one-line script, save it somewhere and make an alias to that command, but since I only want to substitute one word in a pipe, is there a simpler way to do this?
|
Functions are perfectly suitable for this purpose. For example:
cmd() { echo "$*" >> /path/to/file; }
This is on one line, just like an alias. But it can take parameters.
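A quick demonstration, with a temporary file standing in for the answer's /path/to/file:

```shell
# Scratch file instead of a fixed path, so this runs anywhere.
log=$(mktemp)
cmd() { echo "$*" >> "$log"; }

cmd first note
cmd second note
cat "$log"
```

Each invocation appends its arguments as one line, which is exactly the alias-with-a-parameter behavior the question asks for.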
| How do I make an alias to substitute single word in a piped command? [duplicate] |
1,465,383,627,000 |
I am developing an API in Unix environment for virtual machines. Most of the modules are developed in python. I have few questions on this.
Inside the API I am using absolute path like '/root/virtman/manager/' . Consider running this API in any unix environment , how can I make this absolute path generic to any OS/machine. Or should I have to assume some location where the API will get installed and give that path everywhere?
I read a lot of articles but I didn't get the clear picture,so any hints/suggestions would be helpful.
|
If the path is only pointing to executables you call, you should consider putting links in standard locations during install (/usr/bin/ or /usr/local/bin) and have the executable find out where they were invoked from and then have them derive the path to any data files from that.
You would use the following:
/usr/bin/myprog
/opt/myprog/bin/myprog
/opt/myprog/data/picture_01.img
have /usr/bin/myprog be a link to /opt/myprog/bin/myprog and /opt/myprog/bin/ should not be in your $PATH. Setup the link by doing sudo ln -s /opt/myprog/bin/myprog /usr/bin, and have in /opt/myprog/bin/myprog do:
import sys
import os
base_dir = os.path.realpath(sys.argv[0]).rsplit('/bin/', 1)[0]
To determine /opt/myprog dynamically at run-time
If the python API is based on some included module, make sure that module gets installed in the PYTHONPATH search path of a systems python, then you can just do import yourapimodule in a python executable and use it.
If these are data files that can be installed anywhere, consider having a configuration file that you read and that could be ~/.config/yourapimodule/config.ini, ~/.yourapimodule/config.ini or ~/.yourapimodule.ini.¹ (Instead of .ini you could use other formats like .json, whatever you prefer).
¹ Shameless plug: If you are using Python's argparse to handle commandline arguments, then have a look at the package ruamel.appconfig, that I wrote, it sets up the config for you and allows you to specify defaults in the config file for commandline parsing.
| Maintain the path in installable UnixAPI |
1,412,878,085,000 |
I downloaded Kali Linux from this link. I have a 32-bit OS, so I guessed the 32-bit Kali Linux ISO is the suitable one for me (the website did not clarify what to base the choice of type on).
Then, I installed Kali Linux on USB using 32 image writer.
After that I changed the booting setting to boot from USB and this message appears:
isolinux.bin missing or corrupt
and Kali Linux did not launch .
I found a suggested solution in the link, but because I have not dealt with file systems before, I did not know how to apply it.
|
That was because the download was not complete, although Chrome showed it as complete.
I re-downloaded it with another browser and it works fine now.
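To catch a silently truncated download before writing it to USB, compare the ISO's SHA256 checksum with the one published on the Kali download page. Sketch with a stand-in file (the real filename and the expected hash come from the site):

```shell
# Stand-in for the downloaded ISO.
printf 'pretend iso contents' > /tmp/demo.iso

# Prints a 64-hex-digit hash; a partial download yields a different value
# than the published one.
sha256sum /tmp/demo.iso
```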
| Kali linux throws ' isolinux.bin missing or corrupt ' error |
1,412,878,085,000 |
There is a question about the group, but I couldn't find any question/answer for getting the Apache user.
So, how does one determine the Apache user from the command line?
|
You may try to use the following command-line method to find out your Apache user name:
WWW_USER=`ps axo user,group,comm | egrep '(apache|httpd)' | grep -v ^root | cut -d' ' -f 1 | uniq`
echo Apache user is: $WWW_USER
To get the Apache group, simple change the cut's field number from 1 to 2, see: How to check which Apache group I can use for the web server to write?.
| How to determine Apache user from the command-line? |
1,412,878,085,000 |
I can open my terminal emulator via a keyboard shortcut or through the apps finder that executes the exo-open --launch TerminalEmulator command. My terminal starts and I can cd to any directory and execute any binaries located on any bin directory on my system.
But whenever I launch it by right-clicking any directory on thunar and using the Open terminal here option it sometimes can't find any executable on my local binaries directory (~/.local/bin/). Simply put:
Open terminal via app finder, command launcher, keyboard shortcut, … → It can find local executables.
Open terminal via context menu on Thunar → It sometimes can't find local executables.
This happens on any terminal (xfce4-terminal, xterm, gnome-terminal). My machine is running Fedora 20 XFCE with thunar version 1.6.3-2.
I can't say for sure since when this started happening, because it has been some time, but this became more frequent in recent days. Also, I have to mention that once my terminals can find executables on my local bin directory and I add a new one, it won't find them again, until some time passes - no matter if it was launched via the thunar's context menu or not.
Has anybody noticed this behaviour too? Can somebody shed some light on what's happening here?
Update:
I've noticed that my .bash_profile file is what adds my local bin directory to the $PATH environmental variable:
PATH=$PATH:$HOME/.local/bin:$HOME/bin
export PATH
And when I run a login shell (not started via the context menu), it executes .bashrc and then .bash_profile, so I proceeded to move those two lines from my .bash_profile to my .bashrc and now everything works fine.
So the question now is: why does the context menu command (which is the same as the normal command) somehow make my terminal to be launched as only interactive and not as a login terminal?
|
I would say you're running into a good old classic fight: to ~/.bashrc or to ~/.profile.
Check your $PATH in both.
Read and understand https://stackoverflow.com/questions/415403/whats-the-difference-between-bashrc-bash-profile-and-environment. It may answer your question.
Basically, you're logging in when you launch a terminal emulator, but not when you launch from Thunar. This creates a different environment. There is no "right answer" to fix it (it's a lot like vim vs. nano), but I usually just source a common file in ALL of them to set up my environment.
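To sketch that "source a common file" idea concretely (the name ~/.shellenv is just a convention I made up; the demo uses a temp file so it is runnable anywhere):

```shell
# The shared file holds the PATH setup once...
envfile=$(mktemp)   # stands in for something like ~/.shellenv
cat > "$envfile" <<'EOF'
PATH=$PATH:$HOME/.local/bin:$HOME/bin
export PATH
EOF

# ...and then both ~/.bash_profile and ~/.bashrc contain only:  . ~/.shellenv
. "$envfile"
echo "$PATH"
```

With that, login and non-login shells get the same PATH regardless of which startup file the terminal happens to read.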
| Terminal sometimes fails to find executables on local directory [duplicate] |
1,412,878,085,000 |
I want to get list of URLs that contains vimeo.com from a web site recursively by a command , so that I can pipe it to vimeo_downloader.sh.
I prefer to use wget, but also I'm happy with other options.
Example
index.html
<a href="01.html">01</a>
<a href="02.html">02</a>
<a href="03.html">03</a>
<a href="04.html">04</a>
<a href="05.html">05</a>
<a href="06.html">06</a>
01.html
...
... src="//player.vimeo.com/video/xxxxxxxxxx?api=1" ...
...
Likewise 02.html to 06.html have a vimeo's URL.
How to get all vimeo URLs from 01~06.html?
|
You need to get the list of URLs, then parse out the links to feed to the downloader. As you are using an external program to do the downloading rather than wget, you don't really need wget's recursive download options.
Assuming GNU grep, which allows you to print only the matching text, you can grab the vimeo URLs (in the protocol-relative player form your pages use) with:
wget -q -O - -i urllist.txt | grep -oi "//player\.vimeo\.com/video/[0-9]\+"
Then to feed that into the downloader:
urls=$(wget -q -O - -i urllist.txt | grep -oi "//player\.vimeo\.com/video/[0-9]\+")
for url in $urls; do
    echo "Downloading [$url]"
    vimeo_downloader.sh "http:$url"
done
| How to get list of urls from a URL recursively with filtering |
1,412,878,085,000 |
I want to alter a stored procedure on my server machine. I'm uploading the code via an SSH Linux command prompt, and I need to alter an existing stored procedure on the server. I don't have cPanel or phpMyAdmin access, so I have to update it from the command prompt.
stored procedure
DELIMITER $$
USE `dbname`$$
DROP PROCEDURE IF EXISTS `add_data`$$
CREATE DEFINER=`root`@`localhost` PROCEDURE `add_data`(
f_id INT(11),
f_guild_parent_id INT(11),OUT lastid INT
)
BEGIN
INSERT INTO tablename
(
id,
character_detail_id,
media_id,
)
VALUES
(
f_id,
f_character_detail_id,
f_media_id,
NOW() ,
NOW()
);
SET lastid = LAST_INSERT_ID();
END$$
DELIMITER ;
I need the command to perform this. Can any one give me the command with an example?
|
If you have console access to your database you can load this file via the command line like this:
$ mysql < /path/to/text_file.sql
Where text_file is the name of the above file. If you're already at the mysql> prompt you can source the text_file using either of these methods too:
mysql> source /path/to/text_file.sql
-or-
mysql> \. /path/to/text_file.sql
You might need to pass a username + password to mysql to "connect" to the database. This can be done like so:
$ mysql -u <user> -p < /path/to/text_file.sql
You'll be prompted for the password when you run the above command, after providing it, hit Enter and your .sql command file should execute.
References
4.5.1.5. Executing SQL Statements from a Text File
3.5. Using mysql in Batch Mode
| How to alter stored procedure in a MySQL database using the Linux command prompt? |
1,412,878,085,000 |
I want to be able to use notify-send to send popup messages from one server to be displayed on another.
I'm sure this is possible with SSH, but how can I automate it to a one-line command that doesn't ask me for a password so I can include it into a script?
|
One way of doing this would be to use key-based authentication for the ssh connection.
On the sending computer, create a public/private ssh keypair:
ssh-keygen -f .ssh/notify-key -C "notify-send SSH key" -b 2048 -t rsa
The program will ask you for a passphrase; simply hit enter twice to create an SSH key without a passphrase.
This will generate a pair of files, notify-key and notify-key.pub. Copy the public key over to the receiving server via ssh-copy-id -i .ssh/notify-key foo@receiver
NOTE This will allow anyone who has access to notify-key to log into receiver with your user credentials and no password checking, which is a little more access than you probably want, so let's close it up a bit.
On receiver, edit ~foo/.ssh/authorized_keys ; you'll see a line saying something like
ssh-rsa AAAAB3NzaC1kc3MAAACBAP6Mmqm+ylUEQa+NRa {.......} MuieClE1nhb33EgQ== notify-send SSH key
To tighten this up, alter the line to add the following:
no-pty,no-port-forwarding,no-X11-forwarding,from="sender" ssh-rsa AAAAB3Nz...
This will prevent this particular key for being used for port forwarding, X11 forwarding, PTYs (making a useful login shell), or being used from anywhere other than sender.
Once you have this, you should be able to run the command
ssh -i ~/.ssh/notify-key foo@receiver notify-send "Can you see this?"
and use it in scripts as long as the user ID running the script has read access to notify-key. I recommend you set its access rights to 0600 and stow it somewhere secure.
| Using notify-send with a non-interactive ssh connection |
1,412,878,085,000 |
I don't have permission to mount a smb share with the mount command or by using /etc/fstab, but I'm able to use the smb protocol in Nautilus (smb://10.1.1.1/share), for example ...
Is it possible to grep a file or use any command in it in these conditions like I do in local files?
I'm running openSUSE 13.1 with LXDE.
$ uname -a
Linux thom 3.11.6-4-desktop #1 SMP PREEMPT Wed Oct 30 18:04:56 UTC 2013 (e6d4a27) x86_64 x86_64 x86_64 GNU/Linux
$ mount
devtmpfs on /dev type devtmpfs (rw,relatime,size=944020k,nr_inodes=236005,mode=755)
tmpfs on /dev/shm type tmpfs (rw,relatime)
tmpfs on /run type tmpfs (rw,nosuid,nodev,relatime,mode=755)
devpts on /dev/pts type devpts (rw,relatime,gid=5,mode=620,ptmxmode=000)
/dev/sda1 on / type ext4 (rw,relatime,data=ordered)
proc on /proc type proc (rw,relatime)
sysfs on /sys type sysfs (rw,relatime)
tmpfs on /sys/fs/cgroup type tmpfs (rw,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
tmpfs on /var/run type tmpfs (rw,nosuid,nodev,relatime,mode=755)
/dev/sda2 on /home type ext4 (rw,relatime,data=ordered)
$ ls -a ~/.gvfs
ls: cannot access /home/thom/.gvfs: No such file or directory
|
Nautilus uses GVFS internally. GVFS itself only caters for applications that use the Glib library to access files.
Make sure that you have the gvfs-fuse package installed. This package contains the program gvfs-fuse-daemon which makes all GVFS filesystems available as normal mounted filesystems.
gvfs-fuse-daemon should start automatically when you first access a GVFS system, for example when you connect to a remote Samba drive in Nautilus. If it doesn't, try doing
mkdir -p ~/.gvfs
/usr/lib/gvfs/gvfs-fuse-daemon ~/.gvfs
GVFS filesystems are all mounted in subdirectories of ~/.gvfs. Some recent distributions use /run/user/500/gvfs instead, where 500 is your user id (you can display it with id -u).
You can even do the mounting on the command line:
gvfs-mount smb://10.1.1.1/share
| How do I grep a file in a smb mount point without using mount or fstab? |
1,412,878,085,000 |
I am using Debian 7.2 and would like to know the shell command for finding the MAC time of a file or directory. I tried
man -k "MAC"
and got a lot of hits about macros. I then tried
man -k "MAC time"
and got nothing.
|
By MAC I assume you're asking about Modify, Access, and Change timestamps. You can get these from the stat command.
Example
$ ls -l LICENSE
-rw-rw-r-- 1 saml saml 810 Jul 5 2012 LICENSE
$ stat LICENSE
File: `LICENSE'
Size: 810 Blocks: 8 IO Block: 4096 regular file
Device: fd02h/64770d Inode: 11409231 Links: 1
Access: (0664/-rw-rw-r--) Uid: ( 500/ saml) Gid: ( 501/ saml)
Access: 2012-07-05 19:05:22.000000000 -0400
Modify: 2012-07-05 19:05:22.000000000 -0400
Change: 2013-10-16 21:17:31.148341667 -0400
Details
Access: 2012-07-05 19:05:22.000000000 -0400 – Last access time of the file.
Modify: 2012-07-05 19:05:22.000000000 -0400 – Last modification time of the file.
Change: 2013-10-16 21:17:31.148341667 -0400 – Last change time of the inode data of that file.
| How can I find the Modify/Access/Change time of a file or directory? [closed] |
1,412,878,085,000 |
I wrote a little script to send multiple mails from a list with differents subjects.
#!/bin/bash
while read -a line
do
mailadd=${line[0]}
subject=${line[1]}
mutt -s `echo $subject` `echo $mailadd` < /home/emanuele/testomail.txt
done < prova.txt
The scripts works fine and sends the mails, but mutt tell me that he needs a -s option.
mutt: option requires an argument -- 's'
Mutt 1.5.21 (2010-09-15)
I don't understand why the script works fine but mutt exit with an error. How can I fix it?
|
First, `echo $subject` is a convoluted way of writing $subject (except that it mangles the value a bit more if it contains whitespace or \[*?, because $subject outside quotes is treated as a whitespace-separated list of wildcard patterns, and then the result of the whole command substitution is again treated as a whitespace-separated list of wildcard patterns).
The only way for your command to result in this error is if `echo $subject` `echo $mailadd` is empty. This happens only when both `echo $subject` and `echo $mailadd` consist only of whitespace, which in turn happens only if both the variables subject and mailadd are empty (plus a few oddball cases, such as subject being the character ? and the current directory containing a file whose name is a single space). So most likely you have some blank lines in your input file.
You should always put double quotes around variable substitutions and command substitutions ("$somevar", "$(somecommand)") unless you really mean the values of the variables to be interpreted as whitespace-separated lists of file wildcard patterns.
mutt -s "$subject" "$mailadd" < /home/emanuele/testomail.txt
If there's a blank line in the input file, skip it.
while read -a line; do
mailadd=${line[0]}
subject=${line[1]}
if [ -z "$subject" ]; then continue; fi
mutt -s "$subject" "$mailadd" < /home/emanuele/testomail.txt
done < prova.txt
| mutt weird action |
1,412,878,085,000 |
I am trying to setup real vnc server on my RHEL via command line.
I have done the following steps
Downloaded the Real VNC installer from Real VNC for Red Hat 64 bit system
Unizipped it to get to rpms VNC-Server-5.0.5-Linux-x64.rpm,VNC-Viewer-5.0.5-Linux-x64.rpm
The documentation did not have any command line installation instructions
Followed the instructions given in this forum
rpm VNC-Server-5.0.5-Linux-x64.rpm -i
vnclicense -add <KEY>
netstat -an | more
tcp 0 0 0.0.0.0:5901 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:5902 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:5903 0.0.0.0:* LISTEN
On windows 7 I installed RealVNC Viewer and tried connecting to the server, I get
an error promt connect: Connection refused (10061)
|
You need to start a VNC session from your Linux account to be able to connect to it: run vncserver from the command line of your Linux system. After you issue this command it will tell you the session ID to connect to. Here is an example:
[root@systemname]# vncserver
New 'systemname:1 (username)' desktop is systemname:1
Starting applications specified in /username/.vnc/xstartup
Log file is /username/.vnc/servername:1.log
If the above task is already covered, then you may need to make sure that the firewall on both the Linux side and the Windows side is open for the specified port (5901, 5902, ...) corresponding to the connection ID you are trying to connect to. If you are using SELinux, you also need to make sure it is configured to allow your VNC session.
| Configuring Real VNC on RHEL 6.3 Command Line |
1,412,878,085,000 |
I am trying to execute date command in unix server for yesterday. The commands tried are :
date --date ="1 day ago"
date --date ="1 days ago"
date --date ="yesterday
date --date ="-1 day"
These command work in a server but the same command does not work in few other servers, where date prints properly the current date.
Could anyone suggest what could be the issue with the other servers?
The server details:
SunOS wupsa02a0014 5.10 Generic_147440-15 sun4u sparc SUNW,SPARC-Enterprise
|
Either remove the = or the space after --date, and change those Unicode quotes (U+201D) to the ASCII quote character (U+0022). So:
date --date="1 day ago"
or
date --date yesterday
or
date -d yesterday
Note that -d/--date is not a standard Unix date option and is only available with GNU date. So if that Unix server is not a Linux distribution or other GNU based system, you'll have to install GNU date there or use alternative options for date calculation.
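If the server does have GNU date, you can also combine the relative date with an output format, which is often what scripts actually want:

```shell
# Yesterday's date in ISO form; +FORMAT is standard, -d is the GNU extension.
date -d yesterday +%Y-%m-%d
```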
| Unix Date command not working for few servers |
1,412,878,085,000 |
I would like to get netstat to not display port numbers on the foreign address so I can run some statistics on it. This is for a FreeBSD system.
The following is an example of the output.
<root>:/# netstat -an | grep .80 |head
tcp4 0 0 61.129.65.176.80 123.120.207.172.51972 ESTABLISHED
tcp4 491 0 61.129.65.176.80 171.250.180.211.51000 ESTABLISHED
tcp4 286 0 61.129.65.176.80 123.120.207.172.10399 ESTABLISHED
tcp4 299 0 61.129.65.176.80 211.8.128.46.35458 ESTABLISHED
tcp4 0 0 61.129.65.176.80 123.139.147.112.62778 ESTABLISHED
tcp4 361 0 61.129.65.176.80 239.187.139.47.17607 ESTABLISHED
tcp4 509 0 61.129.65.176.80 74.74.87.36.7822 ESTABLISHED
tcp4 324 0 61.129.65.176.80 75.30.126.198.60721 ESTABLISHED
tcp4 508 0 61.129.65.176.80 149.78.116.66.12120 ESTABLISHED
tcp4 321 0 61.129.65.176.80 48.150.75.171.2617 ESTABLISHED
<root>:/#
|
Add this sed command at the end of your pipe. It does a greedy match up to the last . and deletes that dot and all the digits that follow it.
... | sed -e 's/^\(.*\)\.[0-9]*/\1/'
It yields:
tcp4 0 0 61.129.65.176.80 123.120.207.172 ESTABLISHED
tcp4 491 0 61.129.65.176.80 171.250.180.211 ESTABLISHED
tcp4 286 0 61.129.65.176.80 123.120.207.172 ESTABLISHED
tcp4 299 0 61.129.65.176.80 211.8.128.46 ESTABLISHED
tcp4 0 0 61.129.65.176.80 123.139.147.112 ESTABLISHED
tcp4 361 0 61.129.65.176.80 239.187.139.47 ESTABLISHED
tcp4 509 0 61.129.65.176.80 74.74.87.36 ESTABLISHED
tcp4 324 0 61.129.65.176.80 75.30.126.198 ESTABLISHED
tcp4 508 0 61.129.65.176.80 149.78.116.66 ESTABLISHED
tcp4 321 0 61.129.65.176.80 48.150.75.171 ESTABLISHED
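You can sanity-check the expression on a single pasted line before putting it at the end of the pipe (single spaces here for readability; real netstat output is column-aligned):

```shell
# One sample line: the trailing .51972 port should be stripped, nothing else.
line='tcp4 0 0 61.129.65.176.80 123.120.207.172.51972 ESTABLISHED'
printf '%s\n' "$line" | sed -e 's/^\(.*\)\.[0-9]*/\1/'
```

Because the greedy .* reaches the last dot on the line, the local address column (with its own .80 port) is left alone and only the foreign address loses its port.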
| Have netstat not display port numbers for foreign address |
1,412,878,085,000 |
I have a shell script to run a JMeter test. The script generates the JMeter log output and is supposed to create a sar file, though it does not. The shell script is:
runtest()
{
export JMETER_HOME=/home/software/apache-jmeter-2.6
host=$1
port=$2
loopcount=$3
logfile=jmeter$(date -d "today" +"%Y%m%d%H%M%S").jtl
sarfile=sar$(date -d "today" +"%Y%m%d%H%M%S").dat
sar -o $sar_file 3 100000 >/dev/null 2>&1 &
sar_pid=$!;
echo 'sar started with pid' $sar_pid;
for ((start=1; start <= $loopcount; start++ ))
do
echo 'Iteration' $start;
$JMETER_HOME/bin/jmeter.sh -n -t ReportWebService.jmx -Jhost=$host -Jport=$port -l $logfile
done
sleep 2
echo 'killing sar process id' $sar_pid
kill -9 $sar_pid;
}
runtest localhost 8087 1
When I execute the shell script, I eventually encounter error -
sar started with pid 13191
Iteration 1
Created the tree successfully using ReportWebService.jmx
Starting the test @ Tue Apr 17 11:16:24 IST 2012 (1334641584914)
Waiting for possible shutdown message on port 4445
Tidying up ... @ Tue Apr 17 11:16:25 IST 2012 (1334641585129)
... end of run
killing sar process id 13191
./temp.sh: line 22: kill: (13191) - No such process
And I don't see any ".dat" file generated for sar data. What is wrong with this shell script?
|
There's a typo in your question: you set sarfile but use sar_file, which is probably causing your sar command to exit with an error.
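As a side note, this class of typo is caught immediately if the script is run with set -u (or bash -u), which turns a reference to an unset variable into an error instead of a silent empty expansion. A small sketch:

```shell
sarfile=sar.dat   # what the script actually sets

# Under set -u, reading the misspelled name fails instead of expanding empty:
( set -u; : "$sar_file" ) 2>/dev/null || echo "caught: sar_file is unset"
```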
| Unable to kill sar process |
1,412,878,085,000 |
I have had gnupod for a while. Still haven't figured out how to use it though. I usually just go for gtkpod. So this question, in case anyone had any doubts after reading everything below, is purely about gnupod syntax.
Recently I decided to get my ipod replaced under this replacement scheme. (I don't like proprietary hardware, etc., but if someone is going to give me a brand new piece of hardware... for something that was originally a gift... hmm...)
Anyway, long story short, I backed up my old ipod using dd:
dd if=/dev/sdb2 of=ipodbk
and just tried running gtkpod (with ipodbk mounted as -t vfat -o loop) to see if it would pick it up, like it does when an ipod is attached, which it does.
But this question is not about that.
I don't want to waste time with gtkpod's bugginess when it comes to exporting an entire ipod's files (tried it before. ended up with a lot of 0 byte files... no thanks).
I just want to be able to export all the tracks from the disk image in one go.
The command probably looks something like
gnupod_something (some switches) -m /home/user/ipodbkmount/ (some stuff?)
and the output probably looks something like
(window flooded with 1000 lines of text)
but I've never really been able to get gnupod to work. I'm sure it's a great program, and I have to usually invoke it as gnupod_check -m ipod --fixit once in a while when gtkpod loses track of what it's doing and makes errors when it writes files to the device. As for gnupod, I'm just not intelligent enough to figure out its syntax on the more routine ipod manipulations, I think.
So, yeah, what's that command?
|
First, gnupod does not work directly with the iPod database, you need to convert from and to the gnupod database, so you want to run tunes2pod.pl first. This only needs to be done when you change the iTunesDB (iPod's own database) directly — if you use gnupod again without using other tool, you don't need to run this again.
(Likewise, if you use gnupod to change the database, you need to run mktunes.pl, which updates the iTunesDB.)
When you already have the gnupod format database (which is stored in the iPod just like the iTunesDB):
If what you want is to get the track information, no files, you want to use gnupod_search.pl (the gnupod tool to search the db and edit entries).
Have a look at the manpage, because you can tune what it shows, including the path to the files themselves (but from your comments it seems you don't need this).
IIRC, running it with no arguments (other than the mountpoint and related iPod configuration stuff) dumps the whole list of entries.
If you want to search, there are several arguments that go like --title=the-string, --artist=the-string. title and artist are "keys", and you can use --rename="key=value" to change the value associated to that key in the matched entries. There's also --delete that, well, deletes whatever matches the search parameters.
Oh, and, for a quick introduction to an iPod manager, what else... oh, right, add songs: gnupod_addsong.pl, it takes the names of the media files and copies these to the iPod. Has several options (e.g. to override the title) and, if you have the required dependencies, it may even be able to convert other formats such as ogg (although you probably need to specify --decode=format, where the format can be, among others, mp3).
(If you end up using gnupod as your iPod manager, you can put the mountpoint and model information in ~/.gnupodrc to save some typing.)
| Backed an ipod up with dd, how to retrieve all tracks in one go with gnupod? |
1,412,878,085,000 |
Many laptops are delivered with a Windows 7 recovery partition (but without a CD or DVD), which takes up space on the hard drive that could hold some Linux distribution or serve for data storage.
I don't want to format it, because my sister paid for this partition and has a license key. But she would like to back it up to some medium and use it when she needs to restore the system.
Is there a command in Linux that will backup this recovery partition or make a bootable DVD from this partition?
Note: The content of the partition is not so important. Important is to make a backup of this partition to some medium (best would be CD or DVD) and show how to restore from that medium.
|
Using tar and gzip is probably your best bet. You could use dd to do a block-by-block copy of it but this will obviously give you a file exactly the same size as the partition.
Assuming the partition is /dev/sda2, something like:-
mkdir /mnt/recovery
mount -t ntfs /dev/sda2 /mnt/recovery
cd /mnt/recovery
tar -cvf - . | gzip -c >/path/to/store/recovery.tar.gz
should back it up, and:-
mkfs.ntfs /dev/sda2
mount -t ntfs /dev/sda2 /mnt/recovery
cd /mnt/recovery
tar xvf /path/to/store/recovery.tar.gz
to put it all back again.
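For reference, the dd route mentioned at the top, demonstrated on a scratch file rather than a real partition (substitute /dev/sda2 and a real image path on the actual machine):

```shell
src=$(mktemp)   # stands in for /dev/sda2
img=$(mktemp)   # stands in for /path/to/store/recovery.img
printf 'pretend NTFS partition data' > "$src"

dd if="$src" of="$img" bs=4M 2>/dev/null   # backup: exact block-by-block copy
cmp -s "$src" "$img" && echo "byte-for-byte identical"
# restoring would be the reverse: dd if=recovery.img of=/dev/sda2 bs=4M
```

Unlike the tar approach, the image is always the full partition size, but it preserves the partition exactly, boot sector included.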
| Create recovery medium from Windows 7 recovery partition |
1,412,878,085,000 |
I have a text file of a database dump with some line break characters (0x0A0x0D) in the middle of lines. I want to replace them with commas, but I can't do it simply, because those characters are the actual line break characters where I do want line breaks!
But I noticed that the line break sequences I want to keep are surrounded by space characters ( 0x20), so I was thinking of a regex to find and replace any 0x0A0x0D sequence without a leading or trailing space.
How can I do this?
|
The regex for a whitespace character is, of course, \s. However, since you want a non-whitespace character, you can use \S! Therefore, your regex to replace would be \S\n\r\S.
EDIT:
#!/usr/bin/perl
use strict; use warnings;
my $pattern = "xxxxxxxxxxxxxxxxxxxy\n\ryxxxxxxxxxxxxxxxxxxx \n\r xxxxxxxxxxxxxxxxxxxy\n\ryxxxxxxxxxxxxxxxxxxx";
$pattern =~ s/(\S)(\n\r)(\S)/$1$3/g;
print "$pattern\n";
exit;
result:
xxxxxxxxxxxxxxxxxxxyyxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxyyxxxxxxxxxxxxxxxxxxx
I changed the regex to replace with $1$3 so you retain the characters that \S matches.
| regex find and replace 0x0D, 0x0A characters |
1,412,878,085,000 |
In vim I use the following command to compile a tex file:
pdflatex\ \-file\-line\-error\ \-shell\-escape\ \-interaction=nonstopmode\ $*\\\|\ grep\ \-P\ ':\\d{1,5}:\ '
this works in terms of getting the errors into a quick fix window (if you don't use vim, ignore this sentence). The only problem is that I would like to see the latex output coming on the screen while the document is compiling (now grep swallows all the output).
what should I change to make this happen? I already tried piping it all to tail, yet to no avail.
This is pdfTeX, Version 3.1415926-2.3-1.40.12 (TeX Live 2011) (format=preamble 2011.10.2) 3 OCT 2011 23:16
entering extended mode
\write18 enabled.
file:line:error style messages enabled.
%&-line parsing enabled.
**main.tex
(./main.tex
LaTeX2e <2011/06/27>
Babel <v3.8m> and hyphenation patterns for english, dumylang, nohyphenation, ge
rman-x-2011-07-01, ngerman-x-2011-07-01, afrikaans, ancientgreek, ibycus, arabi
c, armenian, basque, bulgarian, catalan, pinyin, coptic, croatian, czech, danis
h, dutch, ukenglish, usenglishmax, esperanto, estonian, ethiopic, farsi, finnis
h, french, galician, german, ngerman, swissgerman, monogreek, greek, hungarian,
icelandic, assamese, bengali, gujarati, hindi, kannada, malayalam, marathi, or
iya, panjabi, tamil, telugu, indonesian, interlingua, irish, italian, kurmanji,
lao, latin, latvian, lithuanian, mongolian, mongolianlmc, bokmal, nynorsk, pol
ish, portuguese, romanian, russian, sanskrit, serbian, serbianc, slovak, sloven
ian, spanish, swedish, turkish, turkmen, ukrainian, uppersorbian, welsh, loaded
.
PRECOMILED PREAMBLE LOADED
LaTeX Warning: Overwriting file `./main.bib'.
\openout15 = `main.bib'.
\openout4 = `main.auxlock'.
Package biblatex Info: Trying to load language 'english'...
Package biblatex Info: ... file 'english.lbx' found.
(/usr/local/texlive/2011/texmf-dist/tex/latex/biblatex/lbx/english.lbx
File: english.lbx 2011/07/29 v1.6 biblatex localization
)
Package biblatex Warning: 'babel' detected but 'csquotes' missing.
(biblatex) Loading 'csquotes' recommended.
\@quotelevel=\count451
\@quotereset=\count452
(./main.aux)
\openout1 = `main.aux'.
LaTeX Font Info: Checking defaults for OML/cmm/m/it on input line 42.
LaTeX Font Info: ... okay on input line 42.
LaTeX Font Info: Checking defaults for T1/cmr/m/n on input line 42.
LaTeX Font Info: ... okay on input line 42.
LaTeX Font Info: Checking defaults for OT1/cmr/m/n on input line 42.
LaTeX Font Info: ... okay on input line 42.
LaTeX Font Info: Checking defaults for OMS/cmsy/m/n on input line 42.
LaTeX Font Info: ... okay on input line 42.
LaTeX Font Info: Checking defaults for OMX/cmex/m/n on input line 42.
LaTeX Font Info: ... okay on input line 42.
LaTeX Font Info: Checking defaults for U/cmr/m/n on input line 42.
LaTeX Font Info: ... okay on input line 42.
LaTeX Font Info: Checking defaults for PD1/pdf/m/n on input line 42.
LaTeX Font Info: ... okay on input line 42.
LaTeX Font Info: Checking defaults for TS1/cmr/m/n on input line 42.
LaTeX Font Info: Try loading font information for TS1+cmr on input line 42.
(/usr/local/texlive/2011/texmf-dist/tex/latex/base/ts1cmr.fd
File: ts1cmr.fd 1999/05/25 v2.5h Standard LaTeX font definitions
)
LaTeX Font Info: ... okay on input line 42.
Package caption Info: Begin \AtBeginDocument code.
Package caption Info: subfig package 1.2 or 1.3 is loaded.
Package caption Info: float package is loaded.
Package caption Info: hyperref package is loaded.
Package caption Info: wrapfig package is loaded.
Package caption Info: End \AtBeginDocument code.
(/usr/local/texlive/2011/texmf-dist/tex/context/base/supp-pdf.mkii
[Loading MPS to PDF converter (version 2006.09.02).]
\scratchcounter=\count453
\scratchdimen=\dimen319
\scratchbox=\box86
\nofMPsegments=\count454
\nofMParguments=\count455
\everyMPshowfont=\toks48
\MPscratchCnt=\count456
\MPscratchDim=\dimen320
\MPnumerator=\count457
\makeMPintoPDFobject=\count458
\everyMPtoPDFconversion=\toks49
) (/usr/local/texlive/2011/texmf-dist/tex/latex/oberdiek/epstopdf-base.sty
Package: epstopdf-base 2010/02/09 v2.5 Base part for package epstopdf
(/usr/local/texlive/2011/texmf-dist/tex/latex/oberdiek/grfext.sty
Package: grfext 2010/08/19 v1.1 Managing graphics extensions (HO)
)
Package grfext Info: Graphics extension search list:
(grfext) [.png,.pdf,.jpg,.mps,.jpeg,.jbig2,.jb2,.PNG,.PDF,.JPG,.JPE
G,.JBIG2,.JB2,.eps]
(grfext) \AppendGraphicsExtensions on input line 452.
(/usr/local/texlive/2011/texmf-dist/tex/latex/latexconfig/epstopdf-sys.cfg
File: epstopdf-sys.cfg 2010/07/13 v1.3 Configuration of (r)epstopdf for TeX Liv
e
))
Package biblatex Info: No input encoding detected.
(biblatex) Assuming 'ascii'.
Package biblatex Info: Automatic encoding selection.
(biblatex) Assuming data encoding 'ascii'.
\openout3 = `preamble-blx.bib'.
Package biblatex Info: Trying to load bibliographic data...
Package biblatex Info: ... file 'main.bbl' not found.
No file main.bbl.
Package biblatex Info: Reference section=0 on input line 42.
Package biblatex Info: Reference segment=0 on input line 42.
ABD: EveryShipout initializing macros
\AtBeginShipoutBox=\box87
Package hyperref Info: Link coloring ON on input line 42.
(/usr/local/texlive/2011/texmf-dist/tex/latex/hyperref/nameref.sty
Package: nameref 2010/04/30 v2.40 Cross-referencing by name of section
(/usr/local/texlive/2011/texmf-dist/tex/generic/oberdiek/gettitlestring.sty
Package: gettitlestring 2010/12/03 v1.4 Cleanup title references (HO)
)
\c@section@level=\count459
)
LaTeX Info: Redefining \ref on input line 42.
LaTeX Info: Redefining \pageref on input line 42.
LaTeX Info: Redefining \nameref on input line 42.
(./main.out) (./main.out)
\@outlinefile=\write8
\openout8 = `main.out'.
(/usr/local/texlive/2011/texmf-dist/tex/latex/beamer/translator/dicts/translato
r-basic-dictionary/translator-basic-dictionary-English.dict
Dictionary: translator-basic-dictionary, Language: English
)
(/usr/local/texlive/2011/texmf-dist/tex/latex/siunitx/config/siunitx-abbreviati
ons.cfg
File: siunitx-abbreviations.cfg 2011/09/13 v2.3f siunitx: Abbreviated units
)
(/usr/local/texlive/2011/texmf-dist/tex/latex/siunitx/config/siunitx-binary.cfg
File: siunitx-binary.cfg 2011/09/13 v2.3f siunitx: Binary units
)
LaTeX Info: Redefining \microtypecontext on input line 42.
Package microtype Info: Generating PDF output.
Package microtype Info: Character protrusion enabled (level 2).
Package microtype Info: Using default protrusion set `alltext'.
Package microtype Info: Automatic font expansion enabled (level 2),
(microtype) stretch: 20, shrink: 20, step: 1, non-selected.
Package microtype Info: Using default expansion set `basictext'.
Package microtype Info: No tracking.
Package microtype Info: No adjustment of interword spacing.
Package microtype Info: No adjustment of character kerning.
Package microtype Info: Redefining babel's language switching commands.
(/usr/local/texlive/2011/texmf-dist/tex/latex/microtype/mt-cmr.cfg
File: mt-cmr.cfg 2009/11/09 v2.0 microtype config. file: Computer Modern Roman
(RS)
)
\c_siunitx_mathsf_int=\count460
LaTeX Font Info: Try loading font information for U+msa on input line 42.
(/usr/local/texlive/2011/texmf-dist/tex/latex/amsfonts/umsa.fd
File: umsa.fd 2009/06/22 v3.00 AMS symbols A
)
(/usr/local/texlive/2011/texmf-dist/tex/latex/microtype/mt-msa.cfg
File: mt-msa.cfg 2006/02/04 v1.1 microtype config. file: AMS symbols (a) (RS)
)
LaTeX Font Info: Try loading font information for U+msb on input line 42.
(/usr/local/texlive/2011/texmf-dist/tex/latex/amsfonts/umsb.fd
File: umsb.fd 2009/06/22 v3.00 AMS symbols B
)
(/usr/local/texlive/2011/texmf-dist/tex/latex/microtype/mt-msb.cfg
File: mt-msb.cfg 2005/06/01 v1.0 microtype config. file: AMS symbols (b) (RS)
)
LaTeX Font Info: Try loading font information for U+esint on input line 42.
(/usr/local/texlive/2011/texmf-dist/tex/latex/esint/uesint.fd
File: uesint.fd
)
LaTeX Font Info: Try loading font information for U+rsfs on input line 42.
(/usr/local/texlive/2011/texmf-dist/tex/latex/jknapltx/ursfs.fd
File: ursfs.fd 1998/03/24 rsfs font definition file (jk)
)
\c_siunitx_mathtt_int=\count461
./main.tex:47: Undefined control sequence.
l.47 \akaka
The control sequence at the end of the top line
of your error message was never \def'ed. If you have
misspelled it (e.g., `\hobx'), type `I' and the correct
spelling (e.g., `I\hbox'). Otherwise just continue,
and I'll forget about whatever was undefined.
Package atveryend Info: Empty hook `BeforeClearDocument' on input line 51.
[1{/usr/local/texlive/2011/texmf-var/fonts/map/pdftex/updmap/pdftex.map}]
Package atveryend Info: Empty hook `AfterLastShipout' on input line 51.
(./main.aux)
Package atveryend Info: Executing hook `AtVeryEndDocument' on input line 51.
Package atveryend Info: Executing hook `AtEndAfterFileList' on input line 51.
Package rerunfilecheck Info: File `main.out' has not changed.
(rerunfilecheck) Checksum: D41D8CD98F00B204E9800998ECF8427E;0.
Package logreq Info: Writing requests to 'main.run.xml'.
\openout1 = `main.run.xml'.
)
Here is how much of TeX's memory you used:
2167 strings out of 455899
40306 string characters out of 2353312
1131909 words of memory out of 3000000
42382 multiletter control sequences out of 15000+200000
26633 words of font info for 111 fonts, out of 3000000 for 9000
831 hyphenation exceptions out of 8191
36i,6n,45p,773b,1377s stack positions out of 5000i,500n,10000p,200000b,50000s
{/usr/local/texlive/2011/texmf-dist/fonts/enc/dvips/cm-super/cm-super-t1.enc}
</usr/local/texlive/2011/texmf-dist/fonts/type1/public/cm-super/sfrm1000.pfb></
usr/local/texlive/2011/texmf-dist/fonts/type1/public/cm-super/sfrm1200.pfb></us
r/local/texlive/2011/texmf-dist/fonts/type1/public/cm-super/sfrm1728.pfb>
Output written on main.pdf (1 page, 19387 bytes).
PDF statistics:
29 PDF objects out of 1000 (max. 8388607)
22 compressed objects within 1 object stream
2 named destinations out of 1000 (max. 500000)
23053 words of extra memory for PDF output out of 24883 (max. 10000000)
Vim should only display:
main.tex l.47 undefined control sequence
Yet I like seeing the output so that I have some feedback about which page is being compiled etc...
|
Have you tried using errorformat instead of grep'ing the output? C.f. http://vim.wikia.com/wiki/Errorformats. It is specially useful if you set up the make command ( http://vim.wikia.com/wiki/Make_make_more_helpful ).
Thanks for the updated output, romeovs. It sounds like you wish to have something like:
set errorformat=%E%f:%l:\ %m%C1.%l\ %Z
I can't test this, but based on the output, it seems to be what you want.
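If you would rather keep grep in the pipeline, another option is to duplicate the stream with tee, so the full log stays visible while only the file:line: messages travel on to the quickfix parser. A minimal sketch of the mechanism, with printf standing in for pdflatex and a log file standing in for /dev/tty (the device you would normally tee to so the copy reaches your terminal):

```shell
# stand-in for: pdflatex ... | tee /dev/tty | grep -P ':\d{1,5}: '
printf '%s\n' 'entering extended mode' './main.tex:47: Undefined control sequence.' \
  | tee full.log \
  | grep -P ':\d{1,5}: '
```

grep passes on only the error line, while full.log (the /dev/tty copy in the real pipeline) still carries everything.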
| Display output to console while grep is used |
1,412,878,085,000 |
I am using PuTTY to connect to a remote network, where I then set up x11vnc, and I use the SSL/SSH VNC viewer as a client.
in the host name for PuTTY I have: ssh.inf.uk
and port: 22
in the ssh tunnel options I have source port set to: 5910
and destination: markinch.inf.uk
Then putty brings up an xterm and I am prompted for my username and password. I get to the common gateway machine and do
ssh markinch
then I set up the x11vnc server
x11vnc -ssl -usepw -rfbport 5910 -create -geometry 1200x800
I use ssl/ssh vnc viewer with the verify certs off and host port set to, localhost:10
and put the password, and connect fine.
---Now I want to bypass using PuTTY and do the ssh connection via the command line. So I do
ssh -L localhost:5910:ssh.inf.uk:5910 [email protected]
which brings me into the gateway machine, then I need to log into a specific desktop
ssh -L localhost:5910:markinch.inf.uk:5910 markinch
Then I set up the x11vnc server,
x11vnc -ssl -usepw -rfbport 5910 -create -geometry 1200x800
then I use ssl/ssh vnc viewer with verify certificates off, localhost:10, and with the password in, and get:
PORT=5910
SSLPORT=5910
channel 3: open failed: connect failed: Connection refused
What is PuTTY doing differently?
Best,
|
In your putty config, the traffic is exiting the tunnel at ssh.inf.uk and being forwarded directly to markinch.inf.uk. So you're only building 1 tunnel.
In your ssh statements, you're building 2 tunnels - one from localhost to ssh.inf.uk, and a second from ssh.inf.uk to markinch.inf.uk.
I haven't yet worked out why the 2-tunnel solution isn't working for you. However, you might try adjusting your ssh command to match what putty's doing and see if that works.
ssh -L localhost:5910:markinch.inf.uk:5910 [email protected]
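If the two-tunnel approach keeps refusing connections, OpenSSH 7.3+ can also express PuTTY's single-tunnel topology declaratively. This is only a sketch using the host names from the question, and it assumes markinch.inf.uk is reachable through the gateway as your second ssh suggests:

```
# ~/.ssh/config: jump through the gateway, then forward the VNC port
Host markinch
    HostName markinch.inf.uk
    ProxyJump [email protected]
    LocalForward 5910 localhost:5910
```

With that in place, a single ssh markinch builds the whole path, and the viewer connects to localhost:5910 as before.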
| vnc connection working with PuTTY but not with command line |
1,412,878,085,000 |
man modprobe says in the ENVIRONMENT section:
The MODPROBE_OPTIONS environment variable can also be used to pass
arguments to modprobe.
But this is unclear. Suppose for example that I want to force the module search path by faking the kernel version string. That is the -S option. Should it be:
MODPROBE_OPTIONS='-S fake-version' or
MODPROBE_OPTIONS='-Sfake-version' or
MODPROBE_OPTIONS='--set-version fake-version' or
MODPROBE_OPTIONS='--set-version=fake-version' or maybe
MODPROBE_OPTIONS='set-version=fake-version'
?? This is a situation where a single example would have made all the difference.
Thanks,
Ian
|
The contents of MODPROBE_OPTIONS are prepended to any existing arguments, with no processing other than splitting on spaces.
So if you want to end up with
modprobe -S fake-version module-foo
you’d set
MODPROBE_OPTIONS="-S fake-version"
and then (with the variable exported) run
modprobe module-foo
| Passing modprobe options through environment |
1,412,878,085,000 |
I use a little script to number files. I start it with a Thunar custom action. This only works when all files are in the same directory: the new file names are 00001.ext to 00005.ext when I rename 5 files. KRename has an option to restart the counter for every folder.
When I have this
/path/to/folder1/file1
/path/to/folder1/file2
/path/to/folder2/file1
/path/to/folder2/file2
my script would create this
/path/to/folder1/00001
/path/to/folder1/00002
/path/to/folder1/00003
/path/to/folder2/00004
^
folder two
What I want is
/path/to/folder1/00001
/path/to/folder1/00002
/path/to/folder1/00003
/path/to/folder1/00004
^
folder 1
is there a command line tool that can rename the files and restart for every new folder?
|
Using Perl's rename (usable in any OS):
rename -n 's!folder\d+/.*!sprintf "folder1/%05d", ++$MAIN::c!se' ./folder*/*
$ tree folder*
folder1
├── 00001
├── 00002
├── 00003
└── 00004
folder2
Remove the -n switch (a dry run) once you are satisfied with the preview, to rename for real.
To go deeper:
you can capture folder with:
rename -n 's!([^/]+)\d+/.*!sprintf("%s1/%05d", $1, ++$MAIN::c)!se' ./folder*/*
to make it dynamic.
You can add more logic inside if needed, it's Perl code.
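If instead you want the numbering to restart inside each folder (as the question title suggests) rather than consolidating everything into one folder, a plain bash loop can be sketched too; the folder names are just the ones from the example:

```shell
#!/bin/bash
# restart a five-digit counter inside each folder
for dir in ./folder*/; do
    n=0
    for f in "$dir"*; do
        [ -f "$f" ] || continue
        printf -v new '%s%05d' "$dir" "$((++n))"
        mv -n -- "$f" "$new"
    done
done
```

mv -n (GNU) refuses to overwrite, in case a file is already named like its target.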
| Numbering files and restart for every folder in command line? |
1,691,420,788,000 |
I'm using a server where I'm a common user (non-sudo).
I access the server through ssh.
Here's the output of some commands run on the server:
[username@machinename: ~]$ ps -p $$
PID TTY TIME CMD
1332818 pts/55 00:00:00 bash
[username@machinename: ~]$ echo $$SHELL
1332818SHELL
[username@machinename: ~]$ echo $-
himBHs
[username@machinename: ~]$ uname
Linux
[username@machinename: ~]$ uname -v
#1 SMP Thu May 11 07:38:47 EDT 2023
[username@machinename: ~]$ uname -r
4.18.0-372.57.1.el8_6.x86_64
[username@machinename: ~]$ cat /etc/os-release
NAME="Red Hat Enterprise Linux"
VERSION="8.6 (Ootpa)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="8.6"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Red Hat Enterprise Linux 8.6 (Ootpa)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:8::baseos"
HOME_URL="https://www.redhat.com/"
DOCUMENTATION_URL="https://access.redhat.com/documentation/red_hat_enterprise_linux/8/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 8"
REDHAT_BUGZILLA_PRODUCT_VERSION=8.6
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="8.6"
The username and machine name are hidden.
Here's the operating system information:
LSB Version: :core-4.1-amd64:core-4.1-noarch:cxx-4.1-amd64:cxx-4.1-noarch:desktop-4.1-amd64:desktop-4.1-noarch:languages-4.1-amd64:languages-4.1-noarch:printing-4.1-amd64:printing-4.1-noarch
Distributor ID: RedHatEnterprise
Description: Red Hat Enterprise Linux release 8.6 (Ootpa)
Release: 8.6
Codename: Ootpa
Normally, to add aliases on Ubuntu/Linux, I just edit the ~/.bashrc file. But this file didn't exist the first time I logged into the system, so I created it myself in my home directory and filled in the aliases:
# Alias definitions.
# You may want to put all your additions into a separate file like
# ~/.bash_aliases, instead of adding them here directly.
# See /usr/share/doc/bash-doc/examples in the bash-doc package.
if [ -f ~/.bash_aliases ]; then
. ~/.bash_aliases
fi
# enable programmable completion features (you don't need to enable
# this, if it's already enabled in /etc/bash.bashrc and /etc/profile
# sources /etc/bash.bashrc).
if ! shopt -oq posix; then
if [ -f /usr/share/bash-completion/bash_completion ]; then
. /usr/share/bash-completion/bash_completion
elif [ -f /etc/bash_completion ]; then
. /etc/bash_completion
fi
fi
# >>> conda initialize >>>
# !! Contents within this block are managed by 'conda init' !!
__conda_setup="$('/home/tien/anaconda3/bin/conda' 'shell.bash' 'hook' 2> /dev/null)"
if [ $? -eq 0 ]; then
eval "$__conda_setup"
else
if [ -f "/home/tien/anaconda3/etc/profile.d/conda.sh" ]; then
. "/home/tien/anaconda3/etc/profile.d/conda.sh"
else
export PATH="/home/tien/anaconda3/bin:$PATH"
fi
fi
unset __conda_setup
# <<< conda initialize <<<
export PATH="/usr/local/cuda-12.1/bin:$PATH"
export LD_LIBRARY_PATH="/usr/local/cuda-12.1/lib64:$LD_LIBRARY_PATH"
alias c='clear'
alias gpu='watch -n 0.5 nvidia-smi'
alias ..='cd ..'
alias ...='cd ../..'
alias ....='cd ../../../../'
alias .....='cd ../../../../'
alias h='history'
alias j='jobs -l'
alias gst='git status'
But when I log out and log back in, the aliases don't work.
So how can I debug this?
Here are some of my info:
[username@machinename: ~]$ which alias
/usr/bin/alias
[username@machinename: ~]$ alias
alias egrep='egrep --color=auto'
alias fgrep='fgrep --color=auto'
alias grep='grep --color=auto'
alias l.='ls -d .* --color=auto'
alias ll='ls -l --color=auto'
alias ls='ls --color=auto'
alias vi='vim'
alias xzegrep='xzegrep --color=auto'
alias xzfgrep='xzfgrep --color=auto'
alias xzgrep='xzgrep --color=auto'
alias zegrep='zegrep --color=auto'
alias zfgrep='zfgrep --color=auto'
alias zgrep='zgrep --color=auto'
|
I include this in ~/.profile
# if running bash
# include .bashrc if it exists
[ -n "$BASH_VERSION" ] && [ -f "$HOME/.bashrc" ] && . "$HOME/.bashrc"
| How to config alias on RedHat server? |
1,691,420,788,000 |
# Create a folder
mkdir archived_PA_2022-01_2022-06
# Move files to new folder
find ./ -newermt "2021-12-31" ! -newermt '2022-06-28' -exec mv /var/log/pentaho/PA –t archived_PA_2022-01_2022-06 {} +
# Archive folder
zip -r archived_PA_2022-01_2022-06.zip /var/log/pentaho/archived_PA_2022-01_2022-06
I have this Unix script here to move files for the previous 6 months to a new folder.
What I want is to move files dynamically.
For example, today, July 6, 2023, I want to move files to a new folder named 'archived_PA_2023-01-05_2023-07-05' based on the timestamp from January 5, 2023, to July 5, 2023. And for July 7, 2023, I want to move files to a new folder named 'archived_PA_2023-01-06_2023-07-06' based on the timestamp from January 6, 2023, to July 6, 2023, and so on.
I want this process to be dynamic, where the timestamp is automatically determined, and a new folder is created accordingly, without manually changing the script each time I run it.
Is there a way for me to achieve this?
|
I was able to figure it out.
Just use:
# Create a folder dynamically
mkdir archived_PA_"$(date -d "6 months ago" +%Y-%m-%d)"_"$(date -d "1 day ago" +%Y-%m-%d)"
# Move files to new folder dynamically
find ./ -newermt "6 months ago" ! -newermt "1 day ago" -exec mv -t archived_PA_"$(date -d "6 months ago" +%Y-%m-%d)"_"$(date -d "1 day ago" +%Y-%m-%d)" {} +
| How do I move files to new folder based on timestamp dynamically without manually changing the script? |
1,691,420,788,000 |
Is there a way to create nested directories which all have the same user/group in a single command?
That single command would have the same effect as the following two commands:
mkdir -p new-1/new-2/new-3
chown -R myUser:myUser new-1
|
I cannot add a comment, so I am posting this as an answer.
Have a look at install, see man install(1).
install -d -g myUser -o myUser new-1 new-1/new-2 new-1/new-2/new-3
Or, if you don't want to repeat the directory names (using root user):
sudo -g myUser -u myUser mkdir -p new-1/new-2/new-3
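In shells with brace expansion (bash, zsh), the repeated paths of the install variant can also be generated instead of typed out. A sketch, with the ownership flags dropped so it runs unprivileged (add -o myUser -g myUser back when running as root):

```shell
# new-1{,/new-2{,/new-3}} expands to: new-1 new-1/new-2 new-1/new-2/new-3
install -d new-1{,/new-2{,/new-3}}
ls -d new-1/new-2/new-3
```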
| Create nested directories with the same user/group in a single command |
1,691,420,788,000 |
When chaining commands in zsh using ;, && and ||, how can I access the previous command in chain (the command that is being executed before)?
Example:
rm foo ; echo ...
In place of the dots, I'm looking for some kind of command / variable that provides me with rm foo.
I have tried playing around a bit with history and !!, but those only provide the commands up until the command that was executed last, not the current one.
Another idea was to use $0, but this only prints the location of zsh.
Accessing the complete chain would also be fine, if retrieving only the last command is not possible.
|
You could do something like:
$ TRAPDEBUG() last_command=$current_command current_command=$ZSH_DEBUG_CMD
$ echo test; print -r -- $last_command
test
echo test
Beware there will be some reformatting of the commands:
$ (echo test); print -r -- $last_command
test
(
echo test
)
$ (for a (1 2) echo $a); print -r -- $last_command
1
2
(
for a in 1 2
do
echo $a
done
)
| Retrieve last command when Chaining |
1,691,420,788,000 |
I own a 60% keyboard that does not have a ~ key. Before when I was on MacOS, I used to use Karabiner Elements to map Shift+Esc to ~.
Now that I've switched to Linux, I would like to know how I can do the same on Linux with just terminal commands.
|
In X11
xmodmap -e 'keysym Escape = Escape asciitilde Escape'
Would map ~ to Shift + any key that is currently mapped to Escape.
| Mapping Shift + Escape to ~ From the Command Line |
1,627,577,410,000 |
How can I perform a silent install of bandwidthd on Ubuntu 20.04, avoiding the interactive configuration windows, and set the IPs and interfaces to monitor from the command line?
sudo apt-get install bandwidthd # with what parameters
Important:
There is no help for this in bandwidthd. The only help output is:
bandwidthd --help
Usage: bandwidthd [OPTION]
Options:
-D Do not fork to background
-l List detected devices
-c filename Alternate configuration file
--help Show this help
thanks
Update:
I found a workaround, and at @muru's suggestion I post it as an answer. If there is a better answer, feel free to post it and I will select it as the best answer.
|
Workaround:
sudo DEBIAN_FRONTEND=noninteractive apt-get -y install bandwidthd
After it is installed, edit the configuration file if necessary and change the default parameters.
# default parameters:
sudo debconf-show bandwidthd
bandwidthd-pgsql/sensorid:
bandwidthd/dev: any
bandwidthd/promisc: false
bandwidthd/metarefresh:
bandwidthd/subnet: 169.254.0.0/16, 192.168.1.0/24, 192.168.122.0/24
bandwidthd/outputcdf: true
bandwidthd/recovercdf: true
sudo nano /etc/bandwidthd/bandwidthd.conf
# change default parameters. Example:
subnet 192.168.0.0/16
dev "wlp1s0"
# save the changes and...
sudo /etc/init.d/bandwidthd restart
access:
http://localhost/bandwidthd/
| how to perform a silent install of bandwidthD in ubuntu 20.04 |
1,627,577,410,000 |
Problem
I want to parse some data structured as lines (\n separated) with fields separated by the NUL character \0.
Many linux commands handle this separator with options such as --zero for find, or -0 for xargs or by defining the separator as \0 for gawk.
I didn't manage to understand how to make column interpret NUL as separator.
Example
If you generate the following set of data (2 lines with 3 columns, separated by \0):
echo -e "line1\nline2" | awk 'BEGIN {OFS="\0"} {print $1"columnA",$1"columnB",$1"columnC"}'
You would get the expected following output (\0 separators won't be displayed but is separating each field):
line1columnAline1columnBline1columnC
line2columnAline2columnBline2columnC
But when I try to use column to display my columns, despite passing \0 as the delimiter, the output for some reason only displays the first column:
echo -e "line1\nline2" \ | awk 'BEGIN {FS="\0"; OFS="\0"} {print $1"columnA",$1"columnB",$1"columnC"}' | column -s '\0'
line1columnA line2columnA
Actually, even without providing the delimiter, column seems to break on the nul character:
echo -e "line1\nline2" \ | awk 'BEGIN {FS="\0"; OFS="\0"} {print $1"columnA",$1"columnB",$1"columnC"}' | column
line1columnA line2columnA
Question
Is there a way to use \0 as a field/column separator in column ?
Optional/ bonus question: Why does column behave like this (I would expect the \0 to be totally ignored if not handled, and the whole line to be printed as a single field)?
Optional/ bonus question 2: Some data in these columns will be file paths, and I wanted to use \0 as a best practice. Do you have a better practice to recommend for storing "random strings" in a file without having to escape any conflicting field-separator characters they may contain?
|
Is there a way to use \0 as a field/column separator in column ?
No, because both implementations of column that I am aware of (the historical BSD one and the one in the util-linux package) use the standard C library's string manipulation functions to parse input lines, and those functions work under the assumption that strings are NUL-terminated. In other words, a NUL byte is meant to always mark the end of any string.
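That said, if the goal is only to display the data, a pragmatic workaround is to translate the NULs into a delimiter column does understand before piping, on the assumption that the chosen stand-in (a tab here) can never occur inside a field:

```shell
# NUL -> TAB, then align on tabs; only safe if no field can contain a tab
printf 'line1colA\0line1colB\0line1colC\n' | tr '\0' '\t' | column -t -s "$(printf '\t')"
```

This of course gives up exactly the robustness that made you pick \0 in the first place.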
Optional/ bonus question: Why does column behave like this (I would expect the \0 to be totally ignored if not handled, and the whole line to be printed as a single field)?
On top of what I explained above, note that option -s expects literal characters. It does not parse an escape syntax like \0 (nor \n, for that matter). This means that you told column to treat each of \ and 0 as a valid separator for its input.
You can provide escape sequences through the $'' string syntax if you are using one of the many shells that support it (e.g. it is available in bash but not in dash). So for instance column -s $'\n' would be valid (to specify a <newline> as column separator) if run by one of those shells.
As a side note, it's not clear to me what you'd expect from column here. Even if it did support NUL as a separator, it would just turn each line of that input into a whole column on output. Perhaps you also wanted -t, so as to columnize the single fields of each line?
Optional/ bonus question 2: Some data in these columns will be file paths, and I wanted to use \0 as a best practice. Do you have a better practice to recommend for storing "random strings" in a file without having to escape any conflicting field-separator characters they may contain?
The only one I know of is by prefixing each single field with its length, expressed as text or binary as you see fit. But then surely you could not pipe them into column.
Also, if your concern is file paths then you should consider not using the \n either as a "structure" separator, because that is a perfectly valid character for filenames.
Just as a proof-of-concept, based on your example but using NUL as structure/record separator and length-specified fields: (I also fiddled a bit with your example strings to involve multibyte characters)
echo -e 'line1\nline2 ò' | LC_ALL=C awk '
BEGIN {
ORS="\0"
# here we just move arguments away from ARGV
# so that awk reads input from stdin
for (i in ARGV) {
c[i]=ARGV[i]
delete ARGV[i]
}
}
{
# first field is the line read
printf "%4.4d%s", length, $0
# then a field for each argument
for(i=1; i<length(c); i++)
printf "%4.4d%s", length(c[i]), c[i]
printf "%s", ORS
}
' "€ column A" $'colu\nmnB' "column C"
Use arguments to awk to pass as many arbitrary column strings as you wish.
Then, a hypothetical counterpart script in awk (actually has to be gawk or mawk to handle RS="\0"):
LC_ALL=C awk '
BEGIN { RS="\0" }
{
nf=0; while(length) {
field_length = substr($0, 1, 4)
printf "field %d: \"%s\""ORS, ++nf, substr($0, 5, field_length)
$0 = substr($0, 5+field_length)
}
printf "%s", ORS
}
'
Note that it is important to specify the same locale for both scripts to match the character size. Specifying LC_ALL=C for both is fine.
| Zero/Nul separator breaks column command |
1,627,577,410,000 |
I'd like to create a temporary file that will be read by multiple scripts after its creation, but I don't have an easy way of monitoring when the last script finishes reading this temporary file so I can delete it (it may be a different script each time). Is there a standard way of solving this with command-line tools, so that the file is automatically deleted once a specific interval of time passes without it being read by any program? Or is the only option to figure out when the last script finishes reading the file and delete it then?
|
Based on taiyu's answer using inotifywait, I've created a node.js solution to this problem... It required more details than I expected when I asked the question. Sorry if this isn't the right place for posting node.js code, but the asynchronous nature of the language made things simpler for me... My solution is the following:
const fs = require('fs');
const spawn = require("child_process").spawn;
const file='/home/user/hugefile.csv';
let counter = 0; // counter used for identifying how many times the file was opened and closed by a different program
const child = spawn('/bin/bash', [ '-c', `inotifywait --format=%e -q -m -e access,close ${file}` ])
child.stdout.on('data', (data) => {
let line = data.toString().split('\n').filter(item => item); // get events inside a javascript array and filter empty values
// loop through the inotify events
line.forEach(function(event) {
if ( event === "ACCESS" )
counter++;
else if ( event === "CLOSE_NOWRITE,CLOSE" )
counter--;
// asynchronous function that checks the value of counter after 10 seconds
async function timer() {
await sleep(10000);
console.log(counter);
if ( counter === 0 ){
fs.unlinkSync(file); // erase file
console.log("tmpfile erased!")
process.exit();
}
}
timer();
});
});
function sleep(ms) {
return new Promise(resolve => setTimeout(resolve, ms));
}
Basically, I used inotifywait as a basis for solving the problem... I just need to execute this script after creating the temporary file and it will erase the file after all other programs finish reading the file (after 10 seconds).
OBS: My problem trying to solve this with bash is that when I run a function in the background with &, it executes in a subshell, so changes it makes to global variables are not visible to the parent shell. That's why I couldn't track the counter state with the same logic I used in node.js... If someone knows a workaround for this, feel free to write it in the comments. :)
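For comparison, the same idea can also be sketched with plain shell polling instead of inotify, assuming fuser (from psmisc) is available. It deletes the file once no process holds it open anymore, which is a coarser criterion than "not read for 10 seconds", and it sidesteps the background-variable problem entirely:

```shell
#!/bin/bash
# poll every 2 seconds; remove the file once nothing has it open
file=/tmp/shared.tmp
: > "$file"
while sleep 2; do
    if ! fuser -s "$file"; then
        rm -f "$file"
        break
    fi
done
```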
| Is it possible to create a temporary file that will autodelete after a specific time that it's not read by any other program? |
1,627,577,410,000 |
I'm running a command from a bash 4 prompt to gather the output from AWS CLI commands that pull the output of an AWS SSM run document. I can have it output in multiple formats, including text or json (the default). So far I have been unsuccessfully attempting to put this output into an array so I can loop through it until every value in the array equals 2 or higher.
#!/bin/bash
aws ec2 reboot-instances --instance-ids `aws ec2 describe-instances --filters "Name=tag:RebootGroup,Values=01" --query 'Reservations[].Instances[].InstanceId' --output text`
sleep 30
completeLoop=false
while [ ! ${completeLoop} ]
do
ssmID=$(aws ssm send-command --document-name "AWS-RunPowerShellScript" --document-version "1" --targets '[{"Key":"tag:RebootGroup","Values":["01"]}]' --parameters '{"commands":["$wmi = Get-WmiObject -Class Win32_OperatingSystem ","$uptimeMinutes = ($wmi.ConvertToDateTime($wmi.LocalDateTime)-$wmi.ConvertToDateTime($wmi.LastBootUpTime) | select-object -expandproperty \"TotalMinutes\")","[int]$uptimeMinutes"],"workingDirectory":[""],"executionTimeout":["3600"]}' --timeout-seconds 600 --max-concurrency "50" --max-errors "0" --region us-west-2 --output text --query "Command.CommandId")
declare -a a
readarray -t upTimeArray <<< $(aws ssm list-command-invocations --command-id "$ssmID" --details --output json | jq '.CommandInvocations[].CommandPlugins[].Output')
if [[ " ${upTimeArray[@]} " -gt 5 ]]; then
echo "Uptime is greater than 5 minutes."
completeLoop=true
else
completeLoop=false
fi
done
I've made some progress here but now I am trying to figure out how to remove the carriage return/new line from the output.
Here is my array simplified to just output the value of the items in the array. I assume I need to use sed to strip the '\r\n' from each line but I am having trouble doing so.
declare -a a
readarray -t upTimeArray <<< $(aws ssm list-command-invocations --command-id "$ssmID" --details --output json | jq '.CommandInvocations[].CommandPlugins[].Output')
for i in "${upTimeArray[@]}"
do
echo $i
done
is returning the following
"1\r\n"
"1\r\n"
I need it to return just "1" for each line so I can iterate over the array until each equals 2 or greater.
EDIT #2
I made progress with help provided here but eventually fully solved my issues with the question and scripting in this second question https://stackoverflow.com/questions/65362975/bash-aws-cli-trying-to-figure-out-how-to-validate-an-array-of-uptimes-with-2-ch
|
You never actually enter the loop:
$ completeLoop=false
$ while [ ! $completeLoop ]; do date; done; echo complete
complete
The [ command, when given a single argument (setting aside ! and the trailing ]), will return success if the argument is not empty. Both "true" and "false" are not empty.
To act on the actual boolean result of true and false commands, omit [:
completeLoop=false
count=0
# ....vvvvvvvvvvvvvvv
while ! "$completeLoop"; do
echo in loop
(( ++count == 5 )) && completeLoop=true
done
echo complete
in loop
in loop
in loop
in loop
in loop
complete
You want
readarray -t upTimeArray < <(
aws ssm list-command-invocations --command-id "$ssmID" --details --output json |
sed $'s/\r$//' |
jq -r '.CommandInvocations[].CommandPlugins[].Output'
)
Using sed to remove the trailing carriage return (the newline is handled automatically), and jq -r to output the "raw" value without quotes.
I'm redirecting from a process substitution instead of a here-string. Same results.
It's OK to add extra whitespace inside <(...) for readability
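Here is a self-contained sketch of the same pipeline with the AWS call replaced by printf test data (the uptime values are made up), showing the carriage-return stripping and the per-element numeric check:

```shell
#!/usr/bin/env bash
# Simulated command output: uptime values with Windows-style \r\n line endings.
readarray -t uptimes < <(printf '1\r\n2\r\n5\r\n' | sed $'s/\r$//')

all_ok=true
for u in "${uptimes[@]}"; do
    (( u >= 2 )) || all_ok=false   # any value below 2 fails the check
done
echo "$all_ok"
```

With the first simulated value being 1, this prints false, which is exactly the signal the loop in the question needs to keep polling.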
| Trouble building bash array to validate uptime values from aws cli command json output |
1,627,577,410,000 |
I know, that you can add applications with right click -> Add to favorites.
But I want to add / remove the favorites via script.
I was trying to use the solution from this question but
When executing following command:
gsettings get org.gnome.shell favorite-apps
I receive following error:
No such schema »org.gnome.shell«
When searching all gsettings for 'favorite' there wasn't anything that fits.
The only thing is com.linuxmint.mintmenu.plugins.applications but that's empty. (My favorites are definitely not empty.)
gsettings list-recursively | grep 'favorite'
com.linuxmint.mintmenu applet-icons ['linuxmint-logo', 'linuxmint-logo-badge', 'linuxmint-logo-badge-symbolic', 'linuxmint-logo-filled-badge', 'linuxmint-logo-filled-leaf-badge', 'linuxmint-logo-filled-leaf', 'linuxmint-logo-filled-ring', 'linuxmint-logo-leaf-badge', 'linuxmint-logo-leaf-badge-symbolic', 'linuxmint-logo-leaf', 'linuxmint-logo-leaf-symbolic', 'linuxmint-logo-neon', 'linuxmint-logo-ring', 'linuxmint-logo-ring-symbolic', 'linuxmint-logo-simple', 'linuxmint-logo-simple-symbolic', 'mate-symbolic', 'emblem-favorite-symbolic', 'user-bookmarks-symbolic', 'start-here-symbolic']
com.linuxmint.mintmenu start-with-favorites false
org.x.warpinator.preferences favorites @as []
com.linuxmint.mintmenu.plugins.applications favorite-apps-list @as []
I couldn't find a solution where my favorites are saved or can be changed.
Does anyone know a solution for my problem?
|
I found the solution on my own. The solution was quite simple.
I had executed the command as root; that's the reason why the favorite-apps-list was empty. After executing it as my own user, everything worked as expected.
| Locate and add favorite apps Linux Mint 20 via CLI |
1,627,577,410,000 |
I keep having to rewrite history expansion commands instead of recalling them from history.
For Example, I have to change 35 to 36, 37, 38.... in the following command.
$ print -P '\033[35mThis is the same red as in your solarized palette\033[0m'
$ !!:gs/35/36
Now I need to make it !!:gs/36/37
However, when I press the Up key, it does not show $ !!:gs/35/36. It shows print -P '\033[35mThis is the same red as in your solarized palette\033[0m'
What can be done here?
|
I have two suggestions how you can approach what you want (referring to bash only):
add it to the history
Before typing the first history expansion command line you can disable history expansion (set +H) and "execute" the history expansion command (and then reenable with set -H). It then is part of the shell history and you can easily get back to it and modify it.
A more direct approach for getting the history expansion command line in the shell history would be history -s. The earlier suggestion may be easier to remember (and may be easier in case of complicated quoting), though (depending on how familiar someone is with shell options).
readline yank
This is most useful when you do not need yanking during the whole operation.
Type the history expansion command line but do not press Enter. Instead go to the beginning / end of the line and delete the whole line with Ctrl-K / Ctrl-U. This puts the whole line on the kill ring. You can restore the line with Ctrl-Y. Even after executing the command you can get it back this way as long as you do not put anything else in the kill ring. And even if: You can go back to older kill ring entries with Ctrl-Y Alt-Y.
| View History Expansion On History |
1,627,577,410,000 |
I have the following line in my nginx.conf file:
proxy_set_header Authorization "Basic dXNlcjpwYXNzd29yZA==";
Currently, the command to start the nginx is:
exec nginx -c /etc/nginx/nginx.conf
Is there any way to pass the string "Basic dXNlcjpwYXNzd29yZA==" as argument to the command above, and use the value inside the nginx.conf file?
I mean if something like this is possible:
exec nginx -c /etc/nginx/nginx.conf "Basic dXNlcjpwYXNzd29yZA=="
Thanks in advance.
|
I've written this script, with help from here:
#!/bin/sh
json_file=nginx/auth/basic.json
auth_header=$(jq '"Basic " + ("\(.user):\(.pass)" | @base64)' "$json_file")
conf_file=nginx/auth/basic.conf
printf 'map "" $basicAuth {\n\tdefault %s;\n}\n' "$auth_header" > "$conf_file"
That converts this json:
{
"user": "user",
"pass": "password"
}
Into this conf file:
map "" $basicAuth {
default "Basic dXNlcjpwYXNzd29yZA==";
}
Then I was able to import that basic.conf file in my nginx.conf file:
http {
    include auth/basic.conf;
...
And use in my specific locations:
proxy_set_header Authorization $basicAuth;
Nginx is now proxying requests with the proper authentication.
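If jq is not available, the same header can be built with coreutils' base64 alone. A sketch with the credentials hard-coded for illustration (in practice you would read them from your JSON or environment):

```shell
#!/bin/sh
# Build the Basic auth header without jq (user/pass inlined for the demo).
user=user
pass=password
auth_header="Basic $(printf '%s:%s' "$user" "$pass" | base64)"
printf 'map "" $basicAuth {\n\tdefault "%s";\n}\n' "$auth_header"
```

printf interprets the \n and \t escapes itself, so the output matches the basic.conf shown above, including the "Basic dXNlcjpwYXNzd29yZA==" value.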
| How to pass argument to file used in nginx command? |
1,627,577,410,000 |
I'm working on a RHEL7 and I just installed clang: sudo yum install clang.
Then I execute the command clang-format --version and the output is below:
me@localhost:~$ clang-format --version
LLVM (http://llvm.org/):
LLVM version 3.4.2
Optimized build.
Built May 10 2018 (10:48:27).
Default target: x86_64-redhat-linux-gnu
Host CPU: x86-64
me@localhost:~$ echo $?
1
As you see, clang-format --version seems to work without any error but echo $? shows me a 1.
What's wrong with this command?
I just did the same thing on an Ubuntu system and there is no such an error.
The output of type -a clang-format:
clang-format is /usr/bin/clang-format
clang-format is /bin/clang-format
The output of file "$(command -v clang-format)":
/usr/bin/clang-format: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=899595580dbae12ee1ae6eb9feb8a19aa6d51f49, stripped
|
This issue can be reproduced with older versions of clang-format, available for install with yum in the sglim2/centos7 docker image, for example. clang-format --version was modified to return 0 in this commit:
CommandLine: Exit successfully for -version and -help
Tools that use the CommandLine library currently exit with an error
when invoked with -version or -help. This is unusual and non-standard,
so we'll fix them to exit successfully instead.
I don't expect that anyone relies on the current behaviour, so this
should be a fairly safe change.
llvm-svn: 202530
| Why does `clang-format --version` return 1 |
1,585,209,776,000 |
Autojump or z let you move around in your filesystem by entering only a part of the entire path (e.g. z foo takes me to /long/long/path/to/foo).
I often want to jump to a path, do something, and get back. This is easily achieved by using cd -.
However, if I jump to the path, cd around a little, then want to "get back", cd - would no longer work.
It would also not work if I started in dir a, wanted to jump to b, then to c, then "back" (to b) and "back" (to a).
Having to remember the name of where I want to jump back to (so I can do z a instead of "jump back") is no fun.
pushd and popd are built exactly to help you navigate through a stack of directories. I was wondering if I could integrate the partial matching behavior of z with pushd and popd?
There seems to be no command line option in z or autojump which would give the target directory instead of cd'ing to it, otherwise I'd try pushd $(z ...).
|
Not sure how I missed this, but z has a -e option that echoes the best match instead of cding to it.
I'll give an example of how to use this in fish shell
> pushd (z -e ...)
You can also use fish abbreviations to abbreviate ze to z -e. I am not sure if there is a way to set an abbreviation to automatically expand to pushd (z -e ...) with your cursor behind the closing bracket.
| Using autojump / z in combination with pushd popd |
1,585,209,776,000 |
"What's the best way to take a segment out of a text file?" and the many others in the right sidebar there are almost duplicates of this question.
The one difference is that my file was too large to fit in the available RAM+VM and so anything I tried would not only do nothing for minutes till killed, but would bog down the system. One of them made me unable to do anything until the system crashed.
I can write a loop in the shell or any other to read, count, and discard lines until the count (line number wanted) is reached, but maybe there exists already a single command that will do it?
After trying a few things (vim, head -X | tail -1, GUI editor), I gave up, deleted the file and changed the program that created it to give me only the lines needed.
Looking further, Opening files of size larger than RAM, no swap used. suggests that vi should do it, but if vi is the same, it was definitely doing something that takes minutes not seconds.
|
You should try less.
From the manpages:
Also, less does not have to read the entire input file before starting, so with large input files it starts up faster than text editors like vi (1).
I open large files regularly with less and have no problems with the start up time. If you combine it with the option -jn, you can reach basically every part of the file in a trivial time.
I didn't test this because I'd need a pretty big file, but combined with head you could script it too, using more, which is better suited to scripting.
more +$START_LINE_NUMBER "$FILE" | head -n "$AMOUNT_OF_LINES"
If I understood it right, head should end the pipeline after getting the required number of lines, so this should be the low-overhead solution you were looking for.
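A tail/head pipeline does the same extraction with standard tools; both commands stream, so the whole file never has to fit in memory. A quick demonstration on generated sample data (the start line and count are arbitrary):

```shell
#!/usr/bin/env bash
bigfile=$(mktemp)
seq 1 100000 > "$bigfile"               # stand-in for the huge file

start=500 count=3
lines=$(tail -n "+$start" "$bigfile" | head -n "$count")
echo "$lines"                           # lines 500, 501 and 502
rm -f "$bigfile"
```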
| Getting selected lines from a file that is larger than available RAM+VM |
1,585,209,776,000 |
I'm attempting to setup a backup that I want to execute from the command line where, in the case of an error, it curls the error to an API endpoint. Something like:
mysqldump -u whatever -pwhatever somedb > somebackupfile.sql || curl (..options..) -d $'{error:<ERROR FROM FIRST COMMAND>}'
Any help would be greatly appreciated!
|
Save the error stream to a separate file and "curl it" if there is an error. Then delete it (or keep it, it may be useful?):
if ! mysqldump -u whatever -pwhatever somedb >somebackupfile.sql 2>error.log
then
json=$( jq -c -n --arg message "$(cat error.log)" '{ error: $message }' )
curl ...options... -d "$json"
fi
rm -f error.log # or not
This additionally uses jq to properly encode the error output in error.log as a JSON text string.
If you want to change the logic so that you send the contents of the error.log file whenever it's not empty, maybe because the mysqldump program doesn't return a sane exit status (I don't know how this particular program behaves at the moment):
mysqldump -u whatever -pwhatever somedb >somebackupfile.sql 2>error.log
if [ -s error.log ]; then
json=$( jq -c -n --arg message "$(cat error.log)" '{ error: $message }' )
curl ...options... -d "$json"
fi
rm -f error.log # or not
The -s file test is true if the named file has non-zero size.
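A quick illustration of that -s test, on a scratch file only:

```shell
#!/bin/sh
tmp=$(mktemp)                            # mktemp creates an empty file
[ -s "$tmp" ] && first=non-empty || first=empty
echo "some error text" > "$tmp"
[ -s "$tmp" ] && second=non-empty || second=empty
echo "$first $second"                    # empty non-empty
rm -f "$tmp"
```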
| mysqldump into file with any error firing off a cURL request with error information |
1,585,209,776,000 |
mohammad@abbasi ~/NGS/Data/RESULTS/TR
$ java -jar ~/NGS/programs/Trimmomatic-0.39/trimmomatic-0.39.jar PE DRR000001_1.fastq DRR000001_2.fastq -baseout GgG.fastq HEADCROP:15 LEADING:30 TRAILING:30 MINLEN:50; done
-bash: syntax error near unexpected token `done'
(base)
|
Your line ends with ; done, done is the shell syntax to end a loop (for, while, until), but you never start any loop. The shell is confused when it reaches done because it doesn't know what loop to terminate.
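For comparison, this is what a complete loop looks like; done only makes sense as the closing half of such a construct (the file names here are made up for illustration):

```shell
#!/bin/sh
# A matching for ... do ... done pair; 'done' on its own is a syntax error.
out=$(for f in sample_1.fastq sample_2.fastq; do echo "processing $f"; done)
echo "$out"
```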
| I cannot execute trimmomatic on ubuntu |
1,585,209,776,000 |
I'm hoping this is the correct place to ask.
Basically, I'm using cmus and have a little bash script to automate downloading, renaming and moving a mp3.
However, cmus just lists the song name as its full path in the library (e.g.: /home/user/music/genre/song.mp3)
I'd like to change this by adding tags such as album, title, and artist to the mp3 files.
I'm not sure if in Unix some files inherently have metadata like this or if you have to create the data in a specific way for the music player to be able to read it.
I read the cmus man page and I don't see anything about how it reads tags or labels.
Overall, through a bash script after reading an input, such as 'albumname' I'd like to know how to assign that input as the album tag of a mp3 file for a music player to use.
|
Try eyeD3.
Install it with Python's pip, which installs the latest version.
Docs here: https://eyed3.readthedocs.io/en/latest/ .
eyeD3 has good documentation so it's easy to start.
Also it has a bunch of useful plugins - try it!
| Adding tags to mp3s for use in music players [duplicate] |
1,585,209,776,000 |
I have a large number of files with the following structure:
[Lion] 2015 Africa Book.pdf
[Lion] 2015 Africa Magazine.pdf
[Lion] 2016 Africa Book.pdf
[Lion] 2016 Africa Magazine.pdf
[Lion] 2015 Asia Book.pdf
[Lion] 2015 Asia Magazine.pdf
[Lion] 2016 Asia Book.pdf
[Lion] 2016 Asia Magazine.pdf
[Tiger] 2016 Africa Book.pdf
[Tiger] 2016 Africa Magazine.pdf
[Tiger] 2015 Asia Book.pdf
[Tiger] 2015 Asia Magazine.pdf
[Tiger] 2016 Asia Book.pdf
[Tiger] 2016 Asia Magazine.pdf
etc.
basically the files follow the following pattern: [{animal}] {year} {location} {format}.{ext}
How can I move the files so they have a directory structure like this?
Animal stuff
├── Lion
│ ├── 2015 - Africa
│ │ ├── [Lion] 2015 Africa Book.pdf
│ │ └── [Lion] 2015 Africa Magazine.pdf
│ ├── 2015 - Asia
│ │ ├── [Lion] 2015 Asia Book.pdf
│ │ └── [Lion] 2015 Asia Magazine.pdf
│ ├── 2016 - Africa
│ │ ├── [Lion] 2016 Africa Book.pdf
│ │ └── [Lion] 2016 Africa Magazine.pdf
│ └── 2016 - Asia
│ ├── [Lion] 2016 Asia Book.pdf
│ └── [Lion] 2016 Asia Magazine.pdf
└── Tiger
├── 2015 - Africa
│ ├── [Tiger] 2015 Africa Book.pdf
│ └── [Tiger] 2015 Africa Magazine.pdf
├── 2015 - Asia
│ ├── [Tiger] 2015 Asia Book.pdf
│ └── [Tiger] 2015 Asia Magazine.pdf
├── 2016 - Africa
│ ├── [Tiger] 2016 Africa Book.pdf
│ └── [Tiger] 2016 Africa Magazine.pdf
└── 2016 - Asia
├── [Tiger] 2016 Asia Book.pdf
└── [Tiger] 2016 Asia Magazine.pdf
|
Try:
find . -maxdepth 1 -type f -exec bash -c '
animal=${1%% *};
year=${1#* }; year=${year% *};
mkdir -p "${animal//[][]}/${year/ / - }" && mv "$animal $year"'*' "${animal//[][]}/${year/ / - }/"
' _ {} \; 2> /dev/null
result:
$ tree
.
├── Lion
│ ├── 2015 - Africa
│ │ ├── [Lion] 2015 Africa Book.pdf
│ │ └── [Lion] 2015 Africa Magazine.pdf
│ ├── 2015 - Asia
│ │ ├── [Lion] 2015 Asia Book.pdf
│ │ └── [Lion] 2015 Asia Magazine.pdf
│ ├── 2016 - Africa
│ │ ├── [Lion] 2016 Africa Book.pdf
│ │ └── [Lion] 2016 Africa Magazine.pdf
│ └── 2016 - Asia
│ ├── [Lion] 2016 Asia Book.pdf
│ └── [Lion] 2016 Asia Magazine.pdf
└── Tiger
├── 2015 - Asia
│ ├── [Tiger] 2015 Asia Book.pdf
│ └── [Tiger] 2015 Asia Magazine.pdf
├── 2016 - Africa
│ ├── [Tiger] 2016 Africa Book.pdf
│ └── [Tiger] 2016 Africa Magazine.pdf
└── 2016 - Asia
├── [Tiger] 2016 Asia Book.pdf
└── [Tiger] 2016 Asia Magazine.pdf
| Creating directory tree based on file name |
1,529,099,354,000 |
I am getting random "cd: Too many arguments." when using different commands, for example newgrp or when logging in. Here is a console log showing the issue along with the Linux version and shell type.
Last login: Mon Jun 4 10:50:58 2018 from somewhere.com
cd: Too many arguments.
myServerName /home/myUserName>
myServerName /home/myUserName>
myServerName /home/myUserName>
myServerName /home/myUserName>
myServerName /home/myUserName> groups
groupA groupB
myServerName /home/myUserName> newgrp groupB
cd: Too many arguments.
myServerName /home/myUserName> groups
groupB groupA
myServerName /home/myUserName> uname -or
2.6.32-696.13.2.el6.x86_64 GNU/Linux
myServerName /home/myUserName> lsb_release -irc
Distributor ID: RedHatEnterpriseServer
Release: 6.9
Codename: Santiago
myServerName /home/myUserName> echo $0
tcsh
myServerName /home/myUserName>
The newgrp command actually runs fine; still, I would like to get rid of this message.
Unfortunately, searching online gave no real results, as all of them were about the cd command itself.
I would welcome some help in tracking this issue down.
Update
myServerName /home/myUserName> grep "cd " ~/.tcshrc ~/.cshrc ~/.login
grep: /home/myUserName/.tcshrc: No such file or directory
myServerName /home/myUserName> grep "cd " ~/.cshrc ~/.login
myServerName /home/myUserName>
~/.login and ~/.cshrc files:
~/.login:
# ----------------------------------------------------------------------------
# Name : .login
# Function : users startup-file for csh and tcsh
#
# Note : Please do not edit this file until you have read the
# site policy file for dot-files: /etc/home/README
#
# ----------------------------------------------------------------------------
if (-r /etc/home/login && -d /env) then
    source /etc/home/login
else
    source .login.old
endif
~/.cshrc:
# ----------------------------------------------------------------------------
# Name : .cshrc
# Function : Users startup-file for csh and tcsh
#
# Note : Please do not edit this file until you have read the
# site policy file for dot-files: /etc/home/README.*
#
# ----------------------------------------------------------------------------
if (-r /etc/home/cshrc && -d /env) then
    source /etc/home/cshrc
else
    source .cshrc.old
endif
|
The problem was in the ~/.cshrc ~/.login scripts:
# ----------------------------------------------------------------------------
# Name : .login
# Function : users startup-file for csh and tcsh
#
# Note : Please do not edit this file until you have read the
# site policy file for dot-files: /etc/home/README
#
# ----------------------------------------------------------------------------
if (-r /etc/home/login && -d /env) then
source /etc/home/login
else
source .login.old
endif
The source command was overridden by an alias that was a shortcut to some directory. Removing the alias fixed the issue.
| Getting random "cd: Too many arguments." error messages when using different commands |
1,529,099,354,000 |
I am trying to find a way to automate the process of merging multi-track .bin + .cue files into a single .bin and .cue. I have a collection of PSX roms, and they all come in bin and cue format. And I am trying to follow the directions here in order to create a "playlist" .m3u file.
It recommends merging the multi-track files (they are audio channels of the image) using IsoBuster to mount the .cue file, then "burn" it to a singular .bin file.
I don't have Windows, and I have over 200 games I would need to do this for. So if I could automate it that would be great.
Anyone have any suggestions for CLI tools to use?
I've read previous answers about bin2iso and bchunk but apparently they don't work for .bin files with multiple track files. I.e. the sound won't work when the game is played.
|
Update:
I have published a tool for merging the multi-bin files into a single .bin/.cue pair.
You can find it Here
Original Answer:
For those also looking for a solution. I have come across a python script that seems to work. So far I've tested it with a couple files and it seems to be working.
You can find the script here.
| Command Line alternative to IsoBuster |
1,529,099,354,000 |
I want to create soft links (ln -s) to folder2 of all the files that contain *foo* in its name, and can be found in some or all the subdirectories of folder1.
I've tried it with for, find, and find -exec ln, and a combination of them, but all I get is a broken link named *foo* or a link to everything inside folder1.
|
You can use this little snippet
#!/bin/bash
folder1="/path/to/folder1"
find "$folder1" -type f -name '*foo*' -exec \
  sh -c 'for f; do ln -s "$f" "/path/to/folder2/${f##*/}"; done' _ {} +
This can run from anywhere since I'm using absolute paths here.
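Here is a variant of this pattern exercised end to end in throwaway directories (paths generated with mktemp so the demo is safe to run); note each link target is the found file "$f" itself, and the destination directory is passed to the inner shell as an argument:

```shell
#!/usr/bin/env bash
tmp=$(mktemp -d)
mkdir -p "$tmp/folder1/sub" "$tmp/folder2"
touch "$tmp/folder1/afoo.txt" "$tmp/folder1/sub/foobar.txt" "$tmp/folder1/note.txt"

# Link every *foo* file found anywhere under folder1 into folder2.
find "$tmp/folder1" -type f -name '*foo*' -exec \
  sh -c 'dest=$1; shift; for f; do ln -s "$f" "$dest/${f##*/}"; done' _ "$tmp/folder2" {} +

links=$(ls "$tmp/folder2")
echo "$links"                  # afoo.txt and foobar.txt; note.txt is skipped
```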
| Create soft links from multiple specific files in various subdirectories |
1,529,099,354,000 |
Right now I have the following command to copy all contents of the current directory to a sub-directory, provided the sub-directory was created in advance:
cp -p !($PWD/bakfiles2) bakfiles2/
But I sometimes have to visit folders which I have never visited before, so the sub-directory "bakfiles2" may not exist there. Can I somehow create that backup directory with the current timestamp (to avoid conflicts with any existing directory) on the fly, with a single copy command or a bash script?
It would be great if the script could ignore any sub-directory starting with a particular pattern, which could then be reserved for backup directory names like _bak_* (Note: * means any number of any characters).
|
The cp command doesn't have an option to create the destination directory while copying if it doesn't exist, but you can achieve that with scripting.
Or simply use the rsync command, which can create the destination directory if it doesn't exist (only at the last level).
rsync -rv --exclude='_bak_*/' /path/in/source/ /path/to/destination
Note that the trailing / in /path/in/source/ prevents copying the source directory itself, and the --exclude option keeps directories with matching names from being synced.
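If rsync isn't available, a plain shell loop can do the timestamped backup too. This sketch works entirely inside a scratch directory and skips anything matching _bak_* (including the freshly created backup directory itself, so it never copies into itself):

```shell
#!/usr/bin/env bash
workdir=$(mktemp -d)                  # demo area standing in for the current dir
cd "$workdir" || exit 1
touch a.txt b.txt
mkdir _bak_old                        # a pre-existing backup that must be skipped

dest="_bak_$(date +%Y%m%d-%H%M%S)"    # timestamped name avoids collisions
mkdir "$dest"
for item in ./*; do
    case "${item#./}" in _bak_*) continue ;; esac   # skip reserved names
    cp -a "$item" "$dest"/
done

copied=$(ls "$dest")
echo "$copied"                        # a.txt and b.txt; _bak_old is untouched
```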
| Backup all contents of current directory to a subdirectory inside the current directory, which will be created if not exists |
1,529,099,354,000 |
I have a bash script that runs hostapd_cli all_sta, and the script executes successfully from the command line under both jessie and stretch. The script also works when run under sudo on jessie but not on stretch. On stretch the command times out with the error 'STA-FIRST' command timed out. When I invoke hostapd_cli under strace I see that it opens a socket file under /tmp:
bind(3, {sa_family=AF_UNIX, sun_path="/tmp/wpa_ctrl_13552-1"}, 110) = 0
connect(3, {sa_family=AF_UNIX, sun_path="/var/run/hostapd/wlan1"}, 110) = 0
As a test I temporarily modified the script and added a line:
echo "this is a test" >/tmp/test 2>/root/error
When the modified script runs under sudo, the file in /tmp is not created and no error is written to /root/error.
On my system, /tmp is not a tmpfs, just a plain old directory under / on an ext3 filesystem. Yet root is seemingly unable to create a file under /tmp, even though there is ample space.
# df -h /tmp
Filesystem Size Used Avail Use% Mounted on
/dev/sdb2 6.7G 5.1G 1.4G 80% /
And an ls -ld /tmp gives:
# ls -ld /tmp
drwxrwxrwt 9 root root 4096 Jul 27 23:50 /tmp/
If I can figure out why /tmp can't be written to, I believe the hostapd_cli command will work. What could be happening here?
|
The reason this was not working as expected was because /tmp was remapped by systemd to /tmp/systemd-private-67fcab218d3d46bcb5092dd8a6d4789b-nagios-nrpe-server.service-lN2L1e/tmp
The issue had nothing to do with sudo, but with the fact that the script was executing as a plugin under the nrpe daemon, which in turn was configured by systemd to have a private /tmp.
To resolve it, I had to:
systemctl stop nagios-nrpe-server
set PrivateTmp=false in /etc/systemd/system/multi-user.target.wants/nagios-nrpe-server.service
systemctl daemon-reload
systemctl start nagios-nrpe-server
| How can I get hostapd_cli to work under sudo on debian stretch? |
1,529,099,354,000 |
Here are some values I have in a file named "example"--I only put one row but there are about a thousand.
a 7 q y 4 5 8 9 5 6 567 5678578 56784 345 345 2 df 4 1 245
b 7 q y 4 5 8 9 5 6 567 5674578 56789 334 324 3 df 4 1 245
Specifically, see in column 1 how the values are a or b? That goes on for the rest of the thousand rows, where column one will either be a or b. I want to separate the rows so that all rows with the value "a" are in one file, and all rows with value "b" are in another file. Is that possible?
awk '$1 == a' /home/me/example > /home/me/rowa
I've tried that to no success, but I don't know why. Can anyone help clarify?
|
This is easy with an awk command:
awk '{print > ($1".txt")}' infile.txt
This will produce two files: "a.txt", containing the lines whose first column is "a", and "b.txt", containing the lines whose first column is "b" (assuming column one only contains a or b).
The above works when your data is delimited by tabs or spaces; if your delimiter is different, tell awk with -F"DELIMITER", where DELIMITER is your file's field delimiter.
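As a side note on the attempt in the question: in awk, an unquoted a is a variable (empty by default), so $1 == a never matches the literal letter; compare against the string "a" instead. A tiny demonstration with inline data:

```shell
#!/bin/sh
# '$1 == "a"' keeps only the rows whose first column is the string "a".
matched=$(printf 'a 1\nb 2\na 3\n' | awk '$1 == "a"')
echo "$matched"
```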
| Selecting rows with specific value in column |
1,529,099,354,000 |
After running the following command to revoke a key
gpg --gen-revoke <key ID>
I have to then press y, 2 times Enter followed by y
How would you suggest to answer automatically, that is revoking a key without any other user interaction than the passphrase prompt?
|
It appears to me that gpg opens the controlling terminal directly, so you're unable to redirect input.
| How to revoke a GPG key without confirmation? |
1,529,099,354,000 |
I have the dir var/www/html and under it there are a few website dirs (say, about 5).
All of the 5 website dirs have an internal path dir0/dir1.
How could I bulk delete all inodes inside this path (besides one inode named he_IL.mo), but in one command?
I ask about one command since I have the following block of 3 commands that works, but I would like to go minimal as much as I can with this:
(
find /var/www/html/*/dir0/dir1/ ! -name 'he_IL.mo' -type f -exec rm -f {} +
find /var/www/html/*/dir0/dir1/ -type f -exec rm -d {} +
find /var/www/html/*/dir0/dir1/ -type f -exec rm -l {} +
)
If I do * instead of f I get "Should contain only a letter".
If I do i instead of f, I get a "Unknown argument".
|
The way to do it in one command is to remove the -type f test and let rm recurse, while excluding the starting directories themselves with -mindepth 1. Then we get:
find /var/www/html/*/dir0/dir1/ -mindepth 1 -depth ! -name 'he_IL.mo' -exec rm -rf {} +
Note that it will also delete directories and soft links (anything not named he_IL.mo), but if that's okay, use it.
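A scratch-tree demonstration of a variant that handles subdirectories too (-mindepth 1 protects the dir1 starting directories, -depth deletes children before parents, and rm -rf removes directories; mktemp keeps the demo harmless):

```shell
#!/usr/bin/env bash
tmp=$(mktemp -d)
mkdir -p "$tmp/site1/dir0/dir1/subdir"
touch "$tmp/site1/dir0/dir1/he_IL.mo" \
      "$tmp/site1/dir0/dir1/old.po" \
      "$tmp/site1/dir0/dir1/subdir/junk.txt"

# Delete everything under each dir1 except the file named he_IL.mo.
find "$tmp"/*/dir0/dir1/ -mindepth 1 -depth ! -name 'he_IL.mo' -exec rm -rf {} +

survivors=$(ls "$tmp/site1/dir0/dir1")
echo "$survivors"                      # only he_IL.mo remains
```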
| Delete all inodes BESIDES one UNDER all instances of dir0/dir1 UNDER var/www/html, in one command |
1,529,099,354,000 |
I have a file named users.json which is 3GB, and is invalid json.
So what I'm trying to do is read the file's text content, and take the information that I need, which is the usernames contained in the file, and write them to a usernames.txt file which should contain 1 username per line, with no duplicates.
The format of the usernames in json the file is as follow: "username":"someUsername"
How can I gather all the usernames, put them in the text file and make sure there are no duplicates?
I've tried via Node.js and PHP and nothing has been working efficiently yet; hopefully something cool can be done using bash.
Example of data contained in the file (probably not much help, as I already mentioned the format "username":"someUsername"):
username":"satish_nanded","original_ff_id":"99554"},"100003":{"username":"sweetnamu","original_ff_id":"100003"}},"08fdlhNuZEM1z8q4mQftYUtO7uC3":{"575511":{"username":"lrlgrdnr","original_ff_id":"575511"}},"08fe4Dg7NeOTItq3b9Pi8ORsX5J2":{"59520":{"username":"joneljon","original_ff_id":"59520"}},"08gsZHsbm9Rew4S2IqcbGvD9Fct1":{"724707":{"username":"jacksonc4565","original_ff_id":"724707"}
|
You can use the grep command to match the patterns you need, and sort to filter out duplicates. If your input file is input.json and the output is usernames.txt:
grep -P -o '(?<="username":")[^"]*' input.json | sort -u > usernames.txt
Breaking it down:
grep is a command-line utility for matching regular expressions in a file. Regular expressions are a powerful way to describe pieces of text that you wish to find
-P tells grep to use "Perl Compatible Regular Expressions". Note that the man page for grep describes this as "highly experimental"!
-o tells grep to only output the matching text. By default, grep would normally output the whole line wherever a match is found.
'(?<="username":")[^"]*' is the regular expression itself:
We put it in single quotes '....' to stop the command-line shell from trying to interpret anything in it
(?<=...) is what's called a lookbehind assertion. It says we want to match "username":" before something else, but not include it in the output
[^"]* means "as many characters as possible that aren't ". It can be broken down again:
[..] is a character class. Any character you put between square brackets is allowed at this point. Unless...
^" When you use a caret ^ as the first character in a character class it means not any of the following characters
* means 0 or more of the preceding item (which is the whole of [^"] in this case).
Piping the lot through sort sorts the usernames into alphabetical order, which with the -u option means "unique items only", i.e. no duplicates.
Note: All of this assumes that the pattern we're matching can't occur anywhere else in the file (which seems unlikely), or that the brokenness of the JSON itself won't cause the match to fail (which might be, I'm not sure in what way your file is broken).
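Here is a self-contained run of the idea on the sample fragment from the question, using a non-PCRE fallback (plain grep -o plus cut) in case your grep lacks -P; splitting on commas with tr first also sidesteps "line too long" complaints on huge single-line files:

```shell
#!/bin/sh
sample='"username":"satish_nanded","original_ff_id":"99554","username":"sweetnamu"'

names=$(printf '%s' "$sample" |
    tr ',' '\n' |                        # one key/value pair per line
    grep -o '"username":"[^"]*"' |       # keep only the username pairs
    cut -d'"' -f4 |                      # extract the value between the quotes
    sort -u)                             # sort and drop duplicates
echo "$names"
```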
EDIT:
With grep regularly complaining that the lines were too long, and for some reason sed -e 's/,/,\n/' not really working either, the split command was used to break the file up into more manageable chunks.
| Generate .txt file with specific content from an invalid 3GB .json file |
1,529,099,354,000 |
I have installed CentOS 7 x86_64 and forgot the root password. I then reset the password by editing the GRUB boot menu according to How To Reset Root Password On CentOS 7, as follows. But after rebooting the machine I now have no GUI or CLI login. What should I do?
1 – In the boot grub menu select option to edit.
2 – Select Option to edit (e).
3 – Go to the line of Linux 16 and change ro with rw init=/sysroot/bin/sh.
4 – Now press Control+x to start on single user mode.
5 – Now access the system with this command.
chroot /sysroot
6 – Reset the password.
passwd root
7 – Update selinux information
touch /.autorelabel
8 – Exit chroot
exit
9 – Reboot your system
reboot
|
Use these steps to solve your issue.
Interrupt the boot loader countdown by pressing any key.
Move the cursor to the entry that needs to be booted.
Press e to edit the selected entry.
Move the cursor to the kernel command line (the line that starts
with linux16).
Append rd.break (this will break just before control is handed from
the initramfs to the actual system).
Press Ctrl+x to boot with the changes and execute the following commands.
# mount -o remount,rw /sysroot
# chroot /sysroot
# chage -l root
# chage -E -1 root
# passwd root
# touch /.autorelabel
Type exit twice. The first will exit the chroot jail, and the second will exit the initramfs debug shell.
| CentOS 7 GUI or CLI not loading |
1,477,679,498,000 |
I have an XML document containing an element which I can select with stkconfig>Video[width], and I want to modify the value of this element.
Is there a CLI utility for this?
|
Finally I found XMLStarlet, as suggested by Sato.
For that I use xmlstarlet ed --inplace -u "/stkconfig/Video/@width" -v <new value> <path to my document>.
| Modify xml uttribut value of an xml document by selector |
1,477,679,498,000 |
I have a script which functions like this for one file:
./script 0001g.log > output
for two or more files, like this
./script 0001g.log 0002g.log 0003g.log > output
The script takes one particular number from each input file and puts it in one output file.
My question is: I have 1000 input files, so how can I write a loop to execute my script over all of them?
|
You have a few possible solutions:
Simply
$ ./script *g.log >output
... and hope that *g.log doesn't expand to something that makes the command line too long. This is not very robust.
If your script doesn't depend on the number of files given to it, i.e., if output can just be appended to output for each input file, then this is another solution:
$ find . -type f -name "*g.log" -print0 | xargs -0 ./script >output
A third solution would be to move the loop into the script itself:
for f in *g.log; do
# old code using "$f" as file name
done
This does not have the problem with command line length restriction since it's in a script.
The invocation of the script would now be
$ ./script >output
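As a sanity check, the glob-loop approach can be exercised with a throwaway stand-in for the script (a line count per file here; the directory and file names are made up for the demo):

```shell
# Toy demo of the for-loop approach: create two sample logs in a
# temp directory and "process" each one (grep -c '' counts lines,
# standing in for the real script's work).
dir=$(mktemp -d)
printf 'a\nb\n' > "$dir/0001g.log"
printf 'c\n'    > "$dir/0002g.log"
for f in "$dir"/*g.log; do
    grep -c '' "$f"      # prints 2, then 1
done
rm -r "$dir"
```

Because the glob expands inside the script rather than on the invoking command line, this pattern sidesteps the argument-length limit entirely.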
| How to do a loop to execute many files |
1,477,679,498,000 |
I am writing a script that will systematically install Numix theme using gnome-tweak-tool.
I want to make sure that I don't reinstall items if they are already installed, so I used which [name of item] > /dev/null.
Here is my current script:
function installNumix() {
echo "Checking if Numix is installed ..."
if ! which gnome-tweak-tool > /dev/null; then
if ! which numix-gtk-theme > /dev/null; then
if ! which numix-icon-theme-circle > /dev/null; then
echo "Installing Numix ..."
sudo add-apt-repository ppa:numix/ppa
sudo apt-get update
sudo apt-get install numix-gtk-theme numix-icon-theme-circle -y
sudo apt-get install gnome-tweak-tool -y
echo "Configuring Numix:"
echo "===================================================================="
echo "Please use the 'tweak-tool' to change your theme to 'Numix'."
echo "[GTK+]: Numix."
echo "[icons]: Numix-Circle."
echo "===================================================================="
gnome-tweak-tool
echo "Numix has been manually configured."
source ~/.profile
changeBackground backgrounds/background.png
changeProfilePicture $(whoami) profile_pictures/profile_picture.png
echo "The Numix has been installed."
sleep 5
fi
fi
else
echo "Numix has already been installed."
sleep 5
fi
}
My .profile file:
#Change desktop background f(x)
#Ex. changeBackground /path/to/image.png
function changeBackground() {
FILE="file://$(readlink -f "$1")"
fileName="${FILE##*/}" # baseName + fileExtension
echo "Changing desktop background to: '$fileName' ..."
dconf write "/org/gnome/desktop/background/picture-uri" "'$FILE'"
echo "Desktop background has been changed."
sleep 5
}
#Change profile picture f(x)
#Ex. changeProfilePicture username /path/to/image.png
function changeProfilePicture() {
FILE="$(readlink -f "$2")"
fileName="${FILE##*/}" # baseName + fileExtension
echo "Checking if 'imagemagick' is installed ..."
if ! command brew ls --versions imagemagick >/dev/null 2>&1; then
echo "Installing 'imagemagick' ..."
brew install imagemagick -y
echo "'Imagemagick' has been installed."
sleep 5
else
echo "'Imagemagick' has already been installed."
sleep 5
fi
echo "Changing profile picture to: '$fileName' ..."
sudo mkdir -p '/var/lib/AccountsService/icons/'"$1"
sudo convert "$2" -set filename:f '/var/lib/AccountsService/icons/'"$1/%t" -resize 96x96 '%[filename:f].png'
echo "Profile picture has been changed."
sleep 5
}
|
Instead of getting the user to manually use gnome-tweak-tool, you can set the gtk and window-manager themes and the icon-theme in your script with gsettings. e.g.
gsettings set org.gnome.desktop.interface gtk-theme Numix
gsettings set org.gnome.desktop.wm.preferences theme Numix
gsettings set org.gnome.desktop.interface icon-theme Numix-Circle
BTW, unless numix-gtk-theme and numix-icon-theme-circle are executables somewhere in the PATH directories, running which on them will not do what you want.
Check for the existence of a specific file or directory, instead. e.g.
if [ ! -d /usr/share/themes/Numix ] ; then ... fi
I don't have the Numix theme installed, so I don't know if that's the right directory - use dpkg -L numix-gtk-theme and dpkg -L numix-icon-theme-circle to find out the correct directories to search for.
Alternatively, don't bother checking to see if the packages are already installed. Just run:
apt-get -y install numix-gtk-theme numix-icon-theme-circle gnome-tweak-tool
(optionally redirect stdout and stderr to /dev/null)
If the latest version of those packages is already installed, apt-get will do nothing. Otherwise, it will install or upgrade them.
Finally, use sudo add-apt-repository -y ppa:numix/ppa so that it doesn't prompt the user. If the repository has already been added, no harm done - it will comment out previous entries in the /etc/sources.list.d/numix-ubuntu-ppa-yakkety.list file and add the ppa to the start of the file.
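Putting those suggestions together, a streamlined installer might look like the sketch below. The theme directory path is an assumption (verify it with dpkg -L numix-gtk-theme), and the THEME_DIR override exists only so the check can be exercised without installing anything:

```shell
#!/bin/sh
# Sketch of a streamlined installer based on the suggestions above.
install_numix() {
    # Skip the install if the theme directory already exists.
    # /usr/share/themes/Numix is an assumption -- verify with
    # `dpkg -L numix-gtk-theme`. THE THEME_DIR override is for testing.
    if [ -d "${THEME_DIR:-/usr/share/themes/Numix}" ]; then
        echo "Numix already installed."
        return 0
    fi
    sudo add-apt-repository -y ppa:numix/ppa
    sudo apt-get update
    sudo apt-get install -y numix-gtk-theme numix-icon-theme-circle gnome-tweak-tool
    # Apply the theme without opening gnome-tweak-tool:
    gsettings set org.gnome.desktop.interface gtk-theme Numix
    gsettings set org.gnome.desktop.wm.preferences theme Numix
    gsettings set org.gnome.desktop.interface icon-theme Numix-Circle
}
# install_numix   # call after reviewing the paths above
```

This removes all three nested `which` checks and the manual gnome-tweak-tool step in one pass.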
| How would I make this script more efficient? [closed] |
1,477,679,498,000 |
I installed Raspbian to a 16 GB card and expanded the filesystem. When I made a dd backup of the card, the .img file output was ~16 GB. Most of it is unused space in the ext4 partition—I'm only using like 2.5 GB in that partition. (There are two partitions—the first is FAT for boot and the second is ext4 for rootfs.) I'd like to shrink the backup.img file which resides on an Ubuntu 16.04 Sever installation (no GUI) so that I can restore the image to a card of smaller size (say 8GB for example).
So far, I have mounted the ext4 partition to /dev/loop0 by using the offset value provided to me by fdisk -l backup.img. Then I used e2fsck -f /dev/loop0 and then resize2fs -M /dev/loop0 which appeared to shrink the ext4 fs... am I on the right track? I feel like parted might be next, but I have no experience with it.
How do I accomplish this using only cli tools?
Update:
Here is the output from running fdisk -l backup.img:
Disk backup.img: 14.9 GiB, 15931539456 bytes, 31116288 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x79d38e92
Device Boot Start End Sectors Size Id Type
backup.img1 * 8192 124927 116736 57M e W95 FAT16 (LBA)
backup.img2 124928 31116287 30991360 14.8G 83 Linux
|
I can confirm you are on the right track shrinking that filesystem; fdisk/parted is next. The tricky part is getting the partition size right relative to the shrunken filesystem: do the math, or leave a hundred KB of slack just to be safe. You can adjust it later on the new card if need be.
The order is normally: umount, resize2fs, fdisk/parted, partprobe, fsck, and mount to check all is OK. As the partition you are resizing is smaller than 2 TB, you can use either fdisk or parted.
The filesystem resize has to come first, as you can't reliably shrink the partition while the filesystem still claims the space you want to reclaim. Then you shrink the partition itself, both for consistency and so that the space you want to drop is no longer reserved for use. The fsck comes last, to confirm the filesystem structure is consistent with the new size.
I will leave these RH articles here; note that they omit the partprobe step, which matters because the new partition size is not always recognised immediately, especially by older kernels.
How to Shrink an ext2/3/4 File system with resize2fs
How to Resize a Partition using fdisk
Your missing steps are:
sudo fdisk /dev/loop0
p - to check for partition number (probably 2)
d - to delete
2 - partition 2
n - new partition
p - primary
ENTER - default beginning
+new size - smaller card size
w - write it
sudo partprobe /dev/loop0
To finish it off, umount the image file; since the extra space is no longer marked as used, either by the filesystem or by the partition table in your image file, the operating system won't try to use it, so the file can be truncated safely to the intended size:
truncate -s 8GB fileName
To get the sizes right (I am lazy): I would shrink the filesystem to somewhat less than needed (the size of the new partition minus ~400 KB, then grow it back to fill the partition after shrinking the partition), and create the partition at the needed size (8 GB, minus 2048 sectors of possible padding before the first partition, minus the size of the first partition). Not much math involved.
For calculating it properly, please have a look at:
How To Resize ext3 Partitions Without Losing Data
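If you prefer an exact truncation point over a round 8GB, it can be computed from the partition table: the image only needs to extend to the last sector of the (shrunken) second partition. The sector values below are illustrative; take yours from fdisk -l on the image:

```shell
# Sketch: compute a safe truncation point from the partition table.
# Read the sector size and the end sector of partition 2 from
# `fdisk -l backup.img`; the values here are made up for the demo.
SECTOR_SIZE=512
END_SECTOR=15624191
TRUNCATE_BYTES=$(( (END_SECTOR + 1) * SECTOR_SIZE ))
echo "$TRUNCATE_BYTES"
# then: truncate -s "$TRUNCATE_BYTES" backup.img
```

The +1 is because sector numbers are zero-based: a partition ending at sector N occupies N+1 sectors from the start of the disk.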
| Shrinking Raspberry Pi SD .img via Ubuntu Server (cli) |
1,477,679,498,000 |
I have thousands of PDF files named in the format
Author Year Title of the book
The first two spaces are significant: they separate the author, the year, and the title. The title itself may contain any number of spaces. I am looking for a script that writes the author to the author metadata field in the PDF, the title to the title field, and the year to the year field. Exiftool seems the most promising of all the tools I looked at.
Can you guys help me?
|
Some EXIF manipulation tools have a built-in way to rename files based on EXIF data, but I don't know of one that can do it the other way round. So let the shell call the program with the right parts of the file names. Here's a script that processes just one file (pass the name as the sole argument of the script).
#!/bin/sh
title=${1##*/}
author=${title%% *}; title=${title#* }
year=${title%% *}; title=${title#* }
exiftool -Author="$author" -Title="$title" -CreateDate="$year" "$1"
Explanation: I use parameter expansion constructs to perform some basic string processing: put the base name (after the last /) into title; put the part up to the first space into author and remove that from title; repeat with the year.
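The string splitting can be checked in isolation with a made-up file name (note the extension stays attached to the title, which may be what you want anyway):

```shell
# Demo of the parameter-expansion splitting on a sample name.
name="Smith 1999 A Book About Things.pdf"
title=${name##*/}            # strip any leading directory path
author=${title%% *}          # up to the first space
title=${title#* }            # drop author and the first space
year=${title%% *}            # up to the next space
title=${title#* }            # what remains is the title
echo "$author | $year | $title"
```

Running it prints `Smith | 1999 | A Book About Things.pdf`, confirming each field lands where the exiftool arguments expect it.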
To process all the files in a directory, put that code in a loop.
#!/bin/sh
for filename in *\ *\ *.pdf; do
title=${filename##*/}
author=${title%% *}; title=${title#* }
year=${title%% *}; title=${title#* }
exiftool -Author="$author" -Title="$title" -CreateDate="$year" "$filename"
done
To process all the files in a directory and its subdirectories recursively, use find.
find /path/to/top/directory -name '* * *.pdf' -type f -exec sh -c '
for filename do
…
done
' _ {} +
| Write PDF metadata from the file name using Exiftool or PDFtk |
1,477,679,498,000 |
What's going to happen if I shut down my PC after suspending a terminal process (with Ctrl+Z)? In my case, it's sudo apt-get upgrade. The upgrade is 250 MB, and it takes far longer to download those files than to download random files of the same size from websites.
So, will I lose all my downloaded data, or will it continue the download process after I turn my PC back on and resume the process with fg command?
|
If you shut down your computer, it starts again with no program running. Depending on your desktop environment, some of the programs you were using may be started automatically when you log in again, and if the programs remember their open files then they'll have the same files open, but that's about it.
Many GUI applications are big things that do a lot, including session saving. In contrast, command line applications tend to be built on a philosophy of doing one task well, and using the shell as a glue language to control applications and tie them together. If a well-designed command line application is interrupted, running it again with the same parameters (remembered in the shell history) should finish the job. And that works for apt-get. Run the command again and it'll start again where it left off, or close enough.
A Debian upgrade consists of many files, with a median size of around 100 kB (the largest packages, at around 30–60 MB, are mostly large datasets and debugging information). If you interrupt the upgrade process, the files that are already fully downloaded will still be around (in /var/cache/apt/archives) and won't be downloaded again.
| Shutdown PC after suspending terminal process (apt-get upgrade) |