1,533,849,977,000 |
I want to extract text info from layers (like font, font style, font size and content) together with the name and number of each layer.
Any command-line tool available in the standard repositories is an option.
I know it can be done with Photoshop scripting, but for the sake of science I would like to do it from a Unix server, and maybe later extract all the info from multiple files in a zip and process them with other tools.
|
GIMP has the Script-Fu Scheme extension, which can be run from the command line. This will be sketchy because I have not written any Scheme in some 3-4 years, but here goes nothing:
Assuming the following script in a file called sc.sch:
(define (go-by-layers no layers)
  (while (< 0 no)
    (let* ((layer (vector-ref layers (- no 1))))
      (display "Layer name: ")
      (display (car (gimp-item-get-name layer))) (newline)
      (if (< 0 (car (gimp-item-is-text-layer layer)))
          (begin
            (display "This is a text layer") (newline)
            (display "Font: ")
            (display (car (gimp-text-layer-get-font layer))) (newline)
            (display "Text: ")
            (display (car (gimp-text-layer-get-text layer))) (newline)))
      (if (>= 0 (car (gimp-item-is-text-layer layer)))
          (begin
            (display "Not a text layer")
            (newline)))
      (set! no (- no 1)))))
(let* ((layers (gimp-image-get-layers 1)))
  (display "Number of Layers: ") (display (car layers)) (newline)
  (go-by-layers (car layers) (cadr layers))
  (display "end") (newline))
(gimp-quit 0)
We can do:
$ gimp zz.psd -b - < sc.sch 2>/dev/null
Welcome to TinyScheme, Version 1.40
Copyright (c) Dimitrios Souflis
ts> go-by-layers
ts> Number of Layers: 2
Layer name: Background
Not a text layer
Layer name: Layer 1
Not a text layer
end
#t
This is quite hacky since we are running the batch mode from STDIN and redirecting the script in. We also get the prompt output, which is quite ugly, but should work with most GIMP versions.
How does this work:
Since we have only one image loaded, we know its image ID is 1.
We get the layers with (gimp-image-get-layers 1)
The layers are a fixed vector so we walk through them using vector-ref (inside a while)
(gimp-item-is-text-layer layer) provides us with information whether we can execute text specific operations on the layer.
gimp-text-layer-get-* give us info about the text layer.
For non-text layers we print less info.
How to get a function reference for script-fu?
In GIMP, go to Filters -> Script-Fu -> Console. There, next to the text field where you can enter Scheme commands, there is a Browse button that opens the procedure reference for your version of GIMP.
Disclaimer: this is poorly tested; I only had a simple two-layer PSD (without any text) to test with.
| Extract text layer from PSD ( ImageMagick or GiMP ) |
1,533,849,977,000 |
I have a directory that goes like this:
drwxrwxrwx 6 www-data www-data 4096 Jun 8 10:21 ./
drwxr-xr-x 31 user1 user1 4096 Jun 8 10:40 ../
lrwxrwxrwx 1 www-data www-data 66 Jun 8 10:21 archive -> /media/user1/7f62b5e4-4fe7-43c2-b0d0-8dad6e5a2381/archive/
I try to create a file with touch in the symbolic link with the user www-data. I get this error:
$ sudo -u www-data touch archive/myfile
touch: cannot touch ‘archive/myfile’: Permission denied
The root directory and the archive directory are chmod 777.
But this works correctly
$ touch archive/myfile
What am I missing?
|
I fixed the problem by mounting the hard disk that the symbolic link points to. /media/ is just the default mount-point path, so the disk has to be mounted for the path to be valid. Here is a link where you can find how to mount a hard disk automatically: InstallingANewHardDrive
| Can't create a file through symbolic link |
1,533,849,977,000 |
All programs/commands I attempt to print certain PDFs with (lpr, lp, Okular, Evince, Xpdf) print solid black pages. The one exception is Gimp, which allows me to import PDFs one page at a time and properly print.
Obviously, this isn't a practical solution for multiple PDFs, so I would like to see exactly what command Gimp is using to print so I can try reproducing it from the command line.
I tried running it with the --verbose flag and printing, but there doesn't seem to be any output showing the lp or lpr command Gimp is using. How can I catch this print command?
Please Note: I'm not looking for help with the blank page printing problem. There are a ton of posts on that on the internet and it just seems to be black magic how one works for some people but not others. Please refrain from answering/commenting about this.
|
Running pdfinfo on the PDF generated by GIMP's print function, as well as checking the file generated by GIMP's PostScript function, suggests that the program doing the printing is Cairo.
Here is the line in the PostScript file:
Creator: cairo 1.14.8 (http://cairographics.org)
| What command is gimp using to print? |
1,533,849,977,000 |
How can I strip the time from a ping return? For example:
64 bytes from 10.3.0.1: icmp_seq=0 ttl=63 time=2.610 ms
I want to grab the value after time= and pass it to a test like:
if time>=50.0; then do_something; fi
|
So if you wanted to get just the time value without the ms label:
HOST="127.0.0.1"
PING_MS=$(ping -c1 "$HOST" | /usr/bin/awk 'BEGIN { FS="=" } /time=/{gsub(/ ms/, ""); print $NF; exit}')
This gives me:
0.058
Now, if we wanted to test if time>=50.0, we could use awk for this, too, since POSIX sh itself can't compare decimal numbers:
if echo "$PING_MS" | awk '{exit $1>=50.0?0:1}'; then
echo "Ping time is >= 50.0ms."
fi
You could shorten this to:
if ping -c1 "$HOST" | /usr/bin/awk 'BEGIN { FS="=" } /time=/{gsub(/ ms/, ""); exit $NF>=50.0?0:1}'; then
echo "Ping time is >= 50.0ms."
fi
FS is the Field Separator, and $NF is always the last field. $NF>=50.0?0:1 will exit with a success exit code if the last field is >=50.0; or an error exit code if not. /time=/ matches only lines that contain time=. gsub(/ ms/, ""); removes " ms" from the string.
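To see the pieces working without a live host, the same awk can be fed a canned ping line (the sample line from the question):

```shell
# Sample ping line from the question; no network needed.
line='64 bytes from 10.3.0.1: icmp_seq=0 ttl=63 time=2.610 ms'

# Extract just the number after "time=":
ms=$(printf '%s\n' "$line" | awk 'BEGIN { FS="=" } /time=/{gsub(/ ms/, ""); print $NF; exit}')
echo "$ms"    # 2.610

# Compare against the 50.0 ms threshold:
if printf '%s\n' "$line" | awk 'BEGIN { FS="=" } /time=/{gsub(/ ms/, ""); exit $NF>=50.0?0:1}'; then
    echo "slow"
else
    echo "fast"
fi
```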
| Comparing ping times in FreeBSD sh |
1,533,849,977,000 |
I want to add this command into a launcher on the panel or desktop so that I can start the inserted DVD with a single click.
I would prefer this to selecting one of the many options my KDE panel notifier offers, or to opening VLC, going to Media - Open disc etc. (On the other hand I prefer not to enable the option of playing automatically any inserted DVD.)
|
GAD3R gave the answer in a comment to the question:
vlc dvd://
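For the launcher itself, a .desktop file wrapping that command could look like the sketch below; the file name, Name and Icon values are illustrative, not from the original answer:

```shell
# Write a minimal launcher into the per-user applications directory.
mkdir -p ~/.local/share/applications
cat > ~/.local/share/applications/play-dvd.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=Play DVD
Exec=vlc dvd://
Icon=vlc
Terminal=false
EOF
```

Most desktops pick up files in that directory automatically, so the entry can then be dragged onto a panel or desktop.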
| CLI command to start playing inserted DVD in VLC? |
1,533,849,977,000 |
I'm trying to export the result of a command line as an environment variable. Here is how I'm doing it:
group_id=$(aws ec2 describe-security-groups --filters Name=group-name,Values=${group_name} \
| jq '.["SecurityGroups"][0].GroupId' \
| sed -e 's/^"//' -e 's/"$//'
)
However when I run the bash file, I get the following error:
Error parsing parameter '--filters': Expected: '=', received: 'EOF' for input:
^
The command is valid, as it works when I try it directly from the command line. When I use set -exv on the top of this bash file, I get the content of the file then:
+ case $1 in
+ init
aws ec2 describe-security-groups --filters Name=group-name,Values=${group_name} \
++ aws ec2 describe-security-groups --filters Name=group-name,Values=docker-networking ' '
Error parsing parameter '--filters': Expected: '=', received: 'EOF' for input:
^
+ group_id=
Any idea why I'm getting this error?
|
There seems to be a space after the backslash. For a command to continue on the next line, the backslash must be the very last character on the line.
Deduced from the output of set -xv:
++ aws ec2 describe-security-groups --filters Name=group-name,Values=docker-networking ' '
here -> ~~~
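The failure mode can be reproduced in isolation; printf is used here so the trailing space after the backslash is explicit rather than invisible:

```shell
# Backslash + newline continues the line: the shell sees one command.
printf 'echo a \\\necho b\n' | sh     # prints: a echo b

# Backslash + space + newline does not: "\ " is an escaped-space
# argument, and the newline then ends the command, so two separate
# commands run.
printf 'echo a \\ \necho b\n' | sh    # prints two lines
```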
| Error when exporting the result of a valid command as a bash variable |
1,533,849,977,000 |
I have a problem I have been trying to figure out:
We have a stock file CSV, which contains the stock in multiple locations.
The csv looks like this:
stock_no,primary,secondary,tertiary,cstock,direct
ABU0029843,1,,,5,
ABU0029934,60,,,5,
ABU0030034,,30,,5,
I would like the end result to look something like this (essentially summing up and removing the empty columns.)
stock_no,primary
ABU0029843,6
ABU0029934,65
ABU0030034,35
I have tried various methods with awk, but I seem to be returning values of 0
However, I am not too familiar with awk, so I am sure I am doing something wrong. Any help would be appreciated.
|
You can try the following awk:
awk 'BEGIN { FS = OFS = ","; } NR == 1 { print $1, $2; next; } { for (x = 3; x <= NF; x++) $2 += $x; print $1, $2 } ' file
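Reproduced against the sample data from the question (the file name stock.csv is just for illustration):

```shell
cat > stock.csv <<'EOF'
stock_no,primary,secondary,tertiary,cstock,direct
ABU0029843,1,,,5,
ABU0029934,60,,,5,
ABU0030034,,30,,5,
EOF

# Keep the first two header fields, then fold columns 3..NF into
# column 2 (empty fields count as 0 in awk arithmetic).
awk 'BEGIN { FS = OFS = ","; } NR == 1 { print $1, $2; next; } { for (x = 3; x <= NF; x++) $2 += $x; print $1, $2 }' stock.csv
```

This prints exactly the desired output: stock_no,primary / ABU0029843,6 / ABU0029934,65 / ABU0030034,35.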
| Adding two columns together in CSV and outputting to new CSV file |
1,533,849,977,000 |
I have listed the device folder twice, once without the SD card in the slot and once with it inserted; the system automatically adds one file in the device folder.
$ ls /dev | wc -l
205
$ ls /dev | wc -l
206
I could put each listing into a separate file: ls /dev > foo.
But how can I determine from this point the device file that was added?
|
You could run this before adding the device to store the initial list in a file:
ls /dev >~/a
And then this after adding the device:
ls /dev | diff -u ~/a -
This should show you in what way the two lists of files differ. diff
shows the differences between two text files, and flag -u changes its
output format: lines added will be prefixed with a + sign. For
example, if you get the following output (I omitted the diff header):
sdc
sdd
sde
+sdf
sg0
sg1
sg2
then it means that the new device that got created is /dev/sdf.
You can then delete the temporary file ~/a.
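The mechanics can be tried without touching /dev by diffing two made-up listings:

```shell
# Fabricated "before" and "after" listings standing in for ls /dev output.
printf 'sdc\nsdd\nsde\n'      > before.txt
printf 'sdc\nsdd\nsde\nsdf\n' > after.txt

# Lines present only in the second listing show up prefixed with "+"
# (the [^+] excludes the "+++" file header line).
diff -u before.txt after.txt | grep '^+[^+]'    # prints: +sdf
```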
Another way to get the information you are looking for would be to tail -f /var/log/messages: you should see kernel messages mentioning the new device's appearance and disappearance.
| How to determine the only additional file in otherwise two identical listings? |
1,533,849,977,000 |
What do I have to do in the terminal to switch all the front ends of the X server off, just to have the X Window System running without any window manager or desktop environment?
|
You should kill your display manager to switch all front ends of the X server off. It could be:
mdm - MATE Display Manager
gdm - Gnome Display Manager
kdm - KDE Display Manager
xdm - X Window Display Manager
The corresponding one should be killed. E.g:
sudo killall mdm
To start a plain xserver you should type this command:
sudo X
After that, for example you could start an xterm on it:
xterm -geometry +1+1 -n login -display :0
| how to start into the x window server from linux mint ? [closed] |
1,533,849,977,000 |
I'm using ifconfig on OpenSUSE. When I run ifconfig eth0 I get
eth0 Link encap:Ethernet HWaddr CE:FD:75:DF:A5:6D
inet addr:172.16.4.177 Bcast:172.16.5.255 Mask:255.255.254.0
inet6 addr: fe80::adfd:75ef:fedf:v56d/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:11812456 errors:0 dropped:2 overruns:0 frame:0
TX packets:7000495 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:2591436376 (2471.3 Mb) TX bytes:9196901478 (8770.8 Mb)
I'm looking to format this so each parameter gets returned on a new line using sed or awk e.g.:
eth0
Link encap:Ethernet
HWaddr CE:FD:75:DF:A5:6D
inet addr:172.16.4.177
Bcast:172.16.5.255
Mask:255.255.254.0
inet6 addr: fe80::adfd:75ef:fedf:v56d/64
Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500
Metric:1
RX packets:11812456
errors:0
dropped:2
overruns:0
frame:0
TX packets:7000495
errors:0
dropped:0
overruns:0
carrier:0
collisions:0
txqueuelen:1000
RX bytes:2591436376 (2471.3 Mb)
TX bytes:9196901478 (8770.8 Mb)
I've tried ifconfig eth0 | sed 's/ /\r/' but that doesn't seem to split on the double space.
|
You can start with
sed 's/\(:[^: ]\+\) \([^(]\)/\1\n\2/g;s/\()\)/\1\n/;s/^ \+//'
it should be close enough, and most probably can be simplified and optimized further.
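To see what the substitutions do, here is the same sed applied to one sample line from the question (this assumes GNU sed: \+ and \n in the replacement are GNU extensions):

```shell
# First substitution splits after each ":value" pair, second splits
# after a ")", third strips the leading indentation.
printf '          RX packets:11812456 errors:0 dropped:2 overruns:0 frame:0\n' |
  sed 's/\(:[^: ]\+\) \([^(]\)/\1\n\2/g;s/\()\)/\1\n/;s/^ \+//'
# prints:
# RX packets:11812456
# errors:0
# dropped:2
# overruns:0
# frame:0
```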
The result:
eth0 Link encap:Ethernet
HWaddr CE:FD:75:DF:A5:6D
inet addr:172.16.4.177
Bcast:172.16.5.255
Mask:255.255.254.0
inet6 addr: fe80::adfd:75ef:fedf:v56d/64
Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500
Metric:1
RX packets:11812456
errors:0
dropped:2
overruns:0
frame:0
TX packets:7000495
errors:0
dropped:0
overruns:0
carrier:0
collisions:0
txqueuelen:1000
RX bytes:2591436376 (2471.3 Mb)
TX bytes:9196901478 (8770.8 Mb)
| Formatting ifconfig using sed/awk |
1,533,849,977,000 |
I use Fedora. There are a number of programmes packaged in a security spin. The included desktop files work, but they open the programmes with root privileges.
How can I edit the desktop file shown here to open the target without root? I have tried every obvious edit I can think of, but am not having any luck.
#!/usr/bin/env xdg-open
[Desktop Entry]
Name=argus
Exec=gnome-terminal -e "su -c 'argus -h; bash'"
TryExec=argus
Type=Application
Categories=System;Security;X-SecurityLab;X-Reconnaissance;
|
#!/usr/bin/env xdg-open
[Desktop Entry]
Name=argus
Exec=gnome-terminal -e "sh -c 'argus -h; bash'"
TryExec=argus
Type=Application
Categories=System;Security;X-SecurityLab;X-Reconnaissance;
This matches the behavior most closely. It could be improved upon by someone who knows argus better than I do.
| .desktop file. Correct exec path |
1,415,863,041,000 |
I use GNU Screen on occasion, and my Emacs keybinding muscle memory is hard to overcome. I know that I can send a control-a (i.e. "go to beginning of line") by hitting "c-a a", but I'm wondering if it's possible to remap the keybindings so that c-a c-a sends the c-a.
I've tried a simple "bind ^a ^a" in the .screenrc, but that doesn't seem to do the trick.
I suspect there's some way to do it with the combination of bind and bindkey, but the answer isn't obvious to me.
|
You want:
escape ^a^a
or
bind ^a meta
(since meta sends the command character, i.e. here ^a).
But since ^a is typically more useful than ^z in GNU Screen, you could use ^z as the command character. This is what I do:
escape ^z^z
| Gnu screen: remap "c-a c-a" to send "c-a" |
1,415,863,041,000 |
I have recently finished a script that will ask for a hostname and automatically take the argument of the function and ssh into one of my work servers.
Here is a copy of my script:
#!/bin/bash
echo "Provide hostname: "
read host
createSSHFunction()
{
arg1=$1
ssh $host
}
createSSHFunction $host
while((1)); do ~/sshScript.sh; done
What I notice though is say I want to ssh into one of the servers and I got disconnected, by instinct I will press the Up arrow key to redisplay the same argument I put in the last time. Here is an example of that:
Provide hostname:
crf-alpha
support@localhost's password:
Last login: Thu Jul 24 12:16:33 2014 from localhost
[support@crf-alpha ~]$
Connection to localhost closed.
Provide hostname:
^[[A
Is there a way to code my current script so that it buffers whatever I typed last? For example, I ssh into crf-alpha and get disconnected; my script runs again, prompting me to enter a hostname. I would like pressing the Up arrow key to re-display the last input I entered, in this case crf-alpha, rather than the weird symbol I always get.
|
If you can install the rlwrap utility, then it is as simple as doing
rlwrap ./yourscript.sh
This will allow you to use the up and down arrow keys to browse through history, as well as the left and right arrow keys to edit the current input, for programs that do not support this already.
| Memory buffer in a bash script |
1,415,863,041,000 |
How can I partition /dev/sdb into /dev/sdb1 and /dev/sdb2, and then format /dev/sdb1 to exFAT (or FAT32) from LBA=1 to 2097152 using commands? (LBA=0 is reserved for the MBR.)
|
I have created a loop device for testing:
dd if=/dev/zero of=tmp.img bs=1M count=100
modprobe loop
losetup /dev/loop0 tmp.img
And then:
# parted --script /dev/loop0 unit s mklabel msdos \
mkpart primary fat32 1 2048 mkpart primary fat32 2049 4096 print
Warning: The resulting partition is not properly aligned for best performance.
Warning: The resulting partition is not properly aligned for best performance.
Model: Loopback device (loopback)
Disk /dev/loop0: 204800s
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number Start End Size Type File system Flags
1 1s 2048s 2048s primary lba, type=0c
2 2049s 4096s 2048s primary lba, type=0c
Formatting:
mkfs.vfat -F 32 /dev/sdb1
| How can I partition and format my disk to an accuracy of "LBA" by commands? |
1,415,863,041,000 |
Hello, I open my terminal window on Mac OS X 10.6.8 as I am trying to update my Ruby to 1.9.3, and the terminal gives me this response immediately as it opens:
-bash: export: /Library/Frameworks/Python.framework/Versions/Current/bin': not a valid identifier
-bash: export:/Library/Frameworks/Python.framework/Versions/2.7/bin': not a valid identifier
-bash: export: /Library/Frameworks/Python.framework/Versions/Current/bin': not a valid identifier
-bash: export:/Library/Frameworks/Python.framework/Versions/Current/bin': not a valid identifier
-bash: export: /usr/bin': not a valid identifier
-bash: export:/bin': not a valid identifier
-bash: export: /usr/sbin': not a valid identifier
-bash: export:/sbin': not a valid identifier
-bash: export: /usr/local/bin': not a valid identifier
-bash: export:/usr/local/git/bin': not a valid identifier
-bash: export: /usr/X11/bin': not a valid identifier
-bash: export:/Users/oskarniburski/.rvm/bin': not a valid identifier
-bash: export: /usr/X11R6/bin': not a valid identifier
-bash: export:/Library/Frameworks/Python.framework/Versions/Current/bin': not a valid identifier
-bash: export: /Library/Frameworks/Python.framework/Versions/2.7/bin': not a valid identifier
-bash: export:/Library/Frameworks/Python.framework/Versions/Current/bin': not a valid identifier
-bash: export: /Library/Frameworks/Python.framework/Versions/Current/bin': not a valid identifier
-bash: export:/usr/bin': not a valid identifier
-bash: export: /bin': not a valid identifier
-bash: export:/usr/sbin': not a valid identifier
-bash: export: /sbin': not a valid identifier
-bash: export:/usr/local/bin': not a valid identifier
-bash: export: /usr/local/git/bin': not a valid identifier
-bash: export:/usr/X11/bin': not a valid identifier
-bash: export: /Users/oskarniburski/.rvm/bin': not a valid identifier
-bash: export:/usr/X11R6/bin': not a valid identifier
I tried to change my path but it did not work. I am not sure how to go about this problem and have been reading a whack load of forums. Any ideas?
Here is the bash_profile:
$ /bin/cat ~/.bash_profile
# Setting PATH for MacPython 2.5
# The orginal version is saved in .bash_profile.pysave
PATH="/Library/Frameworks/Python.framework/Versions/Current/bin:${PATH}"
export PATH
# Setting PATH for MacPython 2.5
# The orginal version is saved in .bash_profile.pysave
PATH="/Library/Frameworks/Python.framework/Versions/Current/bin:${PATH}"
export PATH
# Setting PATH for Python 2.7
# The orginal version is saved in .bash_profile.pysave
PATH="/Library/Frameworks/Python.framework/Versions/2.7/bin:${PATH}"
export PATH
# Setting PATH for EPD_free-7.3-2
# The orginal version is saved in .bash_profile.pysave
PATH="/Library/Frameworks/Python.framework/Versions/Current/bin:${PATH}"
export PATH
# Setting PATH for Python 3.3
# The orginal version is saved in .bash_profile.pysave
PATH="/Library/Frameworks/Python.framework/Versions/3.3/bin:${PATH}"
export PATH
[[ -s "$HOME/.rvm/scripts/rvm" ]] && source "$HOME/.rvm/scripts/rvm" # Load RVM into a shell session *as a function*
export PATH=/usr/local/bin:/Library/Frameworks/Python.framework/Versions/3.3/bin /Library/Frameworks/Python.framework/Versions/Current/bin /Library/Frameworks/Python.framework/Versions/2.7/bin /Library/Frameworks/Python.framework/Versions/Current/bin /Library/Frameworks/Python.framework/Versions/Current/bin /usr/bin /bin /usr/sbin /sbin /usr/local/bin /usr/local/git/bin /usr/X11/bin /Users/oskarniburski/.rvm/bin /usr/X11R6/bin
export PATH=/usr/local/bin:/Library/Frameworks/Python.framework/Versions/3.3/bin /Library/Frameworks/Python.framework/Versions/Current/bin /Library/Frameworks/Python.framework/Versions/2.7/bin /Library/Frameworks/Python.framework/Versions/Current/bin /Library/Frameworks/Python.framework/Versions/Current/bin /usr/bin /bin /usr/sbin /sbin /usr/local/bin /usr/local/git/bin /usr/X11/bin /Users/oskarniburski/.rvm/bin /usr/X11R6/bin
##
# Your previous /Users/oskarniburski/.bash_profile file was backed up as /Users/oskarniburski/.bash_profile.macports-saved_2013-09-26_at_17:32:30
##
# MacPorts Installer addition on 2013-09-26_at_17:32:30: adding an appropriate PATH variable for use with MacPorts.
export PATH=/opt/local/bin:/opt/local/sbin:$PATH
# Finished adapting your PATH environment variable for use with MacPorts.
|
OK, the main issue here was that you had spaces separating directory entries in your $PATH, and that these spaces sat in unquoted variables, which confused bash.
What you wanted to do in this case was add a directory to your path. The correct syntax is PATH="/foo:/bar/baz:$PATH". Keeping $PATH in the assignment means its current value is preserved, so you do not overwrite what was already there. The directories in $PATH are searched in order, so put the new directories at the end (PATH="$PATH:/foo:/bar") if you want them searched last, or at the beginning (PATH="/foo:/bar:$PATH") if you want them searched first.
Another problem was that you had many duplicate paths. You can find these by running
$ echo $PATH | perl -pne 's/:/\n/g' | sort | uniq -d
/bin
/Library/Frameworks/Python.framework/Versions/2.7/bin
/Library/Frameworks/Python.framework/Versions/3.3/bin
/Library/Frameworks/Python.framework/Versions/Current/bin
/sbin
/usr/bin
/usr/local/bin
/usr/sbin
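The same duplicate check can be tried against any string; tr is used below instead of perl, and the PATH value is a made-up example in which /usr/bin appears twice:

```shell
# One path component per line, sorted, then print only duplicated lines.
P='/usr/bin:/bin:/usr/local/bin:/usr/bin'
printf '%s\n' "$P" | tr ':' '\n' | sort | uniq -d    # prints: /usr/bin
```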
Finally, you were exporting the $PATH multiple times which is pointless. I removed all duplicates and fixed your syntax and ended up with this:
# Setting PATH for MacPython 2.5
# The orginal version is saved in .bash_profile.pysave
PATH="/Library/Frameworks/Python.framework/Versions/Current/bin:${PATH}"
# Setting PATH for Python 2.7
# The orginal version is saved in .bash_profile.pysave
PATH="/Library/Frameworks/Python.framework/Versions/2.7/bin:${PATH}"
# Setting PATH for Python 3.3
# The orginal version is saved in .bash_profile.pysave
PATH="/Library/Frameworks/Python.framework/Versions/3.3/bin:${PATH}"
# Load RVM into a shell session *as a function*
[[ -s "$HOME/.rvm/scripts/rvm" ]] && source "$HOME/.rvm/scripts/rvm"
PATH="/usr/local/git/bin:/usr/X11/bin:/Users/oskarniburski/.rvm/bin:/usr/X11R6/bin:$PATH"
##
# Your previous /Users/oskarniburski/.bash_profile file was backed up
# as /Users/oskarniburski/.bash_profile.macports-saved_2013-09-26_at_17:32:30
##
# MacPorts Installer addition on 2013-09-26_at_17:32:30: adding an appropriate PATH
# variable for use with MacPorts.
export PATH="/opt/local/bin:/opt/local/sbin:$PATH"
# Finished adapting your PATH environment variable for use with MacPorts.
Copy that file, open your terminal and run these commands:
/bin/cp ~/.bash_profile ~/bash_profile.bad
/bin/cat > ~/.bash_profile
The first will make a backup of your current ~/.bash_profile (just in case). The second will appear to do nothing, but it will have opened ~/.bash_profile for writing. Just paste what I gave above directly into the terminal, then hit Enter and then Ctrl-C. That should bring everything back to normal.
NOTE: You were specifying /bin,/sbin,/usr/bin and /usr/local/bin in your .bash_profile. These are almost certainly already in your $PATH and don't need to be added. If they are missing (echo $PATH to see the current value) just add them using the syntax I described above.
| Bash Commands not working on Mac |
1,415,863,041,000 |
Possible Duplicate:
redirect output of a running program to /dev/null
Is it possible to change stdout after starting something as a background application in the command line?
Say I run test.py:
import time
while True:
print "Hello"
time.sleep(1)
and then do:
$ python test.py &
Can I redirect the output to /dev/null somehow?
Relates to: How to redirect output of a running program to /dev/null
With an answer on this site: How to redirect output of a running program to /dev/null by Mike Perdide
It's also a direct duplicate of a StackOverflow question: Redirect STDERR / STDOUT of a process AFTER it's been started, using command line?
|
Not unless you taught the program to do so somehow (say, on receipt of a particular signal such as SIGUSR1 it reopens sys.stdout and sys.stderr on /dev/null). Otherwise, once it's been started you have very little control over it.
| Can I redirect stdout from a background application after starting it? [duplicate] |
1,415,863,041,000 |
I have one file: combined.txt like this:
GO_GLUTAMINE_FAMILY_AMINO_ACID_METABOLIC_PROCESS
REACTOME_APC_CDC20_MEDIATED_DEGRADATION_OF_NEK2A
LEE_METASTASIS_AND_RNA_PROCESSING_UP
RB_DN.V1_UP
REACTOME_ABORTIVE_ELONGATION_OF_HIV1_TRANSCRIPT_IN_THE_ABSENCE_OF_TAT
...
and in my current directory I have multiple .xls files which are named like lines in combined.txt, for example: GO_GLUTAMINE_FAMILY_AMINO_ACID_METABOLIC_PROCESS.xls
In those .xls files I want to retrieve everything in column named: GENE_TITLE for which I have "Yes" in column named: "METRIC SCORE"
those files look like:
NAME PROBE GENE SYMBOL GENE_TITLE RANK IN GENE LIST RANK METRIC SCORE RUNNING ES CORE ENRICHMENT
row_0 MKI67 null null 51 3.389514923095703 0.06758767 Yes
row_1 CDCA8 null null 96 2.8250465393066406 0.123790346 Yes
row_2 NUSAP1 null null 118 2.7029471397399902 0.17939204 Yes
row_3 H2AFX null null 191 2.3259851932525635 0.22256653 Yes
row_4 DLGAP5 null null 193 2.324765920639038 0.2718671 Yes
row_5 SMC2 null null 229 2.2023487091064453 0.31562105 No
row_6 CKS1B null null 279 2.0804455280303955 0.3555722 No
row_7 UBE2C null null 403 1.816525936126709 0.38350475 No
And in the output file I would have just in every line:
GO_GLUTAMINE_FAMILY_AMINO_ACID_METABOLIC_PROCESS 51 96 118 191 193
<name of the particular line in combined.txt> <list of all entries in GENE_TITLE which have METRIC SCORE=Yes>
What I tried so far is:
grep -iw -f combined.txt *.xls > out1
I also tried this, but here I am not using the information from combined.txt nor getting the values labeled "Yes"; I am just extracting the 5th column from all files:
awk '{ a[FNR] = (a[FNR] ? a[FNR] FS : "") $5 } END { for(i=1;i<=FNR;i++) print a[i] }' $(ls -1v *.xls) > out2
this is maybe a little bit closer but still not there:
awk 'BEGIN {ORS=" "} BEGINFILE{print FILENAME} {print $5 " " $8} ENDFILE{ printf("\n")}' *.xls > out3
I am getting something like:
GENE_TITLE GENE 1 Yes 4 Yes 11 Yes 23 Yes 49 Yes 76 Yes 85 Yes 118 No 161 No....
GENE_TITLE GENE 0 Yes 16 No 28 Yes 51 Yes 63 No 96 Yes 182 Yes 191 Yes
...
So my desired output would have, instead of "GENE_TITLE GENE", the name of the file the values were grabbed from (without the .xls suffix): 0 Yes 16 No 28 Yes 51 Yes 63 No 96... not including the ones which have "No".
UPDATE
I did get the file I needed, but I wrote the ugliest code possible (see below). If someone has something a little more elegant, please do share.
This is how I got it:
awk '{print FILENAME " "$5 " "$8}' *.xls | awk '!/^ranked/' | awk '!/^gsea/'| awk '!/^gene/' | awk '$3!="No" {print $1 " " $2}' | awk '$2!="GENE_TITLE" {print}' |awk -v ncr=4 '{$1=substr($1,0,length($1)-ncr)}1' | awk -F' ' -v OFS=' ' '{x=$1;$1="";a[x]=a[x]$0}END{for(x in a)print x,a[x]}'>out3
grep -iw -f combined.txt out3 > ENTR_combined_SET.txt
|
xargs -I {} awk '$8 == "Yes" { title = title OFS $5 } END { print substr(FILENAME,1,length(FILENAME)-4), title }' {}.xls <combined.txt
This uses xargs to execute an awk program for each name listed in your combined.txt file.
The awk program is given, as its input file, each name read from combined.txt with .xls appended to it.
The awk program collects the data from the 5th column for each row whose 8th column is Yes. This string is then printed together with the filename with its last four characters (the file name suffix) chopped off.
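A minimal reproduction with one fabricated .xls file (space-separated, same column layout as the question; file and set names are invented):

```shell
cat > GO_TEST_SET.xls <<'EOF'
NAME PROBE GENE SYMBOL GENE_TITLE RANK METRIC ES CORE
row_0 MKI67 null null 51 3.38 0.06 Yes
row_1 CDCA8 null null 96 2.82 0.12 Yes
row_2 SMC2 null null 229 2.20 0.31 No
EOF
echo GO_TEST_SET > combined.txt

# Collect column 5 of every "Yes" row, print it after the file name
# with the ".xls" suffix chopped off.
xargs -I {} awk '$8 == "Yes" { title = title OFS $5 } END { print substr(FILENAME,1,length(FILENAME)-4), title }' {}.xls <combined.txt
```

Note that title begins with OFS, so there is a doubled space between the set name and the first rank.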
| How to grep all lines from one file in specific column in multiple other files? |
1,415,863,041,000 |
[EDITED to reflect answers below]
I am looking for a way to create blocks of folders / directories from the command line or a script that will generate a top level folder titled "200000-209999" and then inside of that folder, sub-folders named thusly:
200000-200499
200500-200999
201000-201499
... etc ...
... etc ...
208500-208999
209000-209499
209500-209999
The naming is spaced like you see, and then I would want to set up the next batch of top-level/sub-folders, "210000-219999," "220000-229999," etc.
[EDIT]
I came up with the following script based on the answers below to accomplish exactly what I am looking for. My additions may not be elegant scripting, so if it can be improved upon, let me know.
#!/bin/bash
#
# mkfolders.sh
#
# Ask user for starting range of job #'s and create the subsequent
# folder hiearchy to contain them all.
#
###
clear
read -p 'Starting Job# in range: ' jobnum
mkdir "${jobnum}"-"$((jobnum + 9999))"
for start in $(seq $jobnum 500 $((jobnum+9999))); do mkdir "${jobnum}"-"$((jobnum + 9999))"/"${start}"-"$((start + 499))"; done
echo
echo Done!
echo
|
The seq utility is one way to generate numbers:
for start in $(seq 200000 500 209500); do mkdir "${start}"-"$((start + 499))"; done
The syntax is seq start increment end.
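Tried in a scratch directory (with the end value 209500 so that the last block, 209500-209999, is included; seq stops once the next value would exceed the end):

```shell
# Create the whole batch under a scratch directory.
mkdir -p scratch
for start in $(seq 200000 500 209500); do
    mkdir -p scratch/"${start}"-"$((start + 499))"
done
ls scratch | wc -l    # 20 directories, 200000-200499 through 209500-209999
```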
| Creating numerous ranges or blocks of folders/directories? |
1,415,863,041,000 |
I want the shell to detect that I have run a specific command and then, after that command finishes, run another command.
For example: whenever I run the command git commit -m " "
first finish the above command, and then run another command such as python check.py.
I'm inclined towards modifying the .bashrc file. Am I right?
Thanks in advance.
|
Use the trap command in bash.
trap [-lp] [[arg] sigspec ...]
If a sigspec is DEBUG, the command arg is executed before every simple command ...
Now your only problem is that your trap command is run before and not after the command. But you can refer to the command to be executed with $BASH_COMMAND, and you can cause the command not to be executed.
extdebug
If the command run by the DEBUG trap returns a non-zero value, the next command is skipped and not executed.
So set a DEBUG trap, if you detect the command you are interested in, execute $BASH_COMMAND, execute your own command and then cause the original command not to be run.
Edit
Try this example:
#!/bin/bash
function myfunc ()
{
if test "$BASH_COMMAND" = "echo 1"; then
$BASH_COMMAND
echo "runing after 'echo 1'"
return 1
else
return 0
fi
}
shopt -s extdebug
trap "myfunc" DEBUG
echo 1
echo 2
Executing this script:
$ bash test.sh
1
runing after 'echo 1'
2
| How do I detect a command is being executed and then execute an additional command after the current command |
1,533,768,388,000 |
I have a sample text file (test_long_sentence.txt) below, and I want to grep all the lines that contain test1, excluding the unwanted data.
How do I grep the data before the quote closes?
test_long_sentence.txt
This is some unwanted data blah blah blah
20 /test1/catergory="Food"
20 /test1/target="Adults, \"Goblins\", Elderly,
Babies, \"Witch\",
Faries"
20 /test1/type="Western"
This is some unwanted data blah blah blah
20 /test1/theme="Halloween"
Command:
grep "test1" test_long_sentence.txt
Actual Output:
20 /test1/catergory="food"
20 /test1/target="Adults, \"Goblins\", Elderly,
20 /test1/type="Western"
20 /test1/theme="Halloween"
Expected Output:
20 /test1/catergory="food"
20 /test1/target="Adults, \"Goblins\", Elderly,
Babies, \"Witch\",
Faries"
20 /test1/type="Western"
20 /test1/theme="Halloween"
PS: I have no control over editing test_long_sentence.txt, so please do not ask me to edit it into a single line.
|
Using awk
$ awk '/test1/{line=$0; while (!(line ~ /[^\\]".*[^\\]"/)) {getline; line=line "\n" $0}; print line}' sentence.txt
20 /test1/catergory="Food"
20 /test1/target="Adults, \"Goblins\", Elderly,
Babies, \"Witch\",
Faries"
20 /test1/type="Western"
20 /test1/theme="Halloween"
/test1/ is a condition. If the current line contains a match to the regex test1, then the commands in curly braces are performed. Those commands are:
line=$0
The contents of the current line are saved in the variable line.
while (!(line ~ /[^\\]".*[^\\]"/)) {getline; line=line "\n" $0}
While the current contents of line do not contain two unescaped quotes, get the next line with getline and append it to line via line=line "\n" $0.
print line
Now that the variable line contains two unescaped quotes, we print it.
For those who prefer their commands spread over multiple lines, the same command as above can be written as:
awk '
/test1/{
line=$0
while (!(line ~ /[^\\]".*[^\\]"/)) {
getline
line=line "\n" $0
}
print line
}' sentence.txt
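To sanity-check the pipeline, it can be run against an inline copy of the sample file; the scratch directory below is just for illustration:

```shell
cd "$(mktemp -d)"                 # scratch directory for the demo
cat > sentence.txt <<'EOF'
This is some unwanted data blah blah blah
20 /test1/catergory="Food"
20 /test1/target="Adults, \"Goblins\", Elderly,
Babies, \"Witch\",
Faries"
20 /test1/type="Western"
EOF
# same program as above: collect lines until two unescaped quotes are seen
awk '/test1/{line=$0
             while (!(line ~ /[^\\]".*[^\\]"/)) {getline; line=line "\n" $0}
             print line}' sentence.txt
```

This prints the three test1 records, the target one spanning three lines.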
Using sed
$ sed -n '/test1/{:a; /[^\\]".*[^\\]"/{p;b}; N; ba}' sentence.txt
20 /test1/catergory="Food"
20 /test1/target="Adults, \"Goblins\", Elderly,
Babies, \"Witch\",
Faries"
20 /test1/type="Western"
20 /test1/theme="Halloween"
How it works:
-n
This tells sed not to print anything unless we explicitly ask it to.
/test1/{...}
For any line containing test1, we perform the commands in curly braces which are:
:a
This defines a label a.
/[^\\]".*[^\\]"/{p;b}
If the pattern space currently contains two unescaped quotes, we print the pattern space, p, and then we skip the rest of the instructions and branch, b, to start over on the next line.
N
If we get here, that means that the current line did not have two unescaped quotes. We read the next line into the pattern space.
ba
We branch back to label a and repeat the commands which follow that label.
| grep till end of quote |
1,533,768,388,000 |
I have a JSON file with thousands of records line by line in the following structure, with different values.
Example:
{"in": 5,"li": [{"st": 1508584161,"abc": 128416626,"ta": 33888}],"usr": {"is": "222108923573880","ie": "222108923573880"},"st2": 1508584161,"ei": {"ev": 0,"rt": 10},"rn": 947794,"st1": 1508584161}
{"in": 5,"li": [{"st": 1508584174,"abc": 128572802,"ta": 33504}],"usr": {"is": "222108923573880","ie": "222108923573880"},"st2": 1508584174,"ei": {"ev": 0,"rt": 19},"rn": 947795,"st1": 0}
{"in": 5,"li": [{"st": 1508584145,"abc": 279682,"ta": 50000}],"usr": {"is": "222108923573880","ie": "222108923573880"},"st2": 1508584145,"ei": {"ev": 0,"rt": 18},"rn": 947796,"st1": 1508584145}
{"in": 5,"li": [{"st": 1508584183,"abc": 1378680,"ta": 49840}],"usr": {"is": "222108923573880","ie": "222108923573880"},"st2": 1508584183,"ei": {"ev": 0,"rt": 10},"rn": 947797,"st1": 1508584186}
{"nt": 4}
I am trying to select objects (records) in the JSON file that match the following criteria and output to another file.
st1 is < or = st2
st1 is not 0
st2 is not 0
st1 is less than 2147483647
st2 is less than 2147483647
In the output, the footer of the original file ({"nt": 4}) should also be present, updated with the new record count.
Example of output file:
{"in": 5,"li": [{"st": 1508584161,"abc": 128416626,"ta": 33888}],"usr": {"is": "222108923573880","ie": "222108923573880"},"st2": 1508584161,"ei": {"ev": 0,"rt": 10},"rn": 947794,"st1": 1508584161}
{"nt": 1}
I have the following:
jq -c 'select((.st1 > 0 and .st2 > 0 and .st1 < .st2) or (.st1 < 214748647 and .st2 < 214748647 and .st1 > 0 and .st2 > 0 and .st1 < .st2)) file.json
I have tried various permutations but it is not capturing the correct records.
|
With the correct numbers, a straightforward translation of your conditions works:
$ jq -c 'select(.st1 <= .st2 and
.st1 > 0 and .st2 > 0 and
.st1 < 2147483647 and .st2 < 2147483647)' file.json
{"in":5,"li":[{"st":1508584161,"abc":128416626,"ta":33888}],"usr":{"is":"222108923573880","ie":"222108923573880"},"st2":1508584161,"ei":{"ev":0,"rt":10},"rn":947794,"st1":1508584161}
{"in":5,"li":[{"st":1508584145,"abc":279682,"ta":50000}],"usr":{"is":"222108923573880","ie":"222108923573880"},"st2":1508584145,"ei":{"ev":0,"rt":18},"rn":947796,"st1":1508584145}
Note the closing ', and no doubled parentheses. I don't understand why you split the conditions into two and clauses connected by or; that's not what your conditions say.
Anyway, that captures the correct records; now we only have to add the footer. That's easiest with an additional step, shortening the select clause from above for brevity:
jq -c 'select ...' file.json > out.json
printf '{"nt":%i}\n' `wc -l < out.json` >> out.json
I guess one can also do it with a complicated jq expression, but I didn't try that.
| Using multiple wildcards in jq to select objects in a JSON file |
1,533,768,388,000 |
Details
OS: Solaris 10 , update 11
HW: M5-32 LDOM, V490, IBM x3650, T5240, VMware virtual machine, etc...
EDITOR=vi
term=vt100
tmp directory=/var/tmp
cron shell=/sbin/sh
My shell=/bin/bash
Issue
A very interesting error occurs when attempting to modify the crontab via crontab -e.
If, using vi as my editor, I search for a non-existent string within crontab -e (to verify and check syntax) and then try to save, it will puke back and tell me an error has occurred, even if no changes were made.
Example
admin@server# export EDITOR=vi
admin@server# crontab -e
In command mode, search for a non-existent string like "foobar123". After receiving the "Pattern not found" then attempt to :wq and you'll receive...
The editor indicates that an error occurred while you were
editing the crontab data - usually a minor typing error.
Edit again, to ensure crontab information is intact (y/n)?
('n' will discard edits.)
If you are cheeky and choose to go right back in and attempt to save, it will now save without error. This is repeatable on all types of Solaris, from VMware to an M5-32 LDOM to a physical V490. I'm curious why cron would interpret a search for a non-existent string as an error, but, say, visudo would not.
A related note: Solaris 11 does not produce this error, which raises the question: if this is some sort of POSIX-specified behaviour, why would it apply to Solaris 10 and not 11?
|
Not having the source to Solaris 10 or Solaris 11, I can't say for sure, but I suspect that Thomas Dickey is on the right track, based on his findings with vim.
I tracked down the IllumOS source where a search for errcnt in the ex/vi directory shows that errcnt is only ever incremented, and errcnt is used as the return code from main().
Thus, any failure that increments errcnt in vi will "bubble up" to the crontab command, where the IllumOS source for crontab indicates that it will be unhappy with anything other than zero.
Notice also the comment in crontab.c!
311 ret = system(buf);
...
327 if ((ret) && (errno != EINTR)) {
328 /*
329 * Some editors (like 'vi') can return
330 * a non-zero exit status even though
331 * everything is okay. Need to check.
332 */
| Error while searching for non-existent string with EDITOR=vi crontab -e |
1,533,768,388,000 |
I want to find and delete the 10 largest files. Below is the command to find the 10 largest files:
du -a * | sort -n -r | head -n 10
|
Assuming the GNU implementation of all utilities below:
find /some/folder -type f -printf '%s\t%p\0' | \
sort -rnz | \
head -10 -z | \
cut -f2- -z | \
xargs -0 rm -f
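Before running the destructive version, it can be worth previewing what would be deleted by swapping rm -f for echo. A self-contained dry-run sketch in a scratch directory (the file names and sizes are invented for illustration; GNU find/sort/head/cut assumed):

```shell
cd "$(mktemp -d)"
printf 'aaaaa' > big      # 5 bytes
printf 'aaa'   > mid      # 3 bytes
printf 'a'     > small    # 1 byte
find . -type f -printf '%s\t%p\0' | \
    sort -rnz | \
    head -zn 2 | \
    cut -zf 2- | \
    xargs -0 echo would-remove
# prints: would-remove ./big ./mid
```

Once the list looks right, put rm -f back in place of echo.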
| Find And Delete |
1,533,768,388,000 |
I've been using rsync to synchronise folders and it works well. The problem is that more recently I've started syncing folders with larger files in them, and it takes much longer than I'd like it to (due to its hashing comparison). I've noticed that the cp commands can do one part of rsyncs job much quicker, by invoking the -u option. This means newer files in the source can be added to the destination easily using this method.
But what I need to figure out is the second part of rsync's job, which I find useful: a command which can recursively compare the list of files in all folders and delete those which no longer exist in the source but are still in the destination (without hashing all the files; a simple comparison of file listings, as with ls, is good enough for what I want).
Is this possible?
|
This will (pretend to) delete any differences between the folders:
diff -awr folderA folderB | sed 's/Only in //;s/: /\//' | while IFS= read -r f; do echo "removing ${f}"; done
If you want to remove differences in A but not B, you can add in a grep like so:
diff -awr folderA folderB | sed 's/Only in //;s/: /\//' | grep "^folderA/" | while IFS= read -r f; do echo "removing ${f}"; done
note that you have to type folderA into the command twice for this one
To run it for real, just replace echo "removing ${f}"; with rm -f "${f}";
| Recursive comparison and deletion (without rsync or hashing) |
1,533,768,388,000 |
Is there a way to cd out of a directory which has just been deleted (go up one level into the upper folder which still exists)?
It often happens to me that I have a console opened for a folder, and then I delete the folder with my temporary test data and create another one.
However, both cd .. and cd $(pwd)/.. only get me to the trash bin, and not to the upper directory when I try to leave the deleted folder.
So, current situation is:
$ mkdir -p /home/me/test/p1
$ cd /home/me/test/p1
now I delete the folder p1
$ cd ..
me:~/.local/share/Trash/files$ ...
I'm now searching for a way to get into /home/me/test/ and not into the Trash bin. Is there such a command?
|
The PWD variable holds the current working directory path.
To go up one level:
cd "$(dirname "$PWD")"
This expands to
cd "$(dirname /home/me/foo/bar/baz/deleteddirectory)"
which in turn expands to
cd /home/me/foo/bar/baz/
This assumes you deleted only one level of directory.
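If more than one level of the hierarchy was deleted, the same idea extends to a loop that climbs until it finds a directory that still exists (a sketch; it relies on the shell keeping the old path in PWD after the directory is gone):

```shell
d=$PWD
while [ ! -d "$d" ]; do
    d=$(dirname "$d")       # climb one level at a time
done
cd "$d"
```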
| cd out of deleted folder |
1,533,768,388,000 |
I need to completely rebuild my boot partition. The file system has sda1 (250 MB) for boot, and sda2 is an LVM-on-LUKS-encrypted partition with ManjaroVG-ManjaroHome, ManjaroVG-ManjaroRoot and ManjaroVG-ManjaroSwap inside it. I have the live USB I installed from originally, if that helps. Currently the kernel panics when trying to boot, returning Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0). I can, however, chroot into it using the live USB.
|
This is probably caused by a recent kernel update. Try getting into the boot menu and see if you can choose a different, older version of your kernel. Boot up with that one and it should be working ok afterwards.
Please have a look over this thread:
http://ubuntuforums.org/showthread.php?t=1751574&p=10780594#post10780594
If this doesn't work, you should use a LiveCD distro like SystemRescueCD, run an analysis in testdisk and see what the problem is.
| How do I restore my boot partition manjaro/arch? |
1,533,768,388,000 |
I'm trying to launch a live (not on-demand) RTMP stream from Ubuntu, but I have succeeded only with an RTSP stream through VLC:
vlc -vvv ./videos/test.mp4 --sout '#rtp{dst=192.168.8.106,port=1234,sdp=rtsp://192.168.8.106:1234/test.sdp}'
(source here - https://www.videolan.org/doc/streaming-howto/en/ch04.html)
and unfortunately it is not supported by any Flash or HTML5 players. For RTMP streaming I found a how-to only for the webcam case - http://www.jpsaman.org/vlc/rtmp
Can someone help me create the exact command for an RTMP stream from these 2 examples, please? Or is there other free Linux software which can broadcast an RTMP stream?
|
After long research and tests, I finally found a solution with VLC + HLS streaming...
vlc -vvv path/to/video/test.mp4 :sout="#transcode{vcodec=h264,vb=100, venc=x264{aud,profile=baseline,level=30,keyint=30,ref=1}, aenc=ffmpeg{aac-profile=low},acodec=mp4a,ab=32,channels=1,samplerate=8000} :std{access=livehttp{seglen=10,delsegs=true,numsegs=5, index=/var/www/video-stream/stream.m3u8, index-url=http://192.168.8.106/video-stream/stream-########.ts}, mux=ts{use-key-frames}, dst=/var/www/video-stream/stream-########.ts}"
and this is the only player I found which can play the HLS stream produced above - https://github.com/clappr/clappr
| Ubuntu live RTMP video streem |
1,533,768,388,000 |
A command like dpkg -i *.deb will install all deb files in a folder without warning of incompatibilities and such.
Can this command be changed so that installation of broken packages is avoided, warning displayed, etc?
|
incompatibilities and such
What do you mean by that?
Before installing, dpkg first checks if all dependencies of the .deb package(s) to be installed are satisfied. If the dpkg database is inconsistent, a message is printed.
| Is there a dpkg argument to warn about incomptibilities/broken dependencies? |
1,533,768,388,000 |
How can I configure a KVM guest running CentOS to allow passwordless console access from the hypervisor? I want to be able to use the following command to log straight in to the VM from the hypervisor without it prompting for a password:
virsh console 1
The question is similar to this question; however, the server in this case is running CentOS as opposed to Ubuntu, so the tty files and syntax are different.
|
You can possibly use PolicyKit rules to "unlock" libvirt for uids which are members of a specific gid-group. Here is another question, which does that for virt-manager (which like virsh console is based on libvirt).
https://superuser.com/questions/548433/how-do-i-prevent-virt-manager-from-asking-for-the-root-password
Specific advice for "unlocking" virsh in this fashion:
https://major.io/2015/04/11/run-virsh-and-access-libvirt-as-a-regular-user/
Note that, as I understand things, using polkit like this will make all of libvirt (and thus all of virsh) unlocked to uids in the specified gid-group; you might not be able to prevent your guestVM owner from accessing other VMs through virsh and/or virt-manager, in other words.
Caveat: this answer probably allows you to get virsh console to work without a password prompt, but it also may allow other things to work (which you may not want). Insert the usual disclaimers here, about carefully testing your security after attempting any kinds of changes like these; you may be lowering your defenses more than you wanted to.
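For reference, the approach in the second link boils down to a polkit rule along these lines (a sketch, not verified here; the libvirt group name and the rules-file path are assumptions that vary by distribution):

```javascript
// e.g. /etc/polkit-1/rules.d/80-libvirt.rules
polkit.addRule(function(action, subject) {
    if (action.id == "org.libvirt.unix.manage" &&
        subject.isInGroup("libvirt")) {
        return polkit.Result.YES;
    }
});
```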
| How to access KVM guest console without password? |
1,393,264,983,000 |
I need to move a large number of files that need to go to different directories i.e.
file1.mpg to /mnt/s3/directory1/file1.mpg
file2.mpg to /mnt/s3/directorya/file2.mpg
file3.mpg to /mnt/s3/directoryx/anotherfilename.mpg
rsync -av --progress --inplace /path/to/file1.mpg /different/path/directory/1/file1.mpg
Works, but I would like to batch all the file transfers together so I don't have to monitor it all the time and keep manually entering each rsync. I wrote a quick shell script to rsync the files, but after a file is transferred it seems to hang there waiting for some sort of user input. If I press ^C it continues on; otherwise it will hang there indefinitely.
|
#!/bin/bash
set -e
R="rsync -a --timeout=10"
$R file1.mpg /mnt/s3/directory1/
$R file2.mpg /mnt/s3/directorya/
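If there are many such pairs, it may be easier to keep them in a list file and loop over it. A sketch (the transfers.txt format and the sample layout are invented for illustration, and it falls back to cp only so the sketch also runs where rsync is not installed):

```shell
#!/bin/bash
set -e
cd "$(mktemp -d)"                  # scratch area for the demo
mkdir -p src dstA dstB
echo one > src/file1.mpg
echo two > src/file2.mpg
# one "source<TAB>destination/" pair per line
printf 'src/file1.mpg\tdstA/\nsrc/file2.mpg\tdstB/\n' > transfers.txt

R="rsync -a --timeout=10"
command -v rsync >/dev/null 2>&1 || R="cp -p"   # fallback for the demo only
tab=$(printf '\t')
while IFS=$tab read -r src dst; do
    $R "$src" "$dst"
done < transfers.txt
```

Using a tab as the separator keeps paths containing spaces intact.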
| rsync multiple files to multiple directories |
1,393,264,983,000 |
How can I install the latest release of Mono in Ubuntu Saucy?
In order to develop in a free OS, I need to setup Mono (that now supports .NET 4.5). This is what I did:
Download and uncompress mono-mono-3.2.5
Run ./autogen.sh --prefix=/usr/local (finished ok)
Run make, but it exits with an error while checking dependencies:
terminal-output:
....
mkdir -p -- build/deps
make[6]: gmcs: Command not found
make[6]: *** [build/deps/basic-profile-check.exe] Error 127
*** The compiler 'gmcs' doesn't appear to be usable.
*** You need Mono version 2.4 or better installed to build MCS
*** Check mono README for information on how to bootstrap a Mono installation.
make[5]: *** [do-profile-check] Error 1
make[4]: *** [profile-do--basic--all] Error 2
make[3]: *** [profiles-do--all] Error 2
make[2]: *** [all-local] Error 2
make[2]: Leaving directory `/home/hogar/Software/mono-mono-3.2.5/runtime'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/home/hogar/Software/mono-mono-3.2.5'
make: *** [all] Error 2
But to install mono-gmcs, Ubuntu will also install mono-runtime v2.10.8.1-5ubuntu2. And that is exactly what I'm trying not to install: an old version of Mono.
This is confusing me, what should I do?
|
It needed to have another instance of mono already installed, so apt-get install mono-gmcs did the job. Then, a new error appeared, and the only solution seemed to be using the GitHub package:
git clone git://github.com/mono/mono.git
cd mono
./autogen.sh --prefix=/usr/local
make
make install
Now I have installed the latest version of mono.
NOTE: now I am not able to run any mono program, e.g. monodevelop. I couldn't find the solution to the exception, so now I'm downgrading mono to an old & compatible version.
PS: If you find the solution, please leave a comment :)
| How to install Mono v3+ in Ubuntu? |
1,393,264,983,000 |
Rainlendar is a great tool to keep events and tasks on the desktop (I did not find a better solution yet).
The problem is that the application completely hides if you press:
The "Show Desktop" button of Mint's panel (taskbar)
Special-Key + d / CTRL + ALT + d
Possible Rainlendar position settings:
On Top - Does not work
Normal - Does not work
On Bottom - Does not work
On Desktop - With this option, the application won't hide, but would be on top of every window. So this isn't really a solution.
Is it possible to prevent Rainlendar from hiding or is there at least a command to show Rainlendar again (at the moment I have to kill the process and start it again) or any other solution?
I'm using:
Linux Mint 15 "Olivia" with Cinnamon 1.8.8
Rainlendar 2.11.1
Everything is up-to-date at the moment.
|
After trying a lot of different things, I ended up with this:
Just right-click on the application's tray icon (usually a little calendar) and click "Refresh". This will make the application visible again.
On Linux Mint with the Cinnamon desktop: The tray icon is only visible if the App Indicator Icon option is disabled (which is enabled by default)!
To disable the App Indicator Icon go to: Options->Advanced->"Show app indicator icon"
| How to stop Rainlendar from hiding on the Linux Mint desktop? |
1,393,264,983,000 |
I'm looking for a way to process shell script arguments that is cleaner and more "self-documenting" than getopt/getopts.
It would need to provide...
Full support of long options with or without a value after '=' or ' '(space).
Proper handling of hyphenated option names (i.e. --ignore-case)
Proper handling of quoted option values (i.e. --text "A text string")
I would like to eliminate the overhead of the big loop with embedded case statement that getopt/getopts requires and reduce option processing to something like...
option=$(argumentparse "$@")
[[ $option == "" ]] && helpShow
[[ $option =~ -h|--help ]] && helpShow
[[ $option =~ -v|--version ]] && versionShow
[[ $option =~ -G|--GUI ]] && GUI=$TRUE
[[ $option =~ --title ]] && TITLE=${option["--title"]}
Here, an argumentparse() function resolves the various syntax possibilities into a consistent format, perhaps an associative array.
There must be something coded out there somewhere. Any ideas?
(updated and retitled)
|
Since this question has been viewed so much (for me at least) but no answers were submitted, passing on the solution adopted...
NOTE
Some functions, like the multi-interface output functions ifHelpShow() and uiShow() are used but not included here as their calls contain relevant information but their implementations do not.
###############################################################################
# FUNCTIONS (bash 4.1.0)
###############################################################################
function isOption () {
# isOption "$@"
# Return true (0) if argument has 1 or more leading hyphens.
# Example:
# isOption "$@" && ...
# Note:
# Cannot use ifHelpShow() here since cannot distinguish 'isOption --help'
# from 'isOption "$@"' where first argument in "$@" is '--help'
# Revised:
# 20140117 docsalvage
#
# support both short and long options
[[ "${1:0:1}" == "-" ]] && return 0
return 1
}
function optionArg () {
ifHelpShow "$1" 'optionArg --option "$@"
Echo argument to option if any. Within "$@", option and argument may be separated
by space or "=". Quoted strings are preserved. If no argument, nothing echoed.
Return true (0) if option is in argument list, whether an option-argument supplied
or not. Return false (1) if option not in argument list. See also option().
Examples:
FILE=$(optionArg --file "$1")
if $(optionArg -f "$@"); then ...
optionArg --file "$@" && ...
Revised:
20140117 docsalvage' && return
#
# --option to find (without '=argument' if any)
local FINDOPT="$1"; shift
local OPTION=""
local ARG=
local o=
local re="^$FINDOPT="
#
# echo "option start: FINDOPT=$FINDOPT, o=$o, OPTION=$OPTION, ARG=$ARG, @=$@" >&2
#
# let "$@" split commandline, respecting quoted strings
for o in "$@"
do
# echo "FINDOPT=$FINDOPT, o=$o, OPTION=$OPTION, ARG=$ARG" >&2
# echo " o=$o" >&2
# echo "re=$re" >&2
#
# detect --option and handle --option=argument
[[ $o =~ $re ]] && { OPTION=$FINDOPT; ARG="${o/$FINDOPT=/}"; break; }
#
# $OPTION will be non-null if --option was detected in last pass through loop
[[ ! $OPTION ]] && [[ "$o" != $FINDOPT ]] && { continue; } # is a positional arg (no previous --option)
[[ ! $OPTION ]] && [[ "$o" == $FINDOPT ]] && { OPTION="$o"; continue; } # is the arg to last --option
[[ $OPTION ]] && isOption "$o" && { break; } # no more arguments
[[ $OPTION ]] && ! isOption "$o" && { ARG="$o"; break; } # only allow 1 argument
done
#
# echo "option final: FINDOPT=$FINDOPT, o=$o, OPTION=$OPTION, ARG=$ARG, @=$@" >&2
#
# use '-n' to remove any blank lines
echo -n "$ARG"
[[ "$OPTION" == "$FINDOPT" ]] && return 0
return 1
}
###############################################################################
# MAIN (bash 4.1.0) (excerpt of relevant lines)
###############################################################################
# options
[[ "$@" == "" ]] && { zimdialog --help ; exit 0; }
[[ "$1" == "--help" ]] && { zimdialog --help ; exit 0; }
[[ "$1" == "--version" ]] && { uiShow "version $VERSION\n"; exit 0; }
# options with arguments
TITLE="$(optionArg --title "$@")"
TIP="$( optionArg --tip "$@")"
FILE="$( optionArg --file "$@")"
| Simpler processing of shell script options |
1,393,264,983,000 |
I want to copy whole logs stored in the AWS S3 bucket if the following line is present:
\"Key\" : 951332,\n
I've tried escaping by trying this:
aws s3 ls s3://bucket_name | grep "/\"Key/\" : 951332,/\n" --recursive
but not getting anything back, does anyone know how I can run the grep in this manner?
|
Mount S3 using s3fs, then do:
grep -r Key /PATH/TO/MOUNT/POINT/
... then pipe it through grep 951332 and check if that's enough resolution for your case.
It might take a while and will incur AWS data-transfer costs if you run this locally, away from AWS, so you ideally want to run it from an EC2 instance in the same VPC.
If you are nevertheless bold enough to run it locally despite the cost, you might want to redirect stdout and stderr to files in order to check them later if necessary, instead of running the costly command line again.
Wrapping up, I might go with:
grep -r Key /PATH/TO/MOUNT/POINT/ | \
grep 951332 \
>/LOCAL/PATH/TO/grep_stdout \
2>/LOCAL/PATH/TO/grep_stderr
# In /LOCAL/PATH/TO/grep_stdout should be the paths to the docs you
# were searching.
Alternatively:
grep -r Key /PATH/TO/MOUNT/POINT/ | \
grep 951332 &>/LOCAL/PATH/TO/all_grep_outputs
# In /LOCAL/PATH/TO/all_grep_outputs should be the paths to the docs you
# were searching.
| AWS s3 CLi grep command with special characters |
1,393,264,983,000 |
I recently started using Linux Mint. I am trying to execute the command sudo apt-get update in my terminal, but I always get this output:
How can I solve this issue?
...
Ign:10 http://archive.canonical.com sarah/partner all Packages
Ign:11 http://archive.canonical.com sarah/partner Translation-en_GB
Ign:12 http://archive.canonical.com sarah/partner Translation-en
Reading package lists... Done
W: Target Packages (partner/binary-i386/Packages) is configured multiple times in /etc/apt/sources.list.d/additional-repositories.list:1 and /etc/apt/sources.list.d/additional-repositories.list:2
W: Target Packages (partner/binary-all/Packages) is configured multiple times in /etc/apt/sources.list.d/additional-repositories.list:1 and /etc/apt/sources.list.d/additional-repositories.list:2
W: Target Translations (partner/i18n/Translation-en_GB) is configured multiple times in /etc/apt/sources.list.d/additional-repositories.list:1 and /etc/apt/sources.list.d/additional-repositories.list:2
W: Target Translations (partner/i18n/Translation-en) is configured multiple times in /etc/apt/sources.list.d/additional-repositories.list:1 and /etc/apt/sources.list.d/additional-repositories.list:2
W: The repository 'http://archive.canonical.com sarah Release' does not have a Release file.
N: Data from such a repository can't be authenticated and is therefore potentially dangerous to use.
N: See apt-secure(8) manpage for repository creation and user configuration details.
E: Failed to fetch http://archive.canonical.com/dists/sarah/partner/binary-i386/Packages 404 Not Found [IP: 91.189.92.191 80]
E: Some index files failed to download. They have been ignored, or old ones used instead.
W: Target Packages (partner/binary-i386/Packages) is configured multiple times in /etc/apt/sources.list.d/additional-repositories.list:1 and /etc/apt/sources.list.d/additional-repositories.list:2
W: Target Packages (partner/binary-all/Packages) is configured multiple times in /etc/apt/sources.list.d/additional-repositories.list:1 and /etc/apt/sources.list.d/additional-repositories.list:2
W: Target Translations (partner/i18n/Translation-en_GB) is configured multiple times in /etc/apt/sources.list.d/additional-repositories.list:1 and /etc/apt/sources.list.d/additional-repositories.list:2
W: Target Translations (partner/i18n/Translation-en) is configured multiple times in /etc/apt/sources.list.d/additional-repositories.list:1 and /etc/apt/sources.list.d/additional-repositories.list:2
Thanks in advance
EDIT: cat /etc/apt/sources.list.d/additional-repositories.list :
deb http://archive.canonical.com/ sarah partner
deb http://archive.canonical.com/ sarah partner
|
First of all, note that apt-get update still updates its indexes as appropriate, it's just complaining that it can't download the information required for some of its repositories.
sarah is a Mint code-name, not an Ubuntu code-name, so it makes no sense to try to use a sarah repository hosted by Canonical; you can safely delete the additional repositories:
sudo rm /etc/apt/sources.list.d/additional-repositories.list
This will fix the errors you're seeing.
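To confirm which entries are duplicated before deleting anything, you can compare the deb lines across the source files. A sketch against a scratch copy of the file from the question (on a real system you would point grep at /etc/apt/sources.list and /etc/apt/sources.list.d/*.list):

```shell
cd "$(mktemp -d)"
mkdir sources.list.d
printf '%s\n%s\n' \
    'deb http://archive.canonical.com/ sarah partner' \
    'deb http://archive.canonical.com/ sarah partner' \
    > sources.list.d/additional-repositories.list
# any line printed here is configured more than once
grep -h '^deb' sources.list.d/*.list | sort | uniq -d
# prints: deb http://archive.canonical.com/ sarah partner
```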
| Why am I unable to do a "sudo apt-get update"? |
1,393,264,983,000 |
When I tried the command cat < 1.pdf, it printed a very large output which was totally incomprehensible to me. The content of 1.pdf was abc.
The output was like this:
ÀýÓëöûcÎ=ÉÐÎTaüÍ8]ö¹mg:=Rú*@H1S¢▒ùá½~Ì8u_4,¬7ïyt#¯ÚZ|åôÛ~«Æ fM²JKÁNÿ6 ì©ìÞ¾▒bT
¦åÊmBíöÖ¡÷ÄïÝM{Í1¹@;ÄqÄú t]È7DJ Êûc0£jÜÖã\0O8À±(2)èJR'Ø÷=~ÝÆÂµ¡´ oÇKÈ]¹ÞÜY)ÚwÒ?[4ò©Ió¦>G)î¾J&d}ýíÜÅÓò~Ø0 $´Në¿´Èc®pVqí+ëCppG¾ùóßeõõ6GÌ,öfú8Ô7»S[¢S50cq/_9¹jó¿·Ü%×tQSßî▒LðbkÂÒxâ£Ö▒üVAûÇamÏ·Â׫H´+ÆWíç´upèó`I]± ÎëÚwiòtçúwAhO¼²´'Æ©ëÀ0lô?¿ÌIò▒ìXË<»ÅUepçæå¥
SïÒFҽϷº®Ën.Z×´\£ÁEH@®2ÊçC¢n½¡hÑâ>º´¢YÚXEfg sôë¥*|zº7>ù!I©Åÿ«; ;&==
)dS/),÷È´:ÞõH:CÉÑÀiTÌw!u@Âp2÷AÒfµòÜtFIZ^iÿà£ùÖ5ÐsDiërÿ$0b6Ëü~xÏ·._ÏÒõÜr²`wYù;¤²å»äE3óù²ëvÇ»Ó'ãµ~?ÿîMZÍPkh{aÙ1y&tüÙòÕMoó¬²<ñ/ÇÖa?üʯuÝÓjû,¨Üå@/GMa-èGkD}¤ð©fZbYÑlt/ ±Øj¦èRhCå1âÆñ±S@ÖòÁ~e}
>NÀ^²Jà-Û[Mø¡FËB7ÉVy0|ôÉÏjx[ÙÁnneê)wã+ök'R6"dÞqît¿ý,ߢ]MöV>»Ñ@ÞwM0®èçã^F`çFÕ²æL((¬±S¢ÅïÂy§púÓË5y1pÆ{uxëÈOþ'¾7+Öº!í
uV-R²f*`æ\ías\Øl^÷ ÿ`r1|yÅ-YØ,º·¢▒ÀPæá¸EW0d¤q]&ÿdV6ß.cùÂ~´óðCß▒(¨îMëb#òEnÑ»PÅV½!ÀÈѵ c´è
jFÇé¨J$ǵÀcu?4·[ö&å:1&OÓö(øyKxòëÑq¸çÎÇÈI#5¨çû,'µÐûfG¸Í§³UÚëÎCDøõe²Ñú$Á½é½Ocø»Éßs! ÀõE²©)8½îv¿<Üî|è¶»B▒ÿYw¹·ÌÞÆ¶âôIÇ.>¾H¡n¬Éüׯ*m«¶£L£#7È?¾sÊNoXµ·àMÚ
?ó´ZìâþÌçùä½ÿ$qÀÊcOºùdewænår▒ÖB½dfÕ;t4Êe3#ÄúÀ£çP=¨QÌ▒ÕþºÑ\U¼Fµ»â¯/!NZ=>½éú©,EÉ|ªQafu,5Ý%Xw%seàØÇÇTª BZëCaßî;zÃ"Bma¤ y=ÞwÁű~ÿõåEyV/Ò%q¥Ì^Ç 2U¸âQ³1y(¾&¨òYùÆ«}üx#Á®úÅÿÆðö.i8
ïþ¨è|Âý6\ U+ᬮ[®eVéüvíÜ{ÈL+]¬)ùxþecäæº°ÿoö?,Ä:¯Oò9T:1G4qÞ.ÌtÉÑëEæáHÔ׬¡ª çc^
nÍPÑU7/ÄñcªXâ§nc]¾¨XPayÚGºxª.wÈç¤}¬ÓÏÇ\rf`¤ñ@zJnî´a'¾¨sNÔAëG½PL6ºIQkíJÍçØ¼ÔKýF¾)$\&§^» Eý¨_{tÂp¥ñT`mùPvcìÃç1ÿûKáz¹â®ò÷pר?äIIö 6²¬QªMÚIµÈTã+¤i1âN¾8ɽNww²Îf¹¿kVr²ù½Ä¼Ìå±"ªúº+äÿ¥
óv¡t5!(«:Ö+Ovl<¦aö6Kì»â2óÎ娨|üËàÇÒ.j§·¸[ãæ¿ï`¡÷¥¾©,ÝßiÝPMåoÑéïToãw¿dyçëÀã·ó6ês\ÔR;ÕXÚ»ûÿõå▒öÁ▒¡\Ðs·~=ðÈTDÝCCijÚ`¹ÎÔ¬\·ðñ_ÿü§¯$Âõj®Û¢_]Lù¦8áÌæ²»BJÖÛn¼ûXÏjY8Ò6éØí©YóZtÛt´ÌníUè¨PGØÊzý+ÚT¦M1¥e¬åxendstreamýC~¢6A¬»hå?5µÎÍbKÏÔlwæ l▒_%L;8ê8jßQüg-í× Jâ`d¬*»ö</nä"nAíÀ ÿ]©äXĦMYS▒
endobjÎ{°m-°õ1Hgîºû:h*µVØK°F8ñGÔÎl~V3ÄÞ!bÊcÞDGë¯×Yl(.ãâÝå`£=cü§ýÔb£ÄèMu Íëve«XîÝ£#"VØgáKÔ?öþ§®êϺݡ[3uש²Nµq÷Ú▒ßób¸l6=?'«ì>BÔ?t_Ñ gÁ£õ=q@ÜÕÅûªE3¶L+ÕÅ©Cå}b-7Q,ì·Túlñ¨þ¦:=`î¹aÐçeÆãÜw°¥ès
E▒ªpÇ !}¡1{¹_ZlÈë¡Á;u§·+ú,fo ä-AÏ[HM¥×▒ÌÝåìtò*9¼Â^ѧ▒aÛ`B>/Cö0Þ÷ðiNËþÊ âÄCH´/9fVÎÉó6!vóÑ@ ðÉ!w±y;¯m$i¾äµH+·]YA|åÀD!j{øEÙ^äFÖÑ4▒ääû5þµ)Ãå*y´¹Q« 7í?NýÍ'^õ(*C4f;3ûûn³i|nIï0uo>#n³yµ¹5§*É»&Gtê;c.9 0 objéðÜ}zÔ22T`¦E'ýX®WÈô»&Â>9=ay$àÊGWdwÂ!f·¹eMvÖ=EÞߢ¯ò^¢n`ZÜöQ!Yß§µã gÚEbØù»ÑñÓ 1ªAäØÿPâ'4RÅU]xý'¬¡Â>¹æîtê3Yêy.·¬4ÖçæÍÕOß®×ñh¶ap(<</Type/Font/Subtype/Type1/BaseFont/NimbusRomNo9L-Regu 9îî~ýÚK°ÓÑ*ÈTt÷ ØL
/ToUnicode 8 0 R} Åta°Àj) _ Kû'Üd§éËpôKÜ~¯
/FirstChar 0 /LastChar 255ºP!y%µRÕÖ×bðó°~®_ñA=ùjÒÜW!þy0Æ¢]ìMºõ$ÊÍD96)éàjM[îÍÙù»@y»;«!BÌaÓ;²À ÏÞî¨ZÚ8Ýà ìÏ?å²@ÙÏû¬W$O9²ößÄé«¶Âv(r·?,½ø?u«¬§ýéøZÍñÉÆSêÒfæÿ ÕÀb8ÇxØÝ¯¹ÅAýöµiº\ÉI$▒À}0@bâÚÕq9s'XÝ/Widths[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0®ã¥Vø![
250 333 408 500 500 833 778 333 333 333 500 564 250 333 250 278Õ¶~~Yö*Ó}+«▒rl¥z«° :¬Î>2y®GmÀúÀ
500 500 500 500 500 500 500 500 500 500 278 278 564 564 564 444
921 722 667 667 722 611 556 722 722 333 389 722 611 889 722 722
556 722 667 556 611 722 722 944 722 722 611 333 278 333 469 500e$<Ìßf¼p騸ag#au.ÁÄè6Ý▒
333 444 500 444 500 444 333 500 500 278 278 500 278 778 500 500
500 500 333 389 278 500 500 722 500 500 444 480 200 480 541 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0/NimbusRomNo9L-Regu
0 333 500 500 167 500 500 500 500 180 444 500 333 333 556 556
0 500 500 500 250 0 453 350 333 444 444 500 1000 1000 0 444
0 333 333 333 333 333 333 333 333 0 333 333 0 333 333 333
1000 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 889 0 276 0 0 0 0 611 722 889 310 0 0 0 0
0 667 0 0 0 278 0 0 278 500 722 500 0 0 0 0
Why can't `cat' read content of pdf files?
|
If you call cat on a file containing text in Chinese¹, it won't print out an English translation. With computer formats, it's the same thing: if you call cat on a file containing data in a certain format, it won't translate it to another format such as plain text. That's not its job: its job is to copy its input to its output without modifying it.
A PDF file isn't a text file. A PDF file can contain text, along with formatting instructions, images, hyperlinks, etc. If you want to read the text in a PDF file, you need to use a tool that understands the PDF file format.
There are a few recognizable bits in the PDF file: NimbusRomNo9L suggests that the text is written in a Nimbus Roman font. This isn't one of the few fonts that all PDF viewers and printers must have, so it had to be embedded in the PDF file. The text itself (abc) isn't recognizable in the output because it's stored compressed.
A common tool to view files regardless of what format they're in is xdg-open. On Debian and derivatives, see is an alternative. Both work by guessing the file format from the extension of the file name and calling an appropriate application. If you want to explicitly extract the text parts (and forget about other information such as images, fonts, the location of the text on the page, etc.), you can call a program to convert the PDF file into text, such as pdftotext.
¹ If you understand Chinese, substitute Georgian, or Kannada, or Cree, or whatever language you don't speak.
| Why 'cat' can't read content of pdf files? |
1,393,264,983,000 |
For debugging purposes I sometimes access a remote Linux box via VNC. At that time someone else may be working on the same Linux box through VNC, and it may happen that I have no facility to chat with that person. So essentially what happens is I start a new tab in GNOME and type a message like this:
[xxx@slc04lyo abc]$ Hi this is X. I will use the box for some time
Hi: Command not found.
Is it possible to not have the shell print the following?
Hi: Command not found.
If yes, how can I do that selectively and not always, so that when needed I can use the shell as a raw chat platform?
P.S: I do not need to see an output. The other user is already in the VNC and can see what I typed. So the output is superfluous.
|
End the line by pressing Ctrl+C. It will look like
[user@box ~]$ Are you there?^C
ETA: Ctrl+C sends the signal SIGINT, which in this context basically means "stop what I'm doing and give me back a prompt". It's just the same as when you're running a program from your prompt and pressing Ctrl+C - it will kill the running program and give you your prompt back. Except in this case you've not actually started the program.
This can be useful in other situations too, e.g. when the cat has been having a dance all over the keyboard...
| Can I have the shell ignore the command line sometimes but not always? |
1,393,264,983,000 |
Very often I need to use the sudo command because the command I'm running needs higher privileges. Is there some method to minimize the usage of sudo and/or a way to use it that's faster than typing my password, but which is still secure?
|
Many operations and programs do not in themselves need sudo, only for access to certain files. These files often also allow access for a group (e.g. /dev/mixer for group audio on my Debian), and you can avoid the sudo if you add your user to that group. The strace command is a good tool to find out which files are the problem; just look for an open() call that fails with a permission error (returns -1 with EACCES).
If you need the sudo command for specific applications (a classic for me being pbuilder, which needs to chroot), it might be a good idea to insert that command and the NOPASSWD flag into /etc/sudoers. That isn't the most secure way (the root user inside the pbuilder environment can do all sorts of crap), but it's better than typing your password in normal system use and getting used to that.
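If you go the sudoers route, the entry is conventionally added with visudo (which syntax-checks before saving). A hypothetical line (made-up user name; the pbuilder path may differ on your system):

```
# run "visudo" and add something like:
alice ALL = (root) NOPASSWD: /usr/sbin/pbuilder
```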
| Minimize need for sudo |
1,393,264,983,000 |
I want to find the location of all files named index.php that contain the string "hello".
thanks.
|
Using grep with find:
find /top-dir -type f -name index.php -exec grep -l 'hello' {} +
where /top-dir is the path to the top-most directory that you want to search.
With -type f, we only look at regular files with find, and with -name index.php we restrict the search to files called index.php.
-exec grep -l 'hello' {} + will run grep on the found files and it will output the paths of all the files that match the pattern ('hello'). It's the -l with grep that causes the output of the paths.
With + at the end, find will give as many files as possible to each invocation of grep. Changing this to ';' or \; would result in grep being invoked with one file at a time, which may be slow if there are many files.
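As a quick sanity check, here is the same command replayed on a throwaway directory (so none of the paths below refer to your real files):

```shell
# Build a tiny tree: only a/index.php contains "hello"
dir=$(mktemp -d)
mkdir -p "$dir/a" "$dir/b"
printf 'say hello\n'    > "$dir/a/index.php"
printf 'nothing here\n' > "$dir/b/index.php"
printf 'hello\n'        > "$dir/b/other.php"   # right content, wrong name

# Prints only a/index.php: the name filter and the content filter both apply
find "$dir" -type f -name index.php -exec grep -l 'hello' {} +
```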
| How do I find the location of all files with a particular name whose content contains a particular string? |
1,393,264,983,000 |
Let's say that several users have a tmp directory in their home directory. I want to rename each tmp directory in each user's home directory. What is the easiest way?
Something like this:
sudo mv /home/*/tmp /home/*/temp
is not ok.
And something like this:
for dir in /home/*; do
if [ -d $dir/tmp ]; then
mv $dir/tmp $dir/temp
fi
done
seems too much.
|
Perl comes with a rename(1) command that is installed on most Linux systems. On Debian-based systems it is in /usr/bin and for this case, you would use it like this:
$ rename 's/tmp$/temp/' /home/*/tmp
The first argument is a perl expression that acts on the subsequent arguments generating a new name. Each is then renamed according to the result of that expression.
If a home directory already has a file/directory called temp, you'll just get an error for that directory and rename will continue:
/home/c/tmp not renamed: /home/c/temp already exists
You can run it first with the -n flag to see what rename would do without actually doing it and make sure it all looks right. Then drop the -n and let it do its job.
| How to rename directory from all user accounts |
1,393,264,983,000 |
I'm trying to rename all files that start with an "m" to be the same name except with the first character (or "m" in this case) stripped away.
My strategy is to:
List all the files, with ls
Filter for the ones I want, with egrep
Generate the string I don't want next to the one I want, separated by a space, with awk, for example, mfoo foo
Feed into xargs to mv mfoo foo
Some questions:
Is this a good strategy?
What is a better one?
I'm stuck on Step 3, below is how I've approached the problem.
I'm working in the following directory:
$ find .
.
./cat
./mbar
./mbaz
./mfoo
I'm able to quickly get 1-2:
$ ls | egrep '^m'
mbar
mbaz
mfoo
Step 3 is more difficult. I used gsub to generate the second string I want, but I'm not sure how to "stick it together with the original value separated by a space":
$ ls | egrep '^m' | awk '{ gsub(/^./, ""); print }'
bar
baz
foo
Step 4 by itself makes sense to me, although I'm not sure how to finish Step 3, so I can't finish it yet. Below is one example of how I think it should work:
$ echo mfoo foo | xargs mv
$ find .
.
./cat
./foo
./mbar
./mbaz
I think I'm close; I just need to find out how to save the old value and print it next to the gsubbed value. I've tried the following small example but it's not working:
$ echo mfoo | awk '
pipe quote> { old = $0 }
pipe quote> { new = gsub(/^./, "") }
pipe quote> { print $old " " $new }'
awk: illegal field $(mfoo), name "old"
input record number 1, file
source line number 4
How do I make a substitution to $0 but save the old value?
Why am I getting this error?
|
ls | egrep '^m' | awk '{ x=$0; gsub(/^./, ""); $0 = x " " $0 }1' | xargs -l -t mv
A POSIX-portable implementation is via the -L option to xargs:
ls | egrep '^m' | awk '{ x=$0; gsub(/^./, ""); $0 = x " " $0 }1' | xargs -L 1 -t mv
ls | egrep '^m' | awk '{ x=$0; gsub(/^./, ""); print x, $0 }' | xargs -L 1 -t mv
Based on what I replied to your earlier query regarding xargs, we can put that learning to good use in this example.
I slightly modified your awk code: it preserves the original line ($0) since the gsub func is gonna clobber it. Then we put together the old & the new to get the line we want to send over to xargs which will then invoke mv with the right arguments to effect the rename.
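If you'd rather skip the ls/awk/xargs plumbing entirely, a plain-shell loop with parameter expansion does the same rename safely, even for names containing spaces (an alternative sketch, not the pipeline above):

```shell
# Throwaway directory with the question's file names
dir=$(mktemp -d)
cd "$dir" || exit 1
touch cat mbar mbaz mfoo

# ${f#m} is $f with one leading "m" stripped
for f in m*; do
  mv -- "$f" "${f#m}"
done

ls   # now contains: bar baz cat foo
```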
| How do I make a substitution to `$0` but save the old value? |
1,393,264,983,000 |
I found What do the numbers in a man page mean? which explains the sections for command/library documentation quite nicely, and I was looking at the output for man regex and noticed the See Also referred to regex(3).
I tried to run man 3 regex, but got the following message:
No manual page for regex in section 3
My question is - where is it?
This is on Ubuntu 10.04 if that makes a difference.
|
REGEX(3)
NAME
regcomp, regexec, regerror, regfree - POSIX regex functions
Works fine here on Arch Linux and also on the Internet...
You might need to (re)install them:
sudo apt-get install manpages manpages-dev manpages-posix manpages-posix-dev
| No manual page for regex in section 3 - where is it? |
1,393,264,983,000 |
I wanted to execute set -o vi, but instead I did set -ovi.
Now when I check set -o I see extra weird options such as:
$ set -o
set -o
...
update_terminal_cwd;
and as result, update_terminal_cwd; is printed on each command.
How do I undo my typo without restarting shell?
|
If you did set -ovi, then repeat it as set +ovi; the effect of one will be reversed by the other.
What you actually did was activate the verbose option set -o v which shows history (if set) as well.
The most important values set are printed by echo "$-", which is usually just himBH in interactive shells.
A longer list of values is printed by either set -o or set +o. The former prints a human-readable description of which options are set (on) or unset (off). The latter prints an almost complete list of commands that could be executed back to restore the set state.
With either the set -o or set +o list, check the state of verbose, and unset it with either of these commands:
set +o verbose
set +o v
Yes, + means unset (off), weird, I understand.
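A quick way to watch the v flag come and go in $- (while verbose is on, the shell also echoes your input to stderr, so expect some noise):

```shell
set -o verbose            # turn verbose on (what set -ovi did by accident, per above)
case $- in *v*) echo "verbose is on" ;; esac

set +o verbose            # the fix
case $- in *v*) echo "verbose still on" ;; *) echo "verbose is off" ;; esac
```

This prints "verbose is on" followed by "verbose is off".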
| How to undo typo `set -ovi`? |
1,393,264,983,000 |
I have a large pipe-delimited file where I need to find the line number of all lines where a certain field is empty.
I can use cut -d \| -f 6 filename.txt to output just that column.
What is a utility/tool/command I can use to find what output lines from the above are empty?
|
# cut -d \| -f 6 test.txt | grep -v -E .\+ -n
grep
-v invert match
-E .\+ match any 1+ character
-n output line numbers
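An awk one-liner can do the cut and the line numbering in one pass (the sample file here is made up for illustration):

```shell
f=$(mktemp)
# field 6 is empty on line 2 only; a single space (line 3) is not empty
printf 'a|b|c|d|e|f|g\na|b|c|d|e||g\nx|x|x|x|x| |x\n' > "$f"

# Print the line number of every record whose 6th field is empty
awk -F'|' '$6 == "" { print NR }' "$f"
# prints: 2
```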
| Get line numbers for lines with empty fields |
1,393,264,983,000 |
I have macOS, and this .bash_profile content:
export PS1="\[\e[0;31m$\]\[\e[m\] \[\e[0;32m\w\e[m\] : \]"
as a result I have pwd printed in terminal like this:
but when I press the up and down arrows to use the terminal history, I get a bug:
|
no need to export PS1: it's a variable for the shell, other processes aren't going to use it.
looks like you don't have the escaping brackets quite right. They are there to surround non-printing sequences, so bash can accurately figure how wide your prompt is. Try this:
PS1="\[\e[0;31m\]\$ \[\e[0;32m\]\w\[\e[0m\] : "
# 1.........1 2.........2 3......3
So the printing bits (\$, \w, the colon and the spaces) are outside the brackets.
Further reference: https://www.gnu.org/software/bash/manual/bashref.html#Controlling-the-Prompt
| Bash prompt getting garbled when I browse history? |
1,393,264,983,000 |
I have copied the rm executable from my machine's /bin/rm to another Linux machine that happens to be so minimal that it does not include the rm command.
When I tried to execute the rm command I got this error:
/bin/rm: /bin/rm: 1: Syntax error: "(" unexpected
Why won't it work? How could I "add" the rm functionality to this box?
(This box does not have a package manager installed either.)
|
rm is a binary file and therefore architecture dependent. It would only work if you copy from the same architecture and with the same required libraries installed.
Alternatively, you can compile it from source code or install the binary package. On Debian systems, rm is part of the coreutils package.
In case you already have a binary and want to know its architecture, use file or objdump commands.
| Why would a copied rm executable not work on another linux machine? |
1,393,264,983,000 |
Is there a simple way to search a text file (source code) for all instances of a specific integer. This should not trigger on larger numbers that happen to include the integer as a sub-string, but it can't simply exclude such lines since they could contain both cases:
searching for '6'...
int a=6; // found
int b=16; // not found (despite the '6' in '16')
int c=6, d=16; // found
I'm really looking for a command-line approach to this, but am also curious if there is a FOSS GUI-type editor that will do it.
|
grep -E '\b6\b'
\b is a "word boundary"
Edit: After pointing @nobar in the right direction, he found/pointed out the shortcut-option -w (word-regexp) in the manpage, which simplifies the above to:
grep -w 6
If used a lot, you could use a function similar to
wgrp(){ grep -w "$1" "$2"; }
Note (to @glenn-jackman): If you don't quote "$2" here, you can use the function as a pipeline filter. But yes, then it won't work with filenames with spaces.
After reading yet another great answer from @Gilles, I now propose
igrp(){ grep -E "(^|[^0-9])$1($|[^0-9])" "$2"; }
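Using the question's own sample lines to check the -w behaviour:

```shell
f=$(mktemp)
printf 'int a=6;\nint b=16;\nint c=6, d=16;\n' > "$f"

# Only whole-word 6 matches; the 6 inside 16 does not
grep -w 6 "$f"
# int a=6;
# int c=6, d=16;
```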
| How to search a text file for a specific integer |
1,393,264,983,000 |
I'm currently developing a home automation framework for my apartment. This involves getting JSON over Serial from an Arduino. When the JSON can't be parsed (usually only on startup) I log it as an error.
Today something strange happened though, the broken JSON caused one of my terminals to become weird.
[nodemon] restarting due to changes...
[nodemon] starting `node build/server`
sensor living-room-humidity on serial:arduino-master:42
sensor living-room-temperature on serial:arduino-master:42
sensor living-room-motion on serial:arduino-master:42
sensor living-room-brightness on serial:arduino-master:42
sensor kitchen-humidity on serial:arduino-master:43
sensor kitchen-temperature on serial:arduino-master:43
sensor kitchen-motion on serial:arduino-master:43
sensor kitchen-brightness on serial:arduino-master:43
listening on http://127.0.0.1:50000
serial opened: arduino-master
serial error can't parse JSON: S≤┼├▒│E⎼⎼⎺⎼: U┼e│⎻ec├ed ├⎺┐e┼
⎽e⎼☃▒┌ d☃⎽c⎺┴e⎼ed: /de┴/├├≤USB▮
/#Y⎼dB▒±dPSX2≤TB☃8AAAA c⎺┼┼ec├ed
/#Y⎼dB▒±dPSX2≤TB☃8AAAA ▒┤├▒e┼├☃c▒├ed
[┼⎺de└⎺┼] ⎼e⎽├▒⎼├☃┼± d┤e ├⎺ c▒▒┼±e⎽↓↓↓
[┼⎺de└⎺┼] ⎽├▒⎼├☃┼± ◆┼⎺de b┤☃┌d/⎽e⎼┴e⎼◆
⎽e┼⎽⎺⎼ ┌☃┴☃┼±↑⎼⎺⎺└↑▒┤└☃d☃├≤ ⎺┼ ⎽e⎼☃▒┌:▒⎼d┤☃┼⎺↑└▒⎽├e⎼:42
⎽e┼⎽⎺⎼ ┌☃┴☃┼±↑⎼⎺⎺└↑├e└⎻e⎼▒├┤⎼e ⎺┼ ⎽e⎼☃▒┌:▒⎼d┤☃┼⎺↑└▒⎽├e⎼:42
⎽e┼⎽⎺⎼ ┌☃┴☃┼±↑⎼⎺⎺└↑└⎺├☃⎺┼ ⎺┼ ⎽e⎼☃▒┌:▒⎼d┤☃┼⎺↑└▒⎽├e⎼:42
⎽e┼⎽⎺⎼ ┌☃┴☃┼±↑⎼⎺⎺└↑b⎼☃±▒├┼e⎽⎽ ⎺┼ ⎽e⎼☃▒┌:▒⎼d┤☃┼⎺↑└▒⎽├e⎼:42
⎽e┼⎽⎺⎼ ┐☃├c▒e┼↑▒┤└☃d☃├≤ ⎺┼ ⎽e⎼☃▒┌:▒⎼d┤☃┼⎺↑└▒⎽├e⎼:43
⎽e┼⎽⎺⎼ ┐☃├c▒e┼↑├e└⎻e⎼▒├┤⎼e ⎺┼ ⎽e⎼☃▒┌:▒⎼d┤☃┼⎺↑└▒⎽├e⎼:43
⎽e┼⎽⎺⎼ ┐☃├c▒e┼↑└⎺├☃⎺┼ ⎺┼ ⎽e⎼☃▒┌:▒⎼d┤☃┼⎺↑└▒⎽├e⎼:43
⎽e┼⎽⎺⎼ ┐☃├c▒e┼↑b⎼☃±▒├┼e⎽⎽ ⎺┼ ⎽e⎼☃▒┌:▒⎼d┤☃┼⎺↑└▒⎽├e⎼:43
┌☃⎽├e┼☃┼± ⎺┼ ▒├├⎻://127↓▮↓▮↓1:5▮▮▮▮
⎽e⎼☃▒┌ ⎺⎻e┼ed: ▒⎼d┤☃┼⎺↑└▒⎽├e⎼
⎽e⎼☃▒┌ d☃⎽c⎺┴e⎼ed: /de┴/├├≤USB▮
⎽e⎼☃▒┌ e⎼⎼⎺⎼ c▒┼'├ ⎻▒⎼⎽e JSON: π"☃d":42←"b⎼☃±▒├┼e⎽⎽":126←"└⎺├☃⎺π"☃d":42←"b⎼☃±▒├┼e⎽⎽":125←"└⎺├☃⎺┼":▮←"├e└⎻e⎼▒├┤⎼e":23↓8▮←"▒┤└☃d☃├≤":29 S≤┼├▒│E⎼⎼⎺⎼: U┼e│⎻ec├ed ├⎺┐e┼ ☃
/#5┌9⎻PA°2P└_└┐c_WAAAA c⎺┼┼ec├ed
/#5┌9⎻PA°2P└_└┐c_WAAAA ▒┤├▒e┼├☃c▒├ed
/#5┌9⎻PA°2P└_└┐c_WAAAA d☃⎽c⎺┼┼ec├ed
/#▒▒73J1G⎺CK↑▒XdVbAAAB c⎺┼┼ec├ed
/#▒▒73J1G⎺CK↑▒XdVbAAAB ▒┤├▒e┼├☃c▒├ed
/#▒▒73J1G⎺CK↑▒XdVbAAAB d☃⎽c⎺┼┼ec├ed
/#├┌7⎺°_▒R6FDON─┐HAAAC c⎺┼┼ec├ed
/#├┌7⎺°_▒R6FDON─┐HAAAC ▒┤├▒e┼├☃c▒├ed
/#├┌7⎺°_▒R6FDON─┐HAAAC d☃⎽c⎺┼┼ec├ed
/#┤_Q┬├XdLW1└C▒Z⎻8AAAD c⎺┼┼ec├ed
/#┤_Q┬├XdLW1└C▒Z⎻8AAAD ▒┤├▒e┼├☃c▒├ed
/#┤_Q┬├XdLW1└C▒Z⎻8AAAD d☃⎽c⎺┼┼ec├ed
/#3☃┐R±°_⎼E⎻☃Q┼K┘▒AAAE c⎺┼┼ec├ed
/#3☃┐R±°_⎼E⎻☃Q┼K┘▒AAAE ▒┤├▒e┼├☃c▒├ed
I know that this most likely can be fixed by just restarting that shell, but I still want to understand why this happens and (if possible) prevent it from happening again.
|
That happened because the output you produced included codes that your terminal interface interpreted as control codes.
This is normally resolved with either reset or stty sane.
| What can cause my shell to look like this? [duplicate] |
1,393,264,983,000 |
I am fairly new to regular expressions and am looking for a sed/awk/grep/wc command to find, in a pipe-delimited text file, the number of occurrences of the digit 1 after the 12th pipe.
Here is an example of the text file:
2|JOHN||HAY||2955|ROSE|ST|#39D|Tool|TX|769065589|2542444320|||2222299310|SSD||01/08/2014^M
8|ALEN|BOBRISE|FITZGERALD||5432|Red|Ave|Apt 253|Bloomington|MN|559322972||9582544754|||MINNESOTA JIL|MN|01/08/2014^M
My preference is sed or wc since this is what I am most familiar with, but I'll take what I can get.
|
I would use cut
cut -d '|' -f 12 myfile.txt | grep -c 1
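Note that with cut/awk numbering, the text after the 12th | is field 13. If "first character after the 12th pipe equal to 1" is meant literally, awk can check exactly that (made-up sample; adapt the field number to your data):

```shell
f=$(mktemp)
# field 13 starts with "1" only on the first line
printf 'a|a|a|a|a|a|a|a|a|a|a|a|1x|y\na|a|a|a|a|a|a|a|a|a|a|a|2x|y\n' > "$f"

# Count lines where the field after the 12th pipe starts with "1"
awk -F'|' 'substr($13, 1, 1) == "1" { n++ } END { print n + 0 }' "$f"
# prints: 1
```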
| Number of occurrences in a text file where first character after 12th pipe is equal to 1? |
1,393,264,983,000 |
Is there an easy way to re-apply a previous command to a new command line entry?
Say I typed in chmod u+r,g+x file.txt but forgot the sudo. Could I simply type sudo <some easy symbol>?
|
You can do:
sudo !!
Another good one is alt ., to insert the last parameter of the previous command
| Command to 're-apply' previous command? |
1,393,264,983,000 |
I have a file, filename, with a number of lines in it.
I want to count how many lines start with character 'a', with 'b' and so on in one go.
What command should I execute?
|
Try this:
<file.txt sed 's/^\(.\).*/\1/' | sort | uniq -c
Or, if you want it case insensitive, this:
<file.txt sed 's/^\(.\).*/\1/' | tr a-z A-Z | sort | uniq -c
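For example, with a made-up three-line file:

```shell
f=$(mktemp)
printf 'apple\nant\nbear\n' > "$f"

# Keep just the first character of each line, then tally
<"$f" sed 's/^\(.\).*/\1/' | sort | uniq -c
# counts: 2 a, 1 b
```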
| Count how many lines start with which characters |
1,393,264,983,000 |
I have big directory tree of files. I often use the find command to locate something in that tree. The first time after a reboot it takes some time, but subsequent uses are almost instant. Obviously find uses some internal datastructure that it has to recreate after a reboot.
Is there a way to keep this datastructure between reboots?
additional info:
the root of the directory tree is always the same, but it is on another drive that is not always mounted
it has ~50000 files over ~2000 directories
I use the -iregex option for find
|
If your directory tree is relatively static (i.e. files and directories are created or removed infrequently), rather than find, you might try using locate.
locate(1) General Commands Manual locate(1)
NAME
locate - find files by name
SYNOPSIS
locate [OPTION]... PATTERN...
DESCRIPTION
locate reads one or more databases prepared by updatedb(8)
and writes file names matching at least one of the
PATTERNs to standard output, one per line.
If --regex is not specified, PATTERNs can contain globbing characters.
If any PATTERN contains no globbing characters,
locate behaves as if the pattern were *PATTERN*.
By default, locate does not check whether files found in database
still exist (but it does require all parent directories to exist
if the database was built with --require-visibility no).
locate can never report files created after the most recent
update of the relevant database.
…
| How to save the progress of the "find" command? |
1,393,264,983,000 |
I wonder why a command that executes commands from a file in the current shell is named source. I can't see a relation between running commands in the current shell and the meaning of the English word source. Is there a history behind that name?
|
From Lexico, the Oxford Dictionary site:
source
VERB
[WITH OBJECT]
Obtain from a particular source.
Isn't that exactly what this command is doing? Obtaining variable, alias and function definitions, and other shell settings, from a particular file?
| Why does the command 'source' have that name? |
1,393,264,983,000 |
I have images, and I need to delete some files with the same size. But I don't need to remove all such images, only the next ones in the queue (in alphabetical order):
1.png # 23,5 Kb
2.png # 24,6 Kb
4.png # 24,6 Kb > remove
8.png # 24,6 Kb > remove
16.png # 23,5 Kb
|
If you're on Linux or otherwise have access to GNU tools, you can do this:
last=-1; find . -type f -name '*.png' -printf '%f\0' | sort -nz |
while read -d '' i; do
s=$(stat -c '%s' "$i");
[[ $s = $last ]] && rm "$i";
last=$s;
done
Explanation
last=-1 : set the variable $last to -1.
find . -type f -name '*.png' -printf '%f\0' : find all files in the current directory whose name ends in .png and print their name followed by the NULL character.
sort -nz : sort \0-separated input (-z) numerically (-n). This results in a sorted list of file names.
while read -d '' i; do : read the list of file names. The -d '' sets read's input delimiter to the NUL character, which is needed to process NULL-separated data correctly.
s=$(stat -c '%s' "$i"); : the variable $s now holds the size of the current file ($i).
[[ $s = $last ]] && rm "$i"; : if the current file's size is the same as the last file's size, delete the file.
last=$s : set $last to the current file's size. Now, if the next file has the same size, the previous step will delete it.
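Here is the same logic replayed on a throwaway directory in plain POSIX sh (dd stands in for real images; the names here contain no whitespace, so the simpler unquoted loop is safe):

```shell
dir=$(mktemp -d)
cd "$dir" || exit 1

# Recreate the question's layout: 1.png and 16.png share one size, 2/4/8 another
for spec in 1:10 2:20 4:20 8:20 16:10; do
  dd if=/dev/zero of="${spec%:*}.png" bs=1 count="${spec#*:}" 2>/dev/null
done

last=-1
for f in $(ls *.png | sort -n); do
  s=$(( $(wc -c < "$f") ))          # file size in bytes
  if [ "$s" -eq "$last" ]; then
    rm "$f"                          # same size as the previous file: drop it
  fi
  last=$s
done

ls | sort -n    # left: 1.png 2.png 16.png
```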
| How to remove the same size files in a directory? |
1,393,264,983,000 |
What's a single command I can run that shows me the total amount of space free on a hard drive? I don't want to do any math, I just want a command that shows me the total free space on my hard drive.
|
You can use df with the total flag
--total
produce a grand total
df --total
or
df --total -h
for human readable output (i.e K,M,G)
This wil produce output such as
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 23G 13G 8.7G 60% /
udev 4.0G 124K 4.0G 1% /dev
tmpfs 4.0G 72K 4.0G 1% /dev/shm
total 31G 13G 17G 44%
To only show the total for physical hard disk space (including nfs)
df -x tmpfs --total -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 23G 13G 8.7G 60% /
total 23G 13G 8.7G 60%
Depending on which filesystem types you want to exclude (as udev appears to be able to be either tmpfs or devtmpfs) you can use
df -T
Filesystem Type 1K-blocks Used Available Use% Mounted on
/dev/sda2 ext3 23731644 13486576 9039564 60% /
udev tmpfs 4093900 124 4093776 1% /dev
tmpfs tmpfs 4093900 72 4093828 1% /dev/shm
^
This column
to check the filesystem types, and then just put the ones to exclude into the -x option
Also note that all of these are GNU extensions so you require GNU df.
| What command would I use to find out how much total space my hard drive has left? |
1,393,264,983,000 |
I followed this tutorial to install FreeBSD 10.1, and at the step where it says "Add the following lines to /etc/rc.conf" I must add the following lines in there:
hald_enable="YES"
dbus_enable="YES"
performance_cx_lowest="Cmax"
economy_cx_lowest="Cmax"
But I'm new to Unix and I don't know how to add these lines to /etc/rc.conf; I tried with cd but it says Too many arguments. How can I add these lines to /etc/rc.conf?
EDIT: I haven't installed a desktop environment yet.
|
You need to learn some sort of text editor. There are several available for FreeBSD like nano, ed, vi, emacs, and many others. I don't want to start a flame war so I'll encourage you to learn them on your own.
If you want the really quick and dirty way to accomplish what you're asking, try:
cat >> /etc/rc.conf << "EOF"
hald_enable="YES"
dbus_enable="YES"
performance_cx_lowest="Cmax"
economy_cx_lowest="Cmax"
EOF
| How can I add lines in /etc/rc.conf? |
1,393,264,983,000 |
I want to be able to search all available software like one would using "Synaptic Package Manager" or "Ubuntu Software Center" through the command line.
I want a better way than pressing Tab after typing a few letters after "sudo apt-get install ". It is not enough and can't search as deeply as Synaptic Package Manager.
|
You can use apt-cache search. For example to search firefox:
apt-cache search firefox
This is the corresponding snippet from man 8 apt-cache:
search regex [ regex ... ]
search performs a full text search on all available package lists
for the POSIX regex pattern given, see regex(7). It searches the
package names and the descriptions for an occurrence of the regular
expression and prints out the package name and the short description,
including virtual package names. If --full is given then output
identical to show is produced for each matched package, and if
--names-only is given then the long description is not searched,
only the package name is.
Separate arguments can be used to specify multiple search patterns
that are and'ed together.
| How to search for available software in repositories though CLI? |
1,393,264,983,000 |
When I try to edit or assign the IP using this command:
system-config-network-tui &
Terminal opens a console which is uncontrollable, just like this:
This happens on CentOS and Red Hat.
|
Try not opening it as a background process: run system-config-network-tui instead of system-config-network-tui &. This has worked for me, and later I switched to editing the config files at /etc/sysconfig/network-scripts/
| "system-config-network-tui &" doesn't work |
1,284,113,005,000 |
I want to know about the "filter commands" which are available in Unix.
I am confused regarding this:
What is the purpose of a "filter command"?
Which filter commands are available in Unix?
I have read some books/articles on the web; in some books I found a few filter commands, and in others I found some different ones.
|
I'm not sure what you're asking without context. "Traditional" Unix tools read from standard input and write to standard output, so you can chain them together using a pipe, written with the | operator:
ls | grep "banana" | more
Things which read from standard input and write to standard output are filters.
There is an article here: http://en.wikipedia.org/wiki/Filter_(Unix)
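A non-interactive variant of the same chain (tr stands in for more here, so the pipeline runs unattended; each stage is a filter reading stdin and writing stdout):

```shell
# grep filters to the matching lines, tr transforms them
printf 'banana\napple\nbandana\n' | grep 'banana' | tr 'a-z' 'A-Z'
# BANANA
```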
| Unix - Filter Commands |
1,284,113,005,000 |
I have some .vcf files and I want to filter some variants out. This is just a small part of my file: there are some header lines at the beginning of the file (starting with ##) and then variants (one row per variant).
##fileformat=VCFv4.2
##source=combiSV-v2.2
##fileDate=Mon May 8 11:32:53 2023
##contig=<ID=chrM,length=16571>
##contig=<ID=chr1,length=249250621>
##INFO=<ID=END,Number=1,Type=Integer,Description="End position of the variant described in this record">
##INFO=<ID=SVCALLERS,Number=.,Type=String,Description="SV callers that support this SV">
##FORMAT=<ID=GT,Number=1,Type=String,Description="Genotype">
##FORMAT=<ID=DR,Number=1,Type=Integer,Description="# High-quality reference reads">
##FORMAT=<ID=DV,Number=1,Type=Integer,Description="# High-quality variant reads">
#CHROM POS ID REF ALT QUAL FILTER INFO FORMAT Sample
1 10862 id.1 N <INS> . PASS SVTYPE=INS;SVLEN=101;END=10862;SVCALLERS=cutesv,SVIM GT:DR:DV 1/1:0:26
1 90258 id.2 N <INS> . PASS SVTYPE=INS;SVLEN=118;END=90258;SVCALLERS=SVIM,NanoSV GT:DR:DV 1/1:0:9
1 90259 id.3 N <INS> . PASS SVTYPE=INS;SVLEN=36;END=90259;SVCALLERS=Sniffles GT:DR:DV 0/1:44:7
1 185824 id.4 N <DEL> . PASS SVTYPE=DEL;SVLEN=80;END=186660;SVCALLERS=Sniffles,cutesv GT:DR:DV 1/1:0:15
1 186241 id.5 N <DEL> . PASS SVTYPE=DEL;SVLEN=418;END=186662;SVCALLERS=SVIM,NanoSV GT:DR:DV 1/1:2:12
1 526111 id.6 N <DEL> . PASS SVTYPE=DEL;SVLEN=624;END=526735;SVCALLERS=Sniffles,cutesv GT:DR:DV 0/1:8
2 91926078 id.3958 N <BND> . PASS SVTYPE=BND;SVLEN=.;END=;SVCALLERS=Sniffles,NanoSV GT:DR:DV 0/1:60:15
While keeping the header lines, I want to remove rows with SVLEN < 100 and those with only one SVCALLERS listed. These are two criteria that both must be met (in other words, I want to keep only rows with SVLEN > 100 and at least two SVCALLERS).
In addition there are some rows where ALT is BND and the file does not provide any SVLEN for this type of variant, so if the row contains BND, I just want to keep it if it is supported by two callers.
Examples: I want to drop this variant because SVLEN is less than 100 and only one SVCALLERS detected it:
1 90259 id.3 N <INS> . PASS SVTYPE=INS;SVLEN=36;END=90259;SVCALLERS=Sniffles GT:DR:DV 0/1:44:7
Or this row as well, although there are two callers, because SVLEN is less than 100:
1 185824 id.4 N <DEL> . PASS SVTYPE=DEL;SVLEN=80;END=186660;SVCALLERS=Sniffles,cutesv GT:DR:DV 1/1:0:15
Is there an easy way to do it? Thanks
My final file should look like this:
##fileformat=VCFv4.2
##source=combiSV-v2.2
##fileDate=Mon May 8 11:32:53 2023
##contig=<ID=chrM,length=16571>
##contig=<ID=chr1,length=249250621>
##INFO=<ID=END,Number=1,Type=Integer,Description="End position of the variant described in this record">
##INFO=<ID=SVCALLERS,Number=.,Type=String,Description="SV callers that support this SV">
##FORMAT=<ID=GT,Number=1,Type=String,Description="Genotype">
##FORMAT=<ID=DR,Number=1,Type=Integer,Description="# High-quality reference reads">
##FORMAT=<ID=DV,Number=1,Type=Integer,Description="# High-quality variant reads">
#CHROM POS ID REF ALT QUAL FILTER INFO FORMAT Sample
1 10862 id.1 N <INS> . PASS SVTYPE=INS;SVLEN=101;END=10862;SVCALLERS=cutesv,SVIM GT:DR:DV 1/1:0:26
1 90258 id.2 N <INS> . PASS SVTYPE=INS;SVLEN=118;END=90258;SVCALLERS=SVIM,NanoSV GT:DR:DV 1/1:0:9
1 186241 id.5 N <DEL> . PASS SVTYPE=DEL;SVLEN=418;END=186662;SVCALLERS=SVIM,NanoSV GT:DR:DV 1/1:2:12
1 526111 id.6 N <DEL> . PASS SVTYPE=DEL;SVLEN=624;END=526735;SVCALLERS=Sniffles,cutesv GT:DR:DV 0/1:8
2 91926078 id.3958 N <BND> . PASS SVTYPE=BND;SVLEN=.;END=;SVCALLERS=Sniffles,NanoSV GT:DR:DV 0/1:60:15
|
Here's a perl way:
$ perl -F'\t' -lane '
if(/^#/){ print; next };
$F[7] =~ /\bSVLEN=(\d+)/;
$svlen=$1;
$F[7] =~ /\bSVCALLERS=([^;]+)/;
@callers=split(/,/,$1);
print if $svlen > 100 && scalar(@callers) > 1' file.vcf
##fileformat=VCFv4.2
##source=combiSV-v2.2
##fileDate=Mon May 8 11:32:53 2023
##contig=<ID=chrM,length=16571>
##contig=<ID=chr1,length=249250621>
##INFO=<ID=END,Number=1,Type=Integer,Description="End position of the variant described in this record">
##INFO=<ID=SVCALLERS,Number=.,Type=String,Description="SV callers that support this SV">
##FORMAT=<ID=GT,Number=1,Type=String,Description="Genotype">
##FORMAT=<ID=DR,Number=1,Type=Integer,Description="# High-quality reference reads">
##FORMAT=<ID=DV,Number=1,Type=Integer,Description="# High-quality variant reads">
#CHROM POS ID REF ALT QUAL FILTER INFO FORMAT Sample
1 10862 id.1 N <INS> . PASS SVTYPE=INS;SVLEN=101;END=10862;SVCALLERS=cutesv,SVIM GT:DR:DV 1/1:0:26
1 90258 id.2 N <INS> . PASS SVTYPE=INS;SVLEN=118;END=90258;SVCALLERS=SVIM,NanoSV GT:DR:DV 1/1:0:9
1 186241 id.5 N <DEL> . PASS SVTYPE=DEL;SVLEN=418;END=186662;SVCALLERS=SVIM,NanoSV GT:DR:DV 1/1:2:12
1 526111 id.6 N <DEL> . PASS SVTYPE=DEL;SVLEN=624;END=526735;SVCALLERS=Sniffles,cutesv GT:DR:DV 0/1:8
Explanation
perl -F'\t' -lane: the -a makes perl work kinda like awk in that it automatically splits each input line on the character given by -F (whitespace, by default, like awk) and saves it in the array @F. So, since arrays start counting at 0, the first field is $F[0], the second $F[1] and so on. Next, the -l removes trailing newlines from each input line, and also adds a \n to each print call, and the -n means "read the input file line by line and apply the script given by -e to each line".
if(/^#/){ print; next }; : if this is a header line, just print it and move on to the next line.
$F[7] =~ /\bSVLEN=(\d+)/; $svlen=$1;: match the longest string of numbers after SVLEN= in the 8th field and save it as $svlen. This will be the length. The \b ensures we only match at word boundaries so this won't fail if you have something like NOTSVLEN= in your file.
$F[7] =~ /\bSVCALLERS=([^;]+)/; @callers=split(/,/,$1);: now search the 8th field for the string SVCALLERS=, take the longest stretch of non-; characters after it, and then split that on , into the array @callers. This now has the list of CNV callers used for this CNV.
print if $svlen > 100 && scalar(@callers) > 1: we now print the line if the length was more than 100 and the number of callers (scalar(@array) gives the number of elements in an array) is more than 1.
And here's the same basic thing in a more concise and less clear way if you prefer your commands mildly golfed:
perl -F'\t' -lane '$F[7]=~/\bSVLEN=(\d+)/;$s=$1;$F[7]=~/\bSVCALLERS=([^;]+)/; /^#/ || ($s>100&&scalar(split(/,/,$1)) > 1) || next; print' file.vcf
If you also want to keep lines with no SVLEN as long as they have at least two callers, use this:
$ perl -F'\t' -lane 'if(/^#/){ print; next }; $F[7] =~ /\bSVLEN=([.\d]+)/; $svlen=$1; $F[7] =~ /\bSVCALLERS=([^;]+)/; next unless ($svlen > 100 || $svlen == ".") && scalar(split(/,/,$1)) > 1; print' file.vcf
##fileformat=VCFv4.2
##source=combiSV-v2.2
##fileDate=Mon May 8 11:32:53 2023
##contig=<ID=chrM,length=16571>
##contig=<ID=chr1,length=249250621>
##INFO=<ID=END,Number=1,Type=Integer,Description="End position of the variant described in this record">
##INFO=<ID=SVCALLERS,Number=.,Type=String,Description="SV callers that support this SV">
##FORMAT=<ID=GT,Number=1,Type=String,Description="Genotype">
##FORMAT=<ID=DR,Number=1,Type=Integer,Description="# High-quality reference reads">
##FORMAT=<ID=DV,Number=1,Type=Integer,Description="# High-quality variant reads">
#CHROM POS ID REF ALT QUAL FILTER INFO FORMAT Sample
1 10862 id.1 N <INS> . PASS SVTYPE=INS;SVLEN=101;END=10862;SVCALLERS=cutesv,SVIM GT:DR:DV 1/1:0:26
1 90258 id.2 N <INS> . PASS SVTYPE=INS;SVLEN=118;END=90258;SVCALLERS=SVIM,NanoSV GT:DR:DV 1/1:0:9
1 186241 id.5 N <DEL> . PASS SVTYPE=DEL;SVLEN=418;END=186662;SVCALLERS=SVIM,NanoSV GT:DR:DV 1/1:2:12
1 526111 id.6 N <DEL> . PASS SVTYPE=DEL;SVLEN=624;END=526735;SVCALLERS=Sniffles,cutesv GT:DR:DV 0/1:8
2 91926078 id.3958 N <BND> . PASS SVTYPE=BND;SVLEN=.;END=;SVCALLERS=Sniffles,NanoSV GT:DR:DV 0/1:60:15
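If perl is not to hand, the same filtering logic can be sketched in awk (an alternative, not the answer above; the rows here are cut-down stand-ins for the real file, and rows without a numeric SVLEN, such as BND, are kept when they have two callers):

```shell
f=$(mktemp)
{
  printf '##fileformat=VCFv4.2\n'
  printf '1\t90259\tid.3\tN\t<INS>\t.\tPASS\tSVTYPE=INS;SVLEN=36;END=90259;SVCALLERS=Sniffles\tGT\t0/1\n'
  printf '1\t90258\tid.2\tN\t<INS>\t.\tPASS\tSVTYPE=INS;SVLEN=118;END=90258;SVCALLERS=SVIM,NanoSV\tGT\t1/1\n'
  printf '2\t91926078\tid.3958\tN\t<BND>\t.\tPASS\tSVTYPE=BND;SVLEN=.;END=;SVCALLERS=Sniffles,NanoSV\tGT\t0/1\n'
} > "$f"

awk -F'\t' '
/^#/ { print; next }                                  # keep headers
{
  len = 0                                             # 0 = no numeric SVLEN (e.g. BND)
  if (match($8, /SVLEN=[0-9]+/))
    len = substr($8, RSTART + 6, RLENGTH - 6) + 0     # +0 forces numeric comparison
  match($8, /SVCALLERS=[^;]*/)
  ncall = split(substr($8, RSTART + 10, RLENGTH - 10), a, ",")
  if ((len > 100 || len == 0) && ncall > 1) print     # id.3 dropped; id.2, id.3958 kept
}' "$f"
```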
| filter lines based on some criteria |
1,284,113,005,000 |
I am asking why these three commands give three different answers:
$ printf "%s\n" `echo ../.. | sed 's/[.]/\\&/g'`
&&/&&
$ printf "%s\n" $(echo ../.. | sed 's/[.]/\\&/g')
../..
$ printf "%s\n" "$(echo ../.. | sed 's/[.]/\\&/g')"
\.\./\.\.
|
This is a tough question. The easiest example to explain is the last one.
Fully quoted
Well, actually, a fully quoted example:
$ printf "%s\n" "$( echo "../.." | sed 's/[.]/\\&/g' )"
\.\./\.\.
Here there are no tricks or changes done by the shell because everything is quoted. The innermost echo "../.." is quoted and therefore not subject to filename expansion. It reaches sed unchanged, and sed prepends a \ to each dot (the replacement \& is a literal backslash followed by the matched character). Then the result of that command substitution is also quoted ("$(...)"), which (again) avoids any change by the shell, and the printf command prints \.\./\.\.. No surprises here.
Change to ##/##
If there is filename expansion (globbing) over ../.. we end up with ../.. anyway. So the end string is the same. But let's test this issue:
$ echo ../../t*
../../test2.txt ../../tested ../../test.txt
$ set -f # remove all filename expansion
$ echo ../../t* ../..
../../t* ../..
And, anyway, a string that doesn't contain a *, a ?, or a [ is not subject to Pattern Matching Notation
Proof: Set GLOBIGNORE
$ (GLOBIGNORE='.*:..*'; echo "any" ../../t* ../.. "value")
any ../.. value
If ../.. were subject to globbing (filename expansion) then it would be removed due to the value of GLOBIGNORE.
However, to be sure that there is no filename expansion we can switch to a (most probably) non-existent filename: ##/##
Unquoted command execution
There is no reason that the shell should remove a \ even if (the command execution is) unquoted:
$ printf '%s\n' $(printf '%s\n' '\#\#/\#\#')
\#\#/\#\#
In fact, none of the shells I tested shows what you report in your second example. (please correct me!).
EDIT: In bash 5.0.17 there is a bug. But only with the dot.
$ b50sh
$ echo $BASH_VERSION
5.0.17(3)-release
$ printf '%s\n' $(printf '%s\n' '\.\./\.\.')
../..
$ printf '%s\n' $(printf '%s\n' '\#\#/\#\#')
\#\#/\#\#
$ printf '%s\n' $(printf '%s\n' '\>\>/\>\>')
\>\>/\>\>
$ exit
$ echo $BASH_VERSION
5.1.4(1)-release
$ printf '%s\n' $(printf '%s\n' '\.\./\.\.')
\.\./\.\.
$ printf '%s\n' $(printf '%s\n' '\#\#/\#\#')
\#\#/\#\#
And it seems that this has been fixed in bash 5.1.4
Backticks
Here is the trickiest issue: Inside backticks every \\ becomes a \.
So, the sed command (as seen by sed) is: s/[#]/\&/g. To do what I believe you meant, you need to double the \s:
$ printf '%s\n' `printf '%s\n' '##/##' | sed 's/[#]/\\&/g'`
&&/&&
$ printf '%s\n' "`printf '%s\n' '##/##' | sed 's/[#]/\\&/g'`"
&&/&&
$ printf '%s\n' "`printf '%s\n' '##/##' | sed 's/[#]/\\\\&/g'`"
\#\#/\#\#
| Can someone tell me the difference between these three command substitutions? |
1,284,113,005,000 |
What's the generic way to mute/unmute my system's default sound output?
$ amixer set Master mute
amixer: Unable to find simple control 'Master',0
$ amixer scontrols
Simple mixer control 'IEC958',0
Simple mixer control 'IEC958',1
Simple mixer control 'IEC958',2
Simple mixer control 'IEC958',3
Simple mixer control 'IEC958',4
Simple mixer control 'IEC958',5
I know sound control has been moving away from amixer to PulseAudio; however, I'm still able to use the ALSA "Master" control on my Debian 10, but not on my Ubuntu 21.10, see above.
There is pactl set-sink-mute 0 1 from https://superuser.com/questions/805525/; I tried it, but it doesn't work on my Ubuntu 21.10 above.
All in all, I just need a generic way to mute/unmute my system's default sound output that is good across all my machines and all my Linuxes, just like the ALSA "Master" control.
|
I've been using this command for ages now:
pactl set-sink-mute @DEFAULT_SINK@ toggle
This mutes/unmutes depending on the current state.
Also to increase volume: pactl set-sink-volume @DEFAULT_SINK@ +3%
or decrease volume: pactl set-sink-volume @DEFAULT_SINK@ -3%
| How to mute/unmute default sound output |
1,284,113,005,000 |
I have a strings in file:
7017556626 TEST BSAB 20191108 TEST123 3333 1111 BSAB 11
7007760674 TESTCHAS 20191108 TEST123 4444 5555 CHAS 22
7017556626 TEST 20191108 TEST123 3333 1111 CHAS 33
7017556626 TEST SSEQ 20191108 TEST123 2222 7777 BSAB 44
7007760674 TESTCHAS 20191108 TEST123 1111 0000 55
I need to add space before position 16
7017556626 TEST BSAB 20191108 TEST123 3333 1111 BSAB 11
7007760674 TEST CHAS 20191108 TEST123 4444 5555 CHAS 22
7017556626 TEST 20191108 TEST123 3333 1111 CHAS 33
7017556626 TEST SSEQ 20191108 TEST123 2222 7777 BSAB 44
7007760674 TEST CHAS 20191108 TEST123 1111 0000 55
How can I do it?
|
With awk:
awk 'substr($0,16,1) != " " { $0=substr($0,1,15)" "substr($0,16) }1' file
If the character at position 16 is not a space, replace the current line with its first 15 characters, a space, and the rest of the line from position 16 on. (Note that string positions in awk's substr() start at 1, so the prefix is substr($0,1,15).)
Then print the current line (1).
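The same edit can also be sketched with sed (my own untested-beyond-the-sample-data variant): capture the first 15 characters, and insert a space only when the 16th character is not already a space:

```shell
sed 's/^\(.\{15\}\)\([^ ]\)/\1 \2/' file
```

Lines that already have a space at position 16 don't match the second capture group and pass through unchanged.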
| Add space before position in file |
1,284,113,005,000 |
I'm doing some hands on pen testing and following some guides to get an understanding of the tools of the trade. I'm following along with the guide here and I understand everything except for the last page.
I need assistance understanding sudo -l below. I know that it details what the current user can do. However, what does the output below mean?
And how about the command below (excluding touch)? It kind of confuses me because after running that command (exploit?), I was able to get root.
From my understanding, the line is saying to run the command as root (or elevate to root), zip the file called exploit, and place the archive in /tmp/exploit.zip. I believe I'm wrong, but that's where my understanding of that line stops.
I'm confused as to how I got root with that command and what that line is doing.
|
For your first question, the indicated lines of output are telling you that you are permitted to run /bin/tar and /usr/bin/zip via sudo as the root user without even needing to provide zico's password.
For your second question, we get the answer from zip's manual page:
--unzip-command cmd
Use command cmd instead of 'unzip -tqq' to test an archive when the -T option is used.
So, since you're privileged to run zip as the root user through sudo, the exploit is simply telling zip "hey, when you're testing this archive, use the command sh -c /bin/bash to test it, would you?" and it's helpfully doing so, giving you a root shell.
The exploit file is just there to provide zip something to compress, so that there would be something to "test". It's never being run or anything and indeed in your demonstration is simply an empty file.
$ sudo -u root zip /tmp/exploit.zip /tmp/exploit -T --unzip-command="sh -c /bin/bash"
is instructing sudo to, as the root user, run this command:
$ zip /tmp/exploit.zip /tmp/exploit -T --unzip-command="sh -c /bin/bash"
This command will take the file /tmp/exploit and put it into a new archive, /tmp/exploit.zip. The -T switch tells zip to then Test the integrity of the archive, and the --unzip-command switch is telling zip how to test the archive. This last thing is the actual exploit: because zip is being run as root, running sh -c /bin/bash gives you a shell as the root user.
| Understanding sudo and possible exploit |
1,284,113,005,000 |
I have two directories that both have a couple thousand files each, and I am trying to grep certain IPs from the files. My grep string is:
grep "IP" cdr/173/07/cdr_2018_07*
This grep string returns "grep: Argument list too long". However, when I do the following:
grep "IP" cdr/173/06/cdr_2018_06*
it returns what I am looking for.
Below is the ls -l for the parent directory for each of these. It seems that the difference is about 400KB, so I'm not sure that size is really the issue here. Am I missing something?
jeblin@debian:~$ ls -l cdr/173
total 18500
REDACTED
drwxr-xr-x 2 jeblin jeblin 2781184 Jul 2 09:34 06
drwxr-xr-x 2 jeblin jeblin 2826240 Aug 1 07:33 07
If it makes a difference, I wrote a Python script that automates this process (searching for multiple IPs), and it works for 06, but not 07 as well, which is why I tried to do the manual grep search first.
|
The shell is not able to call grep with that many files; or rather, the length of the command line1 for calling an external utility has a limit, and you're hitting it when the shell tries to call grep with the expanded cdr/173/07/cdr_2018_07* globbing pattern.
What you can do is either to grep each file individually, with
for pathname in cdr/173/07/cdr_2018_07*; do
grep "IP" "$pathname" /dev/null
done
where the extra /dev/null will force grep to always report the filename of the file that matched, or you can use find:
find cdr/173/07 -maxdepth 1 -type f -name 'cdr_2018_07*' \
-exec grep "IP" /dev/null {} +
which will be more efficient as grep will be called with as many matching pathnames as possible in batches.
It could also be that if you first cd into cdr/173/07 and do
grep "IP" cdr_2018_07*
it may work since the generated list of filenames would be shorter due to not containing the directory bits, but you're probably very close to the limit with 44.7k files and should seriously consider moving to another way of doing this, especially if you're expecting the number of files to fluctuate around that number.
Related:
Understanding the -exec option of `find`
What defines the maximum size for a command single argument? (tangentially related)
How to cause "Argument list too long" error?
Other questions on U&L relating to "Argument list too long"
1The limit is on the combined length on the command line and the length of the environment (the sum of the length of each argument and environment variable's name and value, also accounting for the pointers to them), and it is a limit imposed by the execve() system call which is used by the shell to execute external commands. Built-in commands such as echo etc. do not have this issue.
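As an aside, on Linux you can inspect the limit (and the environment's share of it) yourself:

```shell
# Maximum combined size of arguments + environment for execve(), in bytes
getconf ARG_MAX
# Approximate size currently consumed by the environment
env | wc -c
```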
| grep works with one filepath, not another |
1,284,113,005,000 |
Say I have these files:
essay.aux essay.out
essay.dvi essay.pdf
essay.fdb_latexmk essay.tex
essay.fls essay.toc
essay.log ......
How do I rename them to:
new_name.aux new_name.out
new_name.dvi new_name.pdf
new_name.fdb_latexmk new_name.tex
new_name.fls new_name.toc
new_name.log ......
The problem is that they have different extensions rather than different names, so I cannot use answers from this question. Also, I'm on macOS which doesn't have a rename command.
|
Here is a solution I was able to get working:
#!/bin/bash
shopt -s nullglob
my_files='/root/temp/files'
old_name='essay'
new_name='new_name'
for file in "${my_files}/${old_name}"*; do
my_extension="${file##*.}"
mv "$file" "${my_files}/${new_name}.${my_extension}"
done
shopt -s nullglob
This will prevent an error if the directory it's parsing is empty
for file in "${my_files}/${old_name}"*; do
We are going to loop over every file in /root/temp/files/ so long as it begins with essay
my_extension="${file##*.}"
This will greedily trim anything up to the last . found in the filename (hopefully leaving you with only the extension)
mv "$file" "${my_files}/${new_name}.${my_extension}"
This moves the old file to the new filename while preserving the extension (the actual rename).
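For the specific filenames in the question, the same idea collapses to a short loop run from the directory containing the files (a sketch assuming nothing else in that directory starts with essay.):

```shell
# Rename essay.<ext> to new_name.<ext> for every extension present
for f in essay.*; do
    mv -- "$f" "new_name.${f##*.}"
done
```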
| How to rename files with different extensions |
1,284,113,005,000 |
I am trying to exit a while loop as soon as it returns no output.
So if I am monitoring (with a while loop) the output of a command that changes, how do I exit the loop once the string I am monitoring no longer exists. (Say "purple" disappears from the output string in the example below)
$ while :; do clear; "is_purple_present_monitoring_script" | grep purple ; sleep 15; done
|
It's the last command in condition-list that determines when the while loop exits.
while
clear
"is_purple_present_monitoring_script" | grep purple
do
sleep 15
done
You could move the condition to the action-list and use break there (and use true or : as the condition-list) like with:
while
true
do
clear
"is_purple_present_monitoring_script" | grep purple || break
sleep 15
done
But that would be quite a contrived way to use a while loop.
| How to exit a while loop |
1,284,113,005,000 |
I declared a constant using define statement in my C file.
#define COMPRESSION_VERSION 1.0.0
Now I have created libcompression.a library which includes the above C file. Now I need to check my defined constant value in the library using terminal.
|
#define COMPRESSION_VERSION 1.0.0
is a C pre-processor directive, which isn’t expected to survive macro expansion, let alone compilation.
If you want a symbol that appears in your library, you need to add it explicitly; for example
static const char * COMPRESSION_VERSION = "1.0.0";
This will then appear in your library:
$ nm -A libcompression.a
libcompression.a:compression.o:0000000000000000 d COMPRESSION_VERSION
and you can see its value using objdump -s.
A common technique is to embed the version in the symbol; e.g. for OpenSSL:
$ nm -D /usr/lib/x86_64-linux-gnu/libssl.so.1.1|grep OPENSSL_1
0000000000000000 A OPENSSL_1_1_0
0000000000000000 A OPENSSL_1_1_0d
| How to see constant values in .a lib files? |
1,284,113,005,000 |
We have a CI server application run as the build user. Any command with arguments run by the CI server is visible via ps. Although non-admin users do not have access to load a shell on the CI server, they do however have access to running unix commands via a task.
My concern is; user A can potentially expose a user B's task which has command line arguments (which could potentially be sensitive info) by simply doing a ps.
Note that all tasks within the CI server are run as the build user. Users cannot switch to a different user.
I could perhaps block the ps command so that, within a task, a user cannot execute ps, which should solve my problem; however, I'm curious to know:
Are there other commands that can be run to expose command line arguments without having root privileges?
Given the context of this problem, is there a better/secure way to manage it?
|
I'm afraid all commands are run as the build user.
Then anybody who submits a build can see, and even interfere with, the jobs of another user. Running a build can execute arbitrary code; this allows anybody who submits a build not only to run ps but also to read and write files belonging to other jobs. If you can't trust the users who submit builds then you must run the builds as separate users.
If you're concerned only with users who have an account on that CI server but aren't allowed to submit builds, then the hidepid option may help you. Alternatively, educate build submitters to pass confidential information in files or environment variables instead of command line arguments. Note that the ps command isn't what you need to take care of, it's just a pretty-printer for information found in the proc filesystem. The command line of process 1234 can be printed with cat /proc/1234/cmdline.
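For reference, hidepid is a mount option of Linux's proc filesystem; a sketch of what enabling it might look like (requires root, and exact behaviour may vary by kernel version — treat this as a configuration sketch, not a tested recipe):

```shell
# Temporarily, on a running system:
mount -o remount,hidepid=2 /proc

# Persistently, as a line in /etc/fstab:
# proc  /proc  proc  defaults,hidepid=2  0  0
```

With hidepid=2, /proc entries of other users' processes are hidden entirely from non-root users.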
If you have confidentiality concerns with builds, I recommend that rather than attempting to plug one potential information leak at a time, you run all builds in a container or virtual machine.
| How to stop a user from seeing command line arguments? |
1,284,113,005,000 |
I have some very large tables and I need to extract specific rows. I am illustrating the task using a simple example. Say, I have weighed a number of apples, bananas and oranges. I need to extract the weight of the smallest apple, banana and orange
Original table:
Apple 3
Banana 8
Orange 2
Apple 7
Banana 9
Orange 13
Apple 9
Banana 1
Orange 11
Desired output:
Apple 3
Banana 1
Orange 2
|
With awk:
$ awk '($2<a[$1] || !a[$1]){a[$1]=$2}END{for(f in a){print f,a[f]}}' file
Orange 2
Banana 1
Apple 3
a[$1]=$2 sets up an array called a, whose keys are the 1st field and whose value is the second. The script above will save the second field as the value for the first in the array if i) it is smaller than the value stored or ii) there is no value stored. The END block iterates over the array printing its contents.
With GNU sort:
$ sort -nk2 file | sort -u -k1,1
Apple 3
Banana 1
Orange 2
The first sort will print the lines in ascending order of weight (the 2nd field) and the second will only keep unique lines, but checks the 1st field only. The result is that the first occurrence of each string is printed which, because of the 1st sort, will be the smallest value for that fruit.
And a (slightly) shorter Perl:
$ perl -lane '$k{$F[0]}//=$F[1]; $k{$F[0]}=$F[1] if $F[1]<$k{$F[0]};
END{print "$_ $k{$_}" for keys(%k)}' file
Orange 2
Apple 3
Banana 1
The //= will assign a value unless the variable already has one. Then, the approach is the same as the awk one. We create the hash %k whose keys are the fruit and whose values their weight, and save the smallest value. The -a flag causes perl to act like awk and split its input on whitespace into the @F array.
| Find the smallest numbers in the second column corresponding to index values in first column |
1,284,113,005,000 |
Here is what I did
less -N file1 > file2
what I want is writing file1 into file2 with the line-numbers option.
But I failed with that.
Any suggestion to do that?
Why did I fail to do it?
Thanks.
|
less is the wrong tool for the job.
You can use cat for that:
cat -n file1 >file2
Or nl:
nl -ba file1 >file2
Or pr:
pr -n -t -T file1 >file2
Or sed:
sed '/./=' file1 | sed '/./N; s/\n/\t/' >file2
Or grep:
grep -n . file1 | sed 's/:/\t/' >file2
Or awk:
awk '{ $0 = NR "\t" $0 } 1' file1 >file2
Or again awk:
awk '{ sub(/^/, NR "\t") } 1' file1 >file2
Or perl:
perl -pe '$_=$.."\t".$_' file1 >file2
Or again perl:
perl -pe 's/^/$.\t/' file1 >file2
Or seq and paste:
seq $(wc -l file1 | cut -d' ' -f1) | paste - file1 >file2
Or even a plain shell script:
count=0
while IFS= read -r line; do
count=$((count + 1))
printf '%d\t%s\n' $count "$line"
done <file1 >file2
But less is the wrong tool. :)
| Redirect output of less utility to a file |
1,284,113,005,000 |
What is the most reliable way to give all users read/write privileges for a given directory, all of its subdirectories, and files in CentOS 7?
In an eclipse web application project that uses Maven, I am getting the following compilation error in the pom.xml:
Parent of resource: /home/user/workspace/MinimalDbaseExample/target/m2e-wtp is marked as read-only.
Since this sounds like a permissions issue, I typed in the following in the CentOS 7 terminal:
chmod -R ugo+rw /home/user/workspace/MinimalDbaseExample/target/
And I also tried:
chmod -R 0777 /home/user/workspace/MinimalDbaseExample
But eclipse is still showing the compilation error, even after multiple Project clean and Maven update operations. However, I am able to import the same zipped project file into a Windows version of eclipse, and there is no compilation error related to file permissions in the Windows version, so this causes me to wonder if perhaps my above chmod statements did not actually open up the file permissions in the CentOS 7 machine.
Is there a better statement syntax that can reliably open up read write permissions to all users for the given directory and all its recursive subdirectories and files?
|
You said you wanted to grant read and write permissions to all subdirectories and files under: /home/user/workspace/MinimalDbaseExample ... right?
Octal 0777 permissions grant rwxrwxrwx symbolically.
Octal 0755 permissions grant rwxr-xr-x symbolically.
Octal 0666 permissions grant rw-rw-rw- symbolically.
To set read/write/execute permissions to the /home/user/workspace/MinimalDbaseExample directory and all files and folders within it, choose which permission set you want, and do the following as an example:
1) Make your present working directory : /home/user/workspace
2) Type: chmod -R 0777 MinimalDbaseExample/
Following this procedure exactly grants the folder MinimalDbaseExample/ and all files and subdirectories therein 0777 (rwxrwxrwx) permissions.
I tested this setting up some dummy directories under my '~' directory and verified it worked.
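One way to double-check that the change actually took effect (using GNU stat; the path is the one from the question):

```shell
# Print the octal mode of every file and directory under the tree
find /home/user/workspace/MinimalDbaseExample -exec stat -c '%a %n' {} +
```

After the recursive chmod, every line should start with 777.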
Credit goes to this thread, though it really should not be this complex... I hope you make progress.
https://stackoverflow.com/questions/3740152/how-to-set-chmod-for-a-folder-and-all-of-its-subfolders-and-files-in-linux-ubunt
| reliable way to give all read/write access recursively in CentOS 7 |
1,284,113,005,000 |
I have seen on many occasions the name of a function (frankly speaking, I just call it a function because of its typical appearance; they are sometimes called commands or system calls, but I do not know the idea behind labelling them differently),
which contains a number within the brackets part of it, like in exec(1), exec(2), exec(3).
What is the meaning behind putting numbers into them?
|
exec here could be a system call, a bash built-in, or something else. The respective man pages refer to the various exec pages with numbers in brackets to tell them apart. So if I want to refer to the man page of the bash built-in, I would say exec(1), and if I want to refer to the man page of the system call exec(), I would say exec(2).
The number refers to a particular section of the manual.
When you see exec(2) in a man page and want to read about that particular exec, you would say man 2 exec.
| What is the reason for having numbers within the brackets of a function ? [duplicate] |
1,284,113,005,000 |
There have been multiple questions asked about this, like Understanding ls output, What are columns in ls -la?, What does 'ls -la' do?, What do the fields in ls -al output mean?, etc..
I've also come across many other websites with articles attempting to explain it.
What every single one of them seems to have in common is that despite writing down the meaning of the columns, there are never any links/references to where they acquired that information in the first place. An answer in the third question links to the coreutils manual, but much like the manpage that still provides no clarification.
The aforementioned resources are incomplete, as I'm developing a driver and found out ls -l provides the major and minor number for block/character devices (which is different from the regular output for files or directories):
Here the major/minor numbers for the device are 1 and 3 respectively.
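For instance, running ls -l on a character device prints the two numbers in place of the file size (output sketched from a typical Linux system, so the date shown is illustrative):

```shell
ls -l /dev/null
# crw-rw-rw- 1 root root 1, 3 Jan  1 00:00 /dev/null
```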
I only discovered this because someone mentioned it in an answer (unrelated question). Had I wanted to know what these numbers meant before, I probably wouldn't have been able to find out save for the unlikely event I stumbled on that answer. Or went looking in the source code.
It seems pretty weird for a tool that pretty much every single Linux user uses not to have any proper information available about its output. So am I missing something? Where is it documented?
EDIT: Muru in the comments referred to yet another question How to find what the fields in ls -l mean - the suggested answers in that question mention mostly the manpages (one outright pastes it), which for GNU coreutils does not provide a complete answer (manpage makes no mention of major/minor device numbers). The BSD manpage does, but Stephen's answer of the posix standard is (I think) the most correct.
|
ls is specified by POSIX; that's the common reference. The output formats are described in the "STDOUT" section.
| Where do I find documentation for the output of ls -l? |
1,284,113,005,000 |
In Ubuntu 16.04, I'm trying to find a way to append the Day of the Week to the end of each line in a text file given the date in field 4.
Sample data:
Server ID,Make,"Server Room",Datestamp,Timestamp,Distance,Ping,Download,Upload,Payload,"Src IP Address",Hour,DOW
x6883101,HP,"Server Room A",2019-07-14,04:50:02,26.444,11.521,49193480,41904833,,192.168.1.1,4,
s3398577,Dell,"Server Room B",2019-09-21,10:50:02,56.574,37.608,48955461,45858381,,192.168.1.1,10,
x6883551,Dell,"Server Room A",2019-08-16,02:00:04,26.444,17.921,86551957,88775986,,192.168.1.1,2,
s1555023,HP,"Server Room C",2018-02-06,04:50:01,516.574,402.527,907658,608152,,192.168.1.1,4,
s3398023,HP,"Server Room B",2019-01-17,10:50:01,56.574,40.233,48484827,45620028,,192.168.1.1,10,
s1555098,IBM,"Server Room C",2018-11-18,02:00:03,516.514,404.671,819027,601233,,192.168.1.1,2,
x6883582,Dell,"Server Room A",2019-05-19,04:50:02,26.444,12.506,88871436,84360552,,192.168.1.1,4,
For example, for data line #1 and #2:
x6883101,HP,"Server Room A",2019-07-14,04:50:02,26.444,11.521,49193480,41904833,,192.168.1.1,4,Sunday
s3398577,Dell,"Server Room B",2019-09-21,10:50:02,56.574,37.608,48955461,45858381,,192.168.1.1,10,Saturday
I've tried various sed and awk approaches with nothing to show for it. I've tried the date command, but it doesn't seem to like this input. I've been able to isolate the actual date using
grep -w -o "20[0-9][0-9]-[0-9][0-9]-[0-9][0-9]*"
but nothing that I see is able to convert it and append the DOW at the end of the lines.
What am I missing that prevents appending the day of the week to the end of each line of data? Also, I need to be able to do this from a crontab job.
|
With GNU awk, you could do:
gawk -i /usr/share/awk/inplace.awk -F, -v OFS=, -v date_field=4 '
(t = mktime(gensub("-", " ", "g", $date_field) " 0 0 0")) > 0 {
$NF = strftime("%A", t)};1' your-file
-i /usr/share/awk/inplace.awk: enables the in-place editing mode of gawk, whereby the output is written into a new file destined to be the replacement of the input file. Do not use -i inplace as gawk tries to load the inplace extension (as inplace or inplace.awk) from the current working directory first, where someone could have planted malware. The path of the inplace extension supplied with gawk may vary with the system, see the output of gawk 'BEGIN{print ENVIRON["AWKPATH"]}'
-F, and -v OFS=, set the input and output field separators
mktime() is a GNU awk extension that parses a string in the year month day hour minute second format and returns the corresponding Unix epoch time. Here, we use gensub() (another gawk extension) to replace the - with spaces in the 4th field (YYYY-MM-DD) so as to pass a YYYY MM DD 0 0 0 time to mktime().
(t = mktime(...)) > 0 {...} and 1 are two condition {action} pairs which are run on every input record (here lines).
For the first one, the condition checks if the value returned by mktime() (assigned to t) is greater than 0 (mktime() returns -1 if it cannot parse the date specification), in which case the action is run. strftime() (another gawk extension), like its C equivalent, is used to format a time (here the unix epoch time stored in t with format %A: the localised week day name). We assign the result to the NFth field ($NF), NF being the special variable that contains the number of fields in the current record and $ being an operator to retrieve field contents (or the full record with $0).
the second one (1) is missing the action part which defaults to {print} (prints the current record), and the condition (1) is always true. That's the idiomatic short way to unconditionally print the current record; if you wanted to be more verbose, you could do:
gawk -i /usr/share/awk/inplace.awk \
-v FS=, \
-v OFS=, \
-v date_field=4 \
-v current_record=0 \
-v always=1 '
{
date_for_mktime = gensub("-", " ", "g", $date_field) " 0 0 0"
unix_time = mktime(date_for_mktime)
}
unix_time > 0 {
$NF = strftime("%A", unix_time)
}
always {print $current_record}' your-file
If you want the week day names to always be in English regardless of the locale of the user, you can fix the locale to C (LC_ALL=C gawk...).
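As an aside, for a single date GNU date can do the same conversion directly (one invocation per date, so less suitable inside a big loop, but handy for spot-checking; LC_ALL=C forces the English day name as above):

```shell
LC_ALL=C date -d '2019-07-14' +%A
# → Sunday
```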
| Looking for a way to append Day of Week to end of line |
1,284,113,005,000 |
At some point I used a shell command that continuously sent a very short string of text to the standard output, but at this moment I can't recall it's name.
Its name was something very short, like 'abc', useful for quickly creating a file filled with text. I remember I was surprised I had never seen it, so I guess it might not be a Linux built-in command. It actually might be a zsh shell command, but at the moment I do not have access to a zsh shell. I tried to find it in bash with "compgen -c", but either it's not there or I can't recall the name
I know I can script it, but I am curious whether someone knows about it
|
There's the command yes(1)
yes - output a string repeatedly until killed
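For example, to take just a few repetitions rather than the endless stream:

```shell
yes hello | head -n 3
# prints "hello" three times
```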
| Linux command that continuously outputs string of text |
1,284,113,005,000 |
In case of test -e for example:
VAR="<this is a file path: /path/to/file/or/dir>"
test -e $VAR && do_something || do_anotherthing
Question: should I use "$VAR" here? I don't like verbosity if it's not necessary. Since $VAR in this case is obviously a path, if it expands to an empty string the test should always fail (no path is an empty string), so by my logic the double quotes are not necessary.
But it case of string test, test -n for exmpale:
VAR="<this is a string>"
test -n $VAR && do_something || do_anotherthing
then by my logic, $VAR should be put in double quotes ("$VAR"), because it can expand to an empty string; if it is unquoted, the command becomes test -n with no operand, which is always true.
So, the actual question (the reason I'm in doubt): should we use double quotes in the test command only with -n and -z against strings?
|
A general rule is to double quote any variable, unless you want the shell to perform token splitting and wildcard expansion on its content.
because $VAR obviously in this case is a path then if it's empty string it should always fail [...] then with my logic, double quote it is not necessary.
On the contrary. The behavior of test -e with no operands is another reason you should quote the variable in this particular case (with a single argument, test simply returns true if that argument is a non-empty string, so an unquoted empty expansion reduces test -e "$foo" to test -e, which is true):
$ unset foo # or foo=""
$ test -e $foo # note this is equivalent to `test -e'
$ echo $?
0
$ test -e "$foo"
$ echo $?
1
$
| Shell: only double quote on test -n/-z? [duplicate] |
1,284,113,005,000 |
Is there a way to validate or confirm that the user wrote what it meant to write in read?
For example, the user meant to write "Hello world!" but mistakenly wrote "Hello world@".
This is very similar to contact-form validation of an email / phone field.
Is there a way to prompt the user with something like "Please retype the input", in read?
I found no such option in man read.
Note: The input is a password so I don't want to print or compare it with an already existing string.
|
With the bash shell, you can always do
FOO=a
BAR=b
prompt="Please enter value twice for validation"
while [[ "$FOO" != "$BAR" ]]; do
echo -e "$prompt"
read -s -p "Enter value: " FOO
read -s -p "Retype to validate: " BAR
prompt="\nOops, please try again"
done
unset -v BAR
# do whatever you need to do with FOO
unset -v FOO
read options used:
-s Silent mode. If input is coming from a terminal, characters are not echoed.
-p prompt Display prompt on standard error, without a trailing newline, before attempting to read any input.
| read value validation |
1,284,113,005,000 |
I'm looking for a tool which automatically checks whether a LaTeX document is a correct bracket term.
It's very easy to write such a tool but before I do, I want to know whether one already exists.
It needs to be a command-line tool or shell code so I can use it in a script. A GUI tool just won't help me. It needs to check for the brackets () {} [] <>.
I view the document as a bracket expression. All the non-bracket characters don't matter. For a bracket term T with only 1 type of bracket to be valid, it needs to meet these conditions:
The number of opening and closing brackets in T must be equal.
There must be no prefix of T which contains more closing than opening brackets.
If there are several type of brackets (a set B of brackets), T must meet the above-mentioned conditions for all β ∈ B and all substrings of T induced by paired brackets must meet the above-mentioned conditions. A substring (t_1, ..., t_s) of T is said to be induced by paired brackets of type β iff (β_opening, t_1, ..., t_s, β_closing) is a substring of T.
|
With GNU grep built with PCRE support, you could do:
find . -size +0 -type f -exec \
grep -zLP '\A((?:[^][<>{()}]++|<(?1)>|\{(?1)\}|\[(?1)\]|\((?1)\))*+)\z' {} +
To find such files (assuming they don't contain NUL bytes and that each is small enough to fit whole in memory).
Or call perl directly (allowing files with NUL bytes):
find . -size +0 -type f -exec perl -l -0777 -ne 'print $ARGV unless
/^((?:[^][<>{()}]++|<(?1)>|\{(?1)\}|\[(?1)\]|\((?1)\))*)$/' {} +
Some perl/PCRE specific operators:
\A and \z match respectively at the start and end of the subject. Like ^ and $ (or with the -x option) but without ambiguity when the subject is multiline (needed in some versions of GNU grep).
++ and *+ are the non-backtracking versions of the + and * operators. Here helps the regexp engine not to try too hard to find a match when we know it can't.
(?1) refers to the regexp in the corresponding capture group. That allows for recursive regexps.
(?:...), same as (...) but only for grouping (no capturing...)
Note that it finds a great proportion of the *.tex files on my system, as < and > are used as comparison operators in TeX and some of those characters are found unmatched in comments or escaped.
| Command-line tool for determining validity of bracket term |
1,284,113,005,000 |
I have a text file with two columns and I want to print only the strings that are present in both of them. For example:
column1 column2
stringA stringZ
stringP stringT
stringZ stringX
stringE stringR
stringT stringG
Expected output:
stringZ
stringT
|
Shamelessly stolen from @cherdt with some improvement (assumes a shell like zsh or bash with support for ksh-like process substitution):
f=filename; comm -12 <(cut -f1 < "$f" |sort) <(cut -f2 < "$f" | sort)
Keeping the filename in a variable avoids repeating it.
No need to write to files and then compare them; writing to files usually requires deleting them afterwards for cleanup. Don't do this with huge files, though. Process substitution makes it look like comm is reading from files, whereas each <(...) is really a pipeline's output exposed as a temporary file descriptor.
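For example, recreating the question's data in a temporary file (tab-separated, header omitted for brevity) and running the pipeline:

```shell
# Recreate the sample data: two columns, tab-separated
f=$(mktemp)
printf '%s\t%s\n' stringA stringZ stringP stringT stringZ stringX \
                  stringE stringR stringT stringG > "$f"
comm -12 <(cut -f1 < "$f" | sort) <(cut -f2 < "$f" | sort)
# prints: stringT and stringZ
```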
| Print string if present in two separate columns |
1,284,113,005,000 |
I've found out the useful shortcut !! which can be used
when you forget sudo before the command.
You'll easily type
$ sudo !!
I've been doing this another way for a long time with up arrow,
ctrl + home and type sudo. The advantage is, that you see
the command.
Would you recommend to change the habit and use rather !! and why?
|
Both ways have their pros and cons, as you and Julie have noticed. But really the best solution is to use both of them when needed. So, when you're not sure what !! will result in,
first UpArrow to see what it was
then DownArrow
and then sudo !!
There are more tricks regarding the history. This is from man bash.
Event Designators
An event designator is a reference to a command line entry in the his‐
tory list. Unless the reference is absolute, events are relative to
the current position in the history list.
! Start a history substitution, except when followed by a blank,
newline, carriage return, = or ( (when the extglob shell option
is enabled using the shopt builtin).
!n Refer to command line n.
!-n Refer to the current command minus n.
!! Refer to the previous command. This is a synonym for `!-1'.
!string
Refer to the most recent command preceding the current position
in the history list starting with string.
!?string[?]
Refer to the most recent command preceding the current position
in the history list containing string. The trailing ? may be
omitted if string is followed immediately by a newline.
^string1^string2^
Quick substitution. Repeat the previous command, replacing
string1 with string2. Equivalent to ``!!:s/string1/string2/''
(see Modifiers below).
!# The entire command line typed so far.
To see the whole story about history expansion, go to man bash and there find the section HISTORY EXPANSION.
| Is it better to use !! or history? |
1,284,113,005,000 |
I am given a txt file (War and Peace...), and I need to create a text file, sorted alphabetically, of all the words that appear 10 or more times (without the quantity).
The twist in this question is that every punctuation character is considered the beginning of a new word, meaning you're is considered two words: you and re.
I flipped all punctuation into newlines, and all spaces into newlines. And I used trim -c, so now I have all the words and their counts, but I don't know how to show only those that appear 10 or more times.
Any help regarding a way to find all words that appear 10 or more times would be really appreciated!
|
< text tr -cs '[:alnum:]' '[\n*]' |
awk '++count[$0] == 10' |
sort
Replace $0 with tolower($0) if you want to ignore case.
That translates sequences of characters that are the complement of the alphanumerical ones to newlines. awk prints the 10th occurrence of each.
Note that on GNU systems, tr doesn't work properly on multi-byte characters. However, on those systems, you can use GNU grep's -o extension instead:
< text grep -Eo '[[:alnum:]]+' |
awk '++count[$0] == 10' |
sort
You can change that to
< text grep -Eo '[^[:punct:][:space:]]+' |
awk '++count[$0] == 10' |
sort
to consider characters that are neither punctuation nor space (or tr -s '[:punct:][:space:]' '[\n*]' above for non-GNU system or all-ASCII text) which on that War and Peace text gives the same result.
Note that on GNU systems at least, that could still give wrong results as Unicode combining accents for instance are classified as punctuation and not alnums (they don't appear in that text though where the accented characters are in their combined form).
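As a small illustration, here is the same pipeline on a made-up one-line sample, with the threshold lowered from 10 to 3 so the effect is visible:

```shell
# "and" occurs 3 times; "war" and "peace" only twice, so only "and" survives
printf 'war and peace and war, and peace\n' |
  tr -cs '[:alnum:]' '[\n*]' |
  awk '++count[$0] == 3' |
  sort
# prints: and
```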
| Find in a text all words that appear 10 or more times |
1,284,113,005,000 |
I am trying to sort a file as following which has multiple columns, separated with comma, and one of the column has date with the following format mm/dd/yyyy.
$cat filename
AN1143,45.7,03/05/2012,
H9477,45.3,01/15/2010,
DN1222,45.1,03/05/1800,
J960,26.7,06/02,1990,
Z959,28.2,03/21/2016,
H12421,27.7,06/21/2000
My intention is to sort first based on the first column and then the third column which has the date. I tried the following command:
sort -t"," -k1,1 -k3,9n.3,10n -k3,1n.3,2n -k3,4n.3,5n filename
but I faced this error; any help with an explanation is appreciated.
sort: stray character in field spec: invalid field specification ‘3,9n.3,10n’
|
Try this instead:
sort -t, -k1,1 -k3.7n -k3.1,3.2n -k3.4,3.5n < filename
There's no need to quote the comma delimiter
The first sort-key definition uses column 1
The second sort-key definition uses column 3's "year" field, sorted numerically
The third sort-key uses column 3's "month" field, sorted numerically
The fourth sort-key uses column 3's "day" field, sorted numerically
Sample run with an enhanced sample data file, showing the sorting:
Input:
AN1143,45.7,03/05/2012,
AN1143,45.7,02/05/2012,
AN1143,45.7,03/04/2012,
AN1143,45.7,03/05/2011,
H9477,45.3,01/15/2010,
DN1222,45.1,03/05/1800,
J960,26.7,06/02,1990,
Z959,28.2,03/21/2016,
H12421,27.7,06/21/2000
Output:
AN1143,45.7,03/05/2011,
AN1143,45.7,02/05/2012,
AN1143,45.7,03/04/2012,
AN1143,45.7,03/05/2012,
DN1222,45.1,03/05/1800,
H12421,27.7,06/21/2000
H9477,45.3,01/15/2010,
J960,26.7,06/02,1990,
Z959,28.2,03/21/2016,
| How to sort multiple column with a column including date? |
1,284,113,005,000 |
I'd like to make fixed height output from any command using piping:
some_command | magic_command -40
If, for example, some_command prints 3 lines, magic_command should add 37 newlines,
or if some_command prints 50 lines, magic_command should cut extra lines (like head -40)
|
POSIXly:
{ command; while :; do echo; done; } | head -n 40
On GNU system:
{ command; yes ""; } | head -n 40
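For instance, a hypothetical 3-line producer padded to exactly 5 lines (once head has read its 5 lines, the infinite echo loop is killed by SIGPIPE):

```shell
# 3 real lines + 2 padding lines = 5 lines total
{ printf 'a\nb\nc\n'; while :; do echo; done; } | head -n 5 | wc -l
# counts 5 lines
```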
| Padding output with newlines |
1,284,113,005,000 |
I've looked around at multiple sources for this question and I know how to extract the file, but neither source told me how to state where to put the folder once it has been extracted.
I tried this:
tar -xvf tarball.tar.gz my/folder/im/extracting
When I did this it seemed to extract it, as it listed out its contents, but it also followed with the error:
gzip: stdin: unexpected end of file
tar: Unexpected EOF in archive
tar: Error is not recoverable: exiting now
I checked the current directory but I didn't see the folder.
How can this be done or is the error preventing the creation of it in the current directory?
|
tar -xvf tarball.tar.gz my/folder/im/extracting
This extracts the archive member my/folder/im/extracting at the location my/folder/im/extracting. If the archive member is a directory, its contents are extracted (including subdirectories, recursively).
If you want to extract to a different directory, with GNU or FreeBSD tar (so on non-embedded Linux, Cygwin, FreeBSD and OSX), you can use
tar -xvf tarball.tar.gz --transform 's!my/folder/im/extracting!somewhere/else!' my/folder/im/extracting
If you just want to put my under a different (existing) directory, you can use
tar -xvf tarball.tar.gz -C different/directory my/folder/im/extracting
The error “gzip: stdin: unexpected end of file” has nothing to do with the way you're using tar. “Unexpected end of file” means that gzip reached the end of the file but the file format indicates that there should have been more data. In other words, the file was truncated, e.g. because your download was interrupted.
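A self-contained sketch of the -C form, using a throwaway archive built on the spot (paths mirror the question's example):

```shell
# Build a small archive, then extract one member into a different directory
cd "$(mktemp -d)"
mkdir -p my/folder/im/extracting
echo data > my/folder/im/extracting/file.txt
tar -czf tarball.tar.gz my
mkdir dest
tar -xzf tarball.tar.gz -C dest my/folder/im/extracting
cat dest/my/folder/im/extracting/file.txt   # prints: data
```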
| Extracting a certain folder from a tarball - how do I tell it where to put the file once extracted? |
1,284,113,005,000 |
I have what looks to be a directory, dir2. I once moved it from another directory, dir1, to back it up, and created a new dir1.
Now I suddenly saw it again, realized I didn't need it, did rm -r -f dir2, and found out that my new dir1 is also empty.
I got my files back (it was a code repository, so just some changes lost, bummer), but I still want to remove this redundant link or whatever dir2 is.
When I tried rmdir (before I realized I deleted everything) I got an error Not a directory. What is it and how do I remove it? Using bash on OSX terminal.
Update: Per suggestions below:
ls -ld dir2 outputs drwxr-xr-x. 19 asaf users 4096 Mar 8 13:09 dir2/
file dir2/ outputs dir2/: directory
file dir2 outputs dir2: symbolic link to ...'
|
It's impossible to know what happened given that the evidence is now deleted. Your descriptions of the symptoms is consistent with dir2 being a symbolic link to a directory. A symbolic link is a sort of special file that says “the real file is actually over there”. The symbolic link itself isn't a directory, so rmdir can't do anything with it. But accesses to the content of the symbolic link (files in the directory for a symbolic link pointing to a directory, file contents for a symbolic link pointing to a regular file) go to the target of the link transparently, so you wouldn't have noticed anything when using cd dir2 or when editing files in the directory.
If this is the case (which is plausible, but not at all certain!), then the command rm -r -f dir2 only deleted the symbolic link, and the directory containing your changes still exists… somewhere. Since you've deleted the link, it might be difficult to find where, but you can try looking for the file names that you know were in that directory with the locate command or with an equivalent GUI (Spotlight?).
In the future, run
ls -ld dir2
That would tell you what kind of file dir2 is. If the line begins with d, it's a directory. If the line begins with l, it's a symbolic link, and the output indicates where it points to (the part after ->).
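A minimal reconstruction of the suspected situation, showing what ls -ld reports for a symlink and that removing the link leaves the target intact (directory names here are invented):

```shell
cd "$(mktemp -d)"
mkdir realdir
ln -s realdir dir2   # dir2 is a symbolic link, not a directory
ls -ld dir2          # line starts with 'l' and ends with 'dir2 -> realdir'
rm dir2              # removes only the link
ls -ld realdir       # the real directory is still there
```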
| rmdir dir gives error 'Not a directory' |
1,284,113,005,000 |
Here is my initial situation:
In a folder named for example Father, are stored some files in the following way: Father contains 24 children folders (let's call them Child1, Child2, ...), and each one of them has 2 files in it, file1.avi and file1.nfo for the first child, file2.avi and file2.nfo for the second one, etc.
What I'd like to do is to have all the .avi files in the Father folder. I don't care here about the other files being lost.
So far, the best I've managed is cp -R ./*.avi ., but it did not extract the files from the folders, and moreover it took really long to process.
How should I write it?
|
The glob * can be used to match not only plain files, but also directories, so the command you are looking for is
mv ./*/*.avi .
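A scaled-down reproduction of the layout from the question (two children instead of 24):

```shell
cd "$(mktemp -d)"    # stands in for the Father directory
mkdir Child1 Child2
touch Child1/file1.avi Child1/file1.nfo Child2/file2.avi Child2/file2.nfo
mv ./*/*.avi .
# file1.avi and file2.avi are now in Father; the .nfo files stay behind
```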
| What command should I use to move these particular elements? |
1,284,113,005,000 |
I would like to run ifconfig as the test user (under Debian Linux); that's why I used sudo, but the terminal said: test is not in the sudoers file. How can I put the test user in the sudoers file?
I've tried /etc/pamd/su, but it is not found.
|
The sudoers file is usually located at /etc/sudoers.
You need administrative privileges to edit this file. Editing it directly is strongly discouraged: you could irrevocably damage your system in case of syntax errors.
The visudo tool is provided with the sudo package for safe editing. It will automatically check the file's consistency before saving and abort on syntax errors.
visudo invokes the text editor set in the EDITOR environment variable, or vi otherwise. For instance, if you want to invoke Emacs instead:
$ EDITOR=emacs visudo
Once you have the editor fired up, you want to add the following line at the User privilege section:
test <host>=(ALL) /sbin/ifconfig
or, if you do not want to be prompted for test's password:
test <host>=(ALL) NOPASSWD: /sbin/ifconfig
You need to replace <host> with your machine's hostname, or ALL if you want the privilege to apply on any machine.
See the man pages: sudoers(5), sudo(8) and visudo(8) for the complete documentation.
| Add test user to the sudoers file, to run ifconfig |
1,284,113,005,000 |
When I have a nested directory, the find . -name "*.py" -print command gives me all the Python scripts beneath the current directory. However, find . -name *.py -print returns only the Python scripts in the current directory.
Is this expected behavior? What makes the difference? I use Mac OS X 10.7.
|
It's probably not the same command. You could put echo in front to check.
$ echo find . -name "*.py" -print
find . -name *.py -print
$ echo find . -name *.py -print
find . -name foobar.py barfoo.py -print
Without quotes, the shell expanded *.py, so find gets different arguments, which yields different results.
You should always quote * when you want a command to see * literally. Otherwise the behaviour will be erratic (the command works as long as there are no *.py files for the shell to expand to).
| The difference that quotation marks make in find command [duplicate] |
1,284,113,005,000 |
I sometimes get files with the following ls output format:
/etc/cron.d:
-rw-r--r-- 1 root root 128 May 15 2020 0hourly
-rw------- 1 root root 235 Dec 17 2020 sysstat
/etc/cron.daily:
-rw------- 1 root root 235 Dec 17 2020 sysstat
Is there any way, using normal GNU tools or even plain bash builtins, to manipulate that content into:
-rw-r--r-- 1 root root 128 May 15 2020 /etc/cron.d/0hourly
-rw------- 1 root root 235 Dec 17 2020 /etc/cron.d/sysstat
-rw------- 1 root root 235 Dec 17 2020 /etc/cron.daily/sysstat
That would be great.
I mean the easiest is to remove the file paths like that:
cat <filename> | grep -v -E "^\/[a-z]"
But, like I said, how do I move these paths down onto the follow-up lines with the filenames?
The command given is this one: ls -lR /etc/cron* > <filename>.
I have no influence over that output; rather, I get these ls outputs redirected to a separate file <filename> that is transferred to me.
And what I'd like to do is manipulate its content into the second result mentioned above: basically take the first line and apply the path to the file lines 2 and 3, then take line 4 and apply it to line 5, and then turn that into a general approach.
I think that should be possible using awk.
|
If none of your file or directory names contain white space then you could do the following using any POSIX awk:
$ awk '
NF==1 && sub(/:$/,"/") { dir=$0; next }
match($0,/[^[:space:]]+$/) { $0=substr($0,1,RSTART-1) dir substr($0,RSTART) }
{ print }
' file
-rw-r--r-- 1 root root 128 May 15 2020 /etc/cron.d/0hourly
-rw------- 1 root root 235 Dec 17 2020 /etc/cron.d/sysstat
-rw------- 1 root root 235 Dec 17 2020 /etc/cron.daily/sysstat
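That example can be reproduced end-to-end by recreating the sample listing in a scratch file (here /tmp/lsout, a made-up name):

```shell
# Recreate the ls -lR style listing from the question
cat > /tmp/lsout <<'EOF'
/etc/cron.d:
-rw-r--r-- 1 root root 128 May 15 2020 0hourly
-rw------- 1 root root 235 Dec 17 2020 sysstat
/etc/cron.daily:
-rw------- 1 root root 235 Dec 17 2020 sysstat
EOF
awk '
  NF==1 && sub(/:$/,"/") { dir=$0; next }
  match($0,/[^[:space:]]+$/) { $0=substr($0,1,RSTART-1) dir substr($0,RSTART) }
  { print }
' /tmp/lsout
```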
or if your file/directory names can contain spaces but your directory paths always start with / and your ls output always has exactly the same number of fields before the file name as shown in your example then you could do something like this:
$ awk '
/^\// && sub(/:$/,"/") { dir=$0; next }
match($0,/^([^[:space:]]+[[:space:]]+){8}/) { $0=substr($0,1,RLENGTH) dir substr($0,RLENGTH+1) }
{ print }
' file
But ls doesn't always produce output with those fields (what ls outputs for the date/time depends on the age of your files and locale setting, and user IDs can contain spaces, for example) and all of the characters in the per-file lines could be present in a directory name and file names can end with : since file and directory names can contain any characters except / or NUL so YMMV with whatever you come up with to try to tell the lines apart and then figure out where the file name starts in the per-file lines. Plus file names can contain newlines which is a whole other world of problems.
So there is no robust way to parse the output of ls for every possible output it could produce. If you want to do this then you just have to figure out what kind of pattern matching you think/hope will be good enough for your needs given whatever context you call ls in and then write your script based on that.
Since some other tool is creating a file of ls output for you to then have to parse you should try to get that other tool fixed since it's well known that you shouldn't try to parse the output of ls (see http://mywiki.wooledge.org/ParsingLs and Why *not* parse `ls` (and what to do instead)?) so that tool is setting you up for failure.
| manipulate ls text output to add path to filenames |
1,284,113,005,000 |
I have a bash for loop like this:
for file in *.mp4; do
command1 "${file}" && command2 "${file}"
done
This makes command2 to run whenever command1 is successful, which is expected.
Now what I want is for the loop to wait only for command1 to finish, but not to wait for command2 to finish before iterating. Is there a way to do this?
What I've tried, but did not work:
command1 && command2 & - does not wait for command1 to exit, runs parallel
command1 && ( command2 & ) - waits for command2 to exit, no need
|
A bit more verbose, but this should work:
for file in *.mp4; do
if command1 "${file}"; then
command2 "${file}" &
fi
done
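A quick way to convince yourself the loop doesn't block on the background command: substitute stand-ins and time three iterations (true and sleep here are placeholders for command1 and command2):

```shell
start=$(date +%s)
for file in a b c; do
  if true; then    # stands in for: command1 "$file"
    sleep 1 &      # stands in for: command2 "$file"
  fi
done
wait               # optionally wait for all background jobs at the very end
# wall time is about 1 second, not 3, since the sleeps run concurrently
echo "elapsed: $(( $(date +%s) - start ))s"
```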
| How to wait for the first command but not the second, where the two are joined by the && operator |
1,284,113,005,000 |
I have a file containing multiple lines, with fields separated by tab:
ID Code Date
1 XX 23/1/2018
1 XX 11/3/2021
2 XX 14/5/2011
2 XX 20/9/2013
3 XX 08/7/2014
3 XX 11/9/2016
3 XX 27/10/2018
I would like to keep for each participant ID just one entry, based on the entry with the earliest date in the Date column. For each participant, the dates are ordered from oldest to newest.
The output I would like is:
1 XX 23/1/2018
2 XX 14/5/2011
3 XX 08/7/2014
|
Since you state that the records for each participant are ordered from oldest to newest, and you want to print only the record with the oldest date for each ID, this amounts to printing the first row encountered for each new ID. This is easily possible using awk:
awk -F'\t' 'FNR>1 && !seen[$1]++' input.txt
This will first set the field separator to \t. Then, it will evaluate the condition between ' ... ' to decide whether to print the current line. A line will be printed if
the per-file line-counter is larger than one (in order to skip the header line), and
the array seen does not yet contain an entry for the current value of the first column ($1). This works because dereferencing an array value that was not yet assigned evaluates to false. Also, the postfix operator ++ will only be applied after this evaluation, so for the first encounter of a specific ID this returns true, but for any later encounters, where seen[$1] is larger than 0, it will return false and thereby inhibit printing of the line.
If you want to keep the header line, just remove the FNR>1 condition:
awk -F'\t' '!seen[$1]++' input.txt
(It will be printed because the ID of this line is literally ID, and it is of course the first occurrence of that particular value.)
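A hypothetical run on the question's data (tab-separated) with the header-skipping version:

```shell
printf 'ID\tCode\tDate\n1\tXX\t23/1/2018\n1\tXX\t11/3/2021\n2\tXX\t14/5/2011\n2\tXX\t20/9/2013\n' |
  awk -F'\t' 'FNR>1 && !seen[$1]++'
# 1	XX	23/1/2018
# 2	XX	14/5/2011
```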
| How do I pick only one record per ID based on the earliest date in another column? |
1,284,113,005,000 |
I have a rather long csh script that doesn't work, or doesn't work properly. In bash I would do set -xv to get verbose logging. What can I do in csh? I tried adding set -xv; it complained that - isn't allowed, and set xv didn't do anything.
|
You can use
csh -xv script
or you can add
set verbose
set echo
to your script.
| what is the csh equivalent of set -xv? |
1,284,113,005,000 |
I wanted to copy all dotfiles from my ~ folder to a git repo to back them up, and I'm using ZSH.
I came across this command that seems to work:
cp -a ~/.[^.]* . - where the final . is the git dir.
I don't understand how this works. Can anyone give me a guide, or tell me what to google to learn more?
I tried ZSH + [^] + globbing
|
That syntax is for some shells other than zsh and even there, it would be wrong.
.[^.]* matches on file names that start with . followed by a character other than ., followed by 0 or more characters.
That's the kind of syntax you'd need in shells that include . and .. in the expansion of .*.
. and .. are navigating tools used to refer to the current and parent directories respectively. They have no place in glob expansions as globs are tools to generate lists of actual files¹. Still, historically shells have been including them in their glob expansions as they were being reported by readdir().
zsh, like the Forsyth shell and its descendants (pdksh, mksh, OpenBSD sh...) or the fish shell have fixed that and never include . nor .. in the result of filename generation², even in globs like:
$ echo (.|..)
zsh: no matches found: (.|..)
It's also wrong in the general case, as it misses files like ..foobar.
Also note that [^.], though supported by many shell, is not standard POSIX syntax.
In POSIX sh syntax, you'd need:
cp -a ~/.[!.]* ~/..?* .
(where we add ..?* which matches on .. followed by one or more characters, to cover for the ..foobar type of file names mentioned above).
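To see what those two POSIX globs cover, here is a small scratch-directory experiment (file names invented for illustration):

```shell
cd "$(mktemp -d)"
touch .a ..b visible
printf '%s\n' .[!.]* ..?*
# .a
# ..b
```

Neither pattern matches ., .., or visible; between them they cover both ordinary dot files and the ..foo kind.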
In zsh (and those other shells mentioned above), you only need:
cp -a ~/.* .
Hopefully, that will eventually be allowed/recommended for sh by POSIX and we'll see more other shells follow suit.
¹ As a historical note, and according to legend, the concept of files whose names start with . being hidden originates in a bug in an early version of the ls utility in the 70s, which ended up causing all filenames starting with . to be hidden when the intent was only to hide . and ... That bug became a feature when people started to rely on it to hide files.
² Bash added a globskipdots option for that in the 5.2 version.
| ZSH Globbing syntax explanation |
1,591,436,750,000 |
I just want somebody to walk me through the steps of switching a command line interface linux box into a gui based one. I know this has to do with the X Window System but I don't exactly know how to go about installing it fully. Now, if firefox is installed for example and I try to run it, it will give me: "error: no display environment variable specified"
Of course I need to specify a display, right? I used this: export DISPLAY=:0 and when I typed firefox nothing happens. When I type firefox & and then enter the command jobs I can see that firefox is running. But nothing is displayed. No window pops up.
I searched about how to solve this error but I didn't really get it. I just want to apply changes to my linux box so that when I open a gui based software it just opens with a window. Actually that's one thing that is doable. I have done it long ago but I forgot how as I'm not a regular linux user. The other thing that I want to know: Is it also doable to change a cli linux box into an overall gui based linux permanently like those which are ready made such as Ubuntu and linux mint? Or does that require an actual coder?
I'm actually using a VM in VirtualBox and experimenting with it. I can reverse any harm done to it. It is actually an Ubuntu 14.04 VM. It is the Linux version of the metasploitable3 VM by rapid7 used for pentesting: https://github.com/rapid7/metasploitable3
Thanks in advance
|
You say you are using Ubuntu. To add the desktop. You need to install it.
However, you say you are using Ubuntu 14.04. Ubuntu version numbers are YY.MM (year and month of release), so standard support has been dropped (it still has long-term security support until 2020-04).
To install, run apt install kde-plasma-desktop (or another desktop); it will pull in all dependencies, including X11.
However, it may be better to use a different virtual machine for desktop use. You could run them both, and connect to the non-graphical one from the graphical one. You can still run graphical programs on the non-graphical one but display them on the graphical one using ssh -X, installing just the program (e.g. firefox) but no desktop.
| How to transform a CLI linux into a GUI one? Or at least how to run a gui app like firefox in CLI linux? Installing x windowing system? |
1,591,436,750,000 |
I can pipe data from one command to another, example:
$ echo test | cat
test
Unsure of what to call the operation I can get a similar effect using:
$ cat < <(echo test)
test
Where <(echo test) is a bashism (process substitution) that presents a command's output as a file on the fly. Using regular files it looks like:
$ cat file
test
$ cat < file
test
This works equally well over ssh:
$ ssh server cat < <(echo test)
test
Using the ssh-example as a base one might think you could do something like:
$ pdsh -a cat < <(echo test)
But no data is sent to cat on the connected machines and the command never terminates.
tee seem quite able to send what it receives on stdin to more than one place:
$ tee >(cat) >(cat) < <(echo test)
test
test
test
Is it possible to achieve the same over pdsh?
|
I was able to get in contact with one of the pdsh developers and learned the following:
What you want is "stdin broadcast" and unfortunately support for this
was never added to pdsh. It would be a nice feature, but there was not
historically much need for it, so it wasn't ever done.
Which seem to confirm what has already been established in this post.
However, it was followed by:
BTW, it isn't that it is impossible to do the stdin broadcast.
Parallel launchers that are part of HPC schedulers can do it, like
srun(1) and the like. The mechanism is that stdin is read in once,
then copied to a buffer for each remote process, i.e. the duplication
is done inside of the parallel launcher.
The reason for the follow-up is that there are some misleading answers
on that stackexchange post.
Another way to get around the serial for-loop problem would be to run
ssh from GNU parallel or pdsh -R exec. Example with pdsh -R exec:
$ pdsh -R exec -w host[0-10] bash -c 'ssh %h cat < <(echo test)'
Of course the drawback here is that you are creating the temporary
file for redirection N times. Might be better to put your output into
a local file and then just cat that file to each ssh command.
The benefit of pdsh/parallel over a for loop is that you get the
parallelism.
In my own tests I had some trouble getting that exact example working:
root@master# pdsh -R exec -w host1 bash -c 'ssh %h cat < <(echo test)'
host1: bash: -c: line 0: syntax error near unexpected token `<'
host1: bash: -c: line 0: `ssh n1 cat < <(echo test)'
pdsh@master: host1: bash exited with exit code 1
One small tweak gets it working, and that is using a "normal" file:
root@master# cat data.txt
test from file
root@master# pdsh -R exec -w host1 bash -c 'ssh %h cat < data.txt'
host1: test from file
Conclusion: pdsh could grow a feature to handle what I was asking for better, but even as it is now there are ways to achieve what I asked for.
| pdsh input from file possible? |
1,591,436,750,000 |
I use clipmenu to choose something to paste into a terminal that is running zsh as the shell.
The problem is that zsh reports an error when, for example, I paste a shell function that contains some # comments inside it. I have to manually go back and clear all lines containing #.
System: archlinux/zsh/clipmenu
EDIT: example of function:
test() {
# must remove this line manually after paste into zsh's shell
<do something>
}
|
Perhaps you just need to setopt interactivecomments?
| Zsh: remove # - comment when pasting to terminal? |
1,591,436,750,000 |
Input text: test
Online MD5 hashsum generator: 098f6bcd4621d373cade4e832627b4f6
echo "test" | md5sum: d8e8fca2dc0f896fd7cb4cb0031ba249
Also the same happens to sha512sum and sha1sum.
Why do Linux and an online generator generate different hashes?
|
One of these is the hash of "test" and one of them is the hash of "test\n".
$ printf 'test' | md5sum
098f6bcd4621d373cade4e832627b4f6 -
$ printf 'test\n' | md5sum
d8e8fca2dc0f896fd7cb4cb0031ba249 -
echo outputs a newline character after its arguments.
| Command line generates different hashsum than online hash generator… [duplicate] |
1,591,436,750,000 |
I'd like to display the output of lsb_release -ds in my Conky display. FTR, in my current installation that would output Linux Mint 18.3 Sylvia.
I had thought of assigning the output of that command to a local variable, but it seems Conky doesn't do local vars.
Maybe assigning the output of that command to a global (system) variable? But that's a kludge, and it's not at all clear that Conky can access global vars.
It sounds like an exec... might do it, but the docs stress that that's resource-inefficient, and since this is a static bit of info (for any given login session) it seems a waste to keep running it over and over.
So, what to do? Suggestions most welcome.
|
You should prefer the execi version of exec, with an interval, where you can give the number of seconds before repeating:
${execi 999999 lsb_release -ds}
| how to display lsb_release info in Conky? |
1,591,436,750,000 |
Is there a CLI tool similar to gnome-search-tool?
I'm using locate, but I'd prefer that it grouped results where directory name is matched. I get a lot of results where the path is matched which is not what I want:
/tmp/dir_match/xyz
/tmp/dir_match/xyz2/xyz3
It needs to be fast and thus use a search index.
|
locate is very versatile; it can take -r and a regexp pattern, so you can do lots of sophisticated matching. For example, to match directories a, a0, a1 and so on, use '/a[0-9]*/'. This will only show directories with files in them, since you need the second / in the path. To match the directory alone, use $ to anchor the pattern to the end of the path: '/a[0-9]*$'.
Note, there are at least two versions of the locate command: one from GNU, and one from Redhat (known as mlocate). Use --version to find which you have. They differ slightly in regex style. For example, if we change the above pattern '/a[0-9]*$' to use + instead of * to avoid matching a on its own, then mlocate needs \+ and GNU locate just +.
For example, to match a directory a and all underneath it you might use for both versions
locate -r '/a\(/\|$\)'
For mlocate you might prefer --regex, which uses extended syntax:
locate --regex '/a(/|$)'
To do the same for gnu locate you would need to add option --regextype egrep, for example.
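Since locate -r tests the pattern against each full path, you can preview a pattern with grep on a hand-made list of candidate paths before touching the index (paths below are invented; the extended-syntax form of the pattern is used):

```shell
# /tmp/abc must not match: "a" there is only a prefix of the directory name
printf '%s\n' /tmp/a /tmp/a/file /tmp/abc /tmp/b/a |
  grep -E '/a(/|$)'
# /tmp/a
# /tmp/a/file
# /tmp/b/a
```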
| Simple CLI tool for searching |