1,665,540,504,000
I want to extract some parts from my log file: the user, module, action, doAjax and ajaxAction from the request part. For example, I have:

195.xx.x.x - - [25/Apr/2017:09:60:xx +0200] "POST /userx/index.php?m=contacts&a=form&...
192.xx.x.x - - [25/Apr/2017:09:45:xx +0200] "POST /usery/index.php?m=customer&doajax=request&action=getContacts...
197.xx.x.x - - [25/Apr/2017:09:20:xx +0200] "GET /userx/index.php?m=meeting&doajax=date&id=3

and I want to have:

[user]|[module]|[action]|[doAjax]|[ajaxAction]
usery contacts form null null
userx customer null request getContacts
userz meeting null date null

Where:

userx       --> user
m=xxx       --> module
a=xxx       --> action
doajax=xxx  --> doAjax
action=xxx  --> ajaxAction

I tried to use awk and sed, but only managed to cut out the 7th column, where I can find my request, with this command:

awk '{printf $7; next ; }' logfile

How can I extract the user, module, action, doAjax and ajaxAction after printing just my request?
A perl "one-liner":

$ perl -lne '
   BEGIN{ printf "%-10s%-10s%-10s%-10s%-15s\n",
          qw([user] [module] [action] [doAjax] [ajaxAction]); }
   $usr = $mde = $act = $doAj = $ajAc = "null";
   $usr=$1  if m|\s/([^/]+)/|;
   $mde=$1  if /m=(.+?)(&|$)/;
   $act=$1  if /a=(.+?)(&|$)/;
   $doAj=$1 if /doajax=(.+?)(&|$)/;
   $ajAc=$1 if /action=(.+?)(&|$)/;
   printf "%-10s%-10s%-10s%-10s%-15s\n", ($usr,$mde,$act,$doAj,$ajAc)' file
[user]    [module]  [action]  [doAjax]  [ajaxAction]
userx     contacts  form      null      null
usery     customer  null      request   getContacts
userx     meeting   null      date      null

The basic trick here is to search for each of the strings identifying your URL parts and, if found, set the corresponding variable to it. In each case, we look for the identifier followed by an = (e.g. m=) and then either a & or the end of the line ((&|$)). Because the matched portion is captured in parentheses (e.g. m=(.+?)), we can then refer to it as $1, and that's what's saved in each variable.

If you really need to have | as a separator, and don't object to the fact that it will make the output less readable, you can use this instead:

$ perl -lne '
   BEGIN{ printf "%s|%s|%s|%s|%s\n",
          qw([user] [module] [action] [doAjax] [ajaxAction]); }
   $usr = $mde = $act = $doAj = $ajAc = "null";
   $usr=$1  if m|\s/([^/]+)/|;
   $mde=$1  if /m=(.+?)(&|$)/;
   $act=$1  if /a=(.+?)(&|$)/;
   $doAj=$1 if /doajax=(.+?)(&|$)/;
   $ajAc=$1 if /action=(.+?)(&|$)/;
   print join "|", ($usr,$mde,$act,$doAj,$ajAc)' file
[user]|[module]|[action]|[doAjax]|[ajaxAction]
userx|contacts|form|null|null
usery|customer|null|request|getContacts
userx|meeting|null|date|null

The first version, with printf and fixed-width fields, gives the more readable output.
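For reference, since the question attempted awk: here is a sketch of an equivalent in awk, splitting the 7th field on ? and & instead of regex-matching each parameter. The sample line is adapted from the question (with an illustrative timestamp); treat this as a sketch that assumes the request URL is always the 7th whitespace-separated field.

```shell
# Parse the user and the URL parameters out of field 7 of an access-log line.
printf '%s\n' '195.xx.x.x - - [25/Apr/2017:09:00:00 +0200] "POST /userx/index.php?m=contacts&a=form HTTP/1.1"' |
awk '{
    usr = mde = act = doAj = ajAc = "null"
    if (match($7, /^\/[^\/]+\//))              # "/userx/..." -> "userx"
        usr = substr($7, 2, RLENGTH - 2)
    n = split($7, kv, /[?&]/)                  # split path and key=value pairs
    for (i = 2; i <= n; i++) {
        split(kv[i], p, "=")
        if (p[1] == "m")           mde  = p[2]
        else if (p[1] == "a")      act  = p[2]
        else if (p[1] == "doajax") doAj = p[2]
        else if (p[1] == "action") ajAc = p[2]
    }
    print usr, mde, act, doAj, ajAc
}'
# → userx contacts form null null
```

Splitting on [?&] sidesteps the subtlety that a regex like /a=/ could also match inside "doajax=".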
Get specific information from a log file
1,665,540,504,000
The ultimate goal here is to turn the touchpad on/off when a mouse is plugged in, so I'm trying to get some properties of my mouse and my touchpad from the udev database, using udevadm, but I don't get how this works and unfortunately the manpage isn't clear enough to me…

$ lsb_release -irc
Distributor ID: Debian
Release:        8.4
Codename:       jessie

Here is the kind of information I'm looking for:

KERNEL=="input16"
SUBSYSTEM=="input"
DRIVER==""
ATTR{name}=="Bluetooth Laser Travel Mouse"
ATTR{phys}=="5c:e0:c5:9d:63:fd"
ATTR{uniq}=="00:07:61:ec:be:5c"
ATTR{properties}=="0"

From here I have tried this:

$ udevadm info -a /sys/devices/pci0000\:00/0000\:00\:1c.3/0000\:03\:00.0/usb2/2-1/2-1\:1.0/0003\:1EA7\:0064.0002/input/input25/mouse1/

and I'm getting this:

Unknown device, absolute path in /dev/ or /sys expected.

If I monitor, I get this result:

$ udevadm monitor -k -s input
monitor will print the received events for:
KERNEL - the kernel uevent

KERNEL[4375.486738] remove /devices/pci0000:00/0000:00:1c.3/0000:03:00.0/usb2/2-1/2-1:1.0/0003:1EA7:0064.0002/input/input25/mouse1 (input)
KERNEL[4375.496500] remove /devices/pci0000:00/0000:00:1c.3/0000:03:00.0/usb2/2-1/2-1:1.0/0003:1EA7:0064.0002/input/input25/event11 (input)
KERNEL[4375.532441] remove /devices/pci0000:00/0000:00:1c.3/0000:03:00.0/usb2/2-1/2-1:1.0/0003:1EA7:0064.0002/input/input25 (input)
KERNEL[4377.840574] add /devices/pci0000:00/0000:00:1c.3/0000:03:00.0/usb2/2-1/2-1:1.0/0003:1EA7:0064.0003/input/input26 (input)
KERNEL[4377.840667] add /devices/pci0000:00/0000:00:1c.3/0000:03:00.0/usb2/2-1/2-1:1.0/0003:1EA7:0064.0003/input/input26/mouse1 (input)
KERNEL[4377.840759] add /devices/pci0000:00/0000:00:1c.3/0000:03:00.0/usb2/2-1/2-1:1.0/0003:1EA7:0064.0003/input/input26/event11 (input)

So I have also tried this:

$ udevadm info -a -p /sys/devices/pci0000\:00/0000\:00\:1c.3/0000\:03\:00.0/usb2/2-1/2-1\:1.0/0003\:1EA7\:0064.0002/input/input25/

and this:

$ udevadm info -a -p /devices/pci0000\:00/0000\:00\:1c.3/0000\:03\:00.0/usb2/2-1/2-1\:1.0/0003\:1EA7\:0064.0002/input/input25/

and get this result:

syspath not found

The only way I manage to get some properties is using this command:

$ udevadm info --query=all --name=/dev/input/mouse1

And I get this, but I don't have the attribute I'm looking for… (i.e. ATTR{name}):

P: /devices/pci0000:00/0000:00:1c.3/0000:03:00.0/usb2/2-1/2-1:1.0/0003:1EA7:0064.0003/input/input26/mouse1
N: input/mouse1
S: input/by-id/usb-1ea7_2.4G_Wireless_Mouse-mouse
S: input/by-path/pci-0000:03:00.0-usb-0:1:1.0-mouse
E: DEVLINKS=/dev/input/by-id/usb-1ea7_2.4G_Wireless_Mouse-mouse /dev/input/by-path/pci-0000:03:00.0-usb-0:1:1.0-mouse
E: DEVNAME=/dev/input/mouse1
E: DEVPATH=/devices/pci0000:00/0000:00:1c.3/0000:03:00.0/usb2/2-1/2-1:1.0/0003:1EA7:0064.0003/input/input26/mouse1
E: ID_BUS=usb
E: ID_INPUT=1
E: ID_INPUT_MOUSE=1
E: ID_MODEL=2.4G_Wireless_Mouse
E: ID_MODEL_ENC=2.4G\x20Wireless\x20Mouse
E: ID_MODEL_ID=0064
E: ID_PATH=pci-0000:03:00.0-usb-0:1:1.0
E: ID_PATH_TAG=pci-0000_03_00_0-usb-0_1_1_0
E: ID_REVISION=0200
E: ID_SERIAL=1ea7_2.4G_Wireless_Mouse
E: ID_TYPE=hid
E: ID_USB_DRIVER=usbhid
E: ID_USB_INTERFACES=:030102:
E: ID_USB_INTERFACE_NUM=00
E: ID_VENDOR=1ea7
E: ID_VENDOR_ENC=1ea7
E: ID_VENDOR_ID=1ea7
E: MAJOR=13
E: MINOR=33
E: SUBSYSTEM=input
E: USEC_INITIALIZED=77840674

So clearly I have a misunderstanding of how to query udev to get the attributes of a device. I hope I'm clear enough; if anyone has an idea where I'm mistaken, any input is welcome! Thanks! Matth.
Note that the input number changed (from 25, which you tried, to 26, which is there now), because those numbers are not guaranteed to be constant across boots. Try

udevadm info -q path -n /dev/input/by-id/usb-1ea7_2.4G_Wireless_Mouse-mouse

with the constant by-id symlink to get the path in the format udev expects, then something like

udevadm info -a -p /path/you/just/got

to walk the path and output all attributes along the way (you may need to use parent attributes to identify it). You can also combine that:

udevadm info -a -p $(udevadm info -q path -n /dev/whatever)
confused about udevadm usage
1,665,540,504,000
I have a file created with mysqldump that is 11GB. I need to use the mysql command to import it into a database, but I need to add:

USE db1;

at the top. It would take forever to rewrite the file. Is there a way I can concatenate another file at the beginning of the input redirect to fool it into looking at it as a single file?

text.txt contents:

USE db1;

sql_out.sql contents: data from mysqldump using the --skip-add-drop-table and --no-create-info options.

Command attempted:

mysql --host=<host> --user=<user> --password=<pwd> < echo $(cat text.txt sql_out.sql)

When I do that I get:

echo: No such file or directory

If I try it without the echo, I get:

$(cat text.txt sql_out.sql): ambiguous redirect

Is there a way to do this?
You can pipe it in:

cat text.txt sql_out.sql | mysql --host=...

Alternatively, to avoid having to create a new file:

(echo "USE db1;"; cat sql_out.sql) | mysql --host=...
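The grouping in the second command matters: the parentheses run both commands in one subshell, so their combined output feeds a single pipe. A quick check with cat standing in for mysql (file contents are illustrative):

```shell
# Verify the consumer sees the USE statement first, then the dump contents.
printf 'line from dump\n' > sql_out.sql
(echo "USE db1;"; cat sql_out.sql) | cat
# → USE db1;
# → line from dump
rm sql_out.sql
```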
using cat at input file
1,665,540,504,000
Let's say I started a service using sudo service service_name start; I am interested in knowing all the ports opened by this service. I can see the port-to-program mapping with sudo netstat -tulnp, but I need it for one specific service.
For this, I usually use lsof with the option to show the ports in use. Here's an example:

[root@localhost ~]# lsof -Pi | grep myprog
myprog 23411 user  9u IPv6 9828537 0t0 TCP 1.2.3.167:51163->1.2.3.54:8090 (ESTABLISHED)
myprog 23411 user 16u IPv4 9827813 0t0 TCP 1.2.3.167:60783->1.2.3.186:23 (ESTABLISHED)
myprog 23411 user 23u IPv4 9827817 0t0 TCP 192.168.2.8:37435->192.168.2.1:20003 (ESTABLISHED)
myprog 23411 user 24u IPv4 9827815 0t0 TCP 192.168.2.8:38942->192.168.2.1:20001 (ESTABLISHED)
myprog 23411 user 30u IPv4 9849168 0t0 TCP 1.2.3.167:52352 (LISTEN)
myprog 23411 user 31u IPv4 9849242 0t0 TCP 1.2.3.167:52352->1.2.3.186:59323 (ESTABLISHED)
myprog 23411 user 33u IPv4 9852370 0t0 TCP 1.2.3.167:40328 (LISTEN)
[root@localhost ~]#
How to check port opened on running a service?
1,665,540,504,000
A bash script is invoked like this:

$ ./script 25 "str1 str2"

and it is supposed to launch a terminal that runs another script which receives both arguments exactly as they are above (including quotation marks). I've tried this:

lxterminal --command=$"./script2 "$"$@"

but this seems to omit the quotation marks, so the call ends up as ./script2 25 str1 str2. What is the correct notation to replicate the arguments as they were on the original command line?
The problem is that the argument to lxterminal's --command is just one string; it can't take a command and its arguments like some other terminals such as xterm do. lxterminal parses that string using its own rules to determine the command to run and its arguments. That's similar, but not identical, to Bourne shell parsing. It does recognise '...' as strong quotes and space as the argument separator, so you can implement quoting for it as:

lxquote() {
  awk -v q="'" '
    function lxquote(s) {
      gsub(q, q "\\" q q, s)
      return q s q
    }
    BEGIN {
      for (i = 1; i < ARGC; i++) {
        printf sep "%s", lxquote(ARGV[i])
        sep = " "
      }
    }' "$@"
}

And call lxterminal as:

lxterminal --command="$(lxquote ./script2 "$@")"

Alternatively, if script's interpreter is bash, you can do:

printf -v code '%q ' ./script2 "$@"
CODE=$code lxterminal --command="bash -c 'eval \"\$CODE\"'"
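For the bash printf %q route, the quoting round-trip can be sanity-checked without any terminal emulator: %q-quoted arguments re-parsed with eval must come back intact. The arguments below mirror the question's example; this requires bash (printf -v and %q are bash features).

```shell
# Verify that %q-quoted arguments survive re-parsing by eval,
# including the embedded space in "str1 str2".
set -- 25 "str1 str2"
printf -v code '%q ' printf '%s\n' "$@"
eval "$code"
# → 25
# → str1 str2
```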
Pass on arguments with double quotation marks from one bash script to another
1,665,540,504,000
How can I remove the first letter from a directory name? For example, if a folder is named "AFolder_01", how can I rename it to "Folder_01"?

The reason for my question is that I have a list of folders and I want to rename all of them at once by removing the first letter. I found this code online to remove the last character(s):

while IFS= read -r dir; do [[ -d $dir ]] && mv -i "$dir" "${dir%?}"; done <all.txt

How can this code be revised to remove the first letter, i.e. in my example rename "AFolder_01" to "Folder_01"? And how can it be revised to add a character back at the beginning of the folder name, i.e. rename "Folder_01" to "AFolder_01"?
Once you have your directory name in a variable (e. g. dir), you can:

mv "$dir" "${dir:1}"

This will strip the first character from the variable. I shall leave sanity-checking that the new directory does not already exist up to you.

To add something to the beginning (e. g. the letter A):

mv "$dir" "A$dir"
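Applied to the loop from the question (with all.txt listing one directory name per line, as there), a sketch:

```shell
# Strip the first character from each directory named in all.txt.
# ${dir:1} is the bash/ksh substring expansion: everything from index 1 on.
while IFS= read -r dir; do
    [[ -d $dir ]] && mv -i "$dir" "${dir:1}"
done < all.txt
```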
Remove first character in a folder name [duplicate]
1,665,540,504,000
From a bash or sh shell, how can I determine if it was called with the bash or sh command, a login shell, an xterm, and, in the case of the former, how that was called? For example, if I call bash from an xterm, and then call it again inside that instance, it might output something like:

me@mylinuxmachine:~$ bash
me@mylinuxmachine:~$ bash
me@mylinuxmachine:~$ magic_command
Called by /bin/bash {
  Called by /bin/bash {
    Called by xterm
  }
}
You may use pstree for this:

$ bash
bash-4.4$ pstree -p "$$"
-+= 00001 root /sbin/init
 \-+= 85460 kk tmux: server (/tmp/tmux-1000/default) (tmux)
   \-+= 96572 kk -ksh93 (ksh93)
     \-+= 72474 kk bash
       \-+= 14184 kk pstree -p 72474
         \-+- 51965 kk sh -c ps -kaxwwo user,pid,ppid,pgid,command
           \--- 91001 kk ps -kaxwwo user

The pstree utility will show the parent-child relationships for all processes currently running on the system. With -p "$$" you restrict its output to only contain processes related to the current shell (whose process ID is stored in the $ variable).

To cut the output off at the point where it gets to the current shell, you could use sed:

bash-4.4$ pstree -p "$$" | sed "/= $$ /q"
-+= 00001 root /sbin/init
 \-+= 85460 kk tmux: server (/tmp/tmux-1000/default) (tmux)
   \-+= 96572 kk -ksh93 (ksh93)
     \-+= 72474 kk bash

For Linux systems, which apparently use a different implementation of this utility from what I'm using (on OpenBSD), you may want to use

$ pstree -salup "$$"

to get a similar output, and

$ pstree -salup "$$" | sed "/,$$\$/q"

to cut the output off at the point where it gets to the current shell.

Here's a shell function pls (for "process ls", that's the best I could come up with) that does the above for any given PID (or the PID of the current shell if left out):

function pls {
  local pid="${1:-$$}"
  pstree -salup "$pid" | sed "/,$pid\$/q"
}
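If pstree isn't available, the same ancestry can be walked by hand with POSIX ps options; a minimal sketch:

```shell
# Print each ancestor of the current shell (PID and command name),
# walking up via the parent PID until we reach PID 1.
pid=$$
while [ "$pid" -gt 1 ]; do
    ps -o pid=,comm= -p "$pid"
    pid=$(ps -o ppid= -p "$pid" | tr -d ' ')
done
```

The trailing = in -o pid=,comm= suppresses the header line, so the output is just the chain of ancestors.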
Determine how bash or sh was called
1,665,540,504,000
I have found a behavior of the shell I don't understand. When you execute echo foo > /tmp/bar & exit, the terminal is closed. But when you execute exit & echo foo > /tmp/bar, the terminal stays open and something like [1] 4001 is printed on it. The output of the first version can be seen, too, if you log into a different account using su, because then only the session of the user you logged in to using su is ended and your terminal stays open. It's something like:

[2] 13777
exit
[1]+  Done                    exit

Why isn't the session ended if exit is the first command? Just in case you can't replicate this on your Unix system: I'm using Ubuntu 16.04.
Your first command, echo foo > /tmp/bar & exit starts a subshell in the background to run echo foo > /tmp/bar, and exits (the foreground shell). This closes your terminal, in the same way as simply typing exit. The background shell won't stay around very long at all, so you won't get a race; but if you do this with a longer-running command, depending on your shell and your options, you'll get different behaviours: sleep 60 & exit might not exit but complain that you have running jobs instead. In both these cases, the shell does print a line of the form [1] 7149 but if your terminal closes you won't have time to see it. Your second command, exit & echo foo > /tmp/bar starts a subshell in the background to run exit, which exits (from the subshell), and runs echo foo > /tmp/bar. So the “main” shell doesn't exit and the terminal stays open, which allows you to see [2] 13777 followed by exit (printed by the shell exiting) and then [1]+ Done exit (printed by the main shell when the background shell exits).
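The subshell behaviour is easy to demonstrate in an isolated shell: an exit on the left of & terminates only the backgrounded subshell, so the right-hand side still runs in the surviving shell.

```shell
# The inner bash survives the backgrounded "exit" and still runs echo.
bash -c 'exit & echo "still here"; wait'
# → still here
```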
End of session if exit command run in parallel to different command
1,480,706,472,000
I've got a piece of a large text file with readings like below:

name=ABC
class=3
age=7
roll_no=41

name=XYZ
class=4
age=9
roll_no=23

How can I pair each name with its respective age and write the result, values separated by a space, like this: ABC 3 XYZ 9? Is there any tool/script to save the result in JSON format? I've tried for hours with awk, sed, tr, grep etc., but I'm horrible at command-line text processing. Thanks in advance.
I'd use awk:

awk -F"=" '
    {data[$1] = $2}

    function output() {
        if ("name" in data && "age" in data)
            print data["name"], data["age"]
        delete data
    }

    NF == 0 {output()}
    END     {output()}
' filename
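Since the question also asked about JSON: the same blank-line-delimited record idea can emit a JSON array instead. This is a sketch that assumes the values need no JSON escaping; the sample data is inlined from the question so it can be run directly.

```shell
printf 'name=ABC\nclass=3\nage=7\nroll_no=41\n\nname=XYZ\nclass=4\nage=9\nroll_no=23\n' |
awk -F"=" '
    BEGIN { printf "[" }
    { data[$1] = $2 }

    function output() {
        if ("name" in data && "age" in data) {
            printf "%s{\"name\":\"%s\",\"age\":%s}", sep, data["name"], data["age"]
            sep = ","            # comma only before subsequent records
        }
        delete data
    }

    NF == 0 { output() }
    END     { output(); print "]" }
'
# → [{"name":"ABC","age":7},{"name":"XYZ","age":9}]
```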
Separate two values from a large text file, where each reading is separated by a blank line
1,480,706,472,000
The computer is running Yosemite. I was trying to move a couple of documents from the Downloads folder to Documents via the command line, using:

find . -iname '*.pdf' -exec mv "{}" ./Documents \

The answer given covered almost everything. I am just wondering now what happened to the files that were moved. Are they in the root user's Documents folder?
The command might better have been as follows.

find ~/Downloads -iname "*.pdf" -exec mv '{}' ~/Documents \;

Note the semi-colon after the backslash. Or, also better, as follows.

find ~/Downloads -iname "*.pdf" -exec mv '{}' ~/Documents +

Note the plus used instead. Or even as follows.

find ~/Downloads -iname *.pdf -exec mv {} ~/Documents +

The single quotes around {} are used to tell the shell not to interpret the characters within the quotes as any kind of shell punctuation. They are normally not needed, but using the single quotes is not a bad habit to have. As far as I know, it's only fish that expands {} to something else. The {} characters themselves basically represent the list of found files. The double quotes around the -iname parameter are likewise not needed, but also not a bad habit to use.

The command in the question uses the continuation character, so pressing Enter after it should only have provided another line on which you could continue to type. So, what else was typed after that command, or did you just forget to type the semi-colon in the question?

If the command was typed as a regular user, then the PDF files may still be in your home directory somewhere. It would have been most useful to open Terminal and type man find, then see the Examples section for usage notes that would have covered this scenario.

To find all PDF files on your system:

cd && find / -iname "*.pdf" 1>found.txt

This will create a list, found.txt, of all PDF files that the regular user account has permission to list. I don't know if there is enough information to answer where the missing files have gone.
Using 'find' in the command line
1,480,706,472,000
I have this command:

ptr=`host $hostname`

which results in:

test.tester.test has address 192.168.1.1

This works! What I want now is to extract only the IP address (192.168.1.1), pass it to the variable $myptr and run the following:

if $myptr | sed -n '/\(\(1\?[0-9][0-9]\?\|2[0-4][0-9]\|25[0-5]\)\.\)\{3\}\(1\?[0-9][0-9]\?\|2[0-4][0-9]\|25[0-5]\)/p' ; then
    host $myptr
else
    echo "No PTR Record found"
fi

But it does not work. Please help?
You don't need any extravagant text processing on the output of host; you can just use dig +short to get only the IP address (and do the required reverse lookup on the IP).

dig +short "$hostname"

e.g.

ip="$(dig +short "$hostname")"
host "$ip"

Or directly:

host "$(dig +short "$hostname")"
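One caveat worth guarding against: dig +short may print nothing (lookup failure) or several lines (e.g. a CNAME chain before the address), so checking the result before calling host avoids confusing errors. A sketch; the hostname is a placeholder:

```shell
hostname="test.tester.test"               # placeholder
ip=$(dig +short "$hostname" | tail -n 1)  # last line is the address, if any
if [ -n "$ip" ]; then
    host "$ip"
else
    echo "No PTR Record found"
fi
```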
Pass variable IP address to if else
1,480,706,472,000
I'm looking for a one-line command to make a file more readable. I want to replace all ; characters with a newline unless they are inside a set of (). This is on a firewall, so I can only use bash; no perl etc.

Example input:

ProductName: Threat Emulation; product_family: Threat; Destination: (countryname: United States; IP: 127.0.0.1; repetitions: 1) ; FileName: (file_name: myfile) ;

Expected output:

ProductName: Threat Emulation
product_family: Threat
Destination: (countryname: United States; IP: 127.0.0.1; repetitions: 1)
FileName: (file_name: myfile)
A slightly confusing regex for sed, but workable:

sed '
:a                                                  #mark return point
s/\(\(^\|)\)[^(]\+\);\s*\([^)]\+\((\|$\)\)/\1\n\3/  #replace ; between ) and ( with newline
ta                                                  #repeat if the substitution succeeded
s/[[:blank:];]\+$//                                 #remove trailing ; and spaces
'

Brief regex explanation:

^\|)      from the line start or )
[^(]\+    any symbols but (
;\s*      semicolon with possible space(s)
(\|$      up to the line end or (
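If the sed gets hard to maintain, an awk sketch that tracks parenthesis depth explicitly may be easier to reason about. The sample line is the one from the question:

```shell
printf '%s\n' 'ProductName: Threat Emulation; product_family: Threat; Destination: (countryname: United States; IP: 127.0.0.1; repetitions: 1) ; FileName: (file_name: myfile) ;' |
awk '{
    depth = 0; out = ""
    for (i = 1; i <= length($0); i++) {
        c = substr($0, i, 1)
        if (c == "(") depth++
        else if (c == ")") depth--
        if (c == ";" && depth == 0) {
            sub(/ +$/, "", out)                        # drop spaces before the ;
            out = out "\n"
            while (substr($0, i + 1, 1) == " ") i++    # skip spaces after it
        } else
            out = out c
    }
    sub(/[ \n]+$/, "", out)                            # trim trailing blank line
    print out
}'
# Output:
#   ProductName: Threat Emulation
#   product_family: Threat
#   Destination: (countryname: United States; IP: 127.0.0.1; repetitions: 1)
#   FileName: (file_name: myfile)
```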
Replace particular character but not if it is inside ()
1,480,706,472,000
I am on Debian 8 and I am searching for a command in Debian to use root privileges without a password. I am asking because I already saw it somewhere but can't find it anymore.
The command is sudo. Add a line such as below into /etc/sudoers:

sigis ALL=(ALL) NOPASSWD: ALL

This means user sigis can now run things like the command below without requiring a password.

sudo shutdown -h now
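Rather than editing /etc/sudoers directly, a safer route is a drop-in file plus a syntax check; a sketch (user name taken from the answer above, and visudo -c validates the configuration before it is relied on):

```shell
# Create a drop-in rule and validate the whole sudoers configuration.
echo 'sigis ALL=(ALL) NOPASSWD: ALL' | sudo tee /etc/sudoers.d/sigis
sudo chmod 0440 /etc/sudoers.d/sigis
sudo visudo -c    # syntax check; a broken sudoers file can lock you out
```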
Use root privilege without password
1,480,706,472,000
I do this in bash:

echo test "$1"

expecting to get:

test test

but I get:

test

Is this possible to do? It would make my life easier, since having a list of files I could do something like:

mv a/b/test.py proj_copy/$1
You can use history expansion:

$ echo test !#:^
echo test test
test test
$ echo a/b/test.py proj_copy/!#:^
echo a/b/test.py proj_copy/a/b/test.py
a/b/test.py proj_copy/a/b/test.py

!#   The entire command line typed so far.
:^   The first argument.

You could also use brace expansion:

$ echo test{,}
test test
$ echo {,proj_copy}/a/b/test.py
/a/b/test.py proj_copy/a/b/test.py
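A third option, since history expansion can be surprising: in bash, $_ expands to the last argument of the previous command, which covers the mv use case across two command lines. The paths are the example ones, and the first two lines are just setup so the sketch runs as-is:

```shell
mkdir -p a/b proj_copy      # setup for the example
touch a/b/test.py
echo a/b/test.py
mv "$_" proj_copy/          # same as: mv a/b/test.py proj_copy/
```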
Possible to reuse first argument of BASH line in the same line?
1,480,706,472,000
I am writing a script that checks a certain directory for any sub-directories starting with a specific word. Here is my script thus far:

#!/bin/bash

function checkDirectory() {
    themeDirectory="/usr/share/themes"
    iconDirectory="/usr/share/icons"

    # I don't know what to put for the regex.
    regex=

    if [ -d "$themeDirectory/$regex" && -d "$iconDirectory/$regex" ]; then
        echo "Directories exist."
    else
        echo "Directories don't exist."
    fi
}

So, how would you use a regex to check if a particular directory has any folders starting with a specific word?
If you only want to find directories matching a given pattern/prefix, I think you could just use find:

find /target/directory -type d -name "prefix*"

or, if you only want immediate subdirectories:

find /target/directory -maxdepth 1 -type d -name "prefix*"

Of course, there's also -regex if you need an actual regex match. (Caveat: I can't remember if -maxdepth is a GNU-ism.)

(Update) Right, you wanted an if statement. find always returns zero, so we can't use the return value to check if anything was found (unlike with grep). But we can e.g. count the lines: pipe the output through wc to get the count, then see if it's not zero:

if [ $(find /target/directory -type d -name "prefix*" | wc -l ) != "0" ] ; then
    echo something was found
else
    echo nope, didn't find anything
fi
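With GNU find, -print -quit stops at the first match instead of scanning the whole tree, so the check short-circuits. The target directory and prefix below are placeholders:

```shell
# True if at least one immediate subdirectory matches prefix*.
if [ -n "$(find /target/directory -maxdepth 1 -type d -name "prefix*" -print -quit)" ]; then
    echo "Directories exist."
else
    echo "Directories don't exist."
fi
```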
Use regex to check if particular directory has folders starting with specific word
1,480,706,472,000
I want to delete files not by access or creation date, but by filename. The filenames are dates, and I want a cronjob to run once a week that purges files whose filename dates are older than 7 days. I could do a

find /my/directory -type f -name '*file-name.yyyy-mm-dd.qz' -delete

but then I would have to change the script on a weekly basis. I would like to avoid having to modify the job every week.
Here is a more robust form that correctly handles spaces (or even newlines) in filenames and directory names.

find . -type f -name '*.[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9].qz' -exec sh -c 'fdate="${1%.qz}"; fdate="${fdate##*.}"; [ "$fdate" "<" "$(date +%F -d "7 days ago")" ] && rm "$1"' find-sh {} \;

This involves a lot of shell trickery that might look alien to some people, so let's break it down.

Starting in the current directory, recursively find all regular files...

find . -type f

...whose names end in the exact pattern ".YYYY-MM-DD.qz"...

-name '*.[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9].qz'

...then, run a shell command on each matching file (note the single quotes)...

-exec sh -c '

...which first strips off the trailing ".qz"...

fdate="${1%.qz}";

...then strips off the leading extra part, leaving only "YYYY-MM-DD"...

fdate="${fdate##*.}";

...and compares that string to see if it sorts (lexically) earlier than the "YYYY-MM-DD" of the date seven days ago...

[ "$fdate" "<" "$(date +%F -d "7 days ago")" ]

...and if so, removes the file...

&& rm "$1"'

...and we'll use "find-sh" as the "script name" (i.e. $0) to be used for error reporting...

find-sh

...and set the filename found by find to parameter one ($1) of the inline shell script.

{} \;
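The behaviour can be verified on throwaway data. GNU date's -d is assumed here, and bash is invoked explicitly because the "<" string comparison inside [ ] is a bash/ksh test extension rather than POSIX:

```shell
# One stale dated file, one dated today; after the sweep only today's remains.
cd "$(mktemp -d)"
touch "backup.2001-01-01.qz" "backup.$(date +%F).qz"
find . -type f -name '*.[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9].qz' \
    -exec bash -c 'fdate="${1%.qz}"; fdate="${fdate##*.}";
        [ "$fdate" "<" "$(date +%F -d "7 days ago")" ] && rm "$1"' find-sh {} \;
ls    # only backup.<today>.qz is left
```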
How do I delete files by filename, where the names are dates?
1,480,706,472,000
I'm currently doing my thesis and thus working on a school server. And unfortunately, I'm struggling with even the most basic concepts. Such as this. I have a directory /home/myname/Data If I'm in /home, and use ls I don't see the directory myname, yet, when I specify cd myname it still works and l works as well. Can somebody explain why ls doesn't work, but l does? Edit: If I'm in /home, I do see other directories, but I'm missing a few. I'll check with my supervisor what should be there.
This feels like your home directory is being automounted on demand. This configuration is most frequently used when there are a number of free access workstations. It allows for backups to be taken of files on the central server, and the workstations can be rebuilt at any time from a standard image that has no need to worry about persistent local file storage. When you access a directory under /home it is mounted automatically from a central file server. When you cease using it, the directory is (eventually) unmounted again. It is very confusing, because directories that "aren't there" spring into existence when you try to reference them. Since you're a beginner, all I can really suggest is that you ignore this complexity and concentrate on learning within subdirectories of your home directory ($HOME). Good luck.
Command works, and then doesn't (cd/ls)
1,480,706,472,000
I have:

nginx -V 2>&1 | \
grep -qi 'nginx/1.9.10\|ngx_pagespeed-release-1.9.32.10\|openssl-1.0.2f\|modsecurity-2.9.0' \
  && echo "has the stuff we need" \
  || echo "missing something"

which is going against:

[root@mage2appblock vagrant]# nginx -V
nginx version: nginx/1.9.10
built by gcc 4.4.7 20120313 (Red Hat 4.4.7-16) (GCC)
built with OpenSSL 1.0.2f 28 Jan 2016
TLS SNI support enabled
configure arguments: --user=www-data --group=www-data --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --pid-path=/var/run/nginx.pid --lock-path=/var/lock/subsys/nginx --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --add-module=/src/nginx/ngx_pagespeed-release-1.9.32.10-beta --add-module=/src/nginx/modsecurity-2.9.0/nginx/modsecurity --with-http_auth_request_module --with-http_sub_module --with-http_mp4_module --with-http_flv_module --with-http_addition_module --with-http_dav_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-http_ssl_module --with-openssl=/src/nginx/openssl-1.0.2f --with-sha1=/usr/include/openssl --with-md5=/usr/include/openssl --with-pcre --with-ipv6 --with-file-aio --with-http_realip_module --without-http_scgi_module --without-http_uwsgi_module

It seems that if I change the substrings from

'nginx/1.9.10\|ngx_pagespeed-release-1.9.32.10\|openssl-1.0.2f\|modsecurity-2.9.0'

to

'nginx/1.9.10\|ngx_pagespeed-release-1.9.32.10\|openssl-1.0.2f\|modsecurity-2.9.1'

I still get "has the stuff we need" even though not everything was present. I need to match all or nothing.
This works with a little modification:

[ $(nginx -V 2>&1 | grep -cFf <( echo 'nginx/1.9.10
ngx_pagespeed-release-1.9.32.10
openssl-1.0.2f
modsecurity-2.9.0' )) -eq 4 ] \
  && echo "has the stuff we need" \
  || echo "missing something"
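One caveat: grep -c counts matching input lines, not matched patterns, so if several of the substrings land on nginx's single "configure arguments" line the count can come out below 4 even when everything is present. A loop that checks each substring independently avoids that; below, the out variable stands in for the real nginx -V 2>&1 output:

```shell
# In real use:  out=$(nginx -V 2>&1)
out='nginx version: nginx/1.9.10
configure arguments: --add-module=/src/nginx/ngx_pagespeed-release-1.9.32.10-beta --add-module=/src/nginx/modsecurity-2.9.0/nginx/modsecurity --with-openssl=/src/nginx/openssl-1.0.2f'
ok=yes
for s in 'nginx/1.9.10' 'ngx_pagespeed-release-1.9.32.10' \
         'openssl-1.0.2f' 'modsecurity-2.9.0'; do
    case "$out" in
        *"$s"*) ;;                 # substring present, keep going
        *) ok=no; break ;;         # any miss fails the whole check
    esac
done
[ "$ok" = yes ] && echo "has the stuff we need" || echo "missing something"
# → has the stuff we need
```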
grep match multiple substrings and pass or fail on missing
1,480,706,472,000
A question: in some cases, I have seen a command line like this:

. ./test.sh

I'm curious why "." is used before "./test.sh". Under what conditions do we have to use "." before a command?
Running . ./test.sh is similar to running source ./test.sh. It's not running the file test.sh as an executable; instead it's reading its contents line by line into your current shell. So it could, for example, also modify your current environment.
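A quick demonstration of the difference (the file name is arbitrary):

```shell
# A variable set in an executed script dies with the child process; the
# same script sourced with "." changes the current shell.
printf 'MYVAR=hello\n' > demo.sh
sh ./demo.sh
echo "${MYVAR:-unset}"   # → unset
. ./demo.sh
echo "${MYVAR:-unset}"   # → hello
```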
Under what conditions do we need to use "." on the command line? [duplicate]
1,480,706,472,000
Edited because people wanted to know more than just how to deal with two directories with the same name in different places, and how to move one into the other when mv will not allow it because files already exist in the destination. Here is what I am trying to do and why.

My script gets all of the directories within a parent dir, then goes through each dir containing mp3 or flac files, one at a time. It checks whether resampling needs to be done by checking the bitrate: if the bitrate is greater than the set resample bitrate, it resamples the file; if not, it skips it. Next step: renaming all of the files by TAG information. If a file does not have TAG information in it, I derive it from the artist dir and album dir (or other means) and add it to the file. Then I use that META TAG information to rename the file, e.g. "song - artist.mp3". With this script the files stay within their original dirs.

When the script has completed working on all of the files within an artist dir, that dir needs to be moved out of the parent dir, so that when the script is run again it will not go back over all of the files that have just been completed. So I have it rename the artist dir to just the artist name, because some of them have "artist / album / discography @320MP3" junk written in the directory name -- I clean that up by renaming the dir to the artist from the META TAG artist info, then move it to another parent dir that holds all of the completed ones. I have many dirs for the same artist with different names, as well as duplicate files in different dirs, all in one parent dir.

All the same artist, with differently named dirs -- example:

1) "Alice Cooper mad man/album name/Old-files"
   when complete it will be "Alice Cooper/album name/New-files"

2) "Alice Cooper whatever else is here/album/CD1/Old-files"
   "Alice Cooper whatever else is here/album/CD2/Old-files"
   when done it will be
   "Alice Cooper/album/CD1/New-files"
   "Alice Cooper/album/CD2/New-files"

The steps: loop one gets the dir name to work on. Loop two goes through every file within that dir structure and resamples, re-tags if needed, and renames files. When all files are completed, it renames the dir to the artist name, then moves it out of that parent dir into a different parent dir to get it out of the way, and repeats the steps for the next dir within the parent dir.

The problem: when the loop comes across a match for an album name that has already been renamed and moved to the other parent dir, it raises an error on

mv -f /from/here/sameNameDir /to/here/

because /to/here/ already contains a dir with the same name. If the dir is not moved, the script will just cycle through all of the files that are already completed, wasting time; I have over 40,000 files, so it takes time to do this. I just move the completed dir out of there so that on a second run of the script the next day it can start fresh.

My script: it works but has a few bugs, so it is still in testing mode and has a lot of echos in it. I also reuse code I've written before, so you may see movie-related comments in it as well.
#!/bin/bash
# started writting this on 11-24-2015

typeset -i FilesCountInSecondLoop FilesInDirCount filesLeftToDo DirCountDn
let FilesCountInSecondLoop=0 FilesInDirCount=0 filesLeftToDo=0 DirCountDn=0
typeset -i cycleThroughFilesCount
let cycleThroughFilesCount=0

working_dir="/media/data/test-run"
move_to="/media/data/move_to_test"

# where you keep this script to run it in a terminal
# it is written so that you do not have to put it in the
# working directory or directories to eliminate having to put
# a separate script in every folder to run it
##############################################
script_dir="$HOME/working"

# max rate you want your orgianl files to be check for to be
# resampled at
LameCheckRate=192

# bitrate you want your FLAC files coverted to mp3 in
# you can convert your FLAC to a higher rate then the
# resmapled mp3 if you like
##################################
flac_convert_brate=192
# this is the FLAC settings it runs at
# flac -cd "$FILENAME" | lame -b "${flac_convert_brate}" - "$newFile"

# LAME settings VBR low and High end
LameLowEnd=128
LameHighEnd=192
# this is the LAME settings it runs at
##lame -V2 -b"${LameLowEnd}" -B"${LameHighEnd}" -F --vbr-new -m j -q2

#####################
## DO NOT CHANGE ####
runTime=1
###########
convertN=$1
#########
#####################

# gets amount of files wanted to resample on run
# from command line args
function rst(){
    if [[ convertN -lt runTime ]]; then
        echo " you forgot to enter an amount to resample"
        ConvertNum=0
        exit 1
    else
        ConvertNum=$((convertN))
        return $ConvertNum #pass the var to space land
    fi
}
rst

# var to keep amount of dirs desired to be done per run of script
# amount of files in the dir may not allow to get done in one day
amount_of_dir_to_work_on=$ConvertNum

echo ""$working_dir" START - creating list of files"

# get all of the names of the base dir to change name a var containing
# ampunt of basenamedir in last place here
# amount_of_dir_to_work_on this is gotten off of the command line
find "$working_dir" -mindepth 1 -maxdepth 1 -type d | while [ $DirCountDn -lt $amount_of_dir_to_work_on ] ; do
    read DIRNAME;
    echo "$DIRNAME"

    #get list of all files in dir and sub dir's of current Dir to work off of
    MAXMP3="$(find "$DIRNAME" -type f -name "*.mp3" | wc -l)"
    MAXFLAC="$(find "$DIRNAME" -type f -name "*.flac" | wc -l)"
    echo;echo;echo
    echo "amount of mp3 "$MAXMP3" in "$DIRNAME""
    FilesCountInSecondLoop=$(($MAXMP3 + $MAXFLAC))
    filesLeftToDo="$FilesCountInSecondLoop"
    echo "Just starting off"
    echo "MAXMP3 is : "$MAXMP3""
    echo "MAXFLAC is : "$MAXFLAC""
    echo "FilesCountInSecondLoop : "$FilesCountInSecondLoop""
    echo "Files left to do : "$filesLeftToDo""
    echo "cycleThroughFilesCount : "$cycleThroughFilesCount""

    # MAXMP3 starts with a number
    # if not equle to
    # cycleThroughFilesCount starts with zero
    find "$DIRNAME" -type f -name "*.*" | while [ $FilesCountInSecondLoop -ne $cycleThroughFilesCount ] ; do
        read FILENAME;

        #Directory to put the files back into it after resampling is completed
        r=$FILENAME
        c=$FILENAME
        xpath=${c%/*}
        xbase=${c##*/}
        xfext=${xbase##*.}
        xpref=${xbase%.*}
        path=${xpath}
        pref=${xpref}
        ext=${xfext}

        #checks to see if varitable is empty meaning no files to that extention to
        #resample are left to do --
        if [ -z "$ext" ]; then
            echo "all Done - dar eay."
            exit 1
        fi

        #############################
        ############################
        ###
        ### GET RID OF EVERYTHING THAT IS NOT A MP3 OR FLAC FILE
        ###
        ##############################################################
        #Checks each movie file to see if it is just a not needed sample of
        #the move to regain file space by deleting it
        for file in "${path}" ; do
            # echo "in for loop ext1 is -> "$ext""
            if [[ "$ext" != 'flac' && "$ext" != 'mp3' && "ext" != 'jpg' ]]; then
                # echo "in loop if statment ext is -> "$ext""
                # echo "Removing "$FILENAME""
                removeme="$FILENAME"
                rm -v "$removeme"
                # set a different ext type so that it will not go into following
                # if statement due to it is still a movie extention
                # causes it to skip over and go to next file
                ## ext1="foobar"
                let InIfLoop++
                # echo "in IF Loop ="${InIfLoop}""
            fi
            let inLoop++
            #echo "inside of loop ="${inLoop}""
        done
        let leftLoop++
        #echo "left loop count = "$leftLoop""

        ####################
        ###
        ### START ON MP3 or FLAC FILE
        ###
        ###############################################
        #echo "Extention off before into first if statment "${ext}""
        # echo
        if [[ "${ext}" == 'mp3' || "${ext}" == 'flac' ]] ; then
            echo;echo
            echo $FILENAME " Looking to resample this FILE now!"
echo;echo ############################################################# #get the metadata tags off the mp3's using exiftool-perl ################# ALBUM1="`exiftool -Album "$FILENAME" -p '$Album'`" ARTIST1="`exiftool -Artist "$FILENAME" -p '$Artist'`" SongName="`exiftool -Title "$FILENAME" -p '$Title'`" TRACK1="" TRACK2="" TRACK1="`exiftool "-Track Number" "$FILENAME" -p '$TrackNumber'`" TRACK2="`exiftool -Track "$FILENAME" -p '$Track'`" #GENRE1="`exiftool -Genre "$FILENAME" -p '$Genre'`" # echo "track 1 -> "$TRACK1"" # echo "track 2 -> "$TRACK2"" #gets the number off the left side of file name if any then # hopefully fixs it to tag track number in file number=${pref//[^0-9 ]/} number=${number%% *} number=${number%/*} #removes leading white space on both ends of string number="$(echo -e "${number}" | sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//')" # echo "NUMBER IS now = "$number"" if [ -z "${TRACK1}" ] && [ -z "${TRACK2}" ] ; then id3v2 -T "$number" "${FILENAME}" echo "aftering adding track" TRACK1="`exiftool "-Track Number" "$FILENAME" -p '$TrackNumber'`" TRACK2="`exiftool -Track "$FILENAME" -p '$Track'`" echo "this is track1 "$TRACK1"" echo "This is TRACK2 "$TRACK2"" echo fi #replaces all the crap and the spaces #between the names with an underscore #"${x// /_}" meaning "${varName//search pattern/replace with}" echo "GETTING OFF TAGS" #echo echo "ARTIST1 going in is "$ARTIST1"" newArt="${ARTIST1// / }" newArt="${newArt#/*}" newArt=${newArt//[^A-Za-z&0-9"'" ]/ } newArt="$(echo -e "${newArt}" | tr "[A-Z]" "[a-z]" | sed -e "s/\b\(.\)/\u\1/g")" #ensure only one space between each word newArt="$(echo -e "${newArt}" | sed -e 's/\s+/ /g')" #removes leading white space on both ends of string newArt="$(echo -e "${newArt}" | sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//')" echo "newArt comming out is -> "$newArt"" newTit="${SongName// / }" newTit=${newTit//[^A-Za-z&0-9"'" ]/ } #Capitalizes each word newTit="$(echo -e "${newTit}" | tr "[A-Z]" "[a-z]" | sed -e 
"s/\b\(.\)/\u\1/g")" #ensure only one space between each word newTit="$(echo -e "${newTit}" | sed -e 's/\s+/ /g')" #removes leading white space on both ends of string newTit="$(echo -e "${newTit}" | sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//')" #echo "NEW TITLE comming out is" echo "$newTit" #echo "ALBUM1 going in is -> "$ALBUM1"" newAlb="${ALBUM1%/*}" newAlb=${newAlb//[^A-Za-z&0-9"'" ]/ } #Capitalizes each word newAlb="$(echo -e "${newAlb}" | tr "[A-Z]" "[a-z]" | sed -e "s/\b\(.\)/\u\1/g")" #ensure only one space between each word newAlb="$(echo -e "${newAlb}" | sed -e 's/\s+/ /g')" #removes leading white space on both ends of string newAlb="$(echo -e "${newAlb}" | sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//')" echo "newAlb commming out is -> "${newAlb}"" #echo "DONE GETTING OFF TAGS" #echo #strip the orginal file name off the path from FILENAME c=$FILENAME xpath=${c%/*} xbase=${c##*/} xfext=${xbase##*.} xpref=${xbase%.*} path=${xpath} pref=${xpref} ext=${xfext} #################################### c=$FILENAME ############################## # if MP3 has no needed tag information then # strips names off of directory folders then uses them # as artist/band -- and album names in tags before renaming mp3 file ########################## # echo "GETTING OFF OF DIRECTORIES" # echo "STARTING TO EXTRACT DIRECTORIES NAMES" file=${c##*/} album1=${c#*"${c%/*/"$file"}"/} Artist=${album1%/*} Artist1=${c#*"${c%/*/"$album1"}"/} album=${album1%%/*} Artist2=${Artist1%%/*} # echo "right here YO" dir=${FILENAME#*/*/*/*/} dir=${dir//\/*} echo "$dir" #rename directory NewDirectoryName="$dir" # echo "$NewDirectoryName" NewDirectoryName=${NewDirectoryName%%'('*} NewDirectoryName=${NewDirectoryName%%'320cbr'*} NewDirectoryName=${NewDirectoryName%'[Bubanee]'*} NewDirectoryName=${NewDirectoryName%'MP3'*} NewDirectoryName=${NewDirectoryName%'2CD'*} NewDirectoryName=${NewDirectoryName%'Discography'*} NewDirectoryName=${NewDirectoryName%'discography'*} 
NewDirectoryName=${NewDirectoryName//[^A-Za-z ]/ } #Capitalizes each word NewDirectoryName="$(echo -e "${NewDirectoryName}" | tr "[A-Z]" "[a-z]" | sed -e "s/\b\(.\)/\u\1/g")" #ensure only one space between each word NewDirectoryName="$(echo -e "${NewDirectoryName}" | sed -e 's/\s+/ /g')" #removes leading white space on both ends of string NewDirectoryName="$(echo -e "${NewDirectoryName}" | sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//')" #echo "newAlb after striaghtening it up -> "${newAlb}"" # echo "NewDirectoryName is --- -> "$NewDirectoryName"" # echo e=$FILENAME xpath=${e%/*} xbase=${e##*/} xfext=${xbase##*.} xpref=${xbase%.*} path1=${xpath} pref1=${xpref} ext1=${xfext} # echo "song off directory is -> "$pref1"" songTitle="${pref1}" songTitle=${songTitle//[^A-Za-z&0-9"'" ]/ } #Capitalizes each word songTitle="$(echo -e "${songTitle}" | tr "[A-Z]" "[a-z]" | sed -e "s/\b\(.\)/\u\1/g")" #ensure only one space between each word songTitle="$(echo -e "${songTitle}" | sed -e 's/\s+/ /g')" #removes leading white space on both ends of string songTitle="$(echo -e "${songTitle}" | sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//')" # echo "newAlb after striaghtening it up -> "${newAlb}"" # echo "new songTitle is -> "$songTitle"" # echo "DONE GETTING OFF OF DIRECTORIES" #echo;echo; if [ -z "$ALBUM1" ] ; then id3v2 -A "$newAlb1" "${FILENAME}" echo "tagging Album tag to file is -> "$newAlb1" " echo fi if [ -z "$ARTIST1" ] ; then id3v2 -a "$Artist" "${FILENAME}" echo "tagging Artist tag to file is -> "$Artist" " newArt=$Artist echo fi if [ -z "$SongName" ] ; then id3v2 -t "$songTitle" "${FILENAME}" echo "tagging Title tag to file is -> "$songTitle" " newTit=$songTitle echo fi # MAKING NEW FILE NAME ########################### ALBUM1="`exiftool -Album "$FILENAME" -p '$Album'`" # echo "JFSDFSDFSDFSDFSDFSDFSDFSDFSDF" function GetArt () { if [[ ! 
-n "$ARTIST" ]]; then Art=$((ARTIST)) #echo " got someting " return $Art #pass the var to space land fi } GetArt echo "this is the newAt justbefore making newFIle "$newArt"" newFile=""${newTit}" - "${newArt}".mp3" # get the size of the Orginal File and keep for later use FileSize1="`exiftool '-File Size' "$FILENAME" -p '$FileSize'`" #if song was already resampled it skips it then goes onto next one echo "******************************************************************" # echo "this is old file name about to be checked if matches new FileName" # echo "right here -> "$pref" new File name -> "${newFile%.*}"" # echo ## REMOVE the Extention off of NewFile to check string # if [[ "$pref" != "${newFile%.*}" ]] ; then if [[ "$pref" == "${newFile%.*}" ]] ; then echo;echo echo "This file -> :: "${newFile%.*}" " :: has already been done, skipping"" let cycleThroughFilesCount++ let filesLeftToDo-- echo "amount of mp3 : "$MAXMP3" in "$DIRNAME"" echo "MP 3 left to do : "$filesLeftToDo"" echo "MP3 done : "$cycleThroughFilesCount"" echo;echo else ####################################### # # CHECK BITRATE of MP3 = 192 - 160 vbr # CHOP OFF ENDING .00000 # STORE IN VAR for checking value ######################################### if [[ "${ext1}" == 'mp3' ]] ; then #rateme="$(mp3info -r a -p "%r\n" "${FILENAME}")" #rateis="${rateme%.*}" # strip off period and everything to the right of it echo rateis="$(mp3info -r m -p "%r\n" "${FILENAME}")" echo "Bitrate for "$pref1"."$ext1" is $rateis" echo echo "LameCheckRate is "$LameCheckRate"" echo echo "flac_convert_brate is "$flac_convert_brate"" echo;echo fi echo;echo putback=${r%/*} echo "THIS IS PUT BACK DIR = "$putback"" echo;echo; echo;echo; echo;echo; echo;echo; echo;echo ############################################################## # Resampling FLAC with LAME 99.9.5 ### ## if [[ "${ext}" == 'flac' ]] ; then echo "got Dflack file "${pref}"."${ext}"" echo "converting to "${flac_convert_brate}" /kbps mp3" echo flac -cd "$FILENAME" | lame 
-h -b "${flac_convert_brate}" - "$newFile" echo;echo; # get new bitrate and spit it out to the terminal rateis="$(mp3info -r m -p "%r\n" "$script_dir"/"${newFile}")" echo "Bitrate of .. $newFile .. is .. $rateis .." echo;echo eyeD3 -A "$newAlb" "${script_dir}"/"${newFile}" echo "added "$newAlb" tag to file" eyeD3 -a "$newArt" "${script_dir}"/"${newFile}" echo "added "$newArt" tag to file" eyeD3 -t "$songTitle" "${script_dir}"/"${newFile}" echo "added "$songTitle" tag to file" if [[ ! -n "${TRACK1}" ]] ; then eyeD3 -n "$TRACK2" "${script_dir}"/"${newFile}" echo "added T2 - "$TRACK2" tag to file" else eyeD3 -n "$TRACK1" "${script_dir}"/"${newFile}" echo "added T1 - "$TRACK1" tag to file" fi eyeD3 -G "$GENRE1" "${script_dir}"/"${newFile}" echo "added "$GENRE1" tag to file" echo;echo echo "after insert info " echo;echo "after reasiging FLAC resmapling" echo echo fi ############################################################## # Resampling MP3 with LAME 99.9.5 ### #flack file resampled into a MP3 falls through here and gets moved too # if MP3 is out of limits then re-sample it if not then send it through if [[ "${ext}" == 'mp3' ]] ; then # start bitrate 128 start bitrate 160 if [[ "${rateis}" -gt "${LameCheckRate}" ]] ; then lame -V2 -b"${LameLowEnd}" -B"${LameHighEnd}" -F --vbr-new -m j -q2 "$FILENAME" "$newFile" echo echo "MOVING FILES NOW!" echo echo "$newFile" echo ## Resampled file is newFile located in script dir rm -v "${FILENAME}" echo;echo mv -v "${script_dir}"/"${newFile}" "${putback}" echo fileplace="${putback}"/"${newFile}" id3v2 -A "$newAlb" "${fileplace}" id3v2 -a "$newArt" "${fileplace}" id3v2 -t "$newTit" "${fileplace}" echo;echo "after move" exiftool "${putback}"/"${newFile}" let filesLeftToDo-- let cycleThroughFilesCount++ echo;echo "mp3's done "$cycleThroughFilesCount"" else # if MP3 is within limits then skip resmapling then just make # a copy to move it # to new directory/folder ## WORKING SCRIPT DIRECTORY ! 
echo;echo "is not needing resampling" echo "$pref1"."$ext" echo;echo "new file name is -> "${newFile}"" echo #if old file name changed the change it compareme="${putback}"/"${newFile}" if [[ "${FILENAME}" != "${compareme}" ]] ; then mv -v "${FILENAME}" "${putback}"/"${newFile}" echo;echo "after not needing resample" echo exiftool "${putback}"/"${newFile}" let filesLeftToDo-- let cycleThroughFilesCount++ echo;echo "mp3 done "$cycleThroughFilesCount"" fi echo;echo eyeD3 -A "$newAlb" "${putback}"/"${newFile}" echo "Non resampled stats" #exiftool "${script_dir}"/"${newFile}" fi fi # end first if echo "Total MP3's Files are : "$MAXMP3"" echo "Files done so far is : "$cycleThroughFilesCount"" echo "MP3's left to do are : "$filesLeftToDo"" # echo "After mp3 resampling file ->" # exiftool "${script_dir}"/"${newFile}" # I use EXIFTOOL because it works on FLAC files too for # extracting the information echo;echo; # get the size of the finished file to show differece in size of file echo "putback is -------- "$putback"" checkme=""${putback}"/"${newFile}"" FileSize2="`exiftool '-File Size' "$checkme" -p '$FileSize'`" fi fi # end checking string for done file ########################################### ## DO THE MATH ON MEGABYTES SAVED ######### ########################################### # if it cathces a KB file then it throws off the math. adding # this keeps MB rounded up to the nearest whole one MB. 
echo Hold1=$FileSize1 Hold2=$FileSize2 k1="${Hold1#* }" echo ""$k1" -- k1" if [[ "$k1" == 'kB' ]] ; then MB1=1 else MB1="${FileSize1% *}" fi k2="${Hold2#* }" echo ""$k2" -- k2" if [[ "$k2" == 'kB' ]] ; then MB2=1 else MB2="${FileSize2% *}" fi # if it cannot stat file -- file unfound - bad file - then put a # zero in place of nothing to keep the total if [[ "$FileSize1" == "" ]] ; then MB1=0 fi if [[ "$FileSize2" == "" ]] ; then MB2=0 fi echo " "$MB1" MB1 - start size" echo "- "$MB2" MB2 - ending size" # doing math on MB's totalSaveOnFile=`echo $MB1 - $MB2 | bc` echo "----------" echo " "$totalSaveOnFile" MB - regained space" echo "%%%%%%%%%%%%%%%" echo #maxSaved=$(( totalSaveOnFile + maxSaved )) maxSaved=`echo $totalSaveOnFile + $maxSaved | bc` echo echo "%%%%%%%%%%%%%%%%%%" echo;echo;echo echo "***************************************" echo;echo;echo echo "AT IF STATMENTS" echo "FILENAME is "$FILENAME"" NEWFILENAME=${FILENAME%/*} #DIRNAME=${DIRNAME#*/*/*/*/} #DIRNAME=${DIRNAME//\/*} echo "DIRNAME is "$DIRNAME"" echo "before if to do it" echo "FilesCountInSecondLoop : "$FilesCountInSecondLoop"" echo "MAXMP3 : "$MAXMP3"" if [[ "$FilesCountInSecondLoop" == "$cycleThroughFilesCount" ]] ; then echo " in if fileCount check" echo " NEWFILENAME is "${NEWFILENAME}"" echo "new file is "${newFile}"" ARTIST1="`exiftool -Artist "${NEWFILENAME}"/"${newFile}" -p '$Artist'`" NewDirName="$ARTIST1" echo "new dir name is "$NewDirName"" echo "this is MP3Count - "$MP3Count"" #var names for dir nd paths and string compair OldDirName="$DIRNAME" echo;echo "OldDirName "$OldDirName"" stringOldDir=${DIRNAME#*/*/*/*/} stringOldDir=${stringOldDir//\/*} echo;echo "stringOldDir "$stringOldDir"" stringNewDir="$NewDirName" echo;echo "stringNewDir "$stringNewDir"" oldDirPathNewName=""$working_dir"/"$NewDirName"" echo;echo "oldDirPathNewName "$oldDirPathNewName"" # if orginal dir name does not equals artist Tag name # change the dir to Artist Tag name then move it if [[ "$stringOldDir" != 
"$stringNewDir" ]] ; then echo "not = "$stringOldDir" to "$stringNewDir"" #change name of dir to artist/band name echo "mv OldDirName "$OldDirName" to "$oldDirPathNewName"" echo "Working dir "$working_dir"" #change old dir name to new dir name mv -v "$OldDirName" "$oldDirPathNewName" #then check to be sure root dir to move it to is there if [[ ! -d "$move_to" ]] ; then echo "inside if more to dir is there" mkdir -v "$move_to" #then move the new dir name to a different # place for safe keeping echo;echo "just befor move " echo "oldDirPathNewName "$oldDirPathNewName" move to "$move_to"" mv -vf "$oldDirPathNewName" "$move_to" else echo "ELSE oldDirPathNewName "$oldDirPathNewName" move to "$move_to"" #if dir already created then just move the new dir there mv -vf "$oldDirPathNewName" "$move_to" fi fi #if old dir name matches Artist Tag then insure more to dir is there then move it there if [[ "$stringOldDir" == "$stringNewDir" ]] ; then echo "Match strings "$stringOldDir" and "$stringNewDir"" if [[ ! -d "move_to" ]] ; then mkdir -v "$move_to" mv -vf "$OldDirName" "$move_to" else mv -fv "$OldDirName" "$move_to" fi fi fi done let DirCountDn++ echo "Dir Count Dn "$DirCountDn"" echo "******************************************" echo;echo;echo done #FOR DIR Names
I had it check whether the parent dir was already created in the other destination base folder; if it was, just copy into it and then delete the old dir, else move the whole thing:

## check to see if the other parent dir is there; if not, make it so
if [[ ! -d "$move_to" ]] ; then
    mkdir -v "$move_to"
fi

# if old dir does not match new dir name then change it
if [[ "$stringOldDir" != "$stringNewDir" ]] ; then
    echo "not = "$stringOldDir" to "$stringNewDir""
    # change old dir name to the artist/band name
    echo "mv OldDirName "$OldDirName" to "$oldDirPathNewName""
    mv -v "$OldDirName" "$oldDirPathNewName"
fi

# check if the other parent dir and artist dir are there; if not, handle it
if [[ ! -d "$move_to"/"$stringNewDir" ]] ; then
    echo "inside ck if move-to parent / artist dir is there"
    echo ""$move_to"/"$stringNewDir" is not there, moving "$stringNewDir""
    # then move the newly named dir to a different place for safe keeping
    echo;echo "just before move"
    echo "oldDirPathNewName "$oldDirPathNewName" move to "$move_to""
    mv -vf "$oldDirPathNewName" "$move_to"/"$stringNewDir"
else
    echo ""$move_to"/"$stringNewDir" is there, moving within it"
    # the dir already exists, so merge into it and remove the source
    moveinsideof="$oldDirPathNewName"
    cp -Rv "${moveinsideof}"/* "$move_to"/"$stringNewDir"
    rm -rv "$oldDirPathNewName"
fi
fi   # closes the surrounding if in the main script
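The key point behind the error is that mv cannot merge directories: if the destination already contains a non-empty dir of the same name, the underlying rename() fails with "Directory not empty" no matter what flags you pass, which is why the cp -R then rm merge is needed. A minimal reproduction (directory names are made up):

```shell
cd "$(mktemp -d)"

# a dir of the same name already exists (and is non-empty) at the target
mkdir -p working/Alice done/Alice
touch working/Alice/new.mp3 done/Alice/old.mp3

mv -f working/Alice done/ 2>/dev/null \
    || echo "mv failed: Directory not empty"

# workaround: merge the contents, then remove the emptied source dir
cp -R working/Alice/. done/Alice/
rm -r working/Alice
```

After this, done/Alice holds both old.mp3 and new.mp3.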
cannot move Directory not empty
1,480,706,472,000
While comparing 2 fairly big directories using diff -rq ... I want to exclude certain file types like tar.gz or error_log. How do I do that?
GNU diff has options for doing this (see the manual page):

-x, --exclude=PAT          exclude files that match PAT
-X, --exclude-from=FILE    exclude files that match any pattern in FILE

The pattern in each case is a glob (* for any number of characters):

diff -rq -x '*.tar.gz' -x '*error_log' foo bar

See for example:

How do you diff a directory for only files of a specific type?
How can I make 'diff -X' ignore specific paths and not file names?
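When the exclusion list grows, the -X form keeps the command line tidy. A small sketch (the directory and file names are made up):

```shell
# one glob pattern per line
cat > exclude.txt <<'EOF'
*.tar.gz
*error_log
EOF

diff -rq -X exclude.txt dir_a dir_b
```

Only differences in files whose basenames match none of the patterns are reported.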
Diff command with file type exceptions
1,480,706,472,000
I'm looking for a simple way to configure an external Access Point (IP address, SSID, WPA key, turn WiFi on/off, etc.) via Linux command line instead of the standard web interface that APs offer. This could be either an off-the-shelf AP that offers this feature, or a procedure to accomplish this with any standard APs. I realize the question might seem too broad; I'd like to find the simplest solution for this.
You'll (almost certainly) need to flash a custom firmware on the AP to enable this functionality. The two most common firmwares for this are OpenWRT and DD-WRT. They're very similar but have slightly different hardware compatibility lists. If you already have the AP, check whether one of them supports it. If you're looking to buy an AP, check its compatibility with one of them before you buy. I've done extensive work with DD-WRT on a Linksys WRT54GL and it works like a charm. Best wifi router ever made, IMHO.
Access Point configurable via Linux [duplicate]
1,480,706,472,000
I have a folder (e.g. /tmp) containing the following files:

1.id
2.id
3.id
4.id

and so on. Each of these files contains a single number. For example, 1.id might contain 1000 and 2.id might contain 2000. I want a one-line bash command to get the value (number) from all of these files automatically (*.id), but with the filename prepended. So the output should be:

1.id=1000
2.id=2000
Just use grep in this folder:

grep "" *.id

Output:

1.id:123
2.id:13
3.id:5
4.id:87876

BTW: I often use this in proc or sysfs filesystems:

cd /sys/class/net/eth0
grep "" *

This gives you all the information sysfs exposes about the ethernet interface eth0.
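grep uses : as its separator. If you really need the name=value form from the question, one way — assuming each file holds a single line, so there is exactly one : per output line to replace — is to pipe through sed:

```shell
# prefix each file's line with its name, then turn the first ':' into '='
grep "" *.id | sed 's/:/=/'
```

With the files from the question this prints 1.id=1000 and 2.id=2000.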
Bash Command Get Data from multiple files and append the file name
1,480,706,472,000
I am attempting to do the matasano cryptopals challenges in bash. The first step is here I found this stackexchange thread with a partial solution. printf 49276d2 | xxd -r -p | base64 which produces SSdt as wanted. I am looking to make a bash script so I can simply do hexto64 49276d2 and get the same result. I'm not sure where to start after the #!/bin/bash . I have not found a similar example which takes arguments and pipes them through other commands and then outputs a result.
In your script file named hexto64, simply write:

#!/bin/bash
printf "%s" "$1" | xxd -r -p | base64

And then you can use it as such:

hexto64 49276d2

Just so you know, $1 means the first parameter you gave after the program name: 49276d2 in our case.
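As a side note, xxd ships with vim rather than coreutils, so it isn't always installed. A variant using only printf, sed and base64 should behave the same for an even number of hex digits (this is a sketch — xxd is more forgiving about stray whitespace and odd trailing digits):

```shell
#!/bin/bash
# hexto64 without xxd: turn each hex pair into a \xHH escape,
# let printf expand the escapes into raw bytes, then base64-encode
printf "$(printf '%s' "$1" | sed 's/../\\x&/g')" | base64
```

For example, hexto64 49276d prints SSdt, the base64 encoding of the three bytes 49 27 6d ("I'm").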
I am attempting to write a bash script to convert hex to base64
1,480,706,472,000
I have a problem with my remote server hosted by my provider; I have only SSH access. The problem is that I keep getting the error "file system rootfs has reached critical status", which causes problems with several services like smtp, so I want to resize my partitions. I want to:

- Decrease the size of /home
- Increase the size of /

Is it possible to do that? If yes, how do I do it without losing my data and my CentOS installation?

root@web [~]# df -hT
Filesystem     Type       Size  Used Avail Use% Mounted on
rootfs         rootfs      20G   16G  3.4G  82% /
/dev/root      ext3        20G   16G  3.4G  82% /
devtmpfs       devtmpfs    16G  256K   16G   1% /dev
/dev/md3       ext3       1.8T  137G  1.6T   8% /home
tmpfs          tmpfs       16G     0   16G   0% /dev/shm
/dev/loop0     ext3       510M   22M  463M   5% /tmp
/dev/loop0     ext3       510M   22M  463M   5% /var/tmp

root@web [~]# findmnt
TARGET                       SOURCE     FSTYPE   OPTIONS
/                            /dev/root  ext3     rw,relatime,errors=remount-ro,u
├─/dev                       devtmpfs   devtmpfs rw,relatime,size=16419940k,nr_i
│ ├─/dev/pts                 devpts     devpts   rw,relatime,mode=600
│ └─/dev/shm                 tmpfs      tmpfs    rw,relatime
├─/proc                      proc                rw,relatime
│ └─/proc/sys/fs/binfmt_misc binfmt_m            rw,relatime
├─/sys                       sysfs               rw,nosuid,nodev,noexec,relatime
├─/home                      /dev/md3   ext3     rw,relatime,errors=continue,use
├─/tmp                       /dev/loop0 ext3     rw,nosuid,noexec,relatime,error
└─/var/tmp                   /dev/loop0 ext3     rw,nosuid,noexec,relatime,error
Given your comment on Anthon's answer, I think the actual solution to your problem may be to tighten down your OS's logrotate configuration. While it is possible to move /var/log per Anthon's answer, I wouldn't recommend it.
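Whichever route you take, it helps to first see what is actually filling the root filesystem — /var/log is a common culprit when logrotate is too lax. Something along these lines lists the biggest items (the path is just an example):

```shell
# -x: stay on this filesystem, -a: include files, -h: human-readable sizes
du -xah /var/log 2>/dev/null | sort -h | tail -n 10
```

The largest files and directories end up at the bottom of the output.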
Live resizing of an ext3 filesystem on CentOS 6.5
1,480,706,472,000
I'm taking a basic Unix course and right now we're learning the terminal. My directions are to create a directory called company and subdirectories called sales, accounting and marketing. So I did that:

mkdir company
cd company
mkdir sales
mkdir accounting
mkdir marketing

Then I created files called file1, file2 and file3 inside company:

touch file1
touch file2
touch file3

Then I need to copy files 1, 2 and 3 to the 3 subdirectories I have created, but I'm stuck on copying them to the first one:

cp company/file1 company/file2 company/file3 company/sales

The terminal prints an error: "cp: target 'company/sales' is not a directory." How is this the case when I just made a directory called sales, and when I ls inside of company it lists the sales folder?

cd company
ls
accounting  file1  file2  file3  marketing  sales
The error happens because you are already inside company when you run the command, so from there the path company/sales does not exist (cp is effectively looking for company/company/sales). If you are in the company directory, try this:

cp file1 sales
cp file2 sales

or

cp file1 sales; cp file2 sales

or

cp file1 file2 file3 sales

The last is the easiest and copies all the files to a single subdirectory in one line. If you want to complete the task and copy each file to each subdirectory in one line, merge the second and third examples like this:

cp file1 file2 file3 sales; cp file1 file2 file3 accounting; cp file1 file2 file3 marketing

One last example is this:

for d in */; do cp file* "$d"; done

The file* glob matches everything in the current directory whose name starts with "file", and the loop copies those files into every first-level subdirectory.
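The whole exercise, end to end, can be pasted into an empty scratch directory to see it work:

```shell
mkdir company
cd company
mkdir sales accounting marketing
touch file1 file2 file3

# copy every "file*" into each first-level subdirectory
for d in */; do cp file* "$d"; done

ls sales    # file1  file2  file3
```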
No such directory after creating it
1,480,706,472,000
I have directories A and B. Each of them contains further directories with an item.json inside. item.json is the only file name that is consistent across the directories, so I can't simply copy-paste the directories themselves.

A:
./path/Item A/item.json
./path/Item B/item.json
...
./path/Item Z/item.json

B:
./new/Item A/item.json
./new/Item B/item.json
...
./new/Item Z/item.json

How should I copy all item.json files from ./path/ to the matching folders in ./new?

My solution — to get the directories:

ls -l ./path | grep "^d" | cut -d' ' -f 16

Then I can use the results like this:

for i in `ls -l ./path | grep "^d" | cut -d' ' -f 16`; do echo "Dir: $i"; done

So I can do cp with them:

for i in `ls -l ./path | grep "^d" | cut -d' ' -f 16`; do cp "$i/item.json" "../new/$i/item.json"; done

This solution works, but I believe there's a much more elegant way.
Your solution is fine! If you'd like an alternative, my suggestion is plain cp. I presume you want to copy just the item.json files and may have other contents that should not be copied:

cd path; cp --parents */item.json ../new
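A quick sanity check of that approach in a scratch directory (--parents is a GNU cp option; it recreates the "Item X/" path component under the destination):

```shell
mkdir -p "path/Item A" "path/Item B" "new/Item A" "new/Item B"
echo '{}' > "path/Item A/item.json"
echo '{}' > "path/Item B/item.json"

cd path
cp --parents */item.json ../new
# ../new/Item A/item.json and ../new/Item B/item.json now exist
```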
Replace bunch of files maintaining path
1,480,706,472,000
I use the following sed command to display names of files with a specific form:

ls -1 *|sed 's/^\(.*\).png/\1\/\1,/g'

If I have two files named BOH_Contour.png and BOV_Web.png, I obtain:

BOH_Contour/BOH_Contour,
BOV_Web/BOV_Web,

Now I want to remove all _ in the second part of this result and obtain:

BOH_Contour/BOHContour,
BOV_Web/BOVWeb,

How can I do this?
That's typically where you'd use the hold space:

ls | sed '
  /\.png$/!d;  # discard everything but lines ending in .png
  s///;        # remove that .png
  h;           # store on the hold space
  s/_//g;      # remove underscores
  H;           # append (with a newline) to the hold space
  g;           # retrieve that hold space
  s|\n|/|;     # substitute the newline with a /
  s/$/,/;      # add that extra comma.'
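The same script with the comments stripped, checked against the sample names (the empty s/// pattern reuses the last regex tried, i.e. \.png$):

```shell
printf 'BOH_Contour.png\nBOV_Web.png\n' |
    sed '/\.png$/!d;s///;h;s/_//g;H;g;s|\n|/|;s/$/,/'
# BOH_Contour/BOHContour,
# BOV_Web/BOVWeb,
```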
Two operations with sed on the same pattern
1,480,706,472,000
I have a task to copy all files from multiple directories with special names to a target directory. So I built this directory tree to test my command:

.
├── dir1
│   └── file1
└── test

My intended command to mv all files from dir1 to test is:

find . -type d -name "*dir*" -exec mv {}/* test \;

Then I got:

mv: rename ./dir1/* to test/*: No such file or directory

I guess this is because in that -exec expression, the command didn't treat the * as a wildcard. So I did:

find . -type d -name "*dir*" -exec mv {}/file1 test \;

which successfully moved file1 to test. But the point is, I need to know the expression for all files so that I can accomplish this file transfer. How should I express that in the find -exec command?
mv "$dir_path"/* ... will not only move files but everything in "$dir_path" — at least everything whose name does not start with a dot (hidden files). In bash you can change this with the dotglob option. If the * expands nicely (matches everything, but not too much for a command line), then you can use a shell for the indirection:

find . -type d -name "*dir*" -exec bash -c 'mv "$0"/* /path/to/test' {} \;
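To avoid spawning one shell per directory, the same idea works with a small loop and -exec … {} +, which hands many directory names to a single sh invocation (the target path here is illustrative; note the glob still fails for a completely empty directory):

```shell
find . -type d -name "*dir*" -exec sh -c '
    for d in "$@"; do
        mv "$d"/* /path/to/test/
    done' sh {} +
```

The trailing sh fills $0 so that the matched directories land in "$@".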
How to express all files/directories at the extra -exec option of `find` command?
1,480,706,472,000
I want to remove the .txt and .csv files in a single line. What I have in my directory:

tachomi$ ls
file1.csv file1.sql file1.txt file2.csv file2.sql file2.txt

I only want the .sql files, so I want to know if there's a way to execute commands using logical operators such as AND or OR in a single line:

tachomi$ rm *.txt AND *.csv
tachomi$ rm *.txt OR *.csv

How do I remove all files that match either of two given patterns?
Simply:

rm *.txt *.csv

And if your shell supports brace expansion, you can:

rm *.{txt,csv}
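A quick check in a scratch directory — and a habit worth having: echo the globs first to preview exactly what would be deleted:

```shell
touch file1.csv file1.sql file1.txt file2.csv file2.sql file2.txt

echo *.txt *.csv   # preview: file1.txt file2.txt file1.csv file2.csv
rm *.txt *.csv

ls                 # file1.sql  file2.sql
```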
can I use logical operators to remove all files that matches with one and/or other pattern in a single line?
1,480,706,472,000
I often use a trailing backslash (\) to split a command and its parameters across several lines and make them more readable:

/home/user> ls -ltra \
> file1.txt \
> file2.txt

Recently I used an instruction with a similar format in my terminal window. Since I was going to use the same files in various instructions, I decided to highlight/copy the whole command with my mouse and paste it into an open editor. Unfortunately, I only highlighted the two file lines with my mouse and pasted them by mistake into the same terminal window, like this:

/home/user> > /home/user/file1.txt
> > /home/user/file2.txt

The system thought I was overwriting the files. The data was lost. Fortunately, there was a backup!

Now my question: can a terminal session be re-configured so it uses a symbol other than the > sign at the start of a split command? Something which won't have such horrendous consequences. Example:

/home/user> ls -ltra \
# file1.txt \
# file2.txt

UPDATE: I am using the Korn shell (/usr/bin/ksh) on a Solaris server. Korn is the company's default shell.
If you're using an sh-compatible shell (like bash or ksh), that > prompt is called the "secondary prompt". It's set by the value of the PS2 variable, just like PS1 sets the normal prompt. You should be able to change it to # pretty easily:

PS2='# '

You might want to put that into your ~/.bashrc (or whatever the equivalent is for whatever shell you're using) to make it permanent.
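Since the update mentions ksh on Solaris: ksh honours PS2 as well. A sketch of the persistent version (the filename is an assumption — interactive ksh reads the file named by the ENV variable, commonly ~/.kshrc, so check what ENV points at in your environment):

```shell
# ~/.kshrc (or whichever file $ENV names)
PS2='# '
```

A nice side effect of choosing '# ': if continuation lines are ever pasted at the primary prompt again, they start with # and are treated as comments instead of redirections.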
Problem when splitting command with backslash in unix prompt
1,480,706,472,000
I want to run the following command in the command line:

$ md5sum $(find . -type f)

But this causes problems when it encounters files with spaces in their names:

md5sum: Kaufmann: No such file or directory
md5sum: Mobile: No such file or directory
md5sum: 3D: No such file or directory
md5sum: Graphics: No such file or directory
Do it this way instead:

find . -type f -exec md5sum {} \;

This way spaces in the matched filenames are handled correctly.
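If your find supports it (POSIX does, GNU find included), terminating with + instead of \; passes many filenames to a single md5sum invocation — still safe with spaces, and faster on large trees because far fewer processes are spawned:

```shell
find . -type f -exec md5sum {} +
```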
How to escape spaces while using command's output in command line [duplicate]
1,480,706,472,000
How can I get the speed of my internet connection from the terminal? Is there any script or command-line application available? I'm using CentOS 6.5.
You can do it with speedtest-cli. Open your terminal and run:

wget -O speedtest-cli https://raw.github.com/sivel/speedtest-cli/master/speedtest_cli.py
chmod +x speedtest-cli
./speedtest-cli

For example:

[raja@localhost ~]$ ./speedtest-cli
Retrieving speedtest.net configuration...
Retrieving speedtest.net server list...
Testing from BSNL (XXX.XXX.XXX.XXX)...
Selecting best server based on ping...
Hosted by BEAM TELECOM (Hyderabad) [19.00 km]: 39.975 ms
Testing download speed........................................
Download: 1.16 Mbit/s
Testing upload speed..................................................
Upload: 0.38 Mbit/s
[raja@localhost ~]$

Thanks to LinOxide.
check internet speed from terminal? [duplicate]
1,480,706,472,000
I have a directory with daily backups of my entire MySQL database. I want to import the most recent backup into the database. I know that to import a backup I need to use:

mysql -u root -ppasswordhere < backup.sql

I've managed to get the most recent file with:

ls -Art | grep '.sql' | tail -n 1

but I don't really know how to feed its output to the mysql -u root -ppasswordhere command. I am sure the answer is really obvious; I'm just confused.
Have you tried this?

mysql -u root -ppasswordhere < $(ls -Art | grep '.sql' | tail -n 1)
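Two small hardening tweaks, in case they matter here: grep '.sql' treats the dot as "any character" (so it would also match, say, mysqldump.log), and an unquoted $(…) splits on spaces in the filename. A variant that sidesteps both by globbing and quoting:

```shell
# pick the newest *.sql by modification time; the quotes survive odd names
latest=$(ls -Art -- *.sql | tail -n 1)
echo "Most recent backup: $latest"
# then: mysql -u root -ppasswordhere < "$latest"
```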
Mysql import the most recent file from a directory
1,480,706,472,000
I have a file in the following format:

.
.
Name:abc
Occupation:def
.
.
Name:xyz
Occupation:ghi
.
.

I want to extract the Name and Occupation fields and save them in another file, out.txt, using vim, in the following format:

Name:abc
Occupation:def
Name:xyz
Occupation:ghi

EDIT: Occupation field position in input file updated
ggyG:e out.txt<cr>p:v/Name\|Occupation/d<cr>:w<cr> Explanation gg # Go to beginning of file y # yank (copy) G # to end of file :e out.txt # Open a new file called out.txt p # paste what you just copied :v/Name\|Occupation/d # Delete all lines that don't contain Name or Occupation :w # save
Extracting fields from a file and storing them in a different file in vim
1,480,706,472,000
I am confused about the -exec action. E.g. in this case: find . -type f -name "*.c" -exec cat {} \;>all_c_files.txt It seems that I get cat file1 file2 file3 ... fileN While in this case: find . -type f -name "*.txt" -exec cp {} OLD \; I get: `cp file1 OLD` `cp file2 OLD` `cp file3 OLD` ... `cp fileN OLD` Similar case e.g. in: find . -type f -name "*.txt" -exec printf "Text file: %s\n" {} \; Seems to do: printf "Text file: file1" printf "Text file: file2" printf "Text file: file3" ... printf "Text file: fileN" So how does exec work? I mean am I right that it behaves differently in these examples?
I suppose that you are confused by the cat command (and shell redirection), not by the find one. find . -type f -name "*.c" -exec cat {} \; > all_c_files.txt is equivalent to: ( cat file1 ; cat file2 ; cat file3 ; ... cat fileN ) > all_c_files.txt obviously the previous command and the following one have the same identical result: cat file1 file2 file3 ... fileN > all_c_files.txt
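A related point worth knowing: with -exec ... {} + (plus sign instead of \;), find itself batches the matched filenames into as few command invocations as possible, so the per-file behaviour described above really does become one command with many arguments. A small demonstration using echo so the invocations are visible:

```shell
# \; runs the command once per matched file; + batches as many
# filenames as fit into each invocation
dir=$(mktemp -d) && cd "$dir"
touch one.c two.c

find . -type f -name '*.c' -exec echo run: {} \;   # one echo per file (2 lines)
find . -type f -name '*.c' -exec echo run: {} +    # one echo for all files (1 line)
```

For commands like cat or cp -t DIR, the + form is also considerably faster on large trees since far fewer processes are spawned.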
How does -exec actually work
1,480,706,472,000
I have a lot of scanned images and images/photos I'm working on with multiple versions. When I print them (using various programs), all I get is the image and, often, it's difficult to tell which image came from which file. This is particularly frustrating with my (27K) photos when I have a print and want another copy and can't find it. (The photos are in KPhotoAlbum, so I can find the minority that I have actually tagged correctly.) What I would like is a utility that would print a bunch of images (e.g. doit *.jpg) and include an automatic (program generated) caption (hopefully configurable) with something like the full path of the file in it. gnome-photo-printer would be perfect if it had an option to print the full paths of the images with them. I need this while projects are in progress and for cleaning up afterwards. This is not for "final" images. It would be cool (and economical) if I could also specify the print size of the image because, often, smaller "thumbnail" images may be enough for organizing/cleaning up and they would save a bunch of time, paper, and ink/toner. I know I could manually create a document with an embedded picture in something like LO writer, but that would be totally manual (at least with my level of expertise) and thus very slow. It would be particularly nice to have the caption "outside" the picture so it would not interfere with the content and so I could control the background and font colors for readability. I figured out (in principle) how to build something like this in bash using convert a couple of times along with composite (both from ImageMagick), but it's fairly convoluted and I'm hoping for something simpler.
You might look at feh (http://feh.finalrewind.org/), which has a --caption-path option: --caption-path PATH Path to directory containing image captions. This turns on caption viewing, and if captions are found in PATH, which is relative to the directory of each image, they are overlayed on the displayed image. e.g with caption path "captions", and viewing image images/foo.jpg, caption will be looked for as "images/cap- tions/foo.jpg.txt" You can even edit the captions while viewing the image: c, C Caption entry mode. If --caption-path has been specified, then this enables caption editing. The caption will turn yellow and be editable, hit enter to confirm and save the caption, or hit escape to cancel and revert the caption. Unfortunately, you can't choose the color, position, or size of the caption, so it's of limited use. EDIT: my comments were on feh 1.3.4. The latest version has additional options for captions. Adding reference to my own question: xv-like image viewer that lets me annotate/mark images?
Automatically printing images with added captions
1,480,706,472,000
When I reboot my machine (as I did today) I seem to lose some functionality, specifically my previous ssh keys that I had copied over to other machines that had allowed me to login without a password seem to have stopped functioning. I've tried replacing the key by generating a new key, destroying the old keys on both my current machine and in the ~/.ssh/authorized_keys on the remote but no luck. I also tried to copy the key back over using the command below but it still doesn't seem to function. ssh-copy-id <myusername>@<remoteserver> Any ideas would be helpful.
If you have encrypted your private key (by supplying a passphrase when you created it), then you have to decrypt it before you can log into remote systems. It is possible you were using an ssh agent on the local system to store the unencrypted key. When you rebooted, the key would have been flushed from the agent's memory. If that is the case, you will need to re-add the key to the agent (using something like ssh-add), and then you should be able to login without supplying a password or passphrase, assuming the public key is in place, permissions and ownership correct, etc. Whether or not ssh-agent is running depends on your environment. I belive most desktop environments these days run the desktop session under ssh-agent, so ssh-add will "just work."
Rebooting makes me lose my password-less keys on other machines
1,480,706,472,000
This question is somewhat similar to this: Unix/Linux command syntax Suppose I have a program foo that takes arguments -a and -b. If both a and b take a string argument what is the meaning of this foo -b -a bar If multiple b:s are allowed foo -b -a -b ?? Is there a true specification of the command line syntax somewhere?
Unless you can find something that says option arguments can't start with a minus sign, the only possible interpretation is that -a is the option-argument of -b (i.e. -b takes the value -a), and bar is an operand. See also: POSIX Utility Conventions.
Standard command line syntax ambiguity in interpretation rules?
1,480,706,472,000
Is there an equivalent to the NET.exe suite for linux systems with which I can do net view queries for example?
Samba ships a net executable itself. From the man page: The Samba net utility is meant to work just like the net utility available for windows and DOS.
Program to manipulate Samba shares (net.exe equivalent)
1,480,706,472,000
I have a list of scripts ./myscript <param> | grep "asd" ./myotherscript <param> <param> > file ... How can I automatically run another script when one of these commands in the list is executed and finished?
There are multiple ways to automatically execute something after a specific command: function Create a function named after your specific command and execute the specific command afterwards. This is in my opinion the simplest and cleanest solution. function myotherscript() { command myotherscript "$@" other_command_to_be_executed } zsh With zsh you can create a precmd function which will be executed before the prompt. This allows you to execute arbitrary other commands but you'll have to determine the executed command yourself. function precmd() { last_cmd=$(history -1 | sed -e "s/^[ ]*[0-9]*[ ]*//g") case "$last_cmd" in *myscript*) other_command;; *myotherscript*) other_other_command;; esac } bash Bash has the PROMPT_COMMAND variable allowing you to implement your own precmd function. To use it you have to set it to a command which will be executed before the prompt: PROMPT_COMMAND="${PROMPT_COMMAND};bash_precmd" function bash_precmd() { last_command=$(history 1 | sed -e "s/^[ ]*[0-9]*[ ]*//g") case "$last_command" in *pattern*) command ;; esac } If you are using bash and want to use something like preexec or precmd have a look at precmd and preexec with bash
Run a script after some command were executed
1,480,706,472,000
When I run: ssh [email protected] bash -c "/home/devops_staging/deployJob.sh example" I encounter the following error: /home/devops_staging/deployJob.sh: line 4: $1: unbound variable If I run it without the bash -c part, it works as expected. ssh [email protected] /home/devops_staging/deployJob.sh example deploy success Why does this happen? This is quite unexpected as I seem to recall always using this syntax of ssh ... bash -c "commands param1 param2" without any issue. The script in question is super simple; all I'm doing at line 4 is assigning a variable from $1 (which should be the first parameter): #!/usr/bin/env bash set -euo pipefail CI_PROJECT_NAME="$1" ... Debugging with the bash -x -c ... I see the following suspicious lines: + '[' -z '' ']' + return + case $- in + return + /home/devops_staging/deployJob.sh /home/devops_staging/deployJob.sh: line 4: $1: unbound variable
I think it's a duplicate of the following question: ssh command with quotes. It had been noted, but the author here stated "after reading that I'm still not sure why it works like this", so this answer tries to explain the issue specifically in the context of the code used in the current question. The most important information from this good answer to the linked question is: SSH executes the remote command in a shell. It passes a string to the remote shell, not a list of arguments. The arguments that you pass to the ssh command are concatenated with spaces in between. If you locally run ssh [email protected] /home/devops_staging/deployJob.sh example then the arguments ssh recognizes as code to be passed to the remote side will be: /home/devops_staging/deployJob.sh, example. The string from concatenation of the arguments will be /home/devops_staging/deployJob.sh example and this will be the shell code to run on the remote side. It so happens it's the string you want. But if you locally run ssh [email protected] bash -c "/home/devops_staging/deployJob.sh example" then the arguments will be: bash, -c, /home/devops_staging/deployJob.sh example and the string for the remote shell will be bash -c /home/devops_staging/deployJob.sh example (as if the arguments were: bash, -c, /home/devops_staging/deployJob.sh, example) and this is not the shell code you want to run on the remote side. Here example will not belong to the option-argument to -c (it will be like the second sh in this other question). If you want exactly this string as remote code: bash -c "/home/devops_staging/deployJob.sh example" then the easiest method is to pass the string to a local ssh as a single argument: ssh [email protected] 'bash -c "/home/devops_staging/deployJob.sh example"' The single-quoted argument contains all the shell code you want to pass to a shell started by the SSH server on the remote side.
Note you could even do this locally: ssh [email protected] 'bash -c "/home/devops_staging/deployJob.sh' 'example"' where the double-quotes (for the remote shell) belong to two separate local arguments; the resulting string for the remote shell will be the same. Note how many tools interpret and digest the command until the code in your (remote) deployJob.sh runs: The local shell does word splitting, quote removal (and in general a few other things). As a result, ssh may get one or more arguments it interprets as code to be passed to the remote side. ssh concatenates these arguments with spaces in between, so a single string is passed to a remote shell. The remote shell performs word splitting, quote removal (and in general a few other things) on its own. It runs some command(s). If the command is bash -c … then yet another (remote) shell will parse the code being the option argument to -c. Word splitting, quote removal and other things will be performed by this shell as well. And if deployJob.sh contains a sane shebang (or none) then there will be yet another (remote) shell that will in turn interpret the file. In general you need to predict and mastermind what tool gets what arguments after all the previous tools digested their arguments; and what arguments it will pass to the next tool. You need to design your local command, so the ultimate tool gets exactly what you want it to get.
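The concatenation step can be demonstrated without any remote machine. A small sketch (the function name join_like_ssh is made up for illustration) that joins its arguments with single spaces the way ssh does before handing them to the remote shell:

```shell
# join_like_ssh: print the single string a remote shell would receive,
# i.e. the local arguments joined with spaces ("$*" joins on the first
# character of IFS, a space by default)
join_like_ssh() { printf '%s\n' "$*"; }

join_like_ssh bash -c "/home/devops_staging/deployJob.sh example"
# -> bash -c /home/devops_staging/deployJob.sh example
#    (the double quotes were consumed by the *local* shell)

join_like_ssh 'bash -c "/home/devops_staging/deployJob.sh example"'
# -> bash -c "/home/devops_staging/deployJob.sh example"
#    (single-quoting preserved the inner quotes for the remote shell)
```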
Bash script errors with "unbound variable" only when invoked via SSH [duplicate]
1,480,706,472,000
I have the following directory names: /aaa /bbb /ccc /ddd And I want to run the following command passing in the directory names with just ls: ls | composer show drupal/MY_DIRECTORY_NAME_HERE --available | ack versions How can I create a one line command to pass in the directory name into this composer command in a loop?
Since you obviously want to apply the command to all of the existing sub-directories, the cleanest way to do so (avoids the issue of directory names with special characters) would be for dir in */ do composer show drupal/"$dir" --available | ack versions done This will iterate over all non-hidden directories and symlinks to directories (due to the trailing / on the glob pattern) and execute the command on the current directory. Note that this assumes the command accepts directory paths with trailing /. If not, a little shell string processing to strip that / will help: for dir in */ do composer show drupal/"${dir%/}" --available | ack versions done Additional notes: Of course, writing it as one-liner is also possible: for dir in */; do composer show drupal/"$dir" --available | ack versions; done You can adapt the command to iterate over an explicit list of directory names, as in for dir in '/aaa' '/bbb' '/ccc' do ... done The quotes around the individual list items are necessary if your actual directory names contain special characters (but note that there are cases when that alone won't suffice, so the glob-based approach shown above is still the safest).
Passing directory names into a bash command individually [duplicate]
1,480,706,472,000
I've been going down the i3wm road of pain and can't for the life of me understand how to change the output device with cli commands. Setup: Using i3-gaps (Base Distro is Garuda Linux) pipewire is the audio provider When using pavucontrol I can switch between my Headphones and Speakers as the output port but can't seem to figure what is changing in the background with pactl, wpctl, aplay I have headphones connected to my front aux panel and speakers connected to the rear aux panel. Any help would be appreciated :) Update: Found a solution and posted it in the comments
Found a solution and wrote a short script for it if [[ $(pactl list | grep "Active Port: analog-output") == *"headphones"* ]]; then pactl set-sink-port 0 analog-output-lineout else pactl set-sink-port 0 analog-output-headphones fi Also added this to my i3config: bindsym F6 exec --no-startup-id sh ~/path/to/script/switch_output.sh
Having trouble using cli to switch between playback devices with pipewire
1,658,510,518,000
I have a text file with however many lines. One line is: fixed_stringA = 123, fixed_stringB = 456 I have a second text file (again with however many lines). These 2 lines are in this order, in middle of the file: found_value1=unknown1 found_value2=unknown2 My goal is to modify these two lines second text file so that the it will now read: found_value1=123 found_value2=456 My goal is to do this on the command line without calling another script. The following code works, but I wonder if it can be shortened or improved: value_variable=$(grep "fixed_stringA = [0-9]*, fixed_stringB = [0-9]*" first_file \ | sed -E 's/fixed_stringA = ([0-9]+), fixed_stringB = ([0-9]+)/\1,\2/'); \ value1=$(echo $value_variable | cut -d "," -f 1); \ value2=$(echo $value_variable | cut -d "," -f 2); \ sed -i -E "s/unknown1/$value1/" second_file.txt; \ sed -i -E "s/unknown2/$value2/" second_file.txt
I can get it down to these two lines: values="$(sed -n -E 's/^.*fixed_stringA\s*=\s*([0-9]+)\s*,\s*fixed_stringB\s*=\s*([0-9]+).*$/\1,\2/p' first_file)" sed -i -E -e "s/unknown1/${values%,*}/" -e "s/unknown2/${values#*,}/" second_file.txt The first line uses sed with the -n flag so it suppresses regular output, and it finds the target line, extracts the numeric values into the capture groups \1 and \2 and turns them into 123,456 and prints that alone, saving that to the variable $values. The next line does two substitutions in second_file.txt: everything before the comma in $values for unknown1 and everything after the comma for unknown2. But I'm sure some awk expert will be along any minute with a one-liner.
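For the record, the teased awk variant of the first line is possible too. A sketch assuming the same fixed line layout as in the question: splitting on runs of spaces, = and , makes the two numbers land in fields 2 and 4 (the sample file is created here just so the example runs standalone):

```shell
# sample input in the same layout as the question
printf 'fixed_stringA = 123, fixed_stringB = 456\n' > first_file

# split on runs of spaces, '=' and ',' so the two numbers end up
# in fields 2 and 4, then print them comma-separated
values=$(awk -F'[ =,]+' '/fixed_stringA/ && /fixed_stringB/ { print $2 "," $4 }' first_file)
echo "$values"    # -> 123,456

# then reuse the answer's parameter expansions:
# sed -i -E -e "s/unknown1/${values%,*}/" -e "s/unknown2/${values#*,}/" second_file.txt
```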
substitute 2 substrings from a single line in a file into 2 different lines in a different file from command line
1,658,510,518,000
Why does this sed command: sed /^a/,/i$/p print all lines and not just the lines that begin with "a" and end with "i"? It is not too clear to me how sed /BEGIN/,/END/p works. STDIN: arthuri John Johnny Michael STDOUT: arthuri arthuri John John Johnny Johnny Michael Michael
sed by defaults prints the pattern space at the end of each cycle as long as the d command has not been invoked. That can be disabled with the -n option, or by making the first line of the sed script: #n. So, here, in addition to that default printing, you're telling sed to also print the pattern space starting with the lines that starts with a and ending with the first line after that that ends with i. The first line of your input (arthuri) starts with a, but there's no line after than that ends with i, so the /^a/,/i$/ range is never terminated and all lines are printed (twice because of the default printing at the end). sed ranges will always include at least 2 lines (except in the special case where the start is on the last line of the input). To allow the start and end lines to be the same (like arthuri which happens to both start with a and end in i), you can use awk instead: awk '/^a/, /i$/' Or perl: perl -ne 'print if /^a/ .. /i$/' If you wanted to print the lines that start with a and end in i, you wouldn't use an address range, but just: sed '/^a/!d; /i$/!d' Or: sed '/^a.*i$/!d' Or: sed -n '/^a.*i$/p' Though here, you might as well use grep: grep -x 'a.*i' Note that ^, * and $ are special characters in the syntax of many shells. Generally, you want to quote sed code arguments.
Why does this sed command print all the lines?
1,658,510,518,000
The man page for shutdown (shutdown(8) — Linux manual page) says: ... The first argument may be a time string (which is usually "now"). Optionally, this may be followed by a wall message to be sent to all logged-in users before going down. ... Note that to specify a wall message you must specify a time argument, too. ... This post is about that wall message broadcast to all logged-in users. Using the pattern: shutdown <time> ["something to share"] where <time> is anywhere in the +1 to +15 range, e.g.: shutdown +1 ["something to share"] ... shutdown +5 ["something to share"] ... shutdown +15 ["something to share"] I can confirm that the other logged-in users receive the broadcast message in their tty (with or without the custom message, as the case may be). So far, as the documentation indicates, the message is broadcast, and there would be no reason to create this post. Situation: I realised that if <time> is equal to or greater than 16 minutes, given either as +16 or as hh:mm, the message is not broadcast to the other logged-in users' ttys. For example, if the date command returns 00:10, then shutdown 00:26 shows this behaviour, while shutdown 00:15 works as expected. Question: why is the wall message not broadcast to all logged-in users when the time is 16 minutes or more? Is this normal, or is it a bug? According to both man shutdown and shutdown --help, the wall message should always be broadcast to all logged-in users, regardless of the "from now" range (+15, +20, etc.). Note: I see this on many virtual machines, on different hosts, all running Ubuntu Server 20.04.
The information is buried in systemd's sources in src/login/logind-utmp.c: Warn immediately if less than 15 minutes are left (when the command is run): /* Warn immediately if less than 15 minutes are left */ if (elapse - n < 15 * USEC_PER_MINUTE) { r = warn_wall(m, n); if (r == 0) return 0; } As for when warnings are sent at other times, it's a fixed list of possible remaining minutes before action: static const int wall_timers[] = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 25, 40, 55, 70, 100, 130, 150, 180, }; So if you run for example (-k for fake): shutdown -k +41 test; sleep 62; shutdown -c You'll see a broadcast shutdown message 1 minute after and it will be cancelled 2 more seconds after. Using -k +40 won't send a message immediately though. The systemd variant of shutdown(8) only says: Optionally, this may be followed by a wall message to be sent to all logged-in users before going down. It doesn't document that such message will be sent when the command is run or repeated before shutdown (even if it is), only that it will be sent before going down. By comparison the sysvinit shutdown was documented differently and hopefully the implementation followed the documentation: -q Reduce the number of warnings shutdown displays. Usually shutdown displays warnings every 15 minutes and then every minute in the last 10 minutes of the countdown until time is reached. When -q is specified shutdown only warns at 60 minute intervals, at the 10 minute mark, at the 5 minute mark, and when the shutdown process actually happens.
shutdown command: why the "wall message" is not broadcast to all logged users if the time is equals or greater than 16 minutes?
1,658,510,518,000
I have a following list of files; 11F.fastq.gz 11R.fastq.gz 12F.fastq.gz 12R.fastq.gz I'd like to rename these file names to the following; 11_S11_L001_R1_001.fastq.gz 11_S11_L001_R2_001.fastq.gz 12_S12_L001_R1_001.fastq.gz 12_S12_L001_R2_001.fastq.gz I tried reanme as follows rename 's/F.fastq/_S_L001_R1_001.fastq/' *.gz rename 's/R.fastq/_S_L001_R2_001.fastq./' *.gz but I dont quite know how to add file numbers (11 and 12 in this case) after "S". Any pointers will be appreciated.
With Perl's rename commandline, you can capture the variable part ((...)) and use back references ($1): rename -n 's/(\d+)F\.fastq/$1_S$1_L001_R1_001.fastq/' *F.fastq.gz rename -n 's/(\d+)R\.fastq/$1_S$1_L001_R2_001.fastq/' *R.fastq.gz Remove the -n if you're happy with the output.
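If the Perl rename command isn't available, the same renaming can be sketched with a plain shell loop and parameter expansion. Shown here for the F/R1 case only; swap F for R and R1 for R2 to handle the reverse reads:

```shell
# strip the trailing "F.fastq.gz" to recover the sample number,
# then rebuild the target name around it
for f in *F.fastq.gz; do
    [ -e "$f" ] || continue              # skip if the glob matched nothing
    n=${f%F.fastq.gz}                    # 11F.fastq.gz -> 11
    mv -- "$f" "${n}_S${n}_L001_R1_001.fastq.gz"
done
```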
Batch renaming file names including a part of old file name
1,658,510,518,000
I'm looking to substitute text inside a file after a certain pattern. For example: The content of example.txt is Something==x.y.z I'd like to change it to Something>=x.y.z,<x.y.z+1.0 I know I can use sed -i 's/==/>=/g' example.txt to change the == but I do not know add <x.y.z+1.0 after a certain pattern. (Please note that x.y.z is a random number) EDIT: it is for Python packages. Examples argcomplete==1.12.3 youtube-dl==2021.6.6 systemd-python==234 would become argcomplete>=1.12.3,<1.12.3+1.0 youtube-dl>=2021.6.6,<2021.6.6+1.0 systemd-python>=234,<234+1.0
The following sed command assumes that there is exactly one == in the line and extracts the parts before and after it as groups 1 and 2 which can be used in the substitution. sed 's/\(.*\)==\(.*\)/\1>=\2,<\2+1.0/' With the input Something==x.y.z argcomplete==1.12.3 youtube-dl==2021.6.6 systemd-python==234 the output is Something>=x.y.z,<x.y.z+1.0 argcomplete>=1.12.3,<1.12.3+1.0 youtube-dl>=2021.6.6,<2021.6.6+1.0 systemd-python>=234,<234+1.0 To edit the file in-place, add option -i and the name of the input file: sed -i 's/\(.*\)==\(.*\)/\1>=\2,<\2+1.0/' example.txt Explanation: Pattern: . = any character * = the preceding pattern repeated from 0 to any number --> .* = any number of any characters \(...\) = capture the text matching the enclosed pattern == = literal text \(.*\)==\(.*\) = any text captured as group 1 followed by == and any text captured as group 2 Replacement: \1, \2 = text from capture group 1 or 2 other parts are literal text here \1>=\2,<\2+1.0 = group 1 >= group2 ,< group2 +1.0 As mentioned in they's comment, the first pattern before the literal == can be omitted, resulting in sed 's/==\(.*\)/>=\1,<\1+1.0/' The explanation is similar with the difference that sed will only modify the matching part of the line. So the part before == will be preserved, and only one capture group for the part after the == is necessary. A difference in the behavior of the two patterns is that .*==... will match the last == while ==... will match the first one because the .* part matches the longest possible text that is followed by ==.
Using sed to substitute text and add text after certain pattern
1,658,510,518,000
When I write multi-line statements in interactive mode in Zsh, it will prefix my statements with the block type I'm in like so: % for i in $(seq 3); do for> echo $i for> done 1 2 3 % function foo() { function> echo bar function> } % foo bar I prefer not to see for> and function> and other code block prefixes. I'm not even sure what these prefixes are called to adequately search for how to suppress them. Bash does this too with just the > character, but I've not had luck figuring it out from that route. Is there a way to disable these in Zsh? --EDIT-- Turns out the default $PS2 value in Zsh is PS2=%_>, for anyone that comes across this via a search engine someday. From the docs: %_ The status of the parser, i.e. the shell constructs (like ‘if’ and ‘for’) that have been started on the command line. If given an integer number that many strings will be printed; zero or negative or no integer means print as many as there are. This is most useful in prompts PS2 for continuation lines and PS4 for debugging with the XTRACE option; in the latter case it will also work non-interactively. Based on the accepted answer, I wound up with my PS2 set like this, which adds a 2 space indent to each block, and accounts for an initial 2 spaces to align with my PS1 length: PS2='${${${(%):-%_}//[^ ]}// / } '
This is the secondary prompt, configured through the variable PS2, in all Bourne-style shells including zsh. In zsh, it defaults to showing which shell constructs (loops, quotes, etc.) are open, using the %_ prompt escape. In bash, it defaults to > and you can use escape sequences but they aren't very useful. If you don't want any secondary prompt, make it empty: PS2= With the prompt_subst option turned on, you can make it have one space per level of nesting, which gives some visual feedback but makes it possible to copy the code from the terminal. setopt prompt_subst PS2='${${(%):-%_}//[^ ]} '
Zsh: how do I remove block prefixes when writing multi-line statements in interactive mode?
1,658,510,518,000
I am trying to build Cube2 Sauerbraten, but I need the OpenGL and SDL2 libraries to run the makefile. (I am using ubuntu here) I tried running sudo apt-get install --yes software-properties-common g++ make then sudo apt-get install --yes libsdl2-dev then sudo apt-get install --yes freeglut3-dev and lastly, to compile, g++ main.cpp -I /usr/include/SDL2/ -lSDL2 -lGL. I got these commands from https://gist.github.com/dirkk0/cad259e6a3965abb4178. When I run them, the first three commands work fine, but the last one did not work, giving me this error. optiplex780@super-OptiPlex-780:~$ g++ main.cpp -I /usr/include/SDL2/ -lSDL2 -lGL cc1plus: fatal error: main.cpp: No such file or directory compilation terminated. optiplex780@super-OptiPlex-780:~$ Should I replace main.cpp with the makefile? Am I just a dunce, or is there a problem here? After installing the packages, I tried going to the ~/sauerbraten/src directory, and running make install. I got these errors. optiplex780@super-OptiPlex-780:~/sauerbraten_2020_12_29_linux/sauerbraten/src$ make install make -C enet/ all make[1]: Entering directory '/home/optiplex780/sauerbraten_2020_12_29_linux/sauerbraten/src/enet' make[1]: Nothing to be done for 'all'. 
make[1]: Leaving directory '/home/optiplex780/sauerbraten_2020_12_29_linux/sauerbraten/src/enet' g++ -O3 -fomit-frame-pointer -Wall -fsigned-char -o sauer_client shared/crypto.o shared/geom.o shared/stream.o shared/tools.o shared/zip.o engine/3dgui.o engine/bih.o engine/blend.o engine/blob.o engine/client.o engine/command.o engine/console.o engine/cubeloader.o engine/decal.o engine/dynlight.o engine/glare.o engine/grass.o engine/lightmap.o engine/main.o engine/material.o engine/menus.o engine/movie.o engine/normal.o engine/octa.o engine/octaedit.o engine/octarender.o engine/physics.o engine/pvs.o engine/rendergl.o engine/rendermodel.o engine/renderparticles.o engine/rendersky.o engine/rendertext.o engine/renderva.o engine/server.o engine/serverbrowser.o engine/shader.o engine/shadowmap.o engine/sound.o engine/texture.o engine/water.o engine/world.o engine/worldio.o fpsgame/ai.o fpsgame/client.o fpsgame/entities.o fpsgame/fps.o fpsgame/monster.o fpsgame/movable.o fpsgame/render.o fpsgame/scoreboard.o fpsgame/server.o fpsgame/waypoint.o fpsgame/weapon.o -Lenet/.libs -lenet -L/usr/X11R6/lib `sdl-config --libs` -lSDL_image -lSDL_mixer -lz -lGL -lrt /bin/sh: 1: sdl-config: not found /usr/bin/ld: cannot find -lSDL_image /usr/bin/ld: cannot find -lSDL_mixer collect2: error: ld returned 1 exit status make: *** [Makefile:163: client] Error 1 optiplex780@super-OptiPlex-780:~/sauerbraten_2020_12_29_linux/sauerbraten/src$
Your program has many files, so a single g++ command won't be enough. Running make (no arguments) is usually the right way to compile software from its Makefile. The Makefile is in the src folder, so you should enter it (cd src) before launching make. make install compiles the software (if not already done) and installs it. According to the readme_source.txt file, it uses zlib, so the zlib1g-dev package will be helpful, as well as libsdl-mixer1.2-dev and libsdl-image1.2-dev (on a Debian system; the actual version may vary, and you seem to have a 2 version).
How to install OpenGl and SDL2 libraries on ubuntu
1,658,510,518,000
How do I kill all processes with my username that were started within (past hour, past day) etc?
The plan: find your processes that are younger than an hour, extract the PIDs, then kill those PIDs. Process list: $ ps -e -o pid,user,etimes,comm \ | awk -v me=$USER '$2 == me && $3 <= 3600 { print }' Produces 661162 jaroslav 3006 chrome 667859 jaroslav 1711 chrome 669145 jaroslav 1471 chrome 671222 jaroslav 1016 chrome 675278 jaroslav 270 chrome 675578 jaroslav 207 sleep 676094 jaroslav 91 chrome 676102 jaroslav 91 chrome 676528 jaroslav 11 chrome 676529 jaroslav 11 chrome 676553 jaroslav 11 chrome 676602 jaroslav 3 top 676615 jaroslav 0 ps 676616 jaroslav 0 awk extract pids: $ ps -e -o pid,user,etimes,comm \ | awk -v me=$USER '$2 == me && $3 <= 3600 { print $1 }' Kill pids: $ ps -e -o pid,user,etimes,comm \ | awk -v me=$USER '$2 == me && $3 <= 3600 { print $1 }' \ | xargs -rt kill The -rt arguments to xargs are optional and will skip xargs if there is no output and report every executed line. You can even test it with kill -0 which does nothing to stop the process, but will report an error if the process is no longer running. $ ps -e -o pid,user,etimes,comm \ | awk -v me=$USER '$2 == me && $3 <= 3600 { print $1 }' \ | xargs -rt kill -0 kill -0 661162 667859 669145 671222 675278 676602 677310 677311 677883 677893 677965 677966 677967 677968 kill: (677966): No such process kill: (677967): No such process Realizing that this pipe/script can kill itself (notice etimes=0 in the process list above), here is a revised version which ignores very recent processes: ps -u "$LOGNAME" -o pid,etimes,comm \ | awk '$2 <= 3600 && $2 > 1 { print $1 }' \ | xargs -rt kill -0 This is probably not very portable, but should work on Linux (at least Ubuntu 18). Hopefully this gives you some idea about how to approach this problem. <mother-mode> Do run the ps command without awk and xargs and kill first to see what would be killed, and be careful if running as root. You could potentially shut down the system or kill some important service that has recently been restarted. </mother-mode>
Kill all of my processes that were started within the past hour?
1,658,510,518,000
I created a new user and logged in successfully on my ubuntu 20.04 machine. When I logged in as root the terminal looks like this: root@ubuntu-s-1vcpu-1gb-fra1-01:~# When I login with my "mynewuser" account I only see a $, nothing more. I want to display the same information as before: mynewuser@ubuntu-s-1vcpu-1gb-fra1-01:~ This was how I created my new user: mkdir -p /home/mynewuser/.ssh touch /home/mynewuser/.ssh/authorized_keys echo "publickey" > /home/mynewuser/.ssh/authorized_keys useradd -d /home/mynewuser mynewuser usermod -aG sudo mynewuser chown -R mynewuser:mynewuser /home/mynewuser/ chmod 700 /home/mynewuser/.ssh chmod 644 /home/mynewuser/.ssh/authorized_keys Did I miss anything?
The bare $ prompt is a symptom of the login shell: useradd without -s leaves the new account with the system default (usually /bin/sh on Ubuntu), which shows a plain $ instead of bash's user@host:~$ prompt. If you change the order of operations then it can be simplified somewhat:

useradd -m -s /bin/bash mynewuser
usermod -aG sudo mynewuser
su - mynewuser -c 'mkdir -m 700 .ssh'
echo "...public key..." | su - mynewuser -c 'tee .ssh/authorized_keys'

(Note that su has no -u option; the target user is given as a positional argument, and -c runs a command as that user.) Letting useradd -m create the home directory also means the files from /etc/skel - including the .bashrc that sets up the prompt - get copied in correctly.
Information of new user won´t show in terminal
1,658,510,518,000
I am building a script with usr/bin/time program to monitor the RAM usage of a script and storing it in a variable so i can check if it is higher than a specified limit $mlimit, like this example using ls / as the command: $mlimit=512000 #512mb limit in kilobytes $musage=$(/usr/bin/time -f "%M" ls / | rev | cut -f 1 | rev) echo "RAM usage: $musage" /usr/bin/time in this case returns first the output of the command and then the maximum resident set size (RAM usage). I thought then i could reverse the output, cut it to get the ram usage and reverse it back. But i get this output: RAM usage: bin. bin is the first directory returned by the ls / command. So my strategy to get the RAM usage is not working. Thanks
GNU time outputs the resource usage information on stderr, though it can be told to write it elsewhere with -o.

{ musage=$(command time -o /dev/fd/4 -f %M ls / 4>&1 >&3 3>&-); } 3>&1

would record the max memory usage in the variable whilst leaving ls' stdout and stderr alone. (The semicolon before the closing brace is required in bash.)

That works by duplicating the original fd 1 (stdout) to fd 3 outside the command substitution with 3>&1, so that inside the command substitution, we can restore the original stdout for ls with >&3 after having made the command substitution pipe available on fd 4. Then time writes its output to that pipe via /dev/fd/4, while ls' stdout is the original one.
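The same fd-swapping idea in a smaller, self-contained form may make it easier to follow: capture only a command's stderr in a variable while its stdout is redirected elsewhere (the directory name below is made up purely for the demo):

```shell
# 2>&1 points stderr at the command substitution's pipe, then 1>/dev/null
# moves stdout out of the way; the net effect is that only stderr is captured.
# "|| true" just keeps a failing ls from aborting a "set -e" script.
err=$( { ls /nonexistent-dir-for-demo; } 2>&1 1>/dev/null ) || true
printf 'captured: %s\n' "$err"
```

The order of the redirections matters: 2>&1 must come first, while fd 1 still points into the command substitution.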
How to get only memory peak usage with /usr/bin/time?
1,658,510,518,000
this question (to my knowledge) has not been asked before, yet would benefit anyone that uses tmux! I tried searching github too for plugins etc, but no luck yet. What I'd like to achieve: Cycle between windows of the same name. Why? Imagine you have 6 tmux windows, in the following order, status bar would look similar to this: [0:zsh][1:vim][2:zsh][3:vim][4:zsh][5:vim] If Im currently in window 0 (zsh): I would like to cycle between the 3 'zsh' windows (0,2,4) If Im currently in window 1 (vim): I would like to cycle between the 3 'vim' windows (1,3,5) This would allow you to cycle windows of the same type without having to re-order all your windows first (vim next to vim, zsh next zsh etc). bliss! I have tried myself, but no success :(
I created a basic solution. Save the following script as _tmux-cycle-samename and make it executable (chmod +x _tmux-cycle-samename).

#!/bin/sh

if [ "$1" = "-r" ]; then filter=tac; else filter=cat; fi
name="$(tmux display-message -p '#W' | sed 's|\(.\)|[\\\1]|g')"
tmux select-window -t "$(
  tmux list-windows -F '#{window_active} #{window_id} #W' \
    | grep '^[01] @[0123456789]* '"$name"'$' | "$filter" \
    | awk '
        NR==1 {result=$2}
        {
          if (seen==1) {result=$2; exit}
          seen=$1
        }
        END {print result}
      '
)"

The script retrieves the right name (tmux display-message …) and prepares the string (sed …), so when interpreted as a regex later the name is matched literally. Then the script lists windows (tmux list-windows …), picks the matching ones (grep …), preserves or reverses the order (cat or tac from the expansion of $filter) and finds the next inactive window (awk …). Finally the found window is selected (tmux select-window …).

Add these to your ~/.tmux.conf:

bind-key -T prefix > run-shell '/full/path/to/_tmux-cycle-samename'
bind-key -T prefix < run-shell '/full/path/to/_tmux-cycle-samename -r'

If _tmux-cycle-samename can be resolved via PATH then you don't need to specify the full path. If already inside tmux then run tmux source-file ~/.tmux.conf. A tmux server started anew will source the file automatically. Try prefix > and prefix < in your tmux to test the solution (the default prefix is Ctrl+b).
cycle tmux windows of the same name
1,658,510,518,000
In tmux I get into the 2nd nested session by using C-b C-b (Ctrl+b twice). But if I have a 3rd nested session, I can't use C-b C-b C-b to get to the 3rd nested session. Somehow if I spam C-b, sometimes it can get to the 3rd nested session. What's happening?
You have to use 1*2*2 (=4) control-b's to get a control-b to the third-level tmux with the default bindings, and 1*2*2*2 (=8) to get it to a fourth-level tmux - in general 2^(n-1) to get to the nth tmux.

"What is happening?" All the control-b's are read by the first-level tmux. The first one is taken to introduce a command sequence. The second one says to run the command to send a control-b to the application. The third starts a sequence and the fourth runs the command to send a second control-b to the application. Now the application is the second-level tmux. As a result of you typing 4 control-b's, the first-level tmux has sent 2 control-b's to the second level. The first of these starts a command sequence, and the second runs the command to send a control-b to the third-level tmux.

If you are going to do a lot of this deeply nested tmux usage I suggest you look into adding some custom binds to send the 2, 4, 8, 16 etc. control-b characters. For example

bind-key -T prefix C-g send -N 1 C-b
bind-key -T prefix C-h send -N 2 C-b
bind-key -T prefix C-l send -N 4 C-b

You can put these into your tmux configuration file, or else enter them at the first-level tmux by typing Ctrl-b : and then bind-key -T prefix C-h send -N 2 C-b etc. at the command prompt. This will make Ctrl-b Ctrl-g/h/l send commands to the second/third/fourth level tmux.

An alternative, which I mention for completeness, is to use different prefix keys in the different tmux instances. I don't recommend it; it allows too many opportunities to get confused.
How can I make tmux use "C-b C-b C-b" to get into the 3rd nested tmux session?
1,658,510,518,000
I have txt file that I have to swap the first paragraph with last one. I did it but now I don't know how to paste everything in a new txt file. This is my command tail -14 gl.txt ; head -n 74 gl.txt | tail -n 68 ; head -5 gl.txt I tried to use > like this tail -14 gl.txt ; head -n 74 gl.txt | tail -n 68 ; head -5 gl.txt > gl_ok.txt but it only takes the last paragraph. How can I do it?
Try grouping the commands within { ...; } and redirecting the output at the end to a file:

{ tail -14 gl.txt ; head -n 74 gl.txt | tail -n 68 ; head -5 gl.txt; } > gl_ok.txt

Note that the last semicolon before the closing brace is mandatory. Alternatively, the commands in the group can be terminated with newlines, like below:

{
  tail -14 gl.txt
  head -n 74 gl.txt | tail -n 68
  head -5 gl.txt
} > gl_ok.txt

If your shell is bash, see man bash under "Compound Commands":

{ list; }
    list is simply executed in the current shell environment. list must be terminated with a newline or semicolon. This is known as a group command.

See also grouping commands using a sub-shell ( ... ); with that form you would do ( ... ) > output.
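A quick way to see why the grouping matters is to compare the two forms on trivial commands (temp files are used here so nothing in the current directory is touched):

```shell
only_last=$(mktemp)
grouped=$(mktemp)
# Without grouping, the redirection binds to the last command only;
# "echo first" still goes to the terminal:
echo first; echo second > "$only_last"
# With grouping, the redirection applies to the whole list:
{ echo first; echo second; } > "$grouped"
cat "$grouped"
```

The first file ends up containing only "second", while the grouped version captures both lines.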
How to paste multiple commands output into single output file
1,658,510,518,000
I am working on vxlan tunneling between Linux - commercial routers. I need to debug some interface settings. The command sudo ip -d link show DEV gives me a great output but the output format is like a long single line as below. katabey@leaf-1:mgmt:~$ sudo ip -d link show vxlan_10 11: vxlan_10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9216 qdisc noqueue master bridge state UNKNOWN mode DEFAULT group default qlen 1000 link/ether 52:6d:3d:aa:b5:bf brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65535 vxlan id 10010 local 10.1.1.1 srcport 0 0 dstport 4789 nolearning ttl 64 ageing 300 udpcsum noudp6zerocsumtx noudp6zerocsumrx bridge_slave state forwarding priority 8 cost 100 hairpin off guard off root_block off fastleave off learning off flood on port_id 0x8002 port_no 0x2 designated_port 32770 designated_cost 0 designated_bridge 8000.50:0:0:3:0:3 designated_root 8000.50:0:0:3:0:3 hold_timer 0.00 message_age_timer 0.00 forward_delay_timer 0.00 topology_change_ack 0 config_pending 0 proxy_arp off proxy_arp_wifi off mcast_router 1 mcast_fast_leave off mcast_flood on neigh_suppress on group_fwd_mask 0x0 group_fwd_mask_str 0x0 group_fwd_maskhi 0x0 group_fwd_maskhi_str 0x0 vlan_tunnel off isolated off addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 It would be great to have the output like vxlan id 10010 local 10.1.1.1 srcport 0 dstport 4789 I remember a couple of years back Linux system engineers I used to work with doing command | python ... but I was not able to find/recall the command. (I have Python installed). Any other solutions (especially single liners) are welcome.
Try:

your-command | grep -Eo '(vxlan id|srcport|dstport) [0-9]+|local [0-9.]+'

-o prints only the matched parts of the line, one per line, and -E enables extended regular expressions so the alternation (…|…) works.
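Since -o prints each match on its own line, you can glue the pieces back into one line with paste. The line below is a trimmed stand-in for the real ip -d link show output, just to make the example runnable:

```shell
# Stand-in for one line of "ip -d link show" output (trimmed for the demo):
line='vxlan id 10010 local 10.1.1.1 srcport 0 0 dstport 4789 nolearning ttl 64'
joined=$(printf '%s\n' "$line" |
  grep -Eo '(vxlan id|srcport|dstport) [0-9]+|local [0-9.]+' |
  paste -sd' ' -)   # -s joins all input lines, -d' ' separates them with spaces
printf '%s\n' "$joined"
```

This yields a single line of just the fields of interest.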
Formatting command output that is a long single line
1,658,510,518,000
Is there any command line tool to list the newly added package in debian? Answers accepted for debian stable, testing or Sid (because it is a highly active release)
aptitude keeps track of new packages, and you can list them using

aptitude search '~N'

They show up in the “New Packages” section in the UI. To clear the list of new packages, run aptitude forget-new or press f in the UI; you can also specify a subset of new packages to be “forgotten”.

The set of packages considered here will depend on the repositories you have configured: if your system only tracks Debian 10, you’ll only see new packages in Debian 10 (generally speaking, new kernels after an ABI bump); if your system is configured with the unstable repositories (whether or not it actually tracks unstable), you’ll see new packages in unstable.

To track specific suites, you can use the RSS feeds: unstable, stable etc. (but these only list the last seven days’ worth of updates).
How to get the list of the newcomers packages in debian?
1,658,510,518,000
This question is very similar to Is there a standard command that always exits with a failure? I'm writing some code which I need to test that it handles subprocesses gracefully when the child process exits due to a signal (say SIGTERM or SIGINT). Is there something concise that I can call like true or false to achieve this with signals?
Ok :-) The script you want looks like this:

#! /bin/sh
kill $$

This works because $$ expands to the PID of the shell running the script, so the shell sends SIGTERM to itself and dies from the signal, which the caller sees as a signal death rather than a normal exit. It works the same whether kill is a shell builtin or the external /bin/kill program, since either one signals the same PID.
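The caller can verify that the child really died from a signal: POSIX shells report the wait status of a signalled child as 128 plus the signal number, so SIGTERM (15) shows up as 143:

```shell
# Run a throwaway child shell that kills itself with SIGTERM,
# then inspect the status the parent shell observed.
# The "|| status=$?" guard keeps this safe under "set -e".
status=0
sh -c 'kill $$' || status=$?
echo "exit status: $status"
```

In bash and dash alike, the reported status here is 143 (128 + 15).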
Is there a standard command that always exits with a signal?
1,658,510,518,000
I have two pipe delimited file like below file1.txt A1234|JESSIE|OPTED A1224|JOHN|OPTED L1212|RAMSAY|OPTED L1832|TIZEN|TESTED file2.txt A1234|B1465 G1211|L1211 G1241|L1212 G1271|L1232 Desired output A1234|B1465 G1241|L1212 I am trying to compare column 1 and column 2 in file2.txt with column 1 in file1.txt and get the matching rows in file2.txt if it the first column in file1.txt matches with either column 1 or column 2 in file2.txt. I tried the awk below but it doesn't appear to be giving me the right results. awk -F'|' 'FNR==NR{a[$1]=1; next} a[$1,2]' file1.txt file2.txt > output.txt
$ awk -F '|' 'NR==FNR{a[$1]; next} ($1 in a) || ($2 in a)' file1.txt file2.txt A1234|B1465 G1241|L1212
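You can sanity-check the one-liner anywhere by recreating the sample files in a temporary directory:

```shell
dir=$(mktemp -d)
printf '%s\n' 'A1234|JESSIE|OPTED' 'A1224|JOHN|OPTED' \
              'L1212|RAMSAY|OPTED' 'L1832|TIZEN|TESTED' > "$dir/file1.txt"
printf '%s\n' 'A1234|B1465' 'G1211|L1211' \
              'G1241|L1212' 'G1271|L1232' > "$dir/file2.txt"
# First pass (NR==FNR) loads file1's column 1 into the array "a";
# second pass prints file2 lines whose column 1 or 2 is a key of "a".
matches=$(awk -F '|' 'NR==FNR{a[$1]; next} ($1 in a) || ($2 in a)' \
            "$dir/file1.txt" "$dir/file2.txt")
printf '%s\n' "$matches"
```

This prints exactly the two desired rows, A1234|B1465 and G1241|L1212.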
Compare two files and get the matching rows based on two columns
1,658,510,518,000
I am trying to delete all zip files from a folder /mnt/drive1/temp and its subfolders recursivley. I am aware that an incorrect command here could have disastrous consequences so wanted to check I had the right format, so far I have... find /mnt/drive/temp -type f -name '*.zip' -delete Will this command achieve what I want?
If you omit the -delete option, find will print out a list of all files that match the test conditions you have specified. This is a great way to check that you have caught the right files, especially before you delete them. Once you're sure that the files are the right ones, append the -delete option and run the command.

Syntactically, your command appears correct, but there may be a typo. You have find followed by the path (/mnt/drive/temp). You have also mentioned /mnt/drive1/temp in the question, which is a different path; this is the possible typo I am referring to.

Following the path, you have a series of tests. -type f will find regular files, and -name '*.zip' will find files that end with the .zip extension. The single quotes prevent the shell from expanding the * character, which is the correct approach.

As stated above, first do a 'dry run' without the -delete option. Then, review the output and confirm that the files are correct. Once you are satisfied, proceed to delete the files.
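You can rehearse the whole dry-run-then-delete workflow safely in a throwaway directory first (all paths below are made up for the demo):

```shell
demo=$(mktemp -d)
mkdir -p "$demo/sub"
touch "$demo/a.zip" "$demo/sub/b.zip" "$demo/keep.txt"
# Dry run: list what would be deleted
find "$demo" -type f -name '*.zip'
# Real run: delete the matches, recursively through subfolders
find "$demo" -type f -name '*.zip' -delete
```

Afterwards the .zip files are gone at every depth, while keep.txt is untouched.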
Delete all zip files from a folder recursivley
1,658,510,518,000
I have a logfile that is generating hundreds of lines per second – say, 12 specific lines, 16× per sec. I want to run either a command-line or a shell script that can display this logfile neatly in real-time. But if I run tail -f logfile.txt, the text rapidly scrolls off the terminal window and can't be read by human eyes. I haven't yet mastered the command line so this is all I can think of doing right now. I want the terminal window to just print 12 lines at a time and automatically refresh, something like: // while ( Ctrl+C hasn't been hit ) // { // clear terminal window // print last 12 lines of logfile.txt // wait until logfile is 12 lines longer // } Any ideas? EDIT: it turns out I can do tail -f logfile.txt and just set the terminal window height to 12. This gets me pretty close to what I want, but it seems like a "naive" approach. Hoping somebody has a more elegant solution.
You can view the last N lines at an M second interval using watch. Assuming N=20 and M=3:

watch -n3 tail -n20 logfile.txt

Obviously you'll lose great chunks of output if the write interval exceeds the update interval, but as far as I understand it this is what you want.
How to 'tail' a logfile, X lines at a time
1,658,510,518,000
Recently, I opened terminal and started typing everything i can, after which i accidentally put " and something like python shell was initialised: muhammadrasul@AMR:~/Desktop$ lksdflaflakd;kfa;lk" > a > s > > fd > sfs > fs > Then I realised that it works just for " as well. So, what that environment actually is and why does it ignore everything before that "?
" starts a string. The string lasts until the next " (except that \" put a " in the string and doesn't end the string). The string can contain newlines. So after entering a single ", the shell keeps reading input, because the string is unfinished. When you terminate the string with another ", the shell will start executing the command. That's when it will complain that each of the commands is not found. The > prompt is the shell's way to say that it's expecting more input. You can customize it through the variable PS2, which is analogous to PS1, but for continuation lines.
What does " command do in terminal? [duplicate]
1,658,510,518,000
How to get a listing of all occurences of the folder foo (or node_modules in my case) in my home folder, like this: ~/a/foo ~/b/d/foo ~/b/d/e/foo ... My goal is to manually remove all unnecessary node_modules folder from my hard disk which has very limited space.
You could use the find command:

find ~ -type d -name node_modules

To exclude nested node_modules directories, use -prune on the folders that are found to stop find from descending into them:

find ~ -type d -name node_modules -prune

And then, to delete:

find ~ -type d -name node_modules -prune -exec rm -rf {} +
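A small sandbox shows what -prune buys you: node_modules directories routinely contain further node_modules inside them, and -prune keeps find from listing those nested copies separately:

```shell
sandbox=$(mktemp -d)
mkdir -p "$sandbox/app/node_modules/dep/node_modules"
# With -prune, find stops descending at the first node_modules it hits,
# so only the top-level one is reported (GNU find applies the implicit
# -print even when -prune is the only action):
found=$(find "$sandbox" -type d -name node_modules -prune)
printf '%s\n' "$found"
```

Before deleting for real, you could swap -prune for -prune -exec du -sh {} + to see how much space each top-level node_modules occupies.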
List given folder everywhere it exists
1,658,510,518,000
There are multiple versions of g++ packages in the default Ubuntu repositories. I already know that the package names of the packages that I am searching for all start with g++-, but searching for these packages with apt-cache search g++- returns many unhelpful search results that don't start with g++- because the g++- string in apt-cache search g++- is a regular expression. How to search only for packages whose names start with g++-? My available Ubuntu versions to test the command are 16.04, 18.04 and 20.04, but if you have some other OS that has apt command-line package manager I will try the command in Ubuntu and see if it works.
Anchoring and escaping the special character + like in a regular expression works:

# apt-cache search '^g\+\+-'
g++-7 - GNU C++ compiler
g++-7-multilib - GNU C++ compiler (multilib support)
g++-aarch64-linux-gnu - GNU C++ compiler for the arm64 architecture
...

(A visual scan didn't show any packages that didn't begin with g++ in the output.) Tested in Docker containers running 16.04, 18.04 and 20.04.
Search for packages from Ubuntu repositories whose names start with g++-
1,658,510,518,000
I have many images and each image has little watermark at the bottom, I want to remove that by cropping the images in bulk. Here is an image that shows what I want to do. How to do that in bulk using the command line tools?
Cropping images using command line tools mentioned in the comments is a good initial reference, but it lacks this very convenient variation with percentages in Width x Height, which is just what you need:

convert -crop 100%x100%+0-20 original.png cropped.png

Of course, substitute 20 with your actual vertical offset. For a whole directory at once, ImageMagick's mogrify accepts the same geometry and edits files in place (so keep backups):

mogrify -crop 100%x100%+0-20 +repage *.png

I found out about the percentage arguments in How to crop an image using imagemagick convert.
How to remove some pixels with respect to bottom in multiple images. [crop]
1,658,510,518,000
In the example bellow: function zp () { zparseopts -E -walk:=o_walk echo "walk: $o_walk" } I get the following output: $ zp --walk "Walking" walk : --walk Walking $ zp --walk zp:zparseopts:2: missing argument for option: -walk walk : Here the argument of the option is mandatory so I am getting this error. How can I make the option mandatory so that I must pass --walk to zp else it will throw an error?
I don't think zparseopts supports this: its manual only talks about mandatory arguments (the trailing :), not mandatory options, and getopt doesn't have such a feature either. You can always just check manually whether the resulting option is set:

function zp () {
  if ! zparseopts -E -walk:=o_walk; then
    return 1
  fi
  if [ $#o_walk = 0 ]; then
    echo "required option --walk missing" >&2
    return 1
  fi
  echo "walk: $o_walk"
}

Here, zparseopts fails if the option is given without an argument, and the second if explicitly tests whether the o_walk array has any items.

Using an associative array to collect the arguments is also an option, and to me it feels cleaner:

function zp () {
  if ! zparseopts -E -A opts -walk: ; then
    return 1
  fi
  if ! [ ${opts[--walk]+x} ]; then
    echo "required option --walk missing" >&2
    return 1
  fi
  echo "walk: $opts[--walk]"
}
How do I make an option (not argument of the option) mandatory in zparseopts?
1,658,510,518,000
Show character at position in a file The above page shows how to use dd to print the nth char in a file. Is there a way to print a file starting from the nth char? Thanks.
You could use tail, for example, from the 5th character onwards:

tail -c +5 FILE

Note that -c counts bytes; for plain ASCII input byte and character positions coincide, but in multi-byte encodings such as UTF-8 they can differ.
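A quick demonstration with an inline string, alongside the dd equivalent from the linked question:

```shell
# tail -c +N starts output at byte N (1-based), so +5 prints "efgh":
printf 'abcdefgh' | tail -c +5
# dd expresses the same thing as "skip the first 4 bytes":
printf 'abcdefgh' | dd bs=1 skip=4 2>/dev/null
```

Both commands print efgh.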
How to print a file starting from the nth char?
1,658,510,518,000
The command mid3v2 -l someFile.mp3 gives the following output for a file with name someFile.mp3 in mp3-format: IDv2 tag info for someFile.mp3 APIC=cover front, (image/jpg, 52016 bytes) TALB=someAlbumName TCON=amusicGenre TDRC=2000 TIT2=songname TPE1=singer TPE2=singer TRCK=1 I would like to store the value of TPE1 to a variable t for further processing. How can i do this?
There are many ways to answer this question. The first step is to understand that the output of a command can either be received by other commands via a pipe, or be captured in a variable:

cmd | next command ...
var=$(cmd)

The process of selecting a line and then selecting what comes after the = sign is called "text processing", and the shell on its own is not well suited to it. A common way to do it is sed:

$ mid3v2 -l someFile.mp3 | sed -En 's/^TPE1=(.*)$/\1/p'
Singer

And capture the result in a variable:

$ t=$(mid3v2 -l someFile.mp3 | sed -En 's/^TPE1=(.*)$/\1/p')
$ echo "$t"
Singer

There is no simple equivalent inside a plain POSIX shell. In richer shells (ksh, bash, zsh) it is possible to use regex matching directly. Which shell do you use?
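You can try the sed extraction without an mp3 at hand by feeding it a stand-in for the mid3v2 output:

```shell
# Fake a few lines of "mid3v2 -l" output; -n suppresses default printing
# and the p flag prints only lines where the TPE1= substitution matched.
t=$(printf '%s\n' 'TALB=someAlbumName' 'TPE1=singer' 'TRCK=1' |
      sed -En 's/^TPE1=(.*)$/\1/p')
echo "$t"
```

This prints singer, exactly the value you want in the variable.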
Save the output of a command to a variable
1,658,510,518,000
I have a data file which a part of it looks like this: 4 1 5 2 1 2 3 1 1 1 1 2 1 1 1 1 2 1 2 1 I want to count similar rows and put my counts in a third column like this: 4 1 1 5 2 1 1 2 2 3 1 1 1 1 3 2 1 2 Any suggestion please?
Using Miller:

$ mlr --nidx uniq -g 1,2 -c file
4 1 1
5 2 1
1 2 2
3 1 1
1 1 3
2 1 2

or, equivalently:

mlr --nidx count-distinct -f 1,2 file

Unlike awk arrays or perl hashes, Miller appears to preserve the "seen order" of the keys - but I don't know if that's guaranteed.
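If mlr isn't installed, a plain awk equivalent that explicitly preserves first-seen order of the pairs (bare for (k in c) traversal in awk is unordered, so the order is tracked in a second array):

```shell
counts=$(printf '%s\n' '4 1' '5 2' '1 2' '3 1' '1 1' '1 2' '1 1' '1 1' '2 1' '2 1' |
  awk '{ k = $1 " " $2
         if (!(k in c)) order[++n] = k   # record first-seen order
         c[k]++ }
       END { for (i = 1; i <= n; i++) print order[i], c[order[i]] }')
printf '%s\n' "$counts"
```

On the sample data this produces the same six lines, in the same order, as the Miller command.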
counting number of unique rows within 2 columns
1,658,510,518,000
I use the following command to find a file and copy it somewhere else, find /search/ -name file.txt -exec cp -Rp {} /destination \; How can I copy all files and subdirectories in the parent directory of file.txt? Example, /search/test/sub /search/test/sub2 /search/test/file.txt /search/test/file.doc They should be copied as /destination/sub /destination/sub2 /destination/file.txt /destination/file.doc
With -execdir (not a standard predicate, but often implemented), the given utility would execute in the directory where the file was found. This means that you could do

find /search -name file.txt -execdir cp -Rp . /destination \;

Without -execdir:

find /search -name file.txt -exec sh -c 'cp -Rp "${1%/*}/." /destination' sh {} \;

or,

find /search -name file.txt -exec sh -c 'cd "${1%/*}" && cp -Rp . /destination' sh {} \;

These last two variations execute a short in-line script for each found file. The script takes the pathname of the file as its first argument (in $1), and strips the filename off of the pathname using ${1%/*} (a standard parameter substitution). Then it applies the same cp command as in the first variation with -execdir. The code that does the cd emulates a bit more faithfully what the -execdir variation at the top actually does, while the middle variation bypasses changing the directory by referring to . in the source directory at the end of the path instead.
How to find a file and copy its directory?
1,658,510,518,000
I am using Cygwin as Linux shell, I have following contents in my current working directory: Files : Abc.dat 123.dat 456.dat Directories: W_Abc_w W_123_w W_456_w Now I want to copy files as below: Abc.dat -> W_Abc_w 123.dat -> W_123_w 456.dat -> W_456_w How to achieve this in a single line linux command? I need a generic solution which can be used for similar cases in future... Destination directory always exists, but number of characters in file name will vary. Destination directory name will always contain the file name of file to be copied along with other extra characters. Destination directory names have unique pattern eg. Abc_sa_file_name_1 second directory name will be Abc_sa_file_name_2. File names also has pattern e.g kim_1. Kim_2 . I will be moving or copying file kim_1 to Abc_sa_kim_1_1. I wish to operate complete pattern in one command.
In one command (line):

cp Abc.dat W_Abc_w/; cp 123.dat W_123_w/; cp 456.dat W_456_w/

The trailing slashes are not required, but a habit to indicate that the intention is to put the file into a destination directory, not to create a new file with that name. As a generic loop with a pattern:

for f in ???.dat
do
    [ -d W_"${f%.dat}"_w ] && cp -- "$f" W_"${f%.dat}"_w
done

This picks up every filename that has three characters followed by .dat and copies each into the correspondingly-named directory, if that directory already exists. The parameter expansion ${f%.dat} inside the cp command strips off the trailing .dat.

If you were interested in a command-line approach that does not use a loop -- but also moves the files instead of copying them -- you could use zsh:

autoload zmv
zmv '(???).dat' 'W_$1_w'
Copy files such that individual files gets copied to the folder having file name as a string within complete folder name
1,658,510,518,000
When I run the command head -n 445 /etc/snort/snort.conf | nl I expect lines 1-445 to be returned. However, only up to line 371 is returned: [snip] 370 preprocessor dcerpc2_server: default, policy WinXP, \ 371 detect [smb [139, 445], tcp 35, udp 135, rpc-over-http-server 593], \ What is happening?
The nl utility does not number blank lines by default (and you have blank lines in the input file).
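To number every line, blank ones included, use nl -ba ("body numbering: all lines"), which makes the line count you pass to head line up with nl's numbering:

```shell
# Default behaviour: the blank line is left unnumbered
printf 'a\n\nb\n' | nl
# With -ba, all three lines get numbers
printf 'a\n\nb\n' | nl -ba
```

With -ba the last line of this three-line input is numbered 3; without it the final line is numbered 2.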
head not returning n lines
1,658,510,518,000
What is the difference between cp fileA fileB and cp -- fileA fileB in Linux?
-- specifies the end of options. In your specific example this shouldn't make a difference, but if you were using filename globbing, such as:

cp * fileB

and you had a file in your directory named -R, for example, your command could effectively become:

cp -R dirA fileB

which obviously wouldn't be the desired outcome. This is especially important when using commands like rm.
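You can reproduce the hazard harmlessly in a scratch directory; the file name -R is chosen deliberately because it looks like an option:

```shell
scratch=$(mktemp -d)
cd "$scratch"
touch -- '-R'     # without --, touch would reject "-R" as an unknown option
ls                # shows the file: -R
# "rm -R" alone would be parsed as the recursive flag; with -- it's a filename:
rm -- '-R'
```

Other common escape hatches are prefixing the name with a path (rm ./-R) so it no longer starts with a dash.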
Bash: what is the difference between `cp fileA fileB` and `cp -- fileA fileB` [duplicate]
1,658,510,518,000
I'm trying to get my terminal to alert me with a simple bell once my domain registration has finished (is resolvable). From watch --help: Options: -b, --beep beep if command has a non-zero exit How can I invert this option, so it beeps if the command has a zero exit? I also tried variations of the following, but I can't get it to beep when inside watch. watch 'nslookup foo.bar && echo "\a"'
Since watch runs the command with sh by default (-x says not to run with sh), you can invert the return code with !: watch -b ! nslookup foo.bar Depending on your shell and config, you may need to quote !.
How to invert 'watch -b'?
1,658,510,518,000
I'm trying to create a simple script that uses a list of months like this: (Jan Feb) To generate and execute this command: python ExpenseManager.py -p Inputs/Jan\ 2019\ Debit.CSV Inputs/Jan\ 2019\ Credit.CSV -p Inputs/Feb\ 2019\ Debit.CSV Inputs/Feb\ 2019\ Credit.CSV This is the program I've written to that effect: #!/usr/bin/env bash clear months=(Jan Feb) args=() for month in ${months[@]}; do args=(${args[@]} -p "Inputs/${month}\\ 2019\\ Debit.CSV" "Inputs/${month}\\ 2019\\ Credit.CSV") done python ExpenseManager.py "${args[@]}" exit 0 And this, in theory, is working. When I echo the resulting command, I get the exact command I want: python ExpenseManager.py -p Inputs/Jan\ 2019\ Debit.CSV Inputs/Jan\ 2019\ Credit.CSV -p Inputs/Feb\ 2019\ Debit.CSV Inputs/Feb\ 2019\ Credit.CSV Now when I copy/paste the command created by this program and execute it, it works fine. However when I have Bash execute the command, it includes the backslashes that I use to include the escape backslash in Bash: Namespace(filepairs=[['Inputs/Jan\', '2019\', 'Debit.CSV', 'Inputs/Jan\', '2019\', 'Credit.CSV'], ['Inputs/Feb\ 2019\ Debit.CSV', 'Inputs/Feb\ 2019\ Credit.CSV']] I've tried several solutions to get this to work: I've tried making the args a single string and building off of that I've tried surrounding the args with single quotes and using double quotes around spaces like this: args=(${args[@]} -p 'Inputs/${month}" "2019" "Debit.CSV "Inputs/${month}" "2019" "Credit.CSV") I've tried separating each part of the arg that requires a space using quotes like this: args=(${args[@]} -p "Inputs/${month}" "2019" "Debit.CSV" "Inputs/${month}\ 2019\ Credit.CSV") I've looked at other solutions here and elsewhere but nothing seems to do the trick. So rather than continue to get stuck on this I was hoping someone could tell me the magic trick to have Bash execute this procedurally-built, interpolated command?
Don't use echo to see what command is being executed. It prints the command after parsing, that is, after quotes and escapes have been applied and removed; therefore, if the output of echo includes quotes and/or escapes like you'd expect to see in a raw command line (i.e. before parsing), it indicates that something is terribly wrong. Compare the output from these two echo commands:

$ month=Jan
$ var="Inputs/${month}\\ 2019\\ Debit.CSV"
$ echo $var
Inputs/Jan\ 2019\ Debit.CSV
$ echo Inputs/Jan\ 2019\ Debit.CSV
Inputs/Jan 2019 Debit.CSV

In the first, the escapes are printed, indicating that they weren't parsed, applied, and removed. In the second, they're gone, indicating that they were parsed, applied, and removed.

So, how to fix it? Two rules: 1) don't put quotes or escapes in variables (except in weird cases where the string's going to go through an extra level of parsing), and 2) instead, put double-quotes around all variable references (including ${args[@]}) (again, there are some weird exceptions). Also, you can add to an array with array+=("new" "elements"). Here's the fixed script:

#!/usr/bin/env bash
clear
months=(Jan Feb)
args=()
for month in "${months[@]}"; do
    args+=(-p "Inputs/${month} 2019 Debit.CSV" "Inputs/${month} 2019 Credit.CSV")
done
python ExpenseManager.py "${args[@]}"
exit 0
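A stripped-down version of the fixed script lets you convince yourself that the elements survive intact without invoking python at all; printf '%s\n' prints one array element per line, which makes embedded spaces unambiguous:

```shell
#!/usr/bin/env bash
months=(Jan Feb)
args=()
for month in "${months[@]}"; do
  args+=(-p "Inputs/${month} 2019 Debit.CSV" "Inputs/${month} 2019 Credit.CSV")
done
# One element per line - each CSV name is a single element despite its spaces:
printf '%s\n' "${args[@]}"
```

The output is six lines: -p, the two Jan filenames, -p again, and the two Feb filenames, each filename whole.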
How to procedurally generate command in bash script using string interpolation with spaces?
1,658,510,518,000
I'm looking for a tool for displaying inline menus in the shell which can be navigated with arrow-keys and enter. By "inline", I mean that the menu is displayed within the normal flow of stdout text, not in a pop-up dialog on top of everything. I only found that post trying to address that, but it only mentions either custom scripting or non-inline/pop-up solutions (like dialog or zenity). What I am looking for is a robust package that I could simply install within a Docker image using apt-get or npm install -g and invoke from my scripts with a list of choices and get back the user's selected item. In nodeJS, I am using Inquirer which offers not only that kind of menus, but also all sorts of inputs. Here's an example screenshot of such an inline menu. The tool does not have to be written in shell script. It can be a binary/script written in any language, as long as it's rather easy to install using apt-get/curl. Even a nodeJS tool would be fine, as long as it's invokable from a shell script to pass it the choices.
I used to use iselect for this, many years ago. A very basic example:

$ sel="$(iselect -a 'foo' 'bar')"
$ echo $sel
foo

From man iselect:

iSelect is an interactive line selection tool for ASCII files, operating via a full-screen Curses-based terminal session. It can be used either as an user interface frontend controlled by a Bourne-Shell, Perl or other type of script backend as its wrapper or in batch as a pipe filter (usually between grep and the final executing command). In other words: iSelect was designed to be used for any types of interactive line-based selections.

Input Data

Input is read either from the command line (line1 line2 ...) where each argument corresponds to one buffer line or from stdin (when no arguments are given) where the buffer lines are determined according to the newline characters. You can additionally let substrings be displayed in Bold mode for non-selectable lines (because the selectable lines are always displayed bold) by using the construct "<b>"..."</b>" as in HTML.
Looking for command line package for showing inline text-based menu selector with arrow keys
1,658,510,518,000
I have a mixed wordlist as an input: azert12345 a1z2e3r4t5 a1z2e3r455 The command line I have tried to execute: cat file.txt | grep -E "[[:digit:]]{5}" --color What do I want to accomplish: Print only these words: "azert12345" and "a1z2e3r4t5", using grep with a pattern like I said before. Something like grep -E "[[:digit:]]{5}". It is easy to print words like "azert12345" using grep -E "[[:alpha:]]{5}[[:digit:]]{5}" with a maximum number of digits of 5 and a maximum number of alphabetical characters as 5, but the problem is: How am I going to print the mixed ones like this one a1z2e3r4t5? The "a1z2e3r4t5" is just an example the mount of data i should deal with is so much biger This problem is driving me to crazy for 3 days, and it is not a homework. I'll start learning again more about linux commands. I need some help.
IMHO this would be simpler in awk or perl, for the reasons outlined here: grep with logic operators (in particular, that there is no natural AND operator in grep). For example awk 'gsub(/[a-z]/,"&") == 5 && gsub(/[0-9]/,"&") == 5' file or perl -ne 'print if tr/a-z// == 5 && tr/0-9// == 5' file will print lines containing exactly 5 of each of the character sets. (Note: inside tr///, brackets are literal characters, so tr/[a-z]// would also count any [ and ] in the line.) If you insist on grep, then something like this might work: grep -xE '([^a-z]*[a-z][^a-z]*){5}' file | grep -xE '([^0-9]*[0-9][^0-9]*){5}'
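As a quick sanity check, the awk variant can be run against the sample word list from the question (the file name words.txt is just an example):

```shell
# build the sample word list from the question
printf '%s\n' azert12345 a1z2e3r4t5 a1z2e3r455 > words.txt

# keep only lines with exactly 5 letters AND exactly 5 digits
awk 'gsub(/[a-z]/,"&") == 5 && gsub(/[0-9]/,"&") == 5' words.txt
```

This prints azert12345 and a1z2e3r4t5; a1z2e3r455 has only four letters, so it is dropped.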
output mixed alphanumeric input with grep,pipe and cat
1,658,510,518,000
I have a tar file. That's what its structure looks like: -images.tar.gz -folder_0_image_1.jpg -folder_0_image_2.jpg -folder_0_image_3.png -... -folder_1 -folder_1_image_1.jpg -folder_1_image_2.jpg -... -folder_2 -folder_2_image_1.jpg -folder_2_image_2.jpg -... -folder_x ... How do I extract all the files from the root directory that have the .jpg extension? (I'd like to extract these files: folder_0_image_1.jpg, folder_0_image_2.jpg ...)
You need to exclude files in subfolders like this: tar --wildcards --exclude='*/*' -xvzf images.tar.gz '*.jpg' Explanation: --wildcards means we specify files to extract by a wildcard, i.e. *.jpg - specified later --exclude='*/*' an option to exclude (from being selected for extraction) all entries with a / in them - i.e. all files in subfolders -xvzf eXtract, Verbose output, gunZip decompress first, archive from a File images.tar.gz the archive name, of course '*.jpg' filename pattern - we promised tar one, here it is - everything that ends in .jpg.
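A minimal round-trip to verify the behaviour (this assumes GNU tar; the file and folder names here are made up, not from the question):

```shell
# build a toy archive: two files in the root, one inside a subfolder
mkdir -p demo/folder_1
touch demo/root_image.jpg demo/root_image.png demo/folder_1/sub_image.jpg
tar -czf images.tar.gz -C demo root_image.jpg root_image.png folder_1

# extract only the root-level .jpg files
mkdir out
tar --wildcards --exclude='*/*' -xzf images.tar.gz -C out '*.jpg'
ls out    # only root_image.jpg
```

The --exclude='*/*' pattern is what keeps folder_1/sub_image.jpg out, since its member name contains a slash.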
How can I extract files with specific extension from a tar file's root directory?
1,658,510,518,000
System: Linux Mint 19.1 Tessa, edition: Cinnamon Got a problem with the locate command. I created a test.txt file on the desktop. After that I did: sudo updatedb However, locate test.txt -i still doesn't show anything. Permissions on mlocate.db: -rw-r---- Working as a normal user, not root (that's why I was using the sudo command)
The updatedb command will scan the filesystems on your system and create an index of the names of the available files and directories. This indexing is performed as a non-privileged user. This means that the index will only ever contain the names of files that are accessible by all the system's users. Since your home directory is only accessible to yourself (you say in comments that you have rwx------ permissions on it), this means that it will not be indexed by updatedb. This in turn means that locate will never return names from within your home directory (using sudo locate instead of just locate will still query the same index, so that won't help). To solve this, you have two options: Loosen up the restrictions to your home directory (and to any directory beneath that that you want to be indexed by updatedb). The permissions should probably read rwxr-xr-x, or 755 in octal. Don't use locate to find files. Instead use find: find "$HOME" -name test.txt This would look for anything called test.txt in or under your home directory.
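For option 1, the commands would look something like this — a sketch only, and the Desktop path is an assumption about where the file lives:

```shell
# make the home directory itself traversable/readable by everyone...
chmod 755 "$HOME"
# ...and everything beneath the directory you care about readable
# (capital X sets the execute bit on directories only, not on files)
chmod -R a+rX "$HOME/Desktop"
```

After this, the next sudo updatedb run should index those files, and locate will find them.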
Locate doesn't work
1,658,510,518,000
I have a table as below: 1 10 15 2 2 25 1 10 26 I'd like to merge the columns and make a new column in Linux, like below: 1 10 15 1:10-15 2 2 25 2:2-25 1 10 26 1:10-26
Try this, awk '{print $0" "$1":"$2"-"$3}' file 1 10 15 1:10-15 2 2 25 2:2-25 1 10 26 1:10-26
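You can run it against the sample table to confirm (the file name table.txt is just an example):

```shell
# recreate the sample input
printf '%s\n' '1 10 15' '2 2 25' '1 10 26' > table.txt

# append a fourth column built from the first three
awk '{print $0" "$1":"$2"-"$3}' table.txt
```

Each output line is the original line ($0) followed by a space and the joined fields.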
how can I merge multiple column in one column and separated by '-'?
1,658,510,518,000
I am looking for files, since I added a backup external HD. I want to continue working elsewhere while find/grep/locate searches for a file. As a match is found, I'd like to be alerted so that I can stop the search in case it was the one I intended to find. Can there be an audible alert per match?
At least with GNU find, the -printf action supports a \a (terminal bell) escape char - so at its simplest you could do something like find . -name foo -printf '\a' -print I'm not aware of an equivalent with grep or locate.
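For grep there is no built-in equivalent, but a pipe can fake it — a sketch, where the pattern 'foo' and the search path are placeholders: each matching line is echoed back prefixed with the BEL character, so the terminal beeps per match while you still see the hits.

```shell
grep -r --line-buffered 'foo' . | while IFS= read -r line; do
    printf '\a%s\n' "$line"    # \a rings the terminal bell
done
```

--line-buffered makes grep flush each match immediately, so the beep arrives as the match is found rather than when the search ends.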
how can i make grep/find/locate beep as it finds each match
1,658,510,518,000
When running the following command tcpdump -i deviceName 'host 1.2.3.4' -q -w /mypath/dump.pcap the dump file contains a huge amount of data because there's a lot of traffic. However, I only need to save the header details of each packet, not the entire contents. I tried using the -q switch (for "quiet") but that's not helping. I need Time, Source, Destination, Protocol and Length. I do not need any of the other information, and especially not the full contents of each packet. Is there a way to ignore the contents and only write the header details to disk, so as to save space? I'm getting to over a GB in a matter of minutes :( I've seen many questions about how to increase the amount of data saved, but nothing for reducing it. Am I barking up the wrong tree?
I was in the same situation and solved it by adding -s 96 to the command. The -s option sets the snapshot length (snaplen), i.e. how many bytes of each packet tcpdump actually stores. 96 bytes is enough to cover the Ethernet, IP and TCP/UDP headers in most cases, so the Time/Source/Destination/Protocol/Length information is kept while the payload is dropped (the original packet length is still recorded in the pcap): tcpdump -i deviceName -s 96 'host 1.2.3.4' -q -w /mypath/dump.pcap
How to record only the header info when using `tcpdump`
1,658,510,518,000
I have a hosted zone and record set that route to multiple addresses. I'd like to update the record set by adding or removing one IP address in the list. Unfortunately, the AWS CLI doesn't provide an option for deleting/adding a single resource record value in Route 53 { "Comment": "Update the A record set", "Changes": [ { "Action": "UPSERT", "ResourceRecordSet": { "Name": "mydomain.com", "Type": "A", "TTL": 300, "ResourceRecords": [ { "Value": "XX.XX.XX.XX" } ] } } ] } I can add multiple IP addresses into the JSON like this manually. But I want to add multiple IPs to the JSON file using bash automatically. { "Comment": "Update the A record set", "Changes": [{ "Action": "UPSERT", "ResourceRecordSet": { "Name": "mydomain.com", "Type": "A", "TTL": 300, "ResourceRecords": [{ "Value": "XX.XX.XX.XX" }, { "Value": "XX.XX.XX.XX" } ] } }] }
Adding, using jq $ jq '.Changes[0].ResourceRecordSet.ResourceRecords += [{"Value": "foobar"}]' file.json { "Comment": "Update the A record set", "Changes": [ { "Action": "UPSERT", "ResourceRecordSet": { "Name": "mydomain.com", "Type": "A", "TTL": 300, "ResourceRecords": [ { "Value": "XX.XX.XX.XX" }, { "Value": "foobar" } ] } } ] }
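To build the whole ResourceRecords array from a shell list of IPs instead of hard-coding each object, something like this should work (a sketch — it assumes jq is installed, and the IPs are placeholders):

```shell
# a space-separated list of IPs, e.g. produced by another command
ips='192.0.2.10 192.0.2.11'

# replace the ResourceRecords array with one {Value: ...} object per IP
jq --arg ips "$ips" \
   '.Changes[0].ResourceRecordSet.ResourceRecords = [$ips | split(" ")[] | {Value: .}]' \
   file.json > updated.json
```

--arg passes the shell variable into jq as $ips; split(" ") turns it back into a list inside the filter.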
Update file with multiple values automatically
1,543,536,286,000
I installed Ubuntu onto an SSD drive, and when I insert the SSD into a different computer I have to change the disk number manually. I would like to check the home directory of all the disks until I find the one on the SSD. In order to do that I need to know how to save the output of commands into variables (especially 'ls'). Is that even possible? Thanks.
You don't need to capture the output of ls, you need to look up the search command in grub. Search devices by file ('-f', '--file'), filesystem label ('-l', '--label'), or filesystem UUID ('-u', '--fs-uuid'). Basically it allows you to search for your SSD either by some file present on the SSD or by filesystem label or UUID.
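For example, a grub.cfg fragment that locates the SSD's root filesystem by its UUID, regardless of which disk number it gets on a given machine (the UUID below is a placeholder — find the real one with blkid):

```
search --no-floppy --fs-uuid --set=root <your-fs-uuid>
set root=$root
```

After the search, $root points at whatever device actually carries that filesystem, so the rest of the entry no longer depends on hard-coded disk numbers.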
How can I get the 'ls' output to a variable in grub2?
1,543,536,286,000
I recently did a fresh Ubuntu 18 install and copied over my home directory from my previous Ubuntu 16 setup. However this seems to have broken the copy paste functionality I had previously with xclip (0.12 installed). My previous tmux.conf method: setw -g mode-keys vi bind -t vi-copy y copy-pipe "xclip -sel clip -i" I've looked at other similar questions on here but unfortunately none of them match my exact scenario.
As of tmux 2.6, bind-key no longer takes a mode-table option (-t). Instead, there is a a key-table (-T) for each mode. Additionally, commands can't be used directly in copy-mode bindings. They have to be sent with send-keys -X. From comments on tmux issue 754: replace -t with -T replace vi-<name> with <name>-mode-vi prefix the command with send-keys -X Furthermore, from version 2.4 onwards, the new command copy-pipe-and-cancel leaves copy mode, while copy-pipe keeps it active. So that line in your tmux.conf becomes: bind-key -T copy-mode-vi y send-keys -X copy-pipe-and-cancel "xclip -sel clip -i" Garbage printed to the screen Depending on your terminal emulator, you may also see some characters dumped to the screen after using this binding. This is down to the set-clipboard feature: Attempt to set the terminal clipboard content using the xterm(1) escape sequence, if there is an Ms entry in the terminfo(5) description (see the TERMINFO EXTENSIONS section). It appears that some terminals (such as LXTerminal) will set TERM to xterm (which supports this extension), but don't actually recognise the sequence. copy-pipe and copy-pipe-and-cancel will "helpfully" attempt to use this feature, and the terminal simply displays the resulting characters. What you're seeing is the escape sequence followed by the base64-encoding of the selected text. If your terminal is one that mishandles this escape sequence, you can disable it with set-option -g set-clipboard off
Ubuntu 18 Tmux 2.6-3 copy paste functionality with xclip non functional
1,543,536,286,000
I am trying to open 2 separate windows/instances of emacs from the terminal with a single command. I have tried: emacs &; emacs & (error: bash: syntax error near unexpected token ;) and emacs & && emacs & (error: bash: syntax error near unexpected token &&) but both ways produce errors. How can I get two Emacs windows to appear with a single command?
You need only a single separator between commands: ; or & or && etc., so try emacs & emacs & If you run emacs &; emacs & then you start emacs in the background, and then run ; without any command, so bash complains that it doesn't expect this separator there (syntax error near unexpected token ;). You will get a similar error by just running a bare ;: bash$ > ; bash: syntax error near unexpected token `;' Not all shells behave like that; for example, in zsh you can even do zsh$> ; ; ls; ; ls&; ls&; ; ls &; ; without any problems (but not ;; without a space in between, as that is a separator of its own, used in case statements). The other thing you tried, emacs & && emacs &, is even worse, as the second command (after &&) should be run only if the first one ended successfully (this is what && does). But since we run the first command in the background, the shell doesn't wait for it to finish, so that condition doesn't make much sense. Once again: use just a single separator between commands, either command1 & command2 or possibly command1 && command2.
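The difference between ; and && can be seen with a quick experiment using false (a command that always fails):

```shell
false ; echo "after ;"     # prints: ; runs the next command regardless
false && echo "after &&"   # prints nothing: && requires success first
```

This is why & (or ;) is the right separator here: neither emacs invocation should depend on the other's exit status.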
How to produce from the terminal in a single command 2 emacs windows?
1,543,536,286,000
I need to search (on the whole disk) and replace (where there are matches) one file with another (both in the same path). Example: Folder 1 x*.txt (good) (e.g.: xFile.txt) *.txt (bad) (e.g.: File.txt) If there is a match of both files in the same path, I need to delete: *.txt (e.g.: File.txt) and rename: x*.txt (e.g.: xFile.txt) to *.txt (e.g.: File.txt) Result: Folder 1 *.txt (e.g: File.txt... old xFile.txt) I use this command: find -name 'x*.txt' | sed -r 'p;s/g([^\/]*.txt)/\1/' | xargs -d '\n' -n2 mv The problem is that the command does not verify that both files exist (xFile.txt and File.txt in the same path) before executing the move. How can I solve it? Thanks in advance
With GNU tools, you could do something like: (export LC_ALL=C; find . -name '*.txt' -print0 | sed -Ez 's|/x([^/]*)$|/\1|' | sort -z | uniq -zd | sed -z 'h;s|.*/|&x|;G' | xargs -r0n2 echo mv) That assumes there are no files whose name starts with more than one x. For instance, it won't do mv ./xx.txt ./x.txt
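An alternative that makes the existence check explicit, as a sketch in plain POSIX sh (no GNU extensions): for every x*.txt that find locates, it moves it over the unprefixed name only if that unprefixed file actually exists — mv -f both deletes File.txt and renames xFile.txt in one step.

```shell
find . -name 'x*.txt' -exec sh -c '
  for f do
    base=${f##*/}                  # e.g. xFile.txt
    target="${f%/*}/${base#x}"     # e.g. ./dir/File.txt
    if [ -e "$target" ]; then
      mv -f -- "$f" "$target"      # overwrite File.txt with xFile.txt
    fi
  done' sh {} +
```

Files like xOnly.txt with no matching Only.txt next to them are left untouched.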
replace files in the command line with specific string
1,543,536,286,000
Assume my current path is /home/inp/Documents/Folder I would like to copy folders /home/inp/Test1/randomName1 and /home/inp/Test1/randomName2 from my current path. Currently, I use the following command: cp ~/Test1/randomName1 ~/Test1/randomName2 . Is it possible to combine randomName1 and randomName2 without using a regular expression? Something like: cp ~/Test1/[randomName1,randomName2]
You can accomplish this with brace expansion: cp ~/Test1/{randomName1,randomName2} . This will expand to each string in the braces: $ echo Something{1,2,3,5} Something1 Something2 Something3 Something5 or cp ~/Test1/randomName{1..2} . This will expand to each number between the start and end, and can also be used with single letters: $ echo Something{1..5} Something1 Something2 Something3 Something4 Something5 $ echo Something{a..e} Somethinga Somethingb Somethingc Somethingd Somethinge
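A quick end-to-end check of the copy itself, with throwaway names and relative paths (brace expansion is a bash/zsh feature, not plain sh):

```shell
mkdir -p Test1 dest
touch Test1/randomName1 Test1/randomName2 Test1/other

cp Test1/{randomName1,randomName2} dest/
ls dest    # randomName1 randomName2 — 'other' was not copied
```

Only the names listed inside the braces are expanded, so unrelated files in Test1 stay put.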
How to copy several not-subfolders in one shot?
1,543,536,286,000
I have a folder with ~10K XML files. Each of them looks like this: ... <object> <name>Cat</name> </object> <object> <name>Cow</name> </object> ... The name includes person, cat, dog, cow, ... I want to pick out only the XML files with cat and/or dog. How can I do this?
To get all the Cat or Dog values out of the name node in an XML document like yours, you may use xmlstarlet like this: xmlstarlet sel -t -v '//object/name[text() = "Cat" or text() = "Dog"]' file.xml This would generate the words Cat and Dog as output if they exist the document as the values of an object node's name child-node. This operation would be tricky to get right with grep in case there are other name nodes that are not child-nodes to object nodes, or if some name nodes have attributes etc. Unfortunately, xmlstarlet does not exit with a non-zero exit status if it can't find anything in the XML input file, so we need to tack on a grep at the end of this to check whether we got any output at all (this will be used in the next step): xmlstarlet sel -t -v '//object/name[text() = "Cat" or text() = "Dog"]' file.xml | grep '.' We can then run this on all the 10k files though find: find . -type f -name '*.xml' -exec sh -c ' xmlstarlet sel -t -v "//object/name[text() = \"Cat\" or text() = \"Dog\"]" "$1" | grep -q "."' sh {} ';' -print This would first find all regular files in or below the current directory whose names end with .xml. For each such file, xmlstarlet is run to extract the Cat and Dog strings from the correct XML nodes, and grep is used to check whether xmlstarlet found anything. Running grep with its -q option makes the utility quiet, but it will exit with the appropriate exit status depending on whether it matched anything or not. If grep found anything, find then prints the pathname of the file that contained the data.
Find XML files with specific values
1,543,536,286,000
I don't understand why grep doesn't work in the first example bla@ble:~/html/example$ grep -r "protected $disallowedBlockNames = array('install/end');" app/ bla@ble:~/html/example$ But bla@ble:~/html/example$ grep -r 'protected $disallowedBlockNames = array' app/ app/Resource/Block.php: protected $disallowedBlockNames = array('install/end');
You didn't provide sample input but in your first example your double quotes are allowing the disallowedBlockNames variable to be expanded by your shell before it is used by grep. I'm assuming this is a variable set in your php code and does not exist in your shell and therefore it is expanding to nothing. So what you are really sending to grep is: grep -r "protected = array('install/end');" app/ In the second example the single quotes prevent the shell from expanding the variable.
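You can see the expansion happen directly — the variable is unset in the shell, just as in the question:

```shell
unset disallowedBlockNames
echo "protected $disallowedBlockNames = array"   # double quotes: variable expands (to nothing)
echo 'protected $disallowedBlockNames = array'   # single quotes: passed through literally
```

The first echo prints "protected  = array" — the pattern grep actually received — while the second preserves the $ literally, which is why single quotes matched the PHP source.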
grep fails looking for string
1,543,536,286,000
I was learning about Unix file system and learned about pipes. According to GeeksForGeeks, A pipe holds the output of the first command till it has been read by the second program So, I was thinking if I could link a C program and a Java program so as to supply the output of the C code as the command-line arguments for the Java code. Here is my code: c.c #include <stdio.h> int main(){ printf("World"); return 0; } Java.java public class Java{ public static void main(String[] a){ System.out.println("Hello "+a[0]); } } I tried to link the C object file execution command and the Java class execution command so that the output turns out to be: Hello World This is what I tried: gcc -o c c.c javac Java.java ./c | java Java It didn't go as I expected. The Java program didn't receive the output of the C program and simply threw an ArrayIndexOutOfBounds exception. Also, I didn't see the output of the C program. How can I achieve my goal, if at all this is possible?
What you want is possible and easy.  Just type /your/java/program "$(/your/c/program)" The $(…) notation is called “command substitution”.  $(command1) runs command1 with output to a pipe, captures it, and puts it on the command line.  So command2 $(command1) runs command2 with command1’s output as a command-line argument.  You should add quotes ("…") to handle the case where the output from command1 (i.e., your C program) is multiple words (e.g., printf("planet earth");).  This is what you are asking for. I discuss this in some length here, where I show the example $ ls -ld "$(date "+%B %Y").txt" -rwxr-xr-x 1 username groupname 687 Apr 2 11:09 April 2018.txt P.S. If you are on a very old or unusual system, the $(…) notation might not work.  In that case, try /your/java/program "`/your/c/program`" `…` is an old version of $(…).  If $(…) works on your system, use it.
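Why the quotes matter can be shown without compiling anything — here printf stands in for the C program, and set -- counts how many arguments the substitution produces:

```shell
out=$(printf 'planet earth')   # like $(./c) for a two-word output

set -- "$out"; echo "quoted: $# argument(s)"     # 1 — stays a single argument
set -- $out;   echo "unquoted: $# argument(s)"   # 2 — split on the space
```

In Java terms: with the quotes, a[0] is "planet earth"; without them, a[0] is "planet" and a[1] is "earth".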
Provide output of C program as command line input Java program?
1,543,536,286,000
Say I have several directories of varying length in the form /tmp/(1) I. First Majuscule Roman Numeral/01. First Arabic Numeral/a. First Grapheme /tmp/(2) II. Second Majuscule/03. Third Arabic/d. Fourth that I want to parse so the output is I.01.a. II.03.d. What's the awk and/or sed solution?
Assuming those are the only directories beneath /tmp: $ find /tmp -mindepth 3 -type d -print | sed -e 's/\.[^/]*/./g' -e 's/^.* //' -e 's#/##g' I.01.a. II.03.d. The find command finds the directories on level 3 and prints out their full path. The result of this step is /tmp/(1) I. First Majuscule Roman Numeral/01. First Arabic Numeral/a. First Grapheme /tmp/(2) II. Second Majuscule/03. Third Arabic/d. Fourth The sed command does three things: replaces everything from a dot up until the next slash with a dot, creating /tmp/(1) I./01./a. /tmp/(2) II./03./d. removes the bit up until the first space, I./01./a. II./03./d. removes the slashes, I.01.a. II.03.d.
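The sed pipeline can be fed sample paths directly to check the result, without creating the directories:

```shell
printf '%s\n' \
  '/tmp/(1) I. First Majuscule Roman Numeral/01. First Arabic Numeral/a. First Grapheme' \
  '/tmp/(2) II. Second Majuscule/03. Third Arabic/d. Fourth' |
  sed -e 's/\.[^/]*/./g' -e 's/^.* //' -e 's#/##g'
```

This prints I.01.a. and II.03.d., one per line.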
awk or sed to Parse Elements from Directory Path
1,543,536,286,000
I ran the following Bash function, which appends a string with expanded variables to the end of my bashrc: alias() { echo "alias $repo=\"$HOME\"/$repo/$repo.sh" >> "$HOME"/.bashrc; source "$HOME"/.bashrc 2>/dev/null; }; alias To run it I copied it, pasted it into the Bash terminal (where it appeared once) and executed it by hitting Enter. The output I got in ~/.bashrc was about a thousand lines of the above string: alias $repo=\"$HOME\"/$repo/$repo.sh The very last command (source /home/user/.bashrc) kept being executed endlessly (I assume due to endless calls to the function), so I immediately aborted with the ^C key combo. After removing all thousand repeats of the string with a Nano mark-set and cut operation, I'd like to ask why this happened (and kept happening).
You defined a function called alias, added a line to .bashrc that calls alias ..., and then sourced .bashrc into your shell (which has the function defined in it already). The alias you sourced calls the function, which adds another line and sources the script again, calling the function again once for each time it's already run, leading to exponential growth. Change the name of your function.
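A fixed version, as a sketch — a different function name so it cannot shadow the alias builtin, a single-quoted printf format so $HOME stays unexpanded in the written line, and no re-sourcing of .bashrc from inside the function (open a new shell, or source once manually, instead):

```shell
add_repo_alias() {
    # appends e.g.: alias myrepo="$HOME/myrepo/myrepo.sh"
    printf 'alias %s="$HOME/%s/%s.sh"\n' "$repo" "$repo" "$repo" >> "$HOME/.bashrc"
}

repo=myrepo
add_repo_alias
```

Because nothing here calls the function again, running it once appends exactly one line.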
Bash function that creates an alias gets called endlessly
1,543,536,286,000
I have a specific problem that has brought up some general questions. The specific problem: I have a sudoers rule applied to a group of somewhat restricted users. The rule in question is %pusers ALL=(ALL) NOPASSWD: /bin/vi /etc/httpd/conf/*. A user would like to cd /etc/httpd/conf and then sudo vi httpd.conf (or sudo vi ./httpd.conf). Currently sudoers does not allow this, but it will allow running vi against the absolute path: sudo vi /etc/httpd/conf/httpd.conf. I assume that the relative path is not being translated to the full path before sudo checks to see if it’s allowed, but I don't know exactly why or how I could work around this. I'm specifically working with RHEL7 and have not tested this on other *nix systems, though I would expect to see similar behavior. The general questions: How does sudo handle relative paths passed to it on the execution side, like in the example above? (Not relative command paths, relative paths against which a command executes.) How can one get sudo to recognize/translate a command's relative execution path when checking it against the sudoers rules?
Giving regular user sudo access to an editor like vi or vim is risky. See this question for an in-depth explanation. With a modern version of sudo, you could specify in your sudoers: %pusers ALL=(ALL) sudoedit /etc/httpd/conf/* Then, your users could specify their editor of choice using the VISUAL or EDITOR environment variables (as usual), and then use $ sudoedit /etc/httpd/conf/httpd.conf or $ cd /etc/httpd/conf $ sudoedit httpd.conf Instead of sudoedit, sudo -e could be used as well. Modern sudo recognizes sudoedit as a command built-in to sudo, and when you use it, sudo understands that the arguments are supposed to be pathnames of files to edit (only), and then handling of relative pathnames becomes possible. In a general case (= when not using the sudoedit keyword), sudo won't presume to know anything about the command parameters, so if you specify a command with specific parameters in the sudoers file, only "dumb" string matching is possible. Strictly speaking, you don't actually even need sudo for granting access to Apache configuration files. Apache does not particularly care what the configuration file permissions are, as long as Apache itself can read them (and if Apache uses ports <1024, it normally starts as root, so reading files is not a problem). So, since you already have a group, you could do this: chgrp -R pusers /etc/httpd/conf chmod g+rws /etc/httpd/conf chmod g+rw /etc/httpd/conf/* RHEL7 even uses the "usergroups" system by default, so your users' umask values should already be either 002 or 007. If that's true, then these simple one-time settings are all you need to give your users write access to Apache configuration files. Your users can even create new files in the configuration directory, and they will automagically get the correct group ownership and permissions. 
Actually, the most likely way to accidentally mess up the permissions of the configuration files in this scheme would be to add new files to it as root: the root user usually has umask without the group write bit (traditionally 022 or more strict 077), and root power can trump the setgid bit on directories as well. You might have to learn some new habits, but in this case, unlearning "you must be root to configure Apache" might actually be a good thing. (And if you think sudo provides more audit trail, then really consider placing the configuration files under Git or some other version control system.)
How do relative execution paths work through sudo
1,543,536,286,000
If I create a file with sudo vim test and then open it up in my account (without sudo) why does the editor complain when I try to modify the file (i.e. read only option is set)? According to ls -l I am the owner, and the owner has rwx Why can't I write to the file?
If you create a file with sudo vim test the owner will be root, not you, so if later you want to edit the file you either need to change the owner from root to you or change the permissions. See: jordim@bucketlist-196008:~/test$ sudo vim test jordim@bucketlist-196008:~/test$ ls -l total 8 (...) -rw-r--r-- 1 root root 2 Feb 22 15:43 test (...) The created file belongs to root:root and only root has read and write permissions for it. The rest of the users in the group can only read it, as can any user from other groups. The question is: why are you creating the file with "sudo" if you need to edit it later as a regular user?
If I own a file, why do I need to change permission to write to it?
1,543,536,286,000
I was was using gpg --gen-key till I got to enter the passphrase where I get: ┌──────────────────────────────────────────────────────┐ │ Please enter Passphrase, │ │ │ │ Passphrase: ________________________________________ │ │ │ │ <OK> <Cancel> │ └──────────────────────────────────────────────────────┘ After some digging I found out this came from gpg-agent which in turn uses pinentry. All I can do here is enter passphrase (which works fine) and press tab, which makes the blinking cursor disappear. But how to I select <ok> or <cancel> in pinentry? gpg was installed on osx via brew When I try CTL+C I get: gpg: signal Interrupt caught ... exiting but can still continue typing a passphrase.
With the cursor in the PIN entry area, pressing Enter will activate the “OK” button. Pressing Tab will highlight the “OK” button and then the “Cancel” button; pressing Enter with a button highlighted will activate that button.
how to navigate with pinentry
1,543,536,286,000
I have a csv file in the following format: 0.25,20171225,20:00 3,20171226,23:59 3.5,20171231,00:01 1.75,20180108,05:43 How can I add a value to the first field in the last line from the command line? So if I wanted to add 1.25 the file would look like this: 0.25,20171225,20:00 3,20171226,23:59 3.5,20171231,00:01 3,20180108,05:43 Since the file is constantly growing there is no fixed line-number for the last line.
Here's awk solution: awk -F, -v OFS=, 'l{print l}{l=$0}END{$1+=1.25;print}' file The idea is to print previous line instead of the current one. -F, and -v OFS=, set the input and output field separator l{print l} prints variable l only if it is not zero (numeric) or empty (string) -- that prevents printing first line, because l is not set yet l=$0 sets variable l to whole line finally we print the last line at the very END changing its first field Output: 0.25,20171225,20:00 3,20171226,23:59 3.5,20171231,00:01 3,20180108,05:43
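Running it on the sample file confirms the behaviour (this relies on $0 still holding the last record inside the END block, which GNU awk and mawk both do):

```shell
printf '%s\n' '0.25,20171225,20:00' '3,20171226,23:59' \
              '3.5,20171231,00:01' '1.75,20180108,05:43' > f.csv

awk -F, -v OFS=, 'l{print l}{l=$0}END{$1+=1.25;print}' f.csv
```

The first three lines pass through unchanged, and the last becomes 3,20180108,05:43 (1.75 + 1.25 = 3).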
Add value to a number in the last line of a csv file
1,543,536,286,000
I wish to back up /etc and /root with duplicity (I use it often, usually without issues). I wish to exclude .cache from the /root directory. I try: duplicity incremental --full-if-older-than 30W --include /etc \ --include /root --exclude '/root/.cache' --exclude / \ --verbosity info / scp://TARGET This generally works, but /root/.cache is included in the backup. Instead of '/root/.cache' I tried different expression patterns: /root/.cache, "/root/.cache", "**.cache", '**.cache', ''**.cache'' and several others, with the same result. According to the duplicity manual (I have 0.7.12 on openSUSE), the expression "**.cache" should work well. Am I misreading the manual, or am I doing something wrong?
Tilia, order matters when excluding in duplicity. The parameters are used in the order given. In your example '/root/.cache' is compared to --include /etc --include /root <-- and matches here --exclude '/root/.cache' --exclude / Try moving the specific exclusion in front of the more general include, e.g. --include /etc --exclude /root/.cache --include /root --exclude /** That should work. Moving the cache out of root's home folder would work as well, of course. By the way, file selection has its own section in the duplicity man page http://duplicity.nongnu.org/duplicity.1.html#sect9 Regarding **.cache: yes, it will work, but it will exclude any path ending in '.cache'. If you want the exact name you should use **/.cache. Finally, there is the --exclude-if-present parameter, which can be quite handy if there are just some folders to be excluded. --exclude-if-present filename Exclude directories if filename is present. Allows the user to specify folders that they do not wish to backup by adding a specified file (e.g. ".nobackup") instead of maintaining a comprehensive exclude/include list. This option needs to come before any other include or exclude options. ..ede/duply.net
Unable to exclude .cache from duplicity backup