1,473,661,284,000
What does the stand-alone hyphen (between -C and -O) stand for in this command? curl -C - -O http://www.intersil.com/content/dam/Intersil/documents/fn67/fn6742.pdf FWIW, I'm following a tutorial here.
-C - automatically determines how to resume the transfer, based upon the input and output files. From man curl (note the second paragraph):

-C/--continue-at <offset>
    Continue/Resume a previous file transfer at the given offset. The given offset is the exact number of bytes that will be skipped, counting from the beginning of the source file before it is transferred to the destination. If used with uploads, the FTP server command SIZE will not be used by curl.

    Use "-C -" to tell curl to automatically find out where/how to resume the transfer. It then uses the given output/input files to figure that out.

    If this option is used several times, the last one will be used.
what does Curl's stand-alone hyphen (-) mean?
1,473,661,284,000
I'm trying to disable some services from starting at boot time on my Linux Mint 12 laptop. So I installed chkconfig, which has worked great for me before on Fedora. However, on Linux Mint 12, it gives me tons of errors. Here is an example, trying to disable the rsync service:

$ sudo chkconfig rsync off
insserv: warning: script 'K01acpi-support' missing LSB tags and overrides
The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
insserv: warning: script 'cryptdisks-udev' missing LSB tags and overrides
insserv: Default-Start undefined, assuming empty start runlevel(s) for script `cryptdisks-udev'
insserv: Default-Stop undefined, assuming empty stop runlevel(s) for script `cryptdisks-udev'
The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
insserv: warning: script 'acpid' missing LSB tags and overrides
insserv: Default-Start undefined, assuming empty start runlevel(s) for script `acpid'
insserv: Default-Stop undefined, assuming empty stop runlevel(s) for script `acpid'
The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
insserv: warning: script 'plymouth-upstart-bridge' missing LSB tags and overrides
insserv: Default-Start undefined, assuming empty start runlevel(s) for script `plymouth-upstart-bridge'
insserv: Default-Stop undefined, assuming empty stop runlevel(s) for script `plymouth-upstart-bridge'
The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
insserv: warning: script 'rsyslog' missing LSB tags and overrides
insserv: Default-Start undefined, assuming empty start runlevel(s) for script `rsyslog'
insserv: Default-Stop undefined, assuming empty stop runlevel(s) for script `rsyslog'
The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
insserv: warning: script 'friendly-recovery' missing LSB tags and overrides
insserv: Default-Start undefined, assuming empty start runlevel(s) for script `friendly-recovery'
insserv: Default-Stop undefined, assuming empty stop runlevel(s) for script `friendly-recovery'
The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
insserv: warning: script 'udevtrigger' missing LSB tags and overrides
insserv: Default-Start undefined, assuming empty start runlevel(s) for script `udevtrigger'
insserv: Default-Stop undefined, assuming empty stop runlevel(s) for script `udevtrigger'
The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
insserv: warning: script 'udev-finish' missing LSB tags and overrides
insserv: Default-Start undefined, assuming empty start runlevel(s) for script `udev-finish'
insserv: Default-Stop undefined, assuming empty stop runlevel(s) for script `udev-finish'
The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
insserv: warning: script 'plymouth-stop' missing LSB tags and overrides
insserv: Default-Start undefined, assuming empty start runlevel(s) for script `plymouth-stop'
insserv: Default-Stop undefined, assuming empty stop runlevel(s) for script `plymouth-stop'
The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
insserv: warning: script 'apport' missing LSB tags and overrides
insserv: Default-Start undefined, assuming empty start runlevel(s) for script `apport'
insserv: Default-Stop undefined, assuming empty stop runlevel(s) for script `apport'
The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
insserv: warning: script 'dbus' missing LSB tags and overrides
insserv: Default-Start undefined, assuming empty start runlevel(s) for script `dbus'
insserv: Default-Stop undefined, assuming empty stop runlevel(s) for script `dbus'
The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
insserv: warning: script 'cryptdisks-enable' missing LSB tags and overrides
insserv: Default-Start undefined, assuming empty start runlevel(s) for script `cryptdisks-enable'
insserv: Default-Stop undefined, assuming empty stop runlevel(s) for script `cryptdisks-enable'
The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
insserv: warning: script 'hwclock' missing LSB tags and overrides
insserv: Default-Start undefined, assuming empty start runlevel(s) for script `hwclock'
insserv: Default-Stop undefined, assuming empty stop runlevel(s) for script `hwclock'
The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
insserv: warning: script 'lightdm' missing LSB tags and overrides
insserv: Default-Start undefined, assuming empty start runlevel(s) for script `lightdm'
insserv: Default-Stop undefined, assuming empty stop runlevel(s) for script `lightdm'
The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
insserv: warning: script 'udevmonitor' missing LSB tags and overrides
insserv: Default-Start undefined, assuming empty start runlevel(s) for script `udevmonitor'
insserv: Default-Stop undefined, assuming empty stop runlevel(s) for script `udevmonitor'
The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
insserv: warning: script 'ufw' missing LSB tags and overrides
insserv: Default-Start undefined, assuming empty start runlevel(s) for script `ufw'
insserv: Default-Stop undefined, assuming empty stop runlevel(s) for script `ufw'
The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
insserv: warning: script 'binfmt-support' missing LSB tags and overrides
insserv: Default-Start undefined, assuming empty start runlevel(s) for script `binfmt-support'
insserv: Default-Stop undefined, assuming empty stop runlevel(s) for script `binfmt-support'
The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
insserv: warning: script 'udev' missing LSB tags and overrides
insserv: Default-Start undefined, assuming empty start runlevel(s) for script `udev'
insserv: Default-Stop undefined, assuming empty stop runlevel(s) for script `udev'
The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
insserv: warning: script 'irqbalance' missing LSB tags and overrides
insserv: Default-Start undefined, assuming empty start runlevel(s) for script `irqbalance'
insserv: Default-Stop undefined, assuming empty stop runlevel(s) for script `irqbalance'
The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
insserv: warning: script 'cron' missing LSB tags and overrides
insserv: Default-Start undefined, assuming empty start runlevel(s) for script `cron'
insserv: Default-Stop undefined, assuming empty stop runlevel(s) for script `cron'
The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
insserv: warning: script 'nmbd' missing LSB tags and overrides
insserv: Default-Start undefined, assuming empty start runlevel(s) for script `nmbd'
insserv: Default-Stop undefined, assuming empty stop runlevel(s) for script `nmbd'
The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
insserv: warning: script 'plymouth-splash' missing LSB tags and overrides
insserv: Default-Start undefined, assuming empty start runlevel(s) for script `plymouth-splash'
insserv: Default-Stop undefined, assuming empty stop runlevel(s) for script `plymouth-splash'
The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
insserv: warning: script 'procps' missing LSB tags and overrides
insserv: Default-Start undefined, assuming empty start runlevel(s) for script `procps'
insserv: Default-Stop undefined, assuming empty stop runlevel(s) for script `procps'
insserv: warning: script 'acpi-support' missing LSB tags and overrides
The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
insserv: warning: script 'network-manager' missing LSB tags and overrides
insserv: Default-Start undefined, assuming empty start runlevel(s) for script `network-manager'
insserv: Default-Stop undefined, assuming empty stop runlevel(s) for script `network-manager'
The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
insserv: warning: script 'smbd' missing LSB tags and overrides
insserv: Default-Start undefined, assuming empty start runlevel(s) for script `smbd'
insserv: Default-Stop undefined, assuming empty stop runlevel(s) for script `smbd'
The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
insserv: warning: script 'dmesg' missing LSB tags and overrides
insserv: Default-Start undefined, assuming empty start runlevel(s) for script `dmesg'
insserv: Default-Stop undefined, assuming empty stop runlevel(s) for script `dmesg'
The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
insserv: warning: script 'module-init-tools' missing LSB tags and overrides
insserv: Default-Start undefined, assuming empty start runlevel(s) for script `module-init-tools'
insserv: Default-Stop undefined, assuming empty stop runlevel(s) for script `module-init-tools'
The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
insserv: warning: script 'network-interface' missing LSB tags and overrides
insserv: Default-Start undefined, assuming empty start runlevel(s) for script `network-interface'
insserv: Default-Stop undefined, assuming empty stop runlevel(s) for script `network-interface'
The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
insserv: warning: script 'console-setup' missing LSB tags and overrides
insserv: Default-Start undefined, assuming empty start runlevel(s) for script `console-setup'
insserv: Default-Stop undefined, assuming empty stop runlevel(s) for script `console-setup'
The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
insserv: warning: script 'anacron' missing LSB tags and overrides
insserv: Default-Start undefined, assuming empty start runlevel(s) for script `anacron'
insserv: Default-Stop undefined, assuming empty stop runlevel(s) for script `anacron'
The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
insserv: warning: script 'modemmanager' missing LSB tags and overrides
insserv: Default-Start undefined, assuming empty start runlevel(s) for script `modemmanager'
insserv: Default-Stop undefined, assuming empty stop runlevel(s) for script `modemmanager'
The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
insserv: warning: script 'udev-fallback-graphics' missing LSB tags and overrides
insserv: Default-Start undefined, assuming empty start runlevel(s) for script `udev-fallback-graphics'
insserv: Default-Stop undefined, assuming empty stop runlevel(s) for script `udev-fallback-graphics'
The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
insserv: warning: script 'plymouth' missing LSB tags and overrides
insserv: Default-Start undefined, assuming empty start runlevel(s) for script `plymouth'
insserv: Default-Stop undefined, assuming empty stop runlevel(s) for script `plymouth'
The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
insserv: warning: script 'network-interface-security' missing LSB tags and overrides
insserv: Default-Start undefined, assuming empty start runlevel(s) for script `network-interface-security'
insserv: Default-Stop undefined, assuming empty stop runlevel(s) for script `network-interface-security'
The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
insserv: warning: script 'alsa-store' missing LSB tags and overrides
insserv: Default-Start undefined, assuming empty start runlevel(s) for script `alsa-store'
insserv: Default-Stop undefined, assuming empty stop runlevel(s) for script `alsa-store'
The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
insserv: warning: script 'alsa-restore' missing LSB tags and overrides
insserv: Default-Start undefined, assuming empty start runlevel(s) for script `alsa-restore'
insserv: Default-Stop undefined, assuming empty stop runlevel(s) for script `alsa-restore'
The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
insserv: warning: script 'avahi-daemon' missing LSB tags and overrides
insserv: Default-Start undefined, assuming empty start runlevel(s) for script `avahi-daemon'
insserv: Default-Stop undefined, assuming empty stop runlevel(s) for script `avahi-daemon'
The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
insserv: warning: script 'plymouth-log' missing LSB tags and overrides
insserv: Default-Start undefined, assuming empty start runlevel(s) for script `plymouth-log'
insserv: Default-Stop undefined, assuming empty stop runlevel(s) for script `plymouth-log'
The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
insserv: warning: script 'mysql' missing LSB tags and overrides
insserv: Default-Start undefined, assuming empty start runlevel(s) for script `mysql'
insserv: Default-Stop undefined, assuming empty stop runlevel(s) for script `mysql'
The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
insserv: warning: script 'atd' missing LSB tags and overrides
insserv: Default-Start undefined, assuming empty start runlevel(s) for script `atd'
insserv: Default-Stop undefined, assuming empty stop runlevel(s) for script `atd'
The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
insserv: warning: script 'hostname' missing LSB tags and overrides
insserv: Default-Start undefined, assuming empty start runlevel(s) for script `hostname'
insserv: Default-Stop undefined, assuming empty stop runlevel(s) for script `hostname'
The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
insserv: warning: script 'cups' missing LSB tags and overrides
insserv: Default-Start undefined, assuming empty start runlevel(s) for script `cups'
insserv: Default-Stop undefined, assuming empty stop runlevel(s) for script `cups'
The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
insserv: warning: script 'hwclock-save' missing LSB tags and overrides
insserv: Default-Start undefined, assuming empty start runlevel(s) for script `hwclock-save'
insserv: Default-Stop undefined, assuming empty stop runlevel(s) for script `hwclock-save'
insserv: script virtualbox: service vboxdrv already provided!
insserv: script virtualbox: service virtualbox already provided!
The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
insserv: warning: script 'setvtrgb' missing LSB tags and overrides
insserv: Default-Start undefined, assuming empty start runlevel(s) for script `setvtrgb'
insserv: Default-Stop undefined, assuming empty stop runlevel(s) for script `setvtrgb'

It seems to have worked when I run:

# chkconfig rsync rsync off

Is it bad to continue to use chkconfig?
Can anyone suggest an alternate service-managing program, or a way to fix the errors when running chkconfig?
Since it is an Ubuntu-based distro, you can also use update-rc.d:

update-rc.d rsync remove

Or install rcconf, which is a really nice text-based user interface.
Chkconfig on Linux Mint 12 giving tons of errors
1,473,661,284,000
I'm used to commands supporting multiple filename arguments if possible, but unlink doesn't:

%> unlink a b
unlink: extra operand `b'
Try `unlink --help' for more information.

I ended up using a for loop. Is there a technical reason why unlink only takes one filename?
unlink(1) is an intentionally simplified variant of rm(1). I'm not certain why it was created, but it's probably because, under the hood, rm(1) is implemented in terms of the unlink(2) system call. The reason for unlink(1), I assume, is to provide a more direct path to that system call. Someone doubtless came up with a use case where rm(1) did the wrong thing and decided the best way to fix it was to provide this direct path.
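Since unlink(1) takes exactly one operand, a shell loop is the usual workaround for several files. A minimal sketch (the scratch directory and file names are just placeholders):

```shell
# Create two scratch files, then remove each with a separate unlink call.
dir=$(mktemp -d)
touch "$dir/a" "$dir/b"

for f in "$dir/a" "$dir/b"; do
    unlink "$f"
done

ls -A "$dir"   # prints nothing: both files are gone
```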
Why does unlink(1) support only one file?
1,473,661,284,000
How can I notify all online (desktop) users from the command line? I know that if I want to be notified when something finishes, I can do something like this:

sudo apt-get update | notify-send "apt-get update" "update finished"

What should I use to notify all users (or some specific user)?
You can try wall. On my KDE machine, a small panel pops up with the message sent with wall. Of course, the message also appears in all terminals, but maybe your users do not have a terminal open. Example: echo "It is 9 o'clock and all is well." | wall
CentOS: Alert all desktop users from command-line
1,473,661,284,000
Is there a command-line tool for GNU/Linux that will analyse download and upload speed, packet loss, latency, and other factors that indicate internet connectivity status?
You can use iftop to show bandwidth usage.
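For latency and packet loss specifically, plain ping already reports both; here is a rough sketch using the loopback address as a stand-in target (substitute a real remote host to exercise the actual internet link). Tools such as mtr combine ping and traceroute into one report, if installed.

```shell
# The last two summary lines show packets transmitted/received,
# % packet loss, and min/avg/max round-trip times.
ping -c 3 127.0.0.1 | tail -n 2
```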
Commandline tool for comprehensive and integrated testing of internet connectivity
1,473,661,284,000
I have a sequence of commands chained together with a lot of pipes, something like this:

awk '{ if ($8 ~ "isad") {print $2, $5, "SE"} else {print $2, $5, "ANT"} }' "/var/log/apache2/other_vhosts_access.log" | grep -v '127.0.0.1' | tr '[' '\0' | tr [:lower:] [:upper:] | sort -t' ' -s -k3

This basically filters the Apache log and prints three columns of info. The first column is the IP address, the second is the time, and the third is a string. The output could be sorted on any column, so I need to use '-m' with sort for the time field. The sorting order could also be reversed. I want a string to store the arguments to sort, and to have the combined strings executed. Something like this:

$AWK_CMD | $SORT_CMD $SORT_KEY $SORT_REV

where

SORT_CMD="sort -t' '"
SORT_KEY="-k$1"   # $1 from cmd line arg
SORT_REV="$2"     # Either -r or blank

I am able to build such strings; when I echo them, they look fine. The problem is how to execute them. I get errors like:

awk '{ if ($8 ~ "isad") {print $2, $5, "SE"} else {print $2, $5, "ANT"} }' "/var/log/apache2/other_vhosts_access.log" | grep -v '127.0.0.1' | tr '[' '\0' | tr [:lower:] [:upper:] | sort -t' ' -s -k3
Running ...
awk: '{
awk:  ^ invalid char ''' in expression

Please ignore whether 'sort' sorts properly; that can be fixed. I wish to know how I can build the final command string in steps. The ways I tried to execute this command in the script are:

$final_cmd
`$final_cmd`

Edit: the script that I'm trying to use:

KEY=$1       # Values {1, 2, 3}
REVERSE=$2   # Values {0, 1}
SORT_CMD="sort -t' ' -s"
SORT_KEY="-k$KEY"
if [[ $KEY -eq 2 ]]
then
    SORT_KEY="-m -k2"
fi
if [[ $REVERSE -eq 1 ]]
then
    SORT_REV="-r"
else
    SORT_REV=
fi
final_cmd="awk '{ if (\$8 ~ \"isad\") {print \$2, \$5, \"SE\"} else {print \$2, \$5, \"ANT\"} }' $LOG_FILE '|' grep -v '127.0.0.1' '|' tr '[' '\0' '|' tr [:lower:] [:upper:] '|' $SORT_CMD $SORT_KEY $SORT_REV"
echo $final_cmd
echo "Running ..."
$final_cmd
The different parts can be put in shell functions:

awkfilter() {
    awk '{ if ($8 ~ "isad") {print $2, $5, "SE"} else {print $2, $5, "ANT"} }'
}

toupper() {
    tr '[:lower:]' '[:upper:]'
}

dosort() {
    sort -t' ' -s -k3
}

awkfilter < /var/log/apache2/other_vhosts_access.log | grep -vF 127.0.0.1 | tr '[' '\0' | toupper | dosort

Then you can make things optional more easily:

dosort() {
    rev=
    if [ "$2" = "reverse" ]
    then
        rev=-r
    fi
    sort -t' ' -s -k"$1" ${rev:+"$rev"}
}

(Note the ${rev:+"$rev"} form: with plain ${rev+"$rev"}, an empty $rev would still expand to an empty argument, which sort would try to open as a file.)

When your command line is starting to be really long, writing it in a script and breaking it into parts (with functions) is usually really helpful.
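Another common approach, instead of building the command as one string, is to collect the variable parts in a bash array; word splitting then stays under control and no eval is needed. A sketch with hypothetical KEY/REVERSE values standing in for the script's $1 and $2, and a three-line sample standing in for the awk/grep/tr pipeline:

```shell
#!/usr/bin/env bash
KEY=2        # which field to sort on (stand-in for $1)
REVERSE=1    # 1 means reverse order (stand-in for $2)

# Each option is a separate array element, so spaces never get re-split.
sort_args=(-t' ' -s "-k$KEY")
if [[ $REVERSE -eq 1 ]]; then
    sort_args+=(-r)
fi

# Sample input instead of the real log pipeline.
printf '%s\n' 'a 1' 'b 3' 'c 2' | sort "${sort_args[@]}"
# prints: b 3 / c 2 / a 1
```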
How to build a long command string?
1,473,661,284,000
I would like to create a file that just contains a binary number. I think that touch can be used to create an empty file, but is there any way I can fill it with a binary number e.g. 10 (ten)? And how can I validate that the file contains the binary value of ten? See also How can I check the Base64 value for an integer?
Convert the number to hex (in this case A) and then do: echo -en '\xA' > file
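A sketch of the full round trip, using printf with an octal escape (decimal 10 is hex 0A, octal 012) and od to verify what actually landed in the file; the file name comes from mktemp:

```shell
f=$(mktemp)
printf '\012' > "$f"   # write the single byte 0x0A (decimal 10)
od -An -tx1 "$f"       # dump the file as hex bytes: 0a
```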
How can I create a file that just contains a binary number?
1,473,661,284,000
When I do:

display file.pdf

it shows this (screenshot omitted): I can't resize the little window that it adds, but I can remove it by clicking on the x. I can also remove the checkerboard effect, which I think comes from assuming transparency, with:

display -alpha opaque file.pdf

The text is now rough and slightly hard to read. It seems to have been rendered in black and white where anti-aliasing should have used more shades/colors. How can this be fixed?
You need to remove the alpha channel, rather than making it opaque; this will render the alpha information correctly (including anti-aliasing), then remove it:

display -alpha remove file.pdf

The small window shown above the main window is used to scroll around the document.
How to get display to show a pdf properly
1,473,661,284,000
I have a tree:

.
├── 1
│   └── a
│       └── script.a.sh
├── 2
│   └── a
│       └── script.b.sh
...
└── a
    └── script.sh

And I need to find script.*.sh. I execute ./a/script.sh:

#!/bin/bash
#
path="/tmp/test"
readarray files < <(find ${path} -path "*/a/script.*.sh")
if [ ${#files[@]} -ne 0 ]
then
    for file in "${files[@]}"
    do
        echo ${file}
    done
fi

Output:

/tmp/test/1/a/script.a.sh
/tmp/test/2/a/script.b.sh

The script works! But if I change the find command to find ${path} -path "*/a" -name "script.*.sh", the script outputs nothing. My logic: find all files matching the pattern /tmp/test/*/a/script.*.sh, which is what I mean to say with find ${path} -path "*/a" -name "script.*.sh". The questions that remain at the moment:

What does the find ${path} -path "*/a" -name "script.*.sh" command really do?
Is there a debug mode in find? I mean, how does it understand what I say to it (example: the path(s) where it will search, without find . -name "*te.st*", just ./*/*te.st*)?
Can -name and -path live on the same command line (I need cases, examples, explanation), or are they mutually exclusive? Or can we say -path is an extended version of -name?

Guys, yes, I can use man and Google. In those sources I did not find full answers to all my questions.
But if I change find command to find ${path} -path "*/a" -name "script.*.sh" the script does output nothing.

Indeed, it must not. A file matching the predicate -path "*/a" cannot also match the predicate -name "script.*.sh".

My logic find all files with pattern /tmp/test/*/a/script.*.sh what I say with find ${path} -path "*/a" -name "script.*.sh".

You seem to be assuming that the value of $path is /tmp/test. In that case, it's unclear to me why you are making this so complicated. Since you can exactly describe the paths you want via a glob, I would be inclined to leave find entirely out of the picture:

files=("${path}"/*/a/script.*.sh)

But if you want to do it with find, then I don't see the point of combining -name and -path for this case. As you already observed, your original find command does the job. And it's simpler and clearer. Alternatively, you could move the partial-path matching to the starting-point list:

find ${path}/*/a -maxdepth 1 -name "*script.*.sh"

What does find ${path} -path "*/a" -name "script.*.sh" command do in real?

It searches for files in the subtree rooted at ${path} whose paths (starting with ${path}) match the pattern */a, and whose basenames match the pattern script.*.sh. Each in full. The paths of any files matching both criteria will be printed, but the conditions cannot both be satisfied for the same file, so nothing will be printed.

There is debug mode in the find? I mean how does it understand what I say to it (example: path(s) where it will search, without find . -name "*te.st*", just ./*/*te.st*).

POSIX does not specify any debug facilities for find, but inasmuch as you have tagged linux and ubuntu, you are almost certainly using the GNU findutils version of find. This implementation does have debug facilities, which are documented on its manual page. You are looking for the -D option. It's unclear how helpful you would find it, but for maximum debug information you could add -D all to your find command, before the starting-point list.

Can -name, -path live on the same command line (need cases, examples, explanation) or they mutually exclusive? Or can we say -path extended version of -name?

Yes, of course they can be used together, and occasionally it even makes sense to combine them. But the points that may have confused you are that they are independent, and -path tests the whole path, not just the directory part. Perhaps this is what you mean when you ask whether -path is an extended version of -name. Your attempt at combining them failed because your -path predicate does not actually match the files you want to select. But this would work:

find ${path} -path "*/a/*" -name "script.*.sh"

Also, combinations that use the -o connective (meaning "or") instead of default or explicit -a ("and") may find more use, or combinations in which one or both of the predicates is negated via -not.
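The two behaviours are easy to reproduce on a scratch tree; the directory layout below is just a stand-in for the original /tmp/test:

```shell
root=$(mktemp -d)
mkdir -p "$root/1/a"
touch "$root/1/a/script.a.sh"

# -path "*/a" must match the *entire* path, so no file can also match -name:
find "$root" -path "*/a" -name "script.*.sh"     # prints nothing

# -path "*/a/*" matches the directory part, -name the basename:
find "$root" -path "*/a/*" -name "script.*.sh"   # prints .../1/a/script.a.sh
```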
find -name -path
1,473,661,284,000
There are lots of questions/answers explaining what characters one shouldn't use in Linux filenames etc. I am looking for a non-alphanumeric character that I can put at the front of a filename to bump it to the top of an ls listing without requiring escape characters to handle it on the command line. On macOS I use <Filename> to sort in the GUI, but < is not good on the command line, of course; one needs to escape it as appropriate. Similarly, -Filename- doesn't work well on the command line. Is there one special character, sorting before all alphanumeric characters, that doesn't have a special meaning for (most) shells? Or are all the early-sorting characters taken? :-( :-) Thanks, Ashley. PS I've seen 111- used, but that doesn't sit right with me...
Without knowing it, you've asked two very complicated questions: quoting and sorting.

Sort order is determined by locale. What comes first is... not fixed. e.g.

$ touch Hello hello There there
$ LANG=C ls -1
Hello
There
hello
there
$ LANG=en_US ls -1
hello
Hello
there
There

The next is the quoting question. This is very shell dependent. So in my standard shell (ksh99) the ! is a good character. But this fails in bash:

$ ls
!README  0  1  2  a  b  c
$ cat !README
hello
$ bash
bash-4.2$ cat !README
bash: !README: event not found

If we work our way through the ASCII sequence, the first "useful" character might be a +. This does not appear to be a special character in ksh/bash/zsh/csh. But since quoting is shell specific, and it's possible some commands might take a + as an argument (historically, head did this), we can never be certain. And, of course, the LANG setting can override this (so + isn't first!):

% LANG=en_US ls -1
0
1
2
a
b
c
+hello

So, in general... there's no specific character that is guaranteed to come first.
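For example, in the C locale a leading + (ASCII 0x2B) does sort ahead of both upper- and lower-case letters, which makes it a reasonable candidate as long as a bytewise collation is forced; the file names here are made up:

```shell
dir=$(mktemp -d)
cd "$dir"
touch +notes alpha Beta

# ASCII order: '+' (0x2B) < 'B' (0x42) < 'a' (0x61)
LC_ALL=C ls -1   # prints: +notes, Beta, alpha
```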
Is there a character in Linux that can be used at start of a filename to bump it to the top in regular sort order but doesn't require escaping etc?
1,473,661,284,000
This: Save all the terminal output to a file. Except after the fact. Meaning that instead of preparing to record or pipe all output to a file, I am dealing with output that has already taken place and that I omitted to record. Rather than spend minutes scrolling up 7000 lines of output and copying and pasting them into a document, I have to think there is an easier way to get the current output. Considering that this may depend upon the terminal emulator: I am using Konsole and zsh in this case. How can I save the terminal output to a file after the fact?
With Konsole, File -> Save Output As works, as does Ctrl+Shift+S, but you will only save what is in the scrollback buffer.
Save all terminal output to a file, after the fact
1,473,661,284,000
I hope this isn't a duplicate, but I did some due diligence trying to find an answer. I want to pipe the original output to a second command, only if the first command fails. e.g.

cat file.txt | command1 || command2

where both command1 and command2 read from STDIN. I want command1 to read everything from cat file.txt, but, if it fails, I want to run all that input into command2.

EDIT: I now realize that my question was a tad ambiguous. I said command1 and command2 both read from STDIN, but what I meant was that I wanted both to receive the STDOUT from cat file.txt (not necessarily that specific command). I think my initial desire was to put something like this in an alias. Being 5 months later, I don't remember how this was solved; most likely by writing a bash script.
Welcome to Stack Exchange! Lot's of good info here. Thank you for your due diligence first. The issue I see with your command is that it is "Begging the Question." In other words, it assumes you know the answer before you even start. Let's have a closer look: cat file.txt The contents of "file.txt" will go to STDOUT. If the "cat" command generates any errors, they will do to STDERR. AFTER the command completes, the exit code is set. 0 = success; anything else is an error. The issue here is, you don't know if you should execute command1 or command2 until the output has already been sent. Which means, it's gone. Compare that to this: cat file.txt | sort This is, effectively, 2 commands: 1: cat file.txt 1> temp_file 2: sort < temp_file Ok not REAL Unix commands, done this way for clarification. 1: Send STDOUT to "tempfile" (that is the 1> part). Also send STDERR to the console. 2: Run the sort command, and read STDIN from "tempfile". Putting the pipe | between the two commands obviates the need for a temp file. The pipe means: Redirect STDOUT from "cat" to STDIN of "sort". Your issue: You want to redirect STDOUT the the STDIN of another command, but you do not yet know which command that is, so you cannot redirect it. Before I get to a possible solution, let me give you a little background: In Unix, there is a way to execute command the way you want to. To wit: my_command argument1 argument2 && echo "Success" || echo "Failure" After my_command runs (and returns its exit status) with STDOUT & STDERR on the console, run one or the other of the echoes. && means "Do this if the last command was successful" || means "Do this if the last command failed" You are very close with your original command. You have: cat file.txt | command1 || command2 Let's change that to this: cat file.txt && command1 || command2 That gets you everything you want, except for the output. Question 1: If cat file.txt works, then should command1 read STDOUT and ignore STDERR? 
Likewise, if it fails, then should command2 read STDERR and ignore STDOUT? Or, should either command read both STDOUT & STDERR? I'll answer both questions for completeness. Either way, you'll need a way to store the output for later processing. Normally a temp_file. Capture STDOUT & STDERR in the same temp_file: cat file.txt > /tmp/temp_file 2>&1 Capture STDOUT to temp_file#1 and STDERR to temp_file#2: cat file.txt > /tmp/out_file 2>/tmp/err_file Question 2: Do command1 and/or command2 allow for a file name on the command line? command1 temp_file Do command1 and/or command2 allow for a redirect from a file? command2 < temp_file Do command1 and/or command2 read STDIN? cat temp_file | command1 Note: The Word Count command is an example. wc temp_file cat temp_file | wc wc < temp_file The second form may be problematic in the && || form. I include it here for completeness. Ultimately, you will have to decide what command1 and command2 need in order to work. We can now combine these together: cat file.txt > /tmp/out_file 2>/tmp/err_file && command1 < /tmp/out_file || command2 < /tmp/err_file Then delete the temp files: rm /tmp/out_file /tmp/err_file That should give you what you want. I'm sure other gurus here can dream up a way of not using a temp_file. I like to keep things simple and easily understandable because I am not in a programming shop and I have no idea who will have to maintain my code.
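To make the combined one-liner above concrete, here is a self-contained sketch. The file names are made up for the demo, sort stands in for command1 and cat for command2:

```shell
# Demo of the pattern: capture STDOUT and STDERR separately, then feed the
# right one to a follow-up command depending on cat's exit status.
printf 'b\na\n' > /tmp/demo_in.txt
cat /tmp/demo_in.txt > /tmp/out_file 2>/tmp/err_file \
  && sort < /tmp/out_file \
  || cat < /tmp/err_file
rm -f /tmp/demo_in.txt /tmp/out_file /tmp/err_file
```

On the success path this prints the sorted input; had cat failed, the error text captured in /tmp/err_file would have been shown instead.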
How do I pipe to a command only if another command fails (keeping the original input)?
1,473,661,284,000
I'd like to hide ugly data from being shown by command line tools like cat (and maybe simple text editors too) which often get confused by binary data. For example, a VT100 terminal sometimes gets misconfigured by binary output. <?php // PHP code shown by text tools on the command line __halt_compiler(); // here some fake EOF mark for simple text processing tools // hidden ugly data Can end of file be spoofed to simple stream-based text viewer tools, especially to the Linux command line tools (but maybe also to some Windows tools)? I am looking for a solution from within the mixed text/binary file, so that other people using cat or similar do not get trash on their screen.
One 'solution' would be to use the alternative screen buffer which many (but not all) terminals support. Consider the following command: printf "Hello, \e[?1049h ABCDEFG \e[?1049l World\n" On a terminal supporting alternative screen buffers, you would see Hello, World, possibly with a very sudden flash of the terminal. The \e[?1049h sequence will cause the terminal to switch to the alternative screen buffer, where everything printed afterwards will end up. The \e[?1049l sequence switches back to the main screen buffer. An example with php: <?php echo "Hello"; // Nothing to see here...^[[?1049h echo ", World!\n"; //^[[?1049l ?> where ^[ represents the escape character. The alternative screen buffer is used by many programs which like to create terminal user interfaces, but want to restore the contents of the terminal after closing. This is how less can display contents using the entire window, but upon exiting, all of the previous commands are still visible. If you have unbuffer installed, you can verify this: $ unbuffer less -f /dev/null | xxd 00000000: 1b5b 3f31 3034 3968 1b3d 0d0d 1b5b 4b1b .[?1049h.=...[K. as you can see, the first thing printed is \x1b[?1049h which causes the terminal to switch screen buffers. This would not work in any editor (that I'm aware of), since editors do not interpret these escape sequences; they display the raw control characters instead.
Is there any control character or hack to prevent simple command line tools from showing subsequent data?
1,473,661,284,000
I have to run this cryptic block of code. git clone https://github.com/OpenICC/xcalib.git cd xcalib cmake CMakeLists.txt sudo make install The procedure I'm following mentions the uninstall process. sudo make uninstall Why do the make install and uninstall commands lack any file or program name? In my mind they should be done like this. sudo make install program_name sudo make uninstall program_name the same as sudo apt-get install program_name sudo make install begs the question "install what?"
The commands that are executed by make install (or any invocation of make) are defined in the Makefile (and files included by the Makefile). For simple programs, you can just look for a line install: and see the commands in the lines below. But makefiles can also be quite complicated and scattered across various subdirectories. For details, see the manual for make, or an introduction to make. As @Romeo Ninov wrote, you can also use the command make -n install to see what commands would be executed. Beware that for larger makefiles this output may not be accurate, and if you haven't built the program yet it will likely show you all the commands to build before showing the commands to install.
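To see the mechanism end-to-end, here is a hypothetical two-target Makefile (the xcalib paths are made up, not taken from the real project) together with make -n, which prints the install commands without running them:

```shell
# Build a throwaway Makefile with install/uninstall targets, then ask make
# to show (not run) what "make install" would do.  \t is the required tab.
workdir=$(mktemp -d)
printf 'install:\n\tcp xcalib /usr/local/bin/xcalib\n' >  "$workdir/Makefile"
printf 'uninstall:\n\trm -f /usr/local/bin/xcalib\n'   >> "$workdir/Makefile"
make -n -C "$workdir" install
```

The dry-run output is exactly where you would see the real cp/install commands for xcalib's actual Makefile, too.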
sudo make install - what is being installed?
1,473,661,284,000
Say I have written the following command but haven't yet pressed enter to execute it: $ ls dir1 dir2 dir3 Is there a way to replace given characters without manually changing them in every location they are? For example, I'd like to press some shortcut, enter string to be replaced (say, dir) and then enter another string as its replacement (say 'directory`).
There's a replace-string autoloadable widget for that. Add to your ~/.zshrc: autoload replace-string zle -N replace-string zle -N replace-string-again bindkey '\eg' replace-string-again bindkey '\er' replace-string Then press Alt+r to invoke. Alt+g to repeat the last substitution. See info zsh replace-string for details.
Replace string in command to be executed in zsh
1,473,661,284,000
I'm using the default ksh on OpenBSD 6.2 (based on pdksh) with Vi command line editing mode enabled. I'm trying to get the arrow keys to work properly as a complement to h, l, j and k (as I'm on a Dvorak keyboard). As far as I can tell, they don't work at all. It does not matter whether I'm in "input" or "command" mode. The current key bindings include: ^[[A = up-history ^[[B = down-history ^[[C = forward-char ^[[D = backward-char These are also the character sequences produced by my arrow keys if I use Ctrl+V followed by an arrow key. The arrow keys work as expected in Emacs command line editing mode, but as a long time Vi user, I feel somewhat crippled when using it. My feeling is that the Escape that is sent by the arrow key is interpreted as if I pressed Esc... I get the equivalent behaviour by manually typing e.g. Esc[A as when I press Up-arrow (it places me in command mode and then in insert mode at the end of the line). Question: Has anyone been able to get the arrow keys to work intuitively in Vi-mode in OpenBSD's ksh?
I did a quick foray into /usr/src/bin/ksh on my OpenBSD system, seeing as I had the actual sources checked out anyway. I had a cursory glance at c_ksh.c, emacs.c and vi.c and it looks as if the Vi mode was retrofitted into pdksh from nsh at some point (around 1989/1990). The exact words used are /* $OpenBSD: vi.c,v 1.55 2018/01/16 22:52:32 jca Exp $ */ /* * vi command editing * written by John Rochester (initially for nsh) * bludgeoned to fit pdksh by Larry Bouzane, Jeff Sparkes & Eric Gisin * */ The bind-able functions all live in emacs.c, as does the x_bind() function which gets called by the bind builtin, while vi.c seems to have its own implementation of some of them under different names that are not called from x_bind(). Therefore I think I can conclude that the bind builtin is a no-op in Vi-mode in this particular shell. UPDATE (2018-02-04): After reporting this to the openbsd-misc list, it was confirmed that bind does indeed not do anything in Vi command line editing mode. A patch will go in to modify the ksh manual on OpenBSD so that this is mentioned: bind string=[editing-command] ... In Emacs editing mode, the specified editing command is bound to the given string. Future input of the string will cause the editing command to be immediately invoked. Bindings have no effect in Vi editing mode.
Arrow keys in OpenBSD's ksh, command line editing, Vi-mode
1,473,661,284,000
Consider a directory with the following files. 20160909_154139.jpg 20160909_154038.jpg 20160909_153929.jpg 20160909_153927.jpg 20160908_121201.jpg 20160908_121155.jpg When I do ls with no arguments, I get the files in the order above. Let's say instead I just wanted the files in this order between 20160909_154038.jpg and 20160908_121201.jpg. Is there some argument I can pass to ls to specify this desire?
That can certainly be achieved by piping the output into awk ls | awk '/^20160909_154038\.jpg$/,/^20160908_121201\.jpg$/'
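Here is the same range pattern run on the sample names from the question, with printf standing in for ls so no real files are needed:

```shell
# awk's /start/,/stop/ range prints every line from the first match of the
# start pattern through the first match of the stop pattern, inclusive.
printf '%s\n' 20160909_154139.jpg 20160909_154038.jpg 20160909_153929.jpg \
              20160909_153927.jpg 20160908_121201.jpg 20160908_121155.jpg |
  awk '/^20160909_154038\.jpg$/,/^20160908_121201\.jpg$/'
```

This prints the four names from 20160909_154038.jpg through 20160908_121201.jpg and nothing else.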
Is it possible to list the files between two names alphanumerically?
1,473,661,284,000
This question is related to https://askubuntu.com/q/826288/295286 In my search online, I could find no mention of whether bash 3.2 comes with readline support. Thus, I would like to know if there is a systematic way of finding out what libraries bash uses. In the linked question, I used locate to search for readline.so, but that approach seems a bit unreliable to me.
This is probably a duplicate (I recall it being answered). But: bash bundles readline, and will use the bundled version of readline unless it is specially configured, and the bundled version is statically linked, so you are unlikely to see it as a shared library dependency of bash. For example: $ ldd /bin/bash linux-vdso.so.1 (0x00007ffeae9a5000) libncurses.so.5 => /lib/x86_64-linux-gnu/libncurses.so.5 (0x00007fe9bc832000) libtinfo.so.5 => /lib/x86_64-linux-gnu/libtinfo.so.5 (0x00007fe9bc608000) libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fe9bc403000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fe9bc062000) /lib64/ld-linux-x86-64.so.2 (0x000055a30b725000) On the other hand, the dependency in Debian/testing upon libncurses.so.5 is unnecessary (bash uses only the termcap interface which is provided by libtinfo.so.5). If you want to see that bash uses readline, use nm -D (Linux...) to see the symbol table: $ nm -D /bin/bash |grep readline 00000000006ffab0 B bash_readline_initialized 00000000006fce00 B current_readline_line 00000000006fcdf8 B current_readline_line_index 00000000006fce08 B current_readline_prompt 000000000046d600 T initialize_readline 0000000000475400 T pcomp_set_readline_variables 000000000046d360 T posix_readline_initialize 000000000049a450 T readline 0000000000499d30 T readline_internal_char 0000000000499300 T readline_internal_setup 0000000000499430 T readline_internal_teardown 00000000006f7910 D rl_gnu_readline_p 00000000006fca20 D rl_readline_name 00000000007003f8 B rl_readline_state 00000000006f7914 D rl_readline_version The external symbols (essentially the same approach) show these entrypoints for the termcap interface: U tgetent U tgetflag U tgetnum U tgetstr U tgoto U tputs (some people become confused by the libncurses dependency and suppose that bash uses ncurses — termcap applications are a special case).
How to know if bash has readline library support?
1,473,661,284,000
I am using this for d in ./*/ ; do (cd "$d" && cp -R ../lib . ); done in a shell script to copy the lib folder into every subfolder of my parent directory. But the lib folder is also getting copied inside lib itself. How can I avoid that?
Without extglob: for d in */ ; do if [ "$d" != "lib/" ]; then cp -R lib "$d" fi done Or just delete it afterwards... (well, unless lib/lib exists beforehand!) for d in */; do cp -R lib "$d"; done rm -r lib/lib (Somewhat amusingly, GNU cp says cp: cannot copy a directory, 'lib', into itself, 'lib/lib', but does it anyway.)
How to copy using for loop? [duplicate]
1,473,661,284,000
let's say I have these two commands: $ cat file1 file1_a file1_b file1_c file1_d And: $ cat file2 file2_a file2_b file2_c file2_d How can I combine these outputs using a custom separator (e.g. ...) so that I get the following output: $ # some fancy command like { cat file1 & cat file2 } | combine --separator='...' file1_a...file2_a file1_b...file2_b file1_c...file2_c file1_d...file2_d ?
I like to use the paste command. paste -d. file1 - - file2 < /dev/null produces desired output file1_a...file2_a file1_b...file2_b file1_c...file2_c file1_d...file2_d - refers to stdin, we use this twice to triple our dots </dev/null is used because we do not want anything between those dots.
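As a self-contained check of that invocation, building the two sample files in a throwaway directory first:

```shell
# Each "-" is one empty column read from stdin (which is /dev/null here),
# so the single "." delimiter appears three times between the two files.
dir=$(mktemp -d)
printf '%s\n' file1_a file1_b file1_c file1_d > "$dir/file1"
printf '%s\n' file2_a file2_b file2_c file2_d > "$dir/file2"
paste -d. "$dir/file1" - - "$dir/file2" < /dev/null
```

Once stdin hits end-of-file, paste keeps treating the two "-" columns as empty on every remaining line, so all four output lines get the triple dot.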
How to combine two command outputs line by line? [duplicate]
1,465,559,211,000
The ls command shows the following: a code controller.js mani sparat this ubuntu.gif The l command then shows something similar: a/ code/ controller.js mani/ sparat/ this/ ubuntu.gif What is the difference between them?
l is probably a shell alias. On my Ubuntu 14.04 by default it is : alias l='ls -CF' From the man ls page, these flags mean : -C list entries by columns -F, --classify append indicator (one of */=>@|) to entries Type alias l to find out what actually the l command is calling.
what is difference between l and ls commands [duplicate]
1,465,559,211,000
Is it possible for bash to find commands in a case-insensitive way? eg. these command lines will always run python: python Python PYTHON pyThoN
One way is to use the alias shell builtin, for example: alias Python='python' alias PYTHON='python' alias pyThoN='python' For a better approach, the command_not_found_handle() function can be used as described in this post: regex in alias. For instance, this will lowercase any command that is not found and retry it: command_not_found_handle() { LOWERCASE_CMD=$(echo "$1" | tr '[A-Z]' '[a-z]') shift command -p "$LOWERCASE_CMD" "$@" return $? } Unfortunately it does not work with builtin commands like cd. Also (if you have Bash 4.0) you can add a tiny function in your .bashrc to convert uppercase commands to lowercase before executing them. Something similar to this: function :() { "${1,,}" } Then you can run the command by calling : Python on the command line. NB as @cas mentioned in the comments, : is a reserved bash word. So to avoid inconsistencies and issues you can replace it with c or something not already reserved.
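The lowercasing step that command_not_found_handle() relies on can be checked in isolation; [:upper:]/[:lower:] below is the locale-safe spelling of the same [A-Z]/[a-z] ranges:

```shell
# tr maps every uppercase letter to its lowercase counterpart, so all of
# these spellings collapse to the same command name.
printf '%s\n' python Python PYTHON pyThoN | tr '[:upper:]' '[:lower:]'
```

Every line comes out as python, which is then what command -p goes on to run.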
bash case-insensitive commands matching
1,465,559,211,000
From the terminal, how can I print a specific section of the output of man something? For example, if I wanted to get some information about the return value of the C function write, I'd like to see something like this: RETURN VALUE On success, the number of bytes written is returned (zero indicates nothing was written). It is not an error if this number is smaller than the number of bytes requested; this may happen for example because the disk device was filled. See also NOTES. On error, -1 is returned, and errno is set appropriately. If count is zero and fd refers to a regular file, then write() may return a failure status if one of the errors below is detected. If no errors are detected, or error detection is not performed, 0 will be returned without causing any other effect. If count is zero and fd refers to a file other than a regular file, the results are not specified. ERRORS EAGAIN The file descriptor fd refers to a file other than a socket and has been marked nonblocking (O_NONBLOCK), and the write would block. See open(2) for further details on the O_NONBLOCK flag. EAGAIN or EWOULDBLOCK The file descriptor fd refers to a socket and has been marked nonblocking (O_NONBLOCK), and the write would block. [...] instead of: WRITE(2) Linux Programmer's Manual WRITE(2) NAME write - write to a file descriptor SYNOPSIS #include <unistd.h> ssize_t write(int fd, const void *buf, size_t count); DESCRIPTION write() writes up to count bytes from the buffer pointed buf to the file referred to by the file descriptor fd. The number of bytes written may be less than count if, for example, there is insufficient space on the underlying physical medium, or the RLIMIT_FSIZE resource limit is encountered (see setrlimit(2)), or the call was interrupted by a signal handler after having written less than count bytes. (See also pipe(7).)
For a seekable file (i.e., one to which lseek(2) may be applied, for example, a regular file) writing takes place at the current file offset, and the file offset is incremented by the number of bytes actually [...]
To quote my own post from Meta: Linking to man pages I already have a favored method for this, which you can read about in the less man page in two places: LESS='+/\+cmd' man less and LESS='+/LESS[[:space:]]*Options' man less (See what I did there?)
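If you'd rather have the section on standard output than inside the pager, a small awk filter over the rendered page does the job. A hedged sketch, shown here on simulated man output so it runs even without any man pages installed:

```shell
# Print from one top-level heading up to (but not including) the next one.
# The printf lines stand in for `man 2 write | col -b`.
printf '%s\n' 'NAME' '   write - demo' 'RETURN VALUE' '   0 on success.' 'ERRORS' |
  awk '/^ERRORS/{exit} /^RETURN VALUE/{f=1} f'
```

With a real page you would pipe man 2 write | col -b into the same filter (col -b strips the overstrike formatting first).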
How can I print a section of a manual (man)?
1,465,559,211,000
When trying to copy a file from Linux (Raspbian to be precise, though I don't think it matters) to Windows using SCP: scp a.txt {user}@{ip}:\C\Users\{user}\a.txt The file is copied, but to C:\Users\{user}\CUsers{user}a.txt. It looks as if I need to escape the '\' somehow, but couldn't figure out how.
While I have never used scp on Windows, so I am only guessing, it certainly looks like the backslashes are ignored. Or, rather, as if they are taken as escape characters and, since they don't escape anything relevant, are being ignored. Consider this, on a Linux machine: $ cd \usr\share bash: cd: usrshare: No such file or directory As you can see, the \ were ignored and the path concatenated to a single string, just like what you describe above. The default target location for scp is the user's home directory. Since that is \C\Users\userName, your attempt to specify a path is taken for a file name (\C\Users\userName\a.txt becomes CUsersuserNamea.txt) and the file is saved in the default location with that name: C:\Users\userName\CUsersuserNamea.txt. A simple solution, in this case, would be to not specify a path: scp a.txt user@ip:
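You can reproduce the backslash-eating locally without any Windows host; the shell consumes the unquoted backslashes before the command ever sees the argument (printf stands in for scp here):

```shell
# Unquoted: the shell strips the backslashes, concatenating the path parts.
printf '%s\n' \C\Users\me\a.txt
# Single-quoted: the backslashes survive intact.
printf '%s\n' '\C\Users\me\a.txt'
```

The first command prints CUsersmea.txt, the second \C\Users\me\a.txt — which is why quoting (or simply omitting) the remote path in the scp command keeps the backslashes for the remote side.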
Why the defined path is used as file name when I transfer files through scp to a Windows host?
1,465,559,211,000
Let's say there's a string like this test="1/2/3 4/5/6 7/8/9/0" The segments are separated by spaces, and the fields within each segment by '/'. I want to return a result like this by taking the second field of each string segment. 2 5 8 Is it possible to do this with cut? Or do I need something else? newstring=$(echo $test | cut -d "/" -f2) returns only 2 and I am not sure what to do.
One thing you could do is replace spaces with newlines and then use awk or cut. Then, replace newlines with spaces. You'll want to echo the entire thing to get a final newline again: $ echo $(echo "$test" | tr ' ' '\n' | awk -F'/' '{print $2}' | tr '\n' ' ') 2 5 8 Or $ echo $(echo "$test" | tr ' ' '\n' | cut -d/ -f 2 | tr '\n' ' ') 2 5 8 You could also just use perl: $ echo "$test" | perl -lane 's#.*?/(.+?)/.*#$1# for @F; print "@F"' 2 5 8
How do I cut a string already separated by spaces?
1,465,559,211,000
I have something like: find . -type f -print0 | xargs -0 file | grep Matroska | cut -d: -f 1-1 | xargs rm I want something like: find . -type f -filemagic "Matroska" -delete
-exec indeed can be used as a predicate. find(1): Execute command; true if 0 status is returned. So this example would be: find . -type f -exec sh -c 'file "$0" | grep -q Matroska' '{}' ';' -and -delete Obviously, instead of -delete there can be -ls or -print0 or more predicates.
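The same predicate pattern can be tried anywhere by swapping the file|grep test for a plain grep on file contents; the sample tree below is made up for the demo:

```shell
# -exec acts as a test: only files for which the sh -c command exits 0
# (here: files containing "Matroska") make it through to -print.
dir=$(mktemp -d)
printf 'Matroska segment\n' > "$dir/a.mkv"
printf 'plain text\n'       > "$dir/b.txt"
find "$dir" -type f -exec sh -c 'grep -q Matroska "$0"' {} \; -print
```

Only a.mkv is printed; replace -print with -delete and the filtered files are removed instead.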
How can I turn `file` tool into a predicate for `find`?
1,465,559,211,000
I'd like to see how long it takes a certain app to fully start. Is there a way to do this launching the app via terminal using some command?
There's no standard definition for “fully start”. If you come up with a definition, there may or may not be a way to detect it. If your definition of “fully start” is “wait until the application becomes idle, waiting for user input”, then you can trace its system calls and look how long it takes to start reading user input. strace -o myapp.strace -tt myapp When the application has finished starting, look at the trace file in myapp.strace and locate the line at which the application issued a wait for user input. For example, in the example below, the select call I show is the point at which the application waited for user input to do anything else; the next line corresponds to something happening to the application. 01:07:53.975957 execve("/usr/bin/xterm", ["xterm"], [/* 83 vars */]) = 0 … 01:07:54.319288 select(5, [3 4], [], NULL, NULL) = 1 (in [3]) 01:07:57.129481 recvmsg(3, {msg_name(0)=NULL, msg_iov(1)=[{"\10\4\344\1\343q\257 Some applications don't go idle, but keep polling forever. Then you have to use your own judgement to decide when the application has “fully started”. Some applications may start responding to user input while they're still loading some files or otherwise performing some predefined startup actions. There's no formal definition of “fully started”, so you get to pick what counts as “fully started”. If you count “fully started” as “shows a GUI window”, you can use xtoolwait, which launches an application and waits until a window is displayed, then exits. Note that it can return a false positive if some other application happens to display a new window at the same time, since windows are not associated with applications in a fully reliable way. time xtoolwait xterm
Timing start up time for app launched via terminal command
1,465,559,211,000
I have a large number of files that contain backslashes \ that I would like to manipulate, but whenever I try something like: $ ls -li 2036553851 -rw-rw-r-- 1 user user 6757 May 20 00:10 Simplex_config\\B1:B3\\_1.csv 2036553766 -rw-rw-r-- 1 user user 6756 May 20 00:07 Simplex_config\\B1:B3\\_2.csv 2036554099 -rw-rw-r-- 1 user user 6785 May 20 00:20 Simplex_config\\B1:B3\\_3.csv 2036553974 -rw-rw-r-- 1 user user 6785 May 20 00:15 Simplex_config\\B1:B3\\_4.csv $ find . -type f -name 'Simplex*.csv' | xargs cat > looksee.txt I receive a No such file or directory error. I have considered changing the filenames and then manipulating, but I am curious to see if there was an easier solution with the inode. I came up with: #!/bin/sh if [ -f looksee.txt ]; then rm -rf looksee.txt fi ls -i Simplex_config*.csv | awk '{ print $1 }' > inode_list.txt while IFS= read -r inode; do find . -inum $inode -exec cat {} \; >> looksee.txt done < inode_list.txt But this is very cumbersome and I would like to try to find a way to parse the output from ls -i Simplex_config*.csv and pipe it to another command in a one-liner -- is there such an option available?
1. find . -type f -name 'Simplex*.csv' -print0 | xargs -0 cat > looksee.txt From man xargs --null -0 Input items are terminated by a null character instead of by whitespace, and the quotes and backslash are not special (every character is taken literally). Disables the end of file string, which is treated like any other argument. Useful when input items might contain white space, quote marks, or backslashes. The GNU find -print0 option produces input suitable for this mode. 2. find . -type f -name 'Simplex*.csv' -exec cat {} + > looksee.txt From man find -exec command ; Execute command; true if 0 status is returned. All following arguments to find are taken to be arguments to the command until an argument consisting of ; is encountered. The string {} is replaced by the current file name being processed everywhere it occurs in the arguments to the command, not just in arguments where it is alone, as in some versions of find. Both of these constructions might need to be escaped (with a \) or quoted to protect them from expansion by the shell. The specified command is run once for each matched file. The command is executed in the starting directory. There are unavoidable security problems surrounding use of the -exec action; you should use the -execdir option instead. -exec command {} + This variant of the -exec action runs the specified command on the selected files, but the command line is built by appending each selected file name at the end; the total number of invocations of the command will be much less than the number of matched files. The command line is built in much the same way that xargs builds its command lines. Only one instance of {} is allowed within the command. The command is executed in the starting directory. 3. cat Simplex_config* > looksee.txt if you have 1 level of subpath only.
Manipulate multiple files by inode
1,465,559,211,000
I have multiple sub-directories each with different depths. I need to search for the last occurrence of a string in a specific file type (say *.out). How can I accomplish this? I have tried: grep -r 'string' **/*.out | tail -1 But that gives me only the last string of the last file.
I would use something like: for i in `find . -name "*.out" -type f`; do grep -l 'string' "$i" grep 'string' "$i" | tail -1 done The first grep prints the file name; the second prints the last matching line of that file below it. This works as long as the file names don't contain whitespace or \[*?. See cuonglm's answer for a robust solution.
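A whitespace-safe variant of the same idea pushes the loop into find -exec, so odd file names never go through word splitting; the sample file and the search string below are made up:

```shell
# find hands each batch of *.out files to an inline sh loop; for every file
# we print its name, then the last line containing the search string.
dir=$(mktemp -d)
printf 'a string 1\nb string 2\n' > "$dir/x.out"
find "$dir" -name '*.out' -type f -exec sh -c '
  for f in "$@"; do
    grep -l "string" "$f"
    grep "string" "$f" | tail -n 1
  done' sh {} +
```

For the sample file this prints its path followed by "b string 2", the last matching line.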
Find the last occurence of a string in a given filetype in all subdirectories
1,465,559,211,000
In the man pages it says: -C list entries by columns However, I really cannot notice any difference between the output of ls and ls -C. Could someone explain this to me?
To add what @muru said in the comments; have a look at info coreutils ls `-C' `--format=vertical' List files in columns, sorted vertically. This is the default for `ls' if standard output is a terminal. It is always the default for the `dir' program. GNU `ls' uses variable width columns to display as many files as possible in the fewest lines. I take this to mean -C exists specifically for the case where you redirect or pipe the output and want to preserve columnation. Otherwise ls will switch to ls -1 when it detects that it's not displaying to a terminal.
What does the option -C achieve in ls output?
1,465,559,211,000
I have directories set up a bit like this ~/code ~/code/src ~/code/build -> /path/to/somewhere/else That last one's a symlink. If I do this cd ~/code/build ls .. then I get the listing for /path/to/somewhere, but from other remarks and my own experience, I'd expected to see the listing for ~/code -- I'd swear that this used to work the other way round. I'm using zsh and bash on Ubuntu. Is there a setting for this or is it deeply ingrained into POSIX or something?
Not the issue of ls. It's how symlinks work. The .. gets you into the parent of the current directory, the directory doesn't know you got to it through a symlink. The shell has to intervene to prevent this behaviour. For the shell builtin cd, there is special handling that doesn't just call chdir but memorizes the full directory path and tries to figure out what you want. ls, however, is not a builtin. The shell has to change .. to a different path before passing it to ls if you want to get what you expect. zsh option CHASE_DOTS helps you with that. Generally speaking, symlinks to directories are a dirty business. For critical and semi-permanent applications, rather use mount --bind.
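You can reproduce this with a throwaway layout; all paths here are made up under mktemp:

```shell
# top/code/build is a symlink to top/elsewhere; marker files show which
# directory ".." really resolves to when ls receives it literally.
top=$(mktemp -d)
mkdir -p "$top/code/src" "$top/elsewhere"
touch "$top/marker_top" "$top/code/marker_code"
ln -s "$top/elsewhere" "$top/code/build"
( cd "$top/code/build" && ls .. )
```

The listing shows marker_top (the physical parent of elsewhere is $top), not marker_code — the shell's logical cd never reaches ls, which hands a literal .. to the kernel.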
ls .. doesn't work properly with symlinks
1,465,559,211,000
I am developing an API in a Unix environment for virtual machines. Most of the modules are developed in python. I have a few questions on this. I have the file extension as abc.py. I would like to make this a command. For example, virtman dominfo [vmid] should be the command syntax. Now I have to type ./virtman.py dominfo [vmid] to achieve this. And the first line of the python file is #!/usr/bin/python. So how can I make this a command? My echo $PATH looks like '/bin:/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin0' I read a lot of articles but I didn't get a clear picture, so any hints/suggestions would be helpful.
You seem to be mistaken in that abc.py would not be a command. If you can execute it, then it is a command, just one with a dot in the name. Execute in the sense that you can do ./abc.py, so the execute bits must be set. If you have to do python abc.py then it is not a command (yet). In general, to make a normal python file abc.py executable you should make sure the first line reads: #!/usr/bin/env python (This assumes you have /usr/bin/env as a program, and that it will find the python command, which might be in /usr/local/bin. It also assumes that you want to run the default python (which is normally a link to a particular python version like python2.7); you could also use python3 if that is available as a command). After that do chmod +x abc.py mv abc.py abc And then you can run ./abc. If the current directory is in your path, or if you move abc to a directory in your path, you should be able to execute abc from anywhere.¹ There are however disadvantages of renaming and moving the file: you can no longer do from abc import SomeClass, as the file has been renamed If the file is under revision control, the renamed copy might no longer be tracked So instead, what I normally do, is make a new file /usr/local/bin/abc that looks like: #!/usr/bin/env python from abc import main main() and have at the bottom of abc.py: def main(): doing the real stuff if __name__ == '__main__': main() The directory of abc.py needs to be in the PATH python searches for modules, but this way the file doesn't have to be changed, and it can be used by any program as an import, and started as python abc.py. ¹ The mv gets rid of the dot in the command name, but is not strictly necessary: you can invoke ./abc.py if you don't rename it.
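The whole mechanism (shebang, execute bit, directory on PATH) can be sketched end to end; a /bin/sh script is used below so the demo runs even where no python interpreter is installed, and the virtman name is just the question's example:

```shell
# Create a command in a temporary bin directory and run it by bare name.
bindir=$(mktemp -d)
printf '#!/bin/sh\necho hello from virtman\n' > "$bindir/virtman"
chmod +x "$bindir/virtman"
PATH="$bindir:$PATH" virtman
```

Swap the printf'd script body for a file starting with #!/usr/bin/env python and the same three steps apply unchanged.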
Basic steps to develop python API in Unix environment
1,465,559,211,000
I have, in the same directory, several odt files. I'd like to have in one PDF document the first page of these odt files, sorted alphabetically based on their filename. For example, if I have these files: a.odt b.odt c.odt I would have a resulting PDF that has 3 pages: the first one of a.odt, then the first one of b.odt and then the first one of c.pdf. Does any one of you think about a command to do that?
#!/bin/bash for i in *.odt; do echo "Converting [ $i ]" unoconv -f pdf "$i" echo "Extracting 1st page of [ $i ]" i="${i%odt}pdf" pdftk P="$i" cat P1-1 output "$i".1 done echo "Merging pdfs" pdftk *.1 cat output result.pdf rm *.1 You have to install unoconv and pdftk. Ubuntu: sudo apt-get install unoconv pdftk
Concatenate in a PDF the first page of several odt files alphabetically
1,465,559,211,000
Is there some way to change the current url (of the current tab or a specific tab) of a running web browser through the command line? It can be any browser that runs on Linux, preferably a lightweight one. I don't mean closing and re-opening the browser. I was only able to open new tabs in browsers, not change the current url of an existing one. I could not quickly or easily find a way through qdbus either (not that there isn't one). If some plugin allows this, that would be fine too.
In Firefox you can use MozRepl addon to control Firefox via telnet. For example, to change current url of a current tab to http://v4.ident.me: (echo "content.location.href = 'http://v4.ident.me/'"; sleep 2) | telnet localhost 4242 > /dev/null
change url of web browser thru command line?
1,465,559,211,000
How can I locate a file under a specific directory using locate in CentOS, from the terminal? locate searches the whole database!
There is no option for that functionality in the output from man locate on CentOS 6.5, at least. But, you could get pseudo-functionality by changing a search term. For example, locate cron might produce too much output, but locate '/var/log/cron' would limit the results to those items in the locate database that match the search terms. Or, a pipe would work: locate cron | grep '/var/log/' Otherwise, use find: find /path/to/search -name '*cron*' or similar.
How to locate a file in a directory
1,465,559,211,000
It seems when you pass a file to lpr (or pipe to it), it attempts to guess the MIME type. Sometimes, however, it guesses wrong—and then attempts to print something "interesting" at best. Is there a way to manually override the MIME type?
The MIME type is sent in the IPP attribute document-format, which you can specify with the -o option: lpr -o document-format=text/plain ... Note that if you try a document format that your CUPS server doesn't support, you'll get an error: $ lpr -o document-format=image/svg+xml drawing.svg lpr: Unsupported document-format "image/svg+xml". That error is actually coming back from the server.
Specify MIME type to CUPS's lpr command
1,465,559,211,000
I want to use multiple instances of a command line parameter, such as the -d option used by PHP for passing PHP options. I am currently using the getopts command in bash. With PHP invocation it would look like this: php -f aPHPscript.php -d memory_limit=120M -d apc=1 -d max_execution_time=120 How would the multiple instances of the parameter be extracted in a bash script? Is there a bash command for doing that?
Bash doesn't typically care about the values of the arguments, it's more their ordering, and how they're separated on the command line, by default parsing them based on a space between each argument. You can see this with a simple for loop construct in a shell script like so: #!/bin/bash echo "" echo "ARGS: $@" echo "" echo "parsed args:" for i in "$@"; do echo "$i" done Example $ ./parse.bash -f aPHPscript.php -d memory_limit=120M -d apc=1 \ -d max_execution_time=120 ARGS: -f aPHPscript.php -d memory_limit=120M -d apc=1 -d max_execution_time=120 parsed args: -f aPHPscript.php -d memory_limit=120M -d apc=1 -d max_execution_time=120 In the above, each iteration through the loop is "peeling off" the next argument that was passed into the script parse.bash. So rather than use getopts you could always do something like this. Using while + case I typically don't use getopts and do things as you're inquiring about like so with a while loop using a case statement to parse the arguments as required. $ more parse2.bash #!/bin/bash while [[ $# -gt 1 ]] do key="$1" shift # -f aPHPscript.php -d memory_limit=120M -d apc=1 #+ -d max_execution_time=120 case $key in -f) f_ARG="$1" shift ;; -d) d_ARGS=( "${d_ARGS[@]}" "$1" ) shift ;; *) # unknown option ;; esac done echo "$f_ARG" echo "${d_ARGS[@]}" When we run it with your arguments we can see that it was able to parse your -d arguments into an array, $d_ARGS. $ ./parse2.bash -f aPHPscript.php -d memory_limit=120M -d apc=1 \ -d max_execution_time=120 aPHPscript.php memory_limit=120M apc=1 max_execution_time=120 References How do I parse command line arguments in bash? Chapter 27. Arrays
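The question also asks whether getopts itself can do this. It can: a repeated option simply reappears on successive getopts iterations, so you can accumulate its values into an array. A minimal sketch:

```shell
# Collect one -f value and any number of -d values with getopts.
parse() {
  local OPTIND opt f_arg=
  d_args=()                      # deliberately not local, so callers can read it
  while getopts 'f:d:' opt; do
    case $opt in
      f) f_arg=$OPTARG ;;
      d) d_args+=("$OPTARG") ;;  # each repeated -d appends another element
    esac
  done
  printf '%s\n' "$f_arg" "${d_args[@]}"
}
parse -f aPHPscript.php -d memory_limit=120M -d apc=1 -d max_execution_time=120
```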
What is the bash syntax for extracting the values from multiple instances of the same argument?
1,465,559,211,000
At the moment I have a set of files of the form of: /dir one/a picture.jpg /dir two/some picture.jpg What I want to do is copy each file, changing the end of the filename to -fanart.jpg: /dir one/a picture.jpg /a picture-fanart.jpg /dir two/some picture.jpg /some picture-fanart.jpg I've managed to get it working for the situation where there are no spaces: % for i in `find . -name "*.jpg"`; do cp $i "${i%.*}"-fanart.jpg ;done; but I can't get it working where there are spaces.
Command substitution (`...` or $(...)) is split on newline, tab and space character (not only newline), and filename generation (globbing) is performed on each word resulting of that splitting. That's the split+glob operator. You could improve things by setting $IFS to newline and disable globbing, but here, best is to write it the proper way: find . -name "*.jpg" -type f -exec sh -c ' for i do cp "$i" "${i%.*}-fanart.jpg" done' sh {} + You could also use pax for that: pax -rws'/\.jpg$/-fanart&/' -s'/.*//' . . Or zsh's zmv: autoload zmv zmv -QC '(**/)(*)(.jpg)(D.)' '$1$2-fanart$3'
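If you prefer a shell loop to find -exec, a NUL-delimited stream read with read -d '' also survives spaces (bash syntax; the tree below is created just for the demo):

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/dir one"
touch "$tmp/dir one/a picture.jpg"
# -print0 emits NUL-separated names; read -d '' splits on NUL, not whitespace
while IFS= read -r -d '' i; do
  cp "$i" "${i%.*}-fanart.jpg"
done < <(find "$tmp" -type f -name '*.jpg' ! -name '*-fanart.jpg' -print0)
ls "$tmp/dir one"
```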
Copying recursively files with spaces
1,465,559,211,000
I'm looking to install the package for tempfile but am not finding it? possibly use mktemp but I'm not sure if there is a difference in behaviour besides a dot notation in the temp name? $ tempfile # /tmp/file1wJzkz $ mktemp # /tmp/tmp.IY8k24NayM
The name generated by mktemp can be modified to have no dots. For example: mktemp XXXXX => 8U5yc mktemp /tmp/XXXXX => /tmp/tsjoG From man mktemp: DESCRIPTION Create a temporary file or directory, safely, and print its name. TEM‐ PLATE must contain at least 3 consecutive 'X's in last component. If TEMPLATE is not specified, use tmp.XXXXXXXXXX, and --tmpdir is implied. Files are created u+rw, and directories u+rwx, minus umask restric‐ tions. In any case, forget about tempfile, just use mktemp. The following is from man tempfile on my Debian (emphasis mine): BUGS Exclusive creation is not guaranteed when creating files on NFS partitions. tempfile cannot make temporary directories. tempfile is deprecated; you should use mktemp(1) instead.
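So if the dot is the only objection, an explicit template reproduces tempfile-style names with mktemp; for example:

```shell
# A template with no dot yields a dotless name (the X's become random chars).
f=$(mktemp /tmp/fileXXXXXX)
printf '%s\n' "$f"
case $f in *.*) echo "has a dot" ;; *) echo "no dot" ;; esac
rm -f "$f"
```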
Command(s) to install tempfile on CentOS 6.4
1,465,559,211,000
Recently I found out about a very useful command: gnome-open or xdg-open, or just open on a Mac. It opens a file or directory in the default program for it. But in the event that I need to open a file in an already running editor (I mean, send the file to a running process somehow), I have no solution. I wonder if there is any way to achieve that, or if it is possible to write a script that can do that. For example, if I want to send it to an already running instance of Eclipse. Kind of drag-and-drop through the command line.
Did you try eclipse --launcher.openFile <absolute path of file to open> Eclipse OpenFile Feature.
Send file to opened editor using command line
1,465,559,211,000
The following command prints a message over ssh : xmessage Message -display :0 & How does it work? there is no -display option in xmessage's man page.
It's included by (obscure) reference. SEE ALSO X(7), echo(1), cat(1) And buried down a ways in X(7): OPTIONS Most X programs attempt to use the same names for command line options and arguments. All applications written with the X Toolkit Intrinsics automatically accept the following options: -display display This option specifies the name of the X server to use. followed by a number of other X Toolkit Intrinsics (Xt) standard options. More modern toolkits have similar common options, which you can see with the --help-all option.
xmessage over ssh
1,465,559,211,000
I regularly find myself having to execute a lengthy command on a file, then process the results with other commands. To process the next file, I usually rerun the same command by hitting the Up key until I find the command I want and arduously replace the old filename with the new filename. Is there a way to combine caret substitution (^oldfile^newfile) with the nth-last command? I have (unsuccessfully) tried to pipe the nth-last command into the substitution like so: $ !-4 | ^old^new Of course, I am open to other suggestions. These little shortcuts really help with productivity...
You can't do it with a quick substitution directly, because ^foo^bar is shorthand for: !!:s/foo/bar/ The !! part (which refers to the last command) isn't part of the quick syntax (that's what makes it quick), but you can use the longer syntax directly and then modify the !! to whatever you want: !-4:s/foo/bar/ I explained as much of the history syntax as I know in this post; the last section includes the :s modifier
How can I apply caret-substitution to my nth-last command?
1,465,559,211,000
If I download a video using youtube-downloader I can watch the .part file while downloading (in my case using mpv). Suppose I cannot or don't want to select a format containing both video and audio; then the audio in the .part file is missing, because it is downloaded and merged after the video download is complete. Is there any fast way to get audio and video merged during the download, such that I can watch the .part file including audio? I already asked a similar question on GitHub and learned that I could use the --downloader ffmpeg option. This works, but is very slow, so I am looking for a faster way to do it. The problem occurs if I download a very large video (for example, 10 hours long) in high quality. However, downloading audio is much, much faster. So suppose I have the audio file already and I am downloading the video file. Is there an indirect way (workaround), using for example ffmpeg, to merge the audio continuously into the video while the file is downloading?
Option 1: You can select a video download format that contains a mixed/muxed stream of both video and audio. For example, yt-dlp -F https://youtu.be/3QnD2c4Xovk will list formats to choose, and something like yt-dlp -f 18 https://youtu.be/3QnD2c4Xovk will choose that format. The partial file contains video and audio if the format supports it. Option 2: You can also choose to download two formats, one each for audio and video which will then be muxed by yt-dlp: yt-dlp -f 251,244 https://youtu.be/3QnD2c4Xovk The format I specified first (here, 251) was downloaded first in my tests and I could listen right away by playing the partial file. For completeness: the above currently outputs yt-dlp -F https://youtu.be/3QnD2c4Xovk [youtube] Extracting URL: https://youtu.be/3QnD2c4Xovk [youtube] 3QnD2c4Xovk: Downloading webpage [youtube] 3QnD2c4Xovk: Downloading ios player API JSON [youtube] 3QnD2c4Xovk: Downloading android player API JSON [youtube] 3QnD2c4Xovk: Downloading m3u8 information [info] Available formats for 3QnD2c4Xovk: ID EXT RESOLUTION FPS CH │ FILESIZE TBR PROTO │ VCODEC VBR ACODEC ABR ASR MORE INFO ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────── sb2 mhtml 48x27 0 │ mhtml │ images storyboard sb1 mhtml 67x45 0 │ mhtml │ images storyboard sb0 mhtml 135x90 0 │ mhtml │ images storyboard 233 mp4 audio only │ m3u8 │ audio only unknown [en] Default 234 mp4 audio only │ m3u8 │ audio only unknown [en] Default 139 m4a audio only 2 │ 1.84MiB 48k https │ audio only mp4a.40.5 48k 22k [en] low, m4a_dash 249 webm audio only 2 │ 2.22MiB 57k https │ audio only opus 57k 48k [en] low, webm_dash 250 webm audio only 2 │ 3.02MiB 78k https │ audio only opus 78k 48k [en] low, webm_dash 140 m4a audio only 2 │ 4.91MiB 127k https │ audio only mp4a.40.2 127k 44k [en] medium, m4a_dash 251 webm audio only 2 │ 5.82MiB 151k https │ audio only opus 151k 48k [en] medium, webm_dash 17 3gp 176x144 12 1 │ 2.17MiB 56k https │ 
mp4v.20.3 mp4a.40.2 22k [en] 144p 394 mp4 216x144 24 │ 1.26MiB 33k https │ av01.0.00M.08 33k video only 144p, mp4_dash 269 mp4 216x144 24 │ ~ 4.53MiB 115k m3u8 │ avc1.4D400C 115k video only 160 mp4 216x144 24 │ 717.16KiB 18k https │ avc1.4D400C 18k video only 144p, mp4_dash 603 mp4 216x144 24 │ ~ 5.39MiB 136k m3u8 │ vp09.00.11.08 136k video only 278 webm 216x144 24 │ 1.34MiB 35k https │ vp09.00.11.08 35k video only 144p, webm_dash 395 mp4 360x240 24 │ 1.41MiB 37k https │ av01.0.00M.08 37k video only 240p, mp4_dash 229 mp4 360x240 24 │ ~ 6.73MiB 170k m3u8 │ avc1.4D400D 170k video only 133 mp4 360x240 24 │ 1.11MiB 29k https │ avc1.4D400D 29k video only 240p, mp4_dash 604 mp4 360x240 24 │ ~ 9.56MiB 242k m3u8 │ vp09.00.20.08 242k video only 242 webm 360x240 24 │ 1.58MiB 41k https │ vp09.00.20.08 41k video only 240p, webm_dash 396 mp4 540x360 24 │ 2.13MiB 55k https │ av01.0.01M.08 55k video only 360p, mp4_dash 230 mp4 540x360 24 │ ~ 16.81MiB 425k m3u8 │ avc1.4D4015 425k video only 134 mp4 540x360 24 │ 2.31MiB 60k https │ avc1.4D4015 60k video only 360p, mp4_dash 18 mp4 540x360 24 2 │ ≈ 7.36MiB 186k https │ avc1.42001E mp4a.40.2 44k [en] 360p 605 mp4 540x360 24 │ ~ 19.08MiB 482k m3u8 │ vp09.00.21.08 482k video only 243 webm 540x360 24 │ 2.66MiB 69k https │ vp09.00.21.08 69k video only 360p, webm_dash 397 mp4 720x480 24 │ 3.21MiB 83k https │ av01.0.04M.08 83k video only 480p, mp4_dash 231 mp4 720x480 24 │ ~ 29.80MiB 753k m3u8 │ avc1.4D401E 753k video only 135 mp4 720x480 24 │ 4.36MiB 113k https │ avc1.4D401E 113k video only 480p, mp4_dash 606 mp4 720x480 24 │ ~ 28.21MiB 713k m3u8 │ vp09.00.30.08 713k video only 244 webm 720x480 24 │ 4.21MiB 109k https │ vp09.00.30.08 109k video only 480p, webm_dash and you can see the "audio only" and "video only" description text by the yt-dlp tool.
How to watch video while still downloading including audio?
1,465,559,211,000
I have a file that contains four columns and 5000 rows. I want to make 5000 new files from this file so that each file has one row from the original file. Also, I want to name the new files according to the values in the 4th column. Example: The following file (XXXX.txt) has four rows File: XXXX.txt 1 315 4567 G1 1 212 345 G2 2 315 25674 G3 3 12 235673 G4 Expected New Files File: G1 1 315 4567 G1 File: G2 1 212 345 G2 File: G3 2 315 25674 G3 File: G4 3 12 235673 G4 I have tried this command: awk '{print > $0}' < XXXX.txt This command makes new files as desired, but I am not able to name new files as per column4 of the original file.
You can try by slightly changing the awk script: awk '{print > $4}' XXXX.txt Note that, within a single awk run, > truncates each output file only on the first write and appends on subsequent ones, so rows sharing the same 4th-column value all end up in that file. Use >> instead if you need to preserve a file's existing contents from a previous run: awk '{print >> $4}' XXXX.txt N.B. Do not run it more than once, as that would add the records twice. If you eventually get a "too many open files" error, you can explicitly close each output file after writing (>> is then required, since > would truncate on every reopen): awk '{print >> $4; close($4)}' XXXX.txt
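The whole thing can be verified in a scratch directory; this sketch recreates a two-row sample of the question's file and checks the split:

```shell
tmp=$(mktemp -d)
printf '%s\n' '1 315 4567 G1' '1 212 345 G2' > "$tmp/XXXX.txt"
# Run in a subshell so the per-row files land next to the input file.
( cd "$tmp" && awk '{print >> $4; close($4)}' XXXX.txt )
cat "$tmp/G1" "$tmp/G2"
```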
Every row of a file is converted to a new file
1,465,559,211,000
I want to forward the job output from a job run by a third-party scheduler. The scheduler allows me to insert the output into a command, e.g. like so: python script.py --job-output '{joboutput}' Where {joboutput} becomes the raw job output forwarded by the scheduler. The issue that I have encountered is that the job output can contain unbalanced single and double quotes (as well as special characters like (, |, etc.), so wrapping the joboutput in either single or double quotes doesn't work. I have looked through many similar questions, but have found none that cover this exact scenario. I would highly appreciate any suggestions. Many thanks!
If that command line is preprocessed by the scheduler and it then sends it to a shell for execution (e.g. through sh -c), after doing a simple text replacement of {joboutput} with the actual text, then what you're asking can't really be done. Not on just one line anyway. It is possible to pass an arbitrary (NUL-terminated) string as a command line argument (up to some maximum length anyway), but inserting the string on the shell command line requires following the shell's syntax/quoting rules. Basically, the shell has double quotes, single quotes and backslash escapes. Within double quotes, you need to escape a number of characters by prefixing them with backslashes, so you can't put an arbitrary string inside those. Within single quotes, you don't need to escape anything else, but the single quotes themselves need special processing. The usual way is to replace the quotes with '\'', which just closes the quoted string, inserts an escaped single quote, and reopens the quoted string. In any case, there are some characters that need to be treated specially, and no way around it. It has to be that way, as the shell needs some way of determining where the quoted string ends. So, if you use "{joboutput}" and {joboutput} is replaced with something that contains ", $, \ or `, it'll break; and if you use '{joboutput}' and {joboutput} is replaced with something that contains a ', it'll break. Some languages might have more verbose quotes, like Python's """/''', which might help in that they'd allow the occasional lone quote or quote pair, but would of course not still be totally general, since the output could contain the same """ or '''. The shell doesn't have those, though. The nearest thing is probably here-docs, which are delimited by a freely-chosen line. That would help for embedding almost arbitrary strings, but it does require being able to pass multiple lines, and is rather hairy. 
If it's possible to use a multiline command, this should work if {joboutput} is replaced by anything that's not END_OF_JOB_OUTPUT barring parsing bugs in the shell related to the here-doc inside a command substitution. You could change the here-doc separator to any other string, one that's unlikely to appear in the output. However, since the data goes through a command substitution here, any trailing newlines in it are lost. out=$(cat <<'END_OF_JOB_OUTPUT' {joboutput} END_OF_JOB_OUTPUT ) python script.py --job-output "$out" or without nesting the here-doc in the command substitution: exec 9<<'END_OF_JOB_OUTPUT' {joboutput} END_OF_JOB_OUTPUT out=$(cat <&9) exec 9<&- printf "%s\n" "$out" If the scheduler was able to launch that command directly, without involving the shell, we wouldn't need to care about the shell's syntax. The scheduler would just need to have some smarts to explicitly pass script.py, --job-output and the job output as distinct arguments. But we don't know if it can do that. (Also, in that case, you wouldn't use the quotes around the placeholder.) Another way that would be easier than the shell shenanigans above would be to pass the string through an environment variable or a file, if that's supported by the scheduler.
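For completeness, the '\'' escaping described above can be automated on the scheduler side; in this sketch, shquote is a made-up helper name that produces a string guaranteed to survive one round of sh parsing:

```shell
# Wrap a string in single quotes, replacing each embedded ' with '\''
shquote() { printf "'%s'" "$(printf %s "$1" | sed "s/'/'\\\\''/g")"; }
payload="it's got \"both\" kinds of quotes"
# Re-parse the quoted form, just as a scheduler handing it to sh -c would:
eval "out=\$(printf %s $(shquote "$payload"))"
printf '%s\n' "$out"
```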
Pass text with unbalanced quotes as argument on command line
1,465,559,211,000
Problem I am trying to automate a task in my Firefox browser with xdotool. First, I open a new tab in my browser with: firefox -new-tab "www.domain.tld" Then (after the page www.domain.tld has finished loading) I want to perform a task: if [ <page has fully loaded> ] then <commands> fi How can I detect whether the page has finished loading in bash? Workaround At the moment I use sleep 5 (wait 5 seconds before calling the next command), which is a bit hacky because some pages load really fast and others don't.
You could use a traffic monitoring service like iftop. This tool shows connections based on the hostname (or IP if you wish). #!/bin/bash while ( iftop -t -s 5 2>/dev/null | grep www.domain.ltd >/dev/null ) ; do echo "still loading" done Limitations: needs root to run assumes proper hostname resolution (will fail e.g. with youtube, where they use all sorts of hostnames but not youtube) not sure about IPv6 support in hostname resolution needs a few seconds to properly see traffic what about websites that constantly reload some element? Alternatively nethogs will do a per-process analysis and shows sent and received. E.g. for 2 counts with 2-second delay: #!/bin/bash while ( nethogs -t -c 2 -d 2 2>/dev/null | grep firefox >/dev/null ) ; do echo "still loading" done Limitations: needs root to run watches a process: if the web browser has other tabs constantly loading data, it would fail. (e.g. music from a website) needs a few seconds to properly see traffic Or tcpdump, here limited to incoming TCP packets and stopped by timeout while ( timeout 3 tcpdump 'tcp' -Q in -q 2>/dev/null | grep www.domain.ltd >/dev/null) ; do echo "still loading" done Limitations: see iftop Conclusion: all methods are based on network traffic monitoring, this means they all need a few seconds to analyze said traffic and might actually help when it comes to make sure big websites are loaded, but would not speed up the whole thing in case of small websites.
Bash: How can I check if a website has finished loading?
1,465,559,211,000
For example, if I enter the following command: $ man -k compare The diff command is missing from the results, however the test command is not. I get the same results using apropos as expected. $ whereis diff diff: /usr/bin/diff /usr/share/man/man1/diff.1.gz $ whereis test test: /usr/bin/test /usr/share/man/man1/test.1.gz If I examine the short description of diff, it is the following: NAME GNU diff - compare files line by line As you can clearly see, "compare" is in the short description. Moreover, let's examine the long description too: DESCRIPTION Compare FILES line by line. And again, "Compare" appears in the description. Now, let's examine the short description of test: NAME test - check file types and compare values Which is what I would expect. However, "compare" is missing from the description of test. So, I'm not clear why man -k compare or apropos compare do not find diff. However I believe it has to do with the output of the whatis command: $ whatis diff diff (1) - (unknown subject) $ whatis test test (1) - check file types and compare values Now the reason for this discrepancy may be due to the fact that the man page for diff has a name that consists of two words "GNU diff" rather than one, but I'm not sure.
This is a bug in the man pages of the GNU diffutils package. As you suspected, the problem is that they show “GNU diff” (etc.) rather than just “diff” as the program name. This causes the man program not to recognize the short description. python3 >>> import dbm >>> db = dbm.open('/var/cache/man/index.db') >>> db['diff\0'] b'-\t1\t1\t1554725040\t0\tA\t-\t-\tgz\t\x00' >>> db['cat\0'] b'-\t1\t1\t1567679920\t0\tA\t-\t-\tgz\tconcatenate files and print on the standard output\x00' The bug was introduced somewhere after diffutils 3.3 and before diffutils 3.6. It was reported as bug #39760 and fixed in this commit or actually possibly this commit. The fixes aren't in a release yet, they will be in diffutils 3.8.
Why does apropos and man -k omit or miss valid results?
1,465,559,211,000
I forked an open source project to work on but one of the folder names I want to cd into starts with an emoji. How can I enter into it? I know I can use the GUI to look through the folder but I rather prefer using the terminal. I'm using Ubuntu 18.04.4 LTS
Option 1: type the emoji Simply typing cd '🔥 100 DISTRICT ATTORNEYS 🔥' will suffice. Searching up "how to insert emoji in X" is usually more than enough to get you started on typing emojis in your environment. If you don't want to set up emoji insertion, you can always copy-paste from the web (searching "fire emoji" is all you need). Option 2: wildcard In all honesty, nobody expects you to make a search every time you want to insert a special character. You can also use the wildcard character like so: cd *'100 DISTRICT ATTORNEYS'* Which will search for a file/folder that has "100 DISTRICT ATTORNEYS" in the middle and any characters on the sides. In this case, the only directory that matches is the one you want to enter. Read more about the * wildcard here, and more about all the types of wildcards in bash here. Option 3: tab completion Okay, we're getting pretty desperate here, but tab completion is worth a mention. Although it may not be viable for your specific situation, in a directory that looks like so (and a situation in which you'd like to enter the ENTER 🔥 directory): | A | B | C | ENTER 🔥 Simply typing: $ cd E<tab> will autocomplete (validly) to the only directory that starts with the letter E. In reality, I like to type a few letters in before pressing tab, for good measure. Read more about tab completion here.
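Option 2 is easy to try in a scratch directory; the emoji folder below is created just for the demo:

```shell
tmp=$(mktemp -d)
mkdir "$tmp/🔥 100 DISTRICT ATTORNEYS 🔥"
# Let the glob supply the awkward leading character for us:
cd "$tmp"/*'100 DISTRICT ATTORNEYS'*
here=$PWD
printf '%s\n' "$here"
cd / && rm -rf "$tmp"
```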
cd into folder name that starts with an emoji
1,465,559,211,000
I was trying to render the troff file https://github.com/bit-team/backintime/blob/master/common/man/C/backintime-config.1 in the terminal. How can I make the output (using cat or some other utility) look like the way man renders a troff file in the terminal? Do I need to convert the troff file to some format understood by the terminal? Note: I am not looking to export it to PDF or HTML.
Just do man /path/to/troff-file. Giving man a path that contains a slash (for example man ./backintime-config.1) makes it render that file directly instead of searching for a manual page by name.
Render troff file on terminal like the man outputs on terminal
1,465,559,211,000
I installed Virtualbox on my macOS Catalina 10.15.3. The GUI works perfectly. However, when I try to run a vm from the command line, the shell can't find the VBoxManage command. I need it for docker. Where does the installer put it?
The command line tools for Oracle VirtualBox on macOS are usually kept in /usr/local/bin: $ type -a VBoxManage VBoxManage is /usr/local/bin/VBoxManage These should be available in your interactive shell sessions if you have /usr/local/bin as part of your $PATH.
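If the command still isn't found, you can check from a script whether that directory is actually on your $PATH; a small sketch:

```shell
# Portable way to test PATH membership and to locate a command.
dir=/usr/local/bin
case ":$PATH:" in
  *":$dir:"*) on_path=yes ;;
  *)          on_path=no  ;;
esac
echo "$dir on PATH: $on_path"
command -v sh   # command -v prints the resolved path (or fails if absent)
```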
Where are the Virtualbox command-line tools on macOS?
1,465,559,211,000
Basically, my hard drive is a mess and I have like 200+ sub-directories in a main directory. I want to essentially move all files in the 200+ sub-directories that have the extension .txt etc. to a new directory. For example, n00b.txt or n00b.txt.exe So I try the following command in the main directory consisting of the 200+ subdirectories sudo mv **/*.txt ~/Desktop/tmpremo/ Instead I am getting this error: bash: /usr/bin/sudo: Argument list too long Why am I getting it, and how do I move, say, the .txt and .txt.exe files? How do I fix it?
This would move all files with .txt and .txt.exe extensions present anywhere inside the current directory (even in subdirectories) to ~/Desktop/tmpremo. $ sudo find . -type f \( -iname '*.txt' -o -iname '*.txt.exe' \) -exec mv {} ~/Desktop/tmpremo \; If you want another extension too, just add -o -iname '*.extension' before the -exec. PS: As @xenoid noted, please refrain from using sudo unless it is absolutely required for the task at hand.
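With GNU mv you can also batch the moves with -t and {} + instead of spawning one mv per file; a self-contained demo on a throwaway tree:

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/src/sub" "$tmp/dest"
touch "$tmp/src/a.txt" "$tmp/src/sub/b.TXT.exe" "$tmp/src/keep.jpg"
# -t names the target dir up front, so {} + can append many files per mv call
find "$tmp/src" -type f \( -iname '*.txt' -o -iname '*.txt.exe' \) \
  -exec mv -t "$tmp/dest" {} +
ls "$tmp/dest"
```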
Moving all files with same file extension .txt to a new directory
1,465,559,211,000
Asterisk uses the editline library, and the keybindings can be configured in /etc/editrc. I have defined some of my own keybindings; others are left at their default values. How can I print the current keybindings in Asterisk? I am looking for something similar to what bindkey does in zsh. Also, how can I "unbind" a key, such as Ctrl+C? And how would I create a new keybinding that would bind Ctrl+D to exit/quit? Here is my current /etc/editrc: bind "^W" ed-delete-prev-word bind "\e[1;5D" vi-prev-word bind "\e[1;5C" vi-next-word bind ^[[5~ ed-search-next-history bind ^[[6~ ed-search-prev-history
It sounds like it uses NetBSD's editline, a.k.a. libedit. See the editrc man page It looks like you can remove bindings using bind -r ... Or bind ... ed-insert And I guess the easiest thing is to try adding bind (without arguments) to the bottom of editrc to list all bindings. To make Ctrl+D exit, I would try bind ^d ed-end-of-file If that doesn't work, you could try making it type "exit" for you using something like bind -s ^d exit\n Or you could make Ctrl+D act like Ctrl+C with bind ^d ed-tty-sigint
editrc: changing keybindings in /etc/editrc
1,465,559,211,000
I'm trying to get my bash function to call another function that uses bash -c. How do we make functions created earlier in the script persist between different bash sessions? Current script: #!/bin/bash inner_function () { echo "$1" } outer_function () { bash -c "echo one; inner_function 'two'" } outer_function Current output: $ /tmp/test.sh one bash: inner_function: command not found Desired output: one two
Export it: typeset -xf inner_function Example: #! /bin/bash inner_function () { echo "$1"; } outer_function () { bash -c "echo one; inner_function 'two'"; } typeset -xf inner_function outer_function Other ways to write the exact same thing are export -f inner_function or declare -fx inner_function. Notice that exported shell functions are a) a bash-only feature, not supported in other shells and b) still controversial, even after most of the bugs were fixed since shellshock.
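A stripped-down check that the export actually reaches the child shell:

```shell
greet() { printf 'hello %s\n' "$1"; }
export -f greet                  # same effect as typeset -xf greet
out=$(bash -c 'greet world')     # the child bash inherits the function
printf '%s\n' "$out"
```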
Bash: call function from "bash -c"
1,465,559,211,000
I am trying to delete two lines (7 and 8) from a .txt file. For this I am using the following code: #!/bin/sh Column="7" sed '"$Column",8d' myfile.txt > result.txt On running this script, I am getting this error: sed: -e expression #1, char 1: unknown command: `"' Please tell me how to use a variable as part of a sed command.
Variables are not expanded in single quotes. Use this: sed "${Column},8d" myfile.txt > result.txt
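Seen end to end, with a throwaway four-line file standing in for myfile.txt:

```shell
tmp=$(mktemp)
printf '%s\n' line1 line2 line3 line4 > "$tmp"
Column=2
out=$(sed "${Column},3d" "$tmp")   # double quotes: $Column expands before sed runs
printf '%s\n' "$out"
rm -f "$tmp"
```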
How to use variable in sed command
1,465,559,211,000
I want to output a video using two videos as input, where these two videos fade (or dissolve) into each other in a smooth and repetitive manner, every second or so. I'm assuming a combination of ffmpeg with melt, mkvmerge, or another similar tool might produce the effect I'm after. Basically, I want to use ffmpeg to cut up video A according to a specific interval, discarding every second cut up (automatically). Likewise for video B, but in this case inverting the process to retain the discarded parts. I wish to then interweave these parts. The file names should be correctly formatted so that I can then concatenate the result using a wild card command argument or batch processing list, as per one of the aforementioned tools. The transition effect (e.g. a "lapse dissolve") isn't absolutely necessary, but it would be great if there were a filter to achieve that too. Lastly, it would also be great if this process could be done with little to no re-encoding, to preserve the video quality. I've read through this thread and the Melt Framework documentation, in addition to the ffmpeg manual.
Assuming both videos have the same resolution and sample aspect ratio, you can use the blend filter in ffmpeg. A couple of examples, ffmpeg -i videoA -i videoB -filter_complex \ "[0][1]blend=all_expr=if(mod(trunc(T),2),A,B);\ [0]volume=0:enable='mod(trunc(t+1),2)'[a]; [1]volume=0:enable='mod(trunc(t),2)'[b];\ [a][b]amix" out.mp4 Straight cuts. Output: time, in seconds, [0,1) -> videoB [1,2) -> videoA [2,3) -> videoB ... [2N ,2N+1) -> videoB [2N+1,2N+2) -> videoA ffmpeg -i videoA -i videoB -filter_complex \ "[0][1]blend=all_expr='if(mod(trunc(T/2),2),min(1,2*(T-2*trunc(T/2))),max(0,1-2*(T-2*trunc(T/2))))*A+if(mod(trunc(T/2),2),max(0,1-2*(T-2*trunc(T/2))),min(1,2*(T-2*trunc(T/2))))*B';\ [0]volume='if(mod(trunc(t/2),2),min(1,2*(t-2*trunc(t/2))),max(0,1-2*(t-2*trunc(t/2))))':eval=frame[a]; [1]volume='if(mod(trunc(t/2),2),max(0,1-2*(t-2*trunc(t/2))),min(1,2*(t-2*trunc(t/2))))':eval=frame[b];\ [a][b]amix" out.mp4 Each input's video/audio for 2 seconds with a 0.5 second transition. Output: time, in seconds, [0,0.5) -> videoA fades out 1 to 0 + videoB fades in from 0 to 1 [0.5,2) -> videoB [2,2.5) -> videoB fades out 1 to 0 + videoA fades in from 0 to 1 [2.5,4) -> videoA [4,4.5) -> videoA fades out 1 to 0 + videoB fades in from 0 to 1 [4.5,6) -> videoB [6,6.5) -> videoB fades out 1 to 0 + videoA fades in from 0 to 1 [6.5,8) -> videoA ... [4N ,4N+0.5) -> videoA fades out 1 to 0 + videoB fades in from 0 to 1 [4N+0.5,4N+2) -> videoB [4N+2 ,4N+2.5) -> videoB fades out 1 to 0 + videoA fades in from 0 to 1 [4N+2.5,4N+4) -> videoA
How to transition smoothly and repeatedly between two videos using command line tools?
1,465,559,211,000
I have tons of PDFs in multiple sub-folders in /home/user/original that I have compressed using ghostscript pdfwrite in /home/user/compressed. ghostscript has done a great job at compressing about 90% of the files however the rest of them ended up bigger than originals. I would like to cp /home/user/compressed to /home/user/original overwriting files that are only smaller than the ones in destination while the bigger ones are skipped. Any ideas?
The following find command should work for this: cd /home/user/original find . -type f -exec bash -c 'file="$1"; rsync --max-size=$(stat -c '%s' "$file") "/home/user/compressed/$file" "/home/user/original/$file"' _ {} \; The key part of this solution is the --max-size provided by rsync. From the rsync manual: --max-size=SIZE This tells rsync to avoid transferring any file that is larger than the specified SIZE. So the find command operates on the destination directory (/home/user/original) and returns a list of files. For each file, it spawns a bash shell that runs the rsync command. The SIZE parameter for --max-size option is set by running a stat command against the destination file. In effect, the rsync processing logic becomes this: If the source file is larger than than the destination file, the --max-size parameter will prevent the source file from being transferred. If the source file is smaller than the destination file, the transfer will proceed as expected. This logic will result in only the smaller files being transferred from the source directory to the destination directory. I have tested this in a few different ways, and it works for me as expected. However, you may want to create a backup of the destination directory before you try it out on your system.
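If rsync is not available, the same "overwrite only when smaller" logic can be expressed with plain size comparisons (GNU stat -c %s assumed; the tree is fabricated for the demo):

```shell
tmp=$(mktemp -d)
mkdir "$tmp/orig" "$tmp/comp"
printf 'AAAAAAAAAA' > "$tmp/orig/big.pdf"    # 10 bytes
printf 'AA'         > "$tmp/comp/big.pdf"    #  2 bytes -> should overwrite
printf 'A'          > "$tmp/orig/small.pdf"  #  1 byte
printf 'AAAA'       > "$tmp/comp/small.pdf"  #  4 bytes -> should be skipped
for f in "$tmp"/comp/*; do
  name=${f##*/}
  # Copy only when the compressed file is strictly smaller than the original.
  if [ "$(stat -c %s "$f")" -lt "$(stat -c %s "$tmp/orig/$name")" ]; then
    cp "$f" "$tmp/orig/$name"
  fi
done
wc -c < "$tmp/orig/big.pdf"; wc -c < "$tmp/orig/small.pdf"
```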
Copy a folder overwriting ONLY smaller files in destination
1,465,559,211,000
For documentation purposes, I mean to redirect to file stdout and stderr from a command I execute. For instance, I would run (my command is less trivial than the alias ll but it probably doesn't matter): $ ll > out-err.dat 2>&1 $ cat out-err.dat drwxr-xr-x 39 us00001 us00001 4096 jul 31 14:57 ./ drwxr-xr-x 3 root root 4096 feb 2 06:06 ../ -rw------- 1 us00001 us00001 62226 jul 31 11:56 .bash_history ... Also for documentation purposes, I want to store in the same output file the command line I used. The intended behaviour and output is $ [NEW COMMAND LINE]? $ cat out-err.dat [NEW COMMAND LINE] <- This first line would contain the command line used drwxr-xr-x 39 us00001 us00001 4096 jul 31 14:57 ./ drwxr-xr-x 3 root root 4096 feb 2 06:06 ../ -rw------- 1 us00001 us00001 62226 jul 31 11:56 .bash_history ... How can this be done? I know I could write a bash script and execute it, so the command would remain documented separately. I could further write a script to echo the command line to file and then execute it with redirection to the same file. I am looking for a possible script-less solution. EDIT: Feedback on a nice answer. This wouldn't fit as a comment. I tested with command echo_command ll echo_command.sh ../dir > out-err.dat 2>&1. Script echo_command.sh, which I source, contains the definitions of the functions. ../dir is a non-existing dir, to force some output to stderr. Method 1: Works nice, except for two issues: It doesn't understand aliases (ll in this case; when replacing with ls it worked). It doesn't record the redirection part. Method 2: It doesn't work that well. Now the redirection part is also printed, but the command line is printed to screen instead of redirected to file. EDIT: Feedback on a comment posted, about a script utility. It is quite versatile, even more so with scriptreplay. 
script can be called alone, which produces an interactive shell (it wouldn't keep the recent history of the parent shell). It can also be called as script -c <command> <logfile>. This last form corresponds to the objective of the OP, but it doesn't store the command itself into the log file. It produces (at least in basic cases) the same output as <command> > <logfile> 2>&1. So it seems this is not useful here.
You could use a function like this:

echo_command() { printf '%s\n' "${*}"; "${@}"; }

Example:

$ echo_command echo foo bar baz
echo foo bar baz
foo bar baz
$ echo_command uname
uname
Linux

As Tim Kennedy said, there's also a very useful script command:

$ script session.log
Script started, file is session.log
$ echo foo
foo
$ echo bar
bar
$ echo baz
baz
$ exit
exit
Script done, file is session.log
$ cat session.log
Script started on 2018-07-31 16:30:31-0500
$ echo foo
foo
$ echo bar
bar
$ echo baz
baz
$ exit
exit

Script done on 2018-07-31 16:30:43-0500

Update

If you also need to log the redirections and basically every shell syntax (note that I added a little Command line: message to easily identify the command being executed):

echo_command() {
    local arg
    for arg; do
        printf 'Command line: %s\n' "${arg}"
        eval "${arg}"
    done
}

Just take into account that you should be very careful with the quoting, as eval is used:

$ echo_command 'echo foo > "file 2"; cat "file 2"'
Command line: echo foo > "file 2"; cat "file 2"
foo

It also accepts many commands at once instead of only one:

$ echo_command 'echo foo' 'echo bar' 'echo baz'
Command line: echo foo
foo
Command line: echo bar
bar
Command line: echo baz
baz
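A related variant, closer to what the question literally asked for: record the command line as the first line of a log file, followed by the command's combined stdout/stderr. The helper name log_run is invented for this sketch, and like the first method above it loses quoting details ("$*" joins the arguments with spaces) and does not see aliases.

```shell
# log_run LOGFILE CMD [ARGS...]
# Writes CMD ARGS... as line 1 of LOGFILE, then the command's output.
log_run() {
    logfile=$1; shift
    { printf '%s\n' "$*"; "$@" 2>&1; } > "$logfile"
}

out=$(mktemp)
log_run "$out" echo hello world
```

The first line of the log is the command line, the second is its output.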
bash echo the command line executed at the command line itself (not in a script)
1,465,559,211,000
My machine boots into command-line and then I run the following command to open a display with TWM window manager on it: $ xinit /usr/local/bin/twm On TWM, this is how I run apps: I left-click anywhere on the black screen and a menu shows up. Then I select xterm and on xterm command-line, I run my app for example: $ opera The problem is that for every app, I need a xterm to be opened first. I wonder if there is any other way by which I can open my apps without having to open xterm. Thanks. I ended up compiling/installing version 2.6.7 of fvwm which is also suggested here. It is a cool window manager with many features.
Define a menu in your .twmrc, like:

Menu "MyMenu"
{
    "Opera" f.exec "Opera"
}

Then bind it to your left mouse button instead of the default one:

Button1 = : root : f.menu "MyMenu"

Look at TWM(1) to see how to configure TWM according to your needs. You may also want to see examples of configuration at xwinman.org. TWM lacks the feature of generating dynamic menus, so you'll need to define a menu for every set of apps you want to be accessible that way, or you can use the xdgmenumaker utility.
How to open apps on TWM window manager without using xterm
1,465,559,211,000
I'm currently running Mac OS Sierra. I don't know exactly how, but somehow I've altered some permissions and think it'd be a good idea to reset them, but I do not know how. Each time I execute sudo, I am met with this warning:

sudo: /var/db/sudo/ts is group writable

The command executes fine, but it seems to be a good idea to fix that. Please advise.

results:

0 dr-x------  4 root wheel 136 Mar 28 11:34 .
0 dr-x-w----  5 root wheel 170 Mar 28 10:38 ..
8 -rw--w----  1 root wheel  80 Jan 27 00:51 zacadmin
8 -rw-------  1 root wheel  80 Mar 28 12:00 zbrown
The following command will remove write permission from group on file /var/db/sudo/ts sudo chmod g-w /var/db/sudo/ts
Accidentally set write permission on sudoers?
1,465,559,211,000
I want to find all .xml files in all directories and subdirectories with name 'metadata' in the filesystem. Is there a way to search by a pattern similar to find . -name "*/metadata/*.xml" (mixing find command and Gradle parlance)?
$ find / -path "*/metadata/*" -name "*.xml" or, shorter (since * matches past the slashes in pathnames when used with -path), $ find / -path "*/metadata/*.xml" Note: I'm using / as you said "in the filesystem", which I interpreted to mean "anywhere". Alternatively, using locate (will only find files accessible to all users): $ locate "/metadata/" | grep '\.xml$' (This will work somewhat like the first find above)
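The -path behaviour is easy to verify on a throwaway tree. The directory and file names below are invented for the demo; the point is that only .xml files under a directory literally named metadata match.

```shell
root=$(mktemp -d)
mkdir -p "$root/a/metadata" "$root/b/other"
touch "$root/a/metadata/x.xml" \
      "$root/a/metadata/x.txt" \
      "$root/b/other/y.xml"

# -path matches against the whole pathname, slashes included
found=$(find "$root" -path "*/metadata/*.xml")
```

Only $root/a/metadata/x.xml is printed: x.txt fails the .xml part of the pattern, and y.xml fails the /metadata/ part.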
How to search files by directory and file name combo pattern
1,465,559,211,000
Can you please indicate me how do you find programs in the terminal without knowing the exact name? As far as I remember, there was a command line tool that helps the user to find out other programs / command line tools associated to some key words. For example I am logged in an unknown system and I want to open a pdf, I don't know what pdf reader is installed and hence cannot open the pdf from the command line. I thought there was a command line tool that I can call with some keyword, say "pdf" and it shows me for example 7 programs that are in some way associated with "pdf", say mupdf, etc. Or how do you do when you don't know the system you're working on or if you forgot what you installed? I searched on google but always landed on something like How to find application's path from command line? which is not what I am asking. Thanks
I think you want apropos. From man:

apropos - search the manual page names and descriptions

Example:

$ apropos pdf
dvipdf (1)             - Convert TeX DVI file to PDF using ghostscript and dvips
evince-thumbnailer (1) - create png thumbnails from PostScript and PDF documents
fix-qdf (1)            - repair PDF files in QDF form after editing
ghostscript (1)        - Ghostscript (PostScript and PDF language interpreter and previewer)
gs (1)                 - Ghostscript (PostScript and PDF language interpreter and previewer)
gsnd (1)               - Run ghostscript (PostScript and PDF engine) without display
pdf2dsc (1)            - generate a PostScript page list of a PDF document
pdf2ps (1)             - Ghostscript PDF to PostScript translator
pdfdetach (1)          - Portable Document Format (PDF) document embedded file extractor (version 3.03)
pdffonts (1)           - Portable Document Format (PDF) font analyzer (version 3.03)
pdfimages (1)          - Portable Document Format (PDF) image extractor (version 3.03)
pdfinfo (1)            - Portable Document Format (PDF) document information extractor (version 3.03)
pdfseparate (1)        - Portable Document Format (PDF) page extractor
pdftocairo (1)         - Portable Document Format (PDF) to PNG/JPEG/TIFF/PDF/PS/EPS/SVG using cairo
pdftohtml (1)          - program to convert PDF files into HTML, XML and PNG images
pdftoppm (1)           - Portable Document Format (PDF) to Portable Pixmap (PPM) converter (version 3.03)
pdftops (1)            - Portable Document Format (PDF) to PostScript converter (version 3.03)
pdftotext (1)          - Portable Document Format (PDF) to text converter (version 3.03)
pdfunite (1)           - Portable Document Format (PDF) page merger
ps2ascii (1)           - Ghostscript translator from PostScript or PDF to ASCII
ps2pdf (1)             - Convert PostScript to PDF using ghostscript
ps2pdf12 (1)           - Convert PostScript to PDF 1.2 (Acrobat 3-and-later compatible) using ghostscript
ps2pdf13 (1)           - Convert PostScript to PDF 1.3 (Acrobat 4-and-later compatible) using ghostscript
ps2pdf14 (1)           - Convert PostScript to PDF 1.4 (Acrobat 5-and-later compatible) using ghostscript
ps2pdfwr (1)           - Convert PostScript to PDF without specifying CompatibilityLevel, using ghostscript
qpdf (1)               - PDF transformation software
snmpdf (1)             - display disk space usage on a network entity via SNMP
Linux command line tool to find specific programs
1,465,559,211,000
In a Linux Foundation course we're told to use cat & at a terminal prompt to return what appears to be the current max (used) PID. I wanted to clarify what the & parameter means but both cat --help and man cat turned up nothing for the ampersand parameter. Snapshot for clarity: Can anyone please explain what the ampersand means in this context?
The & means that you send the command to the background (also called forking), and you are given "back" the prompt even though execution of the command continues (if any). There is a very good discussion about this in this thread.

To know what the max_pid is, you can look in /proc:

cat /proc/sys/kernel/pid_max
32768

Or (as root):

sysctl kernel.pid_max
kernel.pid_max = 32768
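The backgrounding behaviour can be seen in a short script: the line after a `cmd &` runs immediately, while `wait` blocks until the background job finishes. This is a generic sketch, not tied to the `cat &` example from the question.

```shell
start=$(date +%s)
sleep 1 &            # runs in the background; the script continues at once
bgpid=$!             # $! holds the PID of the most recent background job

# This runs right away, long before sleep finishes:
immediately=$(( $(date +%s) - start ))

wait "$bgpid"        # now actually block until the job exits
```

If `&` blocked like a normal foreground command, `immediately` would be at least 1.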
What does the ampersand in 'cat &' mean in linux?
1,477,106,372,000
Basically, I want to receive input from 2 different files when I'm calling an executable on the terminal, like:

./a.out < file1.pgm file2.pgm

I want to read both files' input in my code, one after another.
For the question, where file1.pgm and file2.pgm are files whose contents you want sent to a.out as input: cat file1.pgm file2.pgm | ./a.out If file1.pgm and file2.pgm are executables that produce output for a.out: (file1.pgm; file2.pgm) | ./a.out
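The first form is easy to demonstrate with a stand-in for a.out: any program that reads stdin sees the two files as one concatenated stream. The filenames here are invented for the demo.

```shell
f1=$(mktemp)
f2=$(mktemp)
printf 'line from file1\n' > "$f1"
printf 'line from file2\n' > "$f2"

# cat joins both files into a single stdin stream for the consumer
# (wc -l stands in for ./a.out here)
combined=$(cat "$f1" "$f2" | wc -l)
```

The consumer cannot tell where one file ends and the next begins; it simply receives both lines in order.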
How to receive input from 2 files on an executable
1,477,106,372,000
I have multiple files, let's say file1, file2 etc. Each file has one word in each line, like: file1 file2 file3 one four six two five three What I want is to combine them in a new file4 in every possible permutation (without repetition) in pairs. Like onetwo onethree onefour onefive ... twothree ... onefour ... fourone ... How is this possible using Linux commands?
Use this:

cat FILE1 FILE2 FILE3 | \
perl -lne 'BEGIN{@a}{push @a,$_}END{foreach $x(@a){foreach $y(@a){print $x.$y}}}'

Output:

oneone
onetwo
onethree
onefour
onefive
onesix
oneseven
twoone
twotwo
twothree
twofour
twofive
twosix
twoseven
threeone
threetwo
threethree
threefour
threefive
threesix
threeseven
fourone
fourtwo
fourthree
fourfour
fourfive
foursix
fourseven
fiveone
fivetwo
fivethree
fivefour
fivefive
fivesix
fiveseven
sixone
sixtwo
sixthree
sixfour
sixfive
sixsix
sixseven
sevenone
seventwo
seventhree
sevenfour
sevenfive
sevensix
sevenseven
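The same cross product can be produced in plain shell with two nested read loops over the concatenated word list, no perl needed. Like the perl version, this sketch also emits self-pairs such as oneone; the two-word input file here is invented to keep the demo small.

```shell
words=$(mktemp)
printf '%s\n' one two > "$words"   # stand-in for: cat file1 file2 file3

pairs=$(
    while IFS= read -r x; do
        # inner loop re-reads the word list for every outer word
        while IFS= read -r y; do
            printf '%s%s\n' "$x" "$y"
        done < "$words"
    done < "$words"
)
```

With two input words this yields four pairs: oneone, onetwo, twoone, twotwo.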
All possible permutations of words in different files in pairs
1,477,106,372,000
How can I only scan the computer RAM for viruses using the ClamAV command clamscan? I already tried, because I found it on ClamWin forum: clamscan --memory But it seems, that the Linux version does not have the argument --memory, because I can not find it in the help (clamscan --help or man clamscan). If I try the command on Linux, I get: ~> clamscan --memory clamscan: unrecognized option `--memory' ERROR: Unknown option passed ERROR: Can't parse command line options I use Clam AntiVirus Scanner version 0.98.7.
I just tested this in a docker container with an image of debian stretch. https://packages.debian.org/source/stretch/clamav The current stable version of Clamav for Linux is 0.99.1. Source: https://www.clamav.net/ clamscan does not have such an option, I checked the manpage. It seems that this option exists in the Windows version only.
How to use ClamAV to scan the memory
1,477,106,372,000
I am using Fedora 23. Using the file explorer I "found" this page It shows where I can access other networks, I can click on them, and log in... My questions is: Is there an equivalent place I can reach via command line? That I can log into a network and such.... I am currently under the impression that I would need to mount the network to a file? My goal right now is to use the command line as much as possible, so if there are any relevant commands, or resources please point me to them as well... Side question: can I log into a WiFi network via command line? I am currently under the impression that I would need to mount the network to file as well?
Your GUI explorer is using information from the avahi daemon which is listening for services on the local network. You can do the same from the cli with avahi-browse -rat
Basic network via command line
1,477,106,372,000
I have a bunch of text files that are named in YYYYMMDD.Txt format (so today would be 20160420.Txt). Each file is basically a log that contains a timestamp and and a unique ID, each value is separated by tab delimiters. So for example, 20160420.Txt has the following values: DATE TIME ID 20160420 0135 123456 20160420 0240 234567 20160420 1252 345678 I need to extract all the Unique IDs present in those files, but only on those files from the last 6 months. The catch is that I can't use the mtime because all the files were recreated again in the past week (i.e.: the mtime does not correspond with the filename). Is there any way I can do this with grep/find/sort?
It is not trivial to find the exact date 6 months ago, especially if the current date would be the 31st of some month. But if you know how to do this with find and -mtime, I would just touch the files depending on the date in their name:

for x in *.Txt; do
    dd=${x%.Txt}
    touch -t "$dd"0000 "$x"
done

and then use the mtime.
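Because YYYYMMDD dates compare the same lexicographically and numerically, the filenames can also be tested against a cutoff date directly, skipping the touch/mtime round-trip entirely. A small sketch with invented data and an invented cutoff:

```shell
dir=$(mktemp -d)
printf 'DATE\tTIME\tID\n20160420\t0135\t123456\n' > "$dir/20160420.Txt"
printf 'DATE\tTIME\tID\n20150101\t0001\t999999\n' > "$dir/20150101.Txt"

cutoff=20151020   # hypothetical "6 months ago" date
ids=$(
    for f in "$dir"/*.Txt; do
        base=${f##*/}
        # compare the date embedded in the filename, not the mtime
        if [ "${base%.Txt}" -ge "$cutoff" ]; then
            tail -n +2 "$f" | cut -f3   # skip header, take the ID column
        fi
    done | sort -u
)
```

Only the IDs from files dated on or after the cutoff survive; the 2015 file is ignored.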
Using grep/sort/find to extract unique values
1,477,106,372,000
I'm trying to execute ls -d "$PWD"/* > formmlFileList43k.list But I get the following error: bash: /bin/ls: Argument list too long I've read using a pipe won't have such limitation, how can I use pipe which will accomplish the same as: ls -d "$PWD"/* > formmlFileList43k.list Any help would be appreciated
You have too many items in the directory. That causes the shell to expand * into an argument list that exceeds ARG_MAX bytes:

$ grep ARG_MAX /usr/include/linux/limits.h
#define ARG_MAX 131072 /* # bytes of args + environ for exec() */

I suggest you use find as a workaround:

$ find "${PWD}" -mindepth 1 -maxdepth 1 > formmlFileList43k.list

EDIT: @hagello wrote an important note about filenames beginning with a dot. These files should be excluded from the find output (the shell glob * skips them by default). So, the correct workaround is:

$ find "${PWD}" -mindepth 1 -maxdepth 1 '!' -name '.*' > formmlFileList43k.list
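Another workaround worth knowing: ARG_MAX only limits the argument list passed to an exec'd program like /bin/ls. printf is a shell builtin, so no exec takes place and a huge glob expansion is fine (limited only by the shell's memory). Demonstrated here on a tiny invented directory:

```shell
dir=$(mktemp -d)
touch "$dir/a" "$dir/b" "$dir/c"

list=$(mktemp)
# printf is a builtin: the glob never hits the kernel's ARG_MAX limit
printf '%s\n' "$dir"/* > "$list"

count=$(wc -l < "$list")
```

One full path per line, exactly like the ls -d "$PWD"/* output the question wanted.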
Argument list too long when running ls -d "$PWD"/* command
1,477,106,372,000
I have a function (not) that displays a notification. Hence, I can use it to show when a command is complete. sleep 5; not However, sometimes I forget to add ;not. In OS X's terminal (a few years ago), I could enter a second command, while the current command was running. This second command would execute after the first had finished. Thus, I'd type sleep 5Enter, then immediately type notEnter. After sleep had terminated, not would execute. However, my experience in Linux is that this does not occur. After sleep terminates, the command line shows not, but the Enter never registers. I've tested Terminator, Konsole and a tty. Is this behaviour dependent on the terminal emulator, and if so, are there any that work as I want? Alternatively, is there a way to make this function work in my terminal of choice (Terminator)? Testing in different shells Doesn't work, i.e. the second command doesn't register: bash bash --norc bash --norc --noprofile sh Does work, i.e. the second command registers: bash --norc --noprofile --noediting zsh I selectively removed lines from ~/.inputrc and tested with my default bash shell again. I traced the problem to the following line. When removed, the second command registers as expected. Control-j: menu-complete Oddly enough, if I try binding (say) Ctrl+i instead, there is no problem. Why does this entry prevent the second command from registering, and is there a way to still use Ctrl+j for menu-complete while having the behaviour that I want for this second command?
Well, according to some of your edits you've got CTRL+J bound to a bindkey macro command. That explains your bash issue considering readline's default behavior. Generally readline reads input in something very like stty raw mode. Input chars are read in as soon as they are typed and the shell's line-editor handles its own buffering. readline sets the terminal to raw when it takes the foreground, and it restores it to whatever its state was beforehand when calling up another foreground process group. CTRL+J is an ASCII newline. ABCDEFGHIJ is 10 bytes from NUL. Because you have configured readline to eat this character and subsequently to expand away what remains of any command-line on which it does with menu-completion, type-ahead won't work. The terminal is in a different state when the type-ahead is buffered by the kernel's line-discipline than it is when readline is in the foreground. When readline is in the foreground it does its own translation for input carriage returns -> newlines and the terminal driver doesn't convert it at all. When you enter your type-ahead input, though, the terminal driver will typically translate returns to newlines as can be configured with stty [-]icrnl. And so your return key is sufficient for commands entered live, but the newlines sent by the terminal's line-discipline are being interpreted as menu-complete commands. You might tell the terminal driver to stop this translation with stty -icrnl. This is likely to take at least a little bit of getting used to. Other commands that accept terminal input will usually expect newlines rather than returns, and so you'll either have to explicitly use CTRL+J when they control the foreground, or else teach them to handle the returns as bash does. You've already mentioned that read doesn't work as expected when reading form the terminal. Again, it likely would if you explicitly used CTRL+J to end an input line. Or... 
you can teach it:

read()
    if [ -t 0 ]
    then command read -d $'\r' "$@"
    else command read "$@"
    fi

It will probably be a lot less hassle in the long run if you found a different key for menu-complete, though. Newlines are kind of a big deal for most terminal applications.
How can I type a command to execute when the current command completes?
1,477,106,372,000
cmd utils such as sqllite don't have full flavoured console support. but I do remember there is something called **wrap which could wrap this console and enhance its capabilities with history, up/down etc... unfortunately, I forget its name. anyone give a hint?
I think you are talking about rlwrap. rlwrap runs the specified command, intercepting user input in order to provide readline's line editing, persistent history and completion. rlwrap tries to be completely transparent - you (or your shell) shouldn't notice any difference between command and rlwrap command - except the added readline functionality, of course. This should even hold true when you are re-directing, piping and sending signals from and to command, or when command manipulates its terminal settings. rlwrap man page
command to enhance console lines with history
1,477,106,372,000
I am writing a series of CLI tools that share the same parent command, similar to programs like git. program verb OPTIONS One of the action verbs, install, is designed to git clone as many repositories as URLs are specified. What is a robust and UNIX-like logical way to determine program success or failure? Good URLs > 0 → EXIT_SUCCESS Bad URLs == 0 → EXIT_SUCCESS Write number of valid URLs to standard output, then (1) or (2) Return the number of valid URLs. Other?
Your program should at least exit(3) EXIT_SUCCESS (i.e. 0) on success and probably EXIT_FAILURE (i.e. 1) on failure. You could copy (or be inspired by) FreeBSD sysexits.h for more failure codes (but I am not sure it is worth the effort). Don't forget to give some message to stderr (or through syslog(3)) for any kind of failure. From what you describe, failing to git clone even one (amongst many) repository for your install subcommand should be a failure. The user would probably do some corrective action (e.g. correct the spelling of the faulty URL) and then repeat the same command, so you might want it to be idempotent. Don't forget a --help option, and document any exit code outside of 0 and 1.
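That "any failed clone means overall failure" policy can be sketched in a few lines: keep processing every URL (so the user sees all the errors at once), but remember whether anything failed. The fetch and install_all names are invented for this demo, with a stub standing in for git clone.

```shell
# Stub standing in for `git clone`: URLs starting with "good" succeed.
fetch() {
    case $1 in
        good*) return 0 ;;
        *)     echo "cannot fetch $1" >&2; return 1 ;;
    esac
}

install_all() {
    status=0
    for url in "$@"; do
        fetch "$url" || status=1   # keep going, but remember the failure
    done
    return "$status"               # 0 only if every URL succeeded
}

all_good=0; install_all good1 good2 || all_good=$?
one_bad=0;  install_all good1 bad1  || one_bad=$?
```

The exit status is EXIT_SUCCESS only when every clone succeeded, matching the answer's recommendation.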
How to determine program success when running a sequence of similar tasks?
1,477,106,372,000
So I am trying to install Eclipse through terminal by watching a video, but when I attempt to run ./configure I get a message saying bash: ./configure: No such file or directory I looked around and then realized there was no configure in the eclipse folder unlike the video I was watching. The files that are showing in the eclipse folder are artifacts.xml dropins eclipse.ini icon.xpm plugins configuration eclipse features p2 readme So I am wondering what I should do from here to install eclipse. I really would appreciate anyone who can help me to finish installing this program. Thanks, Nova
Eclipse doesn't need an installation. Simply run ./eclipse inside the folder. That's all.

Or create the desktop file. In my example, the Eclipse folder is located in /opt/eclipse:

nano ~/.local/share/applications/eclipse.desktop

and add the lines below:

[Desktop Entry]
Type=Application
Name=Eclipse
Comment=Eclipse Integrated Development Environment
Icon=/opt/eclipse/icon.xpm
Exec=/opt/eclipse/eclipse
Terminal=false
Categories=Development;IDE;Java;
StartupWMClass=Eclipse
I'm trying to Install Eclipse But When I Try to Run ./configure It Doesn't Work
1,477,106,372,000
I have been trying to list files in a directory using ls and passing it different options. Does it have the ability to list the types of files as well? I want to know which ones are executable, shared libs or just ASCII files without running the file command on individual files.
ls itself won't show this information. You can pipe the output of find to file -f -, as follows:

$ find /usr/local/bin | file -f -
/usr/local/bin:              directory
/usr/local/bin/apt:          Python script, ASCII text executable
/usr/local/bin/mint-md5sum:  ASCII text
/usr/local/bin/search:       Bourne-Again shell script, ASCII text executable
/usr/local/bin/gnome-help:   Python script, ASCII text executable
/usr/local/bin/office-vb.sh: ASCII text
/usr/local/bin/pastebin:     Python script, ASCII text executable
/usr/local/bin/highlight:    POSIX shell script, ASCII text executable
/usr/local/bin/yelp:         Python script, ASCII text executable

Note that find is used instead of ls as it will print the full path, whereas ls will only print the file name. Therefore, if you simply need to do this with the files in your current directory, then:

ls | file -f -

would work.
How to list files on terminal so that we can see the file types such executable, ascii etc?
1,477,106,372,000
You'll have to bear with me; my Linux terminology is pretty bad. When I say virtual terminal, I'm talking about when you press Ctrl+Alt+ a function key (F1-F12). I think they are called virtual terminals.

So I found this snippet that allows you to start an X application in another terminal:

/usr/bin/xinit /opt/someAppFolder/SomeApplication -- :1

I found it in a form that allows you to run Steam in another terminal so you can easily switch out of your full screen games back to your desktop. But what I want to do is, from my desktop terminal (tty7), launch an application (that does not require X) in another terminal. I know I can switch to another terminal, log in, then run the app. But can I write a script to do that, so all I need to do is click a shortcut?
Use openvt. Note that you'll need to be root, because the terminal devices belong to root unless a user is logged in. openvt -c 8 myapp Add the option -s if you want to switch to vt 8 when the openvt command is run.
Debian Linux: Start/run application/process in another virtual terminal
1,477,106,372,000
I have been using the rename command to get control over my naming conventions across my system. In converting spaces in file names to hyphens, I have inadvertently created consecutive hyphens in some file names. These are proving difficult to remedy using the rename command. I have tried unsuccessfully with several different iterations of the following: rename 's/--/-/g' I do understand that double hyphens are reserved for end of arguments but a backslash escape doesn't seem to work here and I'm out of other ideas. I am relatively new to command line processing so your patience is appreciated.
To squash multiple hyphens (one hyphen followed by one or more hyphens) into a single one for all files in the current directory use: rename 's/--+/-/g' -- * The -- is important if files start with a hyphen, otherwise they would be interpreted as command line arguments. The * expands to the list of files in the current directory.
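If rename(1) isn't available (it ships in different flavours across distros), the same squash can be previewed with sed, whose regex 's/--*/-/g' collapses any run of one or more hyphens to a single one (single hyphens are harmlessly replaced with themselves). The filename below is invented for the demo.

```shell
name='my--photo---set.jpg'
# one-or-more hyphens -> one hyphen
squashed=$(printf '%s\n' "$name" | sed 's/--*/-/g')
```

This is handy for dry-running the substitution on a sample name before letting rename loose on a whole directory.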
How to change double hyphen to single hyphen of file names?
1,477,106,372,000
I found this bash command that creates a directory tree in your console window. I find it quite useful, but I don't really understand how all of the special characters work. Could someone help break it down for me? alias tree="ls -R | grep ":$" | sed -e 's/:$//' -e 's/[^-][^\/]*\//--/g' -e 's/^/ /' -e 's/-/|/'" The first two are pretty simple to understand, this is creating an alias in bash that recursively lists the files and folders beneath the current directory. 1. alias tree="" 2. ls -R But after those I'm pretty much lost. I get that it's piping the rest of the commands through grep and sed, but I don't understand the rest of it or even what these modifiers are called.
alias tree="ls -R | grep ":$" | sed -e 's/:$//' -e 's/[^-][^\/]*\//--/g' -e 's/^/ /' -e 's/-/|/'"

ls -R: list subdirectories recursively
grep ":$": keep only lines with : at the end of the line
sed -e 's/:$//': remove : at the end of the line
-e 's/[^-][^\/]*\//--/g': replace all path components except the last dir with --. To be precise: replace any char except -, followed by any char except / repeated zero or more times, followed by /.
-e 's/^/ /': add 3 spaces at the beginning of the line
-e 's/-/|/': replace the first - with |

There are many "problems" with this snippet, starting with the parsing of ls output, but leaving this aside, you can rewrite the sed part in a more compact way:

ls -R | grep ":$" | sed -e 's/:$//;s/[^-][^\/]*\//--/g;s/^/ /;s/-/|/'
What is the purpose of those special characters in the sed command? [closed]
1,477,106,372,000
My script takes 2 command-line arguments and then asks just a couple of questions; after these questions the script runs by itself. I'm able to pass the command-line arguments by just doing:

-bash-3.2$ nohup ./Script.sh 21 7
nohup: appending output to `nohup.out'
-bash-3.2$

Is there any way to supply the answers to these to-be-asked questions with nohup?
Instead of using nohup, you could have your script ask these questions interactively and then background and disown the remainder of whatever else it has to do.

Example

$ more a.bash
#!/bin/bash

read a
echo "1st arg: $a"
read b
echo "2nd arg: $b"

(
  echo "I'm starting"
  sleep 10
  echo "I'm done"
) &
disown

Sample run:

$ ./a.bash
10
1st arg: 10
20
2nd arg: 20
I'm starting
$

Check on it:

$ ps -eaf | grep a.bash
saml      6774     1  0 01:02 pts/1    00:00:00 /bin/bash ./a.bash
saml      6780 10650  0 01:02 pts/1    00:00:00 grep --color=auto a.bash

10 seconds later:

$ I'm done
How do I Nohup an interactive shell-script?
1,477,106,372,000
I'd like to apply chmods to files and folders in one line, Basically: chmod 700 ./* -R # but only apply to folders chmod 600 ./* -R # but only apply to files Of course I searched google and read manpages. So the question is, does the following have any drawbacks, risks or is this safe? find . -type f -print0 | xargs -0 chmod 600 && find . -type d -print0 | xargs -0 chmod 700
There is another possibility which I discovered using ACLs: the uppercase X. Given the following structure (three directories, three files):

drw------- 1/
drw------- 2/
drw------- 3/
-rw------- 4
-rw------- 5
-rw------- 6

It is possible to set the execution bit for directories only by using:

chmod u+X *

Which will result in:

drwx------ 1/
drwx------ 2/
drwx------ 3/
-rw------- 4
-rw------- 5
-rw------- 6

Compared to find and xargs, this has the advantage of requiring one command only, and therefore no pipe. For this reason, I would be inclined to say that this is faster.

In your example, you are basically using two commands in one line: the first searches files, prints their names and xargs does the rest; the second searches directories, same behaviour. In each of these calls, you run three commands:

Run find so that it prints out the names of the files you're interested in.
Pass these names to xargs so it acts as a wrapper around chmod (which is, therefore, called only once).

It is also interesting to note that by using &&, you make sure the second command is executed only if the first one succeeds (yet, I don't see how find could fail in your case).

However, when using find only (-exec), the chmod command is called for each file matching the find criteria. If you have 200 files in your directory, chmod will be called 200 times, which is slower than calling chmod once on 200 files. Of course, in the end, since chmod is a relatively quick and casual operation, you will not feel the difference on a reasonable number of files.

Finally, another detail about passing file names between programs: spaces. According to how each command processes file names (using proper quotes or not), you may run into trouble while processing files with spaces in their names (since This Super Picture.png could quickly be processed as This, Super and Picture.png).
Apply recursive chmod to files or folders only
1,477,106,372,000
I prefer to set tabs 4 however this may have some side effects, such as for example ls out put may look not properly aligned. How might I configure the terminal / cat to use four spaces for tabs for cat only? Should I just alias / wrap cat to something that sets tabs 4, runs /bin/cat, then sets it back? My thinking is that this route is less preferable since in fact I would like this behaviour for less, diff, and other utilities.
The curses program tabs will allow you to change what the terminal believes to be the width of a ^I. This would make a simple script:

tabs -4
cat "$@"
tabs -8

However, the processing of tab characters on terminals is notoriously wonky and I'm of the impression that you should never mess with them. I suggest using expand as in:

expand -4 "$@"

which is actually closer to what you intend.

added in reply to comment: Far too many scripts count on cat meaning /bin/cat, which explicitly does not change tabs. I'm not sure if you mean to replace or supersede /bin/cat, but you shouldn't. Better would be:

alias tcat='expand -4'

or

function tcat() { expand -4 "$@"; }

or similar.
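expand is easy to try out on a pipe, which also makes its behaviour verifiable. Note that -t 4 is the portable spelling: POSIX only specifies -t, while -4 is a common extension.

```shell
# A tab at column 0 with 4-column stops expands to 3 spaces after "a"
expanded=$(printf 'a\tb\n' | expand -t 4)
```

So "a<TAB>b" becomes "a" followed by three spaces and "b", aligning b at column 4.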
How to set default tabs only for cat?
1,477,106,372,000
I'm running Debian 6 in a VM on a network that requires Windows authentication to a proxy server before it can run any http/https/ftp connection. I've tried: export http_proxy=http://domain\\username:password@proxyIPaddress:8080 and same for ftp_proxy, but I keep getting: "Proxy server requires authentication". My password does have a few special characters so I've escaped them using the backslash, just like I do for the domain name. If I hit the up arrow to view my history, the command looks correct (the escape backslashes are gone and the command and password look correct). What else should I try?
You should pass the export http_proxy command as a string. This should do the trick: export http_proxy='http://domain\username@proxyIPaddress:8080/'
Proxy server with authentication not working at command line
1,477,106,372,000
Is it possible to start an xfreerdp session into Microsoft windows from a command-line only install of Linux? The command I use from a full blown Linux install is this: $ sudo xfreerdp /v:farm.company.com /d:company.com \ /u:oshiro /p:oshiro_password /g:rds.company.com This command works fine. However, when I run the same command from a command-line install of Linux, I get the following error message: Please check that the $DISPLAY environment variable is properly set. freerdp_set_last_error 0x20001 libfreerdp/core/freerdp.c:97: freerdp_pre_connect failed Both the GUI based Linux installation and the command-line only installation of Linux I have are Ubuntu 12.04. Both installations have xfreerdp version 1.2.0-beta1
I assume xfreerdp is a GUI program (an "X client"). So on Linux, you need an "X server" to run it. That's what you have on the GUI-based Linux box. You cannot run it on the command-line-only Linux by itself.

Depending on what you are trying to do, it could make sense to run it on the command-line-only Linux and show the GUI somewhere else over the network. That's what DISPLAY is for. You could do something like:

export DISPLAY=guilinuxbox:0.0
xfreerdp ...

(but you would need to set up the permissions to do so)

For illustrating what to expect when running a plain X server (as discussed in the comments for now): this is what a plain X server looks like - you are seeing the root window with its default pattern. There would also be a pointer with an "X" shape:
Error because $DISPLAY environment variable is not properly set
1,477,106,372,000
I have seen the following line in a bash script for killing a process (in this case started with the command loadgen): ps xww | grep -i "loadgen" | grep "PATTERNMATCH_FACT.xml" | cut -c1-5 | xargs -i kill {} 2>/dev/null I would like to understand the reason for the piping after the two greps in the command above. The loadgen command is started as follows (it's part of the startup script): ./loadgen -XMLFile ${DEMODIR}/bam-103-pattern-match/data/PATTERNMATCH_FACT.xml -duration 0 -frequency 2
ps xww gives the following output ... 1 ? Ss 0:00 init [2] 1804 pts/0 Ss 0:00 -bash ... After the two greps it pipes the output to cut. This command cuts characters 1-5 out of each line. In the output above that would be the PIDs: 1 1804 This is piped to xargs. xargs builds commands that look like this: kill 1 kill 1804 and executes them. 2>/dev/null means that all error messages are sent to the pseudo device /dev/null. So your command kills every process that is grepped out of the ps command. Or see explainshell.
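To see why cut -c1-5 yields the PIDs, you can replay the pipeline on a captured sample of ps output (the sample lines below are made up, but mimic the real column layout, where the PID sits right-aligned in the first five characters):

```shell
# Two fake lines in the layout `ps xww` produces
ps_sample='    1 ?        Ss     0:00 init [2]
 1804 pts/0    S+     0:02 ./loadgen -XMLFile data/PATTERNMATCH_FACT.xml -duration 0'

printf '%s\n' "$ps_sample" \
  | grep -i 'loadgen' \
  | grep 'PATTERNMATCH_FACT.xml' \
  | cut -c1-5        # -> " 1804", the PID that xargs passes to kill
```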
How does the piping in this command ultimately achieve to kill the process?
1,477,106,372,000
Is it possible? And in reverse alphabetical order? Essentially, this: How can I move files by type recursively from a directory and its sub-directories to another directory? Except that each file is not moved to the destination directory unless a separate process has fetched the sole file in that destination directory and moved it elsewhere (thus the target folder is empty and 'ready' for the next file to be moved there).
Do you want something like this? #!/usr/bin/env bash ## This is the target path, the directory ## you want to move files to. target="some/path with/spaces"; ## Find all files in the current directory, sort ## them reverse alphabetically and iterate through them find . -maxdepth 1 -type f | sort -r | while IFS= read -r file; do ## Set the counter back to 0 for each file counter=0; ## The counter will be 0 until the file is moved while [ "$counter" -eq 0 ]; do ## If the directory has no files if find "$target" -maxdepth 0 -empty | read; then ## Move the current file to $target and increment ## the counter. mv -v "$file" "$target" && counter=1; else ## Uncomment the line below for debugging # echo "Directory not empty: $(find "$target" -mindepth 1)" ## Wait for one second. This avoids spamming ## the system with multiple requests. sleep 1; fi; done; done This script will run until all files have been moved. It will only move a file into $target if the target is empty, so it will hang forever unless another process is removing the files as they come in. It will break if your files' or $target's names contain newlines (\n) but should be fine with spaces and other strange characters.
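The only non-obvious idiom in there is the emptiness test: find "$target" -maxdepth 0 -empty prints the directory name only when the directory is empty, and piping that into read turns it into an exit status. You can verify it in isolation:

```shell
d=$(mktemp -d)

# Empty directory: find prints its name, so read succeeds
if find "$d" -maxdepth 0 -empty | read -r line; then echo "empty"; fi

touch "$d/somefile"

# Non-empty directory: find prints nothing, so read fails
if find "$d" -maxdepth 0 -empty | read -r line; then echo "empty"; else echo "not empty"; fi
```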
Automatically moving files to a directory, one by one, and only when the target folder is empty
1,477,106,372,000
Possible Duplicate: What do the numbers in a man page mean? If I type man ls, I see LS(1) in the top left and top right corners of the manpage. I also see programs on the internet being referred to this way, e.g. man(1), xman(1x), apropos(1), makewhatis(8) and catman(8). What are these numbers (and in some cases letters)?
It's the section number, see man man A section, if provided, will direct man to look only in that section of the manual. The default action is to search in all of the available sections, following a predefined order and to show only the first page found, even if page exists in several sections. The table below shows the section numbers of the manual followed by the types of pages they contain. 1 Executable programs or shell commands 2 System calls (functions provided by the kernel) 3 Library calls (functions within program libraries) 4 Special files (usually found in /dev) 5 File formats and conventions eg /etc/passwd 6 Games 7 Miscellaneous (including macro packages and conventions), e.g. man(7), groff(7) 8 System administration commands (usually only for root) 9 Kernel routines [Non standard] For example, stat has three sections: $ man -k stat | grep "^stat " stat (1) - display file or file system status stat (2) - get file status stat (3p) - get file status So if you type man 1 stat it's not the same as man 2 stat
What does the number mean in a man page? [duplicate]
1,477,106,372,000
For some strange reason my filename autocomplete is behaving differently than normal. Given the following file structure ./foobar/file.txt, if I want to delete file.txt, I type rm foob<TAB><TAB> and let the command line autocomplete the filename out to rm foobar/file.txt. But right now, after hitting the first <TAB> my command gets autocompleted to rm foobar (with a space after foobar). Is it possible that I accidentally changed it to this behavior? How can I change it back?
In my experience, this is usually caused by some misbehaving configuration in one of the files in /etc/bash_completion.d, which get installed and updated as you install various packages. My recommendation would be to move all files out of that directory, start a fresh shell, and see if the behavior returns to normal. If so, you can move files back into that directory one (or a group) at a time to see which ones cause the problem. Once you narrow it down, report a bug in whichever package installed that file!
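A sketch of that bisection on a scratch copy (the snippet names here are invented; for the real thing, point the mv commands at /etc/bash_completion.d and run them with sudo):

```shell
# Build a scratch stand-in for /etc/bash_completion.d
compdir=$(mktemp -d)
touch "$compdir/docker" "$compdir/git" "$compdir/ssh"

# Step 1: move everything aside
offdir=$compdir.off
mkdir -p "$offdir"
mv "$compdir"/* "$offdir"/

# Step 2: restore one snippet per round; after each restore, open a
# fresh shell and test until the broken completion reappears
for f in "$offdir"/*; do
    mv "$f" "$compdir"/
    echo "restored ${f##*/} -- open a new shell and test completion"
    break
done
```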
How to fix Linux filename tab autocomplete that is appending a space instead of trailing slash on directories?
1,477,106,372,000
I am trying to wget a tarball from github.com and, without creating a temporary file, extract a subdirectory from it: wget -qO- https://github.com/django-nonrel/django-nonrel/tarball/develop | tar xzf - django It gives me an error saying: tar: django: Not found in archive tar: Exiting with failure status due to previous errors Apparently I am doing something wrong.
Since the django directory sits below the tarball's top-level directory, you'll need the --strip-components=1 option (assuming you're using GNU tar), and you need to tell tar that you're looking for a subdirectory called django. wget -qO- https://github.com/django-nonrel/django-nonrel/tarball/develop | tar --strip-components=1 -zxf - \*/django FWIW, I also had to use the --no-check-certificate option of wget.
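You can reproduce the behaviour locally without the network by building a tiny tarball shaped like github's (the top-level directory name below is invented; github generates one like user-repo-sha1, and --wildcards makes the pattern matching explicit on newer GNU tar):

```shell
cd "$(mktemp -d)"
mkdir -p django-nonrel-abc1234/django/db
echo 'x' > django-nonrel-abc1234/django/db/models.py
tar czf tarball.tar.gz django-nonrel-abc1234

# Same idea as the wget pipe: peel off the top-level directory and
# keep only the django subdirectory
cat tarball.tar.gz | tar --wildcards --strip-components=1 -zxf - '*/django'
ls django/db    # the extracted subdirectory
```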
Extract directory from wget's stdout
1,477,106,372,000
There is a search page in a form http://example.com/search.php and it sends a search query via POST request. I want to fetch this request via curl command line tool to inspect a POST request. The HTML form looks like this: <form action="search.php" method="POST"> <input type="text" name="search" size=20><br> SZ<input type="checkbox" checked name="sz"> NZ<input type="checkbox" checked name="nz"> <input type="submit" name="search_term" value="search" > </form> What should my curl command look like?
As an example, to send a search of "foo" with sz checked and nz unchecked: curl -d "search_term=search&search=foo&sz=on&nz=off" http://example.com/search.php Note that a real browser omits an unchecked checkbox from the POST body entirely rather than sending nz=off, so leave the field out if you want to mimic that exactly.
How to format a curl command for a special task?
1,477,106,372,000
When linux commands list their usage, this is usually how they do it (e.g. wget): wget [option]... [URL]... From what I understand of this pattern of specifying command usage, this is not the usual regex way of specifying patterns, and for the wget command it says that it is not mandatory to specify any options; by that logic it should not be mandatory to specify any URL either. I mean I can directly do wget www.google.com and this will work, so the options are not mandatory. If the options are not mandatory because they are in square brackets, then following through with that logic, specifying a URL should not be mandatory either, and just wget on its own should work as well. My question is: is there some document where this pattern of specifying command usage is elaborated on?
Most man pages use a syntax where [...] indicates optional arguments and '|' indicates a logical OR. It depends who writes the man page, as there is no authority that dictates what a man page must read like (POSIX does describe a recommended notation in its Utility Conventions chapter, but man-page authors are not bound by it). More specific to your question however, the man page reads true in this case. Either you can specify a URL through the -i switch or you can supply a URL itself. So you can think of the options as "conditionally optional". Really it should probably read something like ([option (excluding -i)] (-i file | URL)) but you can see how this would get complicated very quickly. So you need to take the quick descriptions with a grain of salt. In my experience the command syntax is usually the least of your worries. Also, I'm nitpicking here, but what you are seeing isn't a regex ;)
Unix/Linux command syntax
1,477,106,372,000
If I use tail -f *filename* I get a real nice display of whatever is changing in a given file. However, sometimes I want to be able to search this text or otherwise look it over slowly. Is there any way I can output just the changes to a log file between now and, say, whenever I hit Ctrl-C?
This should output everything that goes by into output.txt, if that's what you're asking for: tail -f filename | tee output.txt
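tee duplicates its input: the data keeps scrolling by on the terminal while a copy accumulates in output.txt, which you can then search at leisure. A quick demonstration with a finite input instead of tail -f:

```shell
printf 'one\ntwo\nthree\n' | tee output.txt   # still prints to the terminal
grep -n 'two' output.txt                       # -> 2:two
```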
Output the changes to a log file
1,477,106,372,000
How can I use a bash command to control the keyboard? E.g., what is the command in bash for pressing Ctrl+C, Ctrl+L, etc.?
AutoKey is a desktop automation utility for Linux and X11. It allows the automation of virtually any task by responding to typed abbreviations and hotkeys. It offers a full-featured GUI that makes it highly accessible for novices, as well as Python scripting. See AutoKey's homepage for more. Note: when I first looked at AutoKey, its scripting interface could not handle Unicode fully. I forget exactly how that was, but it was something like it could process UTF-8 internally but couldn't deliver it to the something-or-other. If it wasn't for that, I'd be using it today; it looks great, and it has a good reputation (I know it from Windows-land). Otherwise, it is quite comprehensive. I believe it is a fork of AutoIt3 (again from Windows); I've used AutoIt3 and it is absolutely fully featured, and I think AutoKey is similar. AutoKey is available in the Ubuntu repository. xdotool lets you programmatically (or manually) simulate keyboard input and mouse activity, move and resize windows, etc.: xdotool key --clearmodifiers --delay 40 "ctrl+shift+u" Another option is the xmacro package. It contains xmacroplay and xmacrorec (and xmacrorec2). xmacrorec can be used to record mouse and keyboard events on any X11 display; xmacroplay can be used to play back recorded events or send any other mouse/keyboard events you choose: echo -n "KeyStrPress Control_L KeyStrPress Alt_L KeyStrPress a KeyStrRelease a KeyStrRelease Alt_L KeyStrRelease Control_L"| xmacroplay :0.0 &>/dev/null
How to use bash control a keyboard
1,477,106,372,000
I would like a way to refresh a specific window from the terminal command-line. Probably I will need a command to find the id or the name of the window and a command to refresh it. It looks like xrefresh cannot do this; how could I do this?
With the xte tool from the xautomation package it is as simple as xte "key F5" It will act on the currently active window, so you'd have to make sure the proper one is focused first.
How to refresh a window by terminal? Or how to simulate `F5`?
1,477,106,372,000
Possible Duplicate: How to clean up file extensions? I'd like to rename files with extension .flac.mp3 to extension .mp3. I used the following command $ for i in *; do mv $i `echo $i | sed 's/.flac//g'`; done This writes the following error message for every file: mv: target `file.mp3' is not a directory Where is my mistake? Thank you.
Maybe these files have names containing whitespace. Simple rule of shell programming: Always use double quotes around variable and command substitutions (unless you know why you need to leave them out). So: for i in *; do mv "$i" "$(echo "$i" | sed 's/.flac//g')"; done While you're at it, there are a few things you could do better in that command, even if they aren't the source of your problem. You should run your command only on the files it's supposed to affect, not every file in the current directory. The sed regexp .flac could match something other than the extension. The command may also fail if you have a file name that begins with a - or that contains a backslash (with some versions of echo). for i in *.flac.mp3; do mv -- "$i" "$(echo "$i" | sed 's/\.flac\.mp3$/\.mp3/')"; done But in fact you needn't bother with sed here, there's a shell construct to remove a suffix from a string. for i in *.flac.mp3; do mv -- "$i" "${i%.flac.mp3}.mp3"; done -- ensures that even if $i begins with a -, it won't be interpreted as an option. An alternative method is to ensure that $i never begins with -, for example by ensuring that all relative file names are prefixed with ./ (which has no effect on what file is designated since . is the current directory).     for i in ./*.flac.mp3; do mv "$i" "${i%.flac.mp3}.mp3"; done There are plenty of tools to automate file renamings; browse rename here for a few ideas. For example, if your shell is zsh: autoload zmv # goes into your .zshrc zmv '(*).flac.mp3' '$1.mp3'
Multiple renaming files [duplicate]
1,477,106,372,000
I am wondering how to change the time format when I am modifying a time with the touch command in the c shell. For example I need to use touch -a -t 051512002015 file.txt This will change the access time to May 15th 2015, 12:00. What I want to do though is (notice year is first now) touch -a -t 201505151200 file.txt This is how touch works in Linux. I use UnxUtils on Windows, which I am almost sure uses c-shell touch. I want the commands to be consistent. Is it possible to change the c-shell touch command?
This isn't about the shell, touch is an external program. There were historically two syntaxes for the date argument to touch: touch -t CCYYMMDDhhmm.SS # CC or CCYY may be omitted; .SS may be omitted touch MMDDhhmmYY # YY may be omitted The command appeared in Unix Seventh edition with no date argument. BSD versions acquired the -t option (with all the date components in descending order) somewhere around 4.4BSD. System V (e.g. SunOS 4.1.3) had the straight-date form with the year at the end. By the time of Single Unix v2 (based on POSIX:1992), the System V form was considered obsolescent, and it is no longer included in Single Unix v3 (POSIX:2001). I recommend using the standard (BSD) syntax in your script. On the legacy systems that require the BSD syntax, arrange to have a compatible touch. Several approaches are possible: Write a wrapper function that shuffles the arguments around if it detects that your script is running on a legacy system. Arrange to have a PATH that puts the standard-compliant directories ahead of the legacy directories. (You may get more specific advice if you post the exact legacy variants and versions you need to support.)
Change the time format for the touch command in BSD
1,477,106,372,000
I have problem with wget behavior for 1.10 and later versions. I am using wget to download a report from my app. version 1.10.2: >wget-1.10.2.exe --http-user=trader --http-passwd=trader http://192.168.1.222:8080/myapp/reports/FP201010271100 --11:52:46-- http://192.168.1.222:8080/myapp/reports/FP201010271100 => `FP201010271100.5' Connecting to 192.168.1.222:8080... connected. HTTP request sent, awaiting response... 200 OK Length: unspecified [text/csv] [ <=> ] 82,068 --.--K/s 11:52:46 (1019.65 KB/s) - `FP201010271100' saved [82068] version 1.11.4: >wget-1.11.4.exe --http-user=trader --http-passwd=trader http://192.168.1.222:8080/myapp/reports/FP201010271100 --2010-10-27 12:15:10-- http://192.168.1.222:8080/myapp/reports/FP201010271100 Connecting to 192.168.1.222:8080... connected. HTTP request sent, awaiting response... 302 Found Location: http://192.168.1.222:8080/myapp/spring_security_login [following] --2010-10-27 12:15:10-- http://192.168.1.222:8080/myapp/spring_security_login Reusing existing connection to 192.168.1.222:8080. HTTP request sent, awaiting response... 404 Not Found 2010-10-27 12:15:10 ERROR 404: Not Found. Do I need to add some parameter for version 1.11 and later? Update - Wireshark dump version 1.10.2: No. 
Time Source Destination Protocol Info 17 13.756462 10.0.2.15 172.25.9.238 TCP kazaa > glrpc [SYN] Seq=0 Win=64240 Len=0 MSS=1460 SACK_PERM=1 18 13.767667 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [SYN, ACK] Seq=0 Ack=1 Win=65535 Len=0 MSS=1460 19 13.767721 10.0.2.15 172.25.9.238 TCP kazaa > glrpc [ACK] Seq=1 Ack=1 Win=64240 Len=0 20 13.774991 10.0.2.15 172.25.9.238 TCP kazaa > glrpc [PSH, ACK] Seq=1 Ack=1 Win=64240 Len=189 21 13.776327 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [ACK] Seq=1 Ack=190 Win=65535 Len=0 22 13.933894 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [ACK] Seq=1 Ack=190 Win=65535 Len=1420 23 13.934039 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [PSH, ACK] Seq=1421 Ack=190 Win=65535 Len=1276 24 13.934180 10.0.2.15 172.25.9.238 TCP kazaa > glrpc [ACK] Seq=190 Ack=2697 Win=64240 Len=0 25 13.935194 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [PSH, ACK] Seq=2697 Ack=190 Win=65535 Len=1348 26 13.945109 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [PSH, ACK] Seq=4045 Ack=190 Win=65535 Len=1348 27 13.945473 10.0.2.15 172.25.9.238 TCP kazaa > glrpc [ACK] Seq=190 Ack=5393 Win=64240 Len=0 28 13.948389 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [ACK] Seq=5393 Ack=190 Win=65535 Len=1420 29 13.948443 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [ACK] Seq=6813 Ack=190 Win=65535 Len=1420 30 13.948491 10.0.2.15 172.25.9.238 TCP kazaa > glrpc [ACK] Seq=190 Ack=8233 Win=64240 Len=0 31 13.948555 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [ACK] Seq=8233 Ack=190 Win=65535 Len=1420 32 13.948604 10.0.2.15 172.25.9.238 TCP kazaa > glrpc [ACK] Seq=190 Ack=9653 Win=62820 Len=0 33 13.948650 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [PSH, ACK] Seq=9653 Ack=190 Win=65535 Len=1132 34 13.954507 10.0.2.15 172.25.9.238 TCP kazaa > glrpc [ACK] Seq=190 Ack=10785 Win=64240 Len=0 35 13.955210 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [PSH, ACK] Seq=10785 Ack=190 Win=65535 Len=1348 36 13.972055 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [PSH, ACK] Seq=12133 Ack=190 Win=65535 Len=1348 37 13.972179 10.0.2.15 
172.25.9.238 TCP kazaa > glrpc [ACK] Seq=190 Ack=13481 Win=64240 Len=0 38 13.973166 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [PSH, ACK] Seq=13481 Ack=190 Win=65535 Len=1348 39 13.973230 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [PSH, ACK] Seq=14829 Ack=190 Win=65535 Len=1348 40 13.973290 10.0.2.15 172.25.9.238 TCP kazaa > glrpc [ACK] Seq=190 Ack=16177 Win=64240 Len=0 41 13.975002 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [PSH, ACK] Seq=16177 Ack=190 Win=65535 Len=1348 42 13.975035 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [PSH, ACK] Seq=17525 Ack=190 Win=65535 Len=1348 43 13.975057 10.0.2.15 172.25.9.238 TCP kazaa > glrpc [ACK] Seq=190 Ack=18873 Win=64240 Len=0 44 13.975210 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [PSH, ACK] Seq=18873 Ack=190 Win=65535 Len=1348 45 13.975523 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [PSH, ACK] Seq=20221 Ack=190 Win=65535 Len=1348 46 13.975553 10.0.2.15 172.25.9.238 TCP kazaa > glrpc [ACK] Seq=190 Ack=21569 Win=64240 Len=0 47 13.983036 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [PSH, ACK] Seq=21569 Ack=190 Win=65535 Len=1348 48 13.983260 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [ACK] Seq=22917 Ack=190 Win=65535 Len=1420 49 13.983290 10.0.2.15 172.25.9.238 TCP kazaa > glrpc [ACK] Seq=190 Ack=24337 Win=64240 Len=0 50 13.983324 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [PSH, ACK] Seq=24337 Ack=190 Win=65535 Len=1276 51 13.983833 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [PSH, ACK] Seq=25613 Ack=190 Win=65535 Len=1348 52 13.983867 10.0.2.15 172.25.9.238 TCP kazaa > glrpc [ACK] Seq=190 Ack=26961 Win=64240 Len=0 53 13.984210 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [PSH, ACK] Seq=26961 Ack=190 Win=65535 Len=1348 54 13.984641 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [ACK] Seq=28309 Ack=190 Win=65535 Len=1420 55 13.984672 10.0.2.15 172.25.9.238 TCP kazaa > glrpc [ACK] Seq=190 Ack=29729 Win=64240 Len=0 56 13.984705 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [PSH, ACK] Seq=29729 Ack=190 Win=65535 Len=1276 57 13.984946 172.25.9.238 10.0.2.15 TCP glrpc > 
kazaa [PSH, ACK] Seq=31005 Ack=190 Win=65535 Len=1348 58 13.984968 10.0.2.15 172.25.9.238 TCP kazaa > glrpc [ACK] Seq=190 Ack=32353 Win=64240 Len=0 59 13.986763 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [PSH, ACK] Seq=32353 Ack=190 Win=65535 Len=1348 60 13.986792 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [ACK] Seq=33701 Ack=190 Win=65535 Len=1420 61 13.987044 10.0.2.15 172.25.9.238 TCP kazaa > glrpc [ACK] Seq=190 Ack=35121 Win=64240 Len=0 62 13.987088 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [PSH, ACK] Seq=35121 Ack=190 Win=65535 Len=1276 63 13.987180 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [PSH, ACK] Seq=36397 Ack=190 Win=65535 Len=1348 64 13.987194 10.0.2.15 172.25.9.238 TCP kazaa > glrpc [ACK] Seq=190 Ack=37745 Win=64240 Len=0 65 13.993156 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [ACK] Seq=37745 Ack=190 Win=65535 Len=1420 66 13.993181 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [PSH, ACK] Seq=39165 Ack=190 Win=65535 Len=1276 67 13.993220 10.0.2.15 172.25.9.238 TCP kazaa > glrpc [ACK] Seq=190 Ack=40441 Win=64240 Len=0 68 14.013124 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [PSH, ACK] Seq=40441 Ack=190 Win=65535 Len=1348 69 14.014498 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [ACK] Seq=41789 Ack=190 Win=65535 Len=1420 70 14.014539 10.0.2.15 172.25.9.238 TCP kazaa > glrpc [ACK] Seq=190 Ack=43209 Win=64240 Len=0 71 14.014574 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [PSH, ACK] Seq=43209 Ack=190 Win=65535 Len=1276 72 14.014585 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [PSH, ACK] Seq=44485 Ack=190 Win=65535 Len=1348 73 14.014595 10.0.2.15 172.25.9.238 TCP kazaa > glrpc [ACK] Seq=190 Ack=45833 Win=64240 Len=0 74 14.015232 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [PSH, ACK] Seq=45833 Ack=190 Win=65535 Len=1348 75 14.015248 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [PSH, ACK] Seq=47181 Ack=190 Win=65535 Len=1348 76 14.015263 10.0.2.15 172.25.9.238 TCP kazaa > glrpc [ACK] Seq=190 Ack=48529 Win=64240 Len=0 77 14.015911 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [PSH, ACK] Seq=48529 
Ack=190 Win=65535 Len=1348 78 14.016112 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [PSH, ACK] Seq=49877 Ack=190 Win=65535 Len=1348 79 14.016132 10.0.2.15 172.25.9.238 TCP kazaa > glrpc [ACK] Seq=190 Ack=51225 Win=64240 Len=0 80 14.016643 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [PSH, ACK] Seq=51225 Ack=190 Win=65535 Len=1348 81 14.016865 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [PSH, ACK] Seq=52573 Ack=190 Win=65535 Len=1348 82 14.016887 10.0.2.15 172.25.9.238 TCP kazaa > glrpc [ACK] Seq=190 Ack=53921 Win=64240 Len=0 83 14.017095 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [PSH, ACK] Seq=53921 Ack=190 Win=65535 Len=1348 84 14.018786 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [ACK] Seq=55269 Ack=190 Win=65535 Len=1420 85 14.018823 10.0.2.15 172.25.9.238 TCP kazaa > glrpc [ACK] Seq=190 Ack=56689 Win=64240 Len=0 86 14.018981 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [PSH, ACK] Seq=56689 Ack=190 Win=65535 Len=1276 87 14.018994 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [PSH, ACK] Seq=57965 Ack=190 Win=65535 Len=1348 88 14.019008 10.0.2.15 172.25.9.238 TCP kazaa > glrpc [ACK] Seq=190 Ack=59313 Win=64240 Len=0 89 14.024666 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [ACK] Seq=59313 Ack=190 Win=65535 Len=1420 90 14.024685 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [PSH, ACK] Seq=60733 Ack=190 Win=65535 Len=1276 91 14.024712 10.0.2.15 172.25.9.238 TCP kazaa > glrpc [ACK] Seq=190 Ack=62009 Win=64240 Len=0 92 14.025221 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [PSH, ACK] Seq=62009 Ack=190 Win=65535 Len=1348 93 14.026959 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [PSH, ACK] Seq=63357 Ack=190 Win=65535 Len=1348 94 14.027000 10.0.2.15 172.25.9.238 TCP kazaa > glrpc [ACK] Seq=190 Ack=64705 Win=64240 Len=0 95 14.027035 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [PSH, ACK] Seq=64705 Ack=190 Win=65535 Len=1348 96 14.027045 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [PSH, ACK] Seq=66053 Ack=190 Win=65535 Len=1348 97 14.027053 10.0.2.15 172.25.9.238 TCP kazaa > glrpc [ACK] Seq=190 Ack=67401 Win=64240 
Len=0 98 14.027637 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [PSH, ACK] Seq=67401 Ack=190 Win=65535 Len=1348 99 14.028162 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [PSH, ACK] Seq=68749 Ack=190 Win=65535 Len=1348 100 14.028190 10.0.2.15 172.25.9.238 TCP kazaa > glrpc [ACK] Seq=190 Ack=70097 Win=64240 Len=0 101 14.028634 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [PSH, ACK] Seq=70097 Ack=190 Win=65535 Len=1348 102 14.029058 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [PSH, ACK] Seq=71445 Ack=190 Win=65535 Len=1348 103 14.029076 10.0.2.15 172.25.9.238 TCP kazaa > glrpc [ACK] Seq=190 Ack=72793 Win=64240 Len=0 104 14.029392 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [PSH, ACK] Seq=72793 Ack=190 Win=65535 Len=1348 105 14.029819 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [PSH, ACK] Seq=74141 Ack=190 Win=65535 Len=1348 106 14.029841 10.0.2.15 172.25.9.238 TCP kazaa > glrpc [ACK] Seq=190 Ack=75489 Win=64240 Len=0 107 14.030139 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [PSH, ACK] Seq=75489 Ack=190 Win=65535 Len=1348 108 14.030510 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [ACK] Seq=76837 Ack=190 Win=65535 Len=1420 109 14.030530 10.0.2.15 172.25.9.238 TCP kazaa > glrpc [ACK] Seq=190 Ack=78257 Win=64240 Len=0 110 14.030557 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [PSH, ACK] Seq=78257 Ack=190 Win=65535 Len=1276 111 14.031644 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [PSH, ACK] Seq=79533 Ack=190 Win=65535 Len=1348 112 14.031673 10.0.2.15 172.25.9.238 TCP kazaa > glrpc [ACK] Seq=190 Ack=80881 Win=64240 Len=0 113 14.032084 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [PSH, ACK] Seq=80881 Ack=190 Win=65535 Len=1411 114 14.032093 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [FIN, ACK] Seq=82292 Ack=190 Win=65535 Len=0 115 14.032104 10.0.2.15 172.25.9.238 TCP kazaa > glrpc [ACK] Seq=190 Ack=82293 Win=62829 Len=0 116 14.040620 10.0.2.15 172.25.9.238 TCP kazaa > glrpc [FIN, ACK] Seq=190 Ack=82293 Win=62829 Len=0 117 14.041564 172.25.9.238 10.0.2.15 TCP glrpc > kazaa [ACK] Seq=82293 Ack=191 Win=65535 Len=0 
version 1.11.4: No. Time Source Destination Protocol Info 1 0.000000 10.0.2.15 172.25.9.238 TCP mpc-lifenet > glrpc [SYN] Seq=0 Win=64240 Len=0 MSS=1460 SACK_PERM=1 2 0.021323 172.25.9.238 10.0.2.15 TCP glrpc > mpc-lifenet [SYN, ACK] Seq=0 Ack=1 Win=65535 Len=0 MSS=1460 3 0.021379 10.0.2.15 172.25.9.238 TCP mpc-lifenet > glrpc [ACK] Seq=1 Ack=1 Win=64240 Len=0 4 0.026476 10.0.2.15 172.25.9.238 giFT Response: GET /intraday/faces/reports/FP201010271100 HTTP/1.0 5 0.026990 172.25.9.238 10.0.2.15 TCP glrpc > mpc-lifenet [ACK] Seq=1 Ack=143 Win=65535 Len=0 6 0.040137 172.25.9.238 10.0.2.15 giFT Request: HTTP/1.0 302 Found 7 0.078133 10.0.2.15 172.25.9.238 giFT Response: GET /intraday/spring_security_login HTTP/1.0 8 0.078741 172.25.9.238 10.0.2.15 TCP glrpc > mpc-lifenet [ACK] Seq=237 Ack=278 Win=65535 Len=0 9 0.097840 172.25.9.238 10.0.2.15 giFT Request: HTTP/1.0 404 Not Found 10 0.102113 10.0.2.15 172.25.9.238 TCP mpc-lifenet > glrpc [RST, ACK] Seq=278 Ack=1585 Win=0 Len=0 11 0.103253 172.25.9.238 10.0.2.15 giFT Request: ipt src="/myapp/org/apache/myfaces/tobago/renderkit/html/standard/standard/script/tobago.js" type="text/javascript" 12 0.103481 10.0.2.15 172.25.9.238 TCP mpc-lifenet > glrpc [RST] Seq=278 Win=0 Len=0 13 0.103657 172.25.9.238 10.0.2.15 giFT Request: .startBody = new Date(); 14 0.103906 10.0.2.15 172.25.9.238 TCP mpc-lifenet > glrpc [RST] Seq=278 Win=0 Len=0 15 0.104174 172.25.9.238 10.0.2.15 TCP glrpc > mpc-lifenet [RST, ACK] Seq=3779959295 Ack=278 Win=0 Len=0 16 0.112019 172.25.9.238 10.0.2.15 TCP glrpc > mpc-lifenet [RST, ACK] Seq=3779959295 Ack=278 Win=0 Len=0
Just add the --auth-no-challenge parameter. Since version 1.11, wget waits for the server to reply with a 401 challenge before sending credentials; your server never challenges (it redirects to a login page instead), so the credentials were never transmitted. If this option is given, Wget will send Basic HTTP authentication information (plaintext username and password) for all requests, just like Wget 1.10.2 and prior did by default. For details read the bug description.
Difference between wget versions
1,477,106,372,000
Short Version If you press tab after the following command you get a filter menu. What is the name of it? ls *( Long Version I was just doing my Linux stuff and by accident I pressed tab after ( and a really cool menu popped up that I had never seen before. Suddenly I could select different filters. For example I could look for only directories by typing in (/), and there are so many more filters which are really useful. I would love to learn more about it, but I have no idea what to search the internet for. Any idea what this thing is called? Thanks for your help :)
If you add: zstyle ':completion:*' format 'Completing %d' To your ~/.zshrc in addition to the styles you already have, it will tell you what kind of completion it's offering: $ print -r -- *(<Tab> Completing glob flag # -- introduce glob flag Completing glob qualifier a -- + access time A -- group-readable c -- + inode change time + -- + command name d -- + device [...] Which points to the Globbing flags (only available when the extendedglob option is enabled) and Glob qualifiers sections of the documentation. Two different features which are both introduced by a (. $ print -r -- *(#<Tab> Completing glob flag a -- approximate matching c -- match repetitions of preceding pattern e -- match end of string i -- case insensitive I -- case sensitive matching l -- lower case characters match uppercase In: print -r -- img*(#i).jpg The (#i) globbing flag turns case insensitive matching for the remaining of the glob pattern, the completion helps you remember what the flags are. $ print -r -- *(a<Tab> Completing time specifier s -- seconds h -- hours w -- weeks m -- minutes d -- days M -- Months Completing sense [default exactly] - -- before + -- since Completing digit (days) August September Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su 1 2 3 4 5 6 7 1 2 3 4 8 9 10 11 12 13 14 5 6 7 8 9 10 11 15 16 17 18 19 20 21 12 13 14 15 16 17 18 22 23 24 25 26 27 28 19 20 21 22 23 24 25 29 30 31 26 27 28 29 30 [...] print -r -- *(ah-1) Expands to the files last accessed less than 1 hour ago. Note that if the bareglobqual option is disabled, glob qualifiers are only available using the (#q...) globbing flag syntax (*.jpg(#qm-1) for instance for the jpg files last modified within the last day). Glob qualifiers is one of the killer features of zsh, you'll find hundreds of answers here showing its usage.
Zsh - What is this Cool Thing called you get by pressing Tab after "("
1,477,106,372,000
I have a directory where there are multiple folders; each folder contains multiple .gz files with the same zipped file name "spark.log". How can I unzip all of them at once and rename them like the gz file? My data looks like this List of folders A B C D In every one of them there are files as A spark.log.gz spark.log.1.gz spark.log.2.gz spark.log.3.gz B spark.log.gz spark.log.1.gz spark.log.2.gz spark.log.3.gz C spark.log.gz spark.log.1.gz spark.log.2.gz spark.log.3.gz D spark.log.gz spark.log.1.gz spark.log.2.gz spark.log.3.gz Each of the gz files contains spark.log; I'd like to be able to unzip and rename them according to their gz name. For example: spark.log.1.gz -> spark.log.1.log
While gzip can store the original name, which you can reveal by running gzip -Nl file.gz: $ gzip spark.log $ mv spark.log.gz spark.log.1.gz $ gzip -l spark.log.1.gz compressed uncompressed ratio uncompressed_name 170 292 51.4% spark.log.1 $ gzip -lN spark.log.1.gz compressed uncompressed ratio uncompressed_name 170 292 51.4% spark.log gunzip will not use that for the name of the uncompressed file unless you pass the -N option, and will just use the name of the gzipped file with the .gz suffix removed. You may be confusing it with Info-ZIP's zip command and its related zip format, which is a compressed archive format, while gzip is just a compressor like compress, bzip2, xz... So you just need to call gunzip without -N on those files: gunzip -- */spark.log*.gz And you'll get spark.log, spark.log.1, spark.log.2... (not spark.log.1.log which wouldn't make sense, nor spark.1.log, which could be interpreted as a log file for a spark.1 service as opposed to the most recent rotation of spark.log). Having said that, there's hardly ever any reason to want to uncompress log files. Accessing the contents is generally quicker when they are compressed. Modifying the contents is potentially more expensive, but you generally don't modify log files after they've been archived / rotated. You can use zgrep, vim, zless (even less if configured to do so) to inspect their contents. zcat -f ./*.log*(nOn) | grep... if using zsh to send all the logs from older to newer to grep, etc.
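The whole round trip can be demonstrated with a throwaway file:

```shell
cd "$(mktemp -d)"
echo 'some log line' > spark.log
gzip spark.log                   # produces spark.log.gz, storing "spark.log" inside
mv spark.log.gz spark.log.1.gz   # simulate a logrotate-style rename

gunzip spark.log.1.gz            # without -N: output named by stripping .gz
ls                               # -> spark.log.1
```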
gunzip multiple gz files with same compressed file name in multiple folders
1,477,106,372,000
If I have a text file with a structured list like this:

    #linux
    ##audio
    ###sequenzer
    ####qtractor
    ###drummachine
    ####hydrogen
    ##scores
    ###lilypond
    ###musescore
    ##bureau
    ###kalender
    ####calcurse
    ###todo
    ####tudu

how can I print it tree-like to the command line?

    linux/
    ├── audio
    │   ├── drummachine
    │   │   └── hydrogen
    │   └── sequenzer
    │       └── qtractor
    ├── bureau
    │   ├── kalender
    │   │   └── calcurse
    │   └── todo
    │       └── tudu
    └── scores
        ├── lilypond
        └── musescore

Is there an application that I'm missing?
Use awk to convert the structure to "normal" paths:

    linux/
    linux/audio/
    linux/audio/sequenzer/
    linux/audio/sequenzer/qtractor/
    linux/audio/drummachine/
    linux/audio/drummachine/hydrogen/
    ...

Then you can use tree --fromfile . to read it.

convert_structure.awk:

    {
        level = match($0, /[^#]/) - 1   # depth = number of leading # signs
        sub(/^#*/, "")                  # strip the # signs
        p[level] = $0                   # remember the name seen at this depth
        path = ""
        for (l = 1; l <= level; l++)
            path = path p[l] "/"
        print path
    }

Run:

    awk -f convert_structure.awk structure.txt | tree --fromfile . --noreport

Output:

    .
    └── linux
        ├── audio
        │   ├── drummachine
        │   │   └── hydrogen
        │   └── sequenzer
        │       └── qtractor
        ├── bureau
        │   ├── kalender
        │   │   └── calcurse
        │   └── todo
        │       └── tudu
        └── scores
            ├── lilypond
            └── musescore

Note: this works fine with paths that include spaces, but obviously won't work with paths including newlines.
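As a companion sketch (not part of the answer above): the same generated paths can be piped to mkdir -p to materialise the hierarchy on disk instead of just displaying it. A small three-line sample stands in for structure.txt here so the pipeline can be run anywhere:

```shell
# A minimal sample of the question's input format.
printf '%s\n' '#linux' '##audio' '###sequenzer' > structure.txt

# Same logic as convert_structure.awk, inlined; output goes to mkdir -p.
awk '{
    level = match($0, /[^#]/) - 1   # depth = number of leading # signs
    sub(/^#*/, "")                  # strip the # signs
    p[level] = $0
    path = ""
    for (l = 1; l <= level; l++) path = path p[l] "/"
    print path
}' structure.txt | xargs mkdir -p

find linux -type d | sort
# linux
# linux/audio
# linux/audio/sequenzer
```

This sidesteps tree entirely (useful if tree is too old to have --fromfile), at the cost of creating real directories.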
Print structured list to command-line (tree like)
1,477,106,372,000
I'm using Ubuntu 20.04 (LTS). I can't scan networks with iwlist.

    $ sudo airmon-ng

    PHY     Interface   Driver      Chipset
    phy0    wlp1s0      ath10k_pci  Qualcomm Atheros QCA9377 802.11ac Wireless Network Adapter (rev 31)

    $ sudo ifconfig wlp1s0 up
    $ sudo airmon-ng start wlp1s0

    Found 4 processes that could cause trouble.
    Kill them using 'airmon-ng check kill' before putting
    the card in monitor mode, they will interfere by changing channels
    and sometimes putting the interface back in managed mode

        PID Name
        601 avahi-daemon
        606 NetworkManager
        647 wpa_supplicant
        651 avahi-daemon

    PHY     Interface   Driver      Chipset
    phy0    wlp1s0      ath10k_pci  Qualcomm Atheros QCA9377 802.11ac Wireless Network Adapter (rev 31)
            (mac80211 monitor mode vif enabled for [phy0]wlp1s0 on [phy0]wlp1s0mon)
            (mac80211 station mode vif disabled for [phy0]wlp1s0)

    $ sudo iwlist wlp1s0mon scanning
    wlp1s0mon   Interface doesn't support scanning : Operation not supported

This is different from this question. I already tried all the suggested answers.

    $ sudo iw dev wlp1s0mon scan ap-force
    command failed: Operation not supported (-95)

    $ rfkill list all
    0: ideapad_wlan: Wireless LAN
        Soft blocked: no
        Hard blocked: no
    1: ideapad_bluetooth: Bluetooth
        Soft blocked: no
        Hard blocked: no
    2: hci0: Bluetooth
        Soft blocked: no
        Hard blocked: no
    3: phy0: Wireless LAN
        Soft blocked: no
        Hard blocked: no
This WLAN adapter (Qualcomm Atheros QCA9377) does not properly support monitor mode (not to be confused with promiscuous mode, which is a different thing). You have two choices:

- You can easily get yourself one of the Alfa cards for less than USD $70.
- You can try downgrading your Atheros firmware, but this is not guaranteed to work.

Reference: https://www.linuxquestions.org/questions/linux-newbie-8/wifi-networks-not-scanning-in-monitor-mode-qualcomm-atheros-qca9377-802-11ac-wireless-network-adapter-4175666173/
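Before buying new hardware it can be worth checking what modes the driver advertises with `iw list` — though, as this adapter shows, an advertised mode is no guarantee that it actually works. This is a sketch: on a real machine you would pipe `iw list` directly, but a saved sample of its output stands in here so the parsing can be shown anywhere:

```shell
# Sample of `iw list` output (hypothetical phy; real output is much longer).
cat > iw-list-sample.txt <<'EOF'
Wiphy phy0
	Supported interface modes:
		 * managed
		 * AP
		 * monitor
		 * P2P-client
EOF

# On real hardware:  iw list | grep -q '\* monitor'
if grep -q '\* monitor' iw-list-sample.txt; then
    echo "monitor mode advertised by the driver"
else
    echo "monitor mode NOT advertised"
fi
# prints: monitor mode advertised by the driver
```

If "monitor" is missing from the list, no amount of airmon-ng coaxing will help; if it is present but scanning still fails (as here), the problem is usually firmware.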
Ubuntu 20.04 iwlist scanning : [interface] Interface doesn't support scanning : Operation not supported
1,477,106,372,000
I have zsh completion for my custom script. It takes 3 optional arguments --insert, --edit, --rm, and it completes files from a given path:

    #compdef pass
    _pass() {
        local -a args
        args+=(
            '--insert[Create a new password entry]'
            '--edit[Edit a password entry]'
            '--rm[Delete a password entry]'
        )
        _arguments $args '1: :->directory'
        case $state in
            directory)
                _path_files -W $HOME/passwords -g '*(/)' -S /
                _path_files -W $HOME/passwords -g '*.gpg(:r)' -S ' '
                ;;
        esac
    }

I need to add another option -P that will also be offered for completion (when I type - and TAB), but does not offer path completion. This option should only take a string. So it should not match a path, and it should also not offer the other options if -P has been specified. How can I add this new option to my completion script?

UPDATE: The completion does not work for option -P, i.e. when I do:

    pass -P <TAB>

it completes nothing, because option -P needs a string. This is good. But when I do

    pass -P foo <TAB>

it also completes nothing, whereas it should complete directories in the current path. How can I do that?
Presuming all of the options you mentioned are mutually exclusive with each other, the solution is as follows:

    #compdef pass
    _pass() {
        local -a args=(
            # (-) makes an option mutually exclusive with all other options.
            '(-)--insert[Create a new password entry]'
            '(-)--edit[Edit a password entry]'
            '(-)--rm[Delete a password entry]'
            '(-)-P:string:'
            '1:password entry:->directory'
        )
        _arguments $args
        case $state in
            directory)
                _path_files -W $HOME/passwords -g '*(/)' -S /
                _path_files -W $HOME/passwords -g '*.gpg(:r)' -S ' '
                ;;
        esac
    }

Documentation here: http://zsh.sourceforge.net/Doc/Release/Completion-System.html#index-_005farguments
zsh completion for custom script
1,588,335,338,000
I have been using mutt for a while and got it to work pretty well with two Gmail accounts. I have two macros set up to switch from one to the other. Wanting to move to neomutt, the exact same configuration is not working any more. I can get to each account individually, but I can't switch from one to the other.

neomuttrc:

    source ~/.config/neomutt/accounts/account1
    folder-hook 'account1' 'source ~/.config/neomutt/accounts/account1'
    source ~/.config/neomutt/accounts/account2
    folder-hook 'account2' 'source ~/.config/neomutt/accounts/account2'

    macro index,pager <f2> '<sync-mailbox><enter-command>source ~/.config/neomutt/accounts/account1<enter><change-folder>!<enter>'
    macro index,pager <f3> '<sync-mailbox><enter-command>source ~/.config/neomutt/accounts/account2<enter><change-folder>!<enter>'

account1:

    set from = "NAME"
    set folder = "imaps://imap.gmail.com"

    # IMAP
    set imap_user = "[email protected]"
    set imap_authenticators = "oauthbearer"
    set imap_oauth_refresh_command = " ... "
    set imap_check_subscribed
    set spoolfile = "+INBOX"
    set postponed = "+[Gmail]/Draft"
    set header_cache = ~/.mutt/cache/headers
    set message_cachedir = "~/.mutt/cache/bodies"

    # Mailbox definition
    mailboxes +GMail/INBOX +GMail/MailingList

    # SMTP
    set smtp_authenticators = "oauthbearer"
    set smtp_oauth_refresh_command = "...."
    set smtp_url = "smtp://[email protected]@smtp.gmail.com:587/"
    set realname = "NAME"

I also tried setting only the macros, which allows me to choose the account to load. But once I have loaded one, I cannot load the other. Thanks for your help.
The trick was to set

    set folder = "imaps://[email protected]"

Having the same folder for both mailboxes was confusing neomutt.
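To make the fix concrete, a minimal sketch of how the two account files might differ afterwards — the addresses and cache paths below are placeholders, not taken from the question; the point is only that folder (and ideally the caches) must be distinct per account:

    # account1 (placeholder address)
    set folder           = "imaps://[email protected]"
    set spoolfile        = "+INBOX"
    set header_cache     = ~/.mutt/cache/user1/headers
    set message_cachedir = ~/.mutt/cache/user1/bodies

    # account2 (placeholder address)
    set folder           = "imaps://[email protected]"
    set spoolfile        = "+INBOX"
    set header_cache     = ~/.mutt/cache/user2/headers
    set message_cachedir = ~/.mutt/cache/user2/bodies

With distinct folder URLs, the folder-hook patterns and the <change-folder>! macros can tell the two accounts apart instead of both resolving to the same imap.gmail.com folder.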
Neomutt Multi account