1,633,012,780,000
Today I noticed I was getting an error from a tool which verifies its file descriptors on startup. The fact is that I get an extra pts connection:

```shell
# In one console I start `cat`
linux $ cat >/tmp/test

# In another console I search for `cat`'s process ID
linux $ ps -ef | grep cat
alexis   34462 25012  0 11:58 pts/17   00:00:00 cat

# Now check the file descriptors:
linux $ ls -l /proc/34462/fd
total 0
lrwx------ 1 alexis alexis 64 Sep 23 11:59 0 -> /dev/pts/17
l-wx------ 1 alexis alexis 64 Sep 23 11:59 1 -> /tmp/test
lrwx------ 1 alexis alexis 64 Sep 23 11:59 2 -> /dev/pts/17
lrwx------ 1 alexis alexis 64 Sep 23 11:59 6 -> /dev/pts/17
```

As we can see, stdout (fd 1) was redirected to the destination file /tmp/test, and as expected, 0 and 2 point to a pts. What is 6, though?

I am thinking that maybe it comes from my rails environment. The rvm script does some "magic" to my console: when I cd into a directory with a file named Gemfile, it detects it. That being said, I thought that was just a cd alias... What else could add such a file descriptor to my command lines? What could I do to test where this comes from and what capability it offers?

Update: I can confirm that if I open a new console after I comment out the RVM initialization (`. ~/.rvm/scripts/rvm`) then I don't get that extra pseudo-terminal file descriptor. I'm still wondering: how do they do that?
RVM opens a new file descriptor to whatever standard error is connected to when it starts. Thus, in an RVM environment, file descriptor 6 is the RVM log output. This way RVM can log to the same place by writing to file descriptor 6, regardless of whether standard error has since been redirected.

The file descriptor is opened at the end of scripts/functions/logging:

```shell
exec 6>&2
```

The exec builtin without a command argument, but with a redirection, performs the redirection inside the shell process itself. Thus `exec 6>&2` opens file descriptor 6 to whatever file descriptor 2 is currently connected to in the shell. Programs that are started from the shell inherit this file descriptor.

When RVM wants to log something, it (usually) writes to file descriptor 6. This happens in the rvm_error function, for example. So when the following code is executed in an RVM environment started from a terminal, it writes "Stuff happened" to myfile.log, but writes "Hello" to the terminal:

```shell
f () {
  …
  echo >&2 "Stuff happened"
  rvm_error "Hello"
}
f 2>myfile.log
```
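A minimal sketch of the same mechanism (this is not RVM's actual code; rvm_error_demo is a made-up stand-in for rvm_error):

```shell
# Duplicate whatever stderr points to *right now* onto fd 6.
exec 6>&2

rvm_error_demo() {            # hypothetical stand-in for rvm_error
  echo "Hello" >&6            # goes wherever fd 2 pointed when fd 6 was opened
}

f() {
  echo "Stuff happened" >&2   # goes wherever fd 2 points *now*
  rvm_error_demo
}

f 2>/tmp/myfile.log
# "Stuff happened" lands in /tmp/myfile.log; "Hello" still reaches the terminal.
```

Because child processes inherit fd 6, the same trick also explains why unrelated programs started from such a shell show the extra descriptor in /proc/PID/fd.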
When I start a process on my computer, I see a file descriptor number 6 open, what is that descriptor for/about?
1,633,012,780,000
I am using zsh 5.4.2. The function that is causing the issue is:

```shell
function zp () {
    zparseopts -E -watch:=o_watch -show=o_show
    echo "show : $o_show"
    echo "watch : $o_watch"
}
```

Output:

```shell
$ zp --show --watch "Watching"
show : --show
watch : --watch Watching

$ zp --watch --show
show :
watch : --watch --show
```

You can see that if I do not pass a value to --watch (whose argument is mandatory), then it takes the next option, in this case --show, as the argument. It should actually show an error like:

```
zp:zparseopts:1: missing argument for option: -watch
```

Why is --watch taking --show as an argument instead of throwing an error?
For comparison, I'm pretty sure that's how the GNU C function getopt_long also works, e.g. with GNU ls:

```shell
$ ls --sort --foo
ls: invalid argument ‘--foo’ for ‘--sort’
Valid arguments are:
...
```

If you made the argument to --watch optional, zparseopts would parse --watch --show as two separate options. From the documentation:

> In all cases, option-arguments must appear either immediately following the option in the same positional parameter or in the next one. Even an optional argument may appear in the next parameter, unless it begins with a ‘-’.

But it seems that the user just needs to know which options take arguments, which also happens with short options; e.g. tar -tzf is quite different from tar -tfz.

Using (only) --sort=whatever would, in my opinion, make it clearer, but zparseopts doesn't even really support = directly (--sort=whatever would give =whatever as the argument value). And that doesn't really work for short options.
If no argument is given to mandatory option, zparseopts takes next option as the argument
1,633,012,780,000
I would like to search for text with accents in files. I know that I can use grep for searching regular text:

```shell
grep -rnw './' -e 'KORONA'
```

...but it doesn't work for words with accented characters, like KORONAVÍRUS or obmedzená. Any recommendation?
If the encoding of all the files is the same, you just need to write the searched string in that encoding. That gives two possible situations.

If the encoding on the command line (probably set by one of the locale variables LC_*) is the same as the encoding of all the files, just grep as usual:

```shell
grep -rn 'KORONAVÍRUS, obmedzená.'
```

Use the -w option only if you want to match whole words.

If the encoding of the files is different from the command line's, convert the search string to that encoding:

```shell
$ echo 'KORONAVÍRUS, obmedzená.' >orig
$ grep -ran "$(iconv -t CP1252 orig)"
```

Here, the -a option allows grep to search inside files whose different encoding may cause them to be detected as binary.

If the files contain a mix of different encodings, then there is no good solution: there is no reliable way to auto-detect a file's encoding, so it is not possible to search across a list of files that don't have a uniform encoding.

Related: How to use grep/ack with files in arbitrary encoding?
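To see why the conversion matters, here is a small self-contained experiment (the file name and sample text are made up; it assumes iconv and GNU grep are available, and LC_ALL=C is used so matching is byte-wise and reproducible):

```shell
cd "$(mktemp -d)"

# Create a file encoded in ISO-8859-1 (Latin-1) while typing the text as UTF-8:
printf 'KORONAVÍRUS v meste\n' | iconv -f UTF-8 -t ISO-8859-1 > latin1.txt

# The UTF-8 pattern (two bytes for Í) does not match the Latin-1 byte:
LC_ALL=C grep -c 'KORONAVÍRUS' latin1.txt || true    # prints 0

# Converting the pattern to the file's encoding makes it match:
LC_ALL=C grep -ac "$(printf 'KORONAVÍRUS' | iconv -f UTF-8 -t ISO-8859-1)" latin1.txt   # prints 1
```

The same principle applies in reverse (Latin-1 terminal, UTF-8 files): it is always the pattern's bytes that must match the file's bytes.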
Search text in files with accent characters
1,633,012,780,000
I have a tiny script which generates aliases for the execution of flatpak packages (to make flatpaks somewhat usable from the command line). When I run this command by hand, everything works fine. But instead of always executing this command by hand after each flatpak install/update/remove, I want my script to be executed automatically every time after the flatpak command was invoked.

So, to make it a bit more clear: effectively I need commands `flatpak *` to be rewritten to `flatpak * && ~/my_script.sh`.

Bonus: how could I restrict this function to only call the script if `flatpak install` or `flatpak remove` was called, but not when, for example, `flatpak list` was?

Does anyone have an idea how to achieve this?
You can create an alias and add it to your .bashrc:

```shell
alias flatpak='flatpak_(){ flatpak "$@" && ~/my_script.sh; }; flatpak_'
```

To only execute my_script if the first argument was install or remove:

```shell
alias flatpak='flatpak_(){ flatpak "$@" && [[ "$1" = install || "$1" = remove ]] && ~/my_script.sh; }; flatpak_'
```
How to execute a command/script after a specific command was executed?
1,633,012,780,000
We have a dedicated server with OVH. We can access the server via SSH and WHM cPanel. We want to increase the size of the /dev/sda2 partition. It is only 20 GB and runs out of space frequently (every other day). We want to increase the size to at least 50 GB. How can we do it? Can you please provide technical guidelines to increase the space on the /dev/sda2 partition without disrupting the files? We have several websites hosted on that server, so we don't want to disrupt the live websites, but we want to increase the space on that disk.

After logging in to the server as root via SSH, we used `df -h` to see how much space is left. As you can see from the screenshot, /dev/sda2 has only 20 GB of space, of which 17 GB is full.

We are not knowledgeable enough in server administration and Linux, so we cannot get much information from similar questions.

Additional information: /var/log is filling up all the space. We clear the folder as a temporary fix.
In Linux nearly everything is possible. The main question is what the reason is, and what the goal is.

Your sda disk has two main partitions: sda2 as / (the root of the filesystem) and sda3 as /home/. There is probably a swap partition (maybe sda1), too. In such a case, you cannot simply do any repartitioning without stopping your system for at least hours. But there is a simpler way.

Try to investigate which directory grows so quickly. You can do that with the du command:

```shell
du /* -sh
```

It will take some time, but I can bet the fastest-growing directory is /var/, mainly due to its /var/log/ subdirectory. Hence you can save time by asking the system:

```shell
du /var/* -sh
```

If that is so, you can create a new subdirectory var/log inside the /home/ directory, because that is on the biggest partition:

```shell
mkdir -p /home/var/log
```

Then copy all the content of the /var/log directory into the new one:

```shell
cp --recursive /var/log/* /home/var/log/
```

Then comes the most delicate moment: you must disable the current /var/log/ directory, e.g. rename it to /var/log_old/, and immediately create the symlink /var/log -> /home/var/log. (It could be useful to look at some man pages to learn what a symlink really is.) You can do that with:

```shell
mv /var/log /var/log_old && ln -s /home/var/log /var/log
```

If everything went well, your system will continue to write all the logs into the new place.

The second question is why you accumulate so many logs. The bad reason could be that your system is reporting a lot of errors. In that case you definitely must analyse the error logs, find the problem and fix it. The not-too-bad reason could be that your system is set to log too verbosely; you can try to make it less verbose, or you can reconfigure your logrotate subsystem to keep less history.

Please be careful with the described method, and choose a time of minimum activity.
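As for keeping less history: a hedged logrotate sketch (the path, pattern and values below are assumptions, not OVH's actual configuration; a real snippet would live under /etc/logrotate.d/):

```shell
# Write an example logrotate policy to a scratch location: rotate weekly,
# keep only 2 old compressed copies, and don't complain about missing files.
cat > /tmp/example-logrotate.conf <<'EOF'
/var/log/*.log {
    weekly
    rotate 2
    compress
    missingok
}
EOF
```

With a file like this installed in /etc/logrotate.d/, logrotate would cap the history at two compressed generations per matching log file.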
How to increase /dev/sda2 partition - OVH Server
1,633,012,780,000
For some reason my Debian is broken and boots into a terminal-based UI instead of the desktop. Where can I find my USB device's storage directory in this terminal?

Edit: It's not under /media.
As mentioned in the comments, it would normally be under the /media directory, but it isn't, because it's not mounted. It's possible to mount the USB drive in the following way (assuming the drive is /dev/sdb1):

```shell
sudo mkdir /media/usb-drive
sudo mount /dev/sdb1 /media/usb-drive/
```
Access USB device storage via terminal
1,633,012,780,000
I am very new to bash; I don't even know if what I want is possible. Sorry in advance if I am unknowingly asking for too much.

I execute many different bash scripts for testing and learning, very frequently after doing small edits. It's really annoying to type the whole command over and over again every minute; it wastes a lot of my time. For example, currently to execute a script I always enter:

```shell
su -c sh /sdcard/downloads/script1.sh
```

Is there any way I can assign this whole command to be initiated by just entering a single word? I would like to just type, e.g.:

```shell
script1
```

This should actually execute `su -c sh /sdcard/downloads/script1.sh` (or any defined command for that instance). In short, there would be many pairs of a single word and its corresponding command, to save my time. And this should be permanent: even if I close the terminal, reopen it and enter my assigned word, it should run the corresponding defined command. I hope there's a way to do it.
Create an alias in your ~/.bashrc file:

```shell
alias script1='su -c sh /sdcard/downloads/script1.sh'
```

The alias will then be available in the next shell session that you start.
How to assign a word to execute a particular command [duplicate]
1,633,012,780,000
I am sorry if my question is more of a typographical error, but I have been trying to sort this out for a while now and sadly, I cannot get this to work. Perhaps I should use the sed command, but I haven't figured out how to specify a column in sed, and despite being a beginner, I have a bit more experience with the awk command.

So here is the goal: I have a CSV file, file1, that has a column (14) where some of the rows have null (blank) values, while other rows have values. I still want all the other columns in the output, but want to change the blank (empty/null) cells in column 14 to have a new value of NA.

Example:

```
Column14
Value1
Value2

Value3
```

I am trying to use the awk command to locate any blank row in column 14 and, if found, enter a new text value of NA in the cell. Here is the command I was trying, but my new file still has blank cells in column 14. I would appreciate any help. Thank you.

Command:

```shell
awk -F"," 'BEGIN {OFS=","} $14 == "" { $14 = "NA" } {print}' file1 > file2
```

GOAL example:

```
Column14
Value1
Value2
NA
Value3
```

Thank you all for taking the time to read and assist.

UPDATE: As requested, here is some sample data.
```
"employee_number","employee_login","is_active","send_pkg_email","send_na_email","last_name","first_name","department","title","phone_number","employee_type","email","charge_code","area_code","mailstop","roomid"
"103293","[email protected]","Y","","","Smith","Jessica","","","+1 (650) 3530975","Employee","[email protected]","","LOC0028.03","","03.C.01H"
"103295","[email protected]","Y","","","Long","Fred","","","+1 (415) 9449428","Employee","[email protected]","","LOC0025.01","","01.D.04B"
"103297","[email protected]","Y","","","Cheng","Laura","","","+1 (650) 8623342","Contingent","[email protected]","","","",""
"103307","[email protected]","Y","","","Brown","Chris","","","+1 (512) 9644927","Employee","[email protected]","","ATX0607.16","","16.B.10D"
"103310","[email protected]","Y","","","Williams","Stan","","","+1 (650) 8048591","Employee","[email protected]","","LOC0061.03","","03.D.01B"
```
Using a real CSV parser:

```shell
$ perl -MText::CSV=csv -e '
    $csv = Text::CSV->new();
    while (my $row = $csv->getline(ARGV)) {
      $row->[13] = "NA" if ($row->[13] eq "");
      $csv->say(STDOUT, $row);
    };' input.csv
```

Note that perl arrays begin from 0, not 1, so the 14th field is element 13 of the $row arrayref.

Output:

```
employee_number,employee_login,is_active,send_pkg_email,send_na_email,last_name,first_name,department,title,phone_number,employee_type,email,charge_code,area_code,mailstop,roomid
103293,[email protected],Y,,,Smith,Jessica,,,"+1 (650) 3530975",Employee,[email protected],,LOC0028.03,,03.C.01H
103295,[email protected],Y,,,Long,Fred,,,"+1 (415) 9449428",Employee,[email protected],,LOC0025.01,,01.D.04B
103297,[email protected],Y,,,Cheng,Laura,,,"+1 (650) 8623342",Contingent,[email protected],,NA,,
103307,[email protected],Y,,,Brown,Chris,,,"+1 (512) 9644927",Employee,[email protected],,ATX0607.16,,16.B.10D
103310,[email protected],Y,,,Williams,Stan,,,"+1 (650) 8048591",Employee,[email protected],,LOC0061.03,,03.D.01B
```

The line with employee_number 103297 now has NA in the 14th field.

BTW, the output fields here are double-quoted only when necessary (e.g. when they contain a space; if any of them contained a comma, they'd be quoted too). If you prefer all fields in the output to be quoted as in your input file, change the `$csv = Text::CSV->new();` line to:

```perl
$csv = Text::CSV->new({always_quote => 1});
```

Text::CSV has numerous other options. E.g. if you use

```perl
$csv = Text::CSV->new({always_quote => 1, strict => 1});
```

it will also trigger an error if any of the input rows have a different number of fields. See man Text::CSV for details.

Alternatively, there's a simple fix to your awk script:

```shell
awk -F"," 'BEGIN {OFS=","}; $14 == "\"\"" { $14 = "\"NA\"" };1' input.csv
```

This highlights a problem with just comma-splitting CSV files: it's impossible to distinguish between " characters used as wrappers around field data and " characters that are part of the field data, because there is no such distinction with this simple split method. Field 14 isn't empty when you're just splitting the input line by commas; it contains two quote characters (""). This awk one-liner will also break if any of the fields contain a comma character. That's another reason why it's better to use a CSV parser. See Is there a robust command line tool for processing csv files? There's also a good awk CSV parser at https://github.com/geoffroy-aubry/awk-csv-parser
AWK Command - Edit Blank "Cell" in CSV to Text Value
1,633,012,780,000
I want to print the output of docker stats to a file. For example, I am running:

```shell
docker stats --format "{{ .Name }},{{ .MemUsage }},{{ .MemPerc }},{{ .CPUPerc }}" > /home/test.txt
```

However, since the normal output of docker stats is a single screen that is updated in place, the file contains the clear-screen characters (`^[[3J^[[H^[[2J`). How can I print the output without those characters? I also attach a picture to make clear what I get as output.
You can pipe it through ansifilter:

```shell
docker stats --format "{{ .Name }},{{ .MemUsage }},{{ .MemPerc }},{{ .CPUPerc }}" | ansifilter > /home/test.txt
```

Notice that /home/test.txt will contain multiple lines. It will look something like this:

```
alpine,656KiB / 7.476GiB,0.01%,0.00%
alpine,656KiB / 7.476GiB,0.01%,0.00%
alpine,656KiB / 7.476GiB,0.01%,0.00%
alpine,656KiB / 7.476GiB,0.01%,0.00%
alpine,656KiB / 7.476GiB,0.01%,0.00%
alpine,656KiB / 7.476GiB,0.01%,0.00%
alpine,656KiB / 7.476GiB,0.01%,0.00%
alpine,656KiB / 7.476GiB,0.01%,0.00%
alpine,656KiB / 7.476GiB,0.01%,0.00%
alpine,656KiB / 7.476GiB,0.01%,0.00%
alpine,528KiB / 7.476GiB,0.01%,0.00%
alpine,528KiB / 7.476GiB,0.01%,0.00%
alpine,528KiB / 7.476GiB,0.01%,0.03%
alpine,528KiB / 7.476GiB,0.01%,0.03%
alpine,528KiB / 7.476GiB,0.01%,0.02%
```
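If ansifilter isn't available, a rough fallback is to strip CSI escape sequences (such as ^[[3J, ^[[H and ^[[2J) with sed. This is a sketch that assumes GNU sed (for the \x1b escape) and only handles the common CSI form:

```shell
# Remove ESC [ ... letter sequences from a stream.
strip_ansi() { sed 's/\x1b\[[0-9;]*[A-Za-z]//g'; }

# Simulated `docker stats` output with a clear-screen prefix:
printf '\033[3J\033[H\033[2Jalpine,656KiB / 7.476GiB,0.01%%,0.00%%\n' | strip_ansi
# prints: alpine,656KiB / 7.476GiB,0.01%,0.00%
```

Usage would then be `docker stats --format "…" | strip_ansi > /home/test.txt`, same as with ansifilter.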
Print to file without clear character
1,633,012,780,000
Actually I intend to create a shell script which, when executed, should delete particular files/folders continuously, say every 5 seconds, which are continuously being generated by an application. [The data below is a dummy, so that it becomes easy to understand and will probably help others too in general.]

- Target app package name: com.example.mypackage
- Target app launch activity name: com.activity.launcher
- Target app's files to be deleted: /sdcard/Android/data/app/log1, /sdcard/Android/data/app/log2
- Deletion interval: every 5 seconds
- Start trigger point: only after I execute the shell script
- End point: the deletion loop should end automatically after the target app is no longer active (hence the activity name mentioned above)
Here is a short bash script I came up with. Note you will need to replace `logger\.sh` in the grep statements with the name of the process that you want to monitor. In this case it is checking for a process named "logger.sh" and grabbing the PID.

```shell
#!/bin/bash

pid=$(ps aux | grep "logger\.sh" | head -n 1 | awk '{print $2}')

if [ "$pid" != "" ]
then
    pidFound=1
    while [ "$pidFound" = 1 ]
    do
        rm /sdcard/Android/data/app/log1
        rm /sdcard/Android/data/app/log2
        sleep 5

        pid=$(ps aux | grep "logger\.sh" | head -n 1 | awk '{print $2}')
        if [ "$pid" != "" ]; then
            pidFound=1
        else
            pidFound=0
            break
        fi
    done
fi
```
How to create a shell script to delete a particular file in loop [closed]
1,633,012,780,000
```
         | folderA1 | fileA11, fileA12, ...
folderA  | folderA2 | fileA21, fileA22, ...
         | ...      | ...
```

I want to make a copy of it represented as:

```
         | folderA1 | folderA11, folderA12, ...
folderB  | folderA2 | folderA21, folderA22, ...
         | ...      | ...
```

The original folderA (and its structure) remains as it is (unchanged). I'm trying to create a folder in (folder) B for each file in (folder) A, without the folder itself. I would also like to maintain the directory structure of the original folder (A).

Using this question I'm able to achieve the generation of the above, but it contains folder A itself:

```shell
find source/. -type d -exec mkdir -p dest/{} \; \
    -o -type f -exec mkdir -p dest/{} \;
```

The result looks like:

```
         |         | folderA1 | folderA11, folderA12, ...
folderB  | folderA | folderA2 | folderA21, folderA22, ...
         |         | ...      | ...
```
You could cd into folderA and run the command from there:

```shell
cd folderA
find . -type d -o -type f -exec bash -c '
    for path; do
        mkdir -p "/path/to/folderB/${path/file/folder}"
    done
' bash {} +
```

The parameter expansion ${path/file/folder} renames each fileXY to folderXY. If every folder contains files, you can remove the `-type d -o`.
Create a folder for each file in a folder without the folder itself
1,633,012,780,000
I know I could use something like ncurses, but I don't want to include that dependency in my project. I'm looking for a way to clear all the output my program has generated up to a certain point, so I can show more information without flooding the screen.

This is for a program written in Rust. There are libraries for handling the terminal, like Termion. I've also read something about using ANSI escape sequences as an option.
If you have a terminal compatible with xterm, you could try switching to an alternat(iv)e screen with Termion:

```rust
use termion::screen::AlternateScreen;
use std::io::{Write, stdout};

fn main() {
    {
        let mut screen = AlternateScreen::from(stdout());
        write!(screen, "Writing to alternat(iv)e screen!").unwrap();
        screen.flush().unwrap();
    }
    println!("Writing to main screen again.");
}
```

From the Rust docs: termion::screen

When you switch back, everything you wrote is gone. Switching is done by issuing escape sequences, as mentioned here: StackOverflow: switching to alternate screen in a bash script. Of course this means that you do not strictly need that library.
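Since Termion ultimately just emits xterm escape sequences, the switch can be sketched without any library (assumption: an xterm-compatible terminal; `?1049` is the xterm private mode for the alternate screen):

```shell
printf '\033[?1049h'                      # switch to the alternate screen
printf 'Writing to alternate screen\n'
sleep 1
printf '\033[?1049l'                      # switch back; the line above disappears
printf 'Writing to main screen again\n'
```

Emitting the same byte sequences from Rust with print! would have the same effect, which is why the library is optional.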
How to clear all (and only) my program's output from the terminal
1,633,012,780,000
I am trying to grep a list of strings listed in 7253.txt, which looks like this:

```
rs11078372
rs1124961
rs11651880
rs11659047
rs1736209
```

from multiple *.logistic files, using:

```shell
grep -o -f 7253.txt *.logistic > result.txt
```

The files are rather large and this grep command takes forever. The .logistic files look like this:

```
#CHROM POS ID REF ALT A1 TEST OBS_CT OR LOG(OR)_SE Z_STAT P
17 16933404 rs11867934 T C T ADD 32232 0.974082 0.0279353 -0.940008 0.347213
```

so the strings from 7253.txt are matched against the ID column in .logistic, and they should be exact matches.

Do you have a more efficient way to parse those *.logistic files? There are 22 of these files, named like FINchr1.pheno.glm.logistic, FINchr2.pheno.glm.logistic, ...

It would be great if I could have, in result.txt, the extracted ID and P columns from .logistic (3rd and 12th columns). To extract only the ID from .logistic I could do this:

```shell
awk 'FNR!=1 {print $3}' *.logistic | grep -o -w -F -f 7253.txt > result.txt
```

But how do I extract both the ID and P columns, which are the 3rd and 12th columns in .logistic?

Thanks, Ana
Perhaps you want:

```shell
awk '
    NR == FNR {ids[$1]=1; next}
    $3 in ids {print $3, $12}
' 7253.txt *.logistic > result.txt
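A quick sanity check of the awk approach, using the single data line from the question (the file names here are made up):

```shell
cd "$(mktemp -d)"

printf 'rs11867934\n' > ids.txt

cat > FINchr17.pheno.glm.logistic <<'EOF'
#CHROM POS ID REF ALT A1 TEST OBS_CT OR LOG(OR)_SE Z_STAT P
17 16933404 rs11867934 T C T ADD 32232 0.974082 0.0279353 -0.940008 0.347213
EOF

# First file: remember the IDs (NR == FNR only while reading file 1).
# Remaining files: print ID and P when the ID was seen.
awk 'NR == FNR {ids[$1]=1; next} $3 in ids {print $3, $12}' ids.txt FINchr17.pheno.glm.logistic
# prints: rs11867934 0.347213
```

This is fast because the IDs are read once into a hash table, so each data line costs a single lookup instead of a scan over the whole pattern list.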
How to parse strings from a file in multiple other files?
1,633,012,780,000
I have a list of strings (sorted IP address ranges) like this:

```
10.100.0.0-10.100.255.255 External: 2.2.2.2
10.120.0.0-10.255.255.255 External: 2.2.2.2
10.0.0.0-10.255.255.255 External: 3.3.3.3
192.168.160.1-192.168.160.255 External: 3.3.3.3
```

and so on. How can I group these by the last string of each line so that the result looks similar to this:

```
External: 2.2.2.2
    10.100.0.0-10.100.255.255
    10.120.0.0-10.255.255.255

External: 3.3.3.3
    10.0.0.0-10.255.255.255
    192.168.160.1-192.168.160.255
```

Note: handling of IP addresses is not required; we can treat these as strings, as they are already sorted and ordered. I'm looking for a pure Bash solution (not Python) and preferably without xargs. Thanks in advance!
Remember the last external IP in a variable. If the new IP is different, print the new one. Then print the IP range.

```shell
#! /bin/bash

last_ip=""
while read range _e ip ; do
    if [[ $ip != "$last_ip" ]] ; then
        printf '\nExternal: %s\n' "$ip"
    fi
    printf '    %s\n' "$range"
    last_ip=$ip
done < "$1"
```
group list by last string of each line
1,633,012,780,000
I installed wkhtmltopdf into the /usr/bin/ directory. However, when I try to run the program by entering /usr/bin/wkhtmltopdf into the shell, I receive an error:

```
/bin/sh: /usr/bin/wkhtmltopdf: not found
```

It works, however, if I enter `sh /usr/bin/wkhtmltopdf`. Why is that, and how can I fix it?

The permissions are (ls -l):

```
-rwxr-xr-x 1 1000 1000 38.0M Nov 22 2016 wkhtmltopdf
```

```
$ type -a wkhtmltopdf
/usr/bin/wkhtmltopdf
```

Edit: The OS I use is Alpine Linux.

Output of ldd wkhtmltopdf:

```
/usr/bin # ldd wkhtmltopdf
    /lib64/ld-linux-x86-64.so.2 (0x7f88720cc000)
    /usr/lib/preloadable_libiconv.so => /usr/lib/preloadable_libiconv.so (0x7f8871fec000)
    libXrender.so.1 => /usr/lib/libXrender.so.1 (0x7f8871fe0000)
    libfontconfig.so.1 => /usr/lib/libfontconfig.so.1 (0x7f8871fa4000)
    libfreetype.so.6 => /usr/lib/libfreetype.so.6 (0x7f8871ef3000)
    libXext.so.6 => /usr/lib/libXext.so.6 (0x7f8871ee1000)
    libX11.so.6 => /usr/lib/libX11.so.6 (0x7f8871dbe000)
    libz.so.1 => /lib/libz.so.1 (0x7f8871da4000)
    libdl.so.2 => /lib64/ld-linux-x86-64.so.2 (0x7f88720cc000)
    librt.so.1 => /lib64/ld-linux-x86-64.so.2 (0x7f88720cc000)
    libpthread.so.0 => /lib64/ld-linux-x86-64.so.2 (0x7f88720cc000)
    libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0x7f8871c4f000)
    libm.so.6 => /lib64/ld-linux-x86-64.so.2 (0x7f88720cc000)
    libgcc_s.so.1 => /usr/lib/libgcc_s.so.1 (0x7f8871c3b000)
    libc.so.6 => /lib64/ld-linux-x86-64.so.2 (0x7f88720cc000)
    Error loading shared library ld-linux-x86-64.so.2: No such file or directory (needed by wkhtmltopdf)
    libexpat.so.1 => /usr/lib/libexpat.so.1 (0x7f8871c18000)
    libuuid.so.1 => /lib/libuuid.so.1 (0x7f8871c0f000)
    libbz2.so.1 => /usr/lib/libbz2.so.1 (0x7f8871c00000)
    libpng16.so.16 => /usr/lib/libpng16.so.16 (0x7f8871bd0000)
    libxcb.so.1 => /usr/lib/libxcb.so.1 (0x7f8871ba9000)
    libXau.so.6 => /usr/lib/libXau.so.6 (0x7f8871ba4000)
    libXdmcp.so.6 => /usr/lib/libXdmcp.so.6 (0x7f8871b9c000)
    libbsd.so.0 => /usr/lib/libbsd.so.0 (0x7f8871b88000)
```

I installed wkhtmltopdf by running the following commands in Docker:

```shell
wget https://github.com/wkhtmltopdf/wkhtmltopdf/releases/download/0.12.4/wkhtmltox-0.12.4_linux-generic-amd64.tar.xz && \
    tar xvf wkhtmltox-0.12.4_linux-generic-amd64.tar.xz && \
    mv wkhtmltox/bin/wkhtmltopdf /usr/bin/ && mv wkhtmltox/bin/wkhtmltoimage /usr/bin
```
As indicated by the ldd output, this wkhtmltopdf binary is built against glibc, the GNU C library:

```
libc.so.6 => /lib64/ld-linux-x86-64.so.2 (0x7f88720cc000)
Error loading shared library ld-linux-x86-64.so.2: No such file or directory (needed by wkhtmltopdf)
```

The libc library implements the C standard library functions as well as the POSIX API (libc.so.6), and the dynamic linker (ld-linux-x86-64.so.2). glibc is the most common libc implementation and is used by most Linux distributions.

On Alpine Linux, musl libc is used. musl is much more minimalistic in nature and aims for strong POSIX compliance. The library file for musl is libc.musl-x86_64.so.1, and the dynamic linker is ld-musl-x86_64.so.1. Programs linked against glibc will therefore fail to link against musl libc.

For running glibc programs on Alpine Linux, you're usually required to install glibc, as described here. In some cases, if only basic glibc compatibility is required, it is enough to install libc6-compat, which is the musl-glibc compatibility package. However, a better alternative is installing the corresponding Alpine package, if one is available (in my experience, in most cases it is). In your case, simply install the wkhtmltopdf package (Alpine 3.9+):

```shell
apk add wkhtmltopdf
```
/bin/sh: /usr/bin/wkhtmltopdf: not found
1,633,012,780,000
There's an auto-generated file in a remote location that is constantly changing. I can only view the remote file via:

```shell
ssh user@ip cat luckyNumbers
```

It tells me today's lucky numbers and also passes along a secret encrypted message:

```
Today's lucky numbers are
1 2 3
asdsa@!#SAxAaas 21gv3sad ASD@!#
```

My goal is to redirect the lucky numbers (1 2 3) into luckynumbers.txt and then pipe the remainder of the file into my program, decoder. I would like to do this without saving the entire file or making a second request for it. I'm not sure if it's even possible to split a data stream like this.
Here are a few ways to write line 2 of stdin to a specific file while sending all other lines to stdout.

Using sed:

```shell
ssh remotehost cat luckynumbers | sed -e '2 {
        w luckynumbers.txt
        d
    }' | decoder
```

Using awk:

```shell
ssh remotehost cat luckynumbers | awk 'NR == 2 { print > "luckynumbers.txt" }
                                       NR != 2 { print }' | decoder
```

Note that, if the last line of the input does not end in a newline, awk will probably add one.
Partial Redirection with additional Piping
1,552,746,248,000
I am trying to figure out a solution for Unix-based systems (macOS, Linux) to zip an unknown number of packages, ideally without requiring users to install any additional third-party software. I have a folder structure like this:

```
MyProject
  /packages
    /custom-modules
      /Module1
        /ios
          /src
      /Module2
        /ios
          /src
      ...
```

The number of custom modules and their names can vary. I now need a solution that allows me to zip all module src folders and name the archives accordingly, i.e. the final output would look like this:

```
MyProject
  /packages
    /custom-modules
      /Module1
        /ios
          /src
          /Module1.zip
      /Module2
        /ios
          /src
          /Module2.zip
      ...
```

Ideally, every time such a command/script is run, it would delete the old existing zip files and generate new ones.
Starting from the MyProject/packages/custom-modules directory, this one-liner can do the job:

```shell
for module in * ; do cd "$module/ios/" && zip -qr "$module.zip" src/ && cd - &> /dev/null ; done
```

The idea here is to get all the module directory names using the wildcard/glob. Then, for each directory, change to the 'ios' subdirectory and run the zip command. It's possible to execute the zip command directly, but that would include the extended path in the archive, which you may not want. Finally, jump back to the parent directory and continue with the next iteration.

Here's a demo. This is the original directory structure:

```
[haxiel@testvm1 custom-modules]$ tree
.
├── Module1
│   └── ios
│       └── src
│           ├── file1
│           └── file2
└── Module2
    └── ios
        └── src
            ├── file1
            └── file2

6 directories, 4 files
```

And now we run the command:

```
[haxiel@testvm1 custom-modules]$ for module in * ; do cd "$module/ios/" && zip -qr "$module.zip" src/ && cd - &> /dev/null ; done
```

The resulting directory structure is this:

```
[haxiel@testvm1 custom-modules]$ tree
.
├── Module1
│   └── ios
│       ├── Module1.zip
│       └── src
│           ├── file1
│           └── file2
└── Module2
    └── ios
        ├── Module2.zip
        └── src
            ├── file1
            └── file2

6 directories, 6 files
```

And here is a sample ZIP file showing its contents:

```
[haxiel@testvm1 custom-modules]$ unzip -l Module1/ios/Module1.zip
Archive:  Module1/ios/Module1.zip
  Length      Date    Time    Name
---------  ---------- -----   ----
        0  03-16-2019 20:42   src/
        0  03-16-2019 20:42   src/file1
        0  03-16-2019 20:42   src/file2
---------                     -------
        0                     3 files
```
Using terminal to zip unknown amount of packages with dynamic names
1,552,746,248,000
I have two commands, command1 and command2, and I need to combine them into a single expression using the output of command1 in command2.

command1:

```shell
grep 0x017a /sys/bus/pci/devices/*/device | cut -d/ -f6
```

Output: `00:00:01`

command2:

```shell
head -n1 /sys/bus/pci/devices/00:00:01/resource | cut -d ' ' -f 1 | tail -c 9
```

How do I use command1's output (00:00:01) in command2 and combine them into a single expression?
To use the output of one command as an argument of a second command, the command substitution mechanism `$()` can be used. For example, instead of:

```shell
$ whoami
jimmij
$ ls /home/jimmij/tmp
file1 file2
```

you can do:

```shell
$ ls /home/"$(whoami)"/tmp
file1 file2
```

In your specific case, the single command becomes:

```shell
head -n1 "/sys/bus/pci/devices/$(grep 0x017a /sys/bus/pci/devices/*/device | cut -d/ -f6)/resource" | cut -d ' ' -f 1 | tail -c 9
```

Notice I also quoted the entire expression; read here why you should do that.
How to use regular expression result into another expression
1,552,746,248,000
I have a very large wordlist. How can I use Unix to find instances of multiple words fitting specific character-sharing criteria? For example, I want words 1 and 2 to have the same fourth and seventh characters, words 2 and 3 to have the same fourth and ninth characters, and words 3 and 4 to have the same second, fourth, and ninth characters.

Example:

```
aaadiigjlf
abcdefghij
aswdofflle
bbbbbbbbbb
bisofmlwpa
fsbdfopkld
gikfkwpspa
hogkellgis
```

might return:

```
abcdefghij
aaadiigjlf
fsbdfopkld
aswdofflle
```

For clarification, I need the code to return any words that share the same characters in the given positions; I don't have specific characters (like "d" and "g" as given in the example) in mind. Also, I'd like it to be able to return words that don't fit ALL of the criteria; e.g. in the example given, words 1 and 4 share a fourth character, but not necessarily the second, seventh, and ninth. With the program I'm running in its finished form, I'm expecting it to return a very small list of words (probably only ten) based on nine strict character-sharing criteria.

EDIT: All right, cards on the table. Here's the problem exactly how I was given it. I am given a wordlist and told that there are ten ten-letter words in the list that can fit into a grid like so:

```
-112--3---
---2--3-4-
-5-2----4-
-5-2--6-4-
75-2--6---
75---8----
7----8----
79---8----
-9--0-----
-9--0---xx
```

Every word reads across. Every space with the same digit (and x) occupying it (all the 1s, all the 2s, etc.) is the same letter (different digits could potentially be the same letter, though not necessarily).

UPDATE: I'm still running Ralph's code. It might have been done by now, but after my external hard drive failed, I had to restart the process. It's been almost 48 hours, but it's still puttering along.
It's difficult to avoid processing the file list many times, but once for each rule should be enough. The main processing would be over the words, repeated 10 times, whilst extending possible "word lists" where for each list, the i:th word matches the i:th rule with respect to that list. Each word is added to extend a list when it matches accordingly for that list. bash is a little bit weak for keeping this data structure, but you may choose to represent a "word list" as a sequence of comma-separated words, ended with :R to indicate the next rule R to apply for extending the list. That R is of course the same as the number of words in the list plus 1. With that as main data structure, you might arrive at the following main procedure:

N=0
M=0
cat $1 $1 $1 $1 $1 $1 $1 $1 $1 $1 | while read w || ending ; do
    [ -z "$F" ] && F=$w                                    # capture the first word
    [ "$F" = "$w" ] && N=$((N+1))                          # count first word appearances
    Q=( )
    matches $w 1 "" && Q=( ${w}:2 )
    for p in ${P[@]} ; do
        A="${Q[@]}" && [ "${A/$p/}" = "${A}" ] || continue # if duplicate
        R=${p#*:} && [ $R -lt $M ] && continue             # if path too short
        Q=( ${Q[@]} $p )                                   # preserve this path for next word
        [ "${p/$w/}" = "$p" ] || continue                  # if word already in path
        p=${p%:*}                                          # p is now the word list only
        if matches $w $R $p ; then
            Q=( ${Q[@]} $p,${w}:$((R+1)) )
            M=$N
        fi
    done
    P=( ${Q[@]} )
done

The matches function would be an operational representation of the rules, to determine whether a word w is an appropriate extension for list p with respect to rule R, or not.
Something like the following (placed before the main procedure):

matches() {
    local w=$1
    local p=$3
    case $2 in
    1)  # -112--3---
        eqchar $w 2 $w 3 ;;
    2)  # ---2--3-4-
        eqchar $w 4 $p 4 && eqchar $w 7 $p 7 ;;
    3)  # -5-2----4-
        eqchar $w 4 $p 4 && eqchar $w 9 $p $((11+9)) ;;
    4)  # -5-2--6-4-
        eqchar $w 2 $p $((22+2)) && eqchar $w 4 $p 4 && eqchar $w 9 $p $((11+9)) ;;
    5)  # 75-2--6---
        eqchar $w 2 $p $((22+2)) && eqchar $w 4 $p 4 && eqchar $w 7 $p $((11+7)) ;;
    6)  # 6: 75---8----
        eqchar $w 1 $p $((44+1)) && eqchar $w 2 $p $((22+2)) && eqchar $w 7 $p $((33+7)) ;;
    7)  # 7: 7----8----
        eqchar $w 1 $p $((44+1)) && eqchar $w 6 $p $((55+6)) ;;
    8)  # 8: 79---8----
        eqchar $w 1 $p $((44+1)) && eqchar $w 6 $p $((55+6)) ;;
    9)  # 9: -9--0-----
        eqchar $w 2 $p $((77+2)) ;;
    10) # 10: -9--0---xx
        eqchar $w 2 $p $((77+2)) && eqchar $w 5 $p $((88+5)) && [ -z "${1#*xx}" ] ;;
    *)  return 1 ;;
    esac
}

The eqchar function just tests whether a character of the first string, at a given position, matches a character of the second string at a position. The latter string is the prior words in order with comma separation, allowing the indexing scheme of i*11+j for the j:th character (1 based) of the i:th word (0 based). E.g. the index $((77+2)) is the second character of the 8:th word.

eqchar() {
    local w=$1
    local p=$3
    [ "${w:$(($2-1)):1}" = "${p:$(($4-1)):1}" ]
}

The eqchar function should be declared before the matches function, or certainly before the main procedure. Finally, the main procedure includes an ending function to print the result at the end. The expected result would be that P holds a single "word list" of length 10, but in general, P will actually hold all the longest possible word lists appropriate for the matches rules. The ending function should make the desired printout, then return 1 so as to terminate the while clause. Note that this is a "pure" bash solution, with O(N) (or O(N*T) where T is the number of matches to the first rule, if significantly high).
Filter different identical characters in multiple words
1,552,746,248,000
I have 20 files in a folder. Ten of them follow the naming pattern PTT-20190118-WA0010.wav and the other ten follow the pattern PTT-20190118-WA0010_s.wav. How can I delete the files matching the PTT-20190118-WA0010.wav pattern with a single command?
If you don't have any other matching files you can use rm PTT-*[0-9].wav or even rm *[0-9].wav assuming all file names without _s end with a digit before .wav. I suggest to try with echo instead of rm first or use rm -i to get a confirmation request for every file to avoid accidentally removing the wrong files.
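The echo dry run suggested above can be sketched like this (using a throwaway directory and one file of each naming pattern from the question; /tmp/wavtest is just an example path):

```shell
# set up a scratch directory with one file of each naming pattern
mkdir -p /tmp/wavtest && cd /tmp/wavtest
touch PTT-20190118-WA0010.wav PTT-20190118-WA0010_s.wav

# dry run: the glob only matches names ending in a digit before .wav,
# so the _s file is left out of the expansion
echo PTT-*[0-9].wav

# once the expansion looks right, do the actual removal
rm PTT-*[0-9].wav
```

After the rm, only PTT-20190118-WA0010_s.wav should remain in the directory.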
Removing files in a folder with specific pattern
1,552,746,248,000
In using the wget command below,

$ wget \
    --recursive \
    --no-clobber \
    --page-requisites \
    --html-extension \
    --convert-links \
    --restrict-file-names=windows \
    --domains grantmlong.com \
    --no-parent \
    grantmlong.com/teaching/index.html

I have been trying to download all content from a professor's course page. For some reason, while much of the image content for the remainder of the site is being downloaded correctly, the images for the reveal.js lecture slides are not being downloaded. For example, if, on my local computer, I navigate to grantmlong.com/teaching/lectures/ and open lecture1.html, the third slide shows a broken-image placeholder instead of the image that appears on the live site. On the website, I find that the image is located at https://grantmlong.com/teaching/lectures/img/hbr.png. If I navigate to the local img folder downloaded by wget, I see

cd grantmlong.com/teaching/lectures/img
ls -1
l10_f0.png
l10_f1.png
l10_f2.png
l10_f3.png
l10_f4.png
l10_f5b.png
l10_f5.png
l10_f6.png
l10_f7.png
l10_p1.png
l10_p2.png
l11_p1.png
l11_p2.png
l11_p3.png
l11_p4.png
l11_p5.png
l11_p6.png
l12_p1.png
l12_p2.png
l5_e1.png
l5_e2.png
l5_e3.png
l5_e4.png
l5_glm.png
l5_logreg.png
l5_p10.png
l5_p11.png
l5_p1a.png
l5_p1b.png
l5_p2.png
l5_p3.png
l5_p4.png
l5_p5.png
l5_p6.png
l5_p7.png
l5_p8.png
l5_p9.png
l5_reg_output_1.png
l5_reg_output_2.png
l5_reg_output_3.png
l5_reg_output_4.png
l5_reg_output.png
l6_accuracy.png
l6_confusion.png
l6_p1.png
l6_precision.png
l6_recall.png
l9_p1.png
l9_p2.png
l9_p3.png
l9_p4.png
l9_t1.png
l9_t2.png
l9_t3.png
l9_t4.png
l9_t5.png

hbr.png is nowhere to be found, which shows that the images in these reveal.js slides are not considered "page requisites" and are not being downloaded by wget. What can I do to ensure these images are downloaded? Also, note that some of the images on the reveal.js slides come from 3rd party sites like giphy. How can I ensure that this external content is downloaded while keeping the option --domains grantmlong.com true for all pages that aren't reveal.js slides?
After some more searching, I found a (hacky) solution to the problem of downloading an archive of reveal.js slides. On the codimd github, the user "zeigerpuppy" posted the following response: I have found a way to save an archive of a slide presentation built with codimd. I had some trouble getting wget to pull the images from the presentation (I think because the links to the images are in markdown). So, it's a three step process but it's quick and works well. Let's say you have a slide show at https://codimd.server.net/p/S1PIjfhM8#/ use wget to grab the files and the requisites (.css and .js) your presentation will end up as p/S1PIjfhM8.html ` wget --recursive --no-clobber --page-requisites \ --html-extension --convert-links \ --domains codimd.server.net \ https://codimd.server.net/p/S1PIjfhM8#/ use the firefox plugin: Image Picka use the save pattern: Image_Picka/uploads/${name}${ext} it gets all images on page (including .svg) move the images to the folder called uploads in the web archive root we need to use sed to change the links in the html file to relative links so that they point to the images ` cd p sed -i .bak 's|/uploads/upload_|../uploads/upload_|g' S1PIjfhM8.html Then you'll have a full copy of the slides that you can run offline. It's also good for archive purposes. It'd be great if something like this was also built into the codimd program under the save options, maybe save slides. I took a similar approach, although I didn't do the last step with sed. Instead, I used Image Picka to download all the images missed by wget and I put them in the grantmlong.com/teaching/lectures/img/ directory on my local wget archive. That made most of the image content appear in the slides. Although the gifs from 3rd party sites won't load, those were mostly aesthetic (no important equations or diagrams were in .gif format.) So, I'm happy that I can view the most essential content offline.
Can wget download reveal.js image assets?
1,552,746,248,000
Let's say I have a (Python3) script of my own named myscript; for various reasons, myscript (not myscript.py) is stored in a sub-directory named bin : mydir/ mydir/bin/ mydir/bin/myscript -rwxr-xr-x myscript begins with the usual shebang line, namely : #!/usr/bin/env python3 When I'm in mydir/, I call my script this way: $ ./bin/myscript The result is tantalizing (to me !), myscript being called twice ! I get something like: sh: 1: myscript: not found *** specific message defined in myscript *** The first line is clearly a sign that the shell tries to find a command named myscript. The second line is what I wrote in myscript. Why is my script called twice ?
@Kusalananda mentioned that your script may itself be calling myscript via system(). When it does, the directory containing myscript is probably not in the $PATH seen by the shell that system() spawns, so the lookup of the bare name fails; you would need to pass the full path of myscript rather than a relative one.
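A small shell sketch of that failure mode (the mydir/bin/myscript layout is taken from the question; that the script really calls system() is an assumption, and /tmp/mydir is just a scratch location):

```shell
# recreate the layout from the question in a scratch location
mkdir -p /tmp/mydir/bin
printf '#!/bin/sh\necho hello from myscript\n' > /tmp/mydir/bin/myscript
chmod +x /tmp/mydir/bin/myscript
cd /tmp/mydir

# what system("myscript") effectively runs: a $PATH lookup, which fails
sh -c 'myscript' 2>&1 | head -n 1     # "myscript: not found" (or similar)

# passing the full path sidesteps $PATH entirely
sh -c "$PWD/bin/myscript"
```

So inside the Python script, building an absolute path first and passing that to system() would work regardless of the caller's PATH.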
program called twice from the command line [closed]
1,552,746,248,000
I have two projects named project1 and project2 with identical file trees as follows. project1/, project2/

.
├── src
│   ├── index.html
│   ├── main.js
│   ├── normalize.js
│   ├── routes
│   │   ├── index.js
│   │   └── Home
│   │       ├── index.js
│   │       └── assets
│   ├── static
│   ├── store
│   │   ├── createStore.js
│   │   └── reducers.js
│   └── styles
└── project.config.js

Now I want to compare to see if the following files from each tree are identical to each other.

files-to-compare.txt
src/main.js
src/routes/index.js
src/store/reducers.js
project.config.js

Is there any way to accomplish this using a single unix command which leverages the list of files-to-compare.txt without having to do a separate command for each file in the list? If so, how? Edit: This is a different question than this one because that question asks about comparing all the files in two directories. This question asks about specific files sprinkled across multiple directories. Excluding many of those that would have otherwise been included by the answers to the other question.
In a single line -- not a single command -- calling diff once for each: while IFS= read -r filename; do diff project1/"$filename" project2/"$filename"; done < files-to-compare.txt This reads the files-to-compare.txt file line-by-line, ensuring that nothing gets in the way of reading the entire line, then calls diff with that filename under each of the project1 and project2 directories.
How to see which files are identical given a list of files in two projects
1,552,746,248,000
I'm using KDE and I want to change system sleep timeout. It should be a part of a system bootstrap script (i.e. I want to automate it), so I'd like to know how can I manipulate KDE configs from command line. I've found this question, but the answer only works inside an X session, and I'd like to execute the script over ssh. I suppose the config files are there somewhere, but I only found screen locking config in .config.
You did not state which version of KDE you are using. User config files have been moved around between releases. In earlier releases, a lot was in ~/.kde or ~/.kde4 while more recently ~/.config is the standard directory. With plasma 5.13 from KDE Neon, Power devil uses ~/.config/powermanagementprofilesrc. A general solution to find a config file: Make a git repo (git init) in the directory you suspect the config files might be in (including its children) Add everything to the repo (git add .) Make a commit (git commit) Make the change you want via GUI (e.g. System Settings) Check where and what has changed (git diff) You can remove the git repo with rm -rf .git when you are done. If you added a very big directory (like homedir) in step 1, then you might want to add some cache directories or other spontaneously changing false leads to the .gitignore file. Also, you might want to bookmark https://userbase.kde.org/KDE_System_Administration for help on the location and syntax of config files.
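The git-based discovery steps above can be sketched as a shell session (run here in a throwaway directory with a fabricated config file rather than your real ~/.config, so nothing of value is touched; the file name and setting are made up for the demo):

```shell
# 1. make a scratch "config dir" with one file in it
mkdir -p /tmp/cfgdemo && cd /tmp/cfgdemo
echo "SuspendTimeout=300" > powermanagementprofilesrc

# 2. snapshot everything in a temporary git repo
git init -q
git add .
git -c user.email=demo@example.com -c user.name=demo commit -qm baseline

# 3. change a setting (normally done via System Settings; simulated here with sed)
sed -i 's/300/600/' powermanagementprofilesrc

# 4. git now tells you exactly which file the GUI touched
git diff --name-only

# 5. clean up the repo when done
rm -rf .git
```

Step 4 prints powermanagementprofilesrc, i.e. the one file that changed since the baseline commit.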
How to edit KDE suspend settings via command line, without X session?
1,552,746,248,000
The command pstree shows a process tree such as this:

systemd-+--agetty
        +--dbus-daemon
        +--login----bash---pstree
        +--systemd-qqsd

I want to show a process tree with some specified attribute for each process (maybe stat or pid... that is, options like those you can specify with ps -o). Is there a way to achieve this behavior with pstree or any other way?
How about top? Start top then hit V (capital V) to display process tree. Press f to select the fields to display.
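If an interactive view isn't required, note that on Linux, procps ps can also draw an ASCII tree itself while letting you pick the columns (a sketch; adjust the -o column list to taste):

```shell
# process tree with PID and process state for every process,
# truncated to the first few lines for readability
ps -e --forest -o pid,stat,comm | head -n 15
```

The --forest option indents the comm column to show parent/child relationships, so you get the tree shape of pstree plus any ps -o attributes you ask for.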
How to show process tree with specified attributes
1,552,746,248,000
Ultimately I want to detect a new backup file being dropped into a directory and then move that new file to another location for other operations. This needs to work when there is nobody logged into the server and the script I use to start the operation will be triggered by a crontab entry. I tried using 'inotifywait' like this:

dir=/home/userid/drop/
target=/home/userid/current/
inotifywait -m "$dir" --format '%w%f' -e create | while read file; do
    mv "$file" "$target"
done

This only works if you have a terminal window session open. If you try to start this script unattended using a crontab entry, the inotifywait command is ignored. So then I tried using 'entr' and found the same problem. It only works if you have a terminal window open. When I created a script using entr and triggered it unattended with a crontab entry, it was ignored just like the inotifywait example. So I know that this can be done using 'stat' and I have looked at many examples and tried to figure them out for my purpose, but I am just not understanding the syntax for it. I know stat can detect the existence of a new file in a directory, but I do not know how to process that result in order to trigger the execution of the mv (move) command to move the new file to a different location. And once I have a stat syntax that can do this, I will need it to run perpetually. Maybe it only checks every 15 seconds or something, but I will need it to always be prepared to move the file. If anyone has experience doing this and can kindly help me with the syntax to link stat to executing another command, I would be greatly appreciative. I really believe that others would like to know how to do this as well, because I cannot imagine that everyone is ok with keeping a ssh putty window open 24/7 for the other 2 solutions. Thanks in advance. BKM
All you need is incron. Install incron package first if you have Ubuntu/Debian: sudo apt install incron or use the command for Red Hat/Fedora: sudo yum install incron Open file /etc/incron.allow in your favorite text editor - let it be vim: vim /etc/incron.allow and add new line with your user name (assume it's bob) to allow him to use incron: bob Afterwards open incron rules editor: incrontab -e and add the following line: /home/userid/drop/ IN_CREATE mv /home/userid/drop/$# /home/userid/current/ where $# is incron built-in wildcard which means name of newly dropped backup file detected by incron. To test the created rule add a file to the /home/userid/drop/ directory and check if the dropped file has been moved to /home/userid/current/ directory. Additionally check syslog: tail /var/log/syslog
How can I use 'stat' to detect a new file and then move it to a different directory?
1,552,746,248,000
If I have a folder with the following file names: cluster_sizes_0.txt cluster_sizes_1.txt cluster_size_2.txt etc. Each file contains a single column of values. Is there a command within linux such that I could combine all files into cluster_all.txt? The first column in this new file would correspond to cluster_sizes_0.txt, the second column would be cluster_sizes_1.txt etc. There could be as many as 200 cluster txt files, but it changes for each folder. I am looking for a way to combine these files, instead of copying each one by one. Also, I need to make sure they are pasted into the file in order. This may have some issues with the numbering system, since I only include single digits if below 10. For instance: paste cluster_size.* > cluster_all.txt doesn't paste them in order due to the numbering. How can I fix the numbering without manually changing all of them?
The command paste merges columns together. So, for example, if we have these 3 files then paste will create the nice result:

$ cat file_1.txt
1a
1b
1c
$ cat file_2.txt
2a
2b
2c
$ cat file_3.txt
3a
3b
3c
$ paste -d, file_1.txt file_2.txt file_3.txt
1a,2a,3a
1b,2b,3b
1c,2c,3c

So now the question is, really, how to get the files in order. We can cheat and let ls do the work for us

$ ls
file_1.txt  file_13.txt file_17.txt file_20.txt file_6.txt
file_10.txt file_14.txt file_18.txt file_3.txt  file_7.txt
file_11.txt file_15.txt file_19.txt file_4.txt  file_8.txt
file_12.txt file_16.txt file_2.txt  file_5.txt  file_9.txt
$ ls -v
file_1.txt file_5.txt file_9.txt  file_13.txt file_17.txt
file_2.txt file_6.txt file_10.txt file_14.txt file_18.txt
file_3.txt file_7.txt file_11.txt file_15.txt file_19.txt
file_4.txt file_8.txt file_12.txt file_16.txt file_20.txt
$ paste -d, $(ls -v file_*.txt)
1a,2a,3a,4a,5a,6a,7a,8a,9a,10a,11a,12a,13a,14a,15a,16a,17a,18a,19a,20a
1b,2b,3b,4b,5b,6b,7b,8b,9b,10b,11b,12b,13b,14b,15b,16b,17b,18b,19b,20b
1c,2c,3c,4c,5c,6c,7c,8c,9c,10c,11c,12c,13c,14c,15c,16c,17c,18c,19c,20c

Now, beware that parsing ls is normally a bad thing. If there are any unexpected filenames or odd characters (eg whitespace, globbing) then it can break your script. But if you're confident the filenames are "good" then this will work.
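One way to sidestep parsing ls (assuming GNU sort, whose -V option does version/natural ordering) is to feed the glob expansion through sort -V instead; a small sketch with made-up files whose numbers cross the one-digit boundary:

```shell
# build a scratch set of two-line files: file_1.txt, file_2.txt, file_3.txt, file_10.txt
mkdir -p /tmp/pastedemo && cd /tmp/pastedemo
for i in 1 2 3 10; do printf '%sa\n%sb\n' "$i" "$i" > "file_$i.txt"; done

# sort -V orders file_2.txt before file_10.txt, unlike plain lexical order;
# this still assumes the file names contain no whitespace
paste -d, $(printf '%s\n' file_*.txt | sort -V)
```

which prints 1a,2a,3a,10a and then 1b,2b,3b,10b, i.e. the columns come out in numeric order.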
Combine files into one [duplicate]
1,552,746,248,000
I've created a user using sudo useradd -m peris, but when I log in at the terminal and press Tab to autocomplete, it's not working, though it does work for the root user. For instance, I am in a folder containing more folders like menus-can-peris. I type me and press Tab but menus-can-peris is not autocompleted.

$ echo $SHELL
/bin/sh
The /bin/sh shell in Ubuntu is dash, which does not support tab completion. I suggest that you change the login shell for the peris user to a shell that does support tab completion of filenames, for example bash. You change the login shell using the chsh command whilst logged in as the user, or with chsh peris as root. The new shell will be used for the next and subsequent logins.
terminal autocomplete when there are several files/directory?
1,552,746,248,000
I’m trying to read some bytes out of /dev/urandom, keep only the ones I can type easily, and trim the result to 30 characters. I can’t figure out how to get the “only 30 characters” behavior when the data is coming from a pipe, though. I tried cat /dev/urandom | tr -cd 'A-Za-z0-9' | cut -c -30 and cut -c -30 /dev/urandom | tr -cd 'A-Za-z0-9' | cut -c -30 but these hang without displaying anything. On the other hand, cut -c -30 /dev/urandom | tr -cd 'A-Za-z0-9' outputs data endlessly (although it’s clearly been processed by tr because only the specified characters appear in the output). In general, how can I read a certain number of characters (or bytes) from a pipe and then close the pipe?
cut will read the input until end-of-file, and cut a part out of each line. With the two first commands, you don't see anything, since tr removes any newlines, and so cut never sees a full line to process. In the last, the tr again removes newlines so you don't see how nicely cut kept the lines to a certain length. Assuming your head utility supports the -c <characters> option, you could use it to get a fixed number of bytes. If it doesn't, use dd. (dd bs=1 count=NNN < /dev/urandom) So, this would produce 32 alphanumerics (and a trailing newline):

tr -cd 'A-Za-z0-9' < /dev/urandom | head -c32 ; echo

Though, just out of principle, that's a bit of a waste since the tr tosses away about 3/4 of the raw input bytes. It might be better to pass the raw data from urandom to something like base64 or anything that produces a hex dump.
Reading a certain number of bytes from standard input and then closing the pipe
1,552,746,248,000
I am using Cygwin on Windows 7 OS. I am trying to match an email of this format: x.y@enron.com This is my regex: grep [a-zA-Z0-9]+\.[a-zA-Z0-9]+@(E|e)nron\.com it returns -bash: syntax error near unexpected token `(' It works when used in regex101.com It should match emails like [email protected] and [email protected]
[, \, ( and ) all have special meanings to the shell and should be quoted if you intend to pass them verbatim in an argument to a command (here grep). Also note that ranges like [a-z] make little sense outside of the C locale. So here, you probably want:

LC_ALL=C grep -xE '[[:alnum:]]+\.[[:alnum:]]+@(E|e)nron\.com' < some-file

Or:

LC_ALL=C grep -xE '[[:alnum:]]+\.[[:alnum:]]+@[Ee]nron\.com' < some-file

To report the lines that match that Extended regular expression exactly. With the alphanumerical characters limited to those of the C locale (so on Cygwin, ASCII English/Latin letters without diacritics and Arabic decimal digits; in the C locale [[:alnum:]] and [a-zA-Z0-9] match the same thing). Above I used the '...' form of quoting, which is the strongest one (no character is special within them). +, |, (...) are extended regexp operators (not basic regexp operators as expected by grep without -E). Without -x, grep would look for matches within the lines, so for example it would match on a line like:

[email protected] [email protected] whatever
^^^^^^^^^^^^^

Without LC_ALL=C, [[:alnum:]] could match on characters of other alphabetical scripts (like the Greek, Cyrillic, Korean ones), and [a-z] could match on some Latin characters with diacritics like á, ç, ÿ but not others like ẑ, ź as they come after z...
Cygwin unexpected token `(' with grep
1,552,746,248,000
I would like to execute a .blend file via terminal in a directory where only one .blend file is located. I would like to start this file without knowing the exact file name. If I knew the filename I would do it like this: blender --background example.blend --render-output //filename --render-frame 1
If it’s the only file in the directory (or even the only .blend file), all you have to do is run your command with a wildcard (a.k.a. glob, a.k.a. pathname expansion): blender --background *.blend --render-output //filename --render-frame 1
Open a single file in blender via terminal with an unknown filename
1,552,746,248,000
Different Linux distro have different file explorer. Is there any universal way(type the same command) to open the file explorer to a certain directory in shell(command line)? If it is possible, it is better to be suitable for any Unix-like OS. Thanks.
You're probably looking for something like xdg-open /path/to/directory, which should open up in the default file explorer. Of course, this only works on systems where xdg-like stuff is installed (so I would imagine mostly Linux systems).
How to open the default file explorer in shell?
1,552,746,248,000
I have a file that lists 5 lines of random words ("See spot", "See pot run", etc.), each on a new line. I was able to create code that counted the number of times each word appears in the file and sorted properly:

4 Spot
3 run
2 see
1 sees
1 Run
1 Jane

Code I used:

cat "FILENAME" | tr ' ' '\n' | sort -n | uniq -c | sort -r

I put each word on a new line, sorted, then counted unique values and sorted again. Now I have to take that count but produce this output:

3 1
1 2
1 3
1 4

This means there are 3 words with a count of 1, 1 word with a count of 2, 1 word with 3, 1 word with 4. I am having two problems. The first is how to get a count of the first column, which is itself already a count from uniq -c. The second is deleting the words in the second column and replacing them with the original counts of 1, 2, 3, 4.
You could do it with something like:

tr ' ' '\n' <infile \
| sort -n \
| uniq -c \
| awk '{ seen[$1]++ } END{for (x in seen) print seen[x], x }'

Or even:

tr ' ' '\n' <infile | sort -n | uniq -c|cut -d' ' -f7 |sort |uniq -c

Or better, possible to do with awk alone:

awk '{ seen[$0]++ }
END{ for (x in seen) count[seen[x]]++;
     for (y in count) print count[y],y }
' RS='( |\n)+' infile
3 1
1 2
1 3
1 4

In the awk above, seen[$0]++ stores each record (records are separated by either a space or a newline, per the RS setting) as a key of the associative array seen, incrementing its value whenever the same key is seen again. The END{ ... } block is executed once all records have been read: for each key x in seen, the value seen[x] is used in turn as a key of a second array, count, whose value is incremented for every word sharing that count. A final loop over count then prints each value count[y] (the count of counts) followed by its key y.
How do you count the first column generated from uniq -c
1,552,746,248,000
Assume that the word child had to be replaced with son in a file consisting of sentences (not a list of words). It isn't a straight string replacement; for example, the following should not happen: children should not be changed to sonren but the following should happen: child. should be changed to son. child! should be changed to son! Basically, the text that is to be replaced must be an independent word and the word separators must be preserved. How can this be done without hard-coding for every possible word separator?
You can achieve this with the sed command:

sed -i 's/\<child\>/son/' /path/to/your/file

-i[SUFFIX], --in-place[=SUFFIX] edit files in place (makes backup if extension supplied). The default operation mode is to break symbolic and hard links. This can be changed with --follow-symlinks and --copy. Considering your example, with the following test file:

child
child.
child!
children

Run the command and you'll get:

son
son.
son!
children

EDIT: To handle the case where the word child occurs more than once on the same line, you have to add g to the command:

sed -i 's/\<child\>/son/g' /path/to/your/file
replace words in a file [duplicate]
1,552,746,248,000
I want to display the total time of the current MPD playlist in vimus, or if that's not possible, at least on the command-line (so I can display it in i3bar). How do I do that?
Found it, with the help of a friend. This solution ignores the time already elapsed in the current song, but heh, good enough. In the command-line:

mpc playlist -f '%time%' | tr ':' ' ' | awk '
BEGIN {i = 0}
{i += $1*60 + $2}
END{
  if (int(i/3600) > 0)
    print int(i/3600) "h " int((i%3600)/60) "m " int(i%60) "s"
  else if (int(i) > 0)
    print int((i%3600)/60) "m " int(i%60) "s"
  else
    print "(empty)"
}
'
mpd: how to display the total playlist duration
1,552,746,248,000
I'm looking for a way to bring down all other devices except the given one. I think it would be along the lines of grepping the ifconfig output to pull all the device names except the specified one, and then using those names as input to an ifconfig $DEV down command.
ifconfig is deprecated; use ip instead. You can use this simple script:

#!/bin/bash
if [ -z "$1" ]
then
    echo "Device parameter missing!"
    exit 1
fi
devices=`ip a | grep UP | cut -d " " -f2 | tr -d ":" | grep -v "lo" | grep -v "$1"`
for dev in $devices
do
    ifdown $dev
done

It is called as:

./script.sh <device>

For example with eth0:

./script.sh eth0

If called without a parameter, it reports Device parameter missing!.
How to bring down all internet devices except the specified one?
1,552,746,248,000
I am trying to obtain the pairwise combinations of strings available for each stack of data. The input file contains two columns: col1 is gene names, col2 is the name of various stressors.

gene1 FishKairomones
gene1 Microcystin
gene1 Calcium
gene2 Cadmium
gene2 Microcystis
gene2 FishKairomones
gene2 Phosphorous
gene3 FishKairomones
gene3 Microcystin
gene3 Phosphorous
gene3 Cadmium

So here from the table, gene1 is responsive to 3 stressors: FishKairomones, Microcystin and Calcium. I would like to obtain a pairwise table like this:

gene1 FishKairomones gene1 Microcystin
gene1 FishKairomones gene1 Calcium
gene1 Microcystin gene1 Calcium
gene2 Cadmium gene2 Microcystis
gene2 Cadmium gene2 FishKairomones
gene2 Cadmium gene2 Phosphorous
gene2 Microcystis gene2 FishKairomones
gene2 Microcystis gene2 Phosphorous
gene2 FishKairomones gene2 Phosphorous

As you can see, gene1 FishKairomones is linked to gene1 Microcystin, gene1 FishKairomones is also linked to Calcium, and gene1 Microcystin is linked to gene1 Calcium. Similarly I would like to do it for all genes. Sometimes a gene can have 3 stressors, sometimes 4 and so on. I tried the code here: Command line tool to "cat" pairwise expansion of all rows in a file This creates all pairwise combinations of the entire file, which is not what I want.
AWK solution (will work even for unordered input lines):

awk '{ a[$1]=($1 in a? a[$1]",":"")$2 }   # grouping `stressors` by `gene` names
END {
    for (k in a) {                        # for each `gene`
        len=split(a[k], b, ",");          # split `stressors` string into array b
        for (i=1;i<len;i++)               # construct pairwise combinations
            for (j=i+1;j<=len;j++)        # between `stressors`
                print k,b[i],k,b[j]
    }
}' file

The output:

gene1 FishKairomones gene1 Microcystin
gene1 FishKairomones gene1 Calcium
gene1 Microcystin gene1 Calcium
gene2 Cadmium gene2 Microcystis
gene2 Cadmium gene2 FishKairomones
gene2 Cadmium gene2 Phosphorous
gene2 Microcystis gene2 FishKairomones
gene2 Microcystis gene2 Phosphorous
gene2 FishKairomones gene2 Phosphorous
gene3 FishKairomones gene3 Microcystin
gene3 FishKairomones gene3 Phosphorous
gene3 FishKairomones gene3 Cadmium
gene3 Microcystin gene3 Phosphorous
gene3 Microcystin gene3 Cadmium
gene3 Phosphorous gene3 Cadmium
pairwise combination of string based on two columns
1,552,746,248,000
In bash, it is easy to pass newline as command line argument: foo 'this is a command line argument with newlines' However, if I try the same in tcsh, it complains about a missing '. How can I type the same in tcsh?
echo 'this has\
more than one line'
Newline in command line argument in tcsh
1,552,746,248,000
I'm about ten days into learning Linux, and my English is not good. I'm learning the I/O redirection part. I know that when a command succeeds, the screen does not display an error message, and when it fails, it does. For example, suppose I input cat file1. Before the command is issued, what is the state of stdin, stdout and stderr? After I issue the command, what is the state of the standard streams? When file1 exists: in my opinion the final input is file1 and the final output is the terminal (not sure ;-0-), and I don't get an error message. So does it mean that I don't have stderr here? Or where is it? When the file does not exist: I get just an error message. So does it mean that I don't have stdin, and that stderr is redirected to stdout? Thank you to anyone who can explain this to me or give me some clue, like using the man pages or anything else.
Under normal circumstances stdin, stdout, and stderr always exist: ls -l /proc/self/fd But not all of them are used by every command. You can check where a command writes to: > strace -e trace=write cat nonexistfile write(2, "cat: ", 5cat: ) = 5 write(2, "nonexistfile", 12nonexistfile) = 12 write(2, ": No such file or directory", 27: No such file or directory) = 27 write(2, "\n", 1 ) = 1 Or simpler: compare command >/dev/null with command 2>/dev/null
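The suggested /dev/null comparison can be made concrete with a short, self-contained demo (the path /nonexistent is just an example; the || true only keeps the failing ls from aborting a set -e shell):

```shell
# ls writes its error message to stderr, so silencing stdout does not hide it
ls /nonexistent 1>/dev/null || true   # "No such file or directory" still shows

# silencing stderr does hide it
ls /nonexistent 2>/dev/null || true   # prints nothing
```

This shows that both streams exist for every command; which one a message travels on only becomes visible when you redirect them separately.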
What is the state of the standard I/O stream while issuing a command?
1,552,746,248,000
In my Mac I already have a personal RSA key pair under ~/.ssh I would like to create a couple of RSA keys in my Mac but for another computer. So I don't want to run some ssh command and replace my existing keys somehow. Just create some key pairs for users A and B in a custom dir so that I can copy them where I need to and be sure that nothing of my personal SSH settings is replaced/broken. How can I do this safely?
Use the -f flag eg % ssh-keygen -f /tmp/foobar Generating public/private rsa key pair. Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /tmp/foobar. Your public key has been saved in /tmp/foobar.pub.
Create ssh key pairs to copy elsewhere without messing up my personal ssh setup
1,552,746,248,000
I'm a newly minted CS major and I have been wondering what the benefits are of using grep versus Spotlight on a Mac. I ran a command to search for a specific file because I lost track of which directory it was in, which took a while, but when I did it in Spotlight the result was instantaneous. I enjoy using the command line and it makes my life much simpler than the UI directory view on the Mac, so I would like to learn how to use it to its full potential.
Quick comparisons Iterative grep is usually used to search file contents for matching patterns. Think searching file contents. grep has to manually walk the file system from a given starting point; I am curious how you were using grep. find is the standard *nix command to search your file system for a file. Check out the man page, it has lots of ways to use it. But basically, it walks your file system from a given starting point and looks for matches. Indexed Spotlight is an OS X utility that searches an indexed list of files. Spotlight relies on an index and is thus capable of very fast lookups, as it is not manually iterating over the file system; a background worker builds the index. The tradeoff with Spotlight is that if a file has not been indexed, then Spotlight will not find it, even if it exists. However, if the directory has been indexed, then Spotlight has good performance. Others If you need to search file contents and are using version control, say on a development project, ag (the silver searcher) is a much faster alternative to grep. It works within version control scenarios and respects your list of ignored file types when finding files containing a pattern.
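For name-based searches from the command line, `find` is the tool to compare against Spotlight; a small runnable sketch (the file names are examples):

```shell
workdir=$(mktemp -d)
touch "$workdir/Report.pdf"
echo 'hello pattern' > "$workdir/notes.txt"
# Search by file name (case-insensitive), walking the tree each time:
find "$workdir" -type f -iname 'report*'
# Search by file contents, which is what grep is for:
grep -rl 'pattern' "$workdir"
```

On Linux, the `locate` command (from mlocate/plocate) is the closest analogue to Spotlight's indexed lookup: a periodic job builds a database and `locate NAME` answers instantly from it.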
Grep versus spotlight on mac
1,552,746,248,000
I want to take a screenshot with import, and save it into a file whose name is current time. Here is what I've tried: sunqingyao:~$ date '+screenshot-%y%m%d-%H%M%S.png' screenshot-170716-173336.png # OK sunqingyao:~$ import screenshot-170716-173336.png sunqingyao:~$ ls -l screenshot-170716-173336.png -rw-rw-r-- 1 sunqingyao sunqingyao 250556 Jul 16 17:35 screenshot-170716-173336.png # OK sunqingyao:~$ date '+screenshot-%y%m%d-%H%M%S.png' | import import: missing an image filename `import' @ error/import.c/ImportImageCommand/1293. # Not OK For some reason, I want this command to be a one-liner. Background info: Actually I'm trying to bind a key to screenshot taking with i3. Here is the relevant part of my ~/.config/i3/config: # Take a screenshot upon pressing $mod+x (select an area) # https://i3wm.org/docs/userguide.html#keybindings bindsym --release $mod+x exec --no-startup-id date '+screenshot-%y%m%d-%H%M%S.png' | import Seems that I can only bind a key to a command that can fit in only one line, which is why a one-liner is required.
You can always use command-substitution as the other answer suggests, or use xargs to use what pipe gives: $ date '+screenshot-%y%m%d-%H%M%S.png' | xargs -I {} import {} $ ls screenshot-* screenshot-170716-042853.png
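Side by side, with `echo` standing in for `import` so the sketch runs without an X display:

```shell
# Command substitution: the shell expands $(...) into an argument:
name=$(date '+screenshot-%y%m%d-%H%M%S.png')
echo "import $name"
# xargs: {} is replaced by whatever arrived on the pipe:
date '+screenshot-%y%m%d-%H%M%S.png' | xargs -I {} echo "import {}"
```

Both forms turn the generated filename into a command-line argument, which is what import expects; piping alone only feeds its stdin.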
Passing the output of a command to another in one line [duplicate]
1,552,746,248,000
This is my test .sh file, errorandoutput.sh: #!/bin/bash echo myecho ls dflj If I run ./errorandoutput.sh >file 2>&1, then file contains the echo output followed by the error message. But I want the STDOUT information to come after the STDERR, so I changed the command to ./errorandoutput.sh >file 1>&2 and got an empty file. How can I make the STDOUT information come after the STDERR in the redirected file? I mean, I want to get a file like ls: cannot access 'dflj': No such file or directory myecho
That's not how the order of redirection works. The order of redirections only determines the order of actions the shell takes on file descriptors, not the order of the file's content. In: ./errorandoutput.sh >file 2>&1 the shell first redirects standard output to file, then redirects standard error to standard output, which now points to file, so both standard output and standard error go to file. At this stage the shell is done with redirection; it has nothing to do with the content of the file file. The order of content in file is determined by the order of the commands you run inside the script. Change your script to: #!/bin/bash ls dflj echo myecho and you would get what you want.
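The point can be checked directly: with both streams pointed at one file, the lines land in the order the commands run, not in the order of the redirections (a sketch using echo stand-ins for the script):

```shell
f=$(mktemp)
# stderr line first in the script => stderr line first in the file:
{ echo 'from stderr' >&2; echo 'from stdout'; } > "$f" 2>&1
cat "$f"
# from stderr
# from stdout
```

Because 2>&1 duplicates the already-redirected fd 1, both streams share one open file description and append in program order.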
How to make the redirect file have a custom order
1,552,746,248,000
I'm using the below Network Switch: HPE ProCurve J8697A Switch 5406zl Software Revision K.14.34 I'm advised to execute the below command to know my network switch IP: tcpdump -i net0 ether proto 0x88cc -v -c 5 It is showing the following output but not executing completely and getting stuck there: dropped privs to nobody tcpdump: listening on net0, link-type EN10MB (Ethernet), capture size 262144 bytes On giving Ctrl+C, it shows below output: root@solaris:~# tcpdump -i net0 ether proto 0x88cc -v -c 5 dropped privs to nobody tcpdump: listening on net0, link-type EN10MB (Ethernet), capture size 262144 bytes ^C 0 packets captured 607908 packets received by filter 0 packets dropped by kernel root@solaris:~# What is this command doing? Why is it not giving the expected output and is there any other command to know the same?
The tcpdump filter ether proto 0x88cc captures LLDP (Link Layer Discovery Protocol) frames; switches with LLDP enabled periodically announce themselves in these frames, including a management address. Your capture sits at 0 packets because no LLDP frames are arriving, most likely because LLDP is not enabled on the switch or on that port. As an alternative, if you are connected to a host which is directly wired to the switch, you can do a: ping -b <yourBroadcastAddress> Chances are only the switch will answer as it will, most likely, depending on the brand of the switch and configuration, block broadcast ping from being forwarded.
Command to know the Network Switch IP
1,552,746,248,000
I have a file data like this: head data 19 54240283 . T C . . . 188,18:208:14:102:18:189:209:37.7222:37.4681:9:139:9:50:50.8889:40.3545:919.145:640.562:0 1 103020 . A C . . . 1,2:3:2:2:2:2:4:38:38:2:2:0:0:46.5:28:0.5:162:0 2 8797402 . G A . . . 0,3:3:3:0:3:0:3:38:0:0:3:0:38.3333:840.056:0 The most important information is hidden at the 9th column (longest one), right between the 4th and 6th :. For example: 19 54240283 . T C . . . 18:189 1 103020 . A C . . . 2:2 2 8797402 . G A . . . 3:0 Finally, I would like to extract them out and create new columns for them. For example, 19 54240283 . T C . . . 18 189 1 103020 . A C . . . 2 2 2 8797402 . G A . . . 3 0 Could anyone help me figure out how to do this? Thanks!
awk solution: awk -F'[[:space:]]+|:' '{ print $1,$2,$3,$4,$5,$6,$7,$8,$13,$14 }' data | column -t The output: 19 54240283 . T C . . . 18 189 1 103020 . A C . . . 2 2 2 8797402 . G A . . . 3 0 -F'[[:space:]]+|:' - whitespace(s) and : are considered as field separator
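To see the field arithmetic: with ':' and runs of whitespace both acting as separators, the 5th and 6th colon-separated values of the long column land in $13 and $14 (one sample line from the question, truncated for brevity):

```shell
line='19 54240283 . T C . . . 188,18:208:14:102:18:189:209:37.7222'
echo "$line" | awk -F'[[:space:]]+|:' '{ print $13, $14 }'
# 18 189
```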
How to extract out information separated by a character from a column?
1,552,746,248,000
I have the file ole.txt: A B C 1 2 3 a b c 11 22 33 with option A: cat ole.txt | sed -n -e 1p -e 3p we get: A B C a b c with option B: sox=$(cat ole.txt | sed -n -e 1p -e 3p) echo $sox we get: A B C a b c How can I change the code in option B to get the result in option A (the result as 2 rows)?
Quotes are the answer. echo "$sox" should do the trick. If you don't want a newline at the end, you can use printf '%s' "$sox" (keep the variable out of printf's format string, so any stray % characters in it are not interpreted). Taking a look at the Linux Documentation Project's page on quoting variables might help you understand better what weak quotes, i.e. ", entail.
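What the quotes change, in one runnable sketch: without them the shell splits the value on whitespace, newlines included, and echo re-joins the words with single spaces.

```shell
sox='A B C
a b c'
echo $sox              # unquoted: A B C a b c  (newline lost)
echo "$sox"            # quoted: both lines survive
printf '%s\n' "$sox"   # same result, without echo's portability quirks
```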
Get variables in two separate lines
1,552,746,248,000
I was trying to move a file with mv to see how it works, and now I cannot find it. The command I entered was: sudo mv ~/Documents/Books/UTMAnalysis.pdf /Desktop I am using OS X. Similar questions mentioned it might be in the root directory or somewhere as a hidden file. In the root directory there is a Desktop, but is that not the existing folder?
I suspect one of the following: renamed If /Desktop did not exist when you ran that command it would have renamed the file "UTMAnalysis.pdf" to be "Desktop". You could confirm whether it is a directory or a file with this command: ls -ld /Desktop If it's a directory the first character will be a "d" whereas if it's a file it will be a "-". linux-okrz:~ # ls -ld file -rw-r--r-- 1 root root 0 Apr 29 19:43 file linux-okrz:~ # ls -ld directory/ drwxr-xr-x 2 root root 4096 Apr 29 19:45 directory/ You can also run the stat command on it to see information on them: linux-okrz:~ # stat file File: 'file' Size: 0 Blocks: 0 IO Block: 4096 regular file Device: 807h/2055d Inode: 20709419 Links: 1 Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2017-04-29 19:43:57.620769552 -0600 Modify: 2017-04-29 19:43:57.620769552 -0600 Change: 2017-04-29 19:43:57.620769552 -0600 Birth: - linux-okrz:~ # stat directory File: 'directory' Size: 4096 Blocks: 8 IO Block: 4096 directory Device: 807h/2055d Inode: 20709424 Links: 2 Access: (0755/drwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2017-04-29 19:45:52.036413879 -0600 Modify: 2017-04-29 19:45:52.036413879 -0600 Change: 2017-04-29 19:45:52.036413879 -0600 Birth: - On the right-hand side of the output you can see "regular file" vs "directory". If it's a file then you can rename it back and check to make sure you can still access it. Inside /Desktop The next possibility is that it is in the /Desktop directory. If it is a directory (this should be confirmed by the previous suggestion), you didn't indicate whether you've checked in there or not. You can run this command as root to get a full layout of the directories and files in that directory: ls -lah /Desktop/ From there you can see whether you find the UTMAnalysis.pdf file. Hidden Action The third possibility is that there is another command or action, taken before or after the command you've listed, that's done something else to the file.
You can check your history with the history command to see if you can find any other command that has acted on that file. You could also try searching for the file with a find command as root: find / -type f -name "*UTMAnalysis.pdf*" If the above command doesn't find it then it either doesn't have "UTMAnalysis.pdf" in its name anymore, or no longer exists on the system.
Attempted to move a file with mv command and now it is lost?
1,552,746,248,000
I'd like to find files with the wav extension having the same filename, only different in extension (which is mp3) in the same directory. I tried the following structured command so far: $ diff -s $(for i in *.wav; do echo "${i%wav}"; done) $(for i in *.mp3; do echo "${i%mp3}"; done) diff: extra operand `abc(xyz' diff: Try `diff --help' for more information.
If you want to diff the output of some commands, use process substitution <(...) instead of command substitution $(...). Otherwise that seems it should work. diff $(foo) puts the output of foo on the command line of diff, while diff wants the name of a file to read. diff <(foo) arranges for the output of foo to be available from an fd, and gives diff a pathname that corresponds to the already-opened fd (/dev/fd/NN, might be system-specific). Though reading the question again, maybe you'd just want to do something like this: for f in *.wav ; do b=${f%.wav} if [ -f "$b.mp3" ] ; then echo "$b.mp3 exists" ; else echo "$b.mp3 missing" ; fi done That would tell if all .wav files in the directory have a corresponding .mp3 file. (But wouldn't show any .mp3 files that don't have a corresponding .wav, of course.)
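The loop from the answer, run on a throwaway directory (the song names are examples):

```shell
d=$(mktemp -d); cd "$d"
touch song.wav song.mp3 other.wav
for f in *.wav ; do
    b=${f%.wav}
    if [ -f "$b.mp3" ] ; then
        echo "$b.mp3 exists"
    else
        echo "$b.mp3 missing"
    fi
done
# other.mp3 missing
# song.mp3 exists
```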
Compare two file listings to find identical files but ignore extensions
1,552,746,248,000
I have about 700 folders. Each folder contains pairwise combinations of files. I would like to retain only one file per pairwise combination. Any of the pairwise files can be retained as both contain the same content. The files in the folder are not necessarily named in alphabetical order. Example: Folder1: -> A-B.txt -> B-A.txt Folder2: -> C-D.txt -> C-E.txt -> E-C.txt -> D-E.txt -> D-C.txt -> E-D.txt Final folder structure: Folder1: -> A-B.txt (or) B-A.txt Folder2: -> C-D.txt (or) D-C.txt -> C-E.txt (or) E-C.txt -> D-E.txt (or) E-D.txt
You could do something like ls *.txt | awk -F '[.-]' '{ if (f[$2,$1]) { print $0; } else { f[$1,$2] = 1} }' | xargs rm This works as follows: feed the names of the relevant files to awk. For each file, check if a file with reversed name has already been entered in the array f. If so, output the file name. If not, put it in the array f. Use the output of the awk program to delete the duplicate files.
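It is worth dry-running the pipeline without the final `xargs rm` first, to see exactly which files would go (a sketch on a throwaway directory):

```shell
d=$(mktemp -d); cd "$d"
touch A-B.txt B-A.txt C-D.txt    # C-D.txt has no D-C.txt partner
ls *.txt | awk -F '[.-]' '{ if (f[$2,$1]) { print $0; } else { f[$1,$2] = 1 } }'
# B-A.txt
```

Only B-A.txt is listed for deletion; C-D.txt survives because its reversed name never appears.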
removing pairwise duplicate files
1,489,055,014,000
I'm trying to set up remote backup for my website. In /etc/rsnapshot.conf I've set the following things but it's still not working. snapshot_root /abc_backups/ backup [email protected]:/var/www/abc.com/html/ Can anyone help me out on how to set this up? My website server is at 1.2.3.4, the source is /var/www/abc.com/html/ and the destination is /abc_backups/
Are you sure you have an abc_backups directory at the root of your filesystem? I really doubt it (and even if you did, this is not a good practice). Also, backup takes 2 arguments, not one as in your example: first the source (what you back up) and then the destination. Based on your description, change your backup line like this: backup [email protected]:/var/www/abc.com/html/ website/ (which will then back up the website server 1.2.3.4 under /abc_backups/website/) In case of doubt, you can always run rsnapshot with the -t flag to see what commands it would execute (without really executing them)
rsnapshot settings confusion
1,489,055,014,000
I am looking for a tool that runs via command line. sort of like xprop xdotool it simply needs to allow me to draw a rectangle on the screen. and tell me the measurements of it. I have tested out: "import" module by "imagemagick".. but perhaps there is something much lighter out there ? ( or perhaps even something that I can compile myself )
Some workaround. You'll need gnome-screenshot and imagemagick packages as well as a few standard commands. We'll simply define a random file name (in the temporary directory /tmp), take a screenshot and write it to said name, then analyse the image's dimensions (picking the pixel size only) and finally remove the image. #!/bin/bash imed=$(mktemp -u).png &&\ #-a allows area specification and #-f defines the screenshot file's location and name gnome-screenshot -a -f "$imed" &>/dev/null &&\ #now draw the rectangle #extract pixel dimensions form file identify "$imed" | awk -F' ' '{print $3}' &&\ #and remove it rm -f "$imed" Obviously this means creating a dummy file. One might specify a tmpfs for the image's location to have it in RAM only - speeds the process up and is better for the HD's health.
A Tool to measure onscreen drawn rectangle? [duplicate]
1,489,055,014,000
The file contains just one line: aaa. When I run cat file, the output runs straight into my prompt: user /dir : cat file aaauser /dir : What could be the problem? Extra info: perhaps this is not properly set in bashrc? PS1='\u \W : ' UPDATE: shouldn't there be a solution other than adding a newline to the file? Why is there no provision for output that can distort the command-line prompt (by not ending in a newline)?
Your file does not have a newline at the end. As a result the shell prompt is put right at the end of the output. You could fix that by adding one: printf '\n' >> file You can recreate this issue by creating a file without a newline at the end (printf, unlike echo, only emits a newline if you include \n in its format string). [zbrady@server ~]$ printf 'test' > testfile [zbrady@server ~]$ cat testfile test[zbrady@server ~]$ [zbrady@server ~]$ printf '\n' >> testfile [zbrady@server ~]$ cat testfile test [zbrady@server ~]$
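A quick check for the condition itself: the last byte of the file is a newline exactly when `tail -c 1` produces nothing once command substitution strips the trailing newline (a sketch; note an empty file also reports "ends with newline").

```shell
f=$(mktemp)
printf 'aaa' > "$f"
[ -n "$(tail -c 1 "$f")" ] && echo "missing final newline"
printf '\n' >> "$f"
[ -z "$(tail -c 1 "$f")" ] && echo "ends with newline"
```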
cat file : prints prior to "user :/" prompt
1,489,055,014,000
I often use reboot -n and shutdown -t now commands to restart and shutdown my system. Is there something similar to log out of the current user account? That is logout of my user session for the whole session. I'm using Ubuntu server with i3 so maybe I'm looking for an Ubuntu specific answer(?)
logout is used by users to end their own session
command to log the current user out of the system?
1,489,055,014,000
Assume you have a Linux machine and three users, user1, user2 and user3, who can log in to it. You created a rule: $ auditctl -w /etc/file.txt -p rwxa If you would like to see, daily, who accessed file.txt and at what time, how would you do it while minimizing information overload? A few apps access file.txt and create a lot of logged data, but there is a need to see only accesses to file.txt from the users (user1, user2, user3, their apps, remote users, etc.)
There are user-space tools for filtering and reporting audit events. See https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Security_Guide/sec-Creating_Audit_Reports.html for some examples. For your case you could try something like: ausearch --start today --success no --file /etc/file.txt | aureport --file to see all failed access attempts on the file for current day. Check ausearch and aureport manual pages for more options.
Viewing relevant information about file accesses from the audit log
1,489,055,014,000
I created a zip archive named tmp.zip. It has a size $ du -h tmp.zip 1.7G tmp.zip I copied it to flash drive $ du -h /media/tb/tmp.zip 1.5G /media/tb/tmp.zip What's wrong here? How can I check what is missing in the archive on the flash drive if something is missing?
The two filesystems most probably use different block sizes, which changes the granularity with which disk usage is accounted: du reports allocated blocks, not the byte count. Compare the byte sizes with du --apparent-size (or ls -l) on both copies, or run cmp on the two files; if they match, nothing is missing from the archive.
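The distinction is easy to demonstrate: `du` counts allocated blocks, while `stat` and `du --apparent-size` count bytes (GNU coreutils options assumed):

```shell
f=$(mktemp)
printf 'hello' > "$f"              # 5 bytes of content
stat -c %s "$f"                    # 5       (byte length)
du -B1 --apparent-size "$f"        # 5       (same, via du)
du -B1 "$f"                        # a whole block, e.g. 4096
```

To check the copy on the flash drive, compare apparent sizes of the two files or run `cmp tmp.zip /media/tb/tmp.zip`; if `cmp` is silent, the archives are byte-identical.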
Zip file has different size after copied to flash drive
1,489,055,014,000
I have a CSV formatted multi-line file with 5 columns (fields). I need to unify and correct the corrupted first column, which has lots of different formats of the code I need to unify. The complete final format of my code for the first column should be 00AB[0-9][0-9][0-9][0-9][0-9], where [0-9] could be any number, such as 00AB21345. The first four characters, i.e. 00AB, should always be as they are, but the 5 digits after that (i.e. [0-9][0-9][0-9][0-9][0-9]) could be any number, and if there are fewer than 5 digits, the missing digits on the far left should be substituted with 0. Example: <111> --> <00AB00111>; or <1111> --> <00AB01111>. To have an example, let's say I have the following file:
111 xx yy zzz ddd
1111 xx yy zzz ddd
11111 xx yy zzz ddd
A111 xx yy zzz ddd
A1111 xx yy zzz ddd
A11111 xx yy zzz ddd
AB111 xx yy zzz ddd
AB1111 xx yy zzz ddd
AB11111 xx yy zzz ddd
0A111 xx yy zzz ddd
0A1111 xx yy zzz ddd
0A11111 xx yy zzz ddd
0AB111 xx yy zzz ddd
0AB1111 xx yy zzz ddd
0AB11111 xx yy zzz ddd
00A111 xx yy zzz ddd
00A1111 xx yy zzz ddd
00A11111 xx yy zzz ddd
00AB111 xx yy zzz ddd
00AB1111 xx yy zzz ddd
0AB11111 xx yy zzz ddd
00AB12344 xx yy zzz ddd
00AB34527 xx yy zzz ddd
00AB56278 xx yy zzz ddd
00AB98902 xx yy zzz ddd
To cover all the possible scenarios I made up the following very long awk script. The labels represent the potential scenarios that could be found in my file and needed to be corrected. My request: does anyone know an awk script that addresses this in a much smaller way?
If so, would you explain it to me in detail, to learn :)
##111 awk -F',' '{if($0~/[0-9][0-9][0-9]/){print "001AB00"substr($1,1,3)","$2","$3","$4","$5;}else{print $1","$2","$3","$4","$5;}}' SC3.csv > y1.csv
##1111 awk -F',' '{if($0~/[0-9][0-9][0-9][0-9]/){print "001AB"substr($1,1,4)","$2","$3","$4","$5;}else{print $1","$2","$3","$4","$5;}}' y1.csv > y2.csv
##11111 awk -F',' '{if($0~/[0-9][0-9][0-9][0-9][0-9]/){print "001AB" substr($1,1,5)","$2","$3","$4","$5;}else{print $1","$2","$3","$4","$5;}}' y2.csv > y3.csv
##A111 awk -F',' '{if($0~/[A-Z][0-9][0-9][0-9]/){print "001"substr($1,1,1) "B00"substr($1,2,4)","$2","$3","$4","$5;}else{print $1","$2","$3","$4","$5;}}' y3.csv > y4.csv
##A1111 awk -F',' '{if($0~/[A-Z][0-9][0-9][0-9][0-9]/){print "001"substr($1,1,1) "B0" substr($1,2,5)","$2","$3","$4","$5;}else{print $1","$2","$3","$4","$5;}}' y4.csv > y5.csv
##A11111 awk -F',' '{if($0~/[A-Z][0-9][0-9][0-9[0-9][0-9]/){print "001"substr($1,1,1) "B" substr($1,2,6)","$2","$3","$4","$5;}else{print $1","$2","$3","$4","$5;}}' y5.csv > y6.csv
##AB111 awk -F',' '{if($0~/[A-Z][A-Z][0-9][0-9][0-9]/){print "001"substr($1,1,2) "00" substr($1,3,5)","$2","$3","$4","$5;}else{print $1","$2","$3","$4","$5;}}' y6.csv > y7.csv
##AB1111 awk -F',' '{if($0~/[A-Z][A-Z][0-9][0-9][0-9][0-9]/){print "001"substr($1,1,2)"0" substr($1,3,6)","$2","$3","$4","$5;}else{print $1","$2","$3","$4","$5;}}' y7.csv > y8.csv
##AB11111 awk -F',' '{if($0~/[A-Z][A-Z][0-9][0-9][0-9][0-9][0-9]/){print "001"substr($1,1,7)","$2","$3","$4","$5;}else{print $1","$2","$3","$4","$5;}}' y8.csv > y9.csv
##1A111 awk -F',' '{if($0~/[0-9][A-Z][0-9][0-9][0-9]/){print "00"substr($1,1,2) ",B00" substr($1,3,5) ","$2","$3","$4","$5;}else{print $1","$2","$3","$4","$5;}}' y9.csv > y10.csv
##1A1111 awk -F',' '{if($0~/[0-9][A-Z][0-9][0-9][0-9][0-9]/){print "00"substr($1,1,1) "B0" substr($1,3,6) ","$2","$3","$4","$5;}else{print $1","$2","$3","$4","$5;}}' y10.csv > y11.csv
##1A11111 awk -F',' '{if($0~/[0-9][A-Z][0-9][0-9][0-9][0-9][0-9]/){print "00"substr($1,1,2) "B" substr($1,3,7) ","$2","$3","$4","$5;}else{print $1","$2","$3","$4","$5;}}' y11.csv > y12.csv
##1AB111 awk -F',' '{if($0~/[0-9][A-Z][A-Z][0-9][0-9][0-9]/){print "00"substr($1,1,1) substr($1,1,3)"00" substr($1,4,6) ","$2","$3","$4","$5;}else{print $1","$2","$3","$4","$5;}}' y12.csv > y13.csv
##1AB1111 awk -F',' '{if($0~/[0-9][A-Z][A-Z][0-9][0-9][0-9][0-9]/){print "00" substr($1,1,3) "0" substr($1,4,7) ","$2","$3","$4","$5;}else{print $1","$2","$3","$4","$5;}}' y13.csv > y14.csv
##1AB11111 awk -F',' '{if($0~/[0-9][A-Z][A-Z][0-9][0-9][0-9][0-9][0-9]/){print "00" substr($1,1,8) ","$2","$3","$4","$5;}else{print $1","$2","$3","$4","$5;}}' y14.csv > y15.csv
##11A111 awk -F',' '{if($0~/[0-9][0-9][A-Z][0-9][0-9][0-9]/){print "0" substr($1,1,3)"B00" substr($1,4,6) ","$2","$3","$4","$5;}else{print $1","$2","$3","$4","$5;}}' y15.csv > y16.csv
##11A1111 awk -F',' '{if($0~/[0-9][0-9][A-Z][0-9][0-9][0-9]/){print "0" substr($1,1,3)"B0" substr($1,4,7) ","$2","$3","$4","$5;}else{print $1","$2","$3","$4","$5;}}' y16.csv > y17.csv
##11A11111 awk -F',' '{if($0~/[0-9][0-9][A-Z][0-9] [0-9][0-9][0-9]/){print "0" substr($1,1,3)"B" substr($1,4,8) ","$2","$3","$4","$5;}else{print $1","$2","$3","$4","$5;}}' y17.csv > y18.csv
##11AB111 awk -F',' '{if($0~/[0-9][0-9] [A-Z][[A-Z][0-9][0-9][0-9]/){print "0" substr($1,1,4)"00" substr($1,5,7) ","$2","$3","$4","$5;}else{print $1","$2","$3","$4","$5;}}' y18.csv > y19.csv
##11AB1111 awk -F',' '{if($0~/[0-9][0-9] [A-Z][[A-Z][0-9][0-9][0-9][0-9]/){print "0" substr($1,1,4)"0" substr($1,5,8) ","$2","$3","$4","$5;}else{print $1","$2","$3","$4","$5;}}' y19.csv > y20.csv
##1AB11111 awk -F',' '{if($0~/[0-9][0-9] [A-Z][[A-Z][0-9][0-9][0-9][0-9][0-9]/){print "0" substr($1,5,9) ","$2","$3","$4","$5;}else{print $1","$2","$3","$4","$5;}}' y20.csv > y21.csv
Maybe: awk 'sub("^0?0?A?B?","",$1) && $1=sprintf("00AB%05d",$1)' Delete any leading 00AB fragments from field 1, then convert it to 00AB followed by the rest of the number padded with zeros up to length 5. The expression is always true so the implicit { print } action fires. The sub is always true because the regular expression is nullable: a bit sneaky! The substitution takes place even if ^0?0?A?B? matches the empty string, because that is a successful match.
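Run against a few of the sample keys from the question, the one-liner normalises every variant in a single pass (fields beyond the first pass through, rejoined with single spaces by awk's default OFS):

```shell
printf '%s\n' '111 xx yy' '0AB1111 xx yy' 'AB11111 xx yy' '00A111 xx yy' |
awk 'sub("^0?0?A?B?","",$1) && $1=sprintf("00AB%05d",$1)'
# 00AB00111 xx yy
# 00AB01111 xx yy
# 00AB11111 xx yy
# 00AB00111 xx yy
```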
How to use awk to correct and unify a corrupted file with multiple columns and lines?
1,489,055,014,000
When using some Tor browsers, for instance in iOS, I have a nice list with speeds and countries from which I can choose the relay points that can be used/from which I do get out. Can I get such a list from the Linux command line when running the tor daemon?
As the post related to man tor describes in Does the Tor Browser Bundle cache relay information?, there is such a file with cache information in the system. DataDirectory/cached-consensus and/or cached-microdesc-consensus The most recent consensus network status document we've downloaded. So in Debian, the file with the cached Tor relays is /var/lib/tor/cached-microdesc-consensus and the information there can be valid for up to 24h (if not renewed, which is the normal behaviour). The stuff pertinent to this post seems to start at line 36 on my home server and ends somewhere around line 35963:
36 r mintberryCrunch ABCTIE984gTgUHkIeZdNvcDTiRE 2016-11-26 20:55:20 88.99.35.166 443 9030
37 m V1CEu0LsXapK9Ci55c+VHLEP89EG+1wWjSjsDSYyC0Y
38 s Fast Guard HSDir Running Stable V2Dir Valid
39 v Tor 0.2.5.12
40 w Bandwidth=16800
41 r CalyxInstitute14 ABG9JIWtRdmE7EFZyI/AZuXjMA4 2016-11-27 01:19:50 162.247.72.201 443 80
42 m hiyRvQn2CqLG7Xgp+eDcQe9u2IpJ44p/qZ+CrgIp+W4
43 s Exit Fast Guard HSDir Running Stable V2Dir Valid
44 v Tor 0.2.8.6
45 w Bandwidth=10800
I hacked a small bash script on the command line to get from this file the top 20 fastest relays:
sudo egrep ^"r |^w " /var/lib/tor/cached-microdesc-consensus | paste -d " " - - \
| sed "s/Unmeasured=. //" | \
awk ' { printf("%s %s %s ", $2, $6, $10 ); system("geoiplookup " $6 ); } ' | \
cut -f1,2,3,8- -d" " | sed "s/=/ /" | sort -k4 -n -r | head -20
And the end result was:
IPredator 197.231.221.211 Bandwidth 254000 Liberia
cry 192.42.115.101 Bandwidth 182000 Netherlands
GrmmlLitavisNew 163.172.194.53 Bandwidth 180000 France
regar42 62.210.244.146 Bandwidth 164000 France
xshells 178.217.187.39 Bandwidth 161000 Poland
dopper 192.42.113.102 Bandwidth 159000 Netherlands
TorLand1 37.130.227.133 Bandwidth 151000 United Kingdom
0x3d001 91.121.23.100 Bandwidth 151000 France
hviv104 192.42.116.16 Bandwidth 149000 Netherlands
colosimo 109.236.90.209 Bandwidth 136000 Netherlands
Onyx 192.42.115.102 Bandwidth 135000 Netherlands
redteam01 209.222.77.220 Bandwidth 134000 United States
belalugosidead 217.20.23.204 Bandwidth 129000 United Kingdom
redjohn1 62.210.92.11 Bandwidth 124000 France
Unnamed 46.105.100.149 Bandwidth 121000 France
theblazehenTor 188.138.17.37 Bandwidth 119000 France
splitDNA 62.210.82.44 Bandwidth 116000 France
radia2 91.121.230.212 Bandwidth 115000 France
ArachnideFR5 62.210.206.25 Bandwidth 115000 France
quadhead 148.251.190.229 Bandwidth 111000 Germany
Or a list of relay nodes in my home country:
sudo egrep ^"r |^w " /var/lib/tor/cached-microdesc-consensus | paste -d " " - - \
| sed "s/Unmeasured=. //" | \
awk ' { printf("%s %s %s ", $2, $6, $10 ); system("geoiplookup " $6 ); } ' | \
cut -f1,2,3,8- -d" " | sed "s/=/ /" | grep Portugal | sort -k4 -n -r
Output:
Laika 51.254.164.50 Bandwidth 47300 Portugal
freja 194.88.143.66 Bandwidth 15400 Portugal
cserhalmi 188.93.234.203 Bandwidth 1870 Portugal
Eleutherius 85.246.243.40 Bandwidth 1400 Portugal
luster 94.126.170.165 Bandwidth 1390 Portugal
undercity 178.166.97.51 Bandwidth 1180 Portugal
helper123 85.245.103.222 Bandwidth 1060 Portugal
Pi 94.60.255.42 Bandwidth 271 Portugal
TheSpy 85.240.255.230 Bandwidth 142 Portugal
MADNET00 89.153.104.243 Bandwidth 78 Portugal
MADNET01 82.155.67.190 Bandwidth 14 Portugal
By the way, bandwidth in the Tor server/client is by default defined in KB.
How to get tor relay points through the Linux command line?
1,489,055,014,000
Consider this directory (and file) structure: mkdir testone mkdir testtwo mkdir testone/.svn mkdir testtwo/.git touch testone/fileA touch testone/fileB touch testone/fileC touch testone/.svn/fileA1 touch testone/.svn/fileB1 touch testone/.svn/fileC1 touch testtwo/fileD touch testtwo/fileE touch testtwo/fileF touch testtwo/.git/fileD1 touch testtwo/.git/fileE1 touch testtwo/.git/fileF1 I would like to print/find all files which are in these two directories, but excluding those in the subdirectories .git and/or .svn. If I do this: find test* ... then all the files get dumped regardless. If I do this (as per, say, How to exclude/ignore hidden files and directories in a wildcard-embedded “find” search?): $ find test* -path '.svn' -o -prune testone testtwo $ find test* -path '*/.svn/*' -o -prune testone testtwo ... then I get only the top-level directories dumped, and no filenames. Is it possible to use find alone to perform a search/listing like this, without piping into grep (i.e. doing a find for all files, then: find test* | grep -v '\.svn' | grep -v '\.git'; which would also output the top-level directory names, which I don't need)?
Your find commands don't say what to do if the given path is not matched. If you want to exclude everything that starts with a dot, and print the rest, try: find test* -path '*/.*' -prune -o -print so it'll prune anything that matches that path, and print anything that doesn't. Example output: testone testone/fileC testone/fileB testone/fileA testtwo testtwo/fileE testtwo/fileF testtwo/fileD If you want to specifically exclude just .svn and .git but not other things that start with a dot, you can do: find test* \( -path '*/.svn' -o -path '*/.git' \) -prune -o -print which for this example produces the same output. If you want to exclude the top level directories you can add -mindepth 1, like find test* -mindepth 1 -path '*/.*' -prune -o -print which gives testone/fileC testone/fileB testone/fileA testtwo/fileE testtwo/fileF testtwo/fileD
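Recreating (a trimmed version of) the question's tree verifies the prune, with the output sorted so its order is stable:

```shell
d=$(mktemp -d); cd "$d"
mkdir -p testone/.svn testtwo/.git
touch testone/fileA testone/.svn/fileA1 testtwo/fileD testtwo/.git/fileD1
find test* -mindepth 1 -path '*/.*' -prune -o -print | sort
# testone/fileA
# testtwo/fileD
```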
Find files in globbed directories excluding some subpaths
1,489,055,014,000
I'm running a program called gatk-picard.sh and it prints out its running history/log (the INFO rows below). The program will take about 20 hours to finish, and I want to leave it running when I leave my office and check the running history from home. How can I save these history lines to a file automatically after it's done? What I have tried was $ gatk-picard.sh > log but it doesn't seem to work. $ ./gatk-picard.sh INFO 16:08:50,858 HelpFormatter - -------------------------------------------------------------------------------- INFO 16:08:50,861 HelpFormatter - The Genome Analysis Toolkit (GATK) v3.6-0-g89b7209, Compiled 2016/06/01 22:27:29 INFO 16:08:50,861 HelpFormatter - Copyright (c) 2010-2016 The Broad Institute INFO 16:08:50,862 HelpFormatter - For support and documentation go to https://www.broadinstitute.org/gatk INFO 16:08:50,862 HelpFormatter - [Fri Sep 16 16:08:50 EDT 2016] Executing on Linux 3.13.0-95-generic amd64 INFO 16:08:50,862 HelpFormatter - Java HotSpot(TM) 64-Bit Server VM 1.8.0_102-b14 JdkDeflater
It's probably writing to stderr (file descriptor 2) instead of stdout (file descriptor 1, the default). > filename (which can also be written as 1> filename) redirects stdout to a file. You can either send stderr to the same file as stdout, like this: ./gatk-picard.sh > log 2>&1 (note the order: > log must come before 2>&1, otherwise stderr still goes to the terminal), or you can write stderr to its own file, like this: ./gatk-picard.sh 2>log_err >log_out
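Since the run takes about 20 hours, it is also worth detaching it from the terminal with nohup; a sketch with a stand-in script (gatk-picard.sh is replaced by a dummy here so the pattern can be tried safely):

```shell
cd "$(mktemp -d)"
# Stand-in for the long-running tool:
printf '#!/bin/sh\necho "INFO  working"\necho "WARN  on stderr" >&2\n' > job.sh
chmod +x job.sh
# '> run.log 2>&1' must come in this order; nohup keeps the job
# alive after logout, and '&' returns the prompt immediately:
nohup ./job.sh > run.log 2>&1 &
wait
cat run.log        # both the INFO and the WARN line
```

Later, from any session, `tail -f run.log` follows the progress.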
How to save a program's running history to a file?
1,489,055,014,000
For the encrypted, base64-encoded SHA-X strings, what command can decrypt one back to the original string? Thanks.
From the linked post, your original string was generated by a method such as echo -n foo | openssl dgst -binary -sha1 | openssl base64 What this generates is a digest, with SHA1 being the method of calculating the digest. In this situation there is insufficient data to reconstruct the original string. This digest is a checksum of the original string and can be used for validation; to verify a message hasn't been tampered with. So if you have a file xyzzy that contains your message you can run cat xyzzy | openssl dgst -binary -sha1 | openssl base64 If the result is the same string as you started with then you can be confident it hasn't been modified. The best you can do is remove the base64 part to get the binary digest: echo $base64string | openssl base64 -d but this is not the original message, just the checksum. The original message is not reconstructable from the digest.
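The verification workflow in practice, using coreutils sha1sum (hex output rather than the base64 the openssl pipeline emits; the idea is identical):

```shell
f=$(mktemp)
printf 'foo' > "$f"
sha1sum "$f" > "$f.sha1"     # record the digest
sha1sum -c "$f.sha1"         # prints "...: OK" while content matches
printf 'tampered' > "$f"
sha1sum -c "$f.sha1" || echo "digest mismatch detected"
```

The digest only ever answers "is this still the same content?"; it cannot be run backwards to recover the message.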
How can I decrypt back a base64 encoded shaX binary string?
1,489,055,014,000
How to force Fedora command-line to give me proposals for installation of package when a missing command is typed? Now when I type git the command-line tells me : -bash: git: command not found Before it gave me proposal (something like this): Command not found. Install package 'git' to provide command 'git'? [N/y] I made a clean installation of Fedora 24 and obviously this feature is missing. I tried to dnf install command-not-found but it couldn't find the package. What should I do to bring this feature back?
You have to install PackageKit-command-not-found package: dnf install PackageKit-command-not-found It is fine to check its config /etc/PackageKit/CommandNotFound.conf for some details (it's well documented), but default config should be enough.
Fedora command-line installation proposals [duplicate]
1,489,055,014,000
As dhcpd negotiates it prints its output on the login prompt, which clutters (messes up, wrangles, uglifies, writes over, obscures [synonyms for googlers]) the console login prompt. How to inhibit dhcpd from outputting over the console login prompt? Running Void Linux with runit, /etc/sv/dhcpd/run looks like this: #!/bin/sh [ -r conf ] && . ./conf exec dhcpcd -B ${OPTS:=-M} 1>&2 /etc/sv/dhcpd/conf is empty.
Duncaen on #voidlinux at freenode gave me this link. Apparently this chaotic outputting comes from dmesg, not dhcpcd. Solution is to add dmesg -n 1 to /etc/rc.local.
How to inhibit dhcpd from outputting over the console login prompt?
1,489,055,014,000
This command will open firefox in the private browser mode correctly firefox -private & However this command is only opening firefox in regular browser mode. firefox -private& What is the difference between these two commands?
The space is actually irrelevant. The shell parses the command in exactly the same way. The difference is whether Firefox is already running or not. It appears that the -private option only works when starting Firefox. If Firefox is already running, firefox -private opens a non-private window in the existing Firefox instance.
firefox -private sometimes opens a non-private window
1,489,055,014,000
I run two separate instances of vifm on my machine: $ vifm --server-list documents photos In one I'm organising documents and in the other photos. Sometimes I'm inside a third shell and would like to give commands to one of the vifm instances. Had I only one instance I would do: $ vifm --remote -c 'normal p' But that does not allow me to select the instance I'm giving that command to. The first instance (in asciibetical order, from what I tested) is always picked to run the command. In other words, I cannot send commands to the photos instance. How can I send a command to the photos instance?
You need to specify an additional argument you already know about (you used it to name your instances): $ vifm --help | grep -A1 server-name vifm --server-name <name> name of target or this instance. Note this part: name of target ... instance. In your case the final command will look like the following: $ vifm --server-name photos --remote -c 'normal p' P.S. The option name is a bit confusing, but it matches the corresponding option of Vim.
How do I select which remote vifm instance will run a command?
1,489,055,014,000
I need to upgrade some specifics packages from sid on debian jessie stable with their dependencies. By adding the following repo : deb http://ftp.fr.debian.org/debian/ sid main deb-src http://ftp.fr.debian.org/debian/ sid main deb http://ftp.fr.debian.org/debian/ sid main contrib non-free deb-src http://ftp.fr.debian.org/debian/ sid main contrib non-free to my sources.list, all the existing packages will be upgraded and can damage my system. How to install a specific package from "sid" on debian jessie "stable" without breaking my existing system ? Edit How to set the pin-priority to upgrade ? and how can it be used to downgrade a specific package when something went wrong ?
"Going hybrid" with Debian versions is not always worthwhile, (or safe, or reliable, etc.), but sometimes it works. The hybrid version's best case is when a package from testing or unstable makes only trivial changes, (in perpetuity even), and everything works smoothly thereafter. Possibly it's already been packaged in Debian Backports, or some repository or archive like it. Failing that, provided the package isn't too complex, one can search pkgs.org in hopes of finding something close enough. The alien package sometimes helps. One could go upstream and attempt to compile it, (and package it), but if it's a thorny package this might require more time than one has to spare, (which is why we use precompiled packages in the first place). The worst case scenario would be a package from unstable that's too needy or quarrelsome, and has too many conflicts with stable, wiping out something more important than the new package.
How to upgrade/downgrade a specific package from "sid" on debian jessie "stable"? [duplicate]
1,489,055,014,000
I am trying to lookup the WhoIs entry for the following domain: anorien.csc.warwick.ac.uk However, while typing the URL directly into the browser displays a web-page, when I type: whois anorien.csc.warwick.ac.uk into the Linux command-line I get the following error: No such domain anorien.csc.warwick.ac.uk How is this possible?
Valid WHOIS registrations exist for licensed domains only. However, if you own the primary domain, you could set up your own WHOIS server that could be queried for subdomains that you register under it. Code: whois -h whois.yourdomain.com subdomain.yourdomain.com It is not difficult to set up a whois server if you want to maintain one yourself.
Website is active but domain name not showing on WhoIs lookup from Linux command-line
1,489,055,014,000
I am trying to make a program that will copy file1 into file2 the following way: cp -i -p file1 file2 Now I call my executable copy and so by calling copy file1 file2 It will do the same thing like the first command (-i and -p). I was able to do this using execl char const *copy[] = {"/bin/cp","cp","-p","-i",0}; execl(copy[0],copy[1],copy[2],copy[3],argv[1],argv[2],copy[4]); However, I want to do it now with execv I saw the man page of exec* functions execl(const char *path, const char *arg, ...); execv(const char *path, char *const argv[]); and so the first argument seems to be the same however, How the second argument for execv is char *const argv[] what do I need to change in the execv function to get the same result ? I have my main function arguments like the following: main(int argc,char * argv[])
Change your copy array, and the function call. The following is a minimal example: #include <unistd.h> int main(int argc, char *argv[]) { if (argc < 3) return 1; char *const args[] = {"cp", "-p", "-i", argv[1], argv[2], NULL}; execv("/bin/cp", args); return 1; /* reached only if execv fails */ }
How can I succesfully call the execv function? [closed]
1,489,055,014,000
If I use grep -q in combination with -v to return 0 if there are no matches or 1 if there is a match, it works as long as input is a single line: $ echo 'abc' | grep -q -v a; echo $? 1 $ echo 'abc' | grep -q -v x; echo $? 0 But if the input is multi-line, grep will always return 0: $ echo -e 'a\nb\nc' | grep -q -v a; echo $? 0 $ echo -e 'a\nb\nc' | grep -q -v x; echo $? 0 What is the reason for such behavior? I know that the correct way in this case would be to use ! grep -q instead of grep -q -v, but I still want to know the answer.
Per grep manual: -v, --invert-match Selected lines are those not matching any of the specified patterns. If you supply only one line abc and tell grep to select only lines not matching a you get an empty output and return code equal to 1. If you supply three lines a, b, and c and tell grep to select only those not matching a you get b and c in output and 0 as a return code. $ echo -e 'a\nb\nc' | grep -v a; echo $? b c 0
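Reproducing the exit codes from the question makes the difference concrete: grep -q -v succeeds as soon as any one line fails to match, so it cannot test "no line matches" on multi-line input, while negating a plain grep -q can.

```shell
printf 'a\nb\nc\n' | grep -q -v a
qv_status=$?           # 0, because b and c do not match "a"

! printf 'a\nb\nc\n' | grep -q a
neg_present=$?         # 1: "a" IS present, so the negated test fails

! printf 'a\nb\nc\n' | grep -q x
neg_absent=$?          # 0: "x" is absent
```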
Why "grep -q -v" only works with single line input?
1,489,055,014,000
I am trying to use an argument of the script to find other files. The problem is that when i give the script the argument x.* in the command line, it is transformed into x.sh. Any ideas how i can get the x.* inside my script? The script in comand line : ./script.sh x.*. If i try to print $1 it outputs x.sh.
You can't do it from the inside of your script. * has to be escaped, otherwise the shell will expand it to matching filenames (in your case x. followed by anything, as * is a glob operator that matches any string in a filename). You can do it in, basically, three ways - enclose your string in single or double quotes: ./script.sh "x.*" ./script.sh 'x.*' Or prefix the problematic character with a backslash: ./script.sh x.\* As Jeff mentioned in the comments (thanks), you can also disable glob expansion with: set -o noglob You can turn it back on with set +o noglob if you want to.
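A throwaway demonstration of what the shell does before the script ever runs (filenames here are made up):

```shell
dir=$(mktemp -d)
touch "$dir/x.sh" "$dir/x.txt"

show_first() { echo "$1"; }            # stands in for printing $1 in a script

unquoted=$(show_first "$dir"/x.*)      # glob expands; $1 is only the first match
quoted=$(show_first "$dir/x.*")        # quoted: the literal string survives
```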
How to prevent bash from transforming arguments?
1,489,055,014,000
I am running TCSH and I would like to update my prompt every time I run a command. I think can currently do that via backticks. set tmpstr = `git status --untracked-files=no --porcelain` set prompt="%{\e[35;1m%} $tmpstr %{\e[32;1m%}%n%{\e[37m%}@%{\e[33m%}%m%{\e[37m%}:%{\e[36m%}%~%{\e[37m%}"\$"%{\e[0m%} " But I really don't want to have the full list of files every time. So just saying whether the GIT directory is clean is enough. set tmpstr1 = `git status --untracked-files=no --porcelain` if ("$tmpstr" == "") then set gitstr = 'Git: Clean' else set gitstr = 'Git: Uncommitted GIT ' endif set prompt="%{\e[35;1m%} \$gitstr %{\e[32;1m%}%n%{\e[37m%}@%{\e[33m%}%m%{\e[37m%}:%{\e[36m%}%~%{\e[37m%}"\$"%{\e[0m%} " But the gitstr won't be updated, as it isn't a command. Any one got any other ideas? Or any magical ways of calling a full if statement each time I run a command?
I ended up using precmd I put alias precmd 'source ~/.tcsh/precmd.tcsh' into my .cshrc file and moved my prompt set into that file. Source of the .tcsh set tmpstr = `(git status --untracked-files=no --porcelain >! ~/out ) >&! ~/out1` #echo $tmpstr #for debugging if !( -s ~/out ) then if !( -s ~/out1 ) then set gitstr = "Git: Clean" set prompt="%{\e[35;1m%} \$gitstr %{\e[32;1m%}%n%{\e[37m%}@%{\e[33m%}%m%{\e[37m%}:%{\e[36m%}%~%{\e[37m%}"\$"%{\e[0m%} " else #echo "not in GIT" set prompt="%{\e[35;1m%} %{\e[32;1m%}%n%{\e[37m%}@%{\e[33m%}%m%{\e[37m%}:%{\e[36m%}%~%{\e[37m%}"\$"%{\e[0m%} " endif else set gitstr = "Git: Uncommitted GIT " set prompt="%{\e[35;1m%} \$gitstr %{\e[32;1m%}%n%{\e[37m%}@%{\e[33m%}%m%{\e[37m%}:%{\e[36m%}%~%{\e[37m%}"\$"%{\e[0m%} " endif That allowed me to check when I am in get, and report the status back to the cmd line. When out of the GIT folder it just doesn't report GIT status. The shenanigans going on up in the tmpstr is to remove the stderror from the konsole.
Updating a git variable in the Shell prompt on every command
1,489,055,014,000
I often run a series of poor-man daemons when I am sshing into my headless server. One to monitor the beer in my kegs and one to monitor the server itself from a web-browser. I do this by running both in screen: screen -d -m psdash screen -d -m kegbot runserver xxx.xx.x.xxx:8008 Both commands outside of screen tend to dominate the stdout in such a way it makes the rest of the ssh session impossible to use. And, they also terminate with the ssh session, so I have found screen to be the best bet. My question is this: is there a way in .profile or something else to have these commands run on login, but NOT run if they are already running? This last bit has me out of my depth.
I'd suggest writing the PID (in bash that's in $! after starting a background process) of each of the two processes you're starting (psdash and kegbot) into a file. You can then use ps --pid $(cat your.pid) | tr -s ' ' | sed 1d | cut -d' ' -f4 to see if the process is actually running. Just as a side note, you should always check whether the PID inside a .pid file is valid before acting upon it! It might just happen that whatever mechanism you use to remove the .pid file when your programs stop (normally that would either be part of the program itself, or a shell-script wrapper) fails, and there's a "wrong" PID in the .pid file. If the .pid file survives a reboot, the worst-case scenario would be a PID of some other process that you'd act upon. OK, here is a possible solution, using kegbot as an example: First you need a wrapper script. For simplicity's sake, let's assume everything's happening in your $HOME. So, a simple wrapper (run_kegbot.sh) would be: #!/bin/zsh kegbot runserver xxx.xx.x.xxx:8008 echo $! > kegbot.pid wait rm -rf kegbot.pid This is one solution, if kegbot forks into the background, etc. but the PID is valid after it forks. I don't know if kegbot is able to handle PID files itself, which would alleviate you having to handle PID files yourself. Or, maybe you can make kegbot not fork into the background, and then use the shell itself (by adding a & to the end of line 2) to write the PID file and wait for it to finish. Anyway, once you get the PID file malarkey done, you need something like this in your .profile: [ -e kegbot.pid ] && { PID=$(cat kegbot.pid) COMM=$(ps -p $PID -o comm=) [ "x$COMM" != "xkegbot" ] && rm -f kegbot.pid } [ -e kegbot.pid ] || screen -d -m ./run_kegbot.sh Again, this is just one solution to the problem, but the general idea is to use the PID of the process to check whether it's running or not, and the above is one way of doing that.
Some daemons keep their PID files in /var/run/, if kegbot and/or psdash do that, you obviously don't need a wrapper script, etc. since you can then use those PID files directly. You definitely need to check, whether the PID inside a PID file is actually the process it belongs to. A rogue reboot, and/or the daemon crashing may leave a zombified PID file, etc. That's what the first test of the PID file above is for.
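A runnable sketch of that validity check, with a disposable sleep standing in for kegbot (the names are purely illustrative, not kegbot's real ones):

```shell
pidfile=$(mktemp)
sleep 30 &                 # throwaway "daemon"
echo $! > "$pidfile"

PID=$(cat "$pidfile")
COMM=$(ps -p "$PID" -o comm= | tr -d ' ')   # trim any padding ps may add

if [ "$COMM" = "sleep" ]; then
    status=running
else
    status=stale           # the .pid file exists but points elsewhere
fi

kill "$PID"
```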
How to run commands on user login -- that are NOT already running
1,489,055,014,000
I'm trying to run my Java code remotely using SSH. I need to do this with qsub, so I've created a short bash script that compiles my Java files and then runs the main one. Here's the thing: My code (when run without qsub) prompts the user for a file name and a user name. When run with qsub, it doesn't do this but the job completes. How would I (still using qsub) get this interactivity back? My code will print a bunch of results when run without qsub so I'd like that as well. Thanks for the help!
qsub submits your java program to a batch queuing system, and eventually it runs on one of the compute nodes in the cluster - how do you expect to be able to interactively input data in that situation? There is no tty, screen, or keyboard. You need to modify your program to take command-line arguments, and give the filename and username on the command line when you use qsub to submit the job.
Running interactive java code with qsub
1,489,055,014,000
Let's say I have a directory ~/mydir that has a whole bunch of text files in it. I want to search for searchterm in this directory and then view the file that has the most matches. How can I do this using only one command?
Putting the following line in a script will do it: grep -c "$1" ~/mydir/* | grep -v ':0' | sort -t: -k2 -r -n | head -1 | sed 's/:.*//' | xargs less Then just call ./myscript searchterm If you want to search recursively, change -c to -cr in the first grep command. The parts of this pipeline, in order: grep -c "$1" ~/mydir/* # Outputs a list of the files in ~/mydir/ with :<count> # appended to each, where <count> is the number of # matches of the $1 pattern in the file. grep -v ':0' # Removes the files that have 0 results from # the list. sort -t: -k2 -r -n # Sorts the list in reverse numerical order # (highest numbers first) based on the # second ':'-delimited field head -1 # Extracts only the first result # (the most matches) sed 's/:.*//' # Removes the trailing ':<count>' as we're # done with it xargs less # Passes the resulting filename as # an argument to less If there is no match at all, less will open empty.
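To see the pipeline choose correctly without invoking the interactive less at the end, here is a dry run on scratch files (dropping the final xargs less stage):

```shell
dir=$(mktemp -d)
printf 'x\nx\nx\n' > "$dir/three.txt"   # 3 matches
printf 'x\n'       > "$dir/one.txt"     # 1 match
printf 'y\n'       > "$dir/none.txt"    # 0 matches -> removed by grep -v ':0'

winner=$(grep -c x "$dir"/* | grep -v ':0' | sort -t: -k2 -r -n | head -1 | sed 's/:.*//')
```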
How can I open the file with the most matches for a given regex?
1,489,055,014,000
I'm running Arch Linux in a virtual machine and it's getting REALLY annoying that, since some things require that I not be root, I need to sudo most of my commands. It would be nice if I could do something to detect a command needing root, and automatically run/re-run it with sudo giving it root powers. Can someone help me out with this? Thanks. Also, I know this isn't secure, but it's just a VM to experiment in. Also, I have NOT found any automatic solutions. I'm looking for an automatic solution; the question this is supposedly a duplicate of does not have an answer that is automatic.
When you need to perform maintainence, upgrades, configuration and other tasks that require root, run sudo -i to get a root login shell (i.e. with all env vars, aliases, functions etc as if root had actually logged in). Run whatever commands you need, create/edit/delete/move files, all as root - without needing to preface each command with sudo. Type exit or Ctrl-D to exit the root shell and return to your user shell when you have finished.
Automatically rerun something with sudo if root needed [duplicate]
1,489,055,014,000
I have strings similar to the following: *unknown*\*unknown* (8) hello\morning (3) I'm trying to match just morning or *unknown\*. So far I have tried: [^\\]+$ But that matches from backslash to end of line which isn't what I want.
With grep: grep -oP '(?<=\\)[^\\ ]+' file -o prints only the matching pattern. The (?<=...) is a positive lookbehind which matches the backslash \\, but it is not part of the matching pattern. The second pattern [^\\ ]+ follows the backslash and matches any run of characters containing no backslashes and no spaces. The output: *unknown* morning
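A quick check of the pattern against the sample lines from the question (this assumes GNU grep built with PCRE support, which -P requires):

```shell
# The lookbehind requires a backslash before the match but excludes it
# from the output; [^\\ ]+ then stops at the next backslash or space.
m1=$(printf '%s\n' '*unknown*\*unknown* (8)' | grep -oP '(?<=\\)[^\\ ]+')
m2=$(printf '%s\n' 'hello\morning (3)' | grep -oP '(?<=\\)[^\\ ]+')
```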
Match everything after backslash and before space
1,489,055,014,000
I would like to download several images (from a server with a self-signed certificate) and view them all with feh. Is this possible with a single command? Something like curl -sk -b path_to_session_token url1 url2 | feh - - except this only feeds feh one of the files. Preferably without saving to a file and deleting it afterwards.
feh will not be able to distinguish between several images sent on its standard input, unless some kind of "protocol" is implemented. If you don't want to use temporary files, you could maybe use a loop: for url in 'url1' 'url2'; do curl -skb token "$url" | feh - done This will download and display each image one after the other. If you'd rather have several viewers opened at the "same time", add a little ampersand to it: for url in 'url1' 'url2'; do (curl -skb token "$url" | feh - &) done Note that you can put everything on a single line, or maybe define a function: $ for url in 'url1' 'url2'; do curl -skb token "$url" | feh -; done $ for url in 'url1' 'url2'; do (curl -skb token "$url" | feh - &); done function feh_urls { for url; do curl -skb token "$url" | feh - done } $ feh_urls 'url1' 'url2' Be careful with your quotes, as there may be some annoying spaces in your URLs or paths (... and I hope I didn't make such a mistake in the above examples...). If you go for a function, maybe add it to your .bashrc so you can use it in your shell without redefining it manually everytime. For the sake of completeness, here is a little script (which could be made a function of course) involving some temporary files, saving a few curl and feh processes on the way. I am using wget instead of curl because depending on your version, you might not be able to grab remote filenames properly (you could give your images different names of course). #!/bin/bash img_dir=$(mktemp -d) cd $img_dir wget -q --no-check-certificate --load-cookie token "$@" feh $img_dir/* rm -r $img_dir Usage: $ ./feh_urls.sh 'url1' 'url2' Note that the advantage here is that feh doesn't get spawned several times, and can load all the images at once (use the arrow keys to browse them). You might also be interested in this page if you want more information about cookie management with wget, or this page for details about the SSL/TLS options.
Pipe multiple files from curl to other program
1,489,055,014,000
I want to run a command which searches for specific files, removes them, and shows how many have already been deleted. So far I was using: find -type f -name "*.cache" -exec rm {} \; but since I have over 400k files, I'd like to know how many have already been deleted, like: 1 - file1234.cache 2 - file121342.cache 3 - file15467.cache 4 - file1678534.cache
The following will satisfy you: find -type f -name "*.cache" -exec rm -v {} + | nl
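The same command exercised on a scratch directory (note the "removed ..." wording of rm -v is GNU coreutils-specific):

```shell
dir=$(mktemp -d)
touch "$dir/a.cache" "$dir/b.cache" "$dir/keep.txt"

# rm -v prints one line per deleted file; nl numbers them as they go by.
deleted=$(find "$dir" -type f -name "*.cache" -exec rm -v {} + | nl)
left=$(find "$dir" -type f | wc -l | tr -d ' ')
```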
How to Find, remove and show counter via SSH?
1,489,055,014,000
Here is the situation. I often have to read chunks of information from large plain text files. I am planning to use Readability extension for formating the content to make it readable on screen.
You can use xclip to paste the contents to a temporary file and then open that file with firefox. For example: temp=$(mktemp); xclip -o > $temp; firefox "$temp" mktemp generates a temporary file in /tmp, xclip -o pastes the contents of the clipboard to that file, and at least firefox open that file as if it were a website.
Can I pipe clipboard content to browser for viewing?
1,437,404,310,000
I just installed AWS CLI on a few servers with Ubuntu 14 on. The last server I installed it on, cannot access the AWS CLI from the terminal and run commands. This does not work: aws --version You need to do this: /home/user/.local/lib/aws/bin/aws --version How can I make aws --version work in the same manner it does on all the other servers?
The AWS CLI's bin directory needed to be added to the PATH variable. cd ~ vi .profile Append :/home/user/.local/lib/aws/bin/ to the PATH assignment, so that the block reads: # set PATH so it includes user's private bin if it exists if [ -d "$HOME/bin" ] ; then PATH="$HOME/bin:$PATH:/home/user/.local/lib/aws/bin/" fi After that, aws --version works.
Application [AWS CLI] Command shortcuts are not working globally
1,437,404,310,000
Background I have a text file, named blood_conc.txt as shown: 0, 0, 0, 0, 0, 0, 0, 1.32006590271e-05, 1.990014992001e-05, 1.504668143682e-05, 2.176900659261e-06, 7.673488970859e-06, 2.169217049562e-05, 4.343183585883e-05, 0, 0, 0, 0, 0, 0, 0, 2.143804950099e-05, 0, 0, 1.849919603625e-06, 0, 0, 0, 0, 0, 0, 0, 4.123812986073e-07, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.0001365177, 7.81009e-06, 2.695291e-07, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2.1799e-05, 1.82574e-05, 1.68109e-05, 2.722782e-05, 5.355517e-05, 8.196468e-05, 7.177729e-05, 7.863765e-05, 5.774439e-05, 1.329413e-08, 0, 0, 0, 4.320018e-06, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.0003335425, 0, 0, 0, 0, 0, 0, 0, 0, 6.061237e-05, 6.36887e-05, 2.250928e-05, 0, 0, 7.327124e-07, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, I would like to count number of 0 between line 3 to line 8 inclusively. i.e. 2.143804950099e-05, 0, 0, 1.849919603625e-06, 0, 0, 0, 0, 0, 0, 0, 4.123812986073e-07, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.0001365177, 7.81009e-06, 2.695291e-07, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2.1799e-05, 1.82574e-05, 1.68109e-05, 2.722782e-05, 5.355517e-05, 8.196468e-05, 7.177729e-05, 7.863765e-05, 5.774439e-05, and the frequency of 0 should be 54. I want a simple command line to finish two tasks: Task 1: Count number of 0 in the text from Line 3 to Line 8. Task 2: Count number of values lying between the interval,says , (2.452555e-05, 0.0032784). My thought I have do some searching in the webs and posts. I find awk and grep -c may help. To focus the range of lines, I guess I can use awk 'NR==3, NR==8' blood_conc.txt. However, I do not know how to proceed by using grep or perl. I want a simple command line which just return me the frequency.
You can try this with awk: awk -F"," 'NR == 3, NR == 8 { for (i = 1; i <= NF; i++) { if ($i == 0) { cnt++; } if ($i >= 2.452555e-05 && $i <= 0.0032784) { cnt1++; } } } END { print cnt, cnt1; }' file
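The same awk program run on a four-line stand-in for blood_conc.txt (so the NR range only covers lines 3 and 4, which hold four zeros and one value inside the interval):

```shell
counts=$(printf '9, 9\n9, 9\n0, 0, 1e-04\n0, 0, 5\n' |
    awk -F"," 'NR == 3, NR == 8 {
        for (i = 1; i <= NF; i++) {
            if ($i == 0) cnt++;
            if ($i >= 2.452555e-05 && $i <= 0.0032784) cnt1++;
        }
    } END { print cnt, cnt1; }')
```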
Count frequency of specific numbers in a text file of scientific notations
1,437,404,310,000
I want to write the process id and command of all processes with some name and from some user (for example root and init). What should I do? ps -f -u root -C init or ps -f -U root -C init writes more then just init the process.
If you only want the process ids, why not use pgrep: pgrep -u root init Or: pgrep -U root init Which switch you use (-u/-U) depends on what you want. The difference is, -u matches the effective uid and -U the real uid: The effective uid describes the user whose file access permissions are used by the process. The real uid is from the user who created the process. Edit: to list the name too, add -l $ pgrep -l -u root init 1 init
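A self-contained pgrep round trip, using a disposable sleep rather than init so no root is needed:

```shell
sleep 40 &
spid=$!

# Match by exact process name (-x) restricted to the current user (-u);
# the PID we just started must appear in the results.
hits=$(pgrep -x -u "$(id -un)" sleep | grep -cx "$spid")

kill "$spid"
```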
process id and command
1,437,404,310,000
This seems like a basic question and there are variations of it. I have a ONE-COLUMN html file (let us call it status.html). I would like to send this file which has a few images as the BODY of an email. I would like to do this using the Linux command line, but I am unsure of how to proceed. I do not know how the images will get gobbled up and formatted accordingly (i.e. are they base64 or attachments). The key part is that I want the images in the HTML file to be in the body of the email ... Is there a way or a Linux tool that does just this?
"-a" Attach a file to your message using MIME. When attaching single or multiple files, separating filenames and recipient addresses with "--" is mandatory, e.g. mutt -a image.jpg -- addr1 or mutt -a img.jpg *.png -- addr1 addr2. The -a option must be placed at the end of command line options. –
Sending a HTML file (with images) as email body using command line
1,437,404,310,000
How do I run diagnostics against Asterisk? Asterisk is running on tleilax; and doge is on the same network (My network topology isn't optimal). Specifically, I want to do something like: sipp [email protected] except I'm not sure what flags to send. How do I send "hello world" to [email protected]? (Note that this is all on my LAN, and not accessible through the internet.) sip.conf: tleilax:~ # tleilax:~ # cat /etc/asterisk/sip.conf [general] context=trunkinbound ; Default context for incoming calls allowguest=no ; Allow or reject guest calls (default is yes) allowoverlap=no ; Disable overlap dialing support. (Default is yes) ;allowtransfer=no ; Disable all transfers (unless enabled in peers or users) ;realm=mydomain.tld ; Realm for digest authentication bindport=5060 ; UDP Port to bind to (SIP standard port is 5060) bindaddr=0.0.0.0 ; IP address to bind to (0.0.0.0 binds to all) srvlookup=yes ; Enable DNS SRV lookups on outbound calls ;domain=mydomain.tld ; Set default domain for this host ;pedantic=yes ; Enable checking of tags in headers, ;tos_sip=cs3 ; Sets TOS for SIP packets. ;tos_audio=ef ; Sets TOS for RTP audio packets. ;tos_video=af41 ; Sets TOS for RTP video packets. 
;maxexpiry=3600 ; Maximum allowed time of incoming registrations ;minexpiry=60 ; Minimum length of registrations/subscriptions (default 60) ;defaultexpiry=120 ; Default length of incoming/outgoing registration ;t1min=100 ; Minimum roundtrip time for messages to monitored hosts ;notifymimetype=text/plain ; Allow overriding of mime type in MWI NOTIFY ;checkmwi=10 ; Default time between mailbox checks for peers ;buggymwi=no ; Cisco SIP firmware doesn't support the MWI RFC ;vmexten=voicemail ; dialplan extension to reach mailbox sets the disallow=all ; First disallow all codecs allow=ulaw ; Allow codecs in order of preference allow=gsm mohinterpret=default mohsuggest=default language=en ; Default language setting for all users/peers relaxdtmf=yes ; Relax dtmf handling trustrpid = no ; If Remote-Party-ID should be trusted sendrpid = yes ; If Remote-Party-ID should be sent progressinband=no ; If we should generate in-band ringing always ;useragent=Asterisk PBX ; Allows you to change the user agent string ;promiscredir = no ; If yes, allows 302 or REDIR to non-local SIP address ;usereqphone = no ; If yes, ";user=phone" is added to uri that contains dtmfmode = rfc2833 ; Set default dtmfmode for sending DTMF. Default: rfc2833 ;compactheaders = yes ; send compact sip headers. videosupport=no ; Turn on support for SIP video. 
You need to turn this on ;maxcallbitrate=384 ; Maximum bitrate for video calls (default 384 kb/s) callevents=yes ; generate manager events when sip ua ;alwaysauthreject = yes ; When an incoming INVITE or REGISTER is to be rejected, ;g726nonstandard = yes ; If the peer negotiates G726-32 audio, use AAL2 packing ;matchexterniplocally = yes ; Only substitute the externip or externhost setting if it matches ;regcontext=sipregistrations rtptimeout=60 ; Terminate call if 60 seconds of no RTP or RTCP activity ;rtpholdtimeout=300 ; Terminate call if 300 seconds of no RTP or RTCP activity ;rtpkeepalive=<secs> ; Send keepalives in the RTP stream to keep NAT open ;sipdebug = yes ; Turn on SIP debugging by default, from ;recordhistory=yes ; Record SIP history by default ;dumphistory=yes ; Dump SIP history at end of SIP dialogue ;allowsubscribe=no ; Disable support for subscriptions. (Default is yes) ;subscribecontext = default ; Set a specific context for SUBSCRIBE requests notifyringing = yes ; Notify subscriptions on RINGING state (default: no) notifyhold = yes ; Notify subscriptions on HOLD state (default: no) limitonpeers = yes ; Apply call limits on peers only. 
This will improve
;t38pt_udptl = yes              ; Default false
;register => 1234:[email protected]
;registertimeout=20             ; retry registration calls every 20 seconds (default)
;registerattempts=10            ; Number of registration attempts before we give up
externip = 96.48.128.162        ; Address that we're going to put in outbound SIP
;externhost=test.test.com       ; Alternatively you can specify a domain
;externrefresh=10               ; How often to refresh externhost if
localnet=192.168.0.0/255.255.0.0 ; All RFC 1918 addresses are local networks
localnet=10.0.0.0/255.0.0.0     ; Also RFC1918
localnet=172.16.0.0/12          ; Another RFC1918 with CIDR notation
localnet=169.254.0.0/255.255.0.0 ; Zero conf local network
nat=yes                         ; Global NAT settings (Affects all peers and users)
canreinvite=no                  ; Asterisk by default tries to redirect the
;directrtpsetup=yes             ; Enable the new experimental direct RTP setup. This sets up
;rtcachefriends=yes             ; Cache realtime friends by adding them to the internal list
;rtsavesysname=yes              ; Save systemname in realtime database at registration
;rtupdate=yes                   ; Send registry updates to database using realtime? (yes|no)
;rtautoclear=yes                ; Auto-Expire friends created on the fly on the same schedule
;ignoreregexpire=yes            ; Enabling this setting has two functions:
;domain=mydomain.tld,mydomain-incoming
;domain=1.2.3.4                 ; Add IP address as local domain
;allowexternaldomains=no        ; Disable INVITE and REFER to non-local domains
;autodomain=yes                 ; Turn this on to have Asterisk add local host
;fromdomain=mydomain.tld        ; When making outbound SIP INVITEs to
jbenable = yes                  ; Enables the use of a jitterbuffer on the receiving side of a
jbforce = no                    ; Forces the use of a jitterbuffer on the receive side of a SIP
jbmaxsize = 100                 ; Max length of the jitterbuffer in milliseconds.
jbresyncthreshold = 1000        ; Jump in the frame timestamps over which the jitterbuffer is
jbimpl = fixed                  ; Jitterbuffer implementation, used on the receiving side of a SIP
jblog = no                      ; Enables jitterbuffer frame logging. Defaults to "no".
qualify=yes                     ; By default, qualify all peers at 2000ms
limitonpeer = yes               ; enable call limit on a per peer basis, different from limitonpeers
#include sip-vicidial.conf

; register SIP account on remote machine if using SIP trunks
; register => testSIPtrunk:[email protected]:5060
;
; setup account for SIP trunking:
; [SIPtrunk]
; disallow=all
; allow=ulaw
; allow=alaw
; type=friend
; username=testSIPtrunk
; secret=test
; host=10.10.10.16
; dtmfmode=inband
; qualify=1000

tleilax:~ #

sip-vicidial.conf:

tleilax:~ #
tleilax:~ # cat /etc/asterisk/sip-vicidial.conf
; WARNING- THIS FILE IS AUTO-GENERATED BY VICIDIAL, ANY EDITS YOU MAKE WILL BE LOST
[101]
username=101
secret=password
accountcode=101
callerid="" <101>
mailbox=101
context=default
type=friend
host=dynamic

[gs102]
username=gs102
secret=password
accountcode=gs102
callerid="Test Admin Phone" <>
mailbox=102
context=default
type=friend
host=dynamic

; END OF FILE  Last Forced System Reload: 2015-02-20 16:49:28
tleilax:~ #
tleilax:~ #

sipsak local success:

thufir@doge:~$
thufir@doge:~$ sudo sipsak -vv -s sip:345@tleilax -m "hi"
No SRV record: _sip._tcp.tleilax
No SRV record: _sip._udp.tleilax
using A record: tleilax
Max-Forwards set to 0
message received:
SIP/2.0 200 OK
Via: SIP/2.0/UDP 127.0.1.1:59012;branch=z9hG4bK.61911e9a;alias;received=192.168.1.3;rport=59012
From: sip:[email protected]:59012;tag=1c498905
To: sip:345@tleilax;tag=as0e771d06
Call-ID: [email protected]
CSeq: 1 OPTIONS
Server: Asterisk PBX 1.8.29.0-vici
Allow: INVITE, ACK, CANCEL, OPTIONS, BYE, REFER, SUBSCRIBE, NOTIFY, INFO, PUBLISH, MESSAGE
Supported: replaces, timer
Contact: <sip:192.168.1.2:5060>
Accept: application/sdp
Content-Length: 0

** reply received after 0.830 ms **
   SIP/2.0 200 OK
   final received

thufir@doge:~$

sipsak failure; too many hops:

thufir@doge:~$
thufir@doge:~$ sudo sipsak -vv -s sip:[email protected] -m "hi"
No SRV record: _sip._tcp.ekiga.net
No SRV record: _sip._udp.ekiga.net
using A record: ekiga.net
Max-Forwards set to 0
message received:
SIP/2.0 483 Too Many Hops
Via: SIP/2.0/UDP 192.168.1.3:55929;branch=z9hG4bK.3f8863cd;rport=55929;alias;received=96.48.128.162
From: sip:[email protected]:55929;tag=3feca6b3
To: sip:[email protected];tag=c64e1f832a41ec1c1f4e5673ac5b80f6.2949
Call-ID: [email protected]
CSeq: 1 OPTIONS
Server: Kamailio (1.5.3-notls (i386/linux))
Content-Length: 0

** reply received after 155.411 ms **
   SIP/2.0 483 Too Many Hops
   final received

thufir@doge:~$

This is in the context of trying to connect Ekiga, or other GUI softphone, to Asterisk.
You should use the domain from your realm, not ekiga. You can troubleshoot by using:

asterisk -r
sip set debug on
SIPp "hello world" message to Asterisk
1,437,404,310,000
I have a folder working and its structure is something like this:

working/
    100/
        1/
        2/
        3/
    200/
        1/
        2/
        3/
    300/
        1/
        2/
        3/

And each of these 1 2 3 folders has around 1000 files. I want to zip the files under the 1 2 3 folders separately. The zip should not contain their top directories; put the zips wherever the files are. For example, if there are files foo1, foo2 and foo3 under folder 1, then a zip should be created under 1 containing just those foo files.
A simple loop will do the trick. cd working for dir in */*/; do [ -e "$dir/files.zip" ] || # skip directories where the zip already exists ( cd -- "$dir" && zip -r files.zip .) done Note that zip is smart enough to skip the zip file that is being built when recursing in that directory. Some other archiving programs would attempt to stuff the archive being built into the archive that's being built.
How to zip only files under multiple subdirectories?
1,437,404,310,000
I'm looking at the output of w:

ehryk@ArchHP ~> sudo w
 14:12:37 up  4:08,  4 users,  load average: 2.18, 1.93, 1.55
USER     TTY      LOGIN@   IDLE   JCPU   PCPU WHAT
ehryk    tty1     10:04    4:08m 57.70s  0.00s xinit /home/ehryk/.xinitrc -- /etc/X11/xinit/xserverrc :0 vt1 -auth /tmp/serverauth.
ehryk    pts/0    10:04    4:08m  0.04s  0.04s /usr/bin/fish
ehryk    pts/1    10:04    4:06m  0.00s  0.00s bash
ehryk    pts/2    13:04   24:53   6.24s  0.00s x86_64-unknown-linux-gnu-gcc -o conftest -O2 -I/home/ehryk/Projects/openwrt/staging_
ehryk@ArchHP ~>

However, the command exits. How do I keep running w in the same 'area' of the command line window, similar to the way top works, and control the refresh rate?
You can use the watch command to run w repeatedly (at an interval defined by the -n parameter). For example:

watch -n 1 w

will run w every second. The output of w is kept at the top of the terminal window.
How do I keep running a command in the same area of the console window? ("w")
1,437,404,310,000
I can extract a list of patterns using the following command:

fgrep -A 1 -f patternlist.txt filename.fasta

But is there a way I can extract without creating another file (patternlist.txt in this case), using another command's output directly? Such as:

cut -d " " Cell_cycle.txt -f 1 | grep ...???... filename.fasta

EDIT: The Cell_cycle.txt looks like this:

$ cat Cell_cycle_Kegg_pathway
ctg2977_3.g207.t1 K06626
P05_Ctg654_12.g311.t2 K03094
P06_Ctg710_7.g346.t1 K05868

I want to take the first column and extract those sequences from the fasta file.

EDIT 2: I have a list of sequences in UniqueSeq_28Dec2014.fasta:

>ctg1474_1.g69.t1 (first line)
atgaaatgttggtgcagcgccctggcacttctcc...... (second line)
>ctg1475_1.g70.t1 (third line)
atgaaattgcagcgccctggcacttctcctgcag...... (fourth line)

I want to print the first two sequences (lines 1 to 4). However, I do not want to use head -4 UniqueSeq_28Dec2014.fasta, which can also give my output; I want it using process substitution. I tried the following command but it does not seem to work; I just see 4 empty lines:

grep -A 1 -Ff <(grep '>' UniqueSeq_28Dec2014.fasta | head -4) UniqueSeq_28Dec2014.fasta
Use process substitution <(): fgrep -A 1 -f <(cut -d " " -f 1 Cell_cycle.txt) filename.fasta
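To see what <() is doing, here is a small self-contained sketch (file names and contents are made up for illustration): the inner command's output is exposed to grep -f as if it were a file name, so no intermediate pattern file is created.

```shell
# Process substitution demo (bash): feed one command's output to grep -f
# as though it were a pattern file.
printf 'ctg2977_3.g207.t1 K06626\nP05_Ctg654_12.g311.t2 K03094\n' > cc.txt
printf 'ctg2977_3.g207.t1\nother_id\n' > ids.txt
# cut extracts the first column; <() turns that output into a readable "file"
matches=$(grep -Ff <(cut -d ' ' -f 1 cc.txt) ids.txt)
echo "$matches"
```

Note that process substitution is a bash/ksh/zsh feature; it is not available in plain POSIX sh.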
Extracting list of patterns which are output of another command
1,437,404,310,000
I am trying with: $ sudo iwconfig wlan0 essid mynetwork key 4bare2011 $ sudo iwconfig wlan0 essid mynetwork key 4bare2011 mode Managed and similar combinations. I'm using Ubuntu 14.04. The key is right and works for other clients. But it doesn't get accepted. s:4bare2011 is also getting an error message. ddddddddd is accepted (but is wrong). It looks like the system only accepts hex, but converting the key above to hex also didn't help. How can I connect, without changing the key?
An alternative to using iwconfig and wpasupplicant "by hand" is to edit /etc/network/interfaces and add a stanza like

iface wlan0 inet static
    address 192.168.x.x
    netmask 255.255.255.0
    gateway 192.168.x.1
    dns-nameservers 192.168.x.1
    wpa-ssid mynetwork
    wpa-psk 4bare2011

or

iface wlan0 inet dhcp
    wpa-ssid mynetwork
    wpa-psk 4bare2011

You can then connect with

ifup wlan0

and disconnect with

ifdown wlan0

This still uses wpasupplicant but it's transparent to you. For more information on the different fields you can use, read interfaces(5). This alternative also opens up all sorts of interesting possibilities using interface mapping.
Trying to connect through cl
1,437,404,310,000
Is there any way to check/find files and directories that only I created and check their permission?
To list all the files out in this manner you can use the tool find, along with the switches -user <username> and either -ls, to get a standard ls-type output, or -printf, which lets you control more specifically which columns you want and in what order.

$ find /path/to -user <username> -ls

Example

$ find . -user saml -ls
6553601   24 drwx------ 170 saml saml 20480 Oct 31 20:55 .
6556571    4 -rw-rw-r--   1 saml saml  1641 Jul 11 20:53 ./.gitk
6554851    4 -rwxrwxr-x   1 saml saml   738 May 22 13:31 ./dmesg_time.pl
8542099    4 drwxrwxr-x   2 saml saml  4096 Sep 27 08:08 ./.streamCapture
8542100    4 -rw-rw-r--   1 saml saml   113 Sep 27 08:10 ./.streamCapture/streamCapture.conf
8792905    4 drwxrwxr-x   6 saml saml  4096 Mar  5  2014 ./Release
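As a small illustration of the -printf alternative mentioned above (the directory and file names here are hypothetical, and -printf is a GNU find extension), this prints just the path, permission string and owner for files owned by the current user:

```shell
# Demo of GNU find's -printf: pick your own columns instead of -ls output.
mkdir -p demo_dir
touch demo_dir/mine.txt
# %p = path, %M = symbolic permissions, %u = owning user
result=$(find demo_dir -user "$(whoami)" -type f -printf '%p %M %u\n')
echo "$result"
```

See find(1) for the full list of format directives.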
Checking permission for files
1,437,404,310,000
Suppose I click on a file in Nautilus. How can I copy the full address to the clipboard, and then easily paste it into a shell command that I'm typing in a terminal?
Press Ctrl+C to copy. When you paste into a terminal, what you'll get is the file name (with its full path). You get the raw file name, which won't be directly usable in a shell command if it contains spaces or other special characters. To use the file name in a command, don't use a paste command from the terminal, let the shell do the pasting. Install the program xsel (packaged in most distributions) and call it on your command line, inside a command substitution. You need double quotes around the command substitution to protect special characters such as spaces. $ ls -l "`xsel -b`"
Copy a file in Nautilus and use it in a shell command line
1,437,404,310,000
To paste many files whose names are incremental numbers:

paste {1..8} | column -s $'\t' -t

What if your files weren't named with numbers, but only words? There can be up to ten files; what should I do? In addition, suppose you have a list file that contains all the files you want. So far, my approach is:

mkdir paste
j=0; while read i; do let j+=1; cp $i/ paste/$j; done <list
cd paste
paste {1..8} | column -s $'\t' -t

I have no problem with this approach, I just want to ask if there is a shorter one. Actually my files have the same name, just in different locations, for instance 1MUI/PQR/A/sum, 2QHK/PQR/A/sum, 2RKF/PQR/A/sum. The paste command should be paste {list}/PQR/A/sum. The list file is:

1MUI
2QHK
2RKF
...
With bash 4:

mapfile -t <list
paste "${MAPFILE[@]}" | column -s $'\t' -t

For the paste {list}/PQR/A/sum version of the question:

mapfile -t <list
paste "${MAPFILE[@]/%//PQR/A/sum}" | column -s $'\t' -t
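The "${MAPFILE[@]/%//PQR/A/sum}" expansion may look cryptic; it is bash's pattern substitution applied to every array element, where an empty pattern anchored at the end (/%/) effectively appends the replacement text. A minimal standalone sketch with made-up directory names:

```shell
# Bash array suffix-append demo: ${arr[@]/%/TEXT} appends TEXT to each element.
dirs=(1MUI 2QHK 2RKF)
expanded=("${dirs[@]/%//PQR/A/sum}")
echo "${expanded[0]}"
echo "${expanded[2]}"
```

This requires bash (arrays and pattern substitution are not POSIX sh features).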
How to use paste command for many files whose names are not numbers? (paste columns from each file to one file)
1,437,404,310,000
I have a file full of long SQL queries, one per line. I need to create a list of unique queries, but most of the queries include parameter values that make using an exact matching tool like uniq impossible. Is there a way to find unique lines "fuzzily", like agrep?
If the queries are predictable enough, maybe you could simply sed out the parameter values. For example, if many queries contain equality comparisons with numbers, sed -E 's/=[[:digit:]]+//g' would remove all the actual numbers, leaving only the column names (note the -E: the + quantifier needs extended regular expressions, or \+ in basic ones). Otherwise, the only really general solutions I can think of are pattern recognition techniques like k-nearest neighbors, which can classify arbitrary lists of strings into clusters based on similarity.
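A concrete sketch of the normalize-then-deduplicate idea (the file name and the sample queries are invented for illustration): replace every numeric literal with a placeholder, then let sort -u collapse the now-identical lines.

```shell
# Normalize parameter values before deduplicating, so queries differing
# only in literals count as one.
printf 'SELECT * FROM t WHERE id=1\nSELECT * FROM t WHERE id=42\n' > queries.txt
# =<digits> becomes =N; a similar rule could mask quoted string literals.
unique=$(sed -E 's/=[0-9]+/=N/g' queries.txt | sort -u)
echo "$unique"
```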
A combination of uniq and agrep?
1,437,404,310,000
I have looked all around but couldn't find the answer to a very simply question: I would like to log into machine C from machine A, passing through machine B. However, B is slow, so I would also like my connection to C to be compressed/decompressed at C, tunneled through B, and decompressed/compressed at A. What ssh command should I execute (in machine A) to get a prompt at machine C when: I am physically at machine A. I can use ssh to log directly... 2.1. ... into machine B from A 2.2. ... into machine C from B I cannot log into machine C from A directly [EDIT] This is not a duplicate because: I am not asking how to forward traffic in general, only an ssh connection, so there could be a different answer for the particular case of ssh forwarding through ssh I am asking for compression at the ends (as even the title mentions)
Assuming you have: A with IP address ip_A, B with IP address ip_B, C with IP address ip_C. From a first terminal, connect to B and set up a tunnel to C over ssh (port 10022 is used for the tunnel, but it can be anything else): ssh ip_B -L10022:ip_C:22 Then from another terminal, you will be able to connect "directly" to C from A by using the tunnel you just set up, adding compression options to the ssh command if needed: ssh localhost -p 10022 -o "Compression=yes" -o "CompressionLevel=9" In the latter command, I set compression to maximum, but it can be tuned from 1 to 9, 9 being the highest but also the slowest.
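If your OpenSSH client is recent enough (7.3 or later), the two-terminal setup can also be collapsed into a single client configuration entry; the host alias below is made up, and you would substitute your real hosts. Compression is negotiated on the A-to-C connection, so B only forwards the already-compressed stream:

```
Host c-via-b
    HostName ip_C
    ProxyJump ip_B
    Compression yes
```

With this in ~/.ssh/config, a plain `ssh c-via-b` on A gives you a prompt on C through B.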
log into a machine through another, with (de)compression at the ends only [duplicate]
1,437,404,310,000
Forgotten how I did this last time. Tried the following 2 methods that I thought had worked for me in the past: method #1 $ wget \ http://download.oracle.com/otn-pub/java/jdk/7u51-b13/jdk-7u51-linux-i586.rpm method #2 $ wget --no-cookies \ --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com" \ "yourversion.rpm" \ -O /opt/jdk-7u51-linux-i586.rpm --no-check-certificate But neither worked. For the time being I'll just download it the old fashion way but I'd like to be able to download it using wget as well.
This worked for me: $ wget --no-check-certificate \ --no-cookies \ --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com" \ "http://download.oracle.com/otn-pub/java/jdk/7/jdk-7-linux-i586.rpm" If you truly want to dump this file to /opt/jdk... then you'll need to be root to write to that location. Just prefix the above command with sudo, and specify the output location, -O "/opt/jdk-7-linux-i586.rpm". $ sudo wget ... -O "/opt/jdk-7u51-linux-i586.rpm" Here's the full command: $ sudo wget --no-check-certificate \ --no-cookies \ --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com" \ -O "/opt/jdk-7u51-linux-i586.rpm" \ "http://download.oracle.com/otn-pub/java/jdk/7/jdk-7-linux-i586.rpm" References How to automate download and instalation of Java JDK on Linux?
can't wget rpm oracle on centos linux
1,437,404,310,000
Suppose I am in an area with a lot of Wireless AP's for a particular SSID. If I run the command sudo iw dev wlan0 connect <ESSID> How does iw decide which AP to connect to? It does not seem to be based on signal strength, since the one it connects to is not the one with the best signal strength.
First responder, but this happens at the hardware/firmware level and is not strictly a function of iw. There's no standard that I am aware of, just common practice. Most things in networking default to first responder, so most firmware follows suit. Typical firmware cannot be tuned, but I have seen a very few that can be, usually in diagnostic or very high-end cards. Most (even high-end) just go with the de facto first responder. You can however use the MAC or BSSID to connect to a specific AP if that is your goal; almost any card will support that. Keep in mind the ESSID is for "entire networks" (groups of BSSIDs), so you should be able to float around the APs that make up that ESS without intervention (in theory).
What is the algorithm that the `iw` command uses for choosing an Access Point for a given Wifi network?
1,437,404,310,000
"Hmm, I need to edit file-i-must-edit-2, but I cannot remember where it is." locate file-i-must-edit /home/user/file-i-must-edit-1 /home/user/file-i-must-edit-2 /home/user/file-i-must-edit-3 "Great! I wish there was a way to avoid typing /home/user/file-i-must-edit-2 again..." Is there a way to avoid typing nano /home/user/file-i-must-edit-2 by typing something like nano <output line 2>?
If you get only one line of output, it's easy:

locate file-i-must-edit
nano $(!!)

There is a technique you can use when there are more lines, but it involves running the original command in a different way (which you may not want to do all the time):

$ touch a b c
$ OUT=( $(find .) )
$ echo ${OUT[2]}
./b

One thing you could do to avoid the typing is repeat the previous command (using readline, of course), narrow it down to get just one line, and then do nano $(!!) or pipe it to xargs.
Getting specific line from terminal output
1,437,404,310,000
Is it possible to retrieve info regarding to locked unix accouts? I am interested in seeing information about what date and time the lockout happend and from what hostname (pc name). I would like to see something similar to the who command.
I don't believe this information is kept anywhere. The only place you could get some of this type of information would be from the sudo command logs, assuming you're using sudo and that your sudo setup grants permissions such that you're logging individual commands such as passwd.

I've used this command before to show which accounts are locked, i.e. "LK":

$ cat /etc/passwd | cut -d : -f 1 | awk '{ system("passwd -S " $0) }'
root PS 2010-12-18 0 99999 7 -1 (Password set, SHA512 crypt.)
ftp LK 2010-11-11 0 99999 7 -1 (Alternate authentication scheme in use.)
nobody LK 2010-11-11 0 99999 7 -1 (Alternate authentication scheme in use.)
usbmuxd LK 2010-12-18 0 99999 7 -1 (Password locked.)
avahi-autoipd LK 2010-12-18 0 99999 7 -1 (Password locked.)
dbus LK 2010-12-18 0 99999 7 -1 (Password locked.)
ntop LK 2011-05-22 0 99999 7 -1 (Password locked.)
nginx LK 2011-08-19 0 99999 7 -1 (Password locked.)
postgres LK 2012-06-26 0 99999 7 -1 (Password locked.)
fsniper LK 2012-06-26 0 99999 7 -1 (Password locked.)
clamupdate LK 2012-08-31 0 99999 7 -1 (Password locked.)

Alternative method

Thanks to @RahulPatil in the comments, here's a more concise method:

$ awk -F: '{ system("passwd -S " $1) }' /etc/passwd
root PS 2007-06-20 0 99999 7 -1 (Password set, MD5 crypt.)
bin LK 2007-06-20 0 99999 7 -1 (Alternate authentication scheme in use.)
daemon LK 2007-06-20 0 99999 7 -1 (Alternate authentication scheme in use.)
adm LK 2007-06-20 0 99999 7 -1 (Alternate authentication scheme in use.)
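If you only want the locked account names rather than the full status lines, awk can filter on the second field. The sample data below stands in for real passwd -S output (running passwd -S against the real database needs root):

```shell
# Filter "passwd -S"-style output down to locked ("LK") account names.
cat > status.txt <<'EOF'
root PS 2010-12-18 0 99999 7 -1 (Password set, SHA512 crypt.)
ftp LK 2010-11-11 0 99999 7 -1 (Alternate authentication scheme in use.)
dbus LK 2010-12-18 0 99999 7 -1 (Password locked.)
EOF
# $2 is the status flag; print the username ($1) when it is "LK"
locked=$(awk '$2 == "LK" { print $1 }' status.txt)
echo "$locked"
```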
How to retrieve information about locked accounts
1,437,404,310,000
Let's say the Python REPL takes a long time to start up. Is there a way to start it up in the background so I can create an alias and feed it commands, like python-current "command to run"?
There is nothing in the Python distribution for doing this kind of thing, 'batteries included' notwithstanding. That programs like editors can handle this is because it is quite clear what to do when you open another file: you just open a window/tab on the new file, handled by the same executable.

Implementing something like that yourself is not that difficult, but you have to think about what happens when the first command has not finished yet and the second is scheduled:

abort the first command
queue the second command
run things in parallel (which requires extra startup time unless you have multiple threads waiting for commands)

I have notified running Pythons to load and execute modules based on files in a scanned directory, on HTTP requests (in a Twisted-based system), and with ZeroMQ. What is appropriate depends IMHO on what else is needed by the system; I always go with whatever works and has the least overhead. Your python-current would have to do the proper interfacing. Often this was combined with reloading certain modules (to get the processing for new commands). For that you can use the reload() built-in:

import mycommand
# test for a command that requires reloading
reload(mycommand)

As an aside: especially when using UI code I have found this reloading useful. Python's loading of the executable is comparable with Perl (0.002s on my several-years-old system, time python -v). Loading the base modules takes about ten times longer (time python -c "exit();", 0.025s). But when using UI-based programs the whole startup easily grows to several seconds and more. And in that case implementing dynamic command reading and having a python-current makes sense.
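As one way to sketch the "start once, feed commands later" idea without Twisted or ZeroMQ, a plain FIFO works: one long-lived interpreter reads commands from the pipe, so each command skips interpreter startup. The pipe name, output file, and python_current helper are all hypothetical, and this ignores the queue/abort questions discussed above:

```shell
# One long-lived python3 process reads commands from a FIFO.
pipe=./py-cmd.fifo
mkfifo "$pipe"
# <> opens the FIFO read-write, so the reader never sees EOF when a
# writer closes its end between commands.
python3 -u -c 'import sys
for line in sys.stdin:
    exec(line)' <> "$pipe" > py.out &
py_pid=$!

# The alias/helper from the question, sending one command per call.
python_current() { printf '%s\n' "$1" > "$pipe"; }

python_current 'print(2 + 2)'
sleep 1                     # give the background interpreter time to run
kill "$py_pid"
rm -f "$pipe"
result=$(cat py.out)
echo "$result"
```

Real use would need error handling and a safer protocol than exec() on raw lines, but it shows why the amortized startup cost drops to near zero.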
Start Repl/CLI in Background and Feed Commands
1,437,404,310,000
I remember hearing about something similar. My constraints are: It may not be installed on the machine It may not be booted via USB or LiveCD What I need, in decreasing order of priority: 0. free 1. gcc, binutils, bash 2. low network traffic e.g. =< 1kbps 3. sufficient resources to cross-compile gcc 4. ability to install programs from repos
I don't really understand why you have such unusual constraints. No installation, no live CD and low network traffic excludes the obvious solutions like booting a distro from USB, setting up a VM or using a remote system via SSH. How do you actually plan to run such a system? If you really only have a browser, check out the JavaScript qemu port. But I doubt that you have enough resources to cross-compile applications or that you will be able to use such a system efficiently. free Most of the Linux distributions are free. gcc, binutils, bash Most of the distributions ship with those apps either pre-installed or installable via package manager or from source. low network traffic e.g. =< 1kbps Linux distributions don't generate network traffic. It's the applications that generate the traffic. But the problem here probably is that I don't understand what you want to do. sufficient resources to cross-compile gcc That's the crucial point here. The JavaScript solution does not provide sufficient resources and you are not allowed to install a distribution locally. ability to install programs from repos Possible with most of the available distributions. Depending on what you actually want to do, I guess the best way is to set up a remote server and use an SSH solution which allows access from a browser (e.g. via Java applet). The network traffic is low and you don't need to install additional software.
Is there a linux distro, that is not installed, but runs from the browser?
1,437,404,310,000
The application I am referring to is Turpial, a Twitter client. The problem is that to open the window I need for sending tweets, I have to right-click on a systray icon and select an option. In the manual (man turpial), I can't find a command to open the 'send tweet' window, just the timeline window. After I open the window and send a tweet, the window closes. I would like to configure a global shortcut to open this window and then send my tweets. So, some possible solutions that I need are: How do I discover the command in the application that opens the 'send tweet' window (the one reached via systray right-click and option select), so I can use it from a terminal? This is probably the best solution. How do I open from the terminal the 'send tweet' window that can currently only be accessed by right-clicking the systray icon (and selecting an option)? How can I set up a shortcut to open this window to send tweets?
Unfortunately, I do not believe this is going to be possible with Turpial, or indeed any client which has not been designed to work with KDE's Global Shortcuts interface. However, if you are not bound to Turpial then a client that seems to offer exactly what you are looking for is Choqok. It has a similar lightweight interface to Turpial, however it is a KDE application and, as such, supports KDE's Global Shortcuts. The action you want in this case would be Quick Post, and has a default global key assignment of Ctrl+Meta+T (on my distro, at least) which you can use from anywhere within KDE to immediately present the New Post window.
How to configure a shortcut to open a window accessed by right click on the systray icon?
1,437,404,310,000
On MacOS X I can run open /some/path/index.html and this would open the page index.html with the default software that handles .html files. Is there something similar on Ubuntu Linux? I have used gnome-open in the past, but if there is no gnome installed, this command fails, of course. gnome-open /some/path/index.html Is there a generic "open this file with the default application" on Linux?
A desktop-environment-agnostic open utility is xdg-open, which could fill your need. It's probably packaged with some other utilities of xdg-utils. It's discussed here quite often, see for example this question for details on configuring it. (Other desktop environments come with *-open utilities, too, e.g. there is XFCE's exo-open)
How to open a local URL (webpage) on the command line