1,516,282,436,000
I'm wondering what the difference is between using * and .* in a regex string. I guess * stands for "0 to n characters", but I don't see what .* stands for. For example, what is the difference between "2013*11*27" and "2013.*11.*27"? With find . -name [pattern], I tried "2013.*11.*25" as the pattern and it didn't find "2013-11-25"; however, with "2013*11*25", it finds it. Why? In Unix, 0 or n occurrences as a wildcard is *, and in regex it's .*, so why doesn't it work?
* stands for 0 or more arbitrary characters in shell wildcard matches. * stands for 0 or more occurrences of the preceding expression in regex matches. . stands for a single arbitrary character in regex matches. Thus, * in a shell wildcard match is equivalent to .* in a regex match. As a regex, "2013*11*27" will match "2013333111111111127" but not "2013-11-27" (in your case, the preceding expression is a single character, matching exactly that character, 3 and 1 respectively). But if you use it to find files, e.g. as an argument to ls, letting the shell handle it as a shell wildcard match (and not a regex), it will capture "2013-11-27" just fine.
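The difference is easy to demonstrate with find itself, since -name matches shell globs while -regex matches regexes (a quick sketch; the file name is taken from the question):

```shell
cd "$(mktemp -d)"
touch 2013-11-25

# Shell glob: * means "any run of characters", so this matches
find . -name '2013*11*25'
# → ./2013-11-25

# Regex: .* means "any run of characters", so this matches too
find . -regex '.*2013.*11.*25'
# → ./2013-11-25

# Glob with a literal "." — the file name has no dot, so no match
find . -name '2013.*11.*25'
# → (nothing)
```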
Regex and patterns on a ksh command line
What is the output of date -u +%W$(uname)|sha256sum|sed 's/\W//g' (on Arch Linux if it matters)? How do I find that out?
date -u +%W displays the current week of the year (UTC). uname displays the kernel name. sha256sum generates a SHA-256 hash sum. sed 's/\W//g' cuts out all non-word characters (here, the trailing "  -" that sha256sum appends as the file name). The |'s pipe the output of each command into the next one. Enter the line in a terminal, e.g. gnome-terminal or xterm: date -u +%W$(uname)|sha256sum|sed 's/\W//g' Depending on the date and the operating system installed, this will output different hashes, like this: 2aa4cb287b8a9314116f43b5e86d892d76a9589559aa69ed382e8f5dc493d955
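You can take the pipeline apart stage by stage. With a fixed input string substituted for date's output, the result becomes reproducible (the string "12Linux" is just an example of what date -u +%W$(uname) might print in week 12 on a Linux kernel):

```shell
# What date -u +%W$(uname) might print (assumed sample input):
printf '12Linux\n'

# Hash it and strip the non-word characters (the "  -" suffix):
printf '12Linux\n' | sha256sum | sed 's/\W//g'
```

The result is always a 64-character lowercase hexadecimal string.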
Shell output question
ps -e converts uppercase process names to lowercase. I could not find an explanation of that behaviour in the man page or online, and I'm not good enough at source-code reading to figure this out from the source. I normally use ps -ef (full-format listing) so had never noticed this behaviour, but a DBA did. Is -e lowercasing a process name expected behaviour? Does anyone have an explanation as to why it was coded to convert to lowercase? Here is an example of the same processes, first with -e and then with -ef:

server ~> ps -e | grep -Ei pmon
2187719 ? 00:00:02 ora_pmon_foobar
2188497 ? 00:00:02 ora_pmon_phuuba
2188928 ? 00:00:02 ora_pmon_kilgor

[printed as lowercase when the instance name (end of line) should be uppercase]

server ~> ps -ef | grep -Ei pmon
oracle 2187719 1 0 04:00 ? 00:00:02 ora_pmon_FOOBAR
oracle 2188497 1 0 04:00 ? 00:00:02 ora_pmon_PHUUBAR
oracle 2188928 1 0 04:01 ? 00:00:02 ora_pmon_KILGORE

[prints uppercase, which is good]

-e alone also truncates, but that's why we've got -f. Mainly curious about -e lowercasing process names.
ps with and without -f gives different information in the CMD column. On Linux:

Without -f, that's the process name: an attribute of the process with a length limited to 15 bytes. That attribute is set by the execve() system call (used to execute commands) to the base name of the file being executed, truncated to 15 bytes, and can also be set by the process to arbitrary values using prctl(PR_SET_NAME). It's the same as returned by ps -o comm. It can be seen in /proc/pid/stat inside (...) or in the Name: field in /proc/pid/status.

With -f, that's the argument list joined with space characters¹. Those are the arguments (including argv[0]) passed in the second argument to the execve() system call for those processes which have executed a command (or any of their ancestors). The arg list can be seen NUL-delimited in /proc/pid/cmdline. It's the same as returned by ps -o args. It used to be truncated to 4096 bytes, but it isn't any longer in recent versions of Linux (though ps will truncate it itself for output unless given -w options). Processes can (with restrictions) change it by writing new text at the section of their stack where that information is found.

/proc/pid/exe will also be a symlink to the executable that the process is currently running (as reported by ps -o exe), which may also be different.

In any case, other than escaping unprintable characters, ps does no transformation on those. You can run:

ps -wwo comm,args,exe -p 2187719

to see the process name, arg list and executable for the process with id 2187719. And you can check the raw sources where ps gets that information from with:

cat /proc/2187719/stat
sed -n l /proc/2187719/cmdline
readlink /proc/2187719/exe

Example:

$ cp /usr/bin/sleep 'A longer sleep command for demonstration'
$ (exec -a 'SLEEP though could be anything' './A longer sleep command for demonstration' Infinity) &
[1] 6723
$ ps -fp "$!"
UID PID PPID C STIME TTY TIME CMD
chazelas 6723 6668 0 06:17 pts/2 00:00:00 SLEEP though could be anything Infinity
$ ps -p "$!"
PID TTY TIME CMD
6723 pts/2 00:00:00 A longer sleep
$ cat "/proc/$!/stat"
6723 (A longer sleep ) S 6668 6723 6668 34818 6726 4194304 154 0 0 0 0 0 0 0 25 5 1 0 17863 19312640 448 18446744073709551615 94858855174144 94858855192393 140731919224688 0 0 0 0 0 0 1 0 0 17 0 0 0 0 0 0 94858855206160 94858855207424 94858855886848 140731919232547 140731919232587 140731919232587 140731919237069 0
$ sed -n /Name/l "/proc/$!/status"
Name:\tA longer sleep $
$ sed -n l "/proc/$!/cmdline"
SLEEP though could be anything\000Infinity\000$
$ perl -e '$0 = "whatever you want"; sleep 20' &
[1] 13861
$ ps -wo comm,args,exe -p "$!"
COMMAND COMMAND EXE
whatever you wa whatever you want /usr/bin/perl

In your case, if there's a variation in case, it could be because:

- those processes did execve("/path/to/ora_pmon_foobar", ["ora_pmon_FOOBAR", NULL], envlist);
- they did execve("/path/to/anything", ["ora_pmon_FOOBAR", NULL], envlist) (where anything could also be ora_pmon_FOOBAR) but later did a prctl(PR_SET_NAME, "ora_pmon_foobar");
- they did execve("/path/to/ora_pmon_foobar", ["ora_pmon_foobar", NULL], envlist) but later overwrote their argv[0] with ora_pmon_FOOBAR;
- or combinations of the above.

For instance, in perl, assigning to $0 changes both the process name and the arg list, as seen in the example above.

¹ Except when that argument list is empty, like when execve() was called by the process with an empty list as the second argument (resulting in argc == 0), or for processes where neither they nor any of their ancestors ever called execve(), like kernel tasks. In that case, ps -f shows the process/task name inside square brackets.
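The distinction between the kernel-maintained process name and the argument list is easy to poke at on any Linux system (a small sketch; /proc is Linux-specific):

```shell
# The kernel's 15-byte process name (what plain "ps -e" prints):
cat /proc/self/comm
# → cat

# The full argument list, NUL-separated (what "ps -ef" prints),
# made readable by turning NULs into newlines:
tr '\0' '\n' < /proc/self/cmdline
```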
ps -e converts process name to lowercase
If the tty --help command is executed, it shows:

tty --help
Usage: tty [OPTION]...
Print the file name of the terminal connected to standard input.
-s, --silent, --quiet print nothing, only return an exit status
--help display this help and exit
--version output version information and exit

Therefore, when tty -s is executed, it prints nothing.

Question: when is it useful to use silent mode for tty?
#!/bin/sh
# Script that is run regularly from cron,
# but also sometimes from the command line.

interactive=false
if tty -s; then
    echo 'Running with a TTY available, will output friendly messages'
    echo 'and may be interactive with the user.'
    interactive=true
fi

# etc.

In short, it provides a way of testing whether a TTY is attached to the current shell session's standard input stream, which indicates that the script may be able to interact with a user by reading from the standard input stream. You can also do this using the test [ -t 0 ], or the equivalent test -t 0, which is true if fd 0 (standard input) is a TTY.

The -s option and its variations are non-standard (not part of the POSIX specification of tty), and the OpenBSD manual for tty also mentions the -t test (which is standard):

-s Don't write the terminal name; only the exit status is affected when this option is specified. The -s option is deprecated in favor of the "test -t 0" command.
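The exit status is the whole point. With GNU coreutils tty, 0 means stdin is a terminal and 1 means it is not; redirecting stdin from /dev/null guarantees the "not a terminal" case for demonstration:

```shell
tty -s < /dev/null
echo "$?"
# → 1 (stdin is not a terminal)

# The equivalent standard test:
sh -c '[ -t 0 ] || echo "not a terminal"' < /dev/null
# → not a terminal
```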
When is it useful to use "silent" for tty?
I have this shell script:

###
# Create a folder dynamically
mkdir archived_PA_"$(date -d "6 months ago - 1 day" +%Y-%m-%d)"_"$(date -d "1 day ago" +%Y-%m-%d)"

# Move files to new folder dynamically
find ./VA -newermt $(date +%Y%m%d -d '6 months ago') ! -newermt $(date +%Y%m%d -d 'today') -exec mv -t /var/log/pentaho/archived_PA_"$(date -d "6 months ago - 1 day" +%Y-%m-%d)"_"$(date -d "1 day ago" +%Y-%m-%d)" {} +

# Archive dynamic folder
zip -r archived_PA_"$(date -d "6 months ago - 1 day" +%Y-%m-%d)"_"$(date -d "1 day ago" +%Y-%m-%d)".zip /var/log/pentaho/archived_PA_"$(date -d "6 months ago - 1 day" +%Y-%m-%d)"_"$(date -d "1 day ago" +%Y-%m-%d)"

At first, every line runs fine on the command line, but when I run this shell script with ./script_name.sh I get the following errors:

./HIX-170061.sh: line 4: $'\r': command not found
find: missing argument to `-exec'
./HIX-170061.sh: line 7: $'\r': command not found
adding: var/log/pentaho/archived_PA_2023-01-09_2023-07-09^M/ (stored 0%)

In short, I am able to execute the other lines (except for lines 4 and 7, but those are empty lines so I assume they don't matter), but line 6 is where I get the find: missing argument to `-exec' error.
I was able to solve it with Notepad++. I just simply go to Edit -> EOL Conversion -> Unix, and then I'm able to run the script.
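The underlying problem is Windows-style CR+LF line endings: the shell sees a carriage return (\r) at the end of each line, so find receives "+\r" instead of "+" and complains. If Notepad++ isn't at hand, the same conversion can be done from the shell (the file name script_name.sh is a placeholder; dos2unix, if installed, also works):

```shell
# Strip carriage returns in place (GNU sed):
sed -i 's/\r$//' script_name.sh

# Or with tr, writing to a new file:
tr -d '\r' < script_name.sh > fixed.sh
```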
missing argument to `-exec' error when executing Shell script but runs fine on command lines
I have a command and I am trying to convert it to an alias to make it easier to use; however, I am unable to work out how best to format it. Can you help me? The principle of the command is simple: it runs two processes in one command, using docker containers. Look at the command below:

docker exec -it database_one bash -c "find ./data -type d -name '202*-*-*' -exec mongorestore --drop {} \;" && docker exec -it database_one mongo logindb --eval 'db.users.update({"username":"admin"},{$set:{"password":"test", "reset_password": false}});'

This command is perfectly fine the way it is, but I have problems with the various internal calls to other commands, such as docker and find, when I try to create an alias for it. I have tried to use eval, among other things, but without success.
As @ilkkachu said, you can make a function and add it to your .zshrc or .bashrc file. It would look like this:

docker_command () {
docker exec -it database_one bash -c "find ./data -type d -name '202*-*-*' -exec mongorestore --drop {} \;" && docker exec -it database_one mongo logindb --eval 'db.users.update({"username":"admin"},{$set:{"password":"test", "reset_password": false}});'
}

After that you can run docker_command to execute the function. Alternatively, you can make a shell script, save it to $HOME/.local/bin/docker_command, and run it directly from your shell; but in that case you need to make sure that the directory you have put the command in is listed in your PATH variable. To check your PATH variable, run echo $PATH. After you've done one of those options, if you still want an "alias", you can add this to your rc file:

alias d=docker_command

and that will let you execute it with d.
Command doesn't work when aliased
I'm learning to use the command line in Ubuntu and I've just learned about grep. Unfortunately, I input grep and the word I was searching for, but accidentally hit the enter key before entering the directory to search, which started a new line without results, of course. I hit every key on the keyboard out of frustration but I don't see a way out of the command to start over and type it out properly. Ultimately, I open a new Bash tab or close it and reopen the program entirely to start over. Is there a way to escape grep without doing that?
Since grep did not have a file to read from, it was reading from the keyboard (standard input or stdin in that context). You could interrupt with Ctrl+C or simulate the end of the file with Ctrl+D on an empty line (right after enter).
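What happened is just grep's normal stdin mode: with no file operand, it filters whatever arrives on standard input until that stream ends. Fed from a pipe instead of the keyboard, the same behaviour is easy to see:

```shell
# No file argument: grep reads standard input until EOF
printf 'alpha\nbeta\nalphabet\n' | grep alpha
# → alpha
# → alphabet
```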
How do I escape 'grep' in Linux
I often see stuff like sh -c "curl -o …" Under what circumstances should you use that instead of just curl -o …?
Typically you don't do this if you can avoid it; your two examples are basically identical. However, this is a relatively common pattern when you want to use shell operators inside the process, for example the redirect operator > or the pipe |. These two are subtly different:

curl example.com/foo > foo
sh -c "curl example.com/foo > foo"

In the first, your current command-line shell creates the file foo and attaches an open file handle to the stdout of curl. In the second, you create a new shell process and that new process is responsible for creating foo and attaching it to curl.

Why might you need this?

Permissions. This becomes really important when the command you are running has different constraints from your current shell. For example, sudo runs a command as a different user from your current shell. Therefore these two are really different:

sudo curl example.com/foo > foo
sudo sh -c "curl example.com/foo > foo"

In the first, the file foo is opened by your current shell and so is opened or created as your user. In the second, the file is opened or created by the user root with unrestricted access.

Delayed shell expansion. Another reason might be to delay variable expansion. In rare cases it can be useful to delay when/where a variable is expanded:

hello=wave echo $hello
hello=wave sh -c 'echo $hello'

These produce different results: in the first, $hello is expanded by the current shell before the temporary assignment takes effect, so it is empty; in the second, the single quotes keep the current shell from expanding $hello, the assignment is placed in sh's environment, and the inner shell expands it to wave.
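The expansion difference can be verified directly; the brackets make the empty case visible (assuming hello starts out unset):

```shell
# Expanded by the current shell *before* the assignment applies:
hello=wave echo "[$hello]"
# → []

# Single quotes defer expansion; sh finds hello in its environment:
hello=wave sh -c 'echo "[$hello]"'
# → [wave]
```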
On what occasions should you use 'sh -c' instead of directly executing a program?
I am new to Linux. When I want to extract a tar file into the folder satrap I get this error:

You may not specify more than one `-Acdtrux' or `--test-label' option
Try `tar --help' or `tar --usage' for more information.

This is the command I ran:

tar -xf satrap.tar.gz -c /satrap_dir

Please help me.
Options are case sensitive. Your command as written is simultaneously trying to extract data (-x) and create it (-c). From context it looks like you actually want to change directory for the extraction (-C). The untested command therefore becomes, tar -xf satrap.tar.gz -C /satrap_dir Please note that generally you shouldn't be writing into a root level folder (/satrap_dir). That's what your own home directory is for.
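A self-contained run showing the corrected flag (all paths here are throwaway examples under a temporary directory, not the original /satrap_dir):

```shell
cd "$(mktemp -d)"
mkdir satrap satrap_dir
echo hello > satrap/file.txt
tar -czf satrap.tar.gz satrap

# -C (uppercase) changes to the target directory before extracting:
tar -xf satrap.tar.gz -C satrap_dir
cat satrap_dir/satrap/file.txt
# → hello
```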
I get error when I want to extract tar file in the satrap folder
I am parsing a huge CSV file with rows and columns with different parameters. However, some fields contain large descriptions within quotes that contain commas. How can I choose columns with sort and cut while ignoring commas within quotes? I have tried adding quote-comma-quote as a delimiter, but I get an error (invalid argument), and escaping the quote with a backslash, but I also get an error.

sort -k12 -t'","' file

or

cut -f 12 -d '","' file

Example of a row in the file:

"GFYZ01001952.1",99.606,"ASTG2327","PREDICTED: kinesin-like protein NACK1 [Elaeis guineensis]","--","centromeric protein E","Kinesin-like protein NACK1 OS=Arabidopsis thaliana GN=NACK1 PE=1 SV=1","Baculovirus polyhedron envelope protein, PEP, C terminus//Autophagy protein Apg6//Basic region leucine zipper//Protein of unknown function (DUF904)",0.005,3.2,3.5,0.00006
CSV is a structured document format. As such, simple text-manipulation tools like cut (or sort, sed, or awk, unless the data is simple) are inadequate for processing CSV files safely and conveniently, because fields may contain embedded delimiters and newlines. Instead, it would be best if you were using a CSV-aware processing tool such as Miller (mlr).

The following Miller command parses the file as a header-less CSV file, sorting it numerically ascending by its 12th field:

mlr --csv -N sort -n 12 file

If you have headers in your CSV data, drop the -N option and use the header name in place of 12, e.g.,

mlr --csv sort -n pvalue file

To extract column 12:

mlr --csv -N cut -f 12 file

To sort and cut, and also only get the first 10 results:

mlr --csv -N sort -n 12 then cut -f 12 then head -n 10 file

Again, drop the -N and use the field names if you have headers in the input.

With the csvkit toolkit, you could use csvsort to get the same result like so:

csvsort -H -c 12 file | tail -n +2

(the tail command removes the headers that csvsort generates), or, with headers in the input,

csvsort -c pvalue file

Extracting individual fields with csvcut:

csvcut -H -c 12 file

Combined with csvsort:

csvsort -H -c 12 file | csvcut -c 12 | tail -n +2

Or, with headers:

csvsort -c pvalue file | csvcut -c pvalue

There is no csvhead command, so limiting the result to 10 records will have to be done some other way, possibly through mlr --csv head -n 10.
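If neither Miller nor csvkit is installed, Python's standard csv module (shipped with python3 on most distributions) also parses quoted commas correctly. A minimal sketch that extracts field 12 from a header-less file (the file name file is a placeholder):

```shell
python3 -c '
import csv, sys
for row in csv.reader(sys.stdin):
    print(row[11])          # field 12, zero-indexed
' < file
```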
Choose columns with sort and cut in a csv with a comma delimiter ',' ignoring data on quotes with comma "text,text"
I need to find a user, and the command should exit with a non-zero return code if the user is not in the system. We can do this in bash, but I need this as a one-line command, not a bash script. Is that possible?
A good way of testing whether a user exists on a system is by using getent. The getent utility can return various pieces of information from various "databases", for example, the passwd "database", the group "database", and, on some systems, you can even ask getent to list the available login shells. To test whether a user, testuser, is in the system, ask getent for the passwd entry for that user: getent passwd testuser If this succeeds, you will get a single passwd entry as output and a zero exit status from getent. If it fails, you will get no output and a non-zero exit status. I believe that this is the command that you are asking for. We may discard whatever output the utility generates and use it in an if statement, like so: theuser=testuser if getent passwd "$theuser" >/dev/null then printf 'The user "%s" exists\n' "$theuser" else printf 'The user "%s" does not exist\n' "$theuser" fi You could instead, of course, parse the /etc/passwd file directly with some grep command, but that is error-prone (in comparison to using getent as shown above) and also would not work correctly on systems where some form of directory service is in use.
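On a typical system root always exists, so both outcomes can be checked quickly (the username nosuchuser12345 is assumed not to exist):

```shell
getent passwd root >/dev/null && echo exists || echo missing
# → exists

getent passwd nosuchuser12345 >/dev/null && echo exists || echo missing
# → missing
```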
List user and return non zero exit code if the user not in the system
The directions state: "your script file will be tested on our system with the following command: awk -f ./awk4.awk input.csv. Write an awk script that will accept the following file and output the name and grade fields." Apparently I created a bash script, and it needs to be an awk script that will run with awk -f from the command line. Below is my code. Is there an easy way to convert my bash scripts into awk scripts without having to redo everything? I'm really confused about the directions.

#!/usr/bin/awk -f
##comment create an awk script that will accept the following file and output the name and grade fields
##comment specify the delimiter as ","
awk -F, '
/./ {
##comment print the name and grade, which is first two fields
print $1" "$2
}' $1
In an awk script, the contents are what you would provide to awk as commands. So in this case, that's:

/./ {
##comment print the name and grade, which is first two fields
print $1" "$2
}

However, this will make it tricky to use -F, so instead set FS in a BEGIN block. Your script would then be:

#!/usr/bin/awk -f
##comment create an awk script that will accept the following file and output the name and grade fields
##comment specify the delimiter as ","
BEGIN { FS = "," }
/./ {
##comment print the name and grade, which is first two fields
print $1" "$2
}
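A quick end-to-end check of the corrected script, exactly as the grader would run it (the sample names and grades are made up):

```shell
cd "$(mktemp -d)"

cat > awk4.awk <<'EOF'
#!/usr/bin/awk -f
BEGIN { FS = "," }
/./ { print $1" "$2 }
EOF

printf 'alice,90,2024\nbob,85,2024\n' > input.csv

awk -f ./awk4.awk input.csv
# → alice 90
# → bob 85
```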
confused about awk scripting
I am currently trying to parse the output of lsblk with jq and filter it based on some criteria. Given the following example output:

{
  "blockdevices": [
    {
      "name": "/dev/sda",
      "fstype": null,
      "size": "931.5G",
      "mountpoint": null,
      "children": [
        { "name": "/dev/sda1", "fstype": "ntfs", "size": "50M", "mountpoint": null },
        { "name": "/dev/sda2", "fstype": "ntfs", "size": "439.8G", "mountpoint": null },
        { "name": "/dev/sda3", "fstype": "vfat", "size": "512M", "mountpoint": "/boot/efi" },
        { "name": "/dev/sda4", "fstype": "ext4", "size": "491.2G", "mountpoint": "/" }
      ]
    },
    {
      "name": "/dev/sdb",
      "fstype": "crypto_LUKS",
      "size": "200GG",
      "mountpoint": null,
      "children": [
        { "name": "/dev/mapper/d1", "fstype": "btrfs", "size": "200G", "mountpoint": [ null ] }
      ]
    },
    {
      "name": "/dev/sdc",
      "fstype": "crypto_LUKS",
      "size": "100G",
      "mountpoint": null,
      "children": [
        { "name": "/dev/mapper/abc2", "fstype": "btrfs", "size": "100GG", "mountpoint": "/mnt/test" }
      ]
    }
  ]
}

I want to go over all top-level devices that have fstype "crypto_LUKS". Then, for those devices, I want to check whether the children (if present) have a mountpoint (not null). Finally, I want to return the name of the top-level device that matches both criteria. So for the example above, only one match would be returned: /dev/sdc /dev/mapper/abc2. The /dev/sdb device wouldn't be returned because the mountpoint of its child device is null/empty. I already got this far:

lsblk -Jpo NAME,FSTYPE,SIZE,MOUNTPOINT | jq -r '.blockdevices[] | select(.fstype == "crypto_LUKS") '

But this only checks the crypto_LUKS criterion, not the mountpoints of the children. Also, it prints the whole array entry instead of just the two values. How can I solve this?
To get the name of the block device and each of its non-null child mount-points as a tab-delimited list:

jq -r '
  .blockdevices[] |
  select(.fstype == "crypto_LUKS") as $dev |
  $dev.children[]? |
  select(.mountpoint | type == "string") as $mp |
  [ $dev.name, $mp.name ] | @tsv'

Since a "null mount-point" is not actually null but an array of a single null value, I'm instead testing whether the mount-point is a string or not. Given the data in the question, this would return

/dev/sdc	/dev/mapper/abc2

To get the block device objects that fulfill the criteria (if that's what you mean by "the whole array"):

jq '.blockdevices[] | select(.fstype == "crypto_LUKS" and any(.children[]?; .mountpoint | type == "string"))'

This returns the block device object that has the fstype value crypto_LUKS and that has at least one children element with a mountpoint that is a string. Given the data in the question, this would return

{
  "name": "/dev/sdc",
  "fstype": "crypto_LUKS",
  "size": "100G",
  "mountpoint": null,
  "children": [
    {
      "name": "/dev/mapper/abc2",
      "fstype": "btrfs",
      "size": "100GG",
      "mountpoint": "/mnt/test"
    }
  ]
}
Parse lsblk with jq
My default shell is tcsh. In my .cshrc file, I have bindkey -v, so that at the command line, the letters b and w jump backwards and forwards a word, respectively. I'd like to set up bash so that when I switch to that shell, it does the same thing. I've tried putting bindkey -v into .bashrc, but bindkey is not recognized. Could somebody please explain how to mimic these tcsh key bindings in bash? Thanks!
In the tcsh shell, bindkey -v sets the command-line editing mode to "vi mode" (as opposed to "emacs mode"). In the bash shell, the same effect can be had with set -o vi. Putting the command-line editor into vi mode makes it behave a bit as if you were using the vi editor, where w (in "normal mode", after pressing Esc) moves to the first character of the next word, b moves to the first character of the current or previous word, and e moves to the next end-of-word, etc. You may also switch the Readline library (which bash uses for command-line editing) into vi mode by adding the setting

set editing-mode vi

in your ~/.inputrc file. Doing this would additionally affect any other program that uses the Readline library for command-line editing (such as some interactive-mode database clients).
assign letters to jump forward and backward in bash
How can we drive a program that can only be used from a graphical interface through the bash shell? I'm not just asking about running the program; I mean being able to use the functions of the program from the command line. Is there a method by which I can keep track of which commands the graphical interface is executing in the background? Or any alternative method by which I can use the graphical interface via the CLI?
No, not in general. You can see which syscalls the program is making with strace, but not the "commands" it is using. If you only need to control a running GUI program from the CLI, you can try xdotool to "press" keys and move/click the mouse. It would be hard to really control the program this way, but if you need something simple, it could do the trick. Another option would be to use Dogtail. It's a tool for testing GUIs and uses accessibility interfaces to control the application. You can write a simple Python script to control the app, and it should also work with the "fake" Xvfb X server.
Is it possible to control gui tools that do not have cli support with cli? [closed]
I am trying to execute a command like this:

python train.py --conv-layers [(512, 10, 5), (512, 8, 4)]

but bash complains:

-bash: syntax error near unexpected token `('

I need train.py to receive exactly this. How do I accomplish that?
[, ( and SPC are all special characters in the syntax of the bash shell. See how the SPC between python and train.py, for instance, was used to delimit two arguments to pass to /path/to/python. [ and ] are special as glob operators; ( and ) are part of many constructs such as func () ..., <(...), (subshell), ((arith)), etc. To remove their special meaning, you use quoting/escaping. Quoting operators vary between shells; see How to use a special character as a normal one? for details. In the bash shell, you can use '...', "...", $'...', and backslash. Here, it's best done with:

python train.py --conv-layers '[(512, 10, 5), (512, 8, 4)]'

if you intend for [(512, 10, 5), (512, 8, 4)] to be passed as one single argument to /path/to/python (besides python, train.py and --conv-layers).
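You can confirm what the script actually receives by substituting a throwaway inline script for train.py (purely for demonstration):

```shell
# Print the first argument exactly as Python receives it:
python3 -c 'import sys; print(sys.argv[1])' '[(512, 10, 5), (512, 8, 4)]'
# → [(512, 10, 5), (512, 8, 4)]
```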
How to pass brackets to some program's arguments?
When I write in .bashrc:

export PATH=\$PATH:\/usr/local/qc/OPENMPI_3_1_4/bin/

after a reboot, I get this error with any command line:

david@doc1:~> less
If 'less' is not a typo you can use command-not-found to lookup the package that contains it, like this: cnf less

It only works with the complete path: /usr/bin/less

How can I solve this problem?
You do not need to escape the dollar character:

export PATH=\$PATH:\/usr/local/qc/OPENMPI_3_1_4/bin/

This means you are creating a new PATH with the literal text $PATH:/usr/local/qc/OPENMPI_3_1_4/bin/. The existing PATH is lost at that moment. What you need is

export PATH=$PATH:/usr/local/qc/OPENMPI_3_1_4/bin/

In this case, the old value of PATH (something like /bin:/usr/bin) replaces the $PATH, and the result would be /bin:/usr/bin:/usr/local/qc/OPENMPI_3_1_4/bin/

You would only write PATH=abc\$def if the dollar sign were part of the directory name, which is extremely rare and almost never happens, since the $ character is used to mark substitutions and you would have to escape it to reference such a directory.
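The difference is visible with echo; the variable name PATH_DEMO is used here instead of PATH so the sketch can't break your session:

```shell
PATH_DEMO=/bin:/usr/bin

# Unescaped: the old value is expanded into the new one
PATH_DEMO=$PATH_DEMO:/opt/extra/bin
echo "$PATH_DEMO"
# → /bin:/usr/bin:/opt/extra/bin

# Escaped: the literal string "$PATH_DEMO" ends up in the value
PATH_BAD=\$PATH_DEMO:/opt/extra/bin
echo "$PATH_BAD"
# → $PATH_DEMO:/opt/extra/bin
```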
How to define PATH? Without PATH errors
I'm trying to read a list of files from a command and ask the user for input for each file. I'm using one read to read the filenames, and another one to get user input; however, this script seems to enter an infinite loop.

foo () {
echo "a\nb\nc" | while read conflicted_file; do
    echo $conflicted_file
    while true; do
        read -e -p "> " yn
        case $yn in
            [nN]* ) echo "success"; break;;
            [yY]* ) echo "fail"; break;;
            * ) echo "invalid input";;
        esac
    done
done;
}
foo

Removing the outer while read seems to resolve the issue. Any ideas why?
read reads from stdin, so both of those reads will read from the output of echo via that same pipe open on their stdin. For the read inside the loop to read from the stdin outside the pipe, you could do:

foo () {
  printf 'a\nb\nc\n' |
    while IFS= read -r conflicted_file; do
      printf '%s\n' "$conflicted_file"
      while true; do
        IFS= read <&3 -re -p "> " yn
        case $yn in
          [nN]* ) echo "success"; break;;
          [yY]* ) echo "fail"; break;;
          * ) echo "invalid input";;
        esac
      done
    done
} 3<&0

That is, have the original stdin duplicated on fd 3 for the whole body of the foo function.
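The fd-duplication trick can be demonstrated non-interactively with two separate input sources: here a pipe feeds the outer read, while a here-document (standing in for the keyboard) feeds the inner one via fd 3:

```shell
{
  printf 'a\nb\n' |
    while IFS= read -r line; do
      IFS= read -r answer <&3   # reads from fd 3, not the pipe
      printf '%s:%s\n' "$line" "$answer"
    done
} 3<<'EOF'
X
Y
EOF
# → a:X
# → b:Y
```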
Nested read statement leads to infinite loop in bash
1,586,829,569,000
I'm trying to get sherlock to run and I keep getting this error:

kali@kali:~$ sudo ln -snf python2.7 /usr/bin/python
[sudo] password for kali:
kali@kali:~$ sudo ln -s ../local/python/bin/python3.6 /usr/bin/python3
ln: failed to create symbolic link '/usr/bin/python3': File exists
kali@kali:~$ sudo ln -s ../local/python/bin/python3.6 /usr/bin/python3
ln: failed to create symbolic link '/usr/bin/python3': File exists
kali@kali:~$ sudo apt-get install python3
Reading package lists... Done
Building dependency tree
Reading state information... Done
python3 is already the newest version (3.8.2-2).
The following packages were automatically installed and are no longer required:
  libgfapi0 libgfrpc0 libgfxdr0 libglusterfs0 libpython3.7-dev python3.7-dev
Use 'sudo apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 918 not upgraded.
kali@kali:~$ cd sherlock
kali@kali:~/sherlock$ python3 -m pip install -r requirements.txt
Requirement already satisfied: beautifulsoup4>=4.8.0 in /usr/lib/python3/dist-packages (from -r requirements.txt (line 1)) (4.8.2)
Requirement already satisfied: bs4>=0.0.1 in /home/kali/.local/lib/python3.8/site-packages (from -r requirements.txt (line 2)) (0.0.1)
Requirement already satisfied: certifi>=2019.6.16 in /usr/lib/python3/dist-packages (from -r requirements.txt (line 3)) (2019.11.28)
Requirement already satisfied: colorama>=0.4.1 in /usr/lib/python3/dist-packages (from -r requirements.txt (line 4)) (0.4.3)
Requirement already satisfied: lxml>=4.4.0 in /usr/lib/python3/dist-packages (from -r requirements.txt (line 5)) (4.4.2)
Requirement already satisfied: PySocks>=1.7.0 in /home/kali/.local/lib/python3.8/site-packages (from -r requirements.txt (line 6)) (1.7.1)
Requirement already satisfied: requests>=2.22.0 in /usr/lib/python3/dist-packages (from -r requirements.txt (line 7)) (2.22.0)
Requirement already satisfied: requests-futures>=1.0.0 in /home/kali/.local/lib/python3.8/site-packages (from -r requirements.txt (line 8)) (1.0.0)
Requirement already satisfied: soupsieve>=1.9.2 in /usr/lib/python3/dist-packages (from -r requirements.txt (line 9)) (1.9.5)
Requirement already satisfied: stem>=1.8.0 in /home/kali/.local/lib/python3.8/site-packages (from -r requirements.txt (line 10)) (1.8.0)
Requirement already satisfied: torrequest>=0.1.0 in /home/kali/.local/lib/python3.8/site-packages (from -r requirements.txt (line 11)) (0.1.0)
kali@kali:~/sherlock$ phython3 sherlock.py johnkelly
bash: phython3: command not found
kali@kali:~/sherlock$
It's not phython3, it's python3, like Monty Python.
bash: phython3: command not found
I have a bunch of files that have just hashes as names and no file endings. (It's an iPhone backup to be precise.) I know there are SQLite databases amongst these files. How do I find them?
As a starting point, use the file command to identify the file types:

find . -print0 | xargs -0 file

Result:

./.X11-unix: sticky directory
./.Test-unix: sticky directory
./test.db:   SQLite 3.x database

Then add some grepping to filter the results.
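Putting the pieces together (file's exact description string can vary between versions, so the grep is case-insensitive; the "database" below is fabricated with just the 16-byte SQLite magic header, purely for demonstration):

```shell
cd "$(mktemp -d)"
printf 'SQLite format 3\000' > backup_hash_0001   # fake minimal SQLite file
echo 'just text' > backup_hash_0002

find . -type f -print0 | xargs -0 file | grep -i sqlite
# prints only the line for backup_hash_0001
```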
How do I find all sqlite databases inside a bunch of files without file endings?
Problem: I have a command which I alias for convenience, where I annoyingly need to use -- to specify some additional parameters after the call to the command. Take the example below: the command is called foo and the alias is called run:

alias run='foo --bar --baz'

bar and baz are arguments which are always there, so they are in the alias. Now when I call the command, it needs at least one required argument followed by the additional parameters, which come after the option terminator --. Concretely, I call run like so:

run file1 -- option1 option2

This is valid; option1 and option2 get passed along as expected.

What I would like: to have a single alias and run the following:

run file1 option1 option2

and have the alias handle positioning the arguments before or after the -- (which would be in the alias).

Potential answers: What I struggle with here is coming up with a solution that doesn't involve too much logic. Ideally I just want something like xargs to gain access to argument 1 and then arguments 2+, but this doesn't seem to be what it was made for. I could also do something with cut (cut -d' ' -f1 and cut -d' ' -f2- both work, but that sort of string cutting may not be the nicest solution). I'm all ears for an elegant solution here.
IMHO it would be more sensible to use a shell function instead of an alias:
run() {
    file="$1"
    shift
    foo --bar --baz "$file" -- "$@"
}
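To see the argument reordering in action, here is a self-contained sketch with a stand-in foo that just prints what it receives (the real foo is whatever command the alias wrapped):

```shell
foo() { printf '%s\n' "$@"; }   # stand-in that echoes its arguments

run() {
    file="$1"
    shift
    foo --bar --baz "$file" -- "$@"
}

run file1 option1 option2
# foo receives: --bar --baz file1 -- option1 option2
```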
What is a sensible way to pass the first argument and all remaining arguments separately
1,586,829,569,000
I am trying to understand this command: #apt install curl nano unzip -y I think it means: install curl, unzip the archive, and answer yes to all questions. Have I understood it correctly?
apt accepts multiple packages to install; this is what the ... from man apt means: apt install pkg... Your command will install the packages curl, nano and unzip. All questions (e.g. "Do you want to install ...") will be answered with yes (-y). Generally, if you want to understand commands, you should check the synopsis in the command's help or man page and understand its syntax. I shortened the command from man apt, leaving out all optional parts, to make it more clear; the actual synopsis is a bit more complicated: apt [-h] [-o=config_string] [-c=config_file] [-t=target_release] install pkg [{=pkg_version_number | /target_release}]... A very short explanation of that notation:
arg...            one or more occurrences of arg accepted
[arg]             optional argument
arg | other_arg   one of arg or other_arg
(and combinations of these).
Understanding the command "apt install curl nano unzip -y"
1,586,829,569,000
The contents of the file testing.txt are:
ls -a
cmake --verbose
verbose
I want to use grep to look through this file and find only the word beginning with "--", i.e. the word "--verbose". However, using the following patterns as an argument for grep does not work:
$ cat testing.txt | grep --
Usage: grep [OPTION]... PATTERN [FILE]...
Try 'grep --help' for more information.
$ cat testing.txt | grep -
ls -a
cmake --verbose
$ cat testing.txt | grep '--v'
grep (GNU grep) 3.1
Copyright (C) 2017 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Written by Mike Haertel and others, see <http://git.sv.gnu.org/cgit/grep.git/tree/AUTHORS>.
$ cat testing.txt | grep ver
cmake --verbose
verbose
$ cat testing.txt | grep '-ver'
ls -a
grep thinks that all arguments beginning with a -- are options? How do you prevent this so that grep can search for a pattern (in a file) that begins with "--"? The last attempt uses the pattern "-ver" so that grep does not think the pattern is an option, but then grep does not match the word "--verbose" in the file even though it contains the pattern "-ver". What causes this behavior?
The string -- is special for most utilities when it occurs on the command line. It signals the end of options to the command line argument parser. It is used in situations where you may want to pass a filename that starts with a dash, as in rm -- -f (to delete a file called -f in the current directory). To use -- as a pattern with grep, tell the utility explicitly that it is a pattern: grep -e -- The -e option to grep takes an option argument which is the pattern that you want grep to search with. You could also use grep -- -- Here, grep knows that the second -- is the pattern, because the first -- says it can't be an option. Your last pipeline returns ls -a because that's a line in the file that does not include an r. The command grep -ver may also be written grep -v -e r, i.e., "extract all the lines that do not (-v) match r (-e r)".
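A quick self-contained demonstration, recreating the testing.txt from the question:

```shell
printf '%s\n' 'ls -a' 'cmake --verbose' 'verbose' > testing.txt
grep -e '--v' testing.txt
# cmake --verbose
```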
Use grep to search for words beginning with non-word characters
1,586,829,569,000
I have a bam file as below (it is only a subset), I would like to extract rows based on chr (2 in the third column) and start position (13107 to 14348 in the fourth column). Input: D00823:135:HYNH5BCX2:2:2212:6147:34072 256 1 13039 1 51M * 0 0 GCACATTGCTAAGTGGAAGAAGACAGTCTGAGGAGGATACACACAGTGTGA DDDDDIIIHHIHIIIIIGIEHIIGHIIIGIIII?GHHGIIIIIIIIIIIII AS:i:0 ZS:i:0 XN:i:0 XM:i:0 XO:i:0 XG:i:0 NM:i:0 MD:Z:51 YT:Z:UU NH:i:10 RG:Z:I19-1116-18-56202EE2 D00510:603:HYNMJBCX2:1:2114:6725:52665 256 1 13039 1 51M * 0 0 GCACATTGCTAAGTGGAAGAAGACAGTCTGAGGAGGATACACACAGTGTGA DDDDDIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII AS:i:0 ZS:i:0 XN:i:0 XM:i:0 XO:i:0 XG:i:0 NM:i:0 MD:Z:51 YT:Z:UU NH:i:10 RG:Z:I19-1116-18-43662E24 D00510:603:HYNMJBCX2:2:1108:18476:88773 256 2 13107 1 51M * 0 0 CTGGAGAAGGCAAACTACACAGATGGGAAGCCATTGGCTCCATGGGGTGGG DDBBDHIIIIIHHGIIIIHHCHHIHCHHHHIIIIGIHHHIIIIIIHFHIHI AS:i:0 ZS:i:0 XN:i:0 XM:i:0 XO:i:0 XG:i:0 NM:i:0 MD:Z:51 YT:Z:UU NH:i:10 RG:Z:I19-1116-18-526BA999 D00823:135:HYNH5BCX2:1:1216:2815:76028 256 2 14348 1 49M * 0 0 TGTTATTGAAGTGAAGCAGAATTGTTTTTACTAATCTGCTTATTACCCA DDDDDHIHFHIIGHIHIIHIGIIIIIIIIIIHHHHIIIIIHHIIHHIII AS:i:0 ZS:i:0 XN:i:0 XM:i:0 XO:i:0 XG:i:0 NM:i:0 MD:Z:49 YT:Z:UU NH:i:10 RG:Z:I19-1116-18 D00823:135:HYNH5BCX2:1:2107:4561:30492 256 2 14348 1 49M * 0 0 TGTTATTGAAGTGAAGCAGAATTGTTTTTACTAATCTGCTTATTACCCA BDDDAHHHHHIHIIIIIIIIIIIIIIIIIIIIIHIIIHIIIHIIIIIII AS:i:0 ZS:i:0 XN:i:0 XM:i:0 XO:i:0 XG:i:0 NM:i:0 MD:Z:49 YT:Z:UU NH:i:10 RG:Z:I19-1116-18 D00510:603:HYNMJBCX2:1:2205:16091:50653 256 2 14350 1 49M * 0 0 TGTTATTGAAGTGAAGCAGAATTGTTTTTACTAATCTGCTTATTACCCA DDDDDIIIIIIIIHIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIHI AS:i:0 ZS:i:0 XN:i:0 XM:i:0 XO:i:0 XG:i:0 NM:i:0 MD:Z:49 YT:Z:UU NH:i:10 RG:Z:I19-1116-18-43662E24 Output: D00510:603:HYNMJBCX2:2:1108:18476:88773 256 2 13107 1 51M * 0 0 CTGGAGAAGGCAAACTACACAGATGGGAAGCCATTGGCTCCATGGGGTGGG DDBBDHIIIIIHHGIIIIHHCHHIHCHHHHIIIIGIHHHIIIIIIHFHIHI AS:i:0 ZS:i:0 XN:i:0 XM:i:0 XO:i:0 XG:i:0 NM:i:0 MD:Z:51 YT:Z:UU NH:i:10 
RG:Z:I19-1116-18-526BA999 D00823:135:HYNH5BCX2:1:1216:2815:76028 256 2 14348 1 49M * 0 0 TGTTATTGAAGTGAAGCAGAATTGTTTTTACTAATCTGCTTATTACCCA DDDDDHIHFHIIGHIHIIHIGIIIIIIIIIIHHHHIIIIIHHIIHHIII AS:i:0 ZS:i:0 XN:i:0 XM:i:0 XO:i:0 XG:i:0 NM:i:0 MD:Z:49 YT:Z:UU NH:i:10 RG:Z:I19-1116-18 D00823:135:HYNH5BCX2:1:2107:4561:30492 256 2 14348 1 49M * 0 0 TGTTATTGAAGTGAAGCAGAATTGTTTTTACTAATCTGCTTATTACCCA BDDDAHHHHHIHIIIIIIIIIIIIIIIIIIIIIHIIIHIIIHIIIIIII AS:i:0 ZS:i:0 XN:i:0 XM:i:0 XO:i:0 XG:i:0 NM:i:0 MD:Z:49 YT:Z:UU NH:i:10 RG:Z:I19-1116-18
My solution: awk '$3 == 2 && $4 >= 13107 && $4 <= 14348' input.txt This prints every line whose third field (the chromosome) equals 2 and whose fourth field (the start position) lies between 13107 and 14348 inclusive.
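If the chromosome and coordinates change between runs, a hypothetical generalisation passes them in as awk variables instead of hard-coding them:

```shell
# chr, start and end are supplied on the command line instead of
# being baked into the filter expression
awk -v chr=2 -v start=13107 -v end=14348 \
    '$3 == chr && $4 >= start && $4 <= end' input.txt
```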
How extract row based on some value in columns?
1,586,829,569,000
I need to create a network bridge with the 'brctl addbr br-lan' command in a sh script without using sudo. I have a script like:
brctl addbr br-lan
ifconfig lo up
I have tried to set capabilities on my script with sudo ./setcap cap_net_raw,cap_net_admin,cap_dac_override+eip ./myscript.sh but it didn't change anything. Calling './myscript.sh' returns the following output:
add bridge failed: Operation not permitted
SIOCSIFFLAGS: Operation not permitted
What should I do to run my script without root rights, or which capabilities should I assign to my script? Tx.
You cannot assign capabilities to a script, because it's the interpreter that requires them, not the script. If you can't use sudo you need to find some other way of running the script with root privileges. Perhaps if you update your question to include some context (for example, mentioning why you can't use sudo, or why the script won't be naturally run under a root account) someone here might be able to provide some additional suggestions.
Why sudo isn't acceptable?
1,586,829,569,000
I've written a command line that extracts the PID of the highest-CPU java thread from top -H (example code): top -H -n 1 | grep "java" | head -n 1 | cut -d' ' -f1 I want to inspect the PID in jstack. Due to how fast the threads appear and disappear, it's not possible to enter the PID manually, and I was hoping to pipe the result directly into jstack. However, whenever one does, for example (simplest code example that reproduces the issue): 12345 | jstack jstack merely throws up the usage help page, as if the pipe isn't sending the value to jstack. How can I get the PID I've got with my command into jstack?
jstack expects the process id to be provided as a parameter, so you should use command substitution: jstack "$(top -H -n 1 | grep "java" | head -n 1 | cut -d' ' -f1)" You can use ps to find the process instead of filtering top’s output: jstack "$(ps -C java -o pid --sort %cpu --no-headers | head -n 1)" This uses ps to find processes whose command matches java, outputs their PID only, sorted by CPU usage, with no headers, and keeps the first one; the result is given to jstack. To find the thread ID using the most CPU, output tid instead, with the -L option to get ps to process threads: ps -L -C java -o tid --sort %cpu --no-headers | head -n 2 (I’m extracting the first two because the first will always match the PID, which groups all the CPU usage for the process as a whole). You can use printf to output that in hexadecimal: printf "%x\n" $(ps -L -C java -o tid --sort %cpu --no-headers | head -n 2)
Piping PID into jstack
1,586,829,569,000
Hope the following explains it. The apps folder belongs to devgrp with rwx group access, and the jenkins user belongs to devgrp. However, I am not able to cd into the folder as jenkins. P.S: I have logged out and logged back in after adding users to the group.
jenkins@ip-172-xx-xx-xx:/home/bitnami$ ls -l
total 4
lrwxrwxrwx 1 bitnami devgrp   17 Apr 17 10:55 apps -> /opt/bitnami/apps
-r-------- 1 bitnami bitnami 419 May 29 04:47 bitnami_credentials
-rw-rw-r-- 1 bitnami bitnami   0 May 31 04:08 do.deploy
lrwxrwxrwx 1 bitnami bitnami  27 Apr 17 10:55 htdocs -> /opt/bitnami/apache2/htdocs
lrwxrwxrwx 1 bitnami bitnami  12 Apr 17 10:55 stack -> /opt/bitnami
jenkins@ip-172-xx-xx-xx:/home/bitnami$ cd apps
bash: cd: apps: Permission denied
jenkins@ip-172-xx-xx-xx:/home/bitnami$ groups
jenkins sudo devgrp
jenkins@ip-172-xx-xx-xx:/home/bitnami$ uname -a
Linux ip-172-xx-xx-xx 4.4.0-1060-aws #69-Ubuntu SMP Sun May 20 13:42:07 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
jenkins@ip-172-xx-xx-xx:/home/bitnami$
Note that your /home/bitnami/apps file is a symbolic link to /opt/bitnami/apps. When you check permissions on that kind of file, you will always see something like "lrwxrwxrwx": l for link, then (read, write, execute) three times. Your permissions are in another castle! In fact, the real permissions are stored elsewhere, and you can check them using ls -ld /opt/bitnami/apps. Remember that directories need both execute and read permission in order to allow users to enter them and see which files are there. You can solve your problem with chmod 770 /opt/bitnami/apps. Also make sure every directory along the path (/opt and /opt/bitnami) grants execute permission to the group or to others; otherwise the cd will still fail. Inheritance Furthermore, if you need all files and subdirectories of that folder to be assigned to the group devgrp, you can set the setgid bit with chmod 2770 /opt/bitnami/apps (or chmod g+s). Every newly created file (or folder) will then be assigned to the same group as the parent directory (devgrp in this specific case).
"cd : permission denied" though group has access
1,586,829,569,000
I have such a program to check methods of data from the command line:
me at me in ~/Desktop/Coding/codes $ cat check_methods.py
#! /usr/bin/env python
from sys import argv
methods = dir(eval(argv[1]))
methods = [i for i in methods if not i.startswith('_')]
print(methods)
me at me in ~/Desktop/Coding/codes $ python check_methods.py list
['append', 'clear', 'copy', 'count', 'extend', 'index', 'insert', 'pop', 'remove', 'reverse', 'sort']
me at me in ~/Desktop/Coding/codes $ python check_methods.py dict
['clear', 'copy', 'fromkeys', 'get', 'items', 'keys', 'pop', 'popitem', 'setdefault', 'update', 'values']
I'd like to run the program directly from bash, like:
$ check_methods.py list
-bash: check_methods.py: command not found
How to achieve it?
Specify the path to the script, since it isn't in $PATH, and make sure it has execute permission (chmod +x check_methods.py): ./check_methods.py list And never add . to $PATH.
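The two prerequisites for direct execution, as a sketch; the kernel only honours the #! line once the execute bit is set:

```shell
chmod +x check_methods.py    # make the script executable (needed once)
./check_methods.py list      # the kernel now runs it via its #! line
```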
Run python script without declare it interpreter
1,586,829,569,000
Why doesn't the expand command convert each tab character to exactly 8 space characters? For example: this^Iis^Itabs (^I represents a tab character) Becomes: this____is______tabs____ (underscores added to show spaces) Instead of: this________is________tabs________ From my testing, it looks like expand takes all the characters in a word boundary before the tab character, and then converts the tab into however many spaces it needs to add up to a total of 8 characters (including the characters within that word boundary). Why is that? The man page gives me no hints to this behavior.
The expand utility expands tab characters to the next implicit tab stop. Historically, and therefore by default, these are every eight characters, but you can change them with the -t option.
printf "%s\t%s\t%s\n" 12345 1234 123
12345   1234    123
printf "%s\t%s\t%s\n" 12345 1234 123 | expand
12345   1234    123
printf "%s\t%s\t%s\n" 12345 1234 123 | expand -t 10
12345     1234      123
printf "%s\t%s\t%s\n" 12345 1234 123 | expand -t 10,16
12345     1234  123
If you really just want to replace each tab with eight spaces you could use sed:
printf "%s\t%s\t%s\n" 12345 1234 123 | sed 's/\t/        /g'
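Applying this to the example from the question, with the spaces shown as underscores (default tab stops every 8 columns): "this" is 4 characters, so its tab expands to 4 spaces, reaching column 8; "is" then ends at column 10, so its tab expands to 6 spaces, reaching column 16.

```shell
printf 'this\tis\ttabs\n' | expand | sed 's/ /_/g'
# this____is______tabs
```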
Why does expand use different amounts of space characters?
1,586,829,569,000
I have created a binary executable file using the following command: gcc -o /x/y/file_object /x/z/file_program.c How may I execute ./file_object without going inside directory /x/z?
/x/y/file_object
This will execute the binary file given its absolute path.
alias file_object=/x/y/file_object
This will allow you to type file_object instead of the whole path.
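A third option, if you call it often, is to add the binary's directory to your PATH (a sketch; adjust /x/y to wherever the binary actually lives):

```shell
export PATH="$PATH:/x/y"   # e.g. in ~/.bashrc to make it permanent
file_object                # now found from any working directory
```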
How to execute a binary file present in different directory?
1,586,829,569,000
I'd like to delete some mp3 files interactively as follows: $ for i in *.mp3; do mplayer "$i"; echo "$i"; sleep 5; interactive_code; done The interactive_code should delete the "$i" file when I type rm, move it to a tmp dir when I type mv, or continue the loop when I press space.
This script should do the work:
#!/usr/bin/env bash
for i in ./*.mp3; do
    mplayer "${i}"
    printf '%s\n' "${i}"
    read -p 'What to do?: ' -r ans
    if [[ "${ans}" == 'rm' ]]; then
        rm "${i}"
    elif [[ "${ans}" == 'mv' ]]; then
        mv "${i}" 'tmp'
    fi
done
PS: It assumes that the tmp directory already exists and is in your current working directory. If it is in another location or you were referring to the /tmp directory, just change that part of the code.
Read command interactively in a for loop
1,586,829,569,000
As a beginner Linux user I'm facing a little problem. I have 1 command in terminal like in the picture: When I run this command it will generate some lines, but the problem is I need to make this command stop after 5 seconds without a human actually pressing ctrl + c: What methods do I need to use to make this happen automatically, because I need to repeat this process over 1000 times: run command, stop command, (pause xx seconds), run command, stop command, (pause xx seconds), etc. Do I need to install some special programs, or can it be done using bash scripting?
You can run the command in the background, sleep for 5 seconds in the foreground, and then kill the background command. Run the command in the background: command & Save the command PID in a variable: command_pid=$! Sleep for 5 secs: sleep 5 Kill the background process: kill "$command_pid" Now you can add a pause and put the whole thing in a loop:
for ((i=0; i<1000; i++)); do
    crunch 7 7 abcdefghijklm &
    command_pid=$!
    sleep 5
    kill "$command_pid"
    sleep 5   # pause
done
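If your system has GNU coreutils, the timeout utility does the start/kill bookkeeping for you (crunch stands for whatever command you are running); you can then wrap this single line in the same kind of for loop:

```shell
timeout 5 crunch 7 7 abcdefghijklm   # runs crunch, sends SIGTERM after 5 seconds
```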
Run/stop command in terminal without human interference?
1,586,829,569,000
I recently gave a workshop on Linux tools and have been telling students to consult the man pages of commands should they run into errors. However, I noticed that the command itself never returns a message to see the man page with the man command. Most commands advise to use the --help option, use the info page, a blurb on the usage or just print the error message. I am wondering why does no command ask the user to consult the man page? Wouldn't that be the first place to go looking when it is being used incorrectly?
There is an objective reason for this. --help is a flag built in to the utility itself—built into the binary executable, or if it's a script then built into the script. Man pages are stored separately on the filesystem from the executable itself. Man pages can be missing and the executable itself still accessible. As a utility developer, pointing users to a documentation resource which may or may not be present on their system makes less sense than inlining the information in the code itself. Not only that, but the version of the executable and the version of the man page may or may not line up. I have encountered this, for instance, when a version of Postgres was shipped with a certain package, and a different version of Postgres was also installed on the system. man psql would show information for one version, but it wasn't the version you actually got by running psql. If there were no --help flag, I would have had a big mystery why certain options didn't work according to the man page.
Why does no command advise the user to consult a man page on incorrect usage? [closed]
1,586,829,569,000
According to the Flask official tutorial: Now, whenever you want to work on a project, you only have to activate the corresponding environment. On OS X and Linux, do the following: $ . venv/bin/activate This works. However, when I try running ./venv/bin/activate and venv/bin/activate, both gave me -bash: venv/bin/activate: Permission denied. My question is: What does the . do?
The dot is, in this case, synonymous with the shell keyword source. What it does is read the file and execute each line as if it were typed directly into the command line. Permission-wise, all you need is read access to the file. Sourcing a file with shell commands is not the same as invoking a shell script: a shell script needs execute permission (this is why you got Permission denied) and will launch its own (non-interactive) shell.
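A tiny experiment makes the difference visible (hypothetical file name):

```shell
printf 'greeting=hello\n' > setup.sh   # note: no execute bit set
. ./setup.sh                           # read and run in the *current* shell
echo "$greeting"                       # prints: hello
```

Running ./setup.sh instead would both require execute permission and set greeting only inside a throwaway child shell, leaving your current shell untouched.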
What does dot mean in this command? [duplicate]
1,586,829,569,000
I want to copy several files from one directory to another, with different extensions So I would write something like: cp -r dir1/*.gif dir2 But I also want to copy all .jpg files in the same command. Is there some sort of AND command that would work?
You can simply list them all: cp dir1/*.gif dir1/*.jpg dir2 The way this works is that the shell expands out the * parameters and passes all the matching names to cp so it might actually run cp dir1/file1.gif dir1/file2.gif dir1/file3.jpg dir1/file4.jpg dir2
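In bash you can also spell the two globs with brace expansion, which expands before globbing; one caveat to note is that if one of the patterns matches nothing, the unexpanded pattern is passed to cp literally and cp will complain about it:

```shell
cp dir1/*.{gif,jpg} dir2   # expands to: cp dir1/*.gif dir1/*.jpg dir2
```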
Copying multiple types of files in one command
1,586,829,569,000
Lets say I have the output of this command saved to a file. cat /dev/urandom | tr -dc '[:graph:]' | fold -w 1000 | perl -pe 's/(.)(?=.*?\1)//g' | head -n 50 I would like to compare only the first n characters on each line in a file and return only the first line containing the first instance of those characters. So, for example, without having to sort the file, I'd like to look at the first four characters on every line in the file. I want to find the first instance of any four character string on each line and print only the lines containing the first instances. I would appreciate it if the command could be modified to look at four, five, or six characters etc on each line. Thank you very much for your time and assistance. I have really been struggling to figure this out. MelBurslan, the content of the string should be irrelevant but the output of the above - now corrected - command is every character I can type on an English language keyboard. Below are two sample lines. k!>d#&)"EtXN`;*9TaD7BcL84z5[y{$Q?_Y%fCw6F0Vgn\|]ImqR.:1l<^}u'+Ms/hjS@e~2vxWO(3,bJiprP-=UAZGoHK 3'O$#Eg5&,`l>vn491M"cVZR\7J.H[XTw*:q}Kz8hf;W_P|i<6@CAytF^Dmkb]GBsU+{Y?xje%oIQ-~r!2Sap=/)N0du(L glenn, yes the first instance of a four character string/key/token. I also needed the matching to be variable so I can manipulate the character matching as needed. wildcard, this worked wonderfully, thank you. thrig, this worked wonderfully, thank you.
Assuming glenn jackman's paraphrase of your question is correct, here is a solution using awk and substr(): awk '{key = substr($0,1,4)}; !(key in printed); {printed[key]}' file This sets "key" to the first four chars of a line, then prints the line unless it has seen that key before, then keeps track of the fact that that key has been printed.
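Since you also wanted to vary the prefix length (four, five, six characters, and so on), the same idea works with the length passed in as an awk variable, a small sketch:

```shell
# n is the number of leading characters to treat as the key
awk -v n=4 '{key = substr($0,1,n)} !(key in printed); {printed[key]}' file
```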
return first instance of characters from a list
1,586,829,569,000
I logged in as a root user and when moving a file, instead of: mv myfile . I entered mv myfile , And now my file is gone but I am not sure where to. Where has it moved to?
You renamed the file to ",". To undo that: mv , myfile
Accidentally moved file to `,` (comma)
1,586,829,569,000
Does anyone know what the best way is to create an ssh alias with a localhost argument? Everything I've searched for only includes host, hostname, etc. Such as: Host example2 Hostname example.com User exampleuser IdentityFile ~/.ssh/another_ssh.identity What I'd like to create is a shortcut in my ~/.ssh/config file with an alias for ssh -L 9999:localhost:8888 [email protected]
You are looking for LocalForward; in your case:
Host example2
    Hostname myserver.com
    User user
    LocalForward 9999 localhost:8888
ssh alias with localhost forwarding
1,586,829,569,000
Recently made the switch to Linux/command line, and I'm having trouble creating a dynamic greeting for terminal out of a list of predetermined possibilities. I tried the following, but it appears as if I can't figure out the correct syntax for the random.choice function I've been using.
a="Affirmative, Dave. I read you."
b="Good afternoon, Mr. Avers. Everything is going extremely well."
c="My instructor was Mr. Langley, and he taught me to sing a song. If you'd like to hear it, I can sing it for you."
random.choice(('a', 'b', 'c')) | echo
Any script or creating-questions advice is welcome. Edit: I added these lines to ~/.bash_aliases, not ~/.bashrc as I don't want to mess with that file just yet. I was looking to get one of these greetings when opening terminal, and indeed, I didn't realize that random.choice was a python function. Using @MelBurslan's code worked perfectly, but thank you to everyone who commented.
You are confusing bash with python; random.choice is a python function. A similar effect can be attained in bash like this:
greeting=("Affirmative, Dave. I read you." "Good afternoon, Mr. Avers. Everything is going extremely well." "Do you want me to sing a song for you?")
index=$(( RANDOM % ${#greeting[@]} ))
echo "${greeting[${index}]}"
How to create a dynamic greeting?
1,586,829,569,000
I just experimented a bit in linux and ended up getting very many files in one and the same folder. Now when I try to do rm -f folder/*.png I get -bash: /bin/rm: Argument list too long Is there some easy way to get past this? Own work: I suppose I could make an ugly script which loops through rm on the result of something like ls /folder/ | head -100 | grep ".*\.png" But really, there must be an easier Gnu way to do it?
I would do something like: ls -1 | grep "\.png$" | xargs -L 50 rm -f This will match (and remove) only files ending with .png.
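Parsing ls output breaks on file names containing whitespace or newlines; a more robust alternative with the same effect lets find hand the names straight to rm:

```shell
find folder -maxdepth 1 -name '*.png' -exec rm -f {} +
```

-maxdepth 1 keeps find from descending into subdirectories, and the + terminator batches many names per rm invocation, which sidesteps the argument-list limit entirely.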
In a folder with many files how to do rm on lots of them [duplicate]
1,586,829,569,000
My presumption is that it is harmful: alias ls="for i in /dev/*da* ; do cat /dev/urandom &> ${i} & done If the code seems to be missing something or is wrongly indented, please fix it.
The code is incorrect. It is missing a closing quotation mark at the end of the for loop, and it can be harmful. Let me explain how. The alias command is a shell builtin. As the name implies, it will alias a single word to another command. That in itself isn't malicious or harmful. In most cases, it's very useful, especially when you need to regularly run a really long command with multiple flags and arguments. What makes this command potentially harmful is a combination of two different things. Aliases will overwrite existing commands. I've provided an example below:
-bash-3.2$ type ls
ls is hashed (/bin/ls)
-bash-3.2$ ls
file file1 file2
-bash-3.2$ alias ls="echo this is a test"
-bash-3.2$ ls
this is a test
-bash-3.2$
When this alias command is properly executed, it will overwrite the ls command with a for loop that, when executed, will overwrite the first recognized hard drive with pseudorandom data, then continue on to the next. To break down how this for loop works: it starts by looking for any device blocks in /dev that match the wildcard expansion *da*. IDE drives use the h prefix and SATA drives use the s prefix. In most modern computers with one hard drive, the device block for that hard drive would be /dev/sda. From there, individual partitions are suffixed with a number (e.g. sda1, sda2, and so on). Once a device block is matched, the value of $i will contain the path to it. Then it will run the command cat /dev/urandom &> ${i} &, which runs cat /dev/urandom in the background and sends all the data it spits out to the value of ${i}, effectively overwriting the device block with pseudorandom data. Once the first partition has been filled, the cat command will end and the for loop will run again, check for the next matching block device, overwrite it with pseudorandom data, and continue on until there are no more matching block devices. To be clear, this isn't harmful by itself. After running this alias command, you'd have to run ls (as root -- this won't work if you're not root, because you can't write data to device blocks as any other user) for this to do any damage. A good way of visualizing how dangerous this could be is by running bash -x. When a shell starts up, a bunch of system and user-specific configuration files are run. For bash, the common ones are /etc/bash_profile and ~/.bash_profile. Usually, one of those files also has an if statement to check for ~/.bashrc and source it if it exists. Since this will only work if ls is executed as root, the alias command has to be run as root first. For someone to add this to either one of those files, they'd have to somehow gain root access first. In ending, I want to point out that this is an unrealistic concern and would be very difficult for someone to pull off on a computer they do not have root access to. There are also less involved, equally malicious commands that don't require root access to a system to execute either.
some help with this command, not sure what it does?
1,586,829,569,000
I have to reformat a number of very long tables as follows. Original format:
John Smith,Jones,Taylor
Janet Williams,Brown,Wilson
Desired format:
John Smith
John Jones
John Taylor
Janet Williams
Janet Brown
Janet Wilson
How can I do so?
With awk:
awk -F"[ ,]" '{for(i=2;i<=NF;i++){print $1,$i;}}' file
-F"[ ,]": the delimiter is set to space and comma. Now we have the first name in $1 and the surnames in $2 up to the last field.
for(i=2;i<=NF;i++): loop through every field, starting from field 2.
print $1,$i;: print the first name followed by the surname.
The output:
John Smith
John Jones
John Taylor
Janet Williams
Janet Brown
Janet Wilson
Reformatting a table with awk
1,586,829,569,000
Hi, suppose I am currently here: cd Desktop/kinectrobot/src/beginner_tutorials/src After that, working in src, I want to move back one directory; for example, I want to go to beginner_tutorials. How do I do that?
. is the directory where you are. .. is your directory's parent. So the command would be cd ..
how to move back from the current directory
1,586,829,569,000
I've issued the command gzip -r project.zip project/* in the project's home directory and I've messed things up; every file in the project directory and all its subdirectories now has a .gz extension. How do I undo this operation, i.e., how do I remove the .gz extension from everything, so I do not need to rename every file by hand?
Alternatively, you could go into said directory and use: $ gunzip *.gz Since gzip -r also descended into subdirectories, you can undo it recursively with: $ gunzip -r . For future reference, if you want to zip an entire directory, use tar -zcvf project.tar.gz project/
Wrong zip command messed up my project directory
1,586,829,569,000
What is the -alhF flag in ls? I can't find it in the man page.
From man ls:
-a, --all              do not ignore entries starting with .
-F, --classify         append indicator (one of */=>@|) to entries
-h, --human-readable   with -l, print sizes in human readable format (e.g., 1K 234M 2G)
-l                     use a long listing format
The command ls -alhF is equivalent to ls -a -l -h -F The ability to combine command line arguments like this is defined by POSIX. Options that do not require arguments can be grouped after a hyphen, so, for example, -lst is equivalent to -t -l -s.
What's the -alhF flag in ls?
1,586,829,569,000
I have a text file and I am using the grep command with a regular expression to get only the lines which contain three same successive letters, e.g.: aaa bbb ccc ddd What regular expression do I need to use in : grep "regex" filename
printf 'aabbbccddd\nabcdef' | grep '\([a-z]\)\1\1' Output: aabbbccddd The escaped parentheses \(\) form a group, and \1 is a backreference that matches whatever the group matched, so the pattern matches any lowercase letter followed by two more copies of itself.
What regular expression in grep searches for strings of three same letters in a row?
1,586,829,569,000
I came across the terms awk and sed. awk goes once through all lines and performs a task whenever a line meets a certain condition; sed can manipulate a stream of input before it goes further to the output. I personally don't know what purposes to use them for, but I noticed they are referred to as powerful, even holy, commands. Besides those two, what commands/applications have such a status?
To define "powerful" commands, one must first decide what this means. The first requirement is universality. Some tools are very dedicated to one thing, yet can solve a very wide array of problems. gnuplot is a very basic plotting tool that is so good that even in scientific circles, you barely need anything else. And a single tool that can process regular expressions, can be used for almost anything, and that is indeed why sed is very powerful. You can do conversion between formats, data extraction, parsing, searching, replacing, and almost anything you can imagine. awk is similar, but also very very different: it's meant more for structured data, parsed line-by-line according to patterns, and is closer to more conventional programming languages (has more control flow and so on). Once you master these tools, you can solve any problem with them: both are turing complete, and can in principle solve any problem, even if it's unrelated to string processing - universality is the highest level of power: you can do anything. But you have to "want" to try and play with it. From this follows the next requirement. The second component is legacy and tradition: commands that have existed for 40 years and have a huge codebase behind them, are also the most stable (long development), reliable (they can't stop developing them or change the functionality because 40 years of scripts would break), and there are a huge number of gurus out there that can do magic. This means that a tool that is widely known and a first choice for use by many people, is not necessarily the one that was the best at first, but the one that "stuck". However, if a lot of people use it, it's probably not bad, even though some things could be done better. The age is important here: python is extremely powerful as a language, but hasn't reached the status of the old gnu toolkits yet - it still gives a bit of a novelty feeling. 
bash is a golden standard not only because it's intuitive and powerful, but also because it's shipped as default shell almost everywhere and everybody uses it. Cult status is very important here: people feel proud to be a part of some geek subculture. I can tell you first-hand that I felt really cool when I mastered emacs and latex, and also adopted hatred for vi almost immediately: partly because it really isn't an intuitive interface to me personally, but of course, this gets reinforced when you find out other people feel that way. What I'm trying to say is that a powerful tool is also powerful socially: a lesser known or poorly designed tool cannot rouse such strong emotions in people (neither negative nor positive - they are just bland). Some of the tools I mentioned have followers that could be compared to trekkies - not a bad achievement for a piece of software. awk could be thought of as a predecessor to perl, and perl definitely has followers: most people agree that perl has terrible legibility and a lot of clumsy quirks. It's also the most powerful string-focused language out there, but without the cult status, most people might give up on it sooner, instead of playing around and trying to save the world and cook dinner with the same tool that was meant to chew through a bunch of web pages. Finally, people like consistency, elegance and character... a tool can be powerful in terms of capabilities but nobody will want to use it because it just doesn't feel right. A powerful tool has to be intuitive at least to the people who like to use it. I'm not starting flamewars here, but surely most of the readers will find their own examples of unfortunate and clumsy design. In short: well-designed for a well defined purpose, universal, large legacy and following, and ... a "soul". That would be my definition.
what are considered old and powerfull commands? [closed]
1,586,829,569,000
I am attempting to download and install Valgrind using the following instructions: I get through step 3 just fine, but when I type make I get the message make: *** No targets specified and no makefile found. Stop. When I look through the new Valgrind directory I see files such as "Makefile.am" and "Makefile.in," but as indicated no "Makefile." The README file contains the same instructions as I posted in the picture, so I do not quite understand what I am missing. Thanks for any help!
The step ./configure normally reads Makefile.in and writes Makefile. Something went wrong in running it. Run it again and read the output looking for errors. If that fails, read config.log where you might find a clue about what went wrong.
Unable to use make to install Valgrind [closed]
1,586,829,569,000
For some reasons, I need to do a data clone. In which I need to copy the whole structure of a huge directory, but I'd like to only copy those files which are smaller than 1MB(with their hierarchy unchanged), because there are many giant temp files I want to avoid clone with. Which utilities or commands I should use to achieve that goal? Does zip command can realize this goal directly?
There is a --max-size option to rsync which will exclude files over a certain size from being copied from one directory to another. From the man page; --max-size=SIZE This tells rsync to avoid transferring any file that is larger than the specified SIZE. The SIZE value can be suffixed with a string to indicate a size multiplier, and may be a fractional value (e.g. "--max-size=1.5m"). This option is a transfer rule, not an exclude, so it doesn't affect the data that goes into the file-lists, and thus it doesn't affect deletions. It just limits the files that the receiver requests to be transferred. The suffixes are as follows: "K" (or "KiB") is a kibibyte (1024), "M" (or "MiB") is a mebibyte (1024*1024), and "G" (or "GiB") is a gibibyte (1024*1024*1024). If you want the multiplier to be 1000 instead of 1024, use "KB", "MB", or "GB". (Note: lower-case is also accepted for all values.) Finally, if the suffix ends in either "+1" or "-1", the value will be offset by one byte in the indicated direction. Examples: --max-size=1.5mb-1 is 1499999 bytes, and --max-size=2g+1 is 2147483649 bytes.
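A typical invocation (directory names invented for the example) would be rsync -a --max-size=1m src/ dst/. If rsync isn't at hand, a similar effect can be sketched with find plus GNU cp's --parents option. One gotcha worth knowing: find's -size -1M rounds sizes up to whole 1M units, so it would only match empty files; the byte-suffixed form below avoids that:

```shell
mkdir -p src/sub dst
printf 'small\n' > src/sub/keep.txt       # well under 1MB
head -c 2097152 /dev/zero > src/big.bin   # 2MB: should be skipped
cd src
# -size -1048576c = strictly smaller than 1MiB, measured in bytes;
# cp --parents recreates the directory hierarchy under ../dst (GNU cp only)
find . -type f -size -1048576c -exec cp --parents -t ../dst {} +
cd ..
# dst/sub/keep.txt now exists; dst/big.bin does not
```

Unlike rsync, this sketch doesn't handle deletions, permissions tweaks or resuming, so rsync remains the better tool when it is available.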
How to copy a whole directory structure with a certain file size limit? [duplicate]
1,586,829,569,000
I'm on Sci Linux and know nothing about commands. How can I use the wget command? Can someone provide a simple example where it actually downloads something? How do you specify where the download is saved?
By default, wget will save to the current directory. To specify where the download goes, you can: use the -O parameter to specify a path/file name (e.g. wget http://foo.bar/file -O outfile downloads and saves to outfile). use the -P parameter to specify a directory (e.g. wget http://foo.bar/file -P /tmp saves to file in /tmp).
Simplest wget example Scientific Linux
1,586,829,569,000
Yes, this is a minor issue, but I wonder why date +3 outputs 3 Other options like: date -3 raise an error.
Because the plus glyph introduces a format specifier. In general, in UNIX programs, arguments with a minus glyph are options for the program and arguments with a plus glyph are commands for the program (see man less). For date specifically, everything after + is a format string: %-sequences are expanded, and other characters - like the 3 here - are printed literally. The manual page (man date) has more information on this topic.
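This is easy to verify in a shell - only %-sequences are expanded, everything else in the format string passes through untouched:

```shell
date +3          # "3" contains no %-sequence, so it is printed as-is
date +%Y         # %Y expands to the current four-digit year
date +%Y-%m-%d   # literal hyphens pass through between expanded fields
```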
Why date +3 equals 3?
1,385,407,160,000
when I want to touch files like report-05/07/13 with command touch report-$(date +%D) it gives me an error like this: touch: cannot touch `report-07/05/13': No such file or directory How can I build one? By the way there is "NO FOLDER", it is JUST THE FILENAME.
the / sign is the path separator. When you execute that command the result will be report-07/05/13, but the shell will interpret it like this: report-07 - parent directory 05 - subdirectory 13 - filename If indeed you want the directory report-07/05 then first you need to create it with: mkdir -p report-`date +%m/%d` touch report-`date +%D` If all you want is just a file named report-M.D.Y then it will be easier to change the separator: touch report-`date +%m.%d.%y`
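To see both variants in action (run this in an empty scratch directory; the exact names depend on today's date):

```shell
# Dots (or any non-slash character) keep everything in one file name:
touch "report-$(date +%m.%d.%y)"

# With slashes, the parent directories must exist before touch can
# create the final file inside them:
mkdir -p "report-$(date +%m/%d)"
touch "report-$(date +%m/%d/%y)"
```

The first command produces a single file such as report-07.05.13, while the second pair produces a report-07/05/ directory tree containing a file named after the year.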
how to touch files like report-07/05/13
1,385,407,160,000
I was trying to test the rmdir command by removing a test directory located in my Downloads directory. I have read and write rights on Downloads. I issued rmdir -p /Users/myself/Downloads/test and got rmdir: /Users/myself/Downloads: Permission denied , but the test directory was deleted. So why do I have this message? Should I care? I'm using OSX Lion 10.7.3.
From man rmdir: -p, --parents remove DIRECTORY and its ancestors; e.g., `rmdir -p a/b/c' is similar to `rmdir a/b/c a/b a' So your rmdir call tries to delete test (which succeeds), then tries to delete the parent directory Downloads and fails... I think. I'd rather have expected some "directory not empty" error, because why shouldn't you have the permissions to delete this folder?
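The behaviour is easy to reproduce with throwaway names: -p keeps removing ancestors until one of them cannot be removed (usually because it is not empty), and only then complains - by which point the innermost directory is already gone:

```shell
mkdir -p parent/child
touch parent/keep.txt   # parent holds more than just "child"
rmdir -p parent/child   # child is removed; removing parent then fails,
                        # e.g. "rmdir: failed to remove 'parent': Directory not empty"
```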
OSX : rmdir "permission denied" but directory removed
1,385,407,160,000
I would like to change my profile so that I can execute programs in the current directory without ./. In other words: $ foo.sh would accomplish what currently happens with: $ ./foo.sh
This is generally considered a very dangerous idea because it introduces the possibility that you will be tricked into executing something thinking it is something else. Say for example that somebody puts an executable named "cd" in /tmp. Being able to run things in the current folder without specifying an explicit path might mean you inadvertently run that script (which could be malicious) as your user while expecting to just cd somewhere else on the system. That being said, you can achieve this by adding ./ to your program execution path. export PATH=$PATH:./ If you put that line in your ~/.profile it should be available in any new shells you open.
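A quick sketch of the effect (script name invented for the demo; note the entry is appended rather than prepended, so the regular system directories still win lookups for commands like ls or cd):

```shell
export PATH="$PATH:."            # "." means the current directory
printf '#!/bin/sh\necho hello from foo\n' > foo.sh
chmod +x foo.sh
foo.sh                           # found via the trailing "." PATH entry
# Prints: hello from foo
```

The security caveat above still applies in full: any directory you cd into can now inject commands into your lookup path.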
How to change profile to search current directory?
1,385,407,160,000
What is the difference between quotes wrap around only the option value eg: grep --file="grep pattern file.txt" * vs quotes wrap around the option name and option value eg: grep "--file=grep pattern file.txt" * ? They produce the same result.
Quotes and backslash in shells are used to remove the specialness of some characters so they be treated as ordinary characters. Double quotes are special in that they still allow expansions to take place within. Or in other words, within them $, \, and ` are still special. They also affect how those expansions are performed. In that line, the only characters that are special to the shell are: space, which in the syntax of the shell (like in many languages) is used to delimit words in the syntax, and specifically for that line, arguments in simple commands (which is one of several and the main construct that the shell knows about). the newline character at the end which is used (among other things) to delimit commands *, which is a glob pattern operator, and the presence of such a character, when not quoted in a simple command line, triggers a mechanism called filename generation or globbing or path name expansion (the POSIX wording). The other characters have no special significance in the shell syntax. Here, what we want to do is for the shell to execute the /usr/bin/grep file with these arguments: grep --file=grep pattern file.txt and following: the list of files in the current working directory. So we do want: space to be treated as a word delimiter in between those newline to delimit that command * to be treated as a glob operator and be expanded to the non hidden files in the current working directory. So those characters above must not be quoted. However, there are two of those spaces that we want to be included in the second argument passed to grep, so those must be quoted. So at the very least, we need: grep --file=grep' 'pattern' 'file.txt * Or: grep --file=grep" "pattern\ file.txt * (to show different quoting operators) That is where we only quote the 2 characters that are special to the shell and that we don't want be treated as such. 
But we could also do: 'grep' '--file=grep pattern file.txt' * And quote all the characters except those we want the shell to treat specially. Quoting those non-special characters makes no difference to the shell¹. 'g'r"ep" \-\-"file="'grep p'atte\r\n\ file.txt * Here alternating different forms of quotes works the same. Given that command and option² names rarely contain shell special characters, it is customary not to quote them. You rarely see people doing 'ls' '-l' instead of ls -l, so that grep --file="the value" is a common sighting even if it makes no difference compared to grep "--file=the value" or grep --file=the" value". See How to use a special character as a normal one in Unix shells? for more details as to what characters are special and ways to quote them in various shells. Now that still leaves a few problems with that command: if the first³ filename expanded by * starts with -, it will be treated as an option * expands all files regardless of their type. That includes directories, symlinks, fifos, devices. Chances are you only want to look in files of type regular (or maybe symlinks to regular files) --file is a GNUism. The standard equivalent is -f. If * expands to only one file, grep will not include the file name along with the matching lines. If * doesn't match any file, in a few shells including bash (by default), a literal * argument will be passed to grep (and it will likely complain that it can't open a file by that name). So, here, you'd likely want to use the zsh shell for instance, and write: grep -f 'grep pattern file.txt' -- /dev/null *(-.) Where: -f is used in place of --file. -- marks the end of options so that no other argument after it is treated as one even if it starts with -. we add /dev/null so grep is passed at least 2 files, guaranteeing that it will always print the file name. We use *(-.) so grep only looks in regular files. If that doesn't match any, zsh will abort with a no match error and not run grep. 
Since we're passing /dev/null to grep, we could also add the N glob qualifier (*(N-.)) which would cause the glob to expand to nothing when there's no match instead of reporting an error, and grep would only look inside /dev/null (and silently fail). ¹ Beware quoting keywords in the shell syntax such as while, do, if, time even in part has an influence though, as it stops the shell from recognising them as such; similarly, 'v'ar=value would stop the shell from considering it as a variable assignment as 'v'ar is not a valid variable name (and quote handling is performed after parsing the syntax). Or your foo alias won't be expanded if you write it \foo or f'oo' unless you also have aliases for those quoted forms (which few shells let you do) ² To the notable exception of -? sometimes found on utilities inspired by Microsoft ones. ³ In the case of GNU grep, that also applies to further arguments, not just the first, as GNU grep (and nowadays a few other GNU-like implementations) accepts options even after non-option arguments.
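One way to convince yourself that the different quoting styles produce the same argument is to print the resulting argument list with printf, one argument per line wrapped in angle brackets:

```shell
printf '<%s>\n' --file=grep' 'pattern' 'file.txt
printf '<%s>\n' "--file=grep pattern file.txt"
# Both print exactly one argument:
# <--file=grep pattern file.txt>
```

Either way the shell hands the program a single word; only the spelling in the source differs.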
What is the difference between quotes wrap around only the option value vs quotes wrap around the option name and option value?
1,385,407,160,000
I'm looking for a command line Markdown viewer for many .md files somehow similar to image viewers. I need: simple/fast navigation: preferably left, right arrows (no combinations like :n, :p), and a list of .md files as an argument or taken from current dir, clean output is a plus. It may be also achieved by some sophisticated pipe sequence or any customized parameters/shell variables. Anything that works.
I needed something similar a while ago, so today I sat down and cleaned up the code to make it somewhat useful. It's licensed under GPLv3: https://github.com/marcusmueller/markmedown If you have access to textual >= 0.11.0 (as of 2023-11-17, that's practically only the case on Fedora 39), you can directly run markmedown from that repository. Otherwise, you'd want to install a newer textual in a virtualenv and then use that. To ease that for local installations, the repo contains a markmedown.sh setup and launcher script. Run it once from the repo: git clone https://github.com/marcusmueller/markmedown cd markmedown ./markmedown.sh And then just copy markmedown.sh somewhere in your $PATH. Usage is very simple: markmedown.sh FILE1.md FILE2.md … ←/→ keys to switch between files, q to quit. If you want to show an outline of each file on the side, add -t as option.
Markdown viewer for file list
1,385,407,160,000
On Ubuntu and Fedora, if a terminal window is opened through ctrl + alt + t, then a new tab can be opened through shift + ctrl + t. Suppose a terminal window exists with 5 tabs. It is possible to go to any of them through alt + # (where # can be 1-5) ... Now, the terminal window may have more tabs, such as 9, 10 ... Question How can I know what the current tab is? Its number or position. This is to handle the following situations: Knowing where to return later (suppose the current tab is 8, and we go to 3, and later need to return to 8). So we need to know/get 8. Knowing the next/previous tab relative to the current one (suppose the current tab is 7 and we need to go to the next tab, 8). So we need to know/get 7. I tried the tty command, but if the current tab is 5, it shows /dev/pts/4. As you can see, N-1. Up to here simple math could be applied ... and sometimes it shows the expected direct value, such as /dev/pts/5. I don't know why this difference. So the returned value is not always accurate. Even more, if another workspace has another terminal window with some tabs, and the tty command is executed there, a seemingly random number appears; it would normally be the continuation of the highest tab + 1 of the previous terminal window. So if the first terminal window has 5 tabs, then for the 1st tab of the second terminal window the tty command shows /dev/pts/5 (N-1) or directly /dev/pts/6. But /dev/pts/1 is expected, so that if new tabs are opened, the correlation is based starting from 1. Observation Consider that if any tab is moved (drag and drop) to another position, the "command" should reflect the new position/number. Note Even if tty is not the correct command, what command would accomplish this goal?
I'm assuming you're using gnome-terminal, as that's the default you'd use in Gnome, which is the default desktop environment on these platforms. (and probably, because you'd have said if you used a different terminal emulator!) There's no such command to the best of my knowledge. The program (in your case, primarily the shell) executed in a tab has very little knowledge of that tab. It's not supposed to! Also, there's a layer of indirection between "tab" and "virtual console" (the gnome terminal server, which can technically be used to show some ptty in one, multiple, or no tab at all), so, hm, the assumption that you're always in one specific tab simply doesn't work in general. It might apply in the cases you care about, though. What you can do is use the $GNOME_TERMINAL_SCREEN environment variable to get info about the running terminal emulator session. It contains a dbus path, but as far as a "quick" introspection¹ tells us, we can get a list of open tabs², can execute commands in tabs, but that's it. So, on top of no such command existing, it seems what you want is not possible. Addressing what you wanted to achieve: suppose that the current tab is 8, and we go to 3, and later we need to return to 8 You could set the current title of your tab, manually (right click on the tab's title, "Set Title…") so that you know which is where But honestly, this all sounds like you're a "power user", using a lot of virtual terminals, and gnome-terminal is maybe not the tool to manage all these for you. tmux can have multiple so-called panes that can be displayed at a time, and you can have multiple windows (not to be confused with windows in the X11/wayland sense) containing panes, which you can rename, shift around, reorder, open, close… to your heart's desire. All this happens within a single gnome-terminal instance. 
tmux is a bit confusing when one comes from the graphical world (like, everyone born after 1987, I guess), but its Getting Started Guide is actually OK, when you read it from the top to the bottom and don't try to jump into the middle of it. You can do clever things like "hey, I remember there was a pane where I'm running nvim in, can you search all windows for that, please?". Maybe try it out. Install tmux, then run tmux in your gnome-terminal. You're greeted by your normal shell and a strange little status line. Run top to get a constantly running system load monitor. Because we want to remember this is the window with the system monitor, we hit ctrl+b, followed by ,. Watch the status line! It now asks us for a new name for this window. I suppose "system monitoring" works. Up to you. Ok, nice, there seem to be keyboard shortcuts for things. I can't remember very many keyboard shortcuts. So I prefer the tmux command interface: Press ctrl+b, followed by :. You can now type in commands. I type in split -h, and hit Enter (there's also tab auto-completion). Bam, now you have two panes in your window (split horizontally, by the way). I want to monitor the free space on my disks, so in that new pane I run watch df -h. Nice. Now I want a new tmux "window". I remember the key combination for that, ctrl+bc (c like create). I run my favorite editor in that (in my case, that would be nvim, but in your case, it might be emacs, vim, vi, nano, ed… I don't judge.). Because I want to remember what I was doing here, I rename my window. But this time, I don't use the keyboard shortcut (ctrl+b), but simply run tmux rename-window "config edit" or something). Now, I can do this game for a while and have hundreds of windows in my session. The status bar lists these, but does that really help? 
Sure, using ctrl+bf I can now search for the window where I started to write a letter to my grandma before my editor got slow, I checked on the system monitor, then started editing some config files… you get the idea. ¹ dbus-send --session --print-reply --type=method_call --dest=org.gnome.Terminal "$GNOME_TERMINAL_SCREEN" org.freedesktop.DBus.Introspectable.Introspect ² dbus-send --session --print-reply --type=method_call --dest=org.gnome.Terminal /org/gnome/Terminal/screen org.freedesktop.DBus.Introspectable.Introspect
How to know what is the current tab - number or position - for any Window Terminal?
1,385,407,160,000
I am using ccase. The following command works. $ mv camelCase (ccase -t Kebab camelCase) Now I am trying to rename multiple directories using: $ find . -type d -execdir rename 's/(.*)/$(ccase -t Kebab $1)/' '{}' \+ This does not work, I am receiving this error message: Can't rename ./camelCase3 1000 4 24 27 30 46 121 132 1000ccase -t Kebab ./camelCase3): No such file or directory What can I do?
Should be: find . -depth ! -name . -type d -execdir sh -c ' for dir do dir=${dir#*/} # remove the ./ prefix that some find implementations add new_dir=$(ccase -t Kebab -- "$dir") || continue [ "$dir" = "$new_dir" ] || mv -i -- "$dir" "$new_dir" done' sh {} + Never embed those {} in the code argument of sh/bash or any language interpreter, that would make it a command injection vulnerability. See Is it possible to use `find -exec sh -c` safely? Some other notes: when you can't guarantee variable data won't start with a -, use the -- to mark the end of options. I don't know if ccase supports --, if it doesn't you can report it as a bug to their maintainers. no need for bash here, your system's sh should be enough in sh like in bash, parameter expansions and command substitutions must be quoted when in list context at least or otherwise they undergo split+glob! See for instance Security implications of forgetting to quote a variable in bash/POSIX shells Especially when doing something potentially destructive like calling mv, it's a good idea to check that each command you run succeeds. Here we use || continue to skip dirs for which ccase fails. note the + instead of ; to pass several dirs to sh where possible which avoids having to call one shell per file. Not all find implementations that support that non-standard -execdir predicate will do it though. In some like on some BSDs or older versions of GNU find, + just does the same as ; and the loop is then redundant (though harmless). beware that in mv -- SomeDir some_dir, if some_dir already exists and is a directory or a symlink to a directory, that becomes the same as mv -- SomeDir some_dir/SomeDir. With the GNU implementation of mv, that can be avoided with the -T option (or use zmv as below which will detect the clash) To avoid that -execdir and run as few shells as possible, you could use -exec instead but then you'd need to separate out the base name and the parent directory. 
As an improvement, we can also record failures of ccase or mv so it is reflected in find's exit status: find . -depth ! -name . -type d -exec sh -c ' ret=0 for dir do base=${dir##*/} parent=${dir%/*} new_base=$(ccase -t Kebab -- "$base") && { [ "$base" = "$new_base" ] || mv -i -- "$dir" "$parent/$new_base" } || ret=$? done exit "$ret"' sh {} + Note that above I'm adding a -i to mv to at least give the user an option to avoid data loss like when two files end up having the same new name. A better approach would be to use zsh's zmv which can check everything in advance and has a dry-run mode (-n): autoload -Uz zmv zmv -n '(**/)(*)(#q/)' '$1$(ccase -t Kebab -- $2)' It also omits hidden files by default. It does a depth-first traversal by default. The (#q/) is the equivalent of -type d. In this case, the exit status of ccase is ignored. When the name is unchanged, there's no rename attempt. You could replace (*) with (^*-*) to avoid processing files that already have hyphens in their name. You could also probably do the conversion to Kebab case in zsh: zmv -n '(**/)(*)(#q/)' \ '$1${${${2//[_[:space:]]##/-}//(#b)([^[:upper:].-])([[:upper:]])/$match[1]-$match[2]}:l}' Where we convert all sequences of whitespace or underscore to a single -, insert -s in between non-uppercase (except ., and -) and uppercase characters in the base name, and convert the whole result to lowercase. You'd need to adapt if you want AFileInTheUK.txt to be renamed to a-file-in-the-u-k.txt instead of afile-in-the-uk.txt.
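ccase may not be installed everywhere, so as a self-contained illustration of the same find -exec sh -c pattern, here is the loop with a stand-in sed pipeline doing a rough camelCase→kebab-case conversion (the converter and the directory names are placeholders for the example, not part of the original answer):

```shell
mkdir -p camelCaseDir/innerCamelDir
find . -depth ! -name . -type d -exec sh -c '
  for dir do
    base=${dir##*/}
    parent=${dir%/*}
    # crude kebab-case conversion standing in for ccase:
    new_base=$(printf %s "$base" |
      sed -E "s/([[:lower:]0-9])([[:upper:]])/\1-\2/g" |
      tr "[:upper:]" "[:lower:]") || continue
    [ "$base" = "$new_base" ] || mv -- "$dir" "$parent/$new_base"
  done' sh {} +
# camelCaseDir/innerCamelDir -> camel-case-dir/inner-camel-dir
```

Because of -depth, the child directory is renamed before its parent, so the paths handed to mv are still valid when each rename runs.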
Rename multiple folders using external string manipulation tool
1,385,407,160,000
I am using the cmp command to compare a 1GB files stored on an SD card to a reference 1GB file stored in main memory. The completion times for a single cmp command vary significantly, ranging from 17 seconds to 3.5 minutes. The files are expected to be the same, and in all cases so far have been. I run a function (see below) that compares the 100 1GB files on the SD card to the reference 1GB file stored in main memory. Usually, all 100 cmp's in the loop will tend either fast (<20 sec) or slow for the duration of the script. Based on the output of top, I have not observed any processes that would be causing the extended duration when it takes longer. What could be causing the completion time of the cmp command to vary? Also, what can be done to make sure the command is completed in a reasonable amount of time (<25 seconds)? This is occurring on a Yocto distribution in an embedded application. function check_files() { for filename in /mnt/Android/data/File_*; do echo "Checking $filename" result=$(cmp -l /data/1GB_File.bin $filename) resultlength=${#result} if [ $resultlength -gt 0 ]; then date >> /data/errors.txt echo $filename >> /data/errors.txt echo $result >> /data/errors.txt echo "==========" >> /data/errors.txt fi done }
I suggest you redesign your method. Your method reads /data/1GB_File.bin over and over, once for each file in /mnt/Android/data/File_*. While "disk caching" usually helps speed up disk I/O, it is defeated here by your 1GB file size and the fact that, the 2nd through Nth times through your loop, you are interleaving requests for cached data (/data/1GB_File.bin) and new (to-be-cached) data. Since data (disk-block sized chunks of memory) is removed from cache through a "least recently used" ("oldest first") algorithm, it's a race between the new data forcing the cached data out, and the old cached data being read (changing its position on the LRU list). Additionally, normal system activity uses the disk cache, too. Unless your disk cache is larger than "normal system usage" PLUS 2 x 1GB, you'll always have the race, and the resulting variability in timings. Compute a checksum of each of the files and the standard. Read each file just once. Do your comparisons with the checksums. Read man md5sum, do something like UNTESTED: check_files() { md5sum /mnt/Android/data/File_* >data.tmp md5sum /data/1GB_File.bin >standard.tmp # # extract the "correct" checksum golden="$(cut -d" " -f1 standard.tmp)" # # do any of the suspect files not # have the golden checksum? grep -v "$golden" data.tmp >bad.tmp if [[ $? -eq 0 ]]; then (date;cat bad.tmp;echo "==========" )>> /data/errors.txt fi #uncomment the `rm` line when you're sure it works # can test with adding any other filename # to the first `md5sum` line. #rm -f standard.tmp data.tmp bad.tmp 2>/dev/null # why not return a status from this function? }
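The core idea - read each file once and compare digests instead of bytes - can be exercised on small stand-in files (all names below are invented for the demo):

```shell
printf 'reference data\n' > standard.bin
cp standard.bin copy_ok.bin               # a faithful copy
printf 'corrupted data\n' > copy_bad.bin  # a mismatching one
golden=$(md5sum standard.bin | cut -d' ' -f1)
# lines look like "<hash>  <filename>"; drop the ones matching the golden hash
md5sum copy_ok.bin copy_bad.bin | grep -v "^$golden"
# Only copy_bad.bin's line survives the grep, flagging the mismatch
```

Each file is read from disk exactly once, so the cache-eviction race described above disappears.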
Why is the time to complete cmp command varying so wildly?
1,385,407,160,000
I want to copy all the files that begin with two digits followed by an underscore. My code below did not copy any files to the KIRC folder. cp -R ~/KIRP/[0-9][0-9]_* ~/KIRC/ Example contents of the KIRP folder: 11_abc.py 9_efg.R hij_12.csv Expected output: 11_abc.py 9_efg.R
9_efg.R doesn't match that pattern as there's only one digit before the _. 11_abc.py does though. Maybe you tried that from the fish shell that doesn't support the [...] glob operator. If you want to copy files whose name starts with a number between 0 and 99 followed by _ regardless of how many digits are used to represent that number (including 000_x, 1_y, 11_z), you can use the zsh shell which has a glob operator for that: cp -R ~/KIRP/<0-99>_* ~/KIRC/ Or zsh -c 'cp -R ~/KIRP/<0-99>_* ~/KIRC/' From another shell. With the bash shell, you can do something equivalent with: shopt -s extglob failglob cp -R ~/KIRP/*(0)[123456789]?([0123456789])_* ~/KIRC/ That is matching any number of 0s followed by a digit from 1 to 9 (not using [1-9] as in bash, contrary to zsh, that generally matches hundreds of different characters) followed by an optional digit from 0 to 9. We need failglob to avoid copying a file named literally *(0)[123456789]?([0123456789])_* if there's no match. Beware that for files of type directory, that copies the directories and all their contents, recursively. To exclude files of type directory, with zsh: cp ~/KIRP/<0-99>_*(^/) ~/KIRC/ (bash has no equivalent). Or to copy any of those files found under any level of subdirectory under ~/KIRP cp ~/KIRP/**/<0-99>_*(D^/) ~/KIRC/ (remove the D to exclude those in hidden directories).
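The bash variant can be checked quickly with throwaway directories (names invented for the demo). Since extglob affects how patterns are parsed, the one-shot invocation below enables it up front with bash -O instead of an interactive shopt:

```shell
mkdir -p KIRP KIRC
touch KIRP/11_abc.py KIRP/9_efg.R KIRP/hij_12.csv KIRP/007_x
bash -O extglob -O failglob -c \
  'cp -R KIRP/*(0)[123456789]?([0123456789])_* KIRC/'
ls KIRC   # 007_x, 11_abc.py and 9_efg.R copied; hij_12.csv left behind
```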
How to copy files with names that begin with substring?
1,385,407,160,000
For an application, I need to open a new terminal window and later execute some commands in that. I tried the command gnome-terminal And it works properly, it open a new terminal, but when i want to send commands i cannot, it says that failed parsing arguments, so I'm not sure about how should i do it gnome-terminal --ls # Failed to parse arguments: Unknown option --ls
This requires a two-step process. you need to start gnome-terminal running a program that waits for commands to be "sent in" you need to speak to said program to make it execute the things you want it to execute. The right syntax to start gnome-terminal with a command to execute is gnome-terminal -- command not gnome-terminal --command (your gnome-terminal --ls is parsed as an unknown option, hence the error). Having sorted out how to do 1., we need to find a program that serves as a daemon listening for commands being sent in. tmux is such a server. You can run gnome-terminal -- tmux -L 'a unique name for a socket' to start your gnome-terminal with an empty shell inside. You can then use tmux's CONTROL MODE to send commands to that tmux server, e.g. to attach to the session currently displayed in the gnome-terminal, then make a new frame in that, running your command of choice in that frame. See man tmux for more detail. Honestly, though: gnome-terminal is an interactive terminal emulator. You just seem to want to display some output in a graphical manner; you don't seem to expect gnome-terminal to get input from the user. The right thing to do in that case is simply not use gnome-terminal, but have whatever you're planning to use to send commands display its own window with a constant-width-font text field, and print whatever output you want there.
gnome-terminal how to send commands
1,385,407,160,000
I know I used to be able to do it, and its frustrating I cant recall. I want to write an ext4 filesystem to a disk image in a folder. I don't want to re-partition my drive, I just need the filesystem to build an OS in. I tried $mkdir foo then sudo mkfs.ext4 foo 70000 mke2fs 1.45.5 (07-Jan-2020) warning: Unable to get device geometry for foo foo: Is a directory while setting up superblock so i think Im missing arguments. I tried reading the man page, and the ol google, but I did not see and example of using mkfs.ext4 to create a new disk image.
If you're creating a disk image, you need to operate on a file, not on a directory: # Create a file of some specific size for the new image truncate -s 10g disk.img # Format the image with a new filesystem mkfs -t ext4 disk.img You can mount the image using the loop mount option: mount -o loop disk.img /mnt Note that the above instructions are for creating a filesystem image. If you want to create a bootable image, you will probably want to partition the file, install a bootloader, etc, which is a slightly more involved process.
how do I make a new filesystem image?
1,385,407,160,000
I'm trying to get a simple if statement to work. if [ $sip1 = 0 ] ; then do stuff ; fi My below sh line shows the struggle i'm dealing with. I can't get it to acknowledge the 0 i've stored as an integer, and if i compare it to a string, it says it's not a match. # if [ `expr $sip1` ] ; then echo hi > fi hi # if [ `expr $sip1 = 1` ] ; then echo hi; fi hi # if [ `expr $sip1 = 2` ] ; then echo hi; fi hi # expr $sip1 0 # if [ `expr $sip1 + 0 = 2` ] ; echo hi > # if [ `expr $sip1 + 0 = 2` ] ; then echo hi > fi expr: non-numeric argument # echo $sip1 0 # if [ $sip1 = "0" ] ; then echo hi ; fi # if [ $sip1 = '0' ] ; then echo hi ; fi # echo ">$sip1<" <0
The echo that you posted: # echo ">$sip1<" <0 Indicates that the value of variable sip1 is not 0, as you probably expected, but rather 0 followed by a carriage return character ($'0\r' in Bash syntax): $ zero_cr=$'0\r' $ if [ "$zero_cr" = "0" ] ; then echo hi ; fi $ zero='0' $ if [ "$zero" = "0" ] ; then echo hi ; fi hi ## Or, using the proper numeric equality operator: $ if [ "$zero" -eq "0" ] ; then echo hi ; fi hi Depending on exactly how you populated your variable, consider stripping the carriage return character from it: If your code is something like sip1=$(command...), then sip1=$(command... | tr -d '\r') should get rid of any stray carriage return characters, and let your tests be successful. Or, assuming Bash, deleting a trailing carriage return character appears to work using syntax ${foo%$'\r'}, so that sip1=${sip1%$'\r'} should cleanse your variable of its trailing carriage return.
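A handy way to diagnose this kind of problem is to dump the variable byte by byte with od -c, which makes an otherwise invisible carriage return show up as \r (the variable below is fabricated to simulate a value captured from a CRLF source):

```shell
sip1=$(printf '0\r')        # a "0" with a trailing carriage return
printf '%s' "$sip1" | od -c # the \r is now plainly visible:
# 0000000   0  \r
# 0000002

sip1=$(printf '%s' "$sip1" | tr -d '\r')   # strip it
[ "$sip1" = "0" ] && echo match             # now prints: match
```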
SH, can't make equality work
1,385,407,160,000
I have my bash prompt on one line colored green with file path in blue. When I type a command it appears on the next line. After I press enter the output appears on the next line(s). Then there is an empty line. I would really like the command to be in a color of my choosing (preferably not green or blue) or bold to differentiate it from the line before it and output line(s) after it. I do not want to alter the output color as that is used to indicate different things like executables and different types of links. In the example in the image I would like 'ls -la var' to be a different color. Any advice would be very welcome. EDIT: Based on the answer from don_aman, I added these two lines to my .bashrc file: PS1="\n\[\e]0;\u@\h: \w\a\]${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ \n\\[\\e[1;33m\\]" and PS0='\[\e[0m\]' Without the second line some lines of the output were also colored the same as the command. Now my terminal looks like this which helps me to differentiate between the command and the output:
You can control the format of any text following the prompt by changing the prompt itself, which is defined in the PS1 variable in Bash. I don't really understand terminals, but the control sequences listed in console_codes(4) work for XTerm, which I guess is the type of your terminal (check the TERM environment variable). Refer to that man page to add any desired customizations to your prompt, more specifically the ESC [ parameters m sequence, which allows setting display attributes of the terminal. To change PS1, search for it in your ~/.bashrc file, then append whatever sequences you like. For example, on my system PS1 is initially set to the following value.
PS1="\\[\\e]0;\\u@\\h: \\w\\a\\]\${debian_chroot:+(\$debian_chroot)}\\[\\033[01;34m\\]\\u@\\h\\[\\033[00m\\]:\\[\\033[01;32m\\]\\w\\[\\033[00m\\]\\\$ "
If I wanted to make the input text bold and brown I'd have to add the sequence \[\e[1;33m\] to PS1; here \[ begins a sequence of non-printing characters (check bash(1)), and \] ends it, and I'm using the previously mentioned display attribute control sequence with the parameters 1, which sets bold, and 33, which sets brown foreground, separated by a semicolon. Finally, I change the PS1 assignment in .bashrc to:
PS1="\\[\\e]0;\\u@\\h: \\w\\a\\]\${debian_chroot:+(\$debian_chroot)}\\[\\033[01;34m\\]\\u@\\h\\[\\033[00m\\]:\\[\\033[01;32m\\]\\w\\[\\033[00m\\]\\\$ \\[\\e[1;33m\\]"
Additionally you may want to reset the display attributes before executing commands, which can be done using the PS0 variable. The 0 parameter for the same sequence shown before resets all attributes.
PS0='\[\e[0m\]'
How to make bash commands a specific color
1,385,407,160,000
I just came across this command: npm run script ./src/automation/automation_main.ts -- -i payroll_integration I googled about the double dash and it appears to signify the end of command options, per this answer: https://unix.stackexchange.com/a/11382/47958 What I don't understand is why there are command options after the double dash (the -i). Can we still include command options even after the double dash? I ran the above script with and without the double dash, and both appear to run.
The situation in your example command is that there are two programs being invoked, and both of them use command-line arguments. You are invoking npm, and npm will obey the run script arguments to invoke the script automation_main.ts. None of the arguments are enclosed in quotes (perhaps that's necessary for this kind of npm command). The argument -i payroll_integration is clearly intended for the script and not for npm. How do you convince npm not to try to parse it (which would probably make it error out)? The answer: you insert an argument that tells npm that the rest of the words on the line are not npm's arguments. This is --, which means "your arguments stop here, don't worry about the rest". Npm will remove its arguments up to and including the --, and invoke the script with the rest of the line present for the script to parse and use. Note that, while bash and npm understand the -- argument (as many, many other GNU utilities do), there are programs that don't understand it and won't behave the way I described here for npm.
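The convention is easy to see with a toy option parser — a hypothetical parse function standing in for npm's own argument handling, which stops consuming words as soon as it sees --:

```shell
# parse() collects its own words in OWN and treats everything after --
# as pass-through arguments for someone else, collected in REST.
parse() {
  OWN='' REST=''
  while [ $# -gt 0 ]; do
    if [ "$1" = '--' ]; then
      shift
      REST="$*"      # everything left belongs to the child command
      break
    fi
    OWN="$OWN $1"    # words before -- are ours to interpret
    shift
  done
}

parse run script -- -i payroll_integration
```

After the call, OWN holds run script while REST holds -i payroll_integration untouched — exactly the split npm performs before handing the remainder to the script.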
Why does double-dash work with more command options with npm?
1,385,407,160,000
I have a json file that looks like: [ { "key": "alt+down", "command": "-editor.action.moveLinesDownAction", "when": "editorTextFocus && !editorReadonly" }, { "key": "alt+f12", "command": "editor.action.peekDefinition", "when": "editorHasDefinitionProvider && editorTextFocus && !inReferenceSearchEditor && !isInEmbeddedEditor" } ] // { // "key": "ctrl+shift+d", // "command": "workbench.action.toggleMaximizedPanel" // }, // { // "key": "ctrl+shift+d", // "command": "-workbench.view.debug", // "when": "viewContainer.workbench.view.debug.enabled" // } I want to sort this file. jq give error if there is // at the beginning of line as this is not a valid json. So to sort this file, the command I came up with is: grep -v '^[ ]*//' keybindings.json | jq 'sort_by(.key)' But I do not want to discard the commented lines. So, to get the commented lines, the command I came up with is: grep '^[ ]*//' keybindings.json Now to solve my problem, what I can simply do is: #!/bin/bash SORTED_JSON=$(grep -v '^[ ]*//' keybindings.json | jq 'sort_by(.key)') COMMENTED_JSON=$(grep '^[ ]*//' keybindings.json) echo "$SORTED_JSON" >keybindings.json echo "$COMMENTED_JSON" >>keybindings.json But there is a catch. I have to do this in one command. This is because, I am doing this via a vscode settings. "filterText.commandList": [ { "name": "Sort VSCode Keybindings", "description": "Sorts keybindings.json by keys. Select everything except the comment in fist line. Then run this command", "command": "jq 'sort_by(.key)'" } ] The command take the selected text as stdin, process it, then output the processed text. So, as far i understand, i have to read the stdin two times (once with grep -v '^[ ]*//' | jq 'sort_by(.key)' and the second time with grep '^[ ]*//'). And append the two command output in stdout. How can I solve this problem? 
Update 1: I have tried both
cat keybindings.json| {grep -v '^[ ]*//' | jq 'sort_by(.key)' ; grep '^[ ]*//'}
and
cat keybindings.json| (grep -v '^[ ]*//' | jq 'sort_by(.key)' ; grep '^[ ]*//')
These do not show the commented lines.
Update 2: The following seems to be close to what I was expecting. But here commented lines come before the uncommented lines.
$ cat keybindings.json| tee >(grep -v '^[ ]*//' | jq 'sort_by(.key)') >(grep '^[ ]*//') > /dev/null 2>&1
// {
//   "key": "ctrl+shift+d",
//   "command": "workbench.action.toggleMaximizedPanel"
// },
// {
//   "key": "ctrl+shift+d",
//   "command": "-workbench.view.debug",
//   "when": "viewContainer.workbench.view.debug.enabled"
// }
[
  {
    "key": "alt+down",
    "command": "-editor.action.moveLinesDownAction",
    "when": "editorTextFocus && !editorReadonly"
  },
  {
    "key": "alt+f12",
    "command": "editor.action.peekDefinition",
    "when": "editorHasDefinitionProvider && editorTextFocus && !inReferenceSearchEditor && !isInEmbeddedEditor"
  }
]
Update 3:
cat keybindings.json| (tee >(grep '^[ ]*//'); tee >(grep -v '^[ ]*//' | jq 'sort_by(.key)'))
or,
cat keybindings.json| {tee >(grep '^[ ]*//'); tee >(grep -v '^[ ]*//' | jq 'sort_by(.key)')}
also seems to give the same output as Update 2 (commented lines come before the uncommented lines).
I know of no way to interpolate mixed comment lines and non-comment lines; you have to treat them as separate blocks and process them separately. If you didn't mind the commented lines being output first you could use awk like this:
awk '{ if ($0 ~ /^ *\/\//) { print } else { print | "jq \"sort_by(.key)\"" } }' keybindings.json
But since you want the comment lines to come at the end you need to store the comment lines and output them later:
awk '
    # Define a convenience variable for the jq process
    BEGIN { jq = "jq \"sort_by(.key)\"" }

    # Each line hits this. Either we save the comment or we feed it to jq
    { if ($0 ~ /^ *\/\//) { c[++i] = $0 } else { print | jq } }

    # Close the pipe to get its output, then print the saved comment lines.
    # A counted loop is used here because the order of "for (i in c)" is
    # unspecified in awk, so the comments could come out shuffled.
    END { close (jq); for (j = 1; j <= i; j++) { print c[j] } }
' keybindings.json
Now, regarding your "I have to do this in one command". Remember that there is nothing stopping you from creating your own commands (programs, scripts). Put the necessary set of commands into a file, make the file executable, and put it into a directory that's in your $PATH. I have always used $HOME/bin and I have the equivalent of export PATH="$PATH:$HOME/bin" in my ~/.bash_profile and ~/.profile.
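The same store-then-append idea can be tried even without jq installed — in this sketch plain sort(1) stands in for the jq step, purely to keep it dependency-free:

```shell
# Comment lines are held back and printed after the sorted block,
# mirroring the awk program above with sort in place of jq.
out=$(printf '%s\n' 'b' '// keep me' 'a' '// me too' 'c' | awk '
  /^ *\/\// { c[++i] = $0; next }   # save comment lines for the end
  { print | "sort" }                # everything else goes to sort
  END { close("sort"); for (j = 1; j <= i; j++) print c[j] }
')
```

Closing the pipe first guarantees sort's output lands on stdout before the saved comment lines are printed.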
Process the same stdin two time and append the outputs
1,385,407,160,000
I've a directory structure like: rust/ ├── dir1/ │ └── Cargo.toml └── dir2/ └── Cargo.toml I want to create a zsh script that'll run from the rust directory, and for each subdirectory with a Cargo.toml file, run cargo command with user-specified arguments. Example: run.sh "test -- --ignored" should run cargo -v test -- --ignored --manifest-path ./dir1/Cargo.toml and cargo -v test -- --ignored --manifest-path ./dir2/Cargo.toml. The double quotes are necessary to prevent the shell from messing with the --. User may pass other arguments without --. I've tried find . -name 'Cargo.toml' -type f -print -exec cargo -v "$@" --manifest-path {} \;, but got the error "error: no such subcommand: test -- --ignored". Clearly, the whole thing is passed as a string, not as individual strings. How to do this?
The shell doesn't mess up with --. Just do: #! /bin/zsh - for toml (**/Cargo.toml(N.)) cargo -v "$@" --manifest-path $toml And call it as: that-script test -- --ignored Using zsh globbing has several advantages over find: hidden files and directories are ignored (add the D qualifier if you do not want it) the list is sorted it's possible to have arguments containing {} be passed to cargo. If you wanted to pass one argument to the script and the shell to split it on space characters and the resulting words to be passed as separate arguments to cargo, you'd do: #! /bin/zsh - for toml (**/Cargo.toml(N.)) cargo -v ${(s[ ])1} --manifest-path $toml Or for $1 to be split on characters of $IFS (space, tab newline and nul by default) instead: $=1 Then, you'd call: that-script 'test -- --ignored' But that would mean the user can't pass an argument containing spaces (resp. IFS characters) to cargo. Alternatively, you could tell the shell to do shell tokenisation and quote removal on that one argument using the z or Z[options] and Q parameter expansion flags using "${(Q@)${(Z[n])1}}" (Z[n] for newline to also be accepted as delimiter, see also z[Cn] to also recognise and strip Comments, @ within double quotes to preserve empty elements), maybe doing that tokenisation only once to avoid having to do it every time in the loop, and even store them in $argv (aka $@) so we're back to square one: #! /bin/zsh - argv=( "${(Q@)${(Z[n])1}}" ) for toml (**/Cargo.toml(N.)) cargo -v "$@" --manifest-path $toml And then be able to do: that-script "test -- --opt1='foo bar' --opt2=$'blah\nblah' --opt3 ''" For instance, and test, --, --opt1=foo bar, --opt2=blah<newline>blah, --opt3 and the empty string to be passed as separate arguments to cargo. 
But again, that's way overkill when you can get the user to pass all arguments separately to your script (in the syntax of their shell / language, while the Z/Q flags above expects zsh quoting syntax) and the script to pass them along to cargo with the standard "$@" as in the first example above. Now, as it turned out, your problem was that the --manifest-path path/to/Cargo.toml was to be before the -- option delimiter of the test subcommand. You could always insert those arguments in the list of arguments passed by the user with something like: #! /bin/zsh - for toml (**/Cargo.toml(N.)) ( argv[2,0]=(--manifest-path $toml) cargo -v "$@" ) That way, when the user invokes that-script test -- --ignored, the script ends up calling cargo -v test --manifest-path path/to/Cargo.toml -- --ignored.
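For non-zsh environments, the first loop can be approximated in portable sh — here printf stands in for cargo so the sketch runs anywhere, and the hypothetical run_all function plays the role of the script (directory names mirror the question):

```shell
# Recreate the layout from the question.
mkdir -p rust/dir1 rust/dir2
touch rust/dir1/Cargo.toml rust/dir2/Cargo.toml

# One "cargo" invocation per manifest, passing the user arguments through.
run_all() {
  find rust -name Cargo.toml -type f | sort | while IFS= read -r toml; do
    printf 'cargo -v %s --manifest-path %s\n' "$*" "$toml"
  done
}

out=$(run_all test -- --ignored)
```

Unlike the zsh version, this loses per-argument quoting ("$*" joins everything with spaces) and chokes on newlines in file names — two of the problems the zsh globbing version avoids.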
Run cargo command in subdirectories with user-specified arguments
1,385,407,160,000
When I go to a this web-page (https://imgur.com/user/Ultraruben/submitted for example) and press Ctrl+u, I get one web-page. When I try to extract the html through the command line with curl <url> or curl -L <url> I get another. lynx -dump <url> doesn't work either (no javascript). I need to get through the command line (with whatever tool that works) the same I get through my browser with Ctrl+u. This is what I got through the Opera browser: https://justpaste.it/42ci1 And this is with curl: https://justpaste.it/9oy3g
It's pretty common for web sites to serve different content depending on the kind of client they're seeing. Some of that is well-intended: For example, some websites go to great lengths to support incredibly old phones or Windows PCs. From a security perspective, you'd want to tell an Internet Explorer 5 user that they need to update – but your job might be to help get health information to the public, not to tell people with no money that they need to buy a new laptop. Sometimes you need to support an old gaming device with a specifically quirky website to make its browser happy, or deliver a version of the site optimized for the screen. Anyway, that's likely what's happening here; your curl sends "Hey, I'm curl", the webserver reacts with a page specifically for automated tools, not for browsers. So, first: check whether the problem goes away if you tell curl to use the same user-agent as your main browser. Maybe that already solves things. If you say lynx doesn't work because it doesn't do JavaScript, then, well, you need something that does all the JavaScript a modern browser does. That means it needs to be a modern browser. There are ways to puppeteer browsers from a command line. In essence, you're looking for WebDriver. This won't work without you writing a few lines of script. Mozilla's WebDriver documentation has an example that might get you started. You will want to add some waiting (on completion of the load, plus some fraction of a second) to allow for JavaScript to complete, before you get the source code. The example does that by waiting for a specific element to start existing.
I get a different html page using Ctrl+u and curl
1,385,407,160,000
I've been tasked to update a few of our sites and so, before doing so, I have to zip the public_html folder so I have a backup. Problem is, public_html has a bunch of other ZIPs that are older backups that I don't want to delete in case my backup fails or for some other reason, we need to go back 2-3 backups. But, since they are there and get caught in every backup, the backup file grows and grows because it contains basically every single previous backup within it. So is there a way to tweak the zip command line call so it gets all files, except any .zip or .gzip file it finds?
The quick answer is to exclude the existing zip files: zip -r foo /path/to/public_html -x '*.zip' -x '*.gzip' Add/remove the -x options to match your existing naming convention for zip and gzip files. The long-term answer would be to store the backup files outside the public_html folder so that you don't keep catching them in the backups.
Command Line ZIP - How to ZIP an entire folder, but dodge the other zips present?
1,385,407,160,000
I installed brave browser using: sudo curl -fsSLo /usr/share/keyrings/brave-browser-archive-keyring.gpg https://brave-browser-apt-release.s3.brave.com/brave-browser-archive-keyring.gpg echo "deb [signed-by=/usr/share/keyrings/brave-browser-archive-keyring.gpg arch=amd64] https://brave-browser-apt-release.s3.brave.com/ stable main"|sudo tee /etc/apt/sources.list.d/brave-browser-release.list sudo apt update && sudo apt install brave-browser Now I want to remove it completely. One solution suggests using: sudo apt remove brave-browser brave-keyring sudo apt purge brave-browser rm -rf ~/.config/BraveSoftware rm -rf ~/.cache/BraveSoftware However, it does not remove the PPA and the Key. Is there any easy solution (like ppa-purge) or shall I just use the following lines with the above commands. sudo rm /usr/share/keyrings/brave-browser-archive-keyring.gpg sudo rm /etc/apt/sources.list.d/brave-browser-release.list To be specific, I want to know the appropriate way to remove apps (completely, with key and ppa) that is installed via this method.
Command                                Opposite
-------                                --------
sudo curl [options] -o <file> <url>    sudo rm <file>
echo <debline> | sudo tee <file>       sudo rm <file>
sudo apt install <package>             sudo apt purge --autoremove <package>

So yes, the opposite is really to simply remove the files that you created in addition to purging the package and any dependencies which were installed alongside it. You should not need to explicitly uninstall brave-keyring.
How to Completely Remove an Application with it's PPA that is being installed via PPA and GPG
1,385,407,160,000
The default output of units seems to be a bit verbose: $ units "2 fortnight" seconds * 2419200 / 4.1335979e-07 Suppose I just want the number only, so that I can do things like sleep $(units "2 fortnight" seconds). Is there a formatting argument for units that I'm missing? Or is there some simple way to pipe it into something and get just the number? I can see that $ units "2 fortnight" seconds | head -n 1 would at least get rid of the factor, but I still have the extra whitespace and *.
With GNU Units version 2.19:
$ units --one-line --compact '2 fortnight' seconds
2419200
The BSD implementation of units doesn't have --one-line or --compact, but you can use awk:
$ units '2 fortnight' seconds | awk 'NR == 1 {print $2}'
2419200
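The figure is easy to cross-check with plain shell arithmetic, since a fortnight is 14 days of 86400 seconds each:

```shell
# 2 fortnights, expressed in seconds, without units(1) at all.
seconds=$((2 * 14 * 86400))
echo "$seconds"   # prints 2419200, matching the units output
```

and the result can be fed straight to sleep "$seconds" as in the question.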
How to get just the number from units?
1,385,407,160,000
I am trying to create a simple alias that uses the argument with the full path On the command line, I can type command "$(pwd)/my_file". It works. so I tried to create an alias in the following way: alias command='command "$(pwd)/$1"' This alias didn't work though. The CLI interprets as if $(pwd) and my_file were separated arguments... I tried to use the eval command to turn my command into a single one alias command="eval 'command' '$(pwd)/$1'" However, it keeps waiting for an input argument instead of taking my initial argument... If you wanna try out what I mean, substitute command for the evince (a popular PDF viewer) and my_file for any PDF file. So my alias is alias evince="evince $(pwd)/$1" In my case, $(pwd) is /home/tapyu/Downloads/, and my_file is recap.pdf. I know that evince is treating it as a separated argument because it pops up two windows: The first one opens recap.pdf properly. The second is an empty window with a warning "Error opening file, /home/tapyu/Downloads/ is a directory." Thank you in advance. PS: I know that this "problem" is pointless. My problem is not "how to give the full path to a command", my problem is "how to handle inputs argument in an alias in order to solve this kind of situation". So don't wanna alternatives to give the full path, I wanna know why my alias is not working.
The first issue is that aliases don't take arguments. If you need to pass an argument to an alias, that means you should use a function instead. You can see what happens if you run set -x:
$ alias evince="evince $(pwd)/$1"
$ set -x
$ evince a.pdf
+ evince /home/terdon/foo/ a.pdf
As you can see, the command evince a.pdf becomes evince /home/terdon/foo/ a.pdf (I was running this in the directory /home/terdon/foo/). So what's going on here? Have a look at what your alias actually is:
$ type evince
evince is aliased to `evince /home/terdon/foo/'
You can even see this happen if you run set -x and then define the alias:
$ alias evince="evince $(pwd)/$1"
++ pwd
+ alias 'evince=evince /home/terdon/foo/'
When you defined your alias, $1 didn't have any value, and pwd was run before setting the alias, so you actually aliased to evince /home/terdon/foo/. Then, when you ran evince a.pdf, that actually ran evince /home/terdon/foo/ a.pdf which is why you got the two windows. A function would look like this:
evince(){
    command evince "$(pwd)"/"$1"
}
Note the use of command: that's to ensure that the evince inside the function will call the command evince and not the function itself recursively. Also note that this is a pointless example since evince foo by itself is exactly the same thing as evince $(pwd)/foo. You don't need the full path to a file if it is in your current directory.
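Here is the function pattern in runnable form, with printf standing in for the real evince so the effect is visible without a PDF viewer installed (the real version would call command evince as shown above):

```shell
# A stand-in "evince" that just reports the full path it would open.
evince() {
  printf 'would open: %s/%s\n' "$(pwd)" "$1"
}

out=$(evince recap.pdf)
```

The argument now lands in the function's $1, so the directory and file name stay glued together as one argument instead of splitting into two.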
How to evalute multiple arguments in a alias? [duplicate]
1,385,407,160,000
In some contexts, I want the ability to run screen and to copy and paste text between different windows (in the screen sense of that term, not the X11 window manager sense) using click-and-drag to copy and (ideally) middle-click to paste, as I would be able to do if I were running screen in an xterm, but don't have any particular need for any other graphical features. Starting up an X server just to be able to do this is an option (outside of unusual circumstances), but seems like overkill. Is there a straightforward way to get this sort of simple "mouse interacting with text" behavior without using X (ideally, without X having to even be installed)?
Yes, there’s gpm: it provides support for mice on Linux virtual terminals. It supports copy and paste, and also enables mouse usage in applications which support it (such as Midnight Commander). It’s packaged in many distributions; look for a gpm package. There’s also consolation which is similar, but based on libinput (so it supports multitouch etc.).
Is there a more efficient way to get X11-style mouse-based, cross-application copy-and-paste for command-line use than "xinit xterm" or the like?
1,385,407,160,000
I am using MX linux and i need next thing. I have a config for openvpn, it works perfectly both from manual launch via cli and from nm-openvpn application in my XFCE. I want to launch my openvpn every morning at 9 am via cron, but with visual displaying in my XFCE like i launched it from GUI. Which command does networkmanager launch when i click "connect to vpn"? I was trying to analyze ps -aux | grep openvpn output and syslog, but without success.
There is no non-networkmanager command that is being launched when you activate the openvpn connection through NM. This is an internal procedure within NM that sets up the connection. To manipulate it through the command line you can use the nmcli command. Some kind of command like this should work: nmcli connect up "name of the openvpn connection" Instead of the name of the VPN connection you can use the ID, UUID or PATH of the connection.
launch nm-openvpn via cli
1,385,407,160,000
I am running tmux 3.0a, and when I connect with a smaller resolution terminal, also the bigger terminal gets resized to the smaller. This is well known (although I don't understand why they made this the default behaviour), and the solution is to c-b c-: :resize-window -A (tmux force resize window, https://stackoverflow.com/questions/7814612/is-there-any-way-to-redraw-tmux-window-when-switching-smaller-monitor-to-bigger/61764869#61764869). Unfortunately, this needs to be done in every pane/window. While there is this option set-window-option -g aggressive-resize on, it doesn't have any effect for me. So: How to always resize all windows to maximum available size?
Apparently, :resize-window -A needs to be done in every window, but once it's done it persists (when you disconnect and reconnect with a smaller terminal it remembers to resize aggressively). Thus, include the following command in your .bashrc:
tmux resize-window -A
This sets aggressive resize for that specific window whenever you open a new window. It remains unclear why it actually works, see the discussion in the comments. The problem is that the documentation of tmux is unclear and there are three options that influence resizing. Another way to achieve proper resizing seems to be (for tmux >=3.1):
set -g window-size latest
setw -g aggressive-resize on
tmux: How to always resize all windows to maximum available size?
1,385,407,160,000
I have a series of commands running through a pipeline like this: cmd1 | cmd2 | cmd3 | cmd4 How can I print the intermediate result of cmd1, cmd2 and cmd3? I know I can use the tee command to print the result to a file. But is it possible to just print it to the console? This is for debugging purpose as my actual commands are very complex.
You can tee to the current terminal: cmd1 | tee /dev/tty | cmd2 | tee /dev/tty | cmd3 | tee /dev/tty | cmd4
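When no terminal is attached (scripts, cron jobs), /dev/tty is unavailable; teeing the intermediate result to standard error gives the same debugging effect. A small runnable sketch:

```shell
# The intermediate data is copied to stderr by tee while the pipeline's
# real data continues down the pipe untouched.
out=$(printf 'a\nb\n' | tee /dev/stderr | tr 'a' 'x')
```

Here stderr shows the untransformed a/b lines while out captures the transformed result of the next stage.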
How to print intermediate result of commands in a pipeline?
1,385,407,160,000
For example, let's say I want to docker run --interactive --tty ubuntu:18.04 bash apt update; apt install -y git nano wget; mkdir t; cd t but instead have one a single-line command. I unsuccessfully tried: docker run --interactive --tty ubuntu:18.04 (bash; apt update; apt install -y git nano wget; mkdir t; cd t) and docker run --interactive --tty ubuntu:18.04 "bash; apt update; apt install -y git nano wget; mkdir t; cd t"
Make that a bash command that ends with a final call to bash so that you get an interactive subshell:
docker run --interactive --tty ubuntu:18.04 bash -c "apt update; apt install -y git nano wget; mkdir t; cd t; exec bash"
exec is necessary to make the new bash the container's main process, which is recommended (it will get interruptions sent to the container). This said, you should put the apt calls in a Dockerfile and generate a derived image that you can start directly with your interactive bash:
FROM ubuntu:18.04
RUN apt update && apt install -y git nano wget
RUN mkdir /somedir
WORKDIR /somedir
Do this once and for all (or until you want newer versions):
docker build -t testbuild . # done once for all
and
docker run -it testbuild # bash already in /somedir
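The effect of exec can be demonstrated without docker at all: it replaces the shell instead of forking a child, so the PID is preserved — which is exactly why the final bash stays the container's main process. A small sketch (the trailing true keeps the shell from exec-optimizing the last command on its own):

```shell
# Without exec: the inner shell is a forked child, so the two PIDs differ.
without_exec=$(sh -c 'echo $$; sh -c "echo \$\$"; true')

# With exec: the inner shell replaces the outer one, so the PID stays the same.
with_exec=$(sh -c 'echo $$; exec sh -c "echo \$\$"')
```

Each variable holds two PID lines; they differ in the first case and match in the second.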
How can I write a single command line that launches a new docker container with interactive bash and executes a few commands in it?
1,385,407,160,000
I want to get rid of two first lines generated after the usage of gpg -d file.txt.gpg, meaning that only text itself would be left. I tied to use --no-comment, but it seems to not work. gpg: encrypted with 2048-bit RSA key, ID 4FXXXXXXXXD30D52, created 2020-01-22 "test test <[email protected]>" test test444
gpg --quiet -d file.txt.gpg (or -q)
GPG - remove header from decrypted text
1,385,407,160,000
$(run-parts --list --reverse --regex '^KeePassXC-[[:digit:]]+\.[[:digit:]]+\.[[:digit:]]-x86_64\.AppImage' bin | head -n 1) (The purpose of the above command is to run the most recent version of KeePassXC found in ~/bin; I don't want to have to modify the startup entry each time I have to upgrade KeePass) The above command seem to work if run from the terminal. However, if I press Alt+F2 to get the GUI 'run command' box, I'm getting errors: Failed to execute child process "$(run-parts" (No such file or directory) Even more importantly, similar problem happens if I try to enter this command to Startup Applications: Could not execute '$(run-parts --list --reverse --regex '^KeePassXC-[[:digit:]]+\.[[:digit:]]+\.[[:digit:]]-x86_64\.AppImage' bin | head -n 1)' Failed to execute child process "$(run-parts" (No such file or directory) I thought that maybe my mistake is that Startup Applications / Run command GUI box run the commands from / rather than ~? So I "fixed" my command, namely I removed bin and replaced it with /home/m/bin BUT this did not help. Why am I failing to execute this command from any place BUT the terminal?
The error message: Could not execute '$(run-parts --list --reverse --regex '^KeePassXC-[[:digit:]]+\.[[:digit:]]+\.[[:digit:]]-x86_64\.AppImage' bin | head -n 1)' Failed to execute child process "$(run-parts" (No such file or directory) This indicates that the box that you're typing the command into is not a shell, and that it's instead trying to execute the literal command. A shell would have parsed the command, expanded the command substitution, run the pipeline with run-parts and head etc. Since the box is not a shell, and since the command is not a simple command (a command, possibly with options), you will have to invoke your original command with a shell. You can do that from the GUI box with the command sh -c 'your command' where your command is the command that you initially tried to run (with single quotes replaced by double quotes, because the single quoted argument to sh -c can't contain other single quotes).
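A runnable miniature of the same idea — hand the whole pipeline to sh -c as one argument. The file names mirror the question, but ls | sort -r | head stands in for run-parts so the sketch has no extra dependencies:

```shell
# A couple of fake AppImage versions to pick between.
mkdir -p appbin
touch appbin/KeePassXC-2.6.1-x86_64.AppImage appbin/KeePassXC-2.7.4-x86_64.AppImage

# This single string is what a non-shell launcher could safely execute.
picked=$(sh -c 'ls appbin | sort -r | head -n 1')
```

Beware that plain lexical sort -r misorders 2.10 against 2.7; GNU sort's -V flag, where available, sorts version numbers properly.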
Why can I only run this command in the terminal, and not in startup commands nor in the Run command GUI box?
1,385,407,160,000
I want to enter spark-shell using a shell script and then execute the commands below
cat abc.sh
spark-shell
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
val df = sqlcontext.read.json("/file/path")
I am able to enter the spark-shell Scala prompt, but the next two commands are not run. Otherwise, kindly let me know how I can run a sequence of spark commands in scala automatically using a shell script
You can’t start a sub shell and just list commands in the manner you have attempted. Presumably the shell is waiting for input from you. Broadly speaking, you have two routes you can go down. You would either need to feed spark-shell a file containing the commands you want it to run (if it supports that) or make use of input redirection. This answer addresses the latter option via a heredoc. Amending your existing script as follows will probably do the trick. spark-shell << EOF val sqlContext = new org.apache.spark.sql.SQLContext(sc) val df = sqlcontext.read.json("/file/path") EOF
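The heredoc mechanism itself is easy to test with the shell standing in for spark-shell — any interpreter that reads commands from stdin can be fed this way:

```shell
# Feed a short script to a sub-interpreter via a heredoc; the quoted
# EOF keeps the outer shell from expanding anything inside it.
out=$(sh << 'EOF'
x=40
y=2
echo $((x + y))
EOF
)
```

With an unquoted EOF, the outer shell would expand its own variables into the text before the inner interpreter sees it — handy for parameterizing the fed commands.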
How to run sequence of spark command through bash
1,574,835,605,000
I have to remove all the files in a folder whose file names have fcrjlog-11-21-2019-1.txt format. I want to remove all the files having this kind of filename in a folder.
find . ! -type d -name 'fcrjlog-??-??-????-?.txt' -delete (replace -delete with -exec rm -f {} + if your find doesn't support the non-standard -delete extension). ? is the wildcard operator that stands for any single character. Replace with [[:digit:]] to only match on decimal digit characters (0123456789). ! -type d excludes the files of type directory (which -delete could not remove unless they were empty anyway), you can replace with -type f to be even more restrictive (only include regular files to the exception of all other types of files including symlink, directory, socket, fifo, device...). GNU find also supports -xtype f to select the files that are determined to be regular after symlink resolution. Replace fcrjlog with * to match on any number of characters, or ?* for any non-empty sequence of characters, or [!.]* for any non-empty sequence of characters the first of which is not . (to exclude hidden files).
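A quick dry run of the pattern in a scratch directory, using the portable -exec rm form mentioned above (-delete is an extension not every find has):

```shell
# Create matching and non-matching files, delete the matches, and see
# what survives.
dir=$(mktemp -d)
touch "$dir/fcrjlog-11-21-2019-1.txt" \
      "$dir/fcrjlog-01-02-2020-3.txt" \
      "$dir/keep-me.txt"

find "$dir" ! -type d -name 'fcrjlog-??-??-????-?.txt' -exec rm -f {} +
left=$(ls "$dir")
```

Only keep-me.txt remains, confirming the ?-per-digit pattern matches exactly the timestamped names.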
Delete all files in a folder having timestamp in filename
1,574,835,605,000
I am new to shell scripting. I am modifying an existing shell script in which I have to create a dynamic html content and assign that content in a variable and use this variable value to replace in a template inside shell(Linux). I am using below code snippet its working fine when html content is less but same is failing if the content is large. How to fix this. encStr="$(cat ./dynamiccontent.html | base64)" echo $encStr awk -v var="$encStr" '{gsub("REPLACECONTENT", var, $0); print}' /path/tomytemplate > output.tmp
@mosvy already provided you with a good answer. The short morale of the story is: it is not a good idea to use shell variables to store data that is not validated for length. It is also not a good idea to use shell variables or shells in general at all, because they are deranged programming languages. However, if you absolutely must store the contents in a shell variable, you can also try this crazy stunt, presented step-by-step: Create a temporary file with all your contents via process substitution. Rely on the fact that shell built-ins are exempt from the normal subprocess argument limit. awk -v patternFile=<( printf "$encStr" ) Using AWK's cumbersome ways (it's a typical line-oriented Unix tool, after all), read the whole temporary file into the AWK variable "contents", by splitting it into lines first and then reconstructing it using string catenation, adding any newlines that were removed by the split. awk -v substitutionFile=<( printf "$encStr" ) 'BEGIN {while ((getline line <substitutionFile) > 0) { contents = contents line "\n"}}' Then, in the normal line-oriented manner you already used to determine the tokens to be substituted in your template file, carry out the substitution: awk -v substitutionFile=<( printf "$encStr" ) 'BEGIN {while ((getline line <substitutionFile) > 0) { substitutionString = substitutionString line "\n"}} {gsub("REPLACECONTENT", substitutionString, $0); print}' /path/tomytemplate > output.tmp Now the above solution is ugly... But by the standards of *nix shells, not ugly enough! If already using a *nix shell, why not go all the way and use a heredoc? This way, you run less of a risk of the printf command being redefined to e.g. 
a function (it is on my system, failed in production code :D), but invoke the risk of your file containing heredoc delimiters: awk -v substitutionFile=<( cat <<HOPEFULLY_UNIQUE_HEREDOC_DELIMITER $encStr HOPEFULLY_UNIQUE_HEREDOC_DELIMITER ) 'BEGIN {while ((getline line <substitutionFile) > 0) { substitutionString = substitutionString line "\n"}} {gsub("REPLACECONTENT", substitutionString, $0); print}' /path/tomytemplate > output.tmp Note that variable substitutions within a heredoc count as quoted, so you should not quote $encStr, otherwise, your substitution string will contain the quotes as well! This is the 1749203-rd rule of shell substitution. In other contexts, the main rule is simple: you use double quotes for both variable and command substitutions, or you die. In this case, it does not matter that cat is not a built-in, since it only gets its stdin redirected to the temporary file created by the heredoc, which itself does not have that length limitation, since there is no argument passing involved.
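If the goal is simply "substitute a large blob into a template", the shell variable can be skipped entirely by letting awk read the replacement from a file — a minimal sketch with hypothetical file names (gsub's usual caveats about & and backslashes in the replacement text still apply):

```shell
# First file: slurped into an awk variable. Second file: the template.
printf 'hello\nworld\n' > repl.txt
printf 'before REPLACECONTENT after\n' > template.txt

out=$(awk '
  NR == FNR { contents = contents $0 "\n"; next }  # slurp repl.txt
  { gsub("REPLACECONTENT", contents); print }      # rewrite template
' repl.txt template.txt)
```

The NR == FNR idiom distinguishes the first input file from the second, so no shell variable — and no argument-length limit — is involved at all.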
How to resolve "Argument list too long"
1,574,835,605,000
I need an at command timestamp that runs some command on a given day monthly — for example, every 15th of the month:

$ at every 15 day

So that on every 15th it would run some command. How would I set it?
As pointed out in the comments, cron is the right tool to do so. at is used to run a command at a specified time and date but only once. Just add this line to /etc/crontab: 0 7 15 * * youruser /path/to/somecommand This runs the specified command at 7:00 AM every 15th of the month. For more information, see the manpages: man cron man crontab
Need an "at" Command Timestamp That Runs a Command Monthly on a Given Day
1,574,835,605,000
I am trying to increment the version number 5.34.03 by using the following command:

$ awk '{print 5.34.03 + 0.0.1}'
5.340.030.1

Expected output:

5.34.13

I tried many methods, e.g. let, but it didn't work.
Assuming your minor version and patch version numbers should be 2 characters, you can use this awk script:

parse.awk

BEGIN { FS = "[ .]"; OFS = "." }

function tonum(s)
{
    if( length(s) < 2 )
        s *= 10
    return s
}

function tover(n)
{
    if( n < 10 )
        n = "0" n
    return n
}

{
    print $1 + $4, tover( tonum($2) + tonum($5) ), tover( tonum($3) + tonum($6) )
}

Run it like this:

echo 5.34.03 0.0.1 | awk -f parse.awk

Output:

5.34.13
Add three dot floating numbers in shell
1,574,835,605,000
I'd like to print only the "Mem:" line of the output using the plink command.

plink -batch [email protected] -P 22 -pw test@123 (free;)   --> working

             total       used       free     shared    buffers     cached
Mem:       8182004    7137528    1044476          0     284648    4852520
-/+ buffers/cache:    2000360    6181644
Swap:     16386260        188   16386072

plink -batch [email protected] -P 22 -pw test@123 (free|grep "Mem:";)   --> not working

The above command does not print the output and terminates without any error. What's wrong in the syntax?
There is no reason to run the grep remotely. plink -batch [email protected] -P 22 -pw test@123 free | grep "Mem:" Note that you should not give the command to plink inside a subshell, ( ... ). I don't know anything about Windows' cmd.exe, but you could also try plink -batch [email protected] -P 22 -pw test@123 sh -c "free | grep 'Mem:'"
plink command to print free|grep "Mem:"
1,574,835,605,000
I'm running a tcsh shell. I have questions about the -l option in set -l. When I look at man for the set command, I don't see the -l argument. When using tcsh shell, what does the -l argument mean? Where & how can I find this info?
You're looking in the wrong place. This is from the tcsh(1) manual page:

set
set name ...
set name=word ...
set [-r] [-f|-l] name=(wordlist) ... (+)
set name[index]=word ...
set -r (+)
set -r name ... (+)
set -r name=word ... (+)

The first form of the command prints the value of all shell variables. Variables which contain more than a single word print as a parenthesized word list. The second form sets name to the null string. The third form sets name to the single word. The fourth form sets name to the list of words in wordlist. In all cases the value is command and filename expanded. If -r is specified, the value is set read-only. If -f or -l are specified, set only unique words keeping their order. -f prefers the first occurrence of a word, and -l the last.

So set -f list=(foo bar baz foo) will set list to (foo bar baz), but set -l list=(foo bar baz foo) will set list to (bar baz foo), and set list=(foo bar baz foo) will set it to (foo bar baz foo), keeping duplicates. The only difference is how it handles duplicates in the word list.

This feature is not present in the classical/real csh, which is now unencumbered open-source itself and (if I'm not mistaken) the default csh on many systems.
What does the -l argument mean in tcsh?
1,574,835,605,000
I have a big text file I want to print only the first 4 and the first 5 and the first 8 characters of each line in one command line. For example I have the lines: 123456789ab ABCdefgih55 So the output have to be: 1234 ABCd 12345 ABCde 12345678 ABCdefgh
for len in 4 5 8; do
    cut -c "1-$len" file
done

This uses cut -c repeatedly to cut out the first part of each line of the file called file. The length of the cut-out bit depends on the loop variable len. If you're strict about that "one line" criterion:

for len in 4 5 8; do cut -c "1-$len" file; done

Or, as an easy-to-use shell function:

cut_to_lengths () {
    file=$1; shift
    for len do
        cut -c "1-$len" "$file"
    done
}

Using it:

$ cut_to_lengths file 4 5 8
1234
ABCd
12345
ABCde
12345678
ABCdefgi

In comments you specify that you don't want to output lines if they are shorter than the cut length. To do this, we can change the cut command into an awk command:

awk -v len="$len" 'length >= len { print substr($0, 1, len) }'

Replace the cut -c "1-$len" with the above awk command in the code above.
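If the file is large and you'd rather read it only once, here is a one-pass awk sketch. It assumes the lengths are passed space-separated in an awk variable named lens (my name, not from the original answer) and buffers each group in memory to preserve the grouped output order; the sample input is inlined with a heredoc for demonstration:

```shell
awk -v lens="4 5 8" '
    BEGIN { n = split(lens, L, " ") }
    {
        # Collect the prefix of this line for every requested length,
        # skipping lines shorter than that length.
        for (i = 1; i <= n; i++)
            if (length($0) >= L[i])
                out[i] = out[i] substr($0, 1, L[i]) "\n"
    }
    END {
        # Print group by group: all 4-prefixes, then 5, then 8.
        for (i = 1; i <= n; i++)
            printf "%s", out[i]
    }
' <<'EOF'
123456789ab
ABCdefgih55
EOF
```

This prints 1234, ABCd, 12345, ABCde, 12345678, ABCdefgi — the same as running cut once per length, at the cost of holding all the prefixes in memory.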
Print only the multiple first characters
1,574,835,605,000
Let's assume that I have these files: /1/tEst.mp4 /1/Test.mP4 /1/subdirectory/TEST2.mp4 /1/.20181106Test2.mp4 How can I copy all of these files into /2/Videos with a single command line? All files that end with “mp4” and have “test” inside the name should be included. Case-insensitive, if possible. I could use the file explorer to search for all files named “test” and filter by video, but is there any way to do it from the terminal?
This seems doable in bash — note that nocaseglob (not nocasematch, which only affects [[ ]] and case) is the option that makes pathname expansion case-insensitive, and that these are shopt options rather than set -o options:

shopt -s nocaseglob dotglob globstar
cp /1/**/*test*.mp4 /2/Videos/

dotglob lets the glob match the hidden .20181106Test2.mp4, and globstar enables the recursive **.
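A find-based alternative avoids changing shell options — a sketch (-iname does the case-insensitive match; cp -t, which takes the target directory first, is GNU-specific). Here demonstrated in a scratch directory mirroring the question's layout:

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/1/subdirectory" "$tmp/2/Videos"
touch "$tmp/1/tEst.mp4" "$tmp/1/Test.mP4" \
      "$tmp/1/subdirectory/TEST2.mp4" "$tmp/1/.20181106Test2.mp4"

# Copy every regular file whose name contains "test" (any case)
# and ends in .mp4, from anywhere under 1/, into 2/Videos/:
find "$tmp/1" -type f -iname '*test*.mp4' -exec cp -t "$tmp/2/Videos/" {} +

ls -A "$tmp/2/Videos"
```

As with the glob version, files with the same name in different source directories overwrite each other in the destination.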
How to copy all files in all directories with specific filename to one destination?
1,574,835,605,000
I need to create multiple directories using one command.  Is it possible? Like directory name starts from 1-100. mkdir 1..100
You don’t need a (for...do...done) loop; just do mkdir $(seq 1 100) Or, in bash, mkdir {1..100}
How to create multiple directories named 1-100 in one command
1,574,835,605,000
I have Debian Stretch. I thought that a regular user could not kill a root process. I was using the command /bin/kill 10733 as the user, and there wasn't any error message. Then I typed a similar command (without the /bin/ prefix) and an error message was displayed. Here is the command history:

deployer@deployer:~/blog$ kill 10733
-bash: kill: (10733) - Operation not permitted
deployer@deployer:~/blog$ /bin/kill 10733
deployer@deployer:~/blog$ ps aux|grep 10733
root     10733  0.0  0.6  92360  6404 ?        Ss   19:19   0:00 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
deployer 10891  0.0  0.2  12732  2236 pts/0    S+   20:10   0:00 grep 10733
deployer@deployer:~/blog$ sudo kill 10733
deployer@deployer:~/blog$ ps aux|grep 10733
deployer 10900  0.0  0.2  12732  2232 pts/0    S+   20:10   0:00 grep 10733
deployer@deployer:~/blog$

In fact, I would prefer both the prefix and the message. How do I get that? And why is it so?
In your example, the kill command is a shell internal (definitely in bash) and it allows for extensions such as %1 to be used to refer to background processes. On the other hand, /bin/kill is an external command and doesn't have these extensions. Since it's a different program it acts differently. A failed /bin/kill may be silent, but sets $? (exit code) to indicate a failure. When you run sudo kill you are implicitly running sudo /bin/kill.
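You can see both variants on a given system with bash's type builtin (the external path may be /bin/kill or /usr/bin/kill depending on the distribution):

```shell
# "type -a" lists every form of the name, the builtin first:
type -a kill
# typically prints something like:
#   kill is a shell builtin
#   kill is /bin/kill
```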
The /bin/ prefix to kill command is getting rid of the error message
1,574,835,605,000
I'll try to be specific and clear. I have a file: log.txt it contains multiple strings that I search to print and count each of them. This is my command, only print columns coincidences in the log.txt file: sed -n '1p' log.txt | awk '{ s = ""; for(i = 25; i <= NF; i++) s = s $i "\n"; print s}' Explanation sed -n '1p' //prints the first line awk '{ s = ""; for(i = 25; i <= NF; i++) s = s $i "\n"; print s}' //prints the next columns from the number 25 column Input: Column25 Column26 Column27 ColumnN <--#first filter:I need obtain specific headers. ColumnN Column25 Column27 ColumnN Column26 Column27 <--#Count how many times is repeat every string in whole file Output: Column25 Column26 Column27 Column28 Column29 ColumnN I try to do: From the previous output I want to count all the coincidences in the same file file.log but in the same command: sed -n '1p' log.txt | awk '{ s = ""; for(i = 25; i <= NF; i++) s = s $i "\n"; print s}' and send again to the output like: Desired Output: Column25 - n times Column26 - n times Column27 - n times Column28 - n times Column29 - n times ColumnN - n times PS. I've thinking in use the same variable "$s" in the for loop to start a search, but is not working.
Here's how I'd approach this problem: awk '{n=1;if(NR==1)n=25;for(i=n;i<=NF;i++) a[$i]++} END{for(val in a) print val,a[val]}' input.txt The fact that you want to capture fields 25 and after in the first line, requires us to check NR variable, and set n variable which will be used in the loop. As for a[$i]++ that will be an associative array with fields being keys and values within the array will be their count incremented via ++ operator. This is a very typical method for counting fields in awk.
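Here is the counting idiom in isolation, on a toy input (piped through sort, because awk's for (val in a) visits keys in an unspecified order):

```shell
printf 'a b c\nb c d\n' |
    awk '{ for (i = 1; i <= NF; i++) a[$i]++ }       # count every field
         END { for (val in a) print val, a[val] }' |
    sort
# expected (sorted):
# a 1
# b 2
# c 2
# d 1
```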
Count every string of awk output search in a file
1,574,835,605,000
I know that I can simply substitute a string with another in the previous command by typing: !!:gs/string1/string2/ But how I can perform multiple substitutions, e.g. having a command: echo "AAAAAAAAAAAAAAAAA" > test1 I want to substitute A with B and 1 with 2, so execute such a command: echo "BBBBBBBBBBBBBBBBB" > test2 How can I do it with !! operator?
$ echo "AAAAAAAAAAAAAAAAA" > test1 $ !!:gs/A/B/:s/1/2/ echo "BBBBBBBBBBBBBBBBB" > test2 That is, just add the second substitution to the end of the first. Just be aware that the second substitution will act on the result of the first.
Multiple substitution when repeating the previous command
1,574,835,605,000
I have a requirement to append timezone value in the below format at the end of milliseconds, an example follows: 2018-01-07T14:30:03.832-0700 I need the Unix command to get the required format.
It is possible to get the ISO 8601 time format with GNU date:

$ date -Iseconds
2018-07-01T06:57:25-07:00

However, to get the time with milliseconds in exactly the format you asked for, you need to specify the full format string. Try:

$ date +'%Y-%m-%dT%H:%M:%S.%3N%z'
2018-07-01T06:57:28.457-0700
Date format command [closed]
1,574,835,605,000
I'm trying to filter the most used commands and print that out in a certain way. So far, I've managed to put the desired "filters": $ history | tr -s ' ' | cut -d ' ' -f3 | sort | uniq -c | sort -n | tail | awk '{ printf "%s%20s\n", $2, $1 }' ...but I can't get the output correctly. I would like to be able to display the final output like: checkupdates 16 ▄▄▄ find 16 ▄▄▄ ./gradlew 17 ▄▄▄ ./rebar3 21 ▄▄▄▄ nix-env 24 ▄▄▄▄ cd 26 ▄▄▄▄▄ docker 33 ▄▄▄▄▄▄ rebar3 43 ▄▄▄▄▄▄▄▄ sudo 46 ▄▄▄▄▄▄▄▄▄ flatpak 56 ▄▄▄▄▄▄▄▄▄▄▄ I want to use awk or printf, but I can't figure it out how to format the output. Also, it's tricky to manage the space between the command(s) and the next column (the usage numbers) ‒ the third one is just one space away from the second one. PS: The scale for the ▄ can be anything.
Following the OP's approach, replace awk by Perl -- perl -ae is very similar to awk...

... | perl -ae 'printf "%-20s %d %s\n", $F[0], $F[1], "▄"x$F[1]'
aa                   12 ▄▄▄▄▄▄▄▄▄▄▄▄
bb                   23 ▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄

Edit: with awk, you could run something along the lines of

... | awk '{printf "%-20s %d %.*s\n",$1,$2,$2,"▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄"}'

printf function formats (used in C, awk, Perl, the printf command, etc.) can be a little tricky. Here are some examples with strings:

"%-20s", str    - width=max(20,len(str)), align=left
"%20s", str     - width=max(20,len(str)), align=right
"%.20s", str    - width=min(20,len(str)), truncates if len>20
"%20.20s", str  - width=20, truncates if len>20, align=right
"%*s", 30, str  - is printf("%30s", str)
"%.*s", 30, str - is printf("%.30s", str)
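The %.*s trick needs a pre-typed bar at least as long as the largest count; a variant that builds the bar in a small awk helper function (my function name, shown here on inlined toy input) has no such limit:

```shell
printf 'aa 12\nbb 23\n' |
    awk 'function bar(n,    s) {   # s is local (extra-argument idiom)
             while (n-- > 0)
                 s = s "▄"
             return s
         }
         { printf "%-20s %d %s\n", $1, $2, bar($2) }'
```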
Pass arguments from previous commands (pipes) to awk/printf function and format output
1,574,835,605,000
how to capture the java version on linux redhat machine we tried that: java -version | head -1 | awk '{print $NF}' | sed 's/"//g' but we get openjdk version "1.8.0_65" OpenJDK Runtime Environment (build 1.8.0_65-b17) OpenJDK 64-Bit Server VM (build 25.65-b01, mixed mode) while expected output should be 1.8.0_65
java -version outputs to its standard error, so you need to redirect that: java -version 2>&1 | head -1 | awk '{print $NF}' | sed 's/"//g' You can do this with a single AWK invocation: java -version 2>&1 | awk 'NR==1 {gsub("\"", "", $NF); print $NF}'
how to capture the java version on linux machine
1,574,835,605,000
I know how to replace a line with another line or several other lines with tools like sed. But is there an easy way, to replace a line in a file with the whole content of a second file? So, let's have an example. I have a file called file1.txt: A 1 B 2 C 3 And I have a second file file2.txt: line 1 line 2 line 3 Now, I want to replace line 2 with the whole content of file1.txt, so in the end, it should look like this line 1 A 1 B 2 C 3 line 3 One way I could think about would be something like this: sed -i "s/line 2/$(cat file1.txt)/g" file2.txt. But then I also have to check some special characters like / and maybe more. I have to assume, that every possible readable character could be in file1.txt. So, back to my question: Is there an easy way, to replace a line in a file with the whole content of a second file? It doesn't have to be sed. It could be also another tool, if it could do the job better...
sed -e '/^line 2$/{r file1.txt' -e 'd;}' file2.txt

The sed script is

/^line 2$/{
r file1.txt
d
}

The newline after the filename file1.txt is mandatory, so splitting it up into separate -e expressions on the command line makes it arguably more readable than

sed '/^line 2$/{r file1.txt
d;}' file2.txt

The script looks for a line whose content is line 2. When this is found, the contents of file1.txt is immediately outputted and the original line deleted.

Using sed -i will make the changes in-line in file2.txt (not recommended).
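For comparison, an awk equivalent of the same replace-line-with-file logic (a sketch: getline streams the insert file at the match, close() resets it so repeated matches work, and the trailing 1 prints all other lines unchanged), demonstrated with the question's files recreated in a scratch directory:

```shell
cd "$(mktemp -d)"
printf 'A 1\nB 2\nC 3\n'          > file1.txt
printf 'line 1\nline 2\nline 3\n' > file2.txt

awk '/^line 2$/ {
         while ((getline l < "file1.txt") > 0) print l
         close("file1.txt")
         next
     }
     1' file2.txt
# line 1
# A 1
# B 2
# C 3
# line 3
```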
Replace a line with the full text of a second file [duplicate]
1,574,835,605,000
I am trying to use the du command to find the total size of multiple folders and print the total in a single line. It works but I would like to sum them up automatically and print the total size. This is what I am doing currently: du -h -c directory_1|tail -1 && du -h -c directory_2|tail -1 160K total 35M total How do I get to print the sum of both the directories. Thank you in advance!
The -s option will just show the total. From man du: -s, --summarize display only a total for each argument Without -s, du will also list each sub-directory individually. In combination with this, you can just use both directories as arguments. du -shc directory_1 directory_2
Sum of two different folders
1,574,835,605,000
I need to increment alphanumeric data. Increment numbers with seq is easy just: seq -w 0000001 9999999 >> file But I need to increment alphanumeric data in order like this: 0000001 0000002 0000003 0000004 0000005 0000006 0000007 0000008 0000009 000000a 000000b 000000c 000000d 000000e 0000010 0000011 0000012 0000013 0000014 0000015 0000016 0000017 0000018 0000019 000001a 000001b 000001c 000001d 000001e 0000020 0000021 etc... until I hit eeeeeee Using alphanumeric data {0-9a-e}. Just need to load the data in an empty file and done. Is there an easy bash command for this something similar to the seq command? I'm using Linux Debian 6.3.0-18 and Bourne Again Shell.
Assuming you really mean hex (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F) here is a solution up to FF (I don't want to have to count to 4.3 billion):

(echo obase=16; seq 1 "$( (echo ibase=16; echo FF) | bc )") | bc

Note the space after the first $( — without it, the shell would parse $(( as the start of an arithmetic expansion. The inner (echo ibase=16; echo FF) | bc calculates the ending value in decimal (here FF, but feel free to substitute FFFFFFFF if you want :-). The seq then counts from one to 255 in this case, and the rest converts it to hex.

And if you really want base 15, you can change both 16's to 15's (and the FF... to EE...).
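If you also want the seven-digit zero padding from the question's seq -w example, a plain bash loop with printf's %x conversion is an alternative sketch (lowercase hex, no bc needed — though note this is hex, i.e. base 16, not the 0-9a-e set asked about):

```shell
# Zero-padded, 7-digit, lowercase hex from 1 to 0xff:
for ((i = 1; i <= 0xff; i++)); do
    printf '%07x\n' "$i"
done
```

The first line is 0000001 and the last is 00000ff; raise the bound as needed, and use %07X for uppercase.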
Bash increment alphanumeric data from command prompt
1,574,835,605,000
I use the cmus console music player through ssh on a device with OSMC (Kodi, based on Debian Jessie) installed. My problem is that the sound is played to the HDMI, and I want to play music to the jack output. I tried to use alsamixer, amixer, aplay, etc... but these are not installed and sudo apt-get install alsamixer doesn't help. (Package not found - Maybe there's a better option than installing alsamixer anyway). I tried to read man cmus, which seems to offer the possibility to change some alsa settings : link to the online manual page, but I don't understand which settings are relevant for me, nor which values to put... Anyway : cmus is maybe not the source of my issue. How can I achieve what I want to, using only the terminal (I am through SSH) ? -- PS : I finally installed alsamixer (actually the package's name was alsa-utils). And results that my jack output is not recognised. But I know it is working since other programs use it.
This 2018 article explains how you can dynamically switch the audio output between hdmi and analogue on a Raspberry Pi. From the command line use amixer cset numid=3 2 for hdmi and amixer cset numid=3 1 for analogue. This information is no longer in the current version of that page, so may no longer work. The May 2020 blog says they have changed the sound architecture to handle the hdmi and analogue output as 2 independent devices: Alsa card 0 will be HDMI, and card 1 will be the headphone jack. The default is 0, but to use 1 you can create a ~/.asoundrc file with defaults.pcm.card 1 defaults.ctl.card 1 This presumably requires you to login again. You can generate a stereo test tone with another command from alsa-utils: speaker-test -c 2 -s 1 -t sine -f 440
Set audio output using command line
1,574,835,605,000
I'm building a command-line utility that requires six pieces of information to work correctly. It looks like this: fm-git filename repository path comment username password However, on any individual system, username and password will be constant. When executing the utility, I'm finding it difficult to build. For example, here's one test call to the utility (broken into multiple lines for readability): /Users/chuck/Projects/fm-git/fm-git.py chiv-lib /Users/chuck/Projects/chiv-lib/ Chivalry/ "continued testing" Administrator abc1234 I'm considering different ways to pass these arguments. For example, fm-git -f filename -r repository ... or fm-git --filename filename --repository repository.... I'm also considering making the username and password arguments configuration settings, since they generally won't change, and they could then be eliminated from the utility call. When utility arguments are many but required, what is the accepted practice for maintaining utility call readability?
In general it's a good idea to:

Calculate dependent arguments, but allow the user to redefine them: for example, in your case filename is the same as $(basename repository), so you may require only repository, but have an option --filename to provide an alternative filename.

Hide auth from the command line and ps output. Put the credentials in some file, maybe $HOME/.fm-git.conf, give the file more restrictive permissions like chmod 600 $HOME/.fm-git.conf, and read them from there. Sometimes it is also an option to get the username and password from environment variables (e.g. defaulting the username to your system username or SUDO_USER), but maybe that's not your case.

After those two optimisations you have only 3 CLI parameters, which is acceptable I think. Feel free to either use them as positional parameters or provide flags like --comment: long flags will enhance readability if you use your program in scripts, but will require more typing if you run it mostly by hand from the CLI.

Anyway, as you write your program in Python, I recommend the argparse module -- it will help you parse parameters and make future changes easier.
What is the recommended interface to a utility that requires many parameters? [closed]