1,543,536,286,000
I had to use the following command in an assignment: who|tee test|wc -l The output on my system is below, which implies that 2 users are connected: 2 Why is it I don't get the output of who on the screen, and right after that the output of wc -l? I thought tee wrote the output to the screen and created a file with the same output at the same time? The output of who is however found in my file "test", but still it doesn't make sense to me.
The effect of piping to tee is that whatever your first command writes to its standard output is written to a file (whose name you passed as a command-line argument to tee) as well as written to the standard output of the tee command. If the pipeline doesn't continue and you don't perform any redirections on the tee command, then tee's standard output is that of your shell, usually your terminal. That's why running who and running who | tee test show the same text on your terminal. The difference with tee is that you also write it to a file. If the pipeline continues, as with who | tee test | wc -l, then whatever tee would have written to the terminal is sent to the next command in the pipeline instead. This is the wc command and, unlike tee, wc does not copy its input to standard output (or anywhere). Instead it shows statistics. With the -l option it shows just line counts, so that's all you see. So the reason you see just 2 from who | tee test | wc -l is the same as the reason you see just 2 from who | wc -l. The tee command writes the output of who to a file, but it does not cause it to be printed to the terminal unless its standard output is the terminal. By default it usually is, but not when you pipe it to yet another command. If you've seen a command on the left side of a | whose output is displayed on the terminal rather than being used as input to the next command in the pipeline, then it likely was writing to standard error instead of standard output.
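A minimal reproduction of this behaviour, with printf standing in for who (out.txt is just a scratch file name):

```shell
# tee in the middle of a pipeline: the file receives the full text,
# but the terminal only shows what the *last* command prints.
cd "$(mktemp -d)"                             # scratch directory
printf 'alice\nbob\n' | tee out.txt | wc -l   # prints 2, like who | tee test | wc -l
cat out.txt                                   # the full text is still in the file
```

Drop the `| wc -l` and tee's standard output goes to the terminal, so the text appears twice: once on screen and once in the file.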
Strange output when using tee in pipe command
1,543,536,286,000
I am downloading a couple of thousand files listed in a file using: wget -i filename Sometimes it encounters the following error reported by the server for a particular file: HTTP request sent, awaiting response... 500 Internal Server Error The problem is that wget then hangs. What I want is for it to skip that file and continue downloading the rest of the list. How can I do that?
Try using wget --tries=1 --waitretry=1 -i filename. This makes wget try each file only once and wait only one second before any retry. It's also possible the server is not closing the socket after sending the 500 error. In this case, adding --read-timeout=30 will time out the connection after 30 seconds of no data from the server. See the manual for details.
Prevent `wget` hanging when it encounters error 500
1,543,536,286,000
I'm trying to detect if a package is installed in a bash script using the following but the script keeps erroring out and preventing anything after it from running. Is there an option for apt that tells it to NOT throw an error when a package is not in the list? pkgExists=$(apt list "azure-cli" | grep "azure-cli" -s)
If a package is not in the list, apt list just shows Listing... Done and exits. If you pipe its output as you do, however, it prints a clear warning: WARNING: apt does not have a stable CLI interface. Use with caution in scripts. Use dpkg-query --list instead, e.g.: dpkg-query --list "azure-cli" && echo "exists" || echo "doesn't exist" Note that dpkg-query --list doesn't show packages that are not installed.
Prevent apt list from throwing error [duplicate]
1,543,536,286,000
I am using these commands in Linux Kali but I keep getting an error when I run the second command: "No such file or directory found." end=7gb read start _ < <(du -bcm kali-linux-1.0.8.amd64.iso | tail -1); echo $start parted /dev/sdb mkpart primary $start $end These are some commands out of a larger set of commands I am using to try to get persistence. I do not actually know what any of these mean. My request is for an explanation of what each command does so I can fix my errors.
read start _ This assigns the first word (split according to $IFS) of the input line to the variable start; the rest of the line goes into the throwaway variable _. du -bcm kali-linux-1.0.8.amd64.iso | tail -1 is a roundabout way of getting the size of the file, rounded up to the next megabyte. parted /dev/sdb mkpart primary $start $end creates a partition on sdb which begins after the space necessary for the iso file (assuming the default unit for parted is the megabyte, which I have not checked) and ends at 7GB.
What do these commands do?
1,543,536,286,000
To clarify the intent of my question, let me make an analogy to the question "What is the simplest way to put data into a file?" The usual way that GUI users will put data into a (new) file is to double click on a program icon, click the menu bar, click "new," click "save," click to choose a location for the file, type the name of the file, and click the "save" button. The simplest way to put data into a file (from the command line) is: echo whatever > file As I understand it, email addresses originally referred to actual usernames on machines and actual machine names. So if the IP of the machine you logged into (say, at a university) was 7.7.7.7, and if you logged in with the username pete, you could be reached by email sent to [email protected]. (Is that right?) The point is that the email was directly associated with your user name and computer. Hence why an email I received from the command line of a server at work showed as sent from "[email protected]". So, what is the minimal setup needed to send and receive email between two computers (directly to command line user accounts), with no third computer or Google server or MS Exchange or whatever else? (For UNIX and Linux systems, obviously. Mostly interested in Linux, though if Mac is included that would be nice.) Note: If there are a huge number of different ways to do it so this is "too broad," please help me edit the question. I'm not asking for software recommendations, I'm asking for how the parts fit together at the simplest level without proxies and relays and other complexities. Edit: The answers so far are helpful but omit any details on how to receive the email. It seems that the Google search phrase I was missing is "minimal MTA Linux" but if anyone would like to answer more fully I would love it. (If not I'll have to work it out and eventually self-answer.) :)
I'll assume that the two users and their two computers are independent, e.g. that user A can't simply access user B's computer and write files into the filesystem. That means that the minimal config is one where A can connect to the MTA on B's machine, and the MTA considers itself responsible for email to B's machine/domain. This means that when A says it has a message for B, the MTA takes responsibility for securing the message to B's mailbox. Going down a level, this means: A connects to the listener port of B's MTA (traditionally port TCP/25) A identifies the sender and recipient, and B's MTA says ok A passes the message, and B's MTA sends a response that indicates it takes responsibility B's MTA then writes the message to disk (B's mailbox) There are hacky ways past this, which I mention in passing. If A is root on B's machine, A can append a message directly to B's mailbox just by creating/editing a suitable file. For example by editing an mbox file. But that's a kinda pathological case.
What is the simplest way to email between two computers?
1,543,536,286,000
I came across the notion of primaries from running man find: . . . PRIMARIES All primaries which take a numeric argument allow the number to be preceded by a plus sign (``+'') or a minus sign (``-''). A preceding plus sign means ``more than n'', a preceding minus sign means ``less than n'' and neither means ``exactly n''. . . -depth n True if the depth of the file relative to the starting point of the traversal is n. Searching POSIX documentation for "primaries" turned up no results. Doing a little bit of exploring, it looks like primaries are different from switches and flags because they appear after the switches, flags, and main arguments: $ find -depth 1 . find: illegal option -- e usage: find [-H | -L | -P] [-EXdsx] [-f path] path ... [expression] find [-H | -L | -P] [-EXdsx] -f path [path ...] [expression] $ find . -depth 1 ./.DS_Store ./.vagrant ./foo ./some I'm wondering: What are primaries? Is there any documentation I can read about them? How are they different from switches or flags?
They're the conditions/actions of find's language, the ones that the "expression" referred to in the usage line mainly consists of: -name, -type, -print, -exec etc. The term is used to separate them from the operators that only combine the primaries: !, -a, -o and the parentheses. I don't remember seeing that term used in other contexts than find. It's used in the POSIX specification for find and in the FreeBSD man page. GNU stands out in this, too, the documentation for GNU find (e.g. the man page) doesn't use the term, but instead divides the primaries into tests about the properties of the files, actions that do something, and options that affect how find itself works. The division seems helpful but is slightly inaccurate, since all primaries return a truth value, even the actions.
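A small sketch of primaries and operators in action, in a scratch directory with made-up file names:

```shell
# Primaries (-type, -name) test each file; operators (!, implicit -a)
# combine them into the expression find evaluates per entry.
cd "$(mktemp -d)"
mkdir -p demo/sub
touch demo/a.txt demo/sub/b.log
find demo -type f -name '*.txt'     # two primaries ANDed together
find demo -type f ! -name '*.txt'   # ! negates the -name primary
```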
What are command primaries?
1,543,536,286,000
I looked into the integrated manual of the xargs command, where the -I option is explained. And though I read the few lines repeatedly, I can not make any sense of it: -I replace-str Replace occurrences of replace-str in the initial-arguments with names read from standard input. Also, unquoted blanks do not terminate input items; instead the separator is the newline character. Implies -x and -L 1. Could you explain this to me with other words or give me a jump start by an example, which shows the importance of this option?
Here is a quick pair of examples of xargs -I in action: $ echo foo bar baz | xargs -I quux echo quux foo bar baz $ echo -e "foo\nbar\nbaz" | xargs -I quux echo quux foo bar baz -I means "Replace this marker with the newline-separated items coming in from standard input".
What is the purpose of the -I option of the xargs command? [duplicate]
1,543,536,286,000
I recently switched from OSX to Linux for my personal use. I have a home media server running headless Ubuntu, and now a laptop running Mint. My habit is to move things to the server with scp. In the past, on OSX, when I typed an scp target path I would painstakingly type each character of the path, because if out of habit I tried to tab complete the machine would get all cranky and I'd have to start over again. However, I just set up the ssh keys of my new computer and was in the process of scping a file from laptop to server, when I accidentally hit tab, and much to my surprise, it completed the path correctly! Is this normal behavior? Why did it not work in term2 on OSX? Note, I did have an open connection to the server in another terminal window.
This is your shell’s command completion in action: it “knows” that when the current command-line starts with scp, in certain contexts it needs to connect to the target system (if it can) to complete paths there. This can be done transparently because you’ve loaded your key. You’ll see this typically implemented in /usr/share/bash-completion/completions/scp (if you’re using Bash), or /usr/share/zsh/functions/Completion/Unix/_ssh (for Zsh).
scp aware of target machine path?
1,543,536,286,000
In what way does the final value of number differ between being assigned by read var number (where we enter 5) and by number=5? I was making this for loop: read var number #number=5 factorial=1 i=1 for i in `seq 1 $number` do let factorial=factorial*i done echo $factorial when I noticed that if number has its value assigned by the read, rather than by direct assignment, the script doesn't enter my for loop. I'm guessing it's because of the data type.
If you change the first line to read number you’ll get the behaviour you’re looking for. read var number reads two values and stores them in variables named var and number. If you only input one value, seq 1 $number expands to seq 1 which is just 1.
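The splitting is easy to see directly in the shell:

```shell
# read assigns one whitespace-separated word per variable;
# with two variables and only one word of input, the second stays empty.
echo 5     | { read var number; echo "var=$var number=$number"; }
# var=5 number=
echo "5 7" | { read var number; echo "var=$var number=$number"; }
# var=5 number=7
```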
For with read value
1,543,536,286,000
It's for a bash script. Basically, I want to format, or erase a USB (or SD) storage device; with a single command line. I was going to use fdisk, but it seems to require user interaction where I want automation. So then I decided to try zeroing it out with: dd if=/dev/zero of=/dev/<target disk>; but it only seems to zero 2.0 GB of vacant, or unused disk space. root@linux:~# dd if=/dev/zero of=/dev/mmcblk0 dd: writing to '/dev/mmcblk0': No space left on device 3842249+0 records in 3842249+0 records out 1967230976 bytes (2.0 GB, 1.8 GiB) copied, 2.9054 s, 677 MB/s Ideally, I'm talking about re-formatting a removable storage device, and prepping it to be imaged with an .iso image file (via dd). Re-formatting won't always be required, but it also erases data; and clearing the device of any stored data probably ought to be the default behaviour / standard procedure, for this kind of thing anyway.
If you want to use fdisk, with only one partition, with all blocks used, this will suffice: echo -e "n\np\n1\n\n\nw\n"| fdisk /dev/<target disk> && mkfs.ext4 /dev/<target disk> Change mkfs.ext4 to whatever filesystem type you want it to use. If you just want to delete data, your dd command should be fine.
What's the quickest way to format a disk?
1,543,536,286,000
What's the best way to trim the massive disclaimer from the end of the whois output? It looks something like this: >>> Last update of WHOIS database: 2017-01-30T20:17:39Z <<< For more information on Whois status codes, please visit https://icann.org/epp Access to Public Interest Registry WHOIS information is provided to assist persons in determining the contents of a domain name registration record in the Public Interest Registry registry database. The data in this record is provided by Public Interest Registry for informational purposes only, and Public Interest Registry does not guarantee its accuracy. This service is intended only for query-based access. You agree that you will use this data only for lawful purposes and that, under no circumstances will you use this data to(a) allow, enable, or otherwise support the transmission by e-mail, telephone, or facsimile of mass unsolicited, commercial advertising or solicitations to entities other than the data recipient's own existing customers; or (b) enable high volume, automated, electronic processes that send queries or data to the systems of Registry Operator, a Registrar, or Afilias except as reasonably necessary to register domain names or modify existing registrations. All rights reserved. Public Interest Registry reserves the right to modify these terms at any time. By submitting this query, you agree to abide by this policy.
From the manual page: -H Do not display the legal disclaimers some registries like to show you. So.. whois -H domain.example.com?
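If your whois lacks -H, another approach (a sketch, shown here on a fabricated reply rather than a live query) is to cut the output off at the ">>> Last update" marker that precedes the disclaimer:

```shell
# sed's q command quits after printing the first matching line,
# so everything after the ">>> Last update" marker is dropped.
printf '%s\n' \
  'Domain Name: EXAMPLE.ORG' \
  '>>> Last update of WHOIS database: 2017-01-30T20:17:39Z <<<' \
  'For more information on Whois status codes, please visit ...' \
  'Access to Public Interest Registry WHOIS information ...' \
| sed '/^>>> Last update/q'
```

With real data that would be: whois example.org | sed '/^>>> Last update/q'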
How to trim the WHOIS disclaimer?
1,543,536,286,000
When I run into errors, I sometimes get error messages in the language set by my locale. Is there a way, besides switching locale, to force English error messages for the sake of googling the solution?
Locale settings are how most programs decide what language to use. While a few programs have a different setting, the most common way to select the language of messages is through locales. There's no other way that works across more than one application (or family of related applications). You don't need to set any system settings, however. Just run the program this one time with a different setting. The locale setting for messages is LC_MESSAGES (see What should I set my locale to and what are the implications of doing so?), so you can set it by setting the environment variable LC_MESSAGES. The special value C is supported on all systems and means the default, untranslated messages (normally in English). From a shell, the following command runs myprogram with the environment variable LC_MESSAGES set to C, i.e. runs myprogram with messages in English and other locale settings unchanged (so the program still uses your favorite character set, sort order, date format, etc.). LC_MESSAGES=C myprogram After the program runs, other programs executed from the same shell will use your usual locale settings, the change doesn't stick. If you want the change to stick within a terminal, run export LC_MESSAGES=C This won't affect programs started from other terminals.
Is there a way to force program to output errors in English?
1,543,536,286,000
I knew how to set up a command in $PATH, but I need someone to refresh my mind. In fact, I have the '.sh' script /home/jeremie/Downloads/pycharm-community-2016.3.2/bin/pycharm.sh that I want to put in $PATH. My purpose is to be able to use pycharm as a command. I figured out that the first step is export PATH = $PATH:Downloads/pycharm-community-2016.3.2/bin/pycharm.sh, but it is unclear. How do I convert /home/jeremie/Downloads/pycharm-community-2016.3.2/bin/pycharm.sh to obtain the pycharm command?
The PATH is a list of colon (:) separated directories where the shell will search to find the file you're calling. Therefore you would need to add /home/jeremie/Downloads/pycharm-community-2016.3.2/bin to it and not include the file itself. If you want to change the name from pycharm.sh to pycharm, you would either rename or copy it, or preferably make a symbolic link to it such as: ln -s /home/jeremie/Downloads/pycharm-community-2016.3.2/bin/pycharm.sh /home/jeremie/Downloads/pycharm-community-2016.3.2/bin/pycharm Note that your expression export PATH = $PATH:Downloads/pycharm-community-2016.3.2/bin/pycharm.sh will fail because of extra spaces surrounding the equal sign (=). Properly corrected, it should be: export PATH=$PATH:/home/jeremie/Downloads/pycharm-community-2016.3.2/bin
The PyCharm command
1,543,536,286,000
The following find-exec(mv) command finds a directory named, say, prog-3.6.9-stable-gnu and changes its name successfully. Yet, the command also returns: find: './prog': No such file or directory This is the command: find ./ -type d -name 'prog-*' -exec mv {} prog \; I get a similar result when using find -exec rm on that dir. Given that the find command comes before the working exec mv (or exec rm -rf for that matter), I want to ask why I get this stderr. I mean, if the file was found and was changed, why would the stderr be "No such file or directory"?
The error appears because you are moving a folder matching "prog-*". The actual behaviour of find is: find processes the directory itself first, and then its contents. So, find, in your example: 1. finds the directory prog-3.6.9-stable-gnu 2. renames it to prog (so it now has a new name) 3. tries to descend into prog-3.6.9-stable-gnu 4. gives back an error because it is no longer able to find the folder prog-3.6.9-stable-gnu find's default order of processing the directory first and its contents afterwards is a pre-order traversal. The opposite, contents before the directory, is a post-order traversal, and the -depth option invokes it. It is interesting to read mentions of -depth in the man page for find(1). "-depth: Process each directory's contents before the directory itself." "The -delete action also implies -depth." "Don't forget that the find command line is evaluated as an expression, so putting -delete first will make find try to delete everything below the starting points you specified." "When testing a find command line that you later intend to use with -delete, you should explicitly specify -depth in order to avoid later surprises." "Because -delete implies -depth, you cannot usefully use -prune and -delete together."
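A scratch-directory rehearsal of both behaviours (nothing here touches real data; GNU find assumed):

```shell
cd "$(mktemp -d)"
mkdir prog-3.6.9-stable-gnu
# Default pre-order: the directory is renamed, then find fails to enter it.
find . -type d -name 'prog-*' -exec mv {} prog \; 2>&1
rm -r prog
mkdir prog-3.6.9-stable-gnu
# With -depth (post-order) the contents are handled first, so no error.
find . -depth -type d -name 'prog-*' -exec mv {} prog \;
ls
```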
find exec mv finds an inode (dir), changes dir's name, but returns "No such file or directory"
1,543,536,286,000
I have a directory scalable for a vector icon theme, containing a bunch of symlinks which I want to recreate from a file that contains pairs of names I can use with the ln command. I've created the file; it contains lines that look like this: scalable/actions/messagebox_warning.svg scalable/emblems/emblem-danger.svg scalable/emblems/emblem-nowrite.svg scalable/emblems/emblem-unreadable.svg scalable/distributor-logos/debian.svg scalable/emblems/Debian.svg scalable/mimetypes/gnome-mime-text-plain.svg scalable/emblems/emblem-documents.svg scalable/emblems/emblem-personal.svg scalable/emblems/emblem-readonly.svg scalable/mimetypes/package-x-generic.svg scalable/emblems/emblem-package.svg scalable/apps/download.svg scalable/emblems/emblem-downloads.svg scalable/devices/network-wired.svg scalable/emblems/emblem-shared.svg scalable/distributor-logos/ubuntu.svg scalable/emblems/emblem-ubuntu.svg I've tried to use this command: xargs -L 1 ln -sf < src/symlinks and also this: xargs -L 1 -I{} ln -sf {} < src/symlinks but they created symlinks that point to themselves. How can I create symlinks where the target and destination filenames are taken from the file?
Assuming your file is in the right order (target, then link name), and that no filename contains special characters or spaces: sed 's/^/ln -sf /g' < src/symlinks | sh will transform your list of symlinks into a series of ln -sf commands and run it using sh.
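Under the same assumptions (no whitespace inside the names), you can also skip generating shell code and let xargs hand the pairs straight to ln — sketched here in a scratch directory with one pair from the question:

```shell
cd "$(mktemp -d)"
mkdir -p scalable/actions scalable/emblems
touch scalable/actions/messagebox_warning.svg
# Each line holds "target link-name"; -n 2 gives ln two arguments per call.
printf '%s %s\n' scalable/actions/messagebox_warning.svg \
                 scalable/emblems/emblem-danger.svg > symlinks
xargs -n 2 ln -sf < symlinks
ls -l scalable/emblems/
```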
How to create symlinks from file
1,482,553,843,000
Sometimes when I log on to a system via SSH (for example to a remote server), I have privileges that allow me to install some software, but to do that I need to know which package management software is on the system. Is there a way to quickly find it out? In particular, for me uname -a returns: Linux cloud 2.6.32-279.el6.x86_64 #1 SMP Fri Jun 22 12:19:21 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux Which package management system could this be?
Well, the easiest way (at least to me) would be to simply check which package manager is installed. It is not a wild guess to assume you are using either apt or yum (the Debian-based or Red Hat-based package managers). So, if you try: which apt /usr/bin/apt You see that apt is installed. If you try: which yum <no output> Or: which pacman <no output> Then you do not have yum or pacman, in other words; for a case like this, use apt! If you have none of the above, you will have to find out first of all which distribution you are using. Try this command: lsb_release -a No LSB modules are available. Distributor ID: Debian Description: Debian GNU/Linux 8.6 (jessie) Release: 8.6 Codename: jessie Based on the output above you can do a simple online search for the package manager for said distribution.
How can I find information about the package management software in the linux (unix) systems, in particular in cloud?
1,482,553,843,000
I have a file in which I want to change all of the codes that have the following format: n{,3}L{,2}n{,5} where n = [0-9] is any digit and L = [a-zA-Z] is any letter, capital or not. I want to change A or a into AB and d or D into DK, something like this: Annnnn--> ABnnnnn ; Dnnn-->DKnnn the file looks like: $ cat filename 123a67,64,xx A67990,12,ttt 89d7,34,ggg 234AB445,78,ooo 145aB7699,67,rrr 278Dk89,25,ppp I tried the following sed script sed 's/[aA]/AB/g;s/[dD]/DK/g' filename it works for instances which have only A or D, but for those which are already AB or DK it doubles the letter: AB --> ABB or DK --> DKK. Any help appreciated, with an explanation. Thanks!
As for what's wrong with your script, you are replacing A or a with AB and D or d with DK, so any pre-existing B or K would not be affected; sed is not looking for it. You could put an optional [bB] or [kK] using ? (zero or one of the preceding character) to make it replace that character too if it occurs. To make sure the replacement only happens if [aA] or [aA][bB] etc is followed by a number, you can add the number to the pattern and add it back in the replacement with () and \1 sed -r 's/ab?([0-9])/AB\1/Ig;s/dk?([0-9])/DK\1/Ig' filename I am using -r to use ERE (so no need to escape ?) and I for case-insensitive search instead of using character classes.
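Applied to the sample file from the question (GNU sed assumed, for -r and the I flag):

```shell
cd "$(mktemp -d)"
printf '%s\n' 123a67,64,xx A67990,12,ttt 89d7,34,ggg \
              234AB445,78,ooo 145aB7699,67,rrr 278Dk89,25,ppp > filename
sed -r 's/ab?([0-9])/AB\1/Ig;s/dk?([0-9])/DK\1/Ig' filename
# 123AB67,64,xx
# AB67990,12,ttt
# 89DK7,34,ggg
# 234AB445,78,ooo
# 145AB7699,67,rrr
# 278DK89,25,ppp
```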
How to substitute some letters in a multi-length word consisting of digits and letters in a specific format?
1,482,553,843,000
I got the following command: curl -H 'Content-Type: application/json' -X POST -d '{"host": "'$(hostname)'"}' http://sitename.com/update.php Which works as expected, but if I try to send uptime output instead of hostname I get: curl: (6) Could not resolve host: 19:12; Name or service not known curl: (6) Could not resolve host: up; Name or service not known curl: (7) Failed to connect to 0.0.0.4: Invalid argument curl: (6) Could not resolve host: days,; Name or service not known curl: (6) Could not resolve host: 5:57,; Name or service not known curl: (7) Failed to connect to 0.0.0.3: Invalid argument curl: (6) Could not resolve host: users,; Name or service not known curl: (6) Could not resolve host: load; Name or service not known curl: (6) Could not resolve host: average; Name or service not known curl: (6) Could not resolve host: 0.07,; Name or service not known curl: (6) Could not resolve host: 0.05,; Name or service not known curl: (3) [globbing] unmatched close brace/bracket at pos 6 It's obvious this is being caused by spaces, but how can I escape them? I can remove spaces with awk: curl -H 'Content-Type: application/json' -X POST -d '{"uptime": "'$(uptime | awk '{print $3$4}')'"}' http://sitename.com/update.php It gives me "4days," but there must be a better workaround for it :D
Using one type of quote is simpler and solves the issue: curl -H "Content-Type: application/json" -X POST -d "{\"uptime\": \"$(uptime)\"}" "http://sitename.com/update.php" Or you can use both quote types, though it's less elegant: curl -H 'Content-Type: application/json' -X POST -d '{"uptime": "'"$(uptime)"'"}' 'http://sitename.com/update.php'
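You can verify the quoting without touching any server by printing the payload that curl's -d option would receive (the uptime text here is just echoed for the demo):

```shell
# Double quotes let $(...) expand while \" yields the literal
# quote characters the JSON payload needs.
payload="{\"uptime\": \"$(echo '19:12 up 4 days, 2 users')\"}"
echo "$payload"
# {"uptime": "19:12 up 4 days, 2 users"}
```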
How to POST 'shell output' as JSON data with Curl
1,482,553,843,000
I have a directory with subdirectories and files structured like this: 01/fileA 01/fileB 01/fileC 02/fileD 02/fileE 03/fileF 03/fileG 03/fileH 04/fileI I'd like to get a CSV that looks like this: 01, fileA, fileB, fileC 02, fileD, fileE 03, fileF, fileG, fileH 04, fileI In other words, I want to generate a CSV with one row per subdirectory, with files listed as columns. Is it possible to do this from the Linux command line?
That can be done in a number of ways. One simple method could be this (quoting $d in case of spaces in directory names): for d in * do echo -n "$d, " ls -m "$d" done
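A rehearsal of the loop on a cut-down version of the sample layout from the question:

```shell
cd "$(mktemp -d)"
mkdir -p 01 02
touch 01/fileA 01/fileB 02/fileD
for d in *
do
    echo -n "$d, "
    ls -m "$d"    # ls -m prints the entries comma-separated
done
# 01, fileA, fileB
# 02, fileD
```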
Find all files, create CSV with one row per subdirectory and file names in columns
1,482,553,843,000
I have a binary sequence like 0011000111000111. I put this in a file abc.txt. I want to test its randomness using rngtest. I am getting as follows: /Documents$ rngtest <abc.txt> bash: syntax error near unexpected token `newline' Please help me. I am getting this after using rngtest <abc.txt. Is it random? rngtest <abc.txt rngtest 5 Copyright (c) 2004 by Henrique de Moraes Holschuh This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. rngtest: starting FIPS tests... rngtest: entropy source drained rngtest: bits received from input: 16000 rngtest: FIPS 140-2 successes: 0 rngtest: FIPS 140-2 failures: 0 rngtest: FIPS 140-2(2001-10-10) Monobit: 0 rngtest: FIPS 140-2(2001-10-10) Poker: 0 rngtest: FIPS 140-2(2001-10-10) Runs: 0 rngtest: FIPS 140-2(2001-10-10) Long run: 0 rngtest: FIPS 140-2(2001-10-10) Continuous run: 0 rngtest: input channel speed: (min=0.000; avg=0.000; max=0.000)bits/s rngtest: FIPS tests speed: (min=0.000; avg=0.000; max=0.000)bits/s rngtest: Program run time: 173 microseconds
Remove the ">" after the .txt. rngtest < abc.txt
Randomness test using rngtest
1,482,553,843,000
When I run this find command: find /html/car_images/inventory/ -type f -iname \*.jpg -mtime -4 I get output like this: /html/car_images/inventory/16031/16031_06.jpg /html/car_images/inventory/16117/16117_01.jpg /html/car_images/inventory/16126/16126_01.jpg /html/car_images/inventory/16115/16115_01.jpg /html/car_images/inventory/16128/16128_02.jpg /html/car_images/inventory/16128/16128_03.jpg /html/car_images/inventory/16128/16128_04.jpg My goal is to delete a "thumbnails" folder that exists in each of these directories (i.e. delete this folder: /html/car_images/inventory/16128/thumbnails/ and also delete /html/car_images/inventory/16115/thumbnails/). I'm thinking perhaps of a script that takes each line of output from the above find command, then replaces "*.jpg" with "thumbnails" and adds "rm -fr" as a prefix, such that I end up with this: rm -fr /var/www/html/car_images/inventory/16115/thumbnails/ rm -fr /var/www/html/car_images/inventory/16128/thumbnails/ and so on... Any ideas on how to do this? (maybe using the -exec option of find and sed or cut?) (Another way to phrase my entire goal: if a folder contains a .jpg file that is "younger" than X days, then delete the "thumbnails" folder in that folder.)
Assuming you don't have filenames with newline(s): find /html/car_images/inventory/ -type f -iname \*.jpg -mtime -4 \ -exec sh -c 'echo "${1%/*}"' _ {} \; | sort -u | \ xargs -d $'\n' -I{} rm -r {}/thumbnails The parameter expansion, ${1%/*} extracts the portion without the filename from each found entry sort -u sorts and then make the entries unique so that we don't have any duplicate xargs -I{} rm -r {}/thumbnails adds thumbnails at the end, and then remove the resultant directory
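A scratch-tree rehearsal of the pipeline before pointing it at the real data (GNU find and xargs assumed; the just-created .jpg files match -mtime -4):

```shell
cd "$(mktemp -d)"
mkdir -p inv/16115/thumbnails inv/16128/thumbnails
touch inv/16115/16115_01.jpg inv/16128/16128_02.jpg
find inv -type f -iname \*.jpg -mtime -4 \
  -exec sh -c 'echo "${1%/*}"' _ {} \; | sort -u | \
  xargs -d '\n' -I{} rm -r {}/thumbnails
ls inv/16115 inv/16128    # the thumbnails directories are gone
```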
Use output from find command to then remove a specific directory
1,482,553,843,000
This command on ubuntu is giving no such file or directory error: /# mv mongodb-linux-x86_64-$VERSION mongodb mv: cannot stat 'mongodb-linux-x86_64-2.6.7': No such file or directory even though both file and directory exist. Any idea why? Thanks edit /# ls mongodb-linux-x86_64-* mongodb mongodb: mongodb-linux-x86_64-2.6.2-rc0: GNU-AGPL-3.0 README THIRD-PARTY-NOTICES bin
The file (directory) name you have is mongodb-linux-x86_64-2.6.2-rc0, not mongodb-linux-x86_64-2.6.7. The variable VERSION is being expanded to 2.6.7, but the desired expansion as far as your directory name is concerned would be 2.6.2-rc0. So you need to either define the variable as such and do the mv-ing: VERSION='2.6.2-rc0' mv mongodb-linux-x86_64-"$VERSION" mongodb Or just use the path directly: mv mongodb-linux-x86_64-2.6.2-rc0 mongodb Note that environment variables are usually denoted as all uppercase letters; user-defined (shell) variables should not be all-caps, to avoid ambiguity.
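A scratch reproduction of the mismatch (the directory is created fresh here, so nothing real is moved):

```shell
cd "$(mktemp -d)"
mkdir mongodb-linux-x86_64-2.6.2-rc0
VERSION='2.6.7'
mv mongodb-linux-x86_64-"$VERSION" mongodb 2>&1   # fails: expands to ...-2.6.7
VERSION='2.6.2-rc0'
mv mongodb-linux-x86_64-"$VERSION" mongodb        # succeeds
ls
```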
No such file or directory when moving a file
1,482,553,843,000
I am trying to follow this answer on OS X 11.x: block return from any to 192.0.2.2 The console displays: -bash: block: command not found So, I tried to install it using brew: brew install block However, I got another error. How do I install this firewall utility?
On recent versions of OS X, pf is installed and running by default. The linked question is referring to changing the pf configuration, not installing a new utility. Modifying a firewall on a production system is not something which should be done without reading the documentation (man pf.conf, man pfctl). To add that block line (or experiment with other configuration changes), you would add it to the configuration file /etc/pf.conf with your preferred editor, and then reload the firewall configuration with $ sudo pfctl -f /etc/pf.conf
block command line not found
1,482,553,843,000
According to this question, which asked "How to run a command multiple times?", the correct answer was for i in `seq 10`; do command; done Now, if the command has an argument, and at every iteration we should pass this argument to the command automatically, how can we perform this in the Linux terminal? Thanks.
With the loop you reference in your command, you are storing the next "word" from the seq command in the variable i. You can use that value anywhere you like, so to pass it to the command you can invoke it as command "$i" You can avoid the need for the extra seq process; with bash, at least, you can do it like for ((i=1; i<=10; i++)); do command "$i" done or with brace expansion like for i in {1..10}; do or if you want to do it with POSIX compliance you could do something like i=1 while [ "$i" -lt 11 ]; do command "$i" i=$((i+1)) done
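With printf standing in for command, the pattern looks like this:

```shell
# Each iteration passes the counter as the command's argument.
for i in $(seq 3); do
    printf 'run %s\n' "$i"
done
# run 1
# run 2
# run 3
```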
How can we change a multiple running command line arguments in Linux terminal?
1,482,553,843,000
How can I modify the pattern in the second instruction, so as to exclude nested directories? (such that ls returns only foo.mp4, not the content of bar:). $ ls * foo.mp4 foo.ogg bar: bar.ogg $ shopt -s extglob $ ls !(*.ogg) foo.mp4 bar: bar.ogg PS: I use OS X.
What was I thinking?! $ ls *.!(ogg) foo.mp4
Look for files in the current directory that don't match a pattern
1,482,553,843,000
Can anyone explain what the significant of the '@' symbol in the following command date -d @1472067906.1413 +%Y.%m.%d 2016.08.25 How does the date command handle this; I can't seem to find any information on man page.
Your best hint in the man page is indeed in one of the examples – @x means x seconds past the epoch: EXAMPLES Convert seconds since the epoch (1970-01-01 UTC) to a date $ date --date='@2147483647' (I assume there could otherwise be parsing ambiguities if you wanted something like 7 seconds past the epoch: date --date=7 thinks you want 7AM of the current day.)
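The conversion also works in the other direction with the %s format; a quick round-trip sketch (GNU date; -u pins everything to UTC, which is why your local output can read 2016.08.25 while UTC says 2016.08.24):

```shell
# Date -> epoch seconds -> date again, all in UTC
date -u -d '2016-08-24 19:45:06' +%s      # prints 1472067906
date -u -d @1472067906 +%Y.%m.%d          # prints 2016.08.24
```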
Purpose of '@' in Unix Date command (for epoch)
1,482,553,843,000
How can I find out the number of visitors in real time in my website? I'd like to access to it via SSH, so it should be some CLI programs. In the worst case scenario I was thinking to analyse the number of IPs in the Apache/Nginx access file for a range of the last 5 min or so.
Most web statistics tools summarise the log over a period of 24 hours or a month. The simplest cli ncurses-based one is goaccess. For an instant view of your apache server current cpu usage and threads there is server-status which you could retrieve via curl, in html. See a live demo (beware large file). Nginx has a similar feature. You might also look at answers on our sister web site webmasters.stackexchange.com such as this and this.
How can I see the number of visitors in my website via CLI
1,482,553,843,000
I am new to bash. When I run the following command: grep "type" /root/myFile | cut -d'=' -f2 I get the following: 102 304 503 442 I want to store the contents of the first command into a variable such that the content of the variable would be the equivalent of declaring it like this: myVariable="102 304 503 442" I don't know how to go about this. Do I initialize myVariable with an empty string and traverse the content of the command line-by-line and append it into myVariable with white spaces in between, or is there an easier way for doing this with bash?
myVariable=`grep "type" /root/myFile | cut -d'=' -f2` What is between back-ticks (`) is run and the output is assigned to myVariable. If your current output is separated by line feeds (\n), then you may want to replace them with spaces using tr, such as: myVariable=`grep "type" /root/myFile | cut -d'=' -f2 | tr '\n' ' '` Note: Some people prefer using the $() syntax instead of back-ticks: myVariable=$(grep "type" /root/myFile | cut -d'=' -f2 | tr '\n' ' ') Both forms work; the real advantage of $() is that it nests cleanly, whereas nested back-ticks have to be escaped with backslashes.
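A quick sketch of why $() is often preferred: it nests without any escaping, while nested back-ticks need backslashes:

```shell
# The inner $(...) runs first; no backslash-escaping is required
outer=$(echo "prefix $(echo inner)")
echo "$outer"
```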
Generating strings of words with space delimiters from cut statement
1,482,553,843,000
I need to list all mount points associates to external storage devices such as USB keyfobs and SATA external drives. The only way I found under Ubuntu, is to call 'mount' and grep for '/media'. But I wonder if there is a better, more universal way. All this from the command line interface (terminal/bash).
Looking in /media is a reasonable way to find hotplug block devices. You can also use lsblk to list the block devices and whether they are hotpluggable: $ lsblk -l -p -o name,rm,hotplug,mountpoint NAME RM HOTPLUG MOUNTPOINT /dev/sda 0 0 /dev/sda1 0 0 / /dev/sda2 0 0 [SWAP] /dev/sda3 0 0 /home /dev/sdc 0 1 /dev/sdc1 0 1 /dev/sdc2 0 1 /dev/sdc3 0 1 /media/wd3 /dev/sdc4 0 1 /dev/sdd 1 1 /dev/sdd1 1 1 /media/clip This shows that /dev/sdc is probably an external device (HOTPLUG=1), and that a partition is mounted on /media/wd3. Also there's another device on /media/clip. The RM column means removable, which sometimes applies to sd card readers, though in this case it is actually just a usb flash key. You can also use findmnt to get from a directory name to the name of the device it is on: $ findmnt -n -o source -T /media/wd3/my/sub/dir /dev/sdc3
List of mount points of external storage devices such as USB keyfobs and SATA external drives, from the cli
1,482,553,843,000
How does the shell process the content of the command line in order to execute it? Command first and then options and arguments? Does it divide the command line into segments and process it from beginning to end?
"shell" is a generic word for bash, ksh, zsh and the like. For all those shells, there is a man page (e.g. man bash) which details how a command is expanded before execution (variables like $foo are replaced by their content, fu* is replaced by fun funny provided those files exist, and so on). You can debug a simple command using echo my-command ${foo} fu* More complex commands (having a pipe (|), for instance) can be debugged by setting set -x before the command. set -x my-command ${foo} fu* | while read x do done set +x However, this looks like an XY problem.
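A short sketch of both debugging techniques (the trace lines from set -x go to standard error, prefixed with +):

```shell
# echo previews the expanded command without running it;
# set -x prints each command after expansion, just before it runs
foo=world
echo my-command "$foo"
set -x
echo "hello $foo"
set +x
```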
How shell processes the content of command line in order to execute?
1,482,553,843,000
I have list in the file called name.txt with these strings: Los Angeles, CA us1.vpn.goldenfrog.com Washington, DC us2.vpn.goldenfrog.com Austin, TX us3.vpn.goldenfrog.com Miami, FL us4.vpn.goldenfrog.com New York City, NY us5.vpn.goldenfrog.com Chicago, IL us6.vpn.goldenfrog.com San Francisco, CA us7.vpn.goldenfrog.com Amsterdam eu1.vpn.goldenfrog.com Copenhagen dk1.vpn.goldenfrog.com Stockholm se1.vpn.goldenfrog.com Hong Kong hk1.vpn.goldenfrog.com London uk1.vpn.goldenfrog.com Now I want with sed delete everything before *.vpn.goldenfrog.com (where * is three characters). The output I want: hk1.vpn.goldenfrog.com dk1.vpn.goldenfrog.com etc ...
If you want a sed solution: sed 's/.*[[:blank:]]\([^[:blank:]]*\)$/\1/' file.txt The captured group (\1) will contain the portion of the line after last space, we are using that in the replacement. Example: % sed 's/.*[[:blank:]]\([^[:blank:]]*\)$/\1/' file.txt us1.vpn.goldenfrog.com us2.vpn.goldenfrog.com us3.vpn.goldenfrog.com us4.vpn.goldenfrog.com us5.vpn.goldenfrog.com us6.vpn.goldenfrog.com us7.vpn.goldenfrog.com eu1.vpn.goldenfrog.com dk1.vpn.goldenfrog.com se1.vpn.goldenfrog.com hk1.vpn.goldenfrog.com uk1.vpn.goldenfrog.com grep can easily do this too: % grep -o '[^[:blank:]]*$' file.txt us1.vpn.goldenfrog.com us2.vpn.goldenfrog.com us3.vpn.goldenfrog.com us4.vpn.goldenfrog.com us5.vpn.goldenfrog.com us6.vpn.goldenfrog.com us7.vpn.goldenfrog.com eu1.vpn.goldenfrog.com dk1.vpn.goldenfrog.com se1.vpn.goldenfrog.com hk1.vpn.goldenfrog.com uk1.vpn.goldenfrog.com
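If you don't mind another tool, awk gets the same result with even less pattern-writing; a sketch on two of the sample lines:

```shell
# NF is the number of fields, so $NF is the last whitespace-separated
# field on each line
printf '%s\n' 'Los Angeles, CA us1.vpn.goldenfrog.com' \
              'London uk1.vpn.goldenfrog.com' |
  awk '{print $NF}'
```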
Remove start of lines with sed
1,482,553,843,000
I have the following log: 2016/01/20 00:00:16.035 [T114BaseServlet] ... Blah Blah Blah 2016/01/20 00:00:16.036 [ApplicationState] ... Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah 2016/01/20 00:00:29.531 [T114BaseRequestPayloadParser] ... Blah Blah Blah 2016/01/20 00:00:36.036 [ApplicationState] ... Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah I want to remove those ApplicationState lines, but there's no pattern at the end of those Blah Blah Blah. This is the block that I want to remove: 2016/01/20 00:00:16.036 [ApplicationState] ... Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah before the next request begins.
To remove the whole block of lines beginning with one including your match, up to the line occurring immediately before the next occurrence of [T114Base, you can do the following: sed -e'$!N;/ApplicationState.*\n/,/\n.*\[T114Base/!P;D' <in >out It is fairly simple to understand how this works. By default sed eats input a line at a time, but if you want a wider outlook you need only script it. So for every input line, if the current is ! not the $ last, sed appends the Next line to pattern space, delimited by an intervening \newline character. In the range expression, I first look for any match for ApplicationState followed by any * number of any . characters, followed by at least one \newline. To end the range expression I need to peek at input - which is the purpose of the $!N in the first place. sed scans for the next occurrence of the line which would occur after the last you want to remove. It looks for a \newline followed by a pattern which should match the beginning of the next input block. If that range expression is ! not matched, sed will Print up to the first occurring \newline in pattern space, and regardless of a match sed will afterward Delete up to the first occurring \newline in pattern space and recycle to the top of the script with what remains. Basically, sed slides over input 2 lines at a time, possibly Printing the oldest one if it does not occur within your delete block, and always Deleting only the oldest one before appending the Next.
How do I remove text blocks within a log file? [closed]
1,482,553,843,000
let's say the command is "ls word1 word2 word3" I want to move the cursor from end of line to "word2" but do not want to clear "word3" in process. Just jump back by two words and reach beginning of "word2" is there any keyboard shortcut to jump word backward and forward without deleting the word on command line?
When tcsh's line editor is configured in emacs mode (which is the default, use bindkey -e to go back to it if you had enabled vi mode earlier): % bindkey | grep word | egrep 'for|back' "^[^H" -> backward-delete-word "^[B" -> backward-word "^[F" -> forward-word "^[b" -> backward-word "^[f" -> forward-word "^[^?" -> backward-delete-word So ESC and then b / f (or B / F), which are the sequences that terminals usually send upon pressing Alt + b/f (Alt + Shift + b/f for the uppercase ones). That's the same as you would do in the emacs editor. In vi mode (after bindkey -v), you'd use the usual vi motion commands in vi command mode (back, word, end, and their uppercase equivalents for WORD motions (whitespace separated words)).
how to move by a word in command line in tcsh?
1,482,553,843,000
I have multiple files ( about 20 files with 30000 lines and 32 columns) and I need to keep only the lines that start with the same string. I found these cases that are quite similar to what I need but I don't know how to adapt them.. compare multiple files(more than two) with two different columns how to compare values in two columns in two different files, echoing full lines where absolute value of difference is < a small maximum value? In my case each file has a first column made of strings of 12 characters, I need to keep only the lines starting with strings that are present in ALL the files. (one file for every input file or also a single output file like the one in the above mentioned cases is fine). My files are like this: file1: -13 -5 0 19.3769 46.9197 1 -13 -4 -2 347.911 57.7232 1 -13 -4 -1 38.5696 39.0027 1 -13 -4 0 2227.39 124.894 1 -13 -3 -3 113.001 40.2117 1 -13 -3 -2 850.847 78.2881 1 file2: -13 -5 0 2.19085 50.4632 1 -13 -4 -2 283.628 56.7731 1 -13 -4 -1 41.179 48.6423 1 -13 -4 0 1753.54 125.88 1 -13 -3 -3 28.2363 40.6518 1 -13 -3 -2 562.736 66.0301 1 -13 -3 -1 750.747 77.2795 1 Output file1: -13 -5 0 19.3769 46.9197 1 -13 -4 -2 347.911 57.7232 1 -13 -4 -1 38.5696 39.0027 1 -13 -3 -3 113.001 40.2117 1 -13 -3 -2 850.847 78.2881 1 Output file2 -13 -5 0 2.19085 50.4632 1 -13 -4 -2 283.628 56.7731 1 -13 -4 -1 41.179 48.6423 1 -13 -3 -3 28.2363 40.6518 1 -13 -3 -2 562.736 66.0301 1
One approach would be to first find all sets of 12 initial characters that are present in more than one file: cut -c-12 file* | sort | uniq -c The cut command above prints the 1st 12 characters from every file whose name starts with file, these are then sorted and the number of times each line is found is prepended by uniq -c. Running this on your example files returns: $ cut -c-12 file* | sort | uniq -c 1 -13 -3 -1 2 -13 -3 -2 2 -13 -3 -3 2 -13 -4 0 2 -13 -4 -1 2 -13 -4 -2 2 -13 -5 0 So, all lines but the 1st appear in both files. Now, keep only those lines that appear the desired number of times (20 in your case): cut -c-12 file* | sort | uniq -c | rev | sed -n 's/ 02 *$//p' | rev rev simply prints its input reversed. I am using it here to make the number of times each line was seen the last field. Note that rev reverses the count's digits as well, so a count of 20 appears as 02 at the end of the line; that is why sed is told to only print lines which end with a space, 02 and 0 or more spaces. This keeps only lines that appeared 20 times and the final rev brings us back to the original format. You can now pass the whole thing to grep as a list of strings to search for: $ grep -f <(cut -c-12 file* | sort | uniq -c | rev | sed -n 's/ 02 *$//p' | rev) file* -13 -5 0 19.3769 46.9197 1 -13 -4 -2 347.911 57.7232 1 -13 -4 -1 38.5696 39.0027 1 -13 -4 0 2227.39 124.894 1 -13 -3 -3 113.001 40.2117 1 -13 -3 -2 850.847 78.2881 1 If your shell doesn't support the <() format, you could save the results of the pipeline in a separate file and use that, or just run it in a loop (note that uniq -d keeps lines occurring at least twice, so for 20 files you would still want the count filter above): cut -c-12 file* | sort | uniq -d | while IFS= read -r l; do grep -- "^$l" file1; done To have each file's output in a separate file, use: cut -c-12 file* | sort | uniq -c | rev | sed -n 's/ 02 *$//p' | rev > list for f in file*; do grep -f list "$f" > "$f.new"; done
Compare columns between different files
1,482,553,843,000
I'm trying to use a custom PS1 line, including colors and git repo information, on a Red Hat Enterprise Linux 6 machine. I have a predefined version I'm successfully using on other systems running Ubuntu or Mint. In my .bashrc, I added the following part at the bottom: # Colors Black='\e[0;30m' # Black Red='\e[0;31m' # Red ... NC="\e[m" # Color Reset # show git branch parse_git_branch() { # git branch 2> /dev/null | sed -e '/^\[^*\]/d' -e 's/* \(.*\)/|\1/' git rev-parse --abbrev-ref HEAD 2> /dev/null | sed 's/^/|/g' } PS1="\[$Green\]\u@\h \[$BBlack\]\w\[$Yellow\]\$(parse_git_branch)\[$NC\] $ " However, when opening a terminal, I still see the default PS1 line. A echo $PS1 prints \[\033]0;\u@\h: \w\007\]\u@\h:\w>. Apparently, this variable gets overridden somewhere. But where, or how can I find this out? By the way, .bashrc definitely gets executed. I verified this by adding a line like echo "hello" and seeing the result when opening a terminal. Update Running bash -x prints a lot of output, ending with + On_White='\e[47m' + NC='\e[m' + PS1='\[\e[0;32m\]\u@\h \[\e[1;30m\]\w\[\e[0;33m\]$(parse_git_branch)\[\e[m\] $ ' ++ PS1='\[\033]0;\u@\h: \w\007\]\u@\h:\w> ' Update II Output of grep -H PS1 ~/.bashrc ~/.profile ~/.bash_profile ~/bash.login ~/.bash_aliases /etc/bash.bashrc /etc/profile /etc/profile.d/* /etc/environment 2> /dev/null: /home/myself/.bashrc:# this does not apply, but PS1 env var is empty. 
/home/myself/.bashrc:[ -n "$PS1" ] || INTERACTIVE=0 /home/myself/.bashrc:PS1="\[$Green\]\u@\h \[$BBlack\]\w\[$Yellow\]\$(parse_git_branch)\[$NC\] $ " /etc/profile.d/company.sh: linux:root) PS1="\u@\h:\w# "; TMOUT=3600 ;; /etc/profile.d/company.sh: linux:*) PS1="\u@\h:\w> " ;; /etc/profile.d/company.sh: *:root) PS1="\[\033]0;\u@\h: \w\007\]\u@\h:\w# "; TMOUT=3600 ;; /etc/profile.d/company.sh: *) PS1="\[\033]0;\u@\h: \w\007\]\u@\h:\w> " ;; /etc/profile.d/company.sh:export PS1 /etc/profile.d/colorls.sh: [ -z "$PS1" ] && return Update III My full .bashrc: PKG_ROOT=/opt/companyhome/ NFS_ROOT=/share/install/companyhome/current/ LINKS_VERSION=3.0.0.0 # write to stdout (disabled for non interactive (e.g. scp) logins) print_msg() { if [ "$INTERACTIVE" = "1" ]; then echo "$1" fi } print_msg_debug(){ if [[ ! -z "$COMPANYHOME_INIT_DEBUG" ]]; then print_msg "$@" fi; } # check if this is an interactive session. # tty results with 1 if not terminal. But with ansible remote execution, # this does not apply, but PS1 env var is empty. INTERACTIVE=1 tty -s || INTERACTIVE=0 [ -n "$PS1" ] || INTERACTIVE=0 print_msg_debug "loading companyhome" # define_companyhome_root # check if we run against a packaged version or a nfs (legacy) version of companyhome CURRENT_ROOT="" if [ -d "$PKG_ROOT" ]; then CURRENT_ROOT=$PKG_ROOT elif [ -d "$NFS_ROOT" ]; then CURRENT_ROOT=$NFS_ROOT else print_msg "Error no companyhome installation found." print_msg "Companyhome could not be loaded." return 1 fi export "COMPANYHOME_ROOT=$CURRENT_ROOT" print_msg_debug "companyhome is installed in \"$CURRENT_ROOT\"" # include companyhome . "${COMPANYHOME_ROOT}/update/check_linksversion" . "${COMPANYHOME_ROOT}/bashrc_company" if [ -f /etc/bash_completion ]; then . 
/etc/bash_completion fi # Normal Colors Black='\e[0;30m' # Black Red='\e[0;31m' # Red Green='\e[0;32m' # Green Yellow='\e[0;33m' # Yellow Blue='\e[0;34m' # Blue Purple='\e[0;35m' # Purple Cyan='\e[0;36m' # Cyan LightGray='\e[0;37m' # Light Gray # Bold BBlack='\e[1;30m' # Black BRed='\e[1;31m' # Red BGreen='\e[1;32m' # Green BYellow='\e[1;33m' # Yellow BBlue='\e[1;34m' # Blue BPurple='\e[1;35m' # Purple BCyan='\e[1;36m' # Cyan BWhite='\e[1;37m' # White # Background On_Black='\e[40m' # Black On_Red='\e[41m' # Red On_Green='\e[42m' # Green On_Yellow='\e[43m' # Yellow On_Blue='\e[44m' # Blue On_Purple='\e[45m' # Purple On_Cyan='\e[46m' # Cyan On_White='\e[47m' # White NC="\e[m" # Color Reset # show git branch parse_git_branch() { # git branch 2> /dev/null | sed -e '/^\[^*\]/d' -e 's/* \(.*\)/|\1/' git rev-parse --abbrev-ref HEAD 2> /dev/null | sed 's/^/|/g' } PS1="\[$Green\]\u@\h \[$BBlack\]\w\[$Yellow\]\$(parse_git_branch)\[$NC\] $ " ${COMPANYHOME_ROOT}/bashrc_company : INTERACTIVE=1 tty -s || INTERACTIVE=0 [ -n "$PS1" ] || INTERACTIVE=0 # is_nfs_home returns 0 (success) if /home is nfs/network based, else 1 (local home) # the function does not guarantee the accessibility is_nfs_home(){ # if $HOME is an explicit mount -> nfs else its local -- export PS2='> ' fi } sp () { setps2 export PROMPT_COMMAND='PS1=`echo "\u@\h$PS2"`' export PS1 } dp () { setps2 if [ "$TERM" = "dtterm" ] || [ "$TERM" = "xterm" ] || [ "$TERM" = "xterm-color" ] || [ "$TERM" = "linux" ]; then export PROMPT_COMMAND='PS1="\[\033]0;\u@\h: \w\007\]\u@\h:\w$PS2"' elif [ "$TERM" = "sun-cmd" ] || [ ! -z $EMACS ] ; then export PROMPT_COMMAND='PS1=`echo "\u@\h:\w$PS2"`' else sp fi export PS1 } dp export ignoreeof=0
The problem is this line in your ${COMPANYHOME_ROOT}/bashrc_company file: export PROMPT_COMMAND='PS1=`echo "\u@\h$PS2"`' The PROMPT_COMMAND variable defines a command that should be run before a prompt is shown. In your case, this has been set to a command that sets PS1. So, each time a prompt is shown, your PS1 is being reset to the default value. I have no idea why anyone would want to do this, but it's simple enough to fix. Either delete that line from ${COMPANYHOME_ROOT}/bashrc_company or set PROMPT_COMMAND to something else in your ~/.bashrc: PROMPT_COMMAND=""
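You can reproduce the effect in miniature; this sketch performs by hand what bash does automatically before each prompt (the eval stands in for bash's execution of PROMPT_COMMAND):

```shell
# PS1 set in .bashrc...
PS1='custom> '
# ...then PROMPT_COMMAND clobbers it right before every prompt is drawn
PROMPT_COMMAND='PS1="company> "'
eval "$PROMPT_COMMAND"
echo "$PS1"      # prints: company>
```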
PS1 from .bashrc overridden somewhere else
1,482,553,843,000
I have a tab delimited file: qrs John tuv abcd Sam efgh ijk Sam lmnp abcd Sam efgh ijk Sam lmnp qrs John tuv I am trying to print the line in which the second field does not match the previous line's value in the second field and to print the line in which the second field does not match the next line. I've been playing with variations of the following but nothing is quite working like I expect: awk -F"\t" '{ name=$3; line=$0; getline; newname=$3; newline=$0; getline; nextname=$3; nextline=$0; if (newname != name || name != nextname)print line"\n"nextline }' input.txt
From your comment, I assume it's a log file with login and logout dates, for example: date1 John logout date2 Sam login date3 Sam work1 date4 Sam work2 date5 Sam logout date6 John login Use awk: awk 'NR!=1&&$2!=f{print p"\n"$0} {f=$2; p=$0}' file Where: NR!=1 is true when awk processes every line but the first (NR contains the number of the line in the current file) $2!=f compares the second field $2 with the value of the variable f (f will be set later) If both conditions apply, awk prints the value of p (the previous line, will be set later too), a newline \n and the current line $0. This happens for every line that is processed: the variable f is set to the second field $2 and the variable p to the current line $0. Both will be used in the next iteration (when the next line is processed). This prints the first and last occurrence of each run of the second field, so the logout and login dates and names. The output would then be: date1 John logout date2 Sam login date5 Sam logout date6 John login
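For reference, here is the command run against a small sample (a sketch; the data lines are invented just to show the field-change boundaries):

```shell
# Field 2 changes between lines 1-2 and 4-5, so those boundary
# pairs are printed
printf '%s\n' 'date1 John logout' 'date2 Sam login' 'date3 Sam work' \
              'date4 Sam logout' 'date5 John login' |
  awk 'NR!=1&&$2!=f{print p"\n"$0} {f=$2; p=$0}'
```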
Print the first and last match of a field with awk
1,482,553,843,000
in one of my shell program, I want start an xterm window from terminal and then the control of next execution should go to the newly opened window (by default the control will be in the terminal). How to do this using command line arguments (not by moving mouse pointer to the new window :) ) ?
If you want to open a new xterm and run a sequence of commands in that window, you can use the -e option. If you want the xterm to remain open after the command is executed, you can include the -hold option. For example: xterm -hold -e 'pwd; ls'
linux terminal transfer control to new terminal [closed]
1,482,553,843,000
I had a problem on my DNS and after using this magical command, solved the problem. So I got curious: how this command, dns-fix, worked? I am using the Mint distribution. The command is standard.
The Mint dns-fix command (from searching) appears to be a simple shell script that changes /etc/resolv.conf to use a few pre-defined nameservers. You can confirm by using a file viewer to examine the script; e.g., less "$(command -v dns-fix)".
How the command "dns-fix" works?
1,482,553,843,000
I have two tmp files named tmp1 and tmp2 which contains some lines. tmp1 file, 1c\ datafile no. 23 2c\ datafile is ok tmp2 file, 3c\ datafile no. 24 4c\ datafile is ok I have a file (named wrong_file) which entries I want to correct from tmp files datafile no. 32 datafile is ok datafile no. 42 datafile is ok My output file (modified_file) will be like, datafile no. 23 datafile is ok datafile no. 24 datafile is ok I want use for loop which will run until the last tmp file and write data from tmp files to final file (Output) instead doing it manually. I have tried, sed -f tmp1 wrong_file > file1 sed -f tmp2 file1 > modified_file
I'm pretty sure you can just do: cat ./tmp[12] | sed -f - ./wrong_file >outfile At least, that will not cause any issues if all of sed's script instructions are specific to line numbers. There's no need to apply the scripts separately - you can chain them all together and run the script at once. That you have to do this at all, though, is indicative of duplicated work. Here is a copy of a sed script which would avoid writing any of those tempfiles in the first place and simply scan input for the lines which need changing before passing all of the script in a stream to the final sed in one go: { sed '/^#\.tid\.setnr/!d;=;g;G' | paste -d'c\\\n' - - - ./addfile } <./mainfile | sed -f - ./mainfile Its output is not identical to your sample data here because it is tailored to the samples you provided in the other question. But it avoids writing out the modification scripts at all and sends all editing commands to a sed process which can take action immediately. In general you can consider that a sed process is just as ready to handle its script input in all of the ways it might also be ready to handle its edit content input.
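The chaining can be demonstrated with disposable data; a sketch assuming GNU sed, which accepts -f - to read the script from standard input (the sample uses s/// edits rather than your c\ change commands, but both concatenate the same way):

```shell
# Both edit scripts are concatenated and applied in a single sed pass
data=$(mktemp)
printf '%s\n' 'datafile no. 32' 'datafile is ok' \
              'datafile no. 42' 'datafile is ok' > "$data"
printf '%s\n' '1s/32/23/' '3s/42/24/' | sed -f - "$data"
rm -f "$data"
```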
How to write data one by one from tmp files to the final output file using for loop?
1,482,553,843,000
When I am running in the GUI, if I run the pass command to read a password such as pass -c Email/FooBar, a password prompt will appear for my passphrase. If I type my password, my password will be copied into the clipboard. If I subsequently run the pass command to read a different password without logging out and logging back in again, I do not need to type my password again. However, if I try the same thing in the virtual terminal Control-Alt-F1, I need to type my password each time. How can I make it so that I can type the password exactly once per login session in the virtual terminal?
The difference in experience between using pass in a console (what you call a virtual terminal) and within a (GUI) terminal has nothing to do with pass, but with the secret key management done for gpg (as used in the pass scripts) by the gpg-agent. This gpg-agent is, in modern distributions, automatically started with X. You can see this by doing env | fgrep GPG_AGENT from a terminal and from one of the consoles. On my Linux Mint 17, this is done by /etc/X11/Xsession.d/90gpg-agent. As the gpg-agent's man page tells you: If you don't use an X server, you can also put this into your regular startup file ~/.profile or .bash_profile. It is best not to run multiple instances of the gpg-agent, so you should make sure that only one is running: gpg-agent uses an environment variable to inform clients about the communication parameters. You can write the content of this environment variable to a file so that you can test for a running agent. Here is an example using Bourne shell syntax: gpg-agent --daemon --enable-ssh-support \ --write-env-file "${HOME}/.gpg-agent-info" The 90gpg-agent mentioned above is actually smart and tests whether the gpg-agent is already running, but it defaults to using ~/.gnupg/gpg-agent-info-$(hostname). If your distribution has a similar setup, then you should be able to add the above lines to your ~/.profile (but be sure to use the PIDFILE matching the one used by your X-started gpg-agent). You should then also be able to use the same gpg-agent from multiple consoles by re-evaluating the .gpg-agent-info file. While trying to set this up, make sure you run pstree | grep -F gpg-agent to check that there are not more agents running than you need; otherwise it depends on the environment whether pass asks for passwords again in and/or between different consoles.
How can I cache the PGP unlock for unix pass when I am in the virtual terminal?
1,482,553,843,000
I would like to have an alias for the following code:- g++ *.cc -o * `pkg-config gtkmm-3.0 --cflags --libs`; but I want that when I enter the alias it should be followed by the file name *.cc and then the name of the compiled program *. for example: gtkmm simple.cc simple should run g++ simple.cc -o simple `pkg-config gtkmm-3.0 --cflags --libs`
What you need isn't an alias, but a function. Aliases do not support parameters in the way you want: they only get their arguments appended at the end, so gtkmm simple.cc simple would end up like: g++ *.cc -o * `pkg-config gtkmm-3.0 --cflags --libs` simple.cc simple and that's not what you are trying to achieve. Instead a function allows you to: function gtkmm () { g++ "$1" -o "$2" `pkg-config gtkmm-3.0 --cflags --libs` } Here, $1 and $2 are the first and second arguments. ($0, by contrast, is the name of the shell or script itself, not the function.) You can test the function using echo. You can find more functionality about functions in the Bash online manual.
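A slightly more defensive variant of the function (hypothetical, not required) refuses to run unless both arguments are given, rather than handing g++ an empty output name:

```shell
# Same function, plus an argument-count check
gtkmm() {
    if [ "$#" -ne 2 ]; then
        echo "usage: gtkmm <source.cc> <output>" >&2
        return 1
    fi
    g++ "$1" -o "$2" $(pkg-config gtkmm-3.0 --cflags --libs)
}
```

Calling gtkmm with the wrong number of arguments then prints a usage message and returns 1 instead of producing a confusing compiler error.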
Aliasing a command with special parameters [duplicate]
1,482,553,843,000
I want to change the elementary tweak themes and hot-corner settings on my Elementary Luna (built on Ubuntu 12.04 "Precise") from the commandline. With what command can I do this? I know how to make the changes from the GUI. Is there a way to capture settings made that way to feed into the commandline command?
There is a partial answer here: The command to use is gsettings, and the actual settings to use you can find by using: dconf watch / in the terminal, while you adjust the settings. You get a bunch of statements like this: /org/gnome/desktop/wm/preferences/theme 'elementary' /org/pantheon/desktop/gala/behavior/hotcorner-bottomleft 'custom-command' /org/pantheon/desktop/gala/behavior/hotcorner-bottomright 'none' /org/pantheon/desktop/gala/behavior/hotcorner-custom-command 'xset dpms force off' You want to reformat these lines into commands like this: gsettings set org.gnome.desktop.wm.preferences theme 'elementary' gsettings set org.pantheon.desktop.gala.behavior hotcorner-custom-command 'xset dpms force off' gsettings set org.pantheon.desktop.gala.behavior hotcorner-bottomleft 'custom-command'
Changing themes and hot-corner settings from command-line
1,482,553,843,000
I am trying to write up a command line interfaces that will removes a particular section / lines of codes within a list of json files. By the way, the json file are located within the sub-folders of the main directory I am pretty new to this but this is the code that I can come up with so far - find -name "*.json" | xargs sed -i "map" but some of the json files I had, its format is slightly different So far I am seeing the following 2 formats within my list: { "tags": {}, "map": { "KPA": { "State": True, "namespace": "KPA01" } } } or { "tags": { "type": [ "char" ], "dynamic": true }, "map": { "KPA01": { "State": True, "namespace": "KPA01" } } } and basically, I am trying to omit out the map section that it has, so that it will only display the tags section but the presence of commas and [] / {} are making it hard for me. So my output results should be like this: { "tags": {} } or { "tags": { "type": [ "char" ], "dynamic": true } } Will this be possible to do so in a command line interface? I heard that jq may be able to do it, however, as I tried executing jq '.map' test.json I am getting parse error: ':' not as part of an object at line 2, column 11 in my terminal. Likewise it also seems to be giving off error if I am using the jq play online.. Any ideas?
First of all, change True to true (True is not valid JSON, which is why jq refuses to parse the files). As a whole, this works very well: #!/usr/bin/python import sys import json inputfile = sys.argv[1] with open(inputfile,'r') as myfile: obj = json.loads(myfile.read().replace('True','true')) if "map" in obj: del obj["map"] json.dump(obj,sys.stdout,indent=4,separators=(',',': ')) This writes to standard output. EDIT: the previous in-place version seemed to be somewhat dangerous. Better do it this way: #!/usr/bin/python import sys import json inputfile = sys.argv[1] with open(inputfile,'r') as myfile: obj = json.loads(myfile.read().replace('True','true')) if "map" in obj: del obj["map"] with open(inputfile,'w') as myfile: json.dump(obj,myfile,indent=4,separators=(',',': ')) Because the script is actually aware of what valid JSON is, it will throw an exception if it encounters invalid code, instead of producing unpredictable output. This works on both Python 2 and 3, just so you know. EDIT2: You can modify the objects in any way you like; the purpose of JSON is exactly serialization of objects. Treat them as associative arrays and give them any value you want. For instance, you can do this: #add a new string at the top level obj["new_key"]="lol" #add a new subobject, with properties of different types obj["this_is_array"]={"a": 3, "b": 16, "c": "string", "d": False } #modify the value of an existing field obj["new_key"]="new value" #insert into the subobject (test if it exists first) if "this_is_array" in obj: obj["this_is_array"]["e"]=42
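Since the question mentions a list of files spread over sub-folders, the same logic can be wrapped in a helper that edits one file in place; a sketch (fix_file is a hypothetical name, and the blanket True/False replacement would also rewrite those words inside string values, so use with care):

```python
import json

def fix_file(path):
    """Drop the top-level "map" section from one JSON file, in place."""
    with open(path) as f:
        # the question's files contain Python-style True/False;
        # make them valid JSON before parsing
        text = f.read().replace('True', 'true').replace('False', 'false')
    obj = json.loads(text)
    obj.pop('map', None)          # remove "map" if present, ignore if not
    with open(path, 'w') as f:
        json.dump(obj, f, indent=4, separators=(',', ': '))
```

You could then apply it to every file found by os.walk, or call it from find . -name '*.json' via a small driver script.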
Removing multiple lines
1,482,553,843,000
the below command when I run from the terminal will keep posting the output to message.log cf logs broker-analytics > /var/www/cfbrokerlogs/message.log However, if i close my terminal it will stop working. How do i make it run in the background all the time ? Also If for any reason if it stops execution what would a good approach to check that and make sure it is always running ? The command simply prints the log when there is any entry made. I am on a Ubuntu server 14.04.
You could use nohup combined with &: nohup cf logs broker-analytics > /var/www/cfbrokerlogs/message.log & The nohup command causes the program to ignore hangup signals (i.e. those that are sent when closing the terminal), and the & of course runs it in the background. If you want to make sure it's still running or kill it, you can use ps: ps ax | grep "cf logs broker-analytics" You should then be able to see the process ID, which you can kill if necessary.
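As an aside, pgrep -f (from the standard procps tools) matches against the full command line and avoids the classic pitfall of ps | grep finding the grep process itself; a sketch with sleep standing in for the long-running command:

```shell
# Start a stand-in background job, find it by full command line, clean up
nohup sleep 30 >/dev/null 2>&1 &
pid=$!
pgrep -f 'sleep 30'      # lists the PID(s) of matching processes
kill "$pid"
```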
How to run a command in background always?
1,482,553,843,000
I'm using a Google Drive command-line script that can return a list of files such as: Id Title Size Created 0Bxxxxxxxxxxxxxxxxxxxxxxxxxx backup-2014-12-26.tar.bz2 569 MB 2014-12-26 18:23:32 I want to purge files older than 15 days. How can I execute the following command: drive delete --id 0Bxxxxxxxxxxxxxxxxxxxxxxxxxx with the Id of all the lines that have a Created date older than 15 days?
You can apparently use the Google API to list and sort the files to your needs specifically (from drive --help):

list:
    -m, --max      Max results
    -q, --query    Query (see https://developers.google.com/drive/search-parameters)

...and from the link...

Search for files modified after June 4th 2012:

modifiedDate > '2012-06-04T12:00:00' // default time zone is UTC
modifiedDate > '2012-06-04T12:00:00-08:00'

Note that the example searches for files newer than a certain date... So this isn't very difficult at all, though, for whatever reason, drive seems capable of handling only a single argument per invocation:

mdate_list() {
    drive list -nq \
        "modifiedDate $1 '$(date -ud"$2" '+%FT%T')' and \
        mimeType != 'application/vnd.google-apps.folder'"
}
rmdrv()
    for dfile
    do  drive delete -i "$dfile" || return
    done

set -f; unset IFS   # no split data mangling, please
while   set -- $(mdate_list \< '15 days ago' | cut -d\ -f1)
        [ "$#" -gt 0 ]
do      rmdrv "$@" || ! break
done

I only instituted the while loop at all in case you have too many drive files to handle in a single listing - most of the time you will easily do without it, but if there are a great many, this will keep populating the list until there are no more. The rest just happens as a result of the data you feed it. Note that I specifically excluded folders here, but you will probably want to look at the link mentioned as well if there is anything else you might want to tweak.
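The only non-obvious piece is the GNU date call that builds the timestamp for the query; a quick sketch of that piece in isolation (GNU date assumed):

```shell
# -u works in UTC, -d parses a date string, +%FT%T prints the ISO-ish form
# the Drive query syntax expects.
date -ud '2012-06-04 12:00:00' '+%FT%T'
# -> 2012-06-04T12:00:00

# The relative form used in the deletion loop:
date -ud '15 days ago' '+%FT%T'
```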
Extract lines with specific dates and execute a command on each of them
1,482,553,843,000
I have a Dell R610 server running RHEL 6.4. This server has an iDRAC Enterprise. I would like to configure the iDRAC from the command line, avoiding a reboot. I read in the ipmitool man page that I can use a command like:

#ipmitool lan set 1 mode dedicated

but the command only returns usage text:

usage: lan set <channel> <command> <parameter>

I checked another command from the man page, which does not exist on my server either:

#ipmitool lan get

Other commands work without issue, like:

#ipmitool lan print

I am running ipmitool-1.8.11-13.el6.1.x86_64. I am wondering why I don't have all of the commands from the man page available? Any idea?
Up until a few years ago, ipmitool was undergoing rapid development. On some Linux distributions from around that time, the man page may not describe all the commands supported by the executable. In your case, setting Dell DRAC and iDRAC parameters is supported by ipmitool 1.8.11, and is done using the ipmitool delloem command. So you could use these commands: ipmitool delloem lan get ipmitool delloem lan set dedicated
ipmitool set idrac mode
1,482,553,843,000
I'm developing a custom embedded barebones distro. I have console access over serial to my machine. I would like to control what tty my user sees on their framebuffer. Currently the machine boots and sits at the splash screen while my program writes things to tty0. The user has to press Alt+[F1..F10] to get to the desired terminal but I would prefer they not need to know that command. I'm willing to install packages but would rather not.
NAME
       chvt - change foreground virtual terminal

SYNOPSIS
       chvt N

DESCRIPTION
       The command chvt N makes /dev/ttyN the foreground terminal. (The corresponding screen is created if it did not exist yet. To get rid of unused VTs, use deallocvt(1).) The key combination (Ctrl-)LeftAlt-FN (with N in the range 1-12) usually has a similar effect.
Is there a console command that replicates Alt+[F1..F10] to change terminal?
1,482,553,843,000
I have a daemon running on Server A. There, there's an argument-based script to control the daemon, daemon_adm.py (on Server A). Through this script I can send "messages" to the daemon coming from user input. Free text, whatever you like it to be. Then, there's a web interface on Server B for daemon_adm.py, written in PHP using phpseclib's SSH2 class. I know it is strongly discouraged to pass user input to the command line, but well, there must be a way to pass the text from the web Server B to daemon_adm.py on Server A. How can I securely pass text as an argument to a command line utility? Even if I echo the arguments and pipe them to daemon_adm.py like this:

<?php
$command = '/path/to/daemon_adm.py "'.$text.'"';
$ssh->exec($command);
// or whatever other library or programming language
?>

since this command is executed by an ssh interface with a formatted string, code could be injected:

<?php
$text = 'safetext"; echo "hazard"';
$command = '/path/to/daemon_adm.py "'.$text.'"';
$ssh->exec($command);
// command sent: /path/to/daemon_adm.py "safetext"; echo "hazard"
?>

My current option in mind is encoding every user input to base64 (which as far as I know doesn't use quotes or spaces in its character set) and decoding it inside daemon_adm.py, like this:

<?php
$text = 'safetext"; echo "hazard"';
$encoded_text = base64_encode($text); // encoding it to base64
$command = '/path/to/daemon_adm.py '.$encoded_text;
$ssh->exec($command);
// command sent: /path/to/daemon_adm.py c2FmZXRleHQiOyBlY2hvICJoYXphcmQi
?>

Is this safe enough, or convoluted?

-- EDIT --

One indirect solution, as indicated by Barmar, would be to make daemon_adm.py accept the text data from stdin, and not as a shell-parsable argument.
To insert a string in a shell snippet and arrange for the shell to interpret the string literally, there are two relatively simple approaches:

Surround the string with single quotes, and replace each single quote ' by the 4-character string '\''.

Prefix each ASCII punctuation character with \ (you may prefix other characters as well), and replace newlines with '␤' or "␤" (newline between single or double quotes).

When invoking a remote command over SSH, keep in mind that the remote shell will expand the command, and in addition, if you're invoking SSH via a local shell, the local shell will also perform expansion, so you need to quote twice. PHP provides the escapeshellarg function to escape shell special characters. Since exec performs expansion, call it twice on the string you want to protect. Note that this is fine for text strings, but not for byte strings: most shells won't let null bytes through. Another approach, which is less error-prone and allows arbitrary byte strings through but requires changing what runs at the other end, is to pass the string on the remote command's standard input.
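A shell sketch of the first rule (wrap in single quotes, rewriting each embedded quote as '\''), done here with sed; the sample text is just for illustration:

```shell
# Hostile-looking input, including a single quote and an injection attempt:
text="it's \"quoted\"; echo hazard"

# Single-quote it, replacing each ' with '\'' :
quoted="'$(printf '%s' "$text" | sed "s/'/'\\\\''/g")'"

# Feeding the quoted form through another shell evaluation round-trips
# the original string literally; nothing is executed as code.
sh -c "printf '%s\n' $quoted"
```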
Securely passing user input to command
1,405,003,621,000
Here's the script. It is successful when I run it from the BASH prompt, but not in the script. Any ideas? When I say "fails," I mean the sed regex doesn't match anything, so there is no replaced text. When I run it on the command line, it matches. Also, I might have an answer to this. It has to do with my grep alias and GREP_OPTIONS having a weird interplay. I'll post back with the details on those.

#!/bin/bash
for ((x = 101; x <= 110; x++)); do
    urls="${urls} www$x.site.com/config"
done;

curl -s ${urls} | grep -i "Git Commit" | sed -r "s/.*Git Commit<\/td><td>([^<]+).*/\1/g"
I was actually able to figure this out, and I figure I'd add it here for the next googler who bangs their head against the same wall. I had a grep alias and GREP_OPTIONS set. This caused color highlighting to remain on in the script, even when piping to another command. That usually doesn't play nicely with sed. Here are my .alias and options:

alias grep='grep -i --color'
export GREP_OPTIONS="--color=always"

So when running from the script, it doesn't use the aliased command, and GREP_OPTIONS forces color to always be on. When I checked my alias and saw the --color option (which means auto, which means "don't color output that gets piped to another command", like sed), I was confused, because I forgot I had also set GREP_OPTIONS, so I expected the grep in the script to have color set to auto by default (as it would if I hadn't set the global GREP_OPTIONS). But not so. Here are my new settings (I believe the --color flag to GREP_OPTIONS is redundant, but I leave it there as a reminder):

alias grep='grep --color=always'
export GREP_OPTIONS="--ignore-case --color"

That way, any time I am on the command line, I'll have highlighting on for all my greps (which is usually what I want). But in scripts it will default to coloring only when not piped to another command. I'll still have to add --color=always to many of my scripts (since I tend to prefer highlighting in most cases, even when piping to another command, unless I don't ever see the output).
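The interplay can be reproduced without any aliases at all; a sketch assuming GNU grep. With --color=always, grep wraps the match in ANSI escape sequences, so a later sed pattern that expects the plain text no longer matches:

```shell
# Escape codes land between "Commit" and the hash, so sed finds no match:
printf 'Git Commit abc123\n' \
  | grep --color=always 'Git Commit' \
  | sed -n 's/.*Git Commit \(.*\)/\1/p'

# With --color=auto, color is suppressed when output is piped, and the
# same sed expression works:
printf 'Git Commit abc123\n' \
  | grep --color=auto 'Git Commit' \
  | sed -n 's/.*Git Commit \(.*\)/\1/p'
# -> abc123
```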
Why does the same sed regex (after grep) fail when run in a bash script vs bash command line?
1,405,003,621,000
Let's say there's a specific time and date I have in mind. All files last edited before this date I want to keep in the directory, but all files that have been edited since this date I want to mv somewhere else. The man page of mv doesn't show this being possible with mv directly. I thought some form of the following should work:

ls -t | head -n $number

where $number specifies the number of files that have been edited since the time and date I had in mind. I could then somehow feed this to mv to mv those files (haven't thought up exactly how to do that). The disadvantage of this is that I would have to count up how many files have been edited since the date and time I had in mind. Is there a way where I can just specify a date and time and let my computer figure out which files need to be mved and mv them for me? If not, then how would I complete the command I have written above to feed those file names to mv to have them all mved to the same location?
find /path/to/dir -mtime +5 -exec mv {} /target/path/ ';' will move all files in /path/to/dir that are older than five days to /target/path. You can try this to see what will actually be executed: find /path/to/dir -mtime +5 -exec echo mv {} /target/path/ ';' Note that the -mtime parameter checks the file's modification time. Have a look at -ctime or -atime in find's manpage for more detail. If you want to specify your times in minutes, use one of -mmin, -cmin and -amin instead. To find files younger than a specific amount of time, use - instead of +, e.g. -mmin -30. Another method would be to use xargs (which will execute a command with each of its input lines; manpage): find /path/to/dir -mtime +5|xargs -i echo mv {} /target/path (remove the 'echo' to actually move stuff)
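Here is a self-contained sketch of the same idea in a scratch directory, using -mmin (minutes) and GNU touch -d to fake the timestamps:

```shell
src=$(mktemp -d); dst=$(mktemp -d)
touch -d '2 hours ago' "$src/old.txt"   # modified 120 minutes ago
touch "$src/new.txt"                    # modified just now

# Move files modified within the last 30 minutes (the "- means newer" form):
find "$src" -type f -mmin -30 -exec mv {} "$dst/" ';'

ls "$dst"                # only new.txt was moved
rm -rf "$src" "$dst"     # clean up the scratch directories
```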
With mv, possible to put a time dependence on files mv'ed?
1,405,003,621,000
I have a log file that has lines that look like this:

blah blah blah Photo for (702049679 - blah blah blah

Now I know I could get all the lines like that from the file by doing:

grep "Photo for" logFile

But how can I take those lines and get a comma-separated list of each number after the parenthesis, in a single output line (these are going to be pasted into an SQL query)? The numbers in question will be the first occurrence of a string of numeric characters 9 or more digits long. Ideally they could be matched using that criterion, or by being the first number occurring after the "Photo for (" text.
A regex this complicated is better handled with Perl, e.g.

grep "Photo for" logFile | perl -pe 's/.*Photo for \((\d+).*/$1/' | tr '\n' ','

If Perl is out of the question:

grep "Photo for" logFile | awk '{sub(/.*Photo for \(/,"",$0);sub(/[ ].*/,"");print $0}' | tr '\n' ','
Awk/grep/sed get comma separated list of numbers from lines of text
1,405,003,621,000
Is it possible to list all the ".php" files located in a directory, along with their octal permissions? I would like to list them like this:

775 /folder/file.php
644 /folder/asd/file2.php
etc...
find /folder -name '*.php' -type f -print0 | perl -0 -lne 'printf "%o %s\n", (lstat $_)[2]&07777, $_' See also this related question: Convert ls -l output format to chmod format. -print0 is a GNU extension also supported by BSDs like OS/X. GNU find also has a -printf predicate which could display the mode, but that one has not been added to BSD's find. (Tested on OS/X 10.8.4 and Debian 7 but should work on any system that has any version of perl and find -print0 which includes all GNU systems and all recent BSDs)
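On GNU find (Linux), the -printf predicate mentioned in the answer produces the same listing without Perl; a self-contained sketch in a scratch directory:

```shell
d=$(mktemp -d)
touch "$d/a.php"; chmod 644 "$d/a.php"
touch "$d/b.php"; chmod 775 "$d/b.php"

# %m is the octal mode, %p the path:
find "$d" -name '*.php' -type f -printf '%m %p\n'
# -> e.g. "644 /tmp/tmp.XXXX/a.php" and "775 /tmp/tmp.XXXX/b.php"

rm -rf "$d"
```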
MacOsx - Shell - list all .php files and their octal permissions, inside a specifc folder
1,405,003,621,000
I've found this code in a tweet: :(){ :|: & };: It said something about fork, but I don't completely get how it works. Could anybody please explain in detail what it does and how it works? Thanks in advance.
That is a "fork bomb", as you've heard. There's a whole Wikipedia page about it.

"The fork bomb in this case is a recursive function that runs in the background, thanks to the ampersand operator. This ensures that the child process does not die and keeps forking new copies of the function, consuming system resources." -Wikipedia

In short, what it's doing is creating more and more processes (by calling the same function recursively), thereby overloading the system. You'll note that the function identifier is ":()", which you could replace with a name and indent the code to make it more legible. By replacing the function identifier and re-indenting, the code reads:

bomb() {
    bomb | bomb &
}; bomb

"Don't try this at home kids." -Mr. Wizard
What does this code do? [duplicate]
1,405,003,621,000
I have some gnome shell widgets that I need to be closed. I am unable to find which processes are behind them. Any idea how can I kill them?
You can always try using ps to determine what processes are running out of everything, e.g.: ps -ely | grep -i $PROCESSNAME Guessing at what the widget names will be: ps -ely | grep -i gnome Is very likely to list them all.
How can I kill gnome shell widgets?
1,405,003,621,000
I know that the finger command is used to display information about local and remote users.

finger --> displays users logged in on the local machine, even if they logged in remotely.
finger @hostname --> displays users logged in on the remote machine.
finger user@hostname --> I don't know what this is used for?

And the who command is used to get info about the local machine's users only; is that true?
who "Displays information about the users currently logged in to the local machine" (who man page). You can also specify a file for who to read from, like an old logins file.

finger Displays information about the user (local, or remote if a host is supplied), or all users if no username is supplied. finger provides much more information about user logins: when they logged in, idle time, etc. It is much more informative than who.
finger and who commands usage
1,405,003,621,000
I am asked to check and shut down processes that I am not familiar with. When I run ls in the bin folder, I see multiple process .sh scripts. But I want to know which script is associated with which tomcat process. Is there an easy way to find that out? For example, there is startmyprocess1.sh, but ps -ef | grep startmyprocess1 doesn't return the running process. The actual running tomcat process name is myprocess, so ps -ef | grep myprocess does show the running process. To learn that, I had to ask the person responsible. Since the names are different, I would need to ask him several times. Is there a better way to figure this out?
Starting myprocess from within startmyprocess.sh does not name the process after the shell script; that is why your ps -ef | grep startmyprocess1 does not return a result. This is also why many processes, especially daemons, write their PID out to a file so that you can easily reference the process. Since $! holds the PID of the last background job, the launch script can record it:

#!/bin/sh
myprocess &
echo $! > /tmp/myprocess.pid

and you can then query/list the process by its PID:

ps --pid "$(cat /tmp/myprocess.pid)"
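A self-contained sketch of the $!/PID-file pattern, with sleep standing in for myprocess:

```shell
# sleep stands in for the real daemon here.
sleep 30 &
echo $! > /tmp/myprocess.pid

# Later (or from another script), look the process up by its recorded PID
# (procps ps on Linux):
ps --pid "$(cat /tmp/myprocess.pid)"

kill "$(cat /tmp/myprocess.pid)"    # clean up the demo process
```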
check running process
1,405,003,621,000
To back up a MySQL MyISAM database, I'm using:

backupdb=siteone
mysqldump -u root -pthepass --lock-tables --add-locks --disable-keys --skip-extended-insert --quick $backupdb > /var/www/html/db2.sql

Even after using it for a long time, I still don't know what using nameofmychoice=anyname is called. After typing backupdb=siteone, I need to know how long it will be kept in memory. If I run many lines of commands which take a very long time, do I need to type backupdb=siteone again and again from time to time to regenerate it? Is using it this way really safe for almost anything? I'm worried about data loss since I'm dealing with a database.
In your example, backupdb is called a variable. A shell variable will be there until you change it to a new value, or the shell exits (most likely because you type exit, or close the terminal). In your case, backupdb will be there for a long time, so you don't need to type that again and again. My guess is that you are using the BASH shell because it's quite popular. If you want to find out more you can read the BASH programming intro.
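A tiny sketch of that behaviour: the variable keeps its value across any number of later commands, until it is reassigned or the shell exits:

```shell
backupdb=siteone
sleep 1                  # other long-running commands can go here...
ls / > /dev/null         # ...and here; nothing touches the variable
echo "$backupdb"         # -> siteone, no need to set it again
```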
How reliable of using "nameofmychoice="anyname""
1,405,003,621,000
I set a variable:

TEMPP='-I ../dir1 -I ../dir2'

then I run:

gcc -M $TEMPP somefile.c

It seems not to include ../dir1 and ../dir2 in the search list for include files, and if there is a space at the beginning of the variable, like:

TEMPP=' -I ../dir1 -I ../dir2'

it reports an error:

gcc: -I ../common1 -I ../encrypt: No such file or directory

so it seems the variable was treated as a file name. Maybe I could separate the directories to avoid this problem, but those include dirs are generated by another command, and their number is not constant. So how can I make a variable in a command be treated literally, as if it were typed manually on the command line, rather than as a single string or a file name?

OK... I found this situation only happens in zsh; in bash it works well. Maybe in zsh the expansion of variables is special. So if anyone can tell me how to make this work in zsh, I would appreciate it; otherwise I can only do this in bash.
In zsh, unlike other Bourne-style shells, the results of a variable substitution are not split into words that are interpreted as wildcard patterns. So in zsh, if you write a='hello *.txt' echo $a then you see hello *.txt, unlike other shells where you'll see something like hello bar.txt foo.txt. You can turn on word splitting for one expansion with $=TEMPP. You can turn on word splitting for all expansions with setopt sh_word_split or emulate sh. You should stick these commands in a makefile and let a real Bourne-style shell (whatever is installed as /bin/sh) run them. Keep zsh for interactive use and your own scripts.
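The splitting that zsh suppresses can be seen in any Bourne-style shell; counting the arguments an unquoted expansion produces makes the difference concrete:

```shell
a='-I ../dir1 -I ../dir2'

set -- $a          # unquoted: word splitting applies (what gcc needs)
echo $#            # -> 4 separate arguments

set -- "$a"        # quoted: no splitting (zsh's default even unquoted)
echo $#            # -> 1 single argument
```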
How to change the interpretation of a variable in `zsh`?
1,405,003,621,000
Possible Duplicate: How can I close a terminal without killing the command running in it? I'm using Debian for my Server. I just installed MediaCore, which works well. Now I want to have it always started and want to ask, how it'd be possible to start it as a service or in the background. I know how to start but then the shell is useless as long as the program runs. So how can I run it silently, as I run services/daemons?
You have a couple of options. First, you can launch it in screen and then detach with Ctrl-A d after it launches. You can later reattach to the screen with screen -RR {screen number}; you can figure out the screen number with screen -ls. (If you only have one active screen, a simple screen -RR will reattach.)

Second, you can launch it from the shell and background it by appending an & after the command. However, you also want to redirect stdout and stderr to appropriate files so the shell isn't interspersing the output of the command with your shell. I think something like

$ command > command.stdout 2> command.stderr &

is what you are looking for. I've never used MediaCore so I don't know what it outputs. If you just want to capture all output to a file, whether from stdout or stderr, this will work:

$ command &> command.output &

However, in the long run, since you are using Debian, the right thing to do is add an init script for it (as @user606723 mentioned). There is a skeleton script in /etc/init.d that would be a good starting point.
How to run regular programs as daemons/services [duplicate]
1,405,003,621,000
When using Git VCS, I execute all of the git commands on the directory that contains a .git repository. I want to execute a git-pull through an SSH trigger but how do I define the path to the repository to perform the action on?
If you set the GIT_DIR environment variable, git will use it as a path to the repository. In general, you can start a subshell like this: (cd /some/other/directory/; git pull) The subshell will have its own current directory and environment variables.
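A quick sketch of the subshell approach: the cd is confined to the parentheses, so the parent shell never moves:

```shell
start=$(pwd)
(cd / && pwd)      # the git pull would run here, in /
pwd                # -> still $start; the parent's directory is unchanged
```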
Execute a command in a different path
1,405,003,621,000
Is there a good, simple commandline player that uses gstreamer?
gst123 is command line music player that uses gstreamer. I have not messed with it, I generally use MOC.
Commandline gstreamer player
1,405,003,621,000
I'm trying to download a large number of files from a remote server. Part of the path is known, but there's a folder name that's randomly generated, so I think I have to use wildcards. The path is something like this:

/home/myuser/files/<random folder name>/*.ext

So I was trying this:

rsync -av [email protected]:~/files/**/*.ext ./

This gives me the following error:

bash: /usr/bin/rsync: Argument list too long

I also tried scp instead of rsync but got the same error. It seems bash interprets the wildcard as the full list of files. What's the right way to achieve this?
Instead of letting the remote shell expand a glob that results in a too long list of arguments, use --include and --exclude filters to transfer only the files that you want: rsync -aim --include='*/' --include='*.ext' --exclude='*' \ [email protected]:files ./ This would give you a directory called files in the current directory. Beneath it, you will find the parts of the remote directory structure that contain the .ext files, including the files themselves. Directories without .ext files would not appear on the target side as we use -m (--prune-empty-dirs). With the --include and --exclude filters, we include any directory (needed for recursion) and any name matching *.ext. We then exclude everything else. These filters work on a "first match wins" basis, which means the --exclude='*' filter must be last. The rsync utility evaluates the filters as it traverses the source directory structure. If you then want to move all the synced files into the current directory (ignoring the possibility of name clashes), you could use find like so: find files -type f -name '*.ext' -exec mv {} . \; This looks for regular files in or beneath files, whose names match the pattern *.ext. Matching files are moved to the current directory using mv.
Retrieve large number of files from remote server with wildcards
1,405,003,621,000
I've configured openvswitch virtual switch and can list it with ip command as follows: # Show all interfaces ip link Output: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 <snip> 5: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000 link/ether 8a:94:11:48:01:db brd ff:ff:ff:ff:ff:ff 6: ovsbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000 link/ether e6:db:3f:88:4b:48 brd ff:ff:ff:ff:ff:ff The openvswitch from this output is named ovsbr0 Now I want to use the ip command to list only this virtual switch and exclude other interfaces, for example: # List only bridges ip link show type bridge Expected output: 6: ovsbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000 link/ether e6:db:3f:88:4b:48 brd ff:ff:ff:ff:ff:ff Actual output: <no output> Why do I expect this command to output ovsbr0? This problem is specific to openvswitch because if I use the same command to list bridges that are not openvswitch then it works fine. Example with a bridge that is created with ip command: # Create bridge named "br0" sudo ip link add br0 type bridge # Show the newly created bridge called "br0" ip link show type bridge Provides expected output: 7: br0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000 link/ether 6a:76:6f:50:da:35 brd ff:ff:ff:ff:ff:ff As you can see the command works if the bridge is created with ip command. But it doesn't work for openvswitch Question: How do I use the ip command to list only openvswitch interfaces (virtual switches)? Why the ip command does not work to list openvswitch interfaces (virtual switches)? 
Additional context: The openvswitch was not created with ip command, instead it was created with the ovs-vsctl command that is part of openvswitch package: sudo ovs-vsctl add-br ovsbr0 This openvswitch bridge can however be deleted with ip command just fine even though it was not created with ip command: # Delete it with ip command sudo ip link delete ovsbr0 # Alternative and conventional method sudo ovs-vsctl del-br ovsbr0 What have I tried: # List openvswitch only but specifying type other than TYPE bridge ip link show type TYPE What are other interface types to test listing? # See TYPE := section from this output for types other than "bridge" ip link show help
The Open vSwitch interface is not a kernel bridge interface, but a kernel(-accelerated) openvswitch interface, with its own separate driver. In case of doubt, any interface type will be displayed with the -details option (edited to match OP): $ ip -details link show dev ovsbr0 6: ovsbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000 link/ether e6:db:3f:88:4b:48 brd ff:ff:ff:ff:ff:ff promiscuity 1 allmulti 0 minmtu 68 maxmtu 65535 openvswitch numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536 gso_ipv4_max_size 65536 gro_ipv4_max_size 65536 $ ip -details -json link show dev ovsbr0 | jq -r '.[].linkinfo.info_kind' openvswitch So naturally the command to display only this type is: ip link show type openvswitch This resource being developed separately from iproute2 one shouldn't be surprised if the help doesn't include it. For example, likewise, wireguard doesn't appear in the help, but a (kernel-based) WireGuard interface would be displayed using ip link show type wireguard.
List openvswitch virtual switch with ip command
1,405,003,621,000
I could find the IP address of my board by using ifconfig, but I need the port number as well to set up the Linux TCF agent. I tried netstat with several options but could not find it, as it returns too many entries and I cannot scroll the screen back to the top. So is there a way to find the port number, given the IP address?
Prefer using ss netstat is deprecated (no update since 2011), as said in the man page, in the NOTE section. So you probably should use ss, which is part of the iproute2 package. You can use the following command: ss -nltp -n will prevent names resolution -l displays listening sockets -t is used for TCP transport (if you have UDP instead, use -u, or you may even use both at same time) -p will show you the process name which uses this socket You may still use netstat Note that if you can't use ss for any reasons, the command line arguments are exactly the same for netstat, eg.: netstat -nltp Filter the output To look for a specific program or port numbers, you can apply filter on the output using grep: ss -nltp | grep "<process_name or port_number>" Or as said by @davidt930 in the comments, use less to easily browse the output: ss -nltp | less Other tool that may help you Just in case someone needs this, I wanted to say that you can also use tools for ports scan. So you can even run the command from outside the system. The most famous one is probably Nmap. I know it's a bit overkill in this situation ^^, but I add that just in case someone is in trouble and needs a different solution.
How to find the port number given the ip address
1,405,003,621,000
Without giving more details, the only commands I know of are file, stat and mediainfo. While they give some idea about an .epub document, they don't give everything. For example, file just gives the filename and declares it to be an epub document. mediainfo is slightly better in that it gives the following info:

Format : ZIP
File size : 93.9 MiB
FileExtension_Invalid : zip docx odt xlsx ods

So apart from the name of the document, I know the above. The most crucial bits, though, are missing: when was the epub book published, what epub version was used, what app was used to make the .epub document, and so on and so forth. There are so many versions, from version 2, 2.0.1, 3, 3.2 and so on. Having all the above info would make things easier to troubleshoot. Something on the lines of pdfinfo.
Try exiftool which is the go-to tool to extract file metadata: $ exiftool Downloads/accessible_epub_3.epub ExifTool Version Number : 12.57 File Name : accessible_epub_3.epub Directory : Downloads File Size : 4.1 MB File Modification Date/Time : 2023:05:05 08:13:22+01:00 File Access Date/Time : 2023:05:05 08:13:33+01:00 File Inode Change Date/Time : 2023:05:05 08:13:22+01:00 File Permissions : -rw-r--r-- File Type : EPUB File Type Extension : epub MIME Type : application/epub+zip Identifier Id : pub-identifier Identifier : urn:isbn:9781449328030 Title Id : pub-title Title : Accessible EPUB 3 Language Id : pub-language Language : en Date : 2012:02:20 Meta Property : dcterms:modified Meta : 2012:10:24 15:30:00Z Creator Id : pub-creator12 Creator : Matt Garrish Contributor : O’Reilly Production Services, David Futato, Robert Romano, Brian Sawyer, Dan Fauxsmith, Karen Montgomery Publisher : O’Reilly Media, Inc. Rights : Copyright © 2012 O’Reilly Media, Inc Manifest Item Id : htmltoc Manifest Item Properties : nav Manifest Item Media-type : application/xhtml+xml Manifest Item Href : bk01-toc.xhtml Spine Itemref Idref : cover Spine Itemref Linear : no (here on a sample epub file from http://idpf.github.io/epub3-samples/30/samples.html)
Getting more metadata about epub documents
1,405,003,621,000
I can download extensions from https://extensions.gnome.org/ or https://cinnamon-spices.linuxmint.com/ using curl. However, I am unable to do it from https://www.gnome-look.org To be specific, I am trying to download the zip files from https://www.gnome-look.org/p/1309239 and https://www.gnome-look.org/p/1308808 I found that the download links look like https://files03.pling.com/api/files/download/j/eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpZCI6MTY3OTUwODg0MywidSI6bnVsbCwibHQiOiJkb3dubG9hZCIsInMiOiI1YWI5OWZkYjg3ZDMxYTlmYzEwZTdjNGE5YTQ0YTE2ZTdmNDI5ZDVhOTFkYzI3MGVhNzZhYTE1OTg2ZTQ1YmIwYjNjZDM3NTYwMjU4ZDJkMzQ1YWY2ZDNiMWFmZTcxNDA4NTU2OTc4Zjg3ZWFkN2EyMzgyOTJjNjM0YmEyMDllMCIsInQiOjE2ODIzMjI2ODUsInN0ZnAiOm51bGwsInN0aXAiOm51bGx9.wHIMzT4EdVvErh625hc2cOlVfge51l0_lcC067fEqWM/Solarized-Dark-Cyan-3.0.3.tar The issue is the links can not be found in the page source. They are hidden behind tabs (triggered via js). I am not understanding how to get the links of these files via cli. I am not hung up on curl. Any too will work as long as I can get the links via CLI.
To download the latest versions of the themes or icons, use the following command. curl -Lfs https://www.gnome-look.org/p/1308808/loadFiles | jq -r '.files | first.version as $v | .[] | select(.version == $v).url' | perl -pe 's/\%(\w\w)/chr hex $1/ge' | xargs wget or: curl -Lfs https://www.gnome-look.org/p/1308808/loadFiles | jq -r --arg version "$(curl -Lfs https://www.gnome-look.org/p/1308808/loadFiles | jq -r '.files | .[0] | .version')" '.files | .[] | select(.version==$version) | .url' | perl -pe 's/\%(\w\w)/chr hex $1/ge' | xargs wget To download all files use curl -Lfs https://www.gnome-look.org/p/1308808/loadFiles | jq -r '.files | .[] | .url' | perl -pe 's/\%(\w\w)/chr hex $1/ge' | xargs wget
Download files from gnome-look.org via CLI
1,405,003,621,000
I'm using grep to search for patterns in huge log files. Since it's a log file and I want to see what's going on around the matches, I usually do grep -C 3 --color ... However, after reading a lot of logs I find it a little annoying to distinguish, among every 7 lines, which line belongs to which match. So I want a way to distinguish between the matches (and the 6 lines surrounding each). The thing I think would be helpful, and easier on my eyes, is to make each match a different color - of course not really a new color for each match, but just a few colors, let's say 3, such that matches are colored in that sequence. For example, if I got 4 matches they would be colored as follows:

prefix1 match1 // purple color: start
prefix2 match1
prefix3 match1
match1:
suffix1 match1
suffix2 match1
suffix3 match1 // purple color: end
prefix1 match2 // blue color: start
prefix2 match2
prefix3 match2
match2:
suffix1 match2
suffix2 match2
suffix3 match2 // blue color: end
prefix1 match3 // green color: start
prefix2 match3
prefix3 match3
match3:
suffix1 match3
suffix2 match3
suffix3 match3 // green color: end
prefix1 match4 // purple color: start
prefix2 match4
prefix3 match4
match4:
suffix1 match4
suffix2 match4
suffix3 match4 // purple color: end

7 lines each color. So what I'm essentially looking for is a way to tell grep to do that, or a command to pipe grep into so that its output is colored as above.
Like this using Perl and core Term::ANSIColor (installed by default): <COMMAND> | perl -MTerm::ANSIColor=:constants -pe ' BEGIN{ our @colors = (MAGENTA, BLUE, GREEN); our @cols; } @cols = @colors if not scalar @cols; my $color = shift @cols if /^$/ or $. == 1; print $color; END{ print RESET } ' Or even shorter, Thanks @Terdon: <COMMAND> | perl -MTerm::ANSIColor=:constants -00 -pe ' BEGIN{ @colors = (GREEN, MAGENTA, BLUE) } print $colors[$.%($#colors+1)]; END{ print RESET } ' Check color capabilities: perldoc Term::ANSIColor
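For the same effect without Perl, here is a small Python filter sketch. It keys the color change on the -- separator line that grep prints between context groups (the Perl one-liners above key on blank lines / paragraph mode instead):

```python
COLORS = ["\033[35m", "\033[34m", "\033[32m"]   # magenta, blue, green
RESET = "\033[0m"

def colorize(lines, sep="--"):
    """Color each grep -C context group with the next color in the cycle."""
    group = 0
    out = []
    for line in lines:
        if line == sep:
            group += 1          # a new match group starts at the separator
        out.append(COLORS[group % len(COLORS)] + line + RESET)
    return out
```

Wrapped in a script reading sys.stdin line by line, it would be used as grep -C 3 pattern file | python3 colorize.py.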
Highlight grep results in different colors, a color per match (up to 3 colors)
1,405,003,621,000
I have a debian installation, where the OEM has a bunch of processes I don't recognize running, and I want to figure out if any of these things are dialing home. I ran sudo tcpdump | grep ^e <ssh_ip> but the pipe keeps breaking. I am connected via ssh, and the grep ^e is meant to omit the ssh ip of my client. Is there a way to continuously monitor the network connection for outbound requests? I don't know that this thing dials home, or where specifically that is; I just suspect that it does, from other behaviour not related to this question. Broken pipe usually means the connection breaks, but my ssh connection is fine; it seems to be something else to do with tcpdump.
In:

sudo tcpdump | grep ^e <ssh_ip>

First, that ^e has different meanings depending on the shell.

in the Bourne shell, ^ is an alias for | (^, originally rendered more like ↑, being the pipe operator in its predecessor, the Thompson shell), so you'd pipe grep to an e command there.
in fish versions up to 3.2, ^ was to redirect stderr (short for 2>).
in rc and derivatives, ^ is a sort of concatenation operator, so grep ^e would become grepe.
in zsh -o extendedglob, ^ is a negation glob operator, so ^e would expand to all the non-hidden files in the current directory other than the one called e.
in csh/tcsh/bash/zsh, ^ is only special at the start of the line and only when history expansion is not disabled. So not here.
in most other shells, ^e would be passed literally to grep.

So assuming one of those latter shells, that command would start sudo tcpdump and grep ^e <ssh_ip> concurrently, with the output of one connected to the input of the other via a pipe.

The syntax of grep arguments is:

grep [<options>] <regular-expression> [<files>]

([...] indicating optional parts). If there's no file, grep greps its stdin¹. Options start with the - character. So here ^e is the regular-expression, and <ssh_ip> is the file in which to look for lines matching that regexp.

As a regexp, ^e matches the start of the line followed by e, so that reports lines starting with e.

Here, most likely the <ssh_ip> file doesn't exist, so grep will return immediately with an error. At that point, tcpdump's stdout, inherited from sudo's, will become a broken pipe, so the first time tcpdump tries to output something, it will receive a SIGPIPE signal and die, or, if it ignores that signal, its write() will fail with an EPIPE error. Here, tcpdump managing to write an error message suggests it either ignores that signal and reports its failing write(), or it has a handler on it and reports its reception before terminating.
If you wanted to filter out the lines containing an IP address from the output of tcpdump, you'd rather need:

grep -Fvw <ssh_ip>

where -F, -v and -w, here bundled together in a single argument, are respectively for:

matching on Fixed strings rather than a regexp (for . to match a literal . rather than any single character as it does in regexps).
reversing the match: return the lines that don't match.
matching on words only (to not find <ssh_ip> == 10.10.10.10 in 210.10.10.109 for instance).

By default, when tcpdump detects that its output doesn't go to a terminal, it reverts to block buffering, so if you want to see the output as soon as it comes, you'd likely want to add the -l option to tcpdump to re-enable line buffering.

But in any case, tcpdump can filter packets by itself. Here, to exclude the packets from/to a given IP address, you just need:

sudo tcpdump not host <ssh_ip>

Or to only filter out the ssh traffic from that IP address:

sudo tcpdump 'not (host <ssh_ip> and tcp port ssh)'

(assuming the ssh connection is on the default port 22).

Those filters (Berkeley Packet Filters, BPF) are compiled into a binary filter expression passed to the kernel (setsockopt(fd, SOL_SOCKET, SO_ATTACH_FILTER, filter) on Linux at least). The filtering happens there, so it's actually less work for tcpdump and the system in general than if tcpdump asked for all traffic, decoded all of it and passed it to grep.

wireshark, and its command line interface tshark, is a much more advanced traffic analysing application which you may want to consider here as well. They support two types of filter: the BPF one as above, called a capture filter (passed with -f), and a display filter (passed with -Y) further applied on the packets returned by the kernel, using wireshark's own filter syntax and more advanced capabilities.

¹ or recursively into the current working directory if the -r/-R options are given in recent versions of the GNU implementation of grep.
how can I parse tcpdump stream live?
1,405,003,621,000
I have two commands, as follows: This one gives repo names: az acr repository list -n registry -o tsv output looks like: name1 name2 ... This one gives digest codes for one repo: az acr manifest list-metadata -r ${REGISTRY} --name ${REPO} --query "[?tags[0]==null].digest" -o tsv output looks like: digest1 digest2 ... I want to output both repo names and digest codes. Tried: az acr repository list -n registry -o tsv \ | xargs -I% az acr manifest list-metadata -r ${REGISTRY} --n % --query "[?tags[0]==null].digest" -o tsv \ | xargs -I% echo "%: %" Actual output: digest_code: digest_code Expected output: repo_name: digestcode
You'd need something like: export REPO az acr repository list -n registry -o tsv | while IFS= read -r REPO; do az acr manifest list-metadata -r "$REGISTRY" --n "$REPO" --query '[?tags[0]==null].digest' -o tsv | awk '{print ENVIRON["REPO"]": "$0}' done Calling awk to prefix the output of each manifest command with the corresponding repo name. Or if you need to run other commands on each repo/digest pair: az acr repository list -n registry -o tsv | while IFS= read -r repo; do az acr manifest list-metadata -r "$REGISTRY" --n "$repo" --query '[?tags[0]==null].digest' -o tsv | while IFS= read -r digest; do other-cmd --repo "$repo" --digest "$digest" done done With zsh, you could also do: for repo ( ${(f)"$(az acr repository list -n registry -o tsv)"} ) { digests=( ${(f)"$(az acr manifest list-metadata -r $REGISTRY --n $repo --query '[?tags[0]==null].digest' -o tsv)"}) print -rC1 -- $repo': '$^digests } for repo ( ${(f)"$(az acr repository list -n registry -o tsv)"} ) for digest ( ${(f)"$(az acr manifest list-metadata -r $REGISTRY --n $repo --query '[?tags[0]==null].digest' -o tsv)"}) other-cmd --repo $repo --digest $digest In a Makefile, that'd look like: target: az acr repository list -n registry -o tsv | \ while IFS= read -r repo; do \ az acr manifest list-metadata -r "$$REGISTRY" --n "$$repo" --query '[?tags[0]==null].digest' -o tsv | \ while IFS= read -r digest; do \ other-cmd --repo "$$repo" --digest "$$digest"; \ done; \ done While that's on several lines in the Makefile, those lines are joined together, trailing \s removed and the $$s changed to $s before passing the result to sh -c, hence the need to add a few ;s to separate commands in that inline shell script. You may want to put the code in a script instead to make it cleaner.
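If you'd rather do the pairing in a small script than in nested shell loops, the same structure can be sketched generically in Python. The az command lines are the ones from the question; any pair of commands printing one item per line works:

```python
import subprocess

def lines(cmd):
    """Run cmd (an argv list) and return its stdout split into lines."""
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout.splitlines()

def repo_digest_pairs(list_repos_cmd, digests_cmd_for):
    """Pair each repo with each of its digests.
    digests_cmd_for(repo) must build the argv listing that repo's digests."""
    return [(repo, digest)
            for repo in lines(list_repos_cmd)
            for digest in lines(digests_cmd_for(repo))]
```

For the question's case, list_repos_cmd would be the az acr repository list argv, and digests_cmd_for(repo) would build the az acr manifest list-metadata argv with --name set to that repo; each (repo, digest) pair can then be printed as "repo: digest" or passed to another command.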
How to use output of multiple commands in last execution command while chaining commands?
1,405,003,621,000
Using ss, we can identify the local address, peer address, and process. Is there a command that returns how long these connections have been active?
Unfortunately, you have to work at it. The following script is a hack to attempt giving you what you are looking for.

#!/bin/sh
BASE=`basename "$0" ".sh" `
TMP="/tmp/tmp.$$.${BASE}"

#Netid State  Recv-Q Send-Q  Local Address:Port  Peer Address:Port  Process
#u_str ESTAB  0      0       * 38100             * 37262  users:(("(sd-pam",pid=1688,fd=2),("(sd-pam",pid=1688,fd=1),("systemd",pid=1683,fd=2),("systemd",pid=1683,fd=1))

ps axo stat:6,user:15,tty:8,sess,ppid,pid,pcpu,etime,command:60 --columns 256 >${TMP}.pids
ss -p >${TMP}.ss

cat ${TMP}.ss | awk '{
    PIDs="" ;
    #p=index( $0 , "users:"(("at-spi2-registr",
    p=index( $0 , "users:" ) ;
    rem=substr( $0, p ) ;
    c=index( rem, "," ) ;
    head=substr( rem, 1, c-1 ) ;
    pname=substr( head, 9 ) ;
    rem=substr(rem, c) ;
    while( index( rem, "pid=" ) != 0 ){
        n=index( rem, "pid=" )
        rem=substr( rem, n ) ;
        c=index( rem, "," ) ;
        head=substr( rem, 1, c-1 ) ;
        PID=substr( head, 5 ) ;
        PIDs=sprintf("%s|%s", PIDs, PID ) ;
        rem=substr(rem, c) ;
    } ;
    if( PIDs != "" ){
        printf("%s|%s%s\n", $1, pname, PIDs ) ;
    } ;
}' | while [ true ]
do
    read line
    if [ -z "${line}" ] ; then exit ; fi
    ID=`echo "${line}" | awk -F \| '{ print $1 }' `
    NAM=`echo "${line}" | awk -F \| '{ print $2 }' | sed 's+\"+\\\"+g' `
    PID=`echo "${line}" | awk -F \| '{ print $3 }' `
    cat ${TMP}.pids | awk -v pid=${PID} -v nam="${NAM}" '{
        if( $6 == pid ){
            printf("%s %s %s %s\n", $6, $2, $8, nam ) ;
        } ;
    }'
done | sort -n | uniq

The output looks like this:

 3014 ericthered 08:06:14 "WebExtensions"
 3180 ericthered 08:05:57 "Isolated Web Co"
 3834 ericthered 07:50:46 "gvfsd-network"
 3841 ericthered 07:50:46 "gvfsd-smb-brows"
 3856 ericthered 07:50:44 "gvfsd-dnssd"
 4270 ericthered 07:47:12 "caja"
 4364 ericthered 07:46:22 "Isolated Web Co"
 5274 ericthered 07:07:06 "mate-terminal"
 7319 ericthered 04:20:03 "RDD Process"
 7487 ericthered 04:18:15 "Utility Process"
12290 ericthered 02:33:00 "Isolated Web Co"
12558 ericthered 02:24:08 "Isolated Web Co"
13947 ericthered 01:10:28 "Isolated Web Co"
14064 ericthered 01:05:48 "Isolated Web Co"
14116 ericthered 01:05:32 "Web Content"
14152 ericthered 01:05:31 "Web Content"
14509 ericthered 38:02 "Web Content"

It is up to you to decide which parameters to keep in play and how you want those presented.
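The trickiest part of the script above is digging the pids out of the users:(...) column, which the awk block does with repeated substr/index calls. As an assumed-equivalent sketch, that extraction step can be done with a single regular expression in Python:

```python
import re

# matches one ("name",pid=N,fd=N) entry inside ss -p's users:(...) column
USERS_RE = re.compile(r'\("([^"]*)",pid=(\d+),fd=\d+\)')

def pids_from_ss_line(line):
    """Return (process-name, pid) pairs found on one line of `ss -p` output."""
    return [(name, int(pid)) for name, pid in USERS_RE.findall(line)]
```

Each pid can then be matched against the ps snapshot (its etime column) exactly as the shell script does, to approximate how long the owning process - and hence at most its connections - has been around.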
Duration of connections returned by ss
1,405,003,621,000
I've set up so that I can SSH securely into my home PC from my laptop. Today, I tried running an update just to test, but I couldn't do it from the laptop. What I basically wanted to do, was to pass the command to the machine without SSH-ing into it while not keeping the connection alive. I tried looking around online to see if I could find something, but nothing seems to work. These are the commands I tried: $ ssh myip 'sysupg &' bash: sysupg: command not found echo "mypassword" | ssh myip 'sudo -S dnf upgrade -y &' [sudo] password for myuser: sudo: no password was provided sudo: a password is required echo "mypassword" | ssh myip 'sudo -S -k dnf upgrade -y &' [sudo] password for myuser: sudo: no password was provided sudo: a password is required When it asks for my password ([sudo] password for myuser:), it doesn't let me enter anything, just proceeds, giving me the last two sudo errors. tldr: pass a command that requires sudo through ssh to another machine, "automatically" close the connection as soon as the command is passed and, if possible, execute the command in x amount of minutes.
Possible way:

ssh -t myip 'sudo -b dnf upgrade -y >/tmp/aa 2>&1'

You want your task to keep running whilst disconnected; use the 1> and 2> redirections
You want your task to run in the background; use the sudo -b option
You want ssh to provide interactivity to sudo; use the ssh -t option

Other possible but not recommended way:

echo PASSWORD | ssh myip 'sudo -S -b dnf upgrade -y >/tmp/aa 2>&1'

You want sudo to read the password on stdin; use the sudo -S option instead of ssh -t
Having a cleartext password in your command history is a fault

You missed the sudo -b option, because the & makes the password prompt run in the background, not only your desired task. Also, missing the 1> or 2> redirections keeps the ssh connection open. However, provided that you use PubkeyAuthentication, a safe way to do the trick is:

ssh root@myip 'dnf upgrade -y >/tmp/aa 2>&1 &'

Set PermitRootLogin prohibit-password in sshd_config
Copy your public key into /root/.ssh/authorized_keys
Always have your private key passphrase protected, and use ssh-agent

But the best way is to use screen on the server side:

ssh -t myip screen -RD
sudo -s
...whatever...
Ctrl-A d (detach and disconnect)

Later, reattach the session:

ssh -t myip screen -RD

You meet your requirements
You reattach your session alive
You learn screen
How can I pipe an alias that requires sudo through SSH?
1,405,003,621,000
Ansible CLI allows to run a custom shell command (done implicitly using the default module "ansible.builtin.command"): ansible myhost -a "/bin/mycommand" Is it possible to do the same via tower-cli aka awx-cli? EDIT: answers that use the awx CLI tool are acceptable; however, I'd prefer solutions that use tower-cli/awx-cli.
In short, the answer is yes. According to the Ansible Tower CLI Reference Guide, it is possible to do an awx ad_hoc_commands create, resulting in a Job similar to other jobs. The parameter for --module_name would be shell or command, provided that the module is allowed under "ANSIBLE MODULES ALLOWED FOR AD HOC JOBS" ("List of modules allowed to be used by ad-hoc jobs.") under Settings / Jobs. The parameter for --module_args would be your command as usual.
Run a shell command via tower-cli
1,405,003,621,000
Before turning off my Laptop (which runs Fedora 36) I like to run sudo dnf offline-upgrade download -y && sudo dnf offline-upgrade reboot || sudo shutdown now So that all pending updates get installed automatically and I don't have to worry about using the Software Center or shutting down via GNOME. The only problem is by running sudo dnf offline-upgrade reboot my Laptop reboots as the command states and I would like it to shutdown and install the rest of the updates the next time I start my Laptop. Is there a way (maybe using systemd) to shutdown into the upgrade process via Command Line?
Update (2023-04-08) - this is available as of dnf-plugins-core 4.4.0:

dnf offline-upgrade reboot --poweroff

It looks like it's hard-coded — see plugins/system_upgrade.py:

def transaction_upgrade(self):
    Plymouth.message(_("Upgrade complete! Cleaning up and rebooting..."))
    self.log_status(_("Upgrade complete! Cleaning up and rebooting..."),
                    UPGRADE_FINISHED_ID)
    self.run_clean()
    if self.opts.tid[0] == "upgrade":
        reboot()

And reboot() is this:

def reboot():
    if os.getenv('DNF_SYSTEM_UPGRADE_NO_REBOOT', default=False):
        logger.info(_("Reboot turned off, not rebooting."))
    else:
        Plymouth.message(_("Rebooting ..."))
        call(["systemctl", "reboot"])

... but that variable is only for testing, and really meant to prevent the initial reboot. What you want seems like a very reasonable enhancement -- maybe a --poweroff-after flag or something to change that systemctl reboot call to systemctl poweroff.
Fedora Offline-Upgrade shutdown instead of reboot from CLI
1,405,003,621,000
About the pkill command: I know it is possible to kill processes - in specific scenarios - based on tty[1-6] and pts/[0-N]. I tested this and it works as expected. Until here all is ok. But now, according to this answer: What is the difference between kill, pkill and killall?, it indicates (extract):

pkill and killall are also wrappers to the kill system call, (actually, to the libc library which directly invokes the system call), but can determine the PIDs for you, based on things like, process name, owner of the process, session id, etc.

Observe the session id part. I checked both man pages, and this feature exists for pkill according to either of:

pkill from commandlinux.com
pkill from linux.die.net

as follows, respectively:

-s sid,... Only match processes whose process session ID is listed. Session ID 0 is translated into pgrep's or pkill's own session ID.

-s, --session sid,... Only match processes whose process session ID is listed. Session ID 0 is translated into pgrep's or pkill's own session ID.

As you can see the content is the same, with a minor variation in the option/parameter names. If I use:

the console directly, it is possible to use pkill based on tty[1-6] to kill something
a remote connection through ssh, it is possible to use pkill based on pts/[0-N] to kill something

The reason for this post:

Question

What does Session ID mean within the pkill context?

Extra Questions

How is a Session ID created? How can I retrieve a list of Session IDs to be used with pkill?
The session id is the identifier of a process’ session. Sessions are a concept tied to shell job control, at a level above process groups; all processes in a given session share the same controlling terminal. In non-graphical environments, sessions can be thought of as login sessions (at least, that’s part of the original idea; they mustn’t be confused with systemd sessions which track login sessions in systemd-based environments). Sessions are created with a call to setsid; see also their description in man credentials. Those links are to Linux-specific documentation, but this isn’t Linux-specific; see also the POSIX specification for setsid. ps can show session ids: ps -eo pid,sess,args
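The mechanics are easy to see from Python, which exposes the same calls (os.setsid, os.getsid): a forked child that calls setsid() becomes the leader of a brand-new session, whose id is the child's own pid, detached from the old controlling terminal.

```python
import os

def demo_setsid():
    """Fork a child that calls setsid(); return (sid_before, sid_after,
    child_pid) as observed inside the child."""
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:                      # child: not a group leader, so
        os.close(r)                   # setsid() is allowed to succeed
        before = os.getsid(0)         # inherited session id
        os.setsid()                   # create a new session
        after = os.getsid(0)          # now equal to our own pid
        os.write(w, f"{before} {after} {os.getpid()}".encode())
        os._exit(0)
    os.close(w)
    fields = os.read(r, 100).split()
    os.close(r)
    os.waitpid(pid, 0)
    return tuple(int(x) for x in fields)

before, after, child = demo_setsid()
print(after == child and after != before)   # True: a fresh session, led by the child
```

This is exactly the sid that ps -eo pid,sess,args shows and that pkill -s matches against.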
What does Session ID mean within the pkill context?
1,405,003,621,000
I have a command whose output I want filtered by the results (plural; multiple lines) of another command. So far I've been sending the results of the first command to a file and filtering the second command with grep -f: command1 > /tmp/output command2 | grep -f /tmp/output rm /tmp/output I'd like to put that in a single command and not need the temp file.
If your shell offers "process substitution", try command2 | grep -f <(command1) If not, you can also pass the list of regexps on the command line using command substitution: command2 | grep -e "$(command1)" That will have a lower limit on the maximum size of that list of regexps, and also means that it won't work if command1's output contains NUL characters (many implementations of grep would choke on them with -f as well anyway).
how to grep the results (plural) of another command
1,405,003,621,000
Given a pdf (in any size), how to enlarge it to A2 size and print it on four A4 pages? (If possible, it would be great if a pdf with multiple pages could be split like this too, so a pdf containing several posters could be printed at once.) (Of course, command line solutions preferred ;) )
Starting from the pdf A2 format object, crop it with either pdftilecut or with pdfposter. See the pdftilecut github page for examples and source code, or consult the man page for pdftilecut once installed. If you decide to use pdfposter to tile your pdf file: $ pdfposter -s4 infile.pdf outfile.pdf Both solutions are cli.
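The -s4 tiling works out exactly because ISO 216 sizes halve along the long edge, so an A2 sheet is precisely a 2×2 grid of A4 sheets. A quick check of the arithmetic (paper sizes in mm, from ISO 216):

```python
A4 = (210, 297)   # mm
A2 = (420, 594)   # mm: double an A4 in both directions

cols = A2[0] // A4[0]
rows = A2[1] // A4[1]
print(cols, "x", rows, "=", cols * rows, "A4 tiles per A2 sheet")
```

That is why no cropping margin is needed for the A2-to-A4 case, aside from whatever unprintable border your printer imposes.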
A2 on four A4 pages? | Convert A4 PDF to "A2 on four A4" PDF?
1,405,003,621,000
I want to make a shell script which copies the past outputs of the GUI terminal emulator (for example, last 20 lines). The motivation is as following: When I execute a procedure which requires long time (for example, downloading a very large file, or converting a very large movie file), I sometimes remember another job, and I have to leave the room. In such a case, I press ctrl+z to stop the procedure. And I type fg; echo $? >> log.txt; date >> log.txt; systemctl poweroff then I leave the room. This way works and is not bad. But it has a disadvantage that I cannot read the outputs of the procedure. I can know only the status ($?). So I want to copy last 20 or 40 lines and save them in log file.
Run your command with nohup, screen or tmux in the first place. Of course this won't help if you already started your process. If that is the case, you can capture output of your command using strace: strace -p<PID> -s9999 -e write 2>&1 | grep -o '".\+[^"]"' (replace <PID> with the PID of your process) If strace cannot attach to the process, you might need to run as root / with sudo or change your ptrace settings to 0 (and be aware of the security implications of it!): echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope You can redirect that output to a file then. There are other options, e.g. gdb or reredirect. See here or here.
Shell scripts: How to copy past outputs of terminal emulator?
1,405,003,621,000
I currently extract text from image using: import png:- | convert png:- -units PixelsPerInch -resample 300 -sharpen 12x6.0 png:- | tesseract -l eng stdin stdout | xsel -ib However, import png:- command to take screenshot is not working well for me. It somehow do not quite suit Linux Mint. Is there any other command which I can use to directly send screenshot to STDOUT for further processing.
I remember having similar issues with scrot. In that case I added a sleep and it was fine! Worked fine for me, but I'm not on Linux Mint. { import png:-; sleep 0.1 ;} | convert png:- -units PixelsPerInch -resample 300 -sharpen 12x6.0 png:- | tesseract -l eng stdin stdout | xsel -ib Also, you could try out scrot with something like: scrot -s aoeu.png -e 'tesseract -l eng $f stdout | xsel -ib; rm -f $f' A version incorporating input in comments and the answer from J. Cravens scrot -s -f -q 100 --silent - | convert - -units PixelsPerInch -resample 300 -sharpen 12x6.0 - | tesseract -l eng stdin stdout | xsel -ib
How to send Screenshot to STDOUT
1,405,003,621,000
Is there a way to blacken parts of a pdf file (i.e. personal data that I don't want to send with the pdf)? Maybe from the command line where I can say make everything black on page 2 from pixel X455 to X470 and Y300 to Y320.
In the end I managed to do it with GIMP. You can open .pdfs in Gimp and edit them. I took a black paintbrush to redact some text. Then I clicked on export and used a .pdf ending and it was exported to PDF (with some quality loss).
Edit PDF On The Command Line
1,405,003,621,000
For MacOS and Ubuntu Server 20, with the command man less I can read the following:

-n or --line-numbers
Suppresses line numbers. The default (to use line numbers) may cause less to run more slowly in some cases, especially with a very large input file. Suppressing line numbers with the -n option will avoid this problem. Using line numbers means: the line number will be displayed in the verbose prompt and in the = command, and the v command will pass the current line number to the editor (see also the discussion of LESSEDIT in PROMPTS below).

-N or --LINE-NUMBERS
Causes a line number to be displayed at the beginning of each line in the display.

The reason for this post is the -n (lowercase) parameter, specifically the "The default (to use line numbers)" part. On both OSes mentioned, if I do:

less /path/to/filename.txt

it displays the data without the line numbers - the contrary of what is indicated above. Of course, if I want to see the line numbers I use:

less -N /path/to/filename.txt

and it works as indicated. Therefore:

less /path/to/filename.txt
less -n /path/to/filename.txt

are practically the same. Am I missing something?

BTW, with less --help:

-n -N .... --line-numbers --LINE-NUMBERS
Don't use line numbers.

Not very clear; it's confusing.

I created this post due to the following valuable post: Is tail -f more efficient than less +F? where the solution indicates:

You can, however, run "less -n +F", which causes "less" to read only the end of the file, at the cost of **not** displaying line numbers
less displays line numbers in two ways: at the start of each line, if -N is used; in the status line at the bottom of the screen, when verbose prompts are enabled (less -M; this will show the number of the first line shown, the last line shown, and the total number of lines). -n disables the latter, as well as the former. In particular, determining the total number of lines can be expensive; that’s why the option is useful. When line numbers are disabled using -n, less shows the position in the file in bytes. My version of less has the following in less --help: -n ........ --line-numbers Don't use line numbers. -N ........ --LINE-NUMBERS Use line numbers. The behaviour of -n is described in the information you show: Using line numbers means: the line number will be displayed in the verbose prompt and in the = command, and the v command will pass the current line number to the editor This doesn’t mention line numbers at the start of each line.
less -n default behavior, is not the same as indicated through man
1,405,003,621,000
I tried How can I list all files which have been installed by an APT package?. But the problem is, for example: when I run sudo apt install libvirt-daemon-system it does not only install one package (in this case libvirt-daemon-system). It also installs the packages mentioned under The following NEW packages will be installed: $ sudo apt install libvirt-daemon-system Reading package lists... Done Building dependency tree Reading state information... Done The following additional packages will be installed: cpu-checker ibverbs-providers ipxe-qemu ipxe-qemu-256k-compat-efi-roms libcacard0 libfdt1 libibverbs1 libiscsi7 libpmem1 librados2 librbd1 librdmacm1 libslirp0 libspice-server1 libusbredirparser1 libvirglrenderer1 libvirt-clients libvirt-daemon libvirt-daemon-driver-qemu libvirt-daemon-driver-storage-rbd libvirt-daemon-system-systemd libvirt0 msr-tools ovmf qemu-block-extra qemu-kvm qemu-system-common qemu-system-data qemu-system-gui qemu-system-x86 qemu-utils seabios Suggested packages: libvirt-daemon-driver-lxc libvirt-daemon-driver-vbox libvirt-daemon-driver-xen libvirt-daemon-driver-storage-gluster libvirt-daemon-driver-storage-zfs numad auditd nfs-common open-iscsi radvd systemtap zfsutils samba vde2 debootstrap The following NEW packages will be installed: cpu-checker ibverbs-providers ipxe-qemu ipxe-qemu-256k-compat-efi-roms libcacard0 libfdt1 libibverbs1 libiscsi7 libpmem1 librados2 librbd1 librdmacm1 libslirp0 libspice-server1 libusbredirparser1 libvirglrenderer1 libvirt-clients libvirt-daemon libvirt-daemon-driver-qemu libvirt-daemon-driver-storage-rbd libvirt-daemon-system libvirt-daemon-system-systemd libvirt0 msr-tools ovmf qemu-block-extra qemu-kvm qemu-system-common qemu-system-data qemu-system-gui qemu-system-x86 qemu-utils seabios 0 upgraded, 33 newly installed, 0 to remove and 1 not upgraded. Need to get 22.4 MB of archives. After this operation, 93.9 MB of additional disk space will be used. Do you want to continue? 
[Y/n]

So, I am not getting the full picture by running

dpkg -L libvirt-daemon-system

One option to get a list of all the files that have been created by the apt install command would be to run

dpkg -L libvirt-daemon-system
dpkg -L cpu-checker
dpkg -L ibverbs-providers
dpkg -L ipxe-qemu
....

But I assume that would be a lengthy process. Another option would be to run the following after installing the packages:

sudo find / -xdev -mtime -5 -type f ! -path '/home/blueray/*' ! -path '/timeshift/*'

Is there any better solution to get a list of all the files that have been created by an apt install command?
After the fact, you can feed the list of installed packages to dpkg -L: dpkg -L libvirt-daemon-system cpu-checker ibverbs-providers ipxe-qemu \ ipxe-qemu-256k-compat-efi-roms libcacard0 libfdt1 libibverbs1 \ libiscsi7 libpmem1 librados2 librbd1 librdmacm1 libslirp0 \ libspice-server1 libusbredirparser1 libvirglrenderer1 \ libvirt-clients libvirt-daemon libvirt-daemon-driver-qemu \ libvirt-daemon-driver-storage-rbd libvirt-daemon-system \ libvirt-daemon-system-systemd libvirt0 msr-tools ovmf \ qemu-block-extra qemu-kvm qemu-system-common qemu-system-data \ qemu-system-gui qemu-system-x86 qemu-utils seabios This won’t be lengthy in most cases (although the result can be, and is in this case). With some preparation, you can list all the files installed by packages (excluding changes made by their maintainer scripts). Before you run apt, store the list of all the files that the packaging system knows about: sort -u /var/lib/dpkg/info/*.list > files-before After you run apt, store it again, in a different file: sort -u /var/lib/dpkg/info/*.list > files-after You can then compare the two files to see what changed, e.g. with comm files-{before,after} or meld files-{before,after} This will also work for package removals, and file removals during package upgrades; your find approach wouldn’t be able to determine what got removed. In scenarios where you’re only interested in files installed by new (or upgraded) packages, you can look at the file lists modified in the last x minutes, e.g. 10: find /var/lib/dpkg/info -name \*.list -mmin -10 -exec sort -u {} + or, if you’re using Zsh: sort -u /var/lib/dpkg/info/*.list(mm-10)
How can I list all files which have been installed by the APT command
1,633,012,780,000
Similar to this question, but instead of adding a new line to the end of the prompt, add a new line to the beginning of the long command (when a command reaches to the right side of the command line window). I believe I saw such behavior in fish as shown in this video. It only adds newline to the line containing the prompt. I'm using zsh (v5.8) on Linux (kernel: v5.10) Edit: How can I implement such behavior in zsh or bash?
In zsh, you could do something like: zle-line-pre-redraw() { (( BUFFERLINES == 1 + ${#BUFFER//[^$'\n']} )) || PREDISPLAY=$'\n' } zle -N zle-line-pre-redraw Which prepends a newline if the number of lines to display to render the buffer is greater than the number of newline characters plus 1 (meaning at least one line overflowed or PREDISPLAY was already set to newline for that buffer).
Insert newline to the beginning of the command if it's too long
1,633,012,780,000
I redirected the standard error of a bash command to a file, and the bash prompt got redirected. But when I printed the contents of the file, it was empty. Where did the bash prompt go? And again, when I redirect the stdout of bash to a file, it redirects the output and not the prompt, as expected, but when printing the contents of the file, there were some characters from the prompt too. How? Value of $PS1 and $PROMPT_COMMAND: Please explain this to me.
In the first one, it looks to me that Bash goes into non-interactive mode if stderr is connected to a file when it starts. In that mode, it unsets PS1, and hence doesn't print the prompt. It also shows in $-: it doesn't contain the i signifying interactive mode.

$ bash 2> file.txt
echo ${PS1-unset}
unset
echo $-
hBs

Well, the man page says it too:

An interactive shell is one started without non-option arguments (unless -s is specified) and without the -c option whose standard input and error are both connected to terminals (as determined by isatty(3)), or one started with the -i option. PS1 is set and $- includes i if bash is interactive, allowing a shell script or a startup file to test this state.
Weird output when redirecting bash prompt to a file
1,633,012,780,000
I want to change the key bindings in tmux copy mode. This is my tmux config: set-window-option -g mode-keys vi So I use the vi keybindings for copy mode. But since I use the Colemak keyboard layout, which has the keys n,e,i,o instead of j,k,l,o, I want to bind the following: bind n down bind e up bind h left bind i right I know how binding keys works, but I don't know the key commands for down, up, left and right.
See tmux list-keys: bind-key -T copy-mode Up send-keys -X cursor-up bind-key -T copy-mode Down send-keys -X cursor-down bind-key -T copy-mode Left send-keys -X cursor-left bind-key -T copy-mode Right send-keys -X cursor-right So in your case you can do: bind-key -T copy-mode-vi n send-keys Down bind-key -T copy-mode-vi e send-keys Up bind-key -T copy-mode-vi h send-keys Left bind-key -T copy-mode-vi i send-keys Right
Change key bindings in tmux copy mode
1,633,012,780,000
I'm fairly new to Linux, and until now always thought sudo <command> was the same as executing <command> as root. I recently played around a bit with the ls command, and noticed a slight but confusing difference. When executing sudo ls -lap (in the root directory), I get the following output: vs. when I execute ls -lap as root (or as a regular user without sudo): Apart from the obvious but unimportant color difference, if you look closer, you see that the -p option (showing a / after directories) didn't work on links when executing the command with sudo. Is there an actual difference between the two? Or is that a bug? And either way, doesn't that mean that both commands are processed differently?
Your ls is an alias, and sudo doesn’t know about it. When you switch users to become root, your interactive shell runs its startup scripts and sets up the relevant aliases. Try running alias ls as root, not via sudo, to see the corresponding command. The difference in the output for symbolic links seems to be a side-effect of adding colours to the output: ls -lp --color=tty / v. ls -lp --color=never / will show the same difference.
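A common workaround is to give sudo itself an alias ending in a space: POSIX shells check the word after an alias for alias expansion when the alias value ends in a blank. A sketch (the ls alias here is illustrative, and the expansion only takes effect where aliases are expanded, i.e. typically interactive shells):

```shell
# The user's own alias, normally lost when running "sudo ls":
alias ls='ls --color=auto -p'

# Trailing space => the word after "sudo" is also checked for
# alias expansion, so "sudo ls /" becomes
# "sudo ls --color=auto -p /".
alias sudo='sudo '

alias sudo    # show the definition
```

This only propagates your own aliases; root's shell startup files are still not read by sudo.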
What is the difference between executing a command with sudo vs doing so as root user?
1,633,012,780,000
I can get system environment variables by using the printenv command, but I need the variable data separated: how can I get lists of local (session-wide), user (user-wide) and system (system-wide, global) environment variables separately? OS: Debian-like Linux (x64), kernel: 4.19.
I think you have a misconception about how Linux environment variables work. Environment variables for a running shell are only defined for that instance of the shell that is running. They have no meaning or relevance outside of that. If you change the $PATH variable in a shell you are using, that change will only have an effect on that instance of the shell, not on others that you may have running. When a shell starts up and the user logs in, environment variables can be set by various shell scripts, which may define default environment variables on a system-wide or per-user basis. For bash, these are scripts like /etc/profile (system wide) or ~/.bash_profile or ~/.bashrc (specific to the user). As far as I know, there is no way to determine from a running shell where a particular variable was set - you would need to check those files. Another concept you should be aware of is of exporting variables. The export command in bash can be used to flag which variables should be exported to new sub-shells that the running shell may create. Also, be aware that environment variables are specific to specific shell programs, they are not global for the Linux system. So, variables for bash (which I have been using as an example) may be different to those used in csh (although there may be some similarities) and/or they may be set to different defaults.
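The effect of export described above is easy to demonstrate directly; a small sketch (variable names are made up):

```shell
demo_local=only-here              # shell variable, not exported
export demo_shared=for-children   # copied into children's environment

# A child process (here a new sh) only sees the exported one:
in_child_local=$(sh -c 'echo "${demo_local-missing}"')
in_child_shared=$(sh -c 'echo "${demo_shared-missing}"')
echo "demo_local in child:  $in_child_local"    # missing
echo "demo_shared in child: $in_child_shared"   # for-children
```

This is why changing a variable in one terminal never affects another already-running shell: the value only travels downward, from parent to child, at process creation.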
Linux - Get List of Local-User-System Environment Variables Separately
1,633,012,780,000
I'm very new to scripting and makefiles, and am curious about the passing of command line arguments. So, let's say I have a makefile which compiles and runs something in C, for example CompileAndRun: CompileFile RunFile CompileFile: (Compiling code) RunFile: ./Program I would call this with make CompileAndRun What I want to have happen is, if I type make CompileAndRun Argument Then it would compile, and then do ./Program Argument How would I go about this?
The idiomatic way to do this is to pass a variable which you can then refer to in the Makefile, for example: CompileAndRun: CompileFile RunFile CompileFile: (Compiling code) RunFile: ./Program $(ARGUMENTS) You can now make RunFile to run it without any arguments or make ARGUMENTS="foo bar" RunFile to run it with two arguments foo and bar. Beware that you can't pass arguments containing whitespace characters this way.
Makefile - Providing Optional Arguments
1,633,012,780,000
Is there a tool that can be used to monitor the traffic a web server is processing in real-time from the command line? I'm looking for a cli ncurses tool like nload, but one that can show the requests per second going to a web server like nginx or apache (or a cache like varnish) via mod_status or stub_status.
It doesn't look like nload, but you can get a ton of useful information from your web server's access logs (NCSA, W3C, squid, or any user-defined custom log format) in an ncurses-based tool called goaccess. In Debian, run: sudo apt-get install goaccess goaccess /path/to/access.log -c It will look something like this
cli real-time monitoring of web server traffic per second over time (ncurses)
1,633,012,780,000
I am not able to find a way to append a line in a yaml file after an exact match of a string, while ignoring similar strings that have other values on the line. There are some examples, but none match my case. I have a yaml file and I am automating its configuration: instead of adding values manually by going to a line number, I am trying to find a string and then add the value after it using the sed command. Here is the example My current file a.yaml rules: - name: Block PH by GeoIP country script: ./rules/Block PH by GeoIP country.js stage: login_success enabled: false ........ ........ ........ - name: Preview-1 API (Test Application) allowed_clients: [] app_type: non_interactive callbacks: [] <<<<< ------- see string "callbacks:" with brackets client_aliases: [] ........ ........ ........ allowed_logout_urls: - 'http://local.example.com:/login' allowed_origins: [] callbacks: <<<<< ------- see string "callbacks:" where I want to append - 'http://local.example.com:/callback' Command sed -i "/callbacks:/a \ \ \ \ \ \ - 'https://d1.example.com/callback'" > a.yaml Results of the Command My Updated File a.yaml rules: - name: Block PH by GeoIP country script: ./rules/Block PH by GeoIP country.js stage: login_success enabled: false ........ ........ ........ - name: Preview-1 API (Test Application) allowed_clients: [] app_type: non_interactive callbacks: [] - 'https://d1.example.com/callback' <<<<< ------- see problem here client_aliases: [] ........ ........ ........ allowed_logout_urls: - 'http://local.example.com:/login' allowed_origins: [] callbacks: - 'https://d1.example.com/callback' <<<<< ------- I appended - 'http://local.example.com:/callback' What I want rules: - name: Block PH by GeoIP country script: ./rules/Block PH by GeoIP country.js stage: login_success enabled: false ........ ........ ........
- name: Preview-1 API (Test Application) allowed_clients: [] app_type: non_interactive callbacks: [] <<<<< ------- I don't want to append here client_aliases: [] ........ ........ ........ allowed_logout_urls: - 'http://local.example.com:/login' allowed_origins: [] callbacks: - 'https://d1.example.com/callback' <<<<< ------- I want my addition only after this - 'http://local.example.com:/callback' I don't want to append my line after every matching string; in my example above there are two matching lines, callbacks: [] and callbacks:, so I am trying to append after callbacks: only. Scenario: I want to use the sed command in my bash script, where I will pass a variable value for this line, like - 'https://$1.example.com/callback', and the command would be like this sed -i "/callbacks:/a \ \ \ \ \ \ - 'https://$1.example.com/callback'" > a.yaml This way I can reuse the script by passing any value and append a line, so the result will look as below callbacks: - 'https://z.example.com/callback' - 'https://b.example.com/callback' - 'https://d1.example.com/callback' - 'http://local.example.com:/callback'
You could use $ to match the end of the line: sed -i "/callbacks:$/a\ \ - 'https://d1.example.com/callback'" a.yaml
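The effect of the $ anchor is easy to check on a small sample (file names here are hypothetical, and depending on your sed you may need to escape leading spaces in the appended text with backslashes to keep the indentation):

```shell
tmp=$(mktemp -d) && cd "$tmp"

# Two lines: one where "callbacks:" is followed by a value,
# one where it ends the line.
cat > sample.yaml <<'EOF'
  callbacks: []
  callbacks:
EOF

# Anchoring with $ means only the line that *ends* with
# "callbacks:" gets the new entry:
sed "/callbacks:$/a\\
- 'https://d1.example.com/callback'" sample.yaml > out.yaml
cat out.yaml
```

Without the $ the pattern would also match `callbacks: []`, which is exactly the problem described in the question.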
How to append a line using sed in Linux yaml file after matching exact string
1,633,012,780,000
For example: curl -s 'https://api.github.com/users/lambda' |\ jq -r '.name' returns the value of the "name" JSON attribute. If this attribute is empty or does not exist, the command returns null or '': curl -s 'https://api.github.com/users/lambda' |\ jq -r '.blabla' I need to run a command like python main.py when the command above returns a non-null, non-empty value. Like: curl -s 'https://api.github.com/users/lambda' |\ jq -r '.name' | ..... python main.py I plan to set this up using cron and parse the external JSON as a flag to run the local script.
Since the command is successful, regardless of the output, you'll have to save it in a variable and pass it to your script if it isn't empty. It looks like you only get empty when you request data for a known field that has no value (e.g. .gravatar_id) and you get null if you pass an unknown field (e.g. .blabla). To accommodate both, you can do: var=$(curl -s 'https://api.github.com/users/lambda' | jq -r '.name') [ "$var" != "null" -a -n "$var" ] && printf '%s\n' "$var" | python main.py
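The guard itself can be exercised without the network call; a sketch (function name and sample values are made up):

```shell
# run_if_set VALUE CMD... : run CMD only when VALUE is neither
# empty nor the literal string "null" that jq prints for
# missing fields.
run_if_set() {
    val=$1; shift
    if [ -n "$val" ] && [ "$val" != "null" ]; then
        "$@"
    fi
}

run_if_set "null"   echo "never printed"   # skipped (missing field)
run_if_set ""       echo "never printed"   # skipped (empty field)
run_if_set "Alyssa" echo "got a name"      # runs
```

In the cron job, $val would be the captured jq output and the command would be the python script.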
Execute command when command-line pipe result is not null or not empty
1,633,012,780,000
I have several folders within a parent folder, which all have the structure below, and am struggling to create a specific loop. parentfolder/folder01/subfolder/map.png parentfolder/folder02/subfolder/map.png parentfolder/folder03/subfolder/map.png parentfolder/folder04/subfolder/map.png etc... so each subfolder contains a file called map.png (i.e. the same filename in all subfolders, but they are different files). I would like to copy each map.png file and place it into the overall parentfolder, but at the same time I want the copy to be renamed based on the folder above 'subfolder'. So for example, I want to copy map.png from parentfolder/folder01/subfolder to parentfolder whilst renaming it folder01.png (and for this then to be done accordingly for all others, using a loop). I have tried something along these lines but am obviously struggling to get it to do what I want: for i in parentfolder/*; do cd $i cd subfolder cp map.png ../../"$i".png cd - done I am still a beginner and very new to this, so I would appreciate any help. Thanks so much.
You may try something like the following for loop: for d in parentfolder/* ; do cp "$d/subfolder/map.png" "$d.png" done You should run it from the directory that contains parentfolder.
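A quick way to convince yourself on throwaway data (the layout below mirrors the question; names are otherwise arbitrary):

```shell
tmp=$(mktemp -d) && cd "$tmp"

# Build a miniature copy of the layout:
mkdir -p parentfolder/folder01/subfolder parentfolder/folder02/subfolder
echo map1 > parentfolder/folder01/subfolder/map.png
echo map2 > parentfolder/folder02/subfolder/map.png

for d in parentfolder/* ; do
    # Skip entries that don't contain the expected file:
    [ -f "$d/subfolder/map.png" ] || continue
    # "$d" is e.g. parentfolder/folder01, so "$d.png" lands in
    # parentfolder/ and carries the folder's name.
    cp "$d/subfolder/map.png" "$d.png"
done

ls parentfolder    # folder01.png and folder02.png now sit next to the folders
```

The key point versus the cd-based attempt in the question: nothing changes directory, so the relative paths stay valid on every iteration.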
cp command based on parent directory
1,633,012,780,000
The BIND-related tools (host, dig, nslookup) don’t seem to be capable of encoding their output as JSON, judging from the man pages. I’m looking for a CLI tool that is, preferably one that does not depend on an interpreter or language runtime. (DoH is not an option as most DNS servers don’t support it.)
ogham/dog: A command-line DNS client has JSON output, as described in the section Output options: -J, --json Display the output as JSON I haven't tried it myself yet. Possible caveats: at the time of writing (2021-06-14), there has been only the initial release v0.1.0 (2020-11-07), and there is no widespread distribution support yet. a statically compiled binary for x86-64 is available, but unfortunately requires GLIBC_2.32. This limits it to very recent distributions; it won't run e.g. on Ubuntu 20.04 LTS, which has only GLIBC_2.31.
DNS lookup with JSON output
1,633,012,780,000
I'm trying to search through the files and directories of a specific folder. For example, I was looking in /usr/bin for my python binaries. To do so, I used ls | grep python. When I did this, I was able to find, for example, python3, python3-config, etc. While this works fine, I know that there are easier ways to do that: I shouldn't have to pipe to grep. But when I tried find . -name python, as per my understanding of the find man page, it yielded no results. I know that grep searches through a file. What is the correct way to search through a given directory?
You can do several things using "globbing". In a nutshell, the shell tries to match: ? to any single character (unless it is "protected" by single or double quotes); * to any string of characters (even an empty one), unless protected by single or double quotes; [abc] to either 'a', 'b' or 'c'; [^def] to any single character other than 'd', 'e' or 'f'. So to match anything with python in it under /usr/bin: ls -d /usr/bin/*python* # just looks into that directory or with find, you can also use globbing. However, you need to surround the pattern in quotes so that the shell does not expand it, but instead gives it to the find command with the '*' intact: find /usr/bin -name '*python*' # could descend into subfolders if present
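Both forms are easy to try out in a scratch directory (file names below are invented to mimic /usr/bin):

```shell
tmp=$(mktemp -d) && cd "$tmp"
touch python3 python3-config pydoc3 perl

# The shell expands *python* to every name containing "python":
matches=$(echo *python*)
echo "$matches"                 # python3 python3-config

# find needs the pattern quoted so the shell doesn't expand it
# first; unlike ls, it also descends into subdirectories:
find . -name '*python*'
```

Note how pydoc3 is not matched: globs match literal substrings, not "related" names, which is the same behaviour you'd get from grep with a fixed string.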
Alternative to ls | grep [duplicate]
1,633,012,780,000
I have a data-set of Unix commands as typed into a terminal, and I want to use them to compare user behaviours. Different users interact with different directories and files (they are all on separate computers). I want to look at which users use the same commands, with the same arguments/parameters (but I am happy to have different filenames/directories as arguments). Logically, to achieve this I should remove the file names and directories from the data-set, then look for similarities, but this is my problem. How do I identify filenames and directories as command-line arguments/parameters? E.g. given ls -F thesis how can I identify thesis as a file/directory? (I understand the semantics of ls in this case, but I'm looking more broadly, where I will not know the semantics of the command issued.)
You can't. You need to know the semantics of every command executed. Any argument given to a command on the command line is just passed to the program, which is then free to interpret it however it likes. The program doesn't even have to be consistent in how it interprets arguments (though it is probably not very usable if it does this). You also need to consider that some users might have addressed non-existing files - whether as a typo (which might also occur in the program name), an attempt to see if a certain file exists, to mislead you, or for some other reason. I think I've even seen programs that behaved differently based on whether a given argument was the name of an existing file or not, but did something in both cases. The tab-completion data that ctrl-alt-delor suggests using is basically a way of encoding the semantics of a lot of (frequently used) commands, but it might (I haven't spent much time looking at it) depend on what shell the user had, and might have changed since. So while it might provide a way forward, it's not without problems.
How can I identify file names and directories from logs of Unix commands?
1,633,012,780,000
Using a live Linux distribution, we can install a package; it is not a persistent install, and the package will be removed on the next boot. On a fully installed system, is there any command-line tool or apt configuration allowing a non-persistent install?
As far as the apt side of things goes, I’m not aware of anything; in fact apt and dpkg go out of their way to ensure that the system remains in a consistent state, and that as far as possible, changes made to the package selection are “permanent” (at least until the next apt or dpkg invocation). There is something you could do to get you part of the way: install the package, then mark it as automatically installed (apt-mark auto). That way, if nothing else depends (even weakly) on the package, it will be removed the next time you run apt autoremove. I don’t know about “any command line tool”, but who knows, there might well be something out there. One could consider that debootstrap in a tmpfs-based chroot counts as a temporary installation, but I don’t think that’s really what you’re looking for!
Is there any way to install a transient package?
1,633,012,780,000
I need to rename a file like a link, but if I try to rename it with mv file.gif http://link/123/file.gif it won't work. I've tried to escape the slash / with a backslash \, but with no success. The error that comes up tells me that it didn't find the directory, because it sees the slash as a level of the directory tree.
/ is the character that delimit components in a Unix file path. That character cannot occur in a directory entry's name. http://link/123/file.gif is the file.gif file inside a 123 directory itself inside the link directory itself inside the http: directory, itself in the current working directory. To rename it to that file at that path, you'd need to create the directories first: mkdir -p http:/link/123 && mv file.gif http://link/123/file.gif To rename the file.gif entry for that file in the current directory to that URL but with the /s replaced with \s, in Bourne/csh/rc-like shells: mv file.gif 'http:\\link\123\file.gif' In the fish shell, you'd still need to escape the \ inside single-quotes: mv file.gif 'http:\\\\link\\123\\file.gif' Another option could be to use a character that looks like / (U+002F solidus) such as ⁄ (U+2044, the fraction slash): mv file.gif 'http:⁄⁄link⁄123⁄file.gif'
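The first two options can be seen side by side in a scratch directory (names are the hypothetical ones from the question):

```shell
tmp=$(mktemp -d) && cd "$tmp"
touch file.gif

# Option 1: the slashes stay real path separators, so the
# directories must exist first ("//" simply collapses to "/"):
mkdir -p http:/link/123
cp file.gif http://link/123/file.gif

# Option 2: one file whose *name* merely contains backslashes;
# single quotes keep the shell from interpreting them:
cp file.gif 'http:\\link\123\file.gif'

ls
```

ls shows the second copy as a single oddly-named entry in the current directory, while the first lives three directories deep.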
how to rename a file like a weblink (http://...)
1,633,012,780,000
How can I separate the following high-resolution single-page PDF into multiple pages (page count is unimportant), so that each page is the size of standard printer paper (8.5"x11"). The map should be zoomed to 200% before it's separated, so that I can see the small details. At 100% resolution lots of details are missed. https://parks.ny.gov/documents/parks/HarrimanTrailMap.pdf I've tried some of the solutions here, even though that question doesn't pertain to this one, but had no success.
The requirement can be thought of as tile-cropping the original page. I think the following command does what you want: convert -density 288 HarrimanTrailMap.pdf -crop 20% +repage HarrimanTrailMap-tiled.pdf You'll need imagemagick and ghostscript installed. Also, you may encounter an authorisation error when converting PDF files; have a look at https://stackoverflow.com/a/52863413/1921546 to resolve that. If you feel that the resulting tiled PDF has too little resolution, increase the -density value. You can specify the -crop parameter by considering the number of tiles you want to split the original page into. Here, the original page is split into 5x5 tiles, so it is cropped at 1/5=20% horizontally and vertically. For more information about the command see the following link https://legacy.imagemagick.org/Usage/crop/#crop_tile
Separate high-resolution single-page PDF into multiple pages
1,633,012,780,000
I'm using git-annex in version 7.20190129, as provided on my Debian Stable (Buster) machine, to keep big files under version control and have them distributed over multiple machines and drives. This works well as long as I have at least one "real" git-annex repository (not a special remote). What I'd be interested in is using just one git annex repository on my local machine and additionally special remotes (e.g. the bup special remote or the rsync special remote or, as soon as it lands on Debian Stable, the borg special remote). My workflow is as follows: cd /path/to/my/local/folder git init git annex init git annex add myawesomefile git commit -m 'this works on my local repository' git annex initremote mybupbackuprepo type=bup encryption=none buprepo=/path/to/my/special/remote/location git annex sync git annex copy files --to mybupbackuprepo Then I'm able to use my bup special remote as I would use an additional repository. But now I'd like to access my bup repo without using the first, local repo (e.g. in case my local machine breaks down). As far as I understood (following the official guide), the following should work: cd /path/to/new/folder/to/extract/the/backup git init git annex init git annex initremote mybupbackuprepo type=bup encryption=none buprepo=/path/to/my/special/remote git annex enableremote mybupbackuprepo git annex sync But I'm still not able to see any files (or even some broken symlinks) and, obviously, also not able to get any of my data when using git annex sync --content or git annex get myawesomefile. Any ideas? What am I missing?
A special remote just stores the file data, not the git repository. Think of it as a library's cellar: a library may build an additional room to store its books there, but if you want to rebuild a library from the cellar, you don't have any index, don't know which book is in which catalogue, and you don't have a librarian that can help you find your books. So in practice, you will need another git repository to replicate the master branch, which contains all the information about what goes where. In cases like yours (where you host that storage yourself), you don't need any special remote then -- the regular (typically but not necessarily bare) git repository you use as your origin can also store the large files, and can be used by a later checkout just as $ git clone ssh://host/path/repo $ cd repo $ git annex init $ git annex get --from origin (where the --from origin is more for illustration; if you leave it off, git annex will know what to do as well). In many cases you don't even need a special remote then; reasons to use a special remote are: You want to split the (small but often needed) git access from data access (large amounts of data), and your data hoster gives you just rsync (or webdav or s3 or whichever protocol) access, not full shell access Your git hoster gives you just bare git, and does not have git-annex installed (e.g. GitLab) -- then you need an extra data hoster You need special properties of the backend (like deduplication across repositories, which only works as long as you don't use encryption) In most cases (like, it seems, yours), just using a regular git remote and annex-copying data there is just as good, less of a hassle to set up, and most importantly you need one anyway to recover your data.
accessing git-annex special remote from new repository
1,633,012,780,000
I have a directory structure like the following: . ├── ParentDirectory │   ├── ChildDirectory1 │   │   ├── 01-File.txt │   │   ├── 02-File.txt │   │   ├── 03-File.adoc │   │   ├── 04-File.md │   │   └── 05-File.txt │   ├── ChildDirectory2 │   │   ├── 01-File.txt │   │   ├── 02-File.txt │   │   ├── 03-File.adoc │   │   ├── 04-File.txt │   │   ├── 05-File.txt │   │   └── 06-File.md │   ├── ChildDirectory3 │   │   ├── 01-File.txt │   │   ├── 02-File.txt │   │   ├── 03-File.adoc │   │   ├── 04-File.md │   │   ├── 05-File.md │   │   └── 06-File.txt You may have noticed that some files and directories have leading whitespace. What you may not have noticed is that some files and directories have trailing whitespace as well. For trailing whitespace I tried: % find . -exec rename 's/ *$//' {} + I have to run this command multiple times. On the first run it renames the parent directory; on the second run it renames the child. My directory structure actually goes much deeper, so running a command multiple times is not a good solution. For leading whitespace find . -exec rename 's/^ *//' {} + does not work at all. How can I remove all leading and trailing whitespace from both file and directory names recursively?
With zsh: autoload -Uz zmv # best in ~/.zshrc zmv -n '(**/)(*)' '$1${${2##[[:space:]]#}%%[[:space:]]#}' (remove -n (dry-run) if happy). Add a (#qD) at the end of the pattern, if you also want to process hidden files and files in hidden directories. With rename, that would be something like: find . -depth ! -name . -exec rename -n ' my ($dir, $base) = m{(.*)/(.*)}s; $base =~ s/^\s*//; $base =~ s/\s*$//; $_ = "$dir/$base";' {} + Though note that \s (contrary to zsh's [[:space:]]) only matches on ASCII whitespace, not the other whitespace characters in your locale. rename also doesn't have any of zmv's safeguards to check for conflicts before starting the renaming. In any case, note that on several systems, including GNU ones, the non-breaking-space character (U+00A0) is not considered whitespace on the ground that it's not meant to delimit words, so it won't be removed, even with the zmv approach. If you have such characters, you could strip them with: zmv -n '(**/)(*)' $'$1${${2##[[:space:]\ua0]#}%%[[:space:]\ua0]#}'
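Where neither zsh nor perl rename is available, the same trimming can be scripted with POSIX tools. A hedged sketch: it processes depth-first so that entries are renamed before their parent directories, but it breaks on names containing newlines and on names that are entirely whitespace, and it has none of zmv's conflict checks.

```shell
tmp=$(mktemp -d) && cd "$tmp"
# Sample tree with leading/trailing blanks in every component:
mkdir -p ' parent / child '
touch ' parent / child /  01-File.txt '

find . -depth ! -name . | while IFS= read -r p; do
    dir=${p%/*}
    base=${p##*/}
    # Strip leading and trailing blanks from the basename only,
    # leaving the (not yet renamed) parent path untouched:
    trimmed=$(printf '%s' "$base" | sed 's/^[[:space:]]*//; s/[[:space:]]*$//')
    [ "$trimmed" = "$base" ] && continue
    mv -- "$p" "$dir/$trimmed"
done
```

Because -depth emits a directory's contents before the directory itself, one pass suffices, which is exactly what the repeated `rename` invocations in the question were missing.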
remove all leading and trailing whitespace from both file and directory names recursively
1,633,012,780,000
It is known that the character (!) is reserved. The curious thing is that it works for a regular user, but does not work as root. I tried some approaches to solve my problem, but without success. I would like to know if it is possible to escape these characters. The command I'm trying to execute is simple, a move ignoring some specific files: mkdir --parents www/; mv !(init.sh|environment.sh|docs|www) $_
This has nothing to do with escaping the ! character. The ksh shell implements some extended globbing patterns, and !(...|...|...) is one of these. The pattern matches anything not matched by any of the patterns in the parenthesis. Some shells are able to use ksh globbing patterns. For example, setting the extglob shell option in the bash shell (shopt -s extglob) enables these, as does setting the KSH_GLOB shell option in the zsh shell (setopt KSH_GLOB). The shell that your root user is using obviously does not enable the use of ksh globbing patterns, and it is unclear whether it's even able to (the dash shell, for example, don't have this ability). Your ordinary user, on the other hand, seems to have enabled these patterns by default, either by virtue of running the actual ksh shell, by explicitly enabling the correct shell option in a shell initialization file, or by using some third-party extension that quietly enables the shell option by default.
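Whether the pattern works can be checked explicitly by running it in a bash with extglob turned on (this assumes bash is installed; file names below follow the question):

```shell
tmp=$(mktemp -d) && cd "$tmp"
touch init.sh environment.sh keep-me.txt www.conf

# -O extglob enables the option before the command line is parsed,
# which matters because extended patterns are recognised at parse
# time.  !(a|b) then matches every name except a and b.
bash -O extglob -c 'echo !(init.sh|environment.sh)'
```

In a script you would instead put shopt -s extglob on its own line before any command that uses such a pattern.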
How to escape reserved character on command line or shell script in linux?