date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,592,644,129,000 |
I think this would best be done with AWK, but I'm not sure. It's been stumping me all day. I have a text file with * delimiters between the fields on each line. I need to search for lines beginning with L11*1Z and save the value starting with 1Z up to but not including the next * (i.e. the 2nd field on the line) to a variable or buffer; in the first case this would be 1ZXDF430. Then I need to go to the next line that begins with BGN and replace the string QVD (i.e. the 3rd field on that line) with the value of the variable. I need to do this for every L11*1Z and following BGN line found in the file. It would be good to write the result to a new file rather than overwrite the input file, if possible.
Input file
xxx
L11*123456*CR
yyy
L11*1ZXDF430*2I*04
zzz
BGN*00*QVD*123456
fff
L11*768907*CR
L11*12345678*CR
xxx
L11*1ZXDF499*2I*04
zzz
BGN*00*QVD*123456
xxx
Resulting output file
xxx
L11*123456*CR
yyy
L11*1ZXDF430*2I*04
zzz
BGN*00*1ZXDF430*123456
fff
L11*768907*CR
L11*12345678*CR
xxx
L11*1ZXDF499*2I*04
zzz
BGN*00*1ZXDF499*123456
xxx
|
Assuming there's a BGN after each L11*1Z, you should be able to use
$ awk 'BEGIN{OFS=FS="*"} /^L11\*1Z/ {x = $2} /^BGN/ {$3 = x} 1' file
xxx
L11*123456*CR
yyy
L11*1ZXDF430*2I*04
zzz
BGN*00*1ZXDF430*123456
fff
L11*768907*CR
L11*12345678*CR
xxx
L11*1ZXDF499*2I*04
zzz
BGN*00*1ZXDF499*123456
xxx
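If the result should go to a new file, awk's output can simply be redirected; a small sketch (the file names input.edi and output.edi are made up for illustration):

```shell
# A few lines of the question's sample data, written to a hypothetical input file:
printf '%s\n' 'xxx' 'L11*1ZXDF430*2I*04' 'zzz' 'BGN*00*QVD*123456' > input.edi

# awk never modifies its input, so redirecting stdout to a second file
# leaves the original untouched:
awk 'BEGIN{OFS=FS="*"} /^L11\*1Z/ {x = $2} /^BGN/ {$3 = x} 1' input.edi > output.edi
```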
| Text file: find string, save string field to var, find 2nd string, replace field with var, repeat to end |
1,592,644,129,000 |
I have files in the following manner
ar01440_1775_17_vc00_00.png
ar01440_1775_17_vc00_01.png
ar01440_1775_17_vc00_02.png
ar01440_1775_17_vc00_03.png
ar01440_1775_17_vc00_04.png
ar01440_1775_17_vc00_05.png
ar01440_1775_17_vc00_06.png
ar01440_1775_17_vc00_07.png
ar01440_1775_17_vc00_08.png
ar01440_1775_17_vc00_09.png
ar01440_1775_17_vc00_010.png
ar01440_1775_17_vc00_011.png
ar01440_1775_17_vc00_012.png
ar01440_1775_17_vc00_013.png
ar01440_1775_17_vc00_014.png
ar01440_1775_17_vc00_015.png
ar01440_1775_17_vc00_016.png
ar01440_1775_17_vc00_017.png
ar01440_1775_17_vc00_018.png
ar01440_1775_17_vc00_019.png
I need to sort them into this order.
Desired output:
ar01440_1775_17_vc00_00.png
ar01440_1775_17_vc00_01.png
ar01440_1775_17_vc00_010.png
ar01440_1775_17_vc00_011.png
ar01440_1775_17_vc00_012.png
ar01440_1775_17_vc00_013.png
ar01440_1775_17_vc00_014.png
ar01440_1775_17_vc00_015.png
ar01440_1775_17_vc00_016.png
ar01440_1775_17_vc00_017.png
ar01440_1775_17_vc00_018.png
ar01440_1775_17_vc00_019.png
ar01440_1775_17_vc00_02.png
ar01440_1775_17_vc00_03.png
ar01440_1775_17_vc00_04.png
ar01440_1775_17_vc00_05.png
ar01440_1775_17_vc00_06.png
ar01440_1775_17_vc00_07.png
ar01440_1775_17_vc00_08.png
ar01440_1775_17_vc00_09.png
|
Using the 'en_US.UTF-8' locale caused the '010' to appear before '01' when sorting. Forcing the C locale for the sort works here:
$ LC_ALL=C ls -1
ar01440_1775_17_vc00_00.png
ar01440_1775_17_vc00_01.png
ar01440_1775_17_vc00_010.png
ar01440_1775_17_vc00_011.png
ar01440_1775_17_vc00_012.png
ar01440_1775_17_vc00_013.png
ar01440_1775_17_vc00_014.png
ar01440_1775_17_vc00_015.png
ar01440_1775_17_vc00_016.png
ar01440_1775_17_vc00_017.png
ar01440_1775_17_vc00_018.png
ar01440_1775_17_vc00_019.png
ar01440_1775_17_vc00_02.png
ar01440_1775_17_vc00_03.png
ar01440_1775_17_vc00_04.png
ar01440_1775_17_vc00_05.png
ar01440_1775_17_vc00_06.png
ar01440_1775_17_vc00_07.png
ar01440_1775_17_vc00_08.png
ar01440_1775_17_vc00_09.png
The C locale is explained here: What does “LC_ALL=C” do?
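The same idea works when the names are stored in a file and sorted with sort; a quick sketch with three of the names from the question (the file name filenames.txt is made up):

```shell
# Three of the names, deliberately out of order, in a hypothetical list file:
printf '%s\n' 'ar01440_1775_17_vc00_02.png' \
              'ar01440_1775_17_vc00_010.png' \
              'ar01440_1775_17_vc00_01.png' > filenames.txt

# Byte-wise (C locale) comparison: '.' (0x2E) sorts before '0' (0x30),
# so 01.png lands before 010.png.
LC_ALL=C sort filenames.txt
# ar01440_1775_17_vc00_01.png
# ar01440_1775_17_vc00_010.png
# ar01440_1775_17_vc00_02.png
```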
| How to sort files in unix |
1,592,644,129,000 |
I wanted to move a large number of files out of many directories that sit under the parent directory in which I'm positioned.
I used the following command with backticks:
mv -t directory1/directory2/directory3/ `ls -R | grep _2_3`
So I wanted to move the files found by the command in backticks, located recursively under my current (parent) directory, to the destination directory directory3.
Is there any solution to do this with the current command? And what does this error mean exactly?
|
You will notice that ls -R outputs filenames. That is, it does not output pathnames. Therefore, if a file that contains the string _2_3 in its name is found in one of your subdirectories, there is no information about where that file is found in the output of ls -R (on the same line as the filename). This makes your command fail (the filename is not found in the current directory). It would also fail for any file that contains a space, tab or newline in its name, and would potentially also produce strange results if any filename contained filename globbing characters.
Instead, assuming you'd want to move files whose filenames end in _2_3 to a directory /directory1/directory2/directory3 (and that this directory is not a subdirectory of the current directory), then
find . -type f -name '*_2_3' -exec mv -t /directory1/directory2/directory3 {} +
would do that. This would find the pathnames of all regular files (not directories or named pipes, or symbolic links etc.) whose names end with _2_3 anywhere in or under the current directory, and would execute mv -t /directory1/directory2/directory3 with as many of these pathnames as possible in batches.
In the bash shell, you could possibly also do
shopt -s globstar
mv -t /directory1/directory2/directory3 -- **/*_2_3
unless the pattern expands to many thousands of names. The globstar shell option in bash enables the ** globbing pattern. It works like * but will match across / in pathnames. It would therefore find all names matching *_2_3 anywhere in or below the current directory. Note that this command does not care what type of name is matched, and might match directory names too, for example (but so would your ls -R approach).
In the zsh shell, you could be more precise with the matching:
mv -t /directory1/directory2/directory3 -- **/*_2_3(.)
The (.) modifies the behaviour of the preceding pattern to only match regular files. The ** pattern is enabled by default in zsh.
If you wish to find files whose names contain _2_3, then simply change the *_2_3 bit of the filename pattern in the above commands to *_2_3*.
| mv: cannot stat 'filename_1_2_3': No such file or directory |
1,592,644,129,000 |
I have a file from which I want to extract and rearrange certain data. The old file below contains the raw input:
reference:cve,2017-8962
sid:45885
reference:cve,2016-10033
reference:cve,2016-10034
reference:cve,2016-10045
reference:cve,2016-10074
sid:45917
reference:cve,2017-8046
sid:45976
reference:cve,2018-6577
reference:cve,2018-6578
sid:46062
and the file below is the new file containing the required output:
reference:cve,2017-8962
sid:45885
reference:cve,2016-10033
sid:45917
reference:cve,2016-10034
sid:45917
reference:cve,2016-10045
sid:45917
reference:cve,2016-10074
sid:45917
reference:cve,2017-8046
sid:45976
reference:cve,2018-6577
sid:46062
reference:cve,2018-6578
sid:46062
Explanation: for example, sid:45917 has four references (reference:cve,2016-10033, reference:cve,2016-10034, reference:cve,2016-10045 and reference:cve,2016-10074). We need to split out each reference and append its sid directly below it, one pair after the other (note: the references are always followed by their sid). There are repetitive blocks like this, so when there are multiple references we need to append them all, in order, in the new file.
|
As you seem to use postponed sid:s (multiple reference:s followed by their single sid: => pairs of reference: and sid:), here are two solutions.
Solution 1 : reversing
Simply use the tac command (it's cat in reverse order) to reverse both the input and the output: tac input | awk '...' | tac > output
For awk part, just duplicate the sid:s:
gawk '/^sid:/{sid=$0};/^reference:/{print sid "\n" $0}'
Solution 2 : array
Store the reference:s in an array as they come and then spit them back out when encountering corresponding sid:
gawk 'BEGIN{r=0};/^reference:/{ref[r++]=$0};/^sid:/{for(n=0;n<r;n++){print ref[n] "\n" $0};r=0}' /tmp/test.txt
/^reference:/{ref[r++]=$0} : for each line which begins with reference:, store the line in an array and move the r pointer to the next element.
/^sid:/{for(n=0;n<r;n++){print ref[n] "\n" $0};r=0} : whenever a line begins with sid, walk the whole array until the r pointer (for...) and for each element, print the stored ref and the current line (=sid), then reset the r back to beginning so we begin again with the next references.
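For instance, solution 2 run on a fragment of the sample input (no gawk-specific features are used here, so plain awk works too):

```shell
# Two references followed by their sid; each reference gets the sid
# printed directly below it.
printf '%s\n' 'reference:cve,2016-10033' 'reference:cve,2016-10034' 'sid:45917' |
awk '/^reference:/{ref[r++]=$0} /^sid:/{for(n=0;n<r;n++) print ref[n] "\n" $0; r=0}'
# reference:cve,2016-10033
# sid:45917
# reference:cve,2016-10034
# sid:45917
```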
| Extract and rearrange from file |
1,592,644,129,000 |
A HDD, which is older than 10 years, is getting read using a SATA-to-USB adapter.
When using sudo hdparm -y /dev/sdj, the HDD does not shut down.
But when using the Eject option in the file manager, the HDD stops rotating. Side fact: the Eject option in Microsoft Windows shuts the HDD down as well.
Why does hdparm not make the HDD spin down while the File Manager does?
|
The hdparm command only does one thing, namely issuing a specific ATA command which tells the drive to transition to a standby state. This doesn't prevent anything from immediately waking the drive up with a new command, however, so depending on the drive itself, it may not even try to spin down (the smart ones wait a short period of time for incoming commands, and only spin down if there are none). Note that the hdparm man page does not guarantee that this will spin down the drive; it only says it will 'usually' do so.
In contrast, the Eject option in a file manager usually does a lot more than that. At minimum, it does the following (though not necessarily in this exact order):
It makes sure that there are no open files on the drive.
It forcibly flushes all filesystem buffers for all filesystems mounted from the drive.
It unmounts all mounted filesystems from the drive.
It flushes any block-layer caches for the device, and may tear down any intermediary block layers running on top of the device (for example, if FDE is being used, that will get shut down cleanly).
It flushes the device's write cache, if the device has a write cache enabled.
If the device can be put into a low or minimal power state programmatically, it does so.
If the device has physically removable media that can be ejected by software (for example, a CD drive), it issues the appropriate eject command. Otherwise, it may dissociate the block-level drivers for the device from the device itself, effectively shutting off communications with the device.
Those first five steps functionally ensure that nothing in userspace will issue any commands to the device that would wake it from the low-power state triggered in the sixth step, and the final step ensures that the device is properly removed from the system and treated as a newly connected device the next time it is connected.
| Why does hdparm -y not spin down a HDD while the file manager does? (using Ejection option) |
1,592,644,129,000 |
I have removed the guest account from the command-line using the command
sudo sh -c 'printf "[Seat:*]\nallow-guest=false\n" >/etc/lightdm/lightdm.conf.d/50-no-guest.conf'
How can I restore the guest account?
|
Just remove config file which you created before:
sudo rm /etc/lightdm/lightdm.conf.d/50-no-guest.conf
| How to restore the guest account in Ubuntu 18.04.1 LTS Bionic Beaver? |
1,592,644,129,000 |
I have many text files prepended with digits like this:
12 some text here
some text here
some text here
Or sometimes like this:
123 text here
some more not-so-interesting text here
some text here
even more not-so-interesting text here
And I need them to appear like so:
12
some text here
some text here
some text here
Is this possible using sed or awk or some command line tool? I just need the digits to be on a new line, isolated from the other text on the line.
|
Just remember the number and replace the space after it with a newline:
sed 's/^\([0-9][0-9]*\) /\1\n/'
If your sed supports it, you can use an extended regex to improve readability:
sed -E 's/^([0-9]+) /\1\n/'
[0-9] matches a digit
* means "zero or more times"
+ means "at least once"
\(...\) or (...) create a "capture group", the first capture group can be referenced as \1, etc.
\n represents a newline
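A quick run on the first sample line (note that \n in the replacement text is a GNU sed extension; on other sed implementations you would embed a literal newline instead):

```shell
# Split the leading digits onto their own line (GNU sed assumed for \n):
printf '12 some text here\n' | sed 's/^\([0-9][0-9]*\) /\1\n/'
# 12
# some text here
```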
| sed/awk file data manipulation |
1,592,644,129,000 |
When I analyze the output from Racon, which I got from GitHub, I find it has dynamic "animated" text in the output written to STDERR.
For example, when I cat the file, it looks like this:
[racon::Polisher::initialize] aligned overlap 624/2265116
The text then "animates" and overwrites itself to say the next number:
[racon::Polisher::initialize] aligned overlap 1954/2265116
The end result is that there are 220 megabytes worth of data stored in 7 lines.
I would like to get each of these steps listed individually, but when I analyze the text with any text editor, it crashes.
The only tools I have available to me are command line tools.
|
It might be sufficient to just remove the carriage return (<CR> / ^M / 0x0D / \r) characters (unless we get more info on the input). Pipe it through
tr -d $'\r'
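Note that deleting the \r characters joins all the intermediate updates of one "animated" line into a single long line. Since the goal here is to see each step individually, converting each \r into a newline may be closer to what is wanted; a sketch on simulated progress output:

```shell
# Each progress update ends in \r (overwriting the previous one on screen);
# turning \r into \n gives one line per step instead of one merged line.
printf 'aligned 624/2265116\raligned 1954/2265116\n' | tr '\r' '\n'
# aligned 624/2265116
# aligned 1954/2265116
```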
| How to convert "animated" text in a document to static text? [duplicate] |
1,533,751,823,000 |
I want to use a time command in a script and put it into a variable (I will have to use it for many commands) so that I can modify just the single variable.
Simplified, this is how I tried it:
PROFILING="/usr/bin/time -f 'time: %e - cpu: %P'" ; $PROFILING ls /usr
I would expect that to be translated into:
# /usr/bin/time -f 'time: %e - cpu: %P' ls /usr
bin games include lib local sbin share src
time: 0.00 - cpu: 0%
However I get this:
/usr/bin/time: cannot run %e: No such file or directory
Command exited with non-zero status 127
'time:
Any suggestion?
Thanks
|
Word splitting doesn't understand quotes in the expanded variables. Use an array instead:
profiling=(/usr/bin/time -f 'time: %e - cpu: %P')
"${profiling[@]}" ls /usr
Or alias:
shopt -s expand_aliases # needed in scripts
alias profiling="/usr/bin/time -f 'time: %e - cpu: %P'"
profiling ls /usr
Or function:
profiling() { /usr/bin/time -f 'time: %e - cpu: %P' "$@"; }
profiling ls /usr
| bash variable with quotes and percentage [duplicate] |
1,533,751,823,000 |
I am looking into the nohup command and I am not sure which shells support it. It seems as if this program works differently in bash and tcsh. What I tried was something very simple.
nohup --help
When I start it from bash it works just fine, but from tcsh it says,
--help: Command not found.
This definitely does not mean the command doesn't work, but it is a confusing indicator. Because the settings on the machine I run on seem to keep programs alive a fairly long time before they terminate, it is hard to actually verify that nohup is working.
Another indicator that things don't work as expected is that the script I plan to run also behaves differently. When run in bash it outputs to nohup.out, while in tcsh it does not (it outputs to the terminal I run the nohup command in).
Any ideas?
Tested on both rhel6 and rhel7
|
When in bash, using nohup will use the external utility nohup. The GNU coreutils' version of nohup does indeed have a --help flag that will output some information.
When in tcsh, using nohup will use the shell's built-in command nohup, even if an external utility of the same name exists. See the tcsh manual for more information about the built-in nohup in that shell.
To use GNU coreutils' nohup in tcsh, use the utility with its full path, e.g., /usr/bin/nohup --help.
| Can I run the nohup command from tcsh |
1,533,751,823,000 |
In "Learning the Bash Shell" (O'Reilly, third edition), it is written on page 7:
lp -d lp1 -h myfile has two options and one argument.
How come?
I see what I reckon as two options, each one with an argument:
-d lp1
-h myfile
Notes
lp prints a file (concretely, via a printer, and not on the terminal).
|
While writing this question I understood my mistake; I should read the command this way:
lp
-d lp1
-h
myfile
The word myfile is just a file name that we print with lp; it is not an argument of the -h option.
| lp -d lp1 -h myfile has two options and one argument or 2 options and 2 arguments? |
1,533,751,823,000 |
I have some lines which look like:
function( "((2 * VAR(\"xxx\")) - VAR(\"yyy\"))" ?name "name" ?plot t ?save t ?evalType 'point)
function("value(res VAR(\"zzz\"))" ?name "othername" ?plot t ?save t ?evalType 'point)
And I would like to find a command which would output the string defined in the VAR function, i.e. something like:
xxx yyy
zzz
I have tried sed but, as I understand it, I have no way to do the match non-greedily there.
|
If you have a grep that supports Perl Compatible Regular Expression (PCRE) then you can use
grep -Po 'VAR\(\\"\K[^\\]*'
or (more symmetrically - using lookbehind and lookahead)
grep -Po '(?<=VAR\(\\").*?(?=\\")'
Ex.
$ grep -Po 'VAR\(\\"\K[^\\]*'
function( "((2 * VAR(\"xxx\")) - VAR(\"yyy\"))" ?name "name" ?plot t ?save t ?evalType 'point)
function("value(res VAR(\"zzz\"))" ?name "othername" ?plot t ?save t ?evalType 'point)
xxx
yyy
zzz
Ex.
$ grep -Po '(?<=VAR\(\\").*?(?=\\")'
function( "((2 * VAR(\"xxx\")) - VAR(\"yyy\"))" ?name "name" ?plot t ?save t ?evalType 'point)
function("value(res VAR(\"zzz\"))" ?name "othername" ?plot t ?save t ?evalType 'point)
xxx
yyy
zzz
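Reading from a file works the same way; a minimal sketch (the file name input.il is made up, and -P requires a GNU grep built with PCRE support):

```shell
# One of the sample lines, saved to a hypothetical file:
printf '%s\n' 'function("value(res VAR(\"zzz\"))" ?name "othername")' > input.il

# \K discards everything matched so far, leaving only the VAR argument:
grep -Po 'VAR\(\\"\K[^\\]*' input.il
# zzz
```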
| Extract one or more patterns from a string |
1,533,751,823,000 |
I installed Ubuntu 16.04 LTS and I was wondering: is there a way to start Ubuntu without starting the display manager (or to exit the display manager, e.g. Unity, after it starts up) to basically get to the "headless" command-line mode where you just see something like:
Welcome to Ubuntu 16.04 LTS
blah blah
blah blah
john@doe$ _
|
As mentioned by Thomas Ward in the comments, normally you should use the Server install ISO when you want a setup like this. If you've already got a system installed and want to convert it though, it's not too hard. The link given in the aforementioned comment has some of the best advice I've seen for simply disabling graphical bootup. Unfortunately, if you want to completely remove the GUI, it's a bit more complicated, since the installer for some reason does not properly mark dependency packages as automatically installed (if it did, you could just apt-get purge ubuntu-desktop && apt-get autoremove --purge and be done with it).
Now, as to the confusion you expressed in the comments about the terminology:
X is a display server. In essence, it's the bit of software that sits between all your applications and the OS itself and mediates access to the graphics and input drivers. Wayland is an example of an alternative display server. In comparison, on Windows, this component is part of the kernel instead of being independent software (because modern Windows was designed from the ground up to use graphical interfaces).
Unity is a desktop environment. It's most of what's responsible for the different appearance of various Linux distributions' graphical interfaces. In actuality, a desktop environment is not a single piece of software, but multiple separate programs which work together to provide most of the stuff that makes a desktop interface a desktop interface. More specifically, they usually include:
A window manager, which is responsible for controlling the placement of the windows on the screen, as well as things like how big they are, and the display of the title bars (and usually also handling of workspaces/virtual desktops).
A file manager, which in addition to the regular stuff you expect from a file manager is what displays icons on the desktop (and usually the desktop backgrounds, though that may be handled by a different component).
Some means of starting programs, usually a menu of some sort.
Optionally a panel, often called a taskbar in the Windows world.
Examples of alternative desktop environments include GNOME, KDE, CDE, XFCE, LXDE, Pantheon, Sugar, Cinnamon, MATE, Enlightenment, and Budgie.
gnome-desktop is a particular component of the GNOME desktop environment, more specifically the bit that's responsible for displaying the desktop background. Unless I'm mistaken, it's also used by Unity for the same purpose.
LightDM is a display manager. It's responsible for handling the actual initial login to the system, switching users, and in some cases may be responsible for handling logging back in after the system has been locked (though this is often the job of the desktop environment or screensaver). It also handles initial startup of the display server on most systems, and the startup of the desktop environment after you log in. The display manager itself is usually what you disable if you want to boot to text mode. Alternative display managers include GDM, KDM, LXDM, SDDM, SLIM, and to a limited extent Qingy.
| How do I exit out of a display manager like Unity |
1,533,751,823,000 |
I like using the traditional ex editor for simple command-line operations to rearrange text within files. For example, to simply move lines within a file such as
foo
bar
zoo
dude
to move the text dude after foo I would just do
printf '%s\n' '/dude' 'd' '/foo' '-' 'put' 'wq' | ex file
which means move to pattern dude, put the line in buffer and paste it after foo and wq to close the file.
This works fine so far, but I want to insert my custom text to the file, given for example
example
//commented
abc
def
I need to add another line above //commented when the pattern abc matches, i.e. if abc is present in a file and a line starting with // exists above it, add another line //new text above that, so it should look like
example
//new text
//commented
abc
def
I tried the following, using itextESC (i, the text, then the Escape key) to insert the text, but it is not working.
printf '%s\n' '/abc' '-' '/\/\/' 'itextESC' 'wq' | ex file
I would like to make this work in ed or ex to explore more about these tools. I would appreciate insights into whether ed/ex can be used for trivial tasks like this.
|
Don't expect visual commands to work in ex mode. Do use the actual ex commands for inserting text, a[ppend] and i[nsert]:
printf '%s\n' '/abc' '-' '/\/\/' 'i' 'text' '.' 'wq' | ex file
Further reading
Dale Dougherty and Tim O'Reilly (1987). "Advanced Editing". Unix Text Processing. Hayden Books.
| How to use 'itextESC' when using ex as a text editor in command line? |
1,533,751,823,000 |
I am new to shell scripting. This is just a variation of some previous questions, but I still cannot find an answer. I have the following txt file (one line and no spaces):
;100=Raspberry;101=Apple;102=Orange;103=Kiwi;104=grape;
I'd like to pick out anything with 101=* or 103=*. So the output needs to look like the following:
;101=Apple;103=Kiwi
I was trying to write a command by modifying
grep -o -m 1 ';101=.*;\|;103=.*' file.txt
But it picks up everything after 101=. I know that's what the command says, but I'm still not sure how to change it. I am using Ubuntu 16.
|
.* tries to match as many characters as possible; that's why you get the rest of the line. So instead of . (any char) you have to use [^;] (any char except the semicolon):
grep -o ';101=[^;]*\|;103=[^;]*'
I'm not sure what you tried to achieve with -m 1 on a one-line file. Anyhow, you need to combine your output into one line to get the desired result. But I bet you can find out how to do that yourself (in the end it's your exercise, not ours).
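For example, joining the matches back into a single line with tr (the \| alternation in a BRE is a GNU grep extension):

```shell
# The sample line from the question, in a scratch file:
printf ';100=Raspberry;101=Apple;102=Orange;103=Kiwi;104=grape;\n' > file.txt

# -o prints each match on its own line; tr -d '\n' joins them back up:
grep -o ';101=[^;]*\|;103=[^;]*' file.txt | tr -d '\n'; echo
# ;101=Apple;103=Kiwi
```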
| Picks multiple parts of string line |
1,533,751,823,000 |
I am attempting to use a for loop to vary command line arguments; I have never attempted this before and I am having some trouble.
I am using the following commands:
for((a=1; a<20; a++));
do
./a.out -N 10000 -D .25*a -E 0.7788007831;
done
I am using the getopt function in C to read in the values. I want to try different values of D (called Delta in the output). However, when I run this command I get:
Acceptance rate is: 0.928400
Estimate is: 0.758704
Error is : 0.020097
Delta used: 0.250000
Acceptance rate is: 0.928400
Estimate is: 0.758704
Error is : 0.020097
Delta used: 0.250000
Acceptance rate is: 0.928400
Estimate is: 0.758704
Error is : 0.020097
Delta used: 0.250000
Acceptance rate is: 0.928400
Estimate is: 0.758704
Error is : 0.020097
Delta used: 0.250000
...
Acceptance rate is: 0.928400
Estimate is: 0.758704
Error is : 0.020097
Delta used: 0.250000
I'm not sure what the problem is though.
|
For one, if you want to refer to a shell variable, you need to use the $foo notation. a is just the letter "a" (in the same way 10000 is just the five digits), but $a expands to whatever the variable contains at the time.
Two, to do arithmetic in the shell, the syntax for arithmetic expansion is $(( expression )), so you could write $(( 25 * $a )) to get the value of a times 25 (as a base-10 number).
Though the problem you would face here, is that Bash (and the POSIX shell) can only do arithmetic on integers, so multiplying with 0.25 isn't going to work.
In zsh, the floating point arithmetic works, so you could do e.g.
for ((a=1; a<20; a++)); do
echo $((.25 * $a))
done
But in Bash or standard shell, you'll need to use an external program to do the maths:
for ((a=1; a<20; a++)); do
echo $( echo ".25 * $a" | bc -l )
done
Or in your command:
for ((a=1; a<20; a++)); do
./a.out -N 10000 -D $( echo ".25 * $a" | bc -l ) -E 0.7788007831;
done
Of course, if the program you're running can do the multiplication and you just want to pass the strings .25*1, .25*2 etc to it, then you'd use
... -D ".25*$a"
with the quotes around it, since otherwise the * is taken as a filename match (glob). (Actually you'll usually want to put quotes around most places where you use variable expansions or command substitutions, but let's just refer to When is double-quoting necessary? on that.)
There's a number of ways for doing floating point math in the shell presented here: How to do integer & float calculations, in bash or other languages/frameworks?
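If bc isn't available, awk can also do the floating-point arithmetic; a sketch of the same loop (substitute the awk command substitution into the -D argument exactly as with the bc version):

```shell
# Prints 0.25, 0.50, ... 4.75 using awk instead of bc:
for ((a=1; a<20; a++)); do
  awk -v a="$a" 'BEGIN { printf "%.2f\n", 0.25 * a }'
done
```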
| Using a loop to generate command line arguments |
1,533,751,823,000 |
I have a directory of files with similar names, but with incrementing digits as a suffix. I want to remove the lower suffixed files and only keep the files with the highest suffix. Below is an example file listing:
1k_02.txt
1k_03.txt
1l_02.txt
1l_03.txt
1l_04.txt
2a_05.txt
2a_06.txt
4c_03.txt
4c_04.txt
The above list needs to be reduced to the files below:
1k_03.txt
1l_04.txt
2a_06.txt
4c_04.txt
I don't even know where to start with this, but if possible I would like a single bash command.
|
Complex pipeline:
Files list:
$ ls
1l_04.txt 2a_05.txt 4c_03.txt 1k_03.txt 1l_02.txt 4c_04.txt 2a_06.txt 1l_03.txt 1k_02.txt
printf "%s\n" * | sort -t'_' -k1,1 -k2nr | awk -F'_' 'a[$1]++' | xargs rm
Results:
$ printf "%s\n" *
1k_03.txt
1l_04.txt
2a_06.txt
4c_04.txt
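Since the pipeline ends in rm, it may be worth previewing what would be deleted first; prepending echo in the xargs stage turns it into a dry run:

```shell
# Dry run on a few of the names from the question: prints the rm command
# instead of executing it.
printf '%s\n' 1k_02.txt 1k_03.txt 1l_02.txt 1l_03.txt 1l_04.txt |
sort -t'_' -k1,1 -k2nr | awk -F'_' 'a[$1]++' | xargs echo rm
# rm 1k_02.txt 1l_03.txt 1l_02.txt
```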
| Remove files with smallest filename suffixes |
1,533,751,823,000 |
I have a bunch of files named Linux in various sub-folders where the whole line
DSY_OS_Release=`lsb_release --short --id |sed 's/ //g'`
needs to be replaced with
DSY_OS_Release="RedHatEnterpriseWorkstation"
How can I achieve this using the command line?
I know this sounds like a duplicate question, but I could not find any answer that works for my rather complex string.
|
If you don't need to match the whole line, then just use
sed 's/^DSY_OS_Release=.*/DSY_OS_Release="RedHatEnterpriseWorkstation"/'
Depending on your sed implementation, you may use sed -i '...' file, or you may have to redirect to a new file and replace the original afterwards.
As for how to run this on a set of files: If all files match a particular pattern, like *.config, then (assuming GNU sed):
find /some/path -type f -name '*.config' \
-exec sed -i '...as above...' {} \;
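A sketch of the in-place variant on a single sample file (assuming GNU sed's -i; the file name Linux comes from the question):

```shell
# Create a sample file containing the line to be replaced:
printf '%s\n' 'DSY_OS_Release=`lsb_release --short --id |sed '\''s/ //g'\''`' > Linux

# Replace the whole line, matching only on its fixed prefix:
sed -i 's/^DSY_OS_Release=.*/DSY_OS_Release="RedHatEnterpriseWorkstation"/' Linux
cat Linux
# DSY_OS_Release="RedHatEnterpriseWorkstation"
```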
| replace a line of complex text within a number of files |
1,533,751,823,000 |
Is there a way or a tool to get percentage of difference between two strings (no new line characters, no files)?
For example, if there are 2 strings and each of them is 10 characters long and they differ in only 1 character, then the difference should be 10%.
The strings may have different lengths and can hardly become longer than 30 characters.
|
The Levenshtein distance is a useful metric to give an idea of the amount of difference between two strings. It measures the number of insertions, deletions and substitutions needed to get from one string to the other.
For instance, if you compare abcdef and bcdef, all characters are different if you compare them one to one, but only one deletion is need to get from one to the other.
So you could make your percentage like: distance / max_length:
perl -MList::Util=max -MText::LevenshteinXS -le '
($x, $y) = @ARGV;
print 100 * distance($x, $y) / max(length $x, length $y)
' -- "$string1" "$string2"
Or in awk:
awk '
function min(x, y) {
return x < y ? x : y
}
function max(x, y) {
return x > y ? x : y
}
function lev(s,t) {
m = length(s)
n = length(t)
for(i=0;i<=m;i++) d[i,0] = i
for(j=0;j<=n;j++) d[0,j] = j
for(i=1;i<=m;i++) {
for(j=1;j<=n;j++) {
c = substr(s,i,1) != substr(t,j,1)
d[i,j] = min(d[i-1,j]+1,min(d[i,j-1]+1,d[i-1,j-1]+c))
}
}
return d[m,n]
}
BEGIN {
print 100 * lev(ARGV[1], ARGV[2]) / max(length(ARGV[1]), length(ARGV[2]))
exit
}' "$string1" "$string2"
That would give 100 for a vs b or bc, but 50 for ab vs ac or a or b or abcd. Beware you'll get a division-by-zero error if you try to compare the empty string against itself.
Those are limited by the maximum length of a command argument (128KiB on modern Linux systems), though you could work around that by getting the strings some other way (like reading them from a file) if need be.
A different metric that you may want to consider is the Damerau-Levenshtein distance (the Text::Levenshtein::Damerau module in perl). That's the same as the Levenshtein distance, except that transposition of contiguous characters (as in ab vs ba) counts as 1 instead of 2.
That's the distance used for instance by zsh approximate matching (as in [[ abcd = (#a2)acbe ]] to check that abcd is the same as acbe to within a maximum distance of 2) and is common when it comes to considering human misspellings or DNA mutations.
| diff percentage between two strings |
1,533,751,823,000 |
I want to use file A (fileA.txt), which has 233 IDs (four digits, first column), to extract the 23rd column from file B, but only from rows (first column also) that match a file A ID.
I've tried:
awk 'NR==FNR{ a[$0]++; next }{ if ($23 in a) {$0=$23; print}}' FileA.txt fileB.txt > fileC.txt
|
If I understood correctly, you want to match the IDs in column 1 of fileA.txt with the IDs in column 23 of fileB.txt and, if they match, print column 23 from fileB.txt. If not, please edit your question with more details.
I assume your files are looks like this:
==> fileA.txt <==
1111 column2 column3 column4 ...
2222 c1 c2 c3 c4 ...
4444 co1 co2 co3 co4 ...
3333 col1 col2 col3 col4 ...
==> fileB.txt <==
c11 ... c22 3333 c24 c25
co11 ... co22 0000 co24 co25
col11 ... col22 4444 col24 col25
then the command would be as follow:
awk 'NR==FNR {seen[$1]++;next;} ($23 in seen){print $23}' fileA.txt fileB.txt
the output, i.e. the matching IDs from fileB.txt, would be
3333
4444
| Use file A with number IDs to extract 23th column from file B rows matching the IDs [closed] |
1,533,751,823,000 |
I'd like to find the most similar line pair contained within a file, using something like levenshtein distance. For instance, given a file along the lines of:
What is your favorite color?
What is your favorite food?
Who was the 8th president?
Who was the 9th president?
…it would return lines 3 & 4 as the most similar line pair.
Ideally, I would like to be able to calculate the top X most similar lines. So, using the example above, the second most similar pair would be lines 1 & 2.
|
I wasn't familiar with Levenshtein distances, but Perl has a module for computing them, so I wrote a simple Perl script to compute the distance of each pair of lines in the input, then print the pairs in increasing "distance", subject to a "top X" (-n) parameter:
#!/usr/bin/perl -w
use strict;
use Text::Levenshtein qw(distance);
use Getopt::Std;
our $opt_n;
getopts('n:');
$opt_n ||= -1; # print all the matches if -n is not provided
my @lines=<>;
my %distances = ();
# for each combination of two lines, compute distance
foreach(my $i=0; $i <= $#lines - 1; $i++) {
foreach(my $j=$i + 1; $j <= $#lines; $j++) {
my $d = distance($lines[$i], $lines[$j]);
push @{ $distances{$d} }, $lines[$i] . $lines[$j];
}
}
# print in order of increasing distance
foreach my $d (sort { $a <=> $b } keys %distances) {
print "At distance $d:\n" . join("\n", @{ $distances{$d} }) . "\n";
last unless --$opt_n;
}
On the sample input, it gives:
$ ./solve.pl < input
At distance 1:
Who was the 8th president?
Who was the 9th president?
At distance 3:
What is your favorite color?
What is your favorite food?
At distance 21:
What is your favorite color?
Who was the 8th president?
What is your favorite color?
Who was the 9th president?
What is your favorite food?
Who was the 8th president?
What is your favorite food?
Who was the 9th president?
and showing the optional parameter:
$ ./solve.pl -n 2 < input
At distance 1:
Who was the 8th president?
Who was the 9th president?
At distance 3:
What is your favorite color?
What is your favorite food?
I wasn't sure how to print the output unambiguously, but the strings are there to be printed however you'd like.
| Compare similarity or levenshtein distance between each line pair within a file? |
1,533,751,823,000 |
I found a somewhat interesting command for showing disk usage, which I have been using without actually knowing what the exclude pattern does. Instead of excluding specific locations or file names, the exclude pattern consists of the expression '*[0-9]G*'.
The complete command is du --exclude='*[0-9]G*' -hax / | grep '[0-9]G\>', but the exclude parameter and the final grep parameter confuse me a little, since at first glance I seem to be excluding exactly the same pattern that I grep for later. Any help with these arguments would be appreciated.
|
'*[0-9]G*' is in fact a glob expression - not a regular expression.
The command excludes input filenames matching '*[0-9]G*', and then greps for du output lines matching '[0-9]G\>' such as would be produced due to the -h (--human-readable) du option - for example
3.3G /usr/lib
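A quick way to see the glob-based exclusion in action; the directory names below are made up for the demonstration:

```shell
# One directory whose name matches the glob, and one that doesn't.
mkdir -p demo/skip2G demo/keep
echo x > demo/skip2G/file
echo x > demo/keep/file

# Anything whose name matches the glob *[0-9]G* is skipped entirely,
# so demo/skip2G and everything inside it never appear in the output.
du --exclude='*[0-9]G*' -a demo
```

Only demo/keep and its file show up; the excluded directory is never visited.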
| Exclude parameter in du command |
1,533,751,823,000 |
I saw a video tutorial for the paste command, in which three files foo,bar,baz were connected horizontally with a "+" sign between.
cat foo
51
33
67
cat bar
10
1
13
cat baz
7
100
15
So, he used a paste command to make each line a whole addition and piped this into a while-loop which iterates through each line and puts it into the bc calculator:
paste -d+ foo bar baz | while read bla;do echo $bla|bc;done
I was confused why he used the complicated while-loop since
paste -d+ foo bar baz|bc
worked as well. However, this made me think: are there situations in which piping into a while-loop makes sense, or is even the only way to achieve something?
|
In this case it was just for showing what is being processed at the moment, line by line. Piping into while loops is sometimes really useful, e.g. for displaying a progress bar.
Progress Bar Example:
for i in $(seq 1 100)
do
sleep 0.1
echo $i
done | whiptail --title 'Test script' --gauge 'Running...' 6 60 0
| When do you need "...|while read..."? |
1,533,751,823,000 |
I have a list of "tasks" that I go through to learn shell scripting; I need to use grep to isolate the line in /etc/passwd that contains "ubu".
I know that the command less /etc/passwd is used to access /etc/passwd, and that grep is used to find/search for a certain string pattern, but that's about it
|
With grep:
$ grep -F "ubu" /etc/passwd
This uses grep -F to search for the literal string ubu in the file /etc/passwd. Without the -F, grep would treat ubu as a regular expression. In this case it wouldn't make a difference, but if the string contained characters like *, which are "special" in regular expressions, then this is how you could make them "less special".
grep will return all lines that contain a match.
If ubu is a username (a complete username, not just a part of one), then the following will additionally do a lookup in any directory service (like LDAP or NIS/YP) that the system may be using:
$ getent passwd ubu
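A small demonstration of the difference, using a made-up sample file and the regex-special character *:

```shell
# Two sample lines: only the first contains the literal text "a*b".
printf 'literal a*b here\nregex aab here\n' > sample.txt

# Without -F, a*b is a regex ("zero or more a, then b") and matches both lines.
grep -c 'a*b' sample.txt     # prints 2

# With -F, a*b is taken literally and matches only the first line.
grep -cF 'a*b' sample.txt    # prints 1
```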
| Isolating the line in /etc/passwd that contains "string" using grep |
1,533,751,823,000 |
I'm trying to do something that looks pretty simple but I haven't been able to solve.
I have a directory with a bunch of subdirectories, all of them containing multiple files (jpg files). I want to execute a command that keeps only 4 (or N) files inside each of these directories. The order of the files isn't important (I have seen related questions that select files depending on when they were created).
I have played with ls + head and with putting find in a loop with the -delete option, but still no luck.
|
Maybe something like:
for dir in /target/dir/*/; do
(cd -- "$dir" && set -- *.jpg && [ "$#" -gt 4 ] && shift 4 && rm -f -- "$@")
done
Which with zsh, you could shorten to:
for dir (/target/dir/*(/)) rm -f $dir/*.jpg(N[5,-1])
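As a sketch of the sh loop above in action, with made-up directory and file names:

```shell
# Set up one subdirectory with six jpg files.
mkdir -p target/d1
touch target/d1/a.jpg target/d1/b.jpg target/d1/c.jpg \
      target/d1/d.jpg target/d1/e.jpg target/d1/f.jpg

# Keep the first four files (in glob order) and delete the rest.
for dir in target/*/; do
  (cd -- "$dir" && set -- *.jpg && [ "$#" -gt 4 ] && shift 4 && rm -f -- "$@")
done

ls target/d1    # the four files a.jpg .. d.jpg remain
```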
| keep N files in subdirectories |
1,533,751,823,000 |
The data in my text file and the output format expected is as shown.
I tried:
cat test2.txt | tr -d "\t"
But that's not working. I need to read the file in the expected output format and do further loop processing on it.
|
Using awk:
BEGIN { RS = "" ; }
{
printf "%s %s %s %s %s %s %s\n", $1, $2, $3, $4, $5, $6, $7
}
produces:
WEBA 30-MAR-17 NA NOT_STARTED 01-APR-17 25-MAR-17 Target_Not_Started
WEBA 29-MAR-17 NA NOT_STARTED 01-APR-17 25-MAR-17 Target_Not_Started
WEBA 28-MAR-17 NA STARTED 01-APR-17 25-MAR-17 Target_Started
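Since the original input file wasn't shown, here is a reconstructed sample to demonstrate (an assumption: one field per line, with records separated by blank lines, which is what RS = "" expects):

```shell
cat > test2.txt <<'EOF'
WEBA
30-MAR-17
NA
NOT_STARTED
01-APR-17
25-MAR-17
Target_Not_Started

WEBA
29-MAR-17
NA
NOT_STARTED
01-APR-17
25-MAR-17
Target_Not_Started
EOF

# RS = "" puts awk in paragraph mode: each blank-line-separated block is
# one record, and every whitespace-separated token in it is a field.
awk 'BEGIN { RS = "" } { printf "%s %s %s %s %s %s %s\n", $1, $2, $3, $4, $5, $6, $7 }' test2.txt
```

Each seven-line record comes out as a single space-separated line.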
| Read multiple lines stored in a text file and format it in bash [closed] |
1,533,751,823,000 |
I have a named fifo and I am writing random numbers to it.
When I write to the fifo, I want to find the written text in the numbers.txt file and print the matching row to stdout.
Content of numbers.txt file is:
1 one
2 two
3 three
... and so on
I want to search this file for the text piped to grep.
For example If I write 1 to named pipe, grep should print 1 one to stdout.
I am running command shown as below in first terminal:
cat <> myfifo | grep -f - numbers.txt
And I am writing to fifo in another terminal show as below:
echo 1 > myfifo
But I can't see any output in stdout in first terminal window.
I want to see 2 two in the first terminal's output when I execute echo 2 > myfifo in the second terminal window. How can I achieve this?
|
cat <> myfifo opens the named pipe for both reading and writing. As long as the pipe is open for writing, the reader doesn't reach the end of the file. So cat never reaches the end of its input, so it never closes its output which is the pipe to grep, so grep never reaches the end of the input for the -f option.
Grep can't start searching until it knows what pattern to search. So it remains blocked forever without even starting to read from numbers.txt.
If you want to search for the patterns coming through myfifo then just use
grep -f myfifo numbers.txt
You can also write it cat <myfifo | grep -f - numbers.txt but that's needlessly complicated.
Note that a pattern like 1 matches any line containing 1, such as 11 eleven. If you want to match only lines that begin with one of the numbers coming through the pipe, use something like
<myfifo sed 's/^0*/^0*/; s/$/ /' | grep -f - numbers.txt
| Piping continious stream to grep as search term for search in a file |
1,533,751,823,000 |
I'm using this command to watch youtube videos from command line
youtube-dl https://www.youtube.com/watch?v=19jv0HM92kw -o - | mplayer -vo caca -
and I find it very amusing. However, the player only shows in a portion of my screen and I can't figure out how to change the screen dimensions, as the mplayer arguments don't work (I probably need to find out how to pass arguments to the libcaca driver). Anyone?
|
Set the CACA_GEOMETRY environment variable before running mplayer, like:
youtube-dl https://www.youtube.com/watch?v=19jv0HM92kw -o - | CACA_GEOMETRY=80x25 mplayer -vo caca -
(google power, 1st hit: http://www.mplayerhq.hu/DOCS/HTML/en/caca.html )
| mplayer text mode set screen dimensions |
1,533,751,823,000 |
When I execute my bash script, I get the wrong PID. I need the PID to kill the process at the end of the script. This is a simplified script that is affected by the issue:
echo 'PASSWORD' | sudo -S ping -f '10.0.1.1' &
PING_PID=$BASHPID;
echo $PING_PID;
The output is for example
[1] 14336
PC:~ Account$ PING 10.0.1.1 (10.0.1.1): 56 data bytes
.
.PC:~ Account$..Request timeout for icmp_seq 18851
...
But when I look at Activity Monitor (on Mac) I see that the ping process has PID 14337. Why does the variable contain 14336, and how do I fix it?
|
$BASHPID is the PID of the current bash process. You are looking for $!; see man bash, especially special parameters and job control. Also, ping needs sudo only if you are using -f (flood). Using sudo may complicate things, because as far as bash knows you are running sudo, not ping, therefore $! will return the PID of sudo.
$ ping -c 5 www.example.com & echo "The PID of ping is $!" ; sleep 6
[1] 4022
The PID of ping is 4022
PING www.example.com (192.168.218.77) 56(84) bytes of data.
64 bytes from 192.168.218.77: icmp_seq=1 ttl=64 time=0.260 ms
64 bytes from 192.168.218.77: icmp_seq=2 ttl=64 time=0.329 ms
64 bytes from 192.168.218.77: icmp_seq=3 ttl=64 time=0.382 ms
64 bytes from 192.168.218.77: icmp_seq=4 ttl=64 time=0.418 ms
64 bytes from 192.168.218.77: icmp_seq=5 ttl=64 time=0.434 ms
--- www.example.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4000ms
rtt min/avg/max/mdev = 0.260/0.364/0.434/0.066 ms
[1]+ Done ping -c 5 www.example.com
$
| Bash-Script returning wrong PID |
1,533,751,823,000 |
I need to delete several subfolders recursively in a single line.
For one subfolder:
find folder -name "subfolder" -exec rm -r "{}" \;
or
find folder -name "subfolder" -type d -exec rm -r "{}" \;
But what about several subfolders in a single line (subfolder1, subfolder2 or foo, bar, dummy…)?
|
What I would do, using a glob:
find folder -name "subfolder[0-9]*" -exec rm -r {} \;
or, naming each directory explicitly:
find folder \( -name 'foo' -o -name 'bar' -o -name 'dummy' \) -exec rm -r {} \;
| Delete recursive subfolders with find |
1,533,751,823,000 |
I recently came across this command in bash:
cat > filename << EOF
I do not understand the << EOF part. Googling the << operator, I only came across regular shift arithmetic left. Playing around with it gave me no insight either.
Any explanation would be appreciated!
|
It's here document, described in man bash:
Here Documents
This type of redirection instructs the shell to read input from
the current source until a line containing only delimiter (with
no trailing blanks) is seen. All of the lines read up to that
point are then used as the standard input for a command.
The format of here-documents is:
<<[-]word
here-document
delimiter
No parameter and variable expansion, command substitution,
arithmetic expansion, or pathname expansion is performed on
word. If any characters in word are quoted, the delimiter is
the result of quote removal on word, and the lines in the
here-docu- ment are not expanded. If word is unquoted, all
lines of the here-document are subjected to parameter
expansion, command substitution, and arithmetic expansion, the
character sequence \<newline> is ignored, and \ must be used to
quote the char- acters \, $, and `.
If the redirection operator is <<-, then all leading tab
characters are stripped from input lines and the line
containing delimiter. This allows here-documents within shell
scripts to be indented in a natural fashion.
Example usage:
$ cat > filename << EOF
> Write this line to filename
> And this line
> And this
> EOF
$ cat filename
Write this line to filename
And this line
And this
| What does this bash operation do? [duplicate] |
1,533,751,823,000 |
I want to know the frequency of A and B in columns $3 and $4 for each distinct identifier in column $1.
Command line in linux.
Example my input:
ID01 a1 A B
ID01 a2 A B
ID01 a3 A B
ID02 a1 B B
ID02 a2 B B
ID02 a3 B B
OA03 a1 A A
OA03 a2 A A
OA03 a3 A A
EA04 a1 -- --
EA04 a2 -- --
EA04 a3 -- --
I want this output:
ID01 A 0.50
ID01 B 0.50
ID02 A 0.00
ID02 B 1.00
OA03 A 1.00
OA03 B 0.00
EA04 A 0.00
EA04 B 0.00
How I can do this?
Thank you!
|
One way to adapt your associative array based awk solution would be to concatenate the contents of $3 and $4 for each $1, and then at the END make use of the fact that gsub returns the number of replacements to count occurrences of A and B in the respective strings. For example:
awk '{
a[$1]=a[$1]$3$4;
next;
}
END{
for (i in a) {
n = length(a[i]) == 0 ? 1 : length(a[i]); # avoid div-by-zero
printf "%s A %.1f\n", i, gsub(/A/,"",a[i])/n;
printf "%s B %.1f\n", i, gsub(/B/,"",a[i])/n;}
}' input
EA04 A 0.0
EA04 B 0.0
OA03 A 1.0
OA03 B 0.0
ID01 A 0.5
ID01 B 0.5
ID02 A 0.0
ID02 B 1.0
| Frequency of "A and B" for each specific character of other column |
1,533,751,823,000 |
How do I understand what the various options/flags mean?
For example:
1) uname -a - What does -a denote here?
2) pyang -f - What does -f denote here?
I just want to understand if there is some reference/doc that tells the usage of these? Please clarify.
|
With almost all Linux commands, I think the fastest and easiest first course of action is to append "--help" to the command. This gives you a good summary, which is often enough.
If you need more details, the man command is a good second choice.
For example:
$ uname --help
Usage: uname [OPTION]...
Print certain system information. With no OPTION, same as -s.
-a, --all print all information, in the following order,
except omit -p and -i if unknown:
-s, --kernel-name print the kernel name
-n, --nodename print the network node hostname
-r, --kernel-release print the kernel release
-v, --kernel-version print the kernel version
-m, --machine print the machine hardware name
-p, --processor print the processor type (non-portable)
-i, --hardware-platform print the hardware platform (non-portable)
-o, --operating-system print the operating system
--help display this help and exit
--version output version information and exit
| What do the options after a specific command mean? |
1,533,751,823,000 |
It turned out that our production machine, running under CentOS, didn't have any timeout command, and we need to upgrade its core utilities (current version is coreutils-5.97).
Is it safe to upgrade this package?
Applications deployed on this machine are run by an Apache Tomcat web server.
|
coreutils tries really hard to be backwards compatible, though there isn't much point updating all the utilities, as that would add extra risk for no gain. You should be able to add just timeout using something like:
tar -xf coreutils-8.25.tar.xz && cd coreutils-8.25 &&
./configure --quiet && make && make check &&
cp src/timeout /usr/local/bin
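Once copied into place, a quick sanity check of timeout (the durations are arbitrary; GNU timeout exits with status 124 when it has to kill the command):

```shell
# A command that overruns its limit is killed; GNU timeout then exits
# with status 124.
st=0
timeout 1 sleep 5 || st=$?
echo "timed out, status: $st"    # timed out, status: 124

# A command that finishes in time passes its own exit status through.
timeout 5 sleep 0.1
echo "finished, status: $?"      # finished, status: 0
```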
| Risks of Updating CentOS' coreutils-5.97 |
1,533,751,823,000 |
I have a CSV file that I need to parse, but the first n lines of this file are worthless garbage.
Fortunately, I know the proper header line starts with foo, and that every line before the first appearance of foo at position 0 can be deleted.
tl;dr How do I make this
an unknown
number of lines
with worthless junk
that's breaking
my CSV parsing
foo,this,is,the,header,line,always,starts,with,foo
[ legit records to follow ]
Turn into this
foo,this,is,the,header,line,always,starts,with,foo
[ legit records to follow ]
I am expecting a sed-powered response to be the right course of action, but any solution that I can run from the command line is sufficient.
|
Per comments and further research, here's what ultimately worked for me
sed -i '/^foo/,$!d' path/to/file
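A small reproduction with made-up junk lines:

```shell
cat > data.csv <<'EOF'
an unknown
number of lines
with worthless junk
foo,this,is,the,header
record1,legit
record2,legit
EOF

# Delete every line that is NOT in the range "first line starting
# with foo" through "end of file".
sed '/^foo/,$!d' data.csv
```

The output starts at the foo header and keeps everything after it.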
| Delete lines from pattern match backwards |
1,533,751,823,000 |
I have a long list of folders, as follows:
001_bat_3513
002_mon_3213
003_bat_3515
004_btt_3540
005_bat_4513
055_bpt_8523
056_bot_3513
058_bat_1513
.
.
From this list:
How can I copy the folders (and all their contents) that have the part "bat" in their names?
|
You can use shell globbing for this:
cp -rp *bat*/ /destination/
Here *bat*/ will expand to directories having bat in their names.
Or using find, which will work even if there are so many files that you get an error because the command line is too long:
find . -maxdepth 1 -type d -name '*bat*' -exec cp -rpt /destination {} +
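A quick check of the globbing form, with folder names taken from the question and a made-up destination:

```shell
mkdir -p src/001_bat_3513 src/002_mon_3213 src/003_bat_3515 dest
touch src/001_bat_3513/photo.png

# Only the directories with "bat" in the name are copied, contents included.
cp -rp src/*bat*/ dest/

ls dest    # 001_bat_3513 and 003_bat_3515, but not 002_mon_3213
```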
| Copy folders that have a specific part in their name, and their contents |
1,533,751,823,000 |
I am using a Debian VirtualBox machine with the Bash shell, and I am trying to use the ps command with the -c switch to find the ID of a process by searching for its name. This is what I write:
ps -c processname
It then tells me:
error: unsupported option (BSD syntax)
This is the URL for the website that told me to use the syntax I am currently using: Understanding the kill command, and how to terminate processes in linux
Any help?
|
Try this syntax instead.
ps -A | grep processName
If your results include the process grep, remove it with:
ps -A | grep processName | grep -v grep
In my experience most Linux programs work much the same (like ps), but differences will always crop up.
Check YOUR current version with the manual pages for the correct syntax for your installation.
man ps
btw: Check the man page for grep to make it not case sensitive.
man grep
| ps command not working properly? [closed] |
1,533,751,823,000 |
I want to run a command every several minutes until I turn it off.
After some searching, I found lots of ways of doing the first half (running a command periodically and indefinitely), but what about turning it off?
Edit
By "turn it off" probably I mean "turn it off using another shell command".
Edit
Just to clarify what I really want to do: I want to run a Python program P periodically until I give some command to stop it.
And in case P happens to be running when I give the stop command, I want it to finish the current run first and then never run again.
|
You have two options: Modify the Python script, or write a shell script wrapper.
To modify the Python script:
You should loop around what it is you want to be doing.
Install a signal handler to catch the INT signal (sent by Ctrl-C) and TERM signal (sent by plain kill). When the signal is caught, set a variable telling the Python script that it should no longer loop. I'm not familiar enough with Python to be able to tell you how to do this.
Alternative solution: Shell script wrapper, which does the same as the above, but outside the Python script:
#!/bin/bash
loop_again=1
trap 'loop_again=0;wait' INT TERM
while (( loop_again )); do
echo "Kill me with 'kill -9 $$'"
./python_script &
echo "Kill the Python script with 'kill $!'"
wait
done
This wrapper script starts your Python script in the background, and then waits for it to finish. It then restarts it, indefinitely. It also tells you how to terminate it and the Python script in each iteration.
If you kill the script (using plain kill, not kill -9), or press Ctrl-C, the loop_again variable is set to zero (which will cause the loop to terminate at the end of the current iteration), and then it waits for the currently running background process to finish before exiting.
If you kill the wrapper script with kill -9 (which is the same as kill -s KILL, sending a KILL signal that can't be ignored or caught by a signal handler), it will exit, leaving your Python script running in the background until it finishes by itself.
| How can I run a command periodically and indefinitely till it's turned off? |
1,533,751,823,000 |
I understand why it could be less than best practice to write C code that executes shell commands by calling system(), and that it's better to use exec and fork. But then a very experienced C programmer told me that it's wrong to make a shell by forking and exec'ing, and he never answered why. Can you tell me? I could have misunderstood, but the code for my custom shell uses fork and exec to execute a pipeline that I enter at a prompt.
Did he mean that the best shell also implements the programs from /bin? I quote the experienced C programmer below, but I don't understand why he told me this.
having C code which forks & pipes several programs inside your shell is IMHO quite wrong.
|
The two main reasons to run a program directly without calling the shell are:
Performance: Most programs that you would call from your C program are likely much smaller than the shell, which makes them start much more quickly.
Environment control: An additional layer of environment variables can make the program more complex to configure and troubleshoot.
| Can all of fork(), exec() and system() be wrong? [closed] |
1,460,992,633,000 |
For example: to build a kernel we can use make, which may take more than one hour; to accelerate the process we can use make -j4, and it may finish almost four times as quickly.
Generally: how can I force an application to use a specific number of CPUs?
|
There is no general solution to this problem - cpu usage of applications is entirely down to what the application does, and how it works internally.
In order to parallelise tasks for efficiency reasons, you often need to restructure the process flow. This varies greatly depending on the algorithm, so it's never as simple as pressing a 'use more processors' button.
| How to force an application to use a specific cpu? [duplicate] |
1,460,992,633,000 |
My problem: I cannot set my default shell for user 'student' on CentOS 7 to fish-shell. I installed fish-shell by downloading the .gz, configuring, make, make install.
output of which fish
/usr/local/bin/fish
running su from the standard 'student' account, I am able to escalate myself to root, which fish is set as the default shell. But when I run
student@localhost ~> whoami
root
student@localhost ~>
/root
student@localhost ~> sudo chsh -s /usr/local/bin/fish student
Changing shell for student.
chsh: user attribute not changed: Invalid contents of lock `/etc/passwd.lock`
looking at /etc/passwd I can see no changes occured.
Any idea what I can do? It looks like the lock files are preventing me from proceeding.
|
Looks like one of your previous attempts to change the /etc/passwd file has left some rubbish in the lock file.
That lock file prevents multiple updates that would cancel each other out. If you are the only one using that system, remove the file /etc/passwd.lock and try again.
| Can't set my shell as fish shell due to error when using chsh due to lock file [closed] |
1,460,992,633,000 |
If I execute the rsync command below on the command line, I get the proper return status:
/usr/bin/rsync -azv -p /home/zaman x11server:/home/zamanr &> rsyncjob/output."$datetime"
echo $?
255
The hostname is unreachable, so I get a return value of 255.
This is fine for me. But if I put the same command in a bash script, then I am not getting any return value:
#!/bin/bash
datetime=`date +%Y.%m.%d`
ret_value= `/usr/bin/rsync -azv -p /home/zaman x11server:/home/zamanr &> rsyncjob/output."$datetime"`
echo $ret_value
The output of the script is just blank; $ret_value is not printed.
What am I missing here to get the return value of the rsync command printed via the script?
|
Try this:
#!/bin/bash
datetime=`date +%Y.%m.%d`
ret_value=`/usr/bin/rsync -azv -p /home/zaman x11server:/home/zamanr &> rsyncjob/output."$datetime"; echo $?`
echo $ret_value
The issue is that the command you are running doesn't produce any output on stdout, since you are redirecting everything to a file, so the command substitution captures nothing. Appending echo $? inside the command substitution outputs the exit code of the last command, which gives you the result you are seeking.
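An alternative to embedding echo $? in the command substitution is to read $? on the line right after the command. A sketch, using ls on a nonexistent path as a stand-in for the rsync call:

```shell
# Run the command, sending all output to a log file as before, then save
# its exit status directly from $?.
ret_value=0
ls /path/that/does/not/exist > output.log 2>&1 || ret_value=$?
echo "command exited with $ret_value"    # non-zero, since the path does not exist
```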
| return value of command not displayed in script |
1,460,992,633,000 |
I know that the whatis command is used to output a brief description of an executable program (command).
So both
whatis cd
whatis type
will print: nothing appropriate (because, from my understanding, they are both shell builtins). However, how come it works for
whatis echo
even though echo is a shell builtin? Is there any explanation for that?
|
This works for echo because it is both a shell builtin and an external command. By default, the builtin is used.
$ type echo
echo is a shell builtin
$ type -P echo # ignores builtins
/bin/echo
$ echo foo # builtin
foo
$ /bin/echo foo # external command
foo
| Whatis command (shell builtin vs executable programs) |
1,460,992,633,000 |
I am trying to set up a LAN chat between two users on a Linux server, where neither of them is root.
I have tried these two methods:
write account_name on both computers
And:
nc -l port_number on first computer
nc IP_adress port_number on second computer
But the problem is that whenever I am typing something and the person on the other side hits enter, it also breaks my line, e.g.:
I am typing: "This is just a simenterple text". And this enter from another person breaks my line.
Is there a way to fix that, or another way I can set up this chat?
|
Have a look at talk and talkd.
See https://wiki.archlinux.org/index.php/Talkd_and_the_talk_command and http://linux.die.net/man/1/talk for details.
| Chat over LAN in Linux |
1,460,992,633,000 |
I have a file which has the names of many files in a directory in the following format:
A20150824.0950-0955_jambala_CcnActiveSessionCounterJob
A20150824.0945-0950_jambala_CcnActiveSessionCounterJob
A20150824.0940-0945_jambala_CcnActiveSessionCounterJob
A20150824.0935-0940_jambala_CcnActiveSessionCounterJob
A20150824.0955-1000_jambala_CcnActiveSessionCounterJob
A20150824.0000-0005_jambala_CcnActiveSessionCounterJob
A20150824.0100-0105_jambala_CcnActiveSessionCounterJob
A20150824.0105-0110_jambala_CcnActiveSessionCounterJob
A20150824.0110-0115_jambala_CcnActiveSessionCounterJob
A20150824.0115-0120_jambala_CcnActiveSessionCounterJob
A20150824.0120-0125_jambala_CcnActiveSessionCounterJob
A20150824.1400-1405_jambala_CcnActiveSessionCounterJob
The naming convention of the above files is A<YYYYMMDD>.HHMM-HHMM_<city>_CcnActiveSessionCounterJob.
These files are created every 5 minutes throughout each day, so I get 12 files per hour and 12x24 files per day. I have generated a file containing the names of all the 12x24 files, and I have a while loop in a bash script where I am trying to do some processing per hour. I want to create another file that contains the 12 filenames for each hour. For this, I have an outer while loop that gives the hour value and an inner loop that gives the minutes value. The files carry their time information in their names. Ex:
A20150824.0950-0955_jambala_CcnActiveSessionCounterJob
gives the information of 09-50 AM to 09-55 AM.
How do I use grep to extract the filenames from the file which contains all the 12X24 file names and put them in a separate file such that a new file contains the following file names:
A20150824.0900-0905_jambala_CcnActiveSessionCounterJob
A20150824.0905-0910_jambala_CcnActiveSessionCounterJob
A20150824.0910-0915_jambala_CcnActiveSessionCounterJob
A20150824.0915-0920_jambala_CcnActiveSessionCounterJob
.
.
.
A20150824.0955-1000_jambala_CcnActiveSessionCounterJob
I already have a variable hour that holds the hour currently being processed. I was trying to use the following, but it doesn't work:
grep -E '.$hour' FILE_with_ALL_FILENAMES
where .$hour is meant to match .09 in the above file names. How do I fix this?
|
Assuming:
hour=09
Just use double quotes:
grep "\.$hour" file
With the single quotes in your example, the variable is not expanded, so the pattern literally searches for the text $hour. Also, the dot has to be escaped; otherwise it would match any character.
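The difference is easy to see directly; the sample filenames below are abbreviated from the question:

```shell
# Two sample filenames: only the first is from hour 09.
printf 'A20150824.0950-0955_jambala\nA20150824.1400-1405_jambala\n' > names.txt

hour=09

# Double quotes: $hour is expanded, so the pattern becomes .09 -- one match.
grep -c ".$hour" names.txt           # prints 1

# Single quotes: the pattern is the literal text .$hour -- no matches.
grep -c '.$hour' names.txt || true   # prints 0
```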
| How to grep the following lines from a file? |
1,460,992,633,000 |
Objective
I'm trying to execute the following command in my Python script:
rdiff-backup --terminal-verbosity=5 --remote-schema "ssh %s -p1019 -i
C:/Users/Adam/.ssh/private-passphrase rdiff-backup --server"
C:/Users/Adam/Desktop [email protected]::/media/exthdd1/backup
My source directory is from a Windows OS and my target directory is a Debian based system.
Problem
I get this output:
`Using rdiff-backup version 1.2.8
Executing ssh [email protected] -p1019 -i C:/Users/Adam/.ssh/private-passphrase r
diff-backup --server
Enter passphrase for key 'C:/Users/Adam/.ssh/private-passphrase':
Found interrupted initial backup. Removing...
Hardlinks disabled by default on Windows
Unable to import module xattr.
Extended attributes not supported on filesystem at C:/Users/Adam/Desktop
Unable to import module posix1e from pylibacl package.
POSIX ACLs not supported on filesystem at C:/Users/Adam/Desktop
escape_dos_devices not required by filesystem at C:/Users/Adam/Desktop
-----------------------------------------------------------------
Detected abilities for source (read only) file system:
Access control lists Off
Extended attributes Off
Windows access control lists On
Case sensitivity Off
Escape DOS devices Off
Escape trailing spaces Off
Mac OS X style resource forks Off
Mac OS X Finder information Off
-----------------------------------------------------------------
POSIX ACLs not supported by filesystem at /media/exthdd1/backup/rdiff-backup-dat
a/rdiff-backup.tmp.0
Unable to import win32security module. Windows ACLs
not supported by filesystem at /media/exthdd1/backup/rdiff-backup-data/rdiff-bac
kup.tmp.0
escape_dos_devices not required by filesystem at /media/exthdd1/backup/rdiff-bac
kup-data/rdiff-backup.tmp.0
-----------------------------------------------------------------
Detected abilities for destination (read/write) file system:
Ownership changing Off
Hard linking On
fsync() directories On
Directory inc permissions Off
High-bit permissions Off
Symlink permissions Off
Extended filenames On
Windows reserved filenames Off
Access control lists Off
Extended attributes On
Windows access control lists Off
Case sensitivity On
Escape DOS devices Off
Escape trailing spaces Off
Mac OS X style resource forks Off
Mac OS X Finder information Off
-----------------------------------------------------------------
Backup: must_escape_dos_devices = 0
Symbolic links excluded by default on Windows
Starting mirror C:/Users/Adam/Desktop to /media/exthdd1/backup
Processing changed file .
Processing changed file Git Shell.lnk
Sending back exception [Errno 1] Operation not permitted: '/media/exthdd1/backup
/rdiff-backup.tmp.4' of type <type 'exceptions.OSError'>:
E File "/usr/lib/python2.7/dist-packages/rdiff_backup/connection.py", line 335,
in answer_requestxception '[Errno 1] Operation not permitted: '/media/exthdd1/b
result = apply(eval(request.function_string), argument_list)Traceback (most
recent call last):up\Main.pyc", line 304, in error_check_Main
File "rdiff_backup\Main.pyc", line 324, in Main
File "rdiff-backup", line 30, in <module>n take_action
File "/usr/lib/python2.7/dist-packages/rdiff_backup/backup.py", line 232, in p
atchle "rdiff_backup\backup.pyc", line 38, in Mirror
File "rdiff_backup\Main.pyc", line 304, in error_check_Main
ITR(diff.index, diff)ection.pyc", line 370, in reval
File "/usr/lib/python2.7/dist-packages/rdiff_backup/rorpiter.py", line 281, in
__call__ File "rdiff_backup\Main.pyc", line 324, in Main
last_branch.fast_process(*args) File "rdiff_backup\Main.pyc", line 280, in
File "/usr/lib/python2.7/dist-packages/rdiff_backup/backup.py", line 529, in f
ast_process File "rdiff_backup\Main.pyc", line 346, in Backup
if self.patch_to_temp(mirror_rp, diff_rorp, tf):
File "/usr/lib/python2.7/dist-packages/rdiff_backup/backup.py", line 559, in p
atch_to_temp File "rdiff_backup\connection.pyc", line 450, in __call__
rpath.copy_attribs(diff_rorp, new)OSError0, in reval
File "/usr/lib/python2.7/dist-packages/rdiff_backup/rpath.py", line 189, in co
py_attribs:
rpout.chmod(rpin.getperms())Errno 1] Operation not permitted: '/media/exthdd
File "/usr/lib/python2.7/dist-packages/rdiff_backup/rpath.py", line 927, in ch
mod
self.conn.os.chmod(self.path, permissions & Globals.permission_mask)
Fatal Error: Lost connection to the remote system`
Attempts to resolve
I thought it was permissions but my target directory is 777
I've tried running CMD as admin
Adding my Windows user account to the user group Users
Got halfway through setting up Cygwin with an sshd service, but kept getting "Error 1053: Could not start service", so I gave up. I tried this because I thought SSH aliases would simplify the command and wanted to see if it got me anywhere; something about the placeholder %s makes me uneasy, so I thought that if I could get around using it I might get somewhere. Does anyone think SSH aliases are even worth pursuing?
|
The problem was indeed in /etc/fstab: I added the uid and gid values of the user pi to the target drive's line.
| rdiff-backup operation not permitted [closed] |
1,460,992,633,000 |
I like the elinks browser and I would like to know how does it render the HTML into text, using ANSI styles.
I suppose there is a library behind elinks to handle the rendering, or there should be. Is it possible to use that library in another project (e.g. to create a bridge to NodeJS)?
I would like to know where to start. :-)
|
I've taken a brief look at the source code. The HTML parsing and rendering code is a core part of elinks, and while it appears to be somewhat modular, it is not a separate library. It might be possible to separate it, but not without a good deal of work.
If you're curious, the src/README file provides an overview of how the various parts depend on each other. The HTML parsing and rendering code is under src/document/, but also depends on src/viewer/, src/config/, and other parts of the code.
But to start, see if you can get elinks -dump to do what you want. Good UNIX tools are designed to work together with other tools, and this is how elinks provides its rendering service without being an interactive browser. You will probably want to use a custom config file to control how you want the dump to look. Take a look at man 5 elinks.conf, in the document.dump section... and of course man elinks to read up on the -dump and -config-file options.
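For example, a minimal config for dumping might look like this (a sketch; the option names below come from the document.dump section of man 5 elinks.conf, so verify them against your version):

```
## Width of the dumped text, in characters.
set document.dump.width = 80

## Disable the [N] link numbering in the dump (1 enables it).
set document.dump.numbering = 0
```

Then something like elinks -dump -config-file my-dump.conf http://example.com > page.txt (the file name my-dump.conf is made up here) renders the page with those settings.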
| How does Elinks render HTML? [closed] |
1,460,992,633,000 |
I need to run a command to grep for any of these strings from the command line. My web host disabled the account's website in the browser until the virus files are removed, and I managed to collect a few strings that can find those files.
I would appreciate if someone knows if I can run a command for this.
Text to find:
bigdeal777
Goog1e_analist_certs.*
tevq\(ucyq\)
GR_HOST_ID.*
\['cmd'\]
ejppy.*
eval\(gzinflate.*
eval\(base64_decode.*
FilesMan.*
Web Shell by.*
Goog1e_analist_up.*
palcastle.*
shell_exec
google_analytics_obh.*
udb=1
createCSS.*
base64_decode\(str_replace
exit;move_uploaded_file
msgz.*
iskandar.*
\.sterling.*
CLaW.*
feoMEN.*
Hacke.*
into [a-z0-9\-_]{1,}orders
gagal
JSinj
linkonline
SUKS.*
\@system\(
\@passthru\(
\@popen\(
Mohajer22
\@extract\(
likecinema
mp3aim
mixmenow
lyricsoasis
PGlmcmFtZSBzcmM9Imh0dHA6Ly93d3cubC1jb3VudGVyLmNvbS9zdGF0cy5waHA\/aWQ9
\$_POST\[\'skip\'\]
u0058
urlencode\(strrev
viagra.*
I just need to know full path to the matching file.
Thank You.
|
If you can put those in a file you can use grep's -f flag to read the patterns from a file and you can use -l to show just the files that have a match
Putting those together you can do something like
grep -R -l -f scanner.txt *
So the -R will cause it to search recursively (I'm assuming you want that), -l will print just the names of the files that contain a match, and -f says to read the search patterns from the file scanner.txt
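As a self-contained sketch of how those flags fit together (the directory, file names and contents below are invented for the demo):

```shell
# Build a tiny sandbox: one clean file and one "infected" file.
tmp=$(mktemp -d)
mkdir -p "$tmp/site/wp-content"
printf 'hello world\n'               > "$tmp/site/index.php"
printf '<?php $x = "FilesMan"; ?>\n' > "$tmp/site/wp-content/bad.php"

# Two of the patterns from the question, one per line.
printf '%s\n' 'FilesMan.*' 'bigdeal777' > "$tmp/scanner.txt"

# -R recurse, -l print only the names of matching files,
# -f read the patterns from scanner.txt.
grep -R -l -f "$tmp/scanner.txt" "$tmp/site"
```

Only the full path of the infected file is printed, which is exactly the "full path to the matching file" asked for.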
| How to grep from command line for theses multiple strings? |
1,460,992,633,000 |
I run grep some-string -r . &. While it is running in the background, I cd to another directory. It seems that grep might interpret the . argument differently then. What happens before and after I change the current directory? Will the original and the new directories both end up not being searched completely?
I wonder whether . as a command-line argument is resolved only once, when the command starts, or every time the program uses it while running.
|
Each process has its own "current working directory", which can't be changed from outside the process.
So when you do
grep some-string -r . &
your shell starts grep in the background, and grep's current working directory is initialised to the same value as the shell's at that moment. grep's definition of . here is its own current directory, not anything else's; the shell has no part in the argument's interpretation.
Subsequently changing the shell's directory using cd has no impact on grep...
| . as a command line argument to a command running in background |
1,460,992,633,000 |
I want to move myfile.tar.gz to a folder.
My command is
mkdir backup
mv myfile.tar.gz /backup
arrrgh, and the file is gone
The /backup directory does not exist, and trying to locate the file with the find command shows nothing.
How can I find it?
Thank you
|
You created a directory called backup under the directory where you were at that moment.
However you moved the file myfile.tar.gz to /backup. The / means that you moved the file to a new file called backup under directory /.
The only thing you did was rename myfile.tar.gz to backup and put it under /.
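For comparison, here is the relative form that would have worked, run in a scratch directory so nothing near / is touched (a sketch; paths are invented):

```shell
cd "$(mktemp -d)"          # sandbox for the demo
touch myfile.tar.gz
mkdir backup

# No leading slash: "backup" means ./backup, the directory just created,
# not a file called "backup" at the filesystem root.
mv myfile.tar.gz backup/
ls backup
```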
| use `mv` command and my file gone [closed] |
1,460,992,633,000 |
I used to use gconftool-2 to edit keys in this way (here I change the cursor shape in gnome-terminal):
gconftool --type string --set /apps/gnome-terminal/profiles/Profile0/cursor_shape ibeam
But it doesn't work anymore, and I feel like there is a problem with the DBus daemon, even though I can't explain why.
This command does change the key in the ~/.gconf/.../Profile0/%gconf.xml, where I can now read:
<entry name="cursor_shape" mtime="1419267709" type="string">
<stringvalue>ibeam</stringvalue>
</entry>
But it has no effect on my cursor shape anymore: it is still a block.
Now, here is an interesting fact: if I use gconf-editor and navigate to this key, I find it set to block.
And if I now edit this key with the gui, it does change my cursor shape.
Everything behaves like the keys stored in memory and the keys stored in the .xml files are not updated together with the gconftool-2 command.
I also noticed that gconftool-2 --ping doesn't return anything.
I have tried reinstalling gconf2 gconf2-common gconf-service gconf-default-service with no success. I also tried erasing the whole ~/.gconf folder, but the same thing keeps happening.
I have had a look at gsettings but my gnome-terminal doesn't seem to be supported with it since the schema org.gnome.terminal doesn't exist and since I can't find any folder gnome-terminal nor gnome/terminal under dconf-editor.
This is driving me mad, did it happen to anyone? How is the gconftool-2 supposed to refresh and get instant changes in the running apps?
|
Got it! Credits to this answer. I added the following lines to my .zshrc or .bashrc:
sessionfile=`find "${HOME}/.dbus/session-bus/" -type f`
export `grep "DBUS_SESSION_BUS_ADDRESS" "${sessionfile}" | sed '/^#/d'`
And the settings are now refreshed as soon as I use gconftool-2.
| gconftool-2 doesn't refresh with the dbus anymore? |
1,460,992,633,000 |
I was messing around with a log4j properties file and accidentally made a folder whose literal name is ${foo}. However, I also have an environment variable named foo that points to a folder, so if I do rm -rf "${foo}" it removes the folder $foo points to instead of the folder literally named ${foo}. How can I specify that I want to delete the folder in my current directory, using a relative path, instead of the folder the environment variable points to?
Here is the layout to better help understand
$foo = /home/user/bar
${foo} = /home/user/${foo}
|
String interpolation causes this. There are a number of ways to selectively prevent this from happening.
The bash hackers wiki has some good examples, though the specifics may vary if you're not actually using bash.
In short, you can prevent interpolation with single quotes, or you can escape the characters.
[me:~/work]$ export foo=bar
[me:~/work]$ echo $foo
bar
[me:~/work]$ echo "\${foo} is set to ${foo}"
${foo} is set to bar
| Remove a folder with the same name as an environment variable |
1,460,992,633,000 |
I have a directory (INPUTDIR) with sample names as subdirectories (508_C, 540_C, 570_D etc.). Within each of those subdirectories there is another directory called FASTQ, which contains two kinds of files.
e.g.
540_Ct_1.fastq.gz
540_Ct_2.fastq.gz
I want to create two lists, the first having all _1.fastq.gz filenames with paths and the other having all _2.fastq.gz filenames with paths.
The directory structure is
INPUTDIR > 508_C > FASTQ > 508_1.fastq.gz 508_2.fastq.gz
INPUTDIR > 540_C > FASTQ > 540_Ct_1.fastq.gz 540_Ct_2.fastq.gz
INPUTDIR > 570_D > FASTQ > 570_Ct_1.fastq.gz 570_Ct_2.fastq.gz
INPUTDIR is the main directory. I want to create two lists in this directory.
One list has:
/home/user/INPUTDIR > 508_C > FASTQ > 508_1.fastq.gz
/home/user/INPUTDIR > 540_C > FASTQ > 540_Ct_1.fastq.gz
/home/user/INPUTDIR > 570_D > FASTQ > 570_Ct_1.fastq.gz
The second list has:
/home/user/INPUTDIR > 508_C > FASTQ > 508_2.fastq.gz
/home/user/INPUTDIR > 540_C > FASTQ > 540_Ct_2.fastq.gz
/home/user/INPUTDIR > 570_D > FASTQ > 570_Ct_2.fastq.gz
Thanks,
Ron
|
cd INPUTDIR
find . -name \*1.fastq.gz > list1
find . -name \*2.fastq.gz > list2
The paths in the "list" files will be relative to the current directory. If you want absolute paths, use
find "$PWD" -name \*1.fastq.gz > list1
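Putting it together, this sketch rebuilds the layout from the question in a scratch directory and produces both lists with absolute paths:

```shell
cd "$(mktemp -d)"
mkdir -p INPUTDIR/508_C/FASTQ INPUTDIR/540_C/FASTQ INPUTDIR/570_D/FASTQ
touch INPUTDIR/508_C/FASTQ/508_1.fastq.gz    INPUTDIR/508_C/FASTQ/508_2.fastq.gz
touch INPUTDIR/540_C/FASTQ/540_Ct_1.fastq.gz INPUTDIR/540_C/FASTQ/540_Ct_2.fastq.gz
touch INPUTDIR/570_D/FASTQ/570_Ct_1.fastq.gz INPUTDIR/570_D/FASTQ/570_Ct_2.fastq.gz

cd INPUTDIR
find "$PWD" -name '*_1.fastq.gz' > list1   # absolute paths, one per line
find "$PWD" -name '*_2.fastq.gz' > list2
wc -l list1 list2
```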
| Creating a list containing filenames with paths |
1,460,992,633,000 |
I know this is not an exciting question, but I still don't understand why some programs need
program -h
and others
program --help
Sometimes it is tedious to figure out which one a given program accepts.
|
In practice, programs should have both options. The -h is the "short form" and --help is "long form".
Short form command options are usually one or two characters while long form options are more descriptive (such as yum update -y and yum update --assume-yes meaning "assume yes to all questions").
Programs that don't use both usually are non-standard utilities.
| why some programs needs -h and other no |
1,460,992,633,000 |
I'm training myself to use XMONAD or something of the sort, but in order to do that, I need to know, from terminal, how to open the following programs:
Bluefish Editor
Settings (I'm using Ubuntu but will install XMONAD ENV.)
Ubuntu Software Center
View time
|
Like this?:
$ bluefish &
$ software-center &
$ unity-control-center &
and one of
$ date
$ cal
$ xclock &
In practice, some programs write many warning messages to stdout or stderr, which may clutter the terminal too much to run multiple background programs from one terminal, because you can end up with lots of mixed-up output you did not want to see in the first place.
So, if a program writes errors and warnings (often from some subcomponent unknown to the program's author) but works well enough that you do not actually need the output, it makes sense to discard all output, from both streams:
$ software-center >/dev/null 2>&1 &
If you want to close the starting shell while the programs are still running in the background, you can disown them from the shell's job control, or use nohup to prevent the signal HUP ("hangup"), which would otherwise trigger their termination, from reaching the program.
$ nohup xclock >/dev/null 2>&1 &
| Need to know how to start some GUI programs from terminal to use XMoand |
1,460,992,633,000 |
Probably this would be easier with a script, but I wonder why I can't do it with one command. What I have tried so far:
$ (ls >/dev/null &) && echo $!
3135
$ (ls >/dev/null &) ; echo $! #bad idea, but if that worked, I could just add `sleep 0.1`
3135
$ ls >/dev/null & ; echo $!
bash: syntax error near unexpected token `;'
$ `ls >/dev/null &` && echo $!
3135
$ `ls >/dev/null &` ; echo $!
3135
The number 3135 is the PID of the last backgrounded process, which is held in the variable $!, and it remains unchanged (also, I am using Konsole, and when backgrounding a process succeeds it prints something like [1] PID). Alas, none of the executed commands sends the process to the background of the current shell.
|
This should do what you want:
$ ls > /dev/null & echo $!
Because you were forcing the background command into a subshell in all sorts of ways, the parent shell never had a background job of its own, and hence its $! was not updated.
Now the background process is started by the same shell that gets to do echo $! so now it does what you want.
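A small runnable sketch of the difference (the subshell in the first line mirrors the question's failed attempts):

```shell
# In a subshell, the background job belongs to the subshell,
# so the parent's $! is not touched:
( sleep 1 & )

# Without the subshell, the same shell that echoes also owns the job:
sleep 1 > /dev/null & echo "background PID: $!"
pid=$!

kill -0 "$pid"   # succeeds: the job is ours and still alive
wait "$pid"      # reap it so no zombie is left behind
```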
| Background a process and execute something with one command |
1,460,992,633,000 |
The unix program pal is one of a variety of command-line calendar programs.
Based on the man page, each line refers to an event, and it does not mention event descriptions that span multiple lines at all.
I am trying to insert a multiline event description, and tried with Line1\nLine2, as well as Line1\rLine2, both of which were printed literally by pal.
Does anyone know if there's a way to get multi-line event descriptions?
|
This isn't possible. I dug through the source code and you can force a line break (CTRL+V,CTRL+M), but this actually messes up the display. The event stays on the same line but the line break starts over at the beginning and overwrites the characters.
Given the following two examples:
00000325 Popeye statue unveiled, Crystal City TX Spinach Festival, 1937
outputs
Wed 26 Mar 2014 - Tomorrow
* History: Popeye statue unveiled, Crystal City TX Spinach Festival, 1937
while
00000326 Popeye statue unveiled, Crystal City ^MTX Spinach Festival, 1937
outputs
Wed 26 Mar 2014 - Tomorrow
TX Spinach Festival, 1937unveiled, Crystal City
| Does the unix calendar program `pal` support line breaks? |
1,460,992,633,000 |
I am using Konsole in Kubuntu.
I was wondering what is the difference between a profile in Konsole and the profiles in our bash?
I am reading that we can create different profiles per Konsole session and use different bash per session.
What is meant by using different bash per session here?
I thought that the default bash is the one defined in the /etc/passwd for a user
|
The Konsole profile contains settings specific to Konsole, e.g. terminal font, text colour, background colour, shortcuts to manipulate tabs, etc.
/etc/passwd defines the default shell for the user, of which bash is just the most common option. Alternatives to bash are zsh, ksh, csh etc. You can google each of them to find out more about them. The default shell is the program that will be run inside of Konsole, which essentially can work with any shell or terminal program for that matter.
You also have files like .bashrc, which contains settings specific to bash regardless of the terminal it is run in. .profile is broader still, in that it will affect whatever shell is used, even if it is not bash.
I think the most important distinction to make is that Konsole is a 'terminal emulator,' meaning that that it just does the same job as an old style terminal, but nicely inside a desktop environment. There are various settings which effect how it does this job and aren't much to do with the actual shell running inside it.
| What is the different bash per terminal session for Konsole? |
1,460,992,633,000 |
When I type time followed by a number in zsh:
# atupal at local in /tmp/atupal/setup/bin [10:01:49]
$ time 1
/tmp/atupal/setup/bin/lib/python2.7
# atupal at local in /tmp/atupal/setup/bin/lib/python2.7 [10:01:54]
$ time 2
/tmp/atupal/setup
# atupal at local in /tmp/atupal/setup [10:01:59]
$ time 3
/tmp/atupal/setup/app
# atupal at local in /tmp/atupal/setup/app [10:02:03]
$ time 3
/tmp/atupal/setup/bin
# atupal at local in /tmp/atupal/setup/bin [10:02:04]
$ time 3
/tmp/atupal/setup/bin/lib/python2.7
# atupal at local in /tmp/atupal/setup/bin/lib/python2.7 [10:02:05]
$
But when I type time 10 it says command not found: 10.
My zsh version is :zsh 5.0.2-4
And my uname result:Linux 3.12.5-1-ARCH #1 SMP PREEMPT Thu Dec 12 12:57:31 CET 2013 x86_64 GNU/Linux
|
oh-my-zsh creates a few aliases in .oh-my-zsh/lib/directories.zsh named 1, 2 ... 9 which expand to cd -, cd -1, etc. So time is functioning correctly, but the unexpected alias 1 actually does something. The reason why time's normal output isn't given is that cd is a builtin command that doesn't require forking.
| zsh: What does the command "time + number" do in zsh |
1,460,992,633,000 |
To solve the bug reported here the solution seems to be commenting, not just adding a certain line in a file - as explained here and here.
That is - to make that bug disappear in Xfce, a certain line has to be added. But as that is not enough, the suggested solution is to comment that very line.
I have the impression that the solution works. Is that placebo?
How come a line is necessary, but also has to be commented, that is - deactivated?
|
They are commenting a line in a configuration file. Really, it would be working around a bug of some sort. In other words, having the line in the configuration file activates the broken code. Commenting it out in the configuration file leaves the broken code dormant, and things work again.
It only fixes it from the user perspective. The user thinks the bug is "fixed" because their program works again. But really, it is just a workaround.
| In which way is a commented line active in a program file? |
1,460,992,633,000 |
A bash script is running, as I defined it in Startup Applications. Is it possible to display in a terminal a variable used in that script? If yes, how?
|
The quick answer (assuming this is a bash script as tagged) is no, variables are not shared between separate shell instances. The only way I know of to access a variable from a script started in a different shell is to have the script write the variable to a file and then access that file.
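A minimal sketch of that file-based approach (the script name, variable and state-file path are all invented for illustration):

```shell
cd "$(mktemp -d)"

# The startup script saves the variable it wants to share:
cat > myscript.sh <<'EOF'
#!/bin/sh
status="backup finished"
printf '%s\n' "$status" > /tmp/myscript.state
EOF
sh myscript.sh

# Any other shell (e.g. an interactive terminal) can read it back later:
saved=$(cat /tmp/myscript.state)
echo "$saved"
```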
| How can I use a variable from a script? |
1,460,992,633,000 |
When I want to redirect an output of a.out program I will use
./a.out > output.txt
This doesn't work when the program reads something from stdin.
How would you redirect output in this case?
I can do it only with
./a.out < inputs.txt > output.txt
Can I do the same but reading inputs from stdin?
EDIT: I realized that it works, but I can't see the prompts because everything goes to the file output.txt. So the only problem is to see the prompts on the terminal while preserving the redirection at the same time.
|
One option would be to write your prompts to stderr rather than stdout. They'll be visible on the terminal but not in output.txt.
Another option is not to use redirection for your output but take an output filename as a parameter and open that file yourself. You can then use stdout for your prompts. (This is more flexible. You can decide what goes only to the file, what goes only to the screen, and potentially what goes to both.)
If you can't change the code, the only option is to use tee or some other such utility. Buffering can be a problem; stdbuf might help with that.
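A sketch of the first option, prompts on stderr (the script and file names are invented, and the user's typing is simulated with a pipe):

```shell
cd "$(mktemp -d)"

cat > prompt.sh <<'EOF'
#!/bin/sh
printf 'Enter a number: ' >&2   # stderr: shows on the terminal, not in the file
read -r n
echo "you typed $n"             # stdout: this is the real output
EOF

# Simulate a user typing "42"; only stdout lands in output.txt.
echo 42 | sh prompt.sh > output.txt
cat output.txt
```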
| Redirecting output of program reading from stdin |
1,326,796,718,000 |
I was using these steps to build an application from Csipsimple but the problem is, they have mentioned it's for Linux and I tried, as a first step:
subversion git quilt unzip wget swig2.0 python make
I get a command not found error. I'm using Fedora 13. I searched for the swig, python, quilt and subversion packages in Fedora's package management system, but I couldn't get them installed. Can anyone help me solve this error?
|
That is a list of the programs you NEED to have installed in order to compile Csipsimple. It is not a command, and running it as one will give an error.
Use Fedora's package management system to install all those programs and the Android SDK as required on the page you linked to, then continue with the "Check out source code" section and the rest of the documentation.
| Unable to follow steps for building application in Linux Os? |
1,326,796,718,000 |
I just downloaded an Ubuntu 10.10 VMware image for Windows. I'm trying to install a web application that can only run on Linux with MySQL, Apache and PHP. How do I open a terminal in Ubuntu?
|
Try: Accessories > Terminal
;-)
| Ubuntu x86 10.10 terminal |
1,326,796,718,000 |
I have a script with this usage:
myscript [options] positional-args... -- [fwd-params]
Because [options] can have long, or short variants, I like using getopt. But I'm having troubles.
I use getopt like this:
args=$(getopt -o a:,b --long alpha:,gamma -- "$@")
eval set -- "$args"
while : ; do
case "$1" in
-a | --alpha) a="$2" ; shift 2 ;;
-b ) b=true ; shift ;;
--gamma ) g=true ; shift ;;
-- ) shift ; break ;;
esac
done
positionals=()
while [ $# -gt 0 ] ; do
case "$1" in
        -- ) shift ; break ;;
        * ) positionals+=("$1"); shift ;;
esac
done
# What-ever is left in "$@" needs to be forwarded to another program
backend "$@"
This works great if I don't have any [fwd-params]:
$ getopt -o a:,b -- -a 1 -b pos1 pos2
-a '1' -b -- 'pos1' 'pos2'
^-- getopt adds this to help me find
the end-of-options/start-of-positionals
But it falls apart if the user defined any [fwd-params]. Here's my desired output:
$ getopt -o a:,b -- -a 1 -b pos1 pos2 -- fwd1
-a '1' -b -- 'pos1' 'pos2' '--' 'fwd1'
^
\- I'll use this to delimit
the positional arguments
from the forwarding ones.
And here's what I actually get. The user's intentional -- has been filtered out.
$ getopt -o a:,b -- -a 1 -b pos1 pos2 -- fwd1
-a '1' -b -- 'pos1' 'pos2' 'fwd1'
What's the best way to delimit my positional-arguments from the forwarding ones?
|
Well, if the user passes the arguments -a 1 -b pos1 pos2 -- fwd1, getopt takes the -- as the marker making all following arguments non-options. It's not a positional argument itself here.
If you want the -- to appear as-is, your user would have to explicitly add the marker, and another -- after it to separate the two sets of positionals, e.g.:
$ getopt -o a:,b -- -a 1 -b -- pos1 pos2 -- fwd1
-a '1' -b -- 'pos1' 'pos2' '--' 'fwd1'
or, you could prefix the set of option characters with a + to ask for the POSIX behaviour, where any non-option marks the end of options. That way, the -- in your example would no longer be the marker, but a positional in itself:
$ getopt -o +a:,b -- -a 1 -b pos1 pos2 -- fwd1
-a '1' -b -- 'pos1' 'pos2' '--' 'fwd1'
But note that if you don't have positional arguments in the first set, the user will still need to manually add a total of two --s:
$ getopt -o +a:,b -- -a 1 -b -- -- fwd1
-a '1' -b -- '--' 'fwd1'
I would likely do something similar to what GNU Parallel does, and use some other fixed string to separate the two types of positionals. E.g. have the script look for a :: instead, leaving -- for getopt. So the user would enter -a 1 -b pos1 pos2 :: fwd1 (with or without a --):
$ getopt -o +a:,b -- -a 1 -b pos1 pos2 :: fwd1
-a '1' -b -- 'pos1' 'pos2' '::' 'fwd1'
or with no positionals in the first set:
$ getopt -o +a:,b -- -a 1 -b :: fwd1
-a '1' -b -- '::' 'fwd1'
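The splitting at :: is then plain shell. Here is a minimal sketch (array names are illustrative; arrays require bash or zsh, as in the question's own script), standing in for the state after getopt and eval set -- "$args" have run:

```shell
# Stand-in for what "$@" holds after the option-parsing loop:
set -- pos1 pos2 :: fwd1

positionals=() fwd=() seen_sep=false
for arg in "$@"; do
  if ! $seen_sep && [ "$arg" = '::' ]; then
    seen_sep=true              # drop the marker itself
    continue
  fi
  if $seen_sep; then
    fwd+=("$arg")              # everything after :: is forwarded
  else
    positionals+=("$arg")
  fi
done

echo "positionals: ${positionals[*]}"
echo "forwarded:   ${fwd[*]}"
```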
| getopt with several `--` |
1,326,796,718,000 |
I want to run a shell command once at a specific time in the future on Guix. My idea was to use the at command, but that seems to not be available on Guix.
The imperative nature of my desire goes against the declarative philosophy of Guix somewhat, which I expect is the (indirect) reason at is not available.
For example, at requires atrun or atd to be running, which might not fit nicely in the Guix way of doing things.
Is there perhaps a Guix-specific way of running a command at a specific time that I have not found yet?
For the moment I can just do a sleep xxx; cmd, which should be good enough for now.
This is my solution, based on @GAD3R's suggestion to use mcron and on the manual's instructions on how to emulate at.
~/.config/hello.guile:
(job '(next-minute-from (current-time) '(25))
(lambda () (system "echo $(date) >> ~/hello.txt")
(kill (getppid) SIGINT)))
and then call mcron.
This indeed added a line to ~/hello.txt at 25 minutes past the hour.
It is a bit more work than using at, but it does work.
Two potential improvements I can see:
It seems that job simply expects as its first argument a function that returns a Unix timestamp. So I suppose it would be feasible to give it the exact timestamp that is necessary. That might also make the killing part redundant.
I suppose that instead of calling mcron manually, it is maybe better to use the mcron Home Service
Finally, maybe it would be worthwhile to explicitly state that mcron also replaces atd or atrun, as those were terms I knew and searched for. For some reason I did not think that cron would also be able to approximate what I was looking for. Maybe I'll write a patch.
|
You have to use mcron via a POSIX (vixie-style) or Guile (Scheme) script, then run mcron &. Or install cron, crond or crontab as root.
Scripts go under ~/.config/cron.
Posix scripts should have the .vixie or .vix extensions.
Guile scripts should have .guile or .gle extensions.
The GNU documentation: mcron
| How to run a one-off command at a specified future time in Guix? |
1,326,796,718,000 |
I am a student trying an exercise that involves giving a file a name containing single quotes, double quotes and other symbols.
The problem is that I am not getting the expected results.
I am using the backslash to escape the symbols, but when I list the file name it appears with unexpected single quotes at the beginning and the end, and with quoted backslash-escapes before the single quotes that are supposed to be part of the file name. This is a bit confusing, I'll show you:
$ echo > \"\\\?\$\*\'\'\*\$\?\\\"
I expect this:
$ ls
"\?$*''*$?\"
but I get this instead:
$ ls
'"\?$*'\'''\''*$?\"'
I've tried to do it in other ways, such as wrapping the double quotes in single quotes and vice versa, but I always get the same result. I had already done the same exercise on another computer and it worked perfectly. What is more intriguing is that I've discovered that both the single and the double quote seem to work as commands that redirect to the standard input, which, I suspect, has to do with the unexpected name display:
$ '
>
$ "
>
The commands don't seem to do anything other than display the standard input. Is this normal at all? What is going on? What can I do?
|
It seems that I expected ls to work as ls -N does. On other machines the expected behaviour was different, apparently due to version or configuration differences. As for the quotes, they are not commands: the continuation prompt appears when a quote is not properly closed, because the shell treats the command line as unfinished. All the explanations and relevant links are in the comments; thanks to @don_crissti and @Marcus Müller.
| Quote characters work as commands that redirect to the standard input |
1,326,796,718,000 |
I'm trying out fish (the Friendly Interactive SHell), and whenever I type an erroneous command, the next prompt appears with a number in square brackets. I have searched through fish's FAQs etc., but found no explanation of the meaning of this number. If I hit Enter, the prompt just reappears with the number until I enter a correct command, such as ls.
|
The number in brackets is the exit status of the last command.
What this means depends entirely on the command. There is a strong understanding that returning 0 means success, but anything else signals some error and the command can choose the exact code.
| FISH CLI What do the error numbers at the prompt mean? |
1,326,796,718,000 |
Using sudo visudo I added the line username ALL=(ALL) NOPASSWD: /home/user/script.sh to sudoers, but script.sh does not run on double-click.
If I instead add the line username ALL=(ALL) NOPASSWD:ALL to sudoers, then script.sh runs and works when double-clicked. How can I do it with the more restrictive rule?
Thanks.
|
Setting my comment as an answer. Add this line as the first executable statement in your script
[[ $UID -ne 0 ]] && exec sudo "$0" "$@"
This checks whether you're running as root and, if not, restarts the script under sudo with the same arguments. Normal precautions and warnings apply when configuring sudo and when running things as root.
| How to run a bash script by double clicking by entering the path in sudoers? |
1,326,796,718,000 |
Say I do:
ls somedir
is there a way I could re-run it with a different command?
Example:
ls somedir
<run same command as above but with cd instead of ls>
|
The variable $_ is the last argument of the previous line you typed. So, for example, cd $_ would do what you described
bash-4.2$ ls X
1
bash-4.2$ cd $_
bash-4.2$ ls
1
There's also some command substitution options as well; eg ^ will replace commands on the previous line
bash-4.2$ ls X
1
bash-4.2$ ^ls^cd
cd X
bash-4.2$ ls
1
| How to run previous command with another command [duplicate] |
1,326,796,718,000 |
xargs provides an option, xargs -P 0, and the doc mentions that it spawns as many processes as possible to run the commands in parallel. But does it stop at, e.g., the number of available CPUs, or does it literally spawn several tens of thousands of processes? If the latter, isn't it less efficient than spawning about as many processes as there are CPU threads? I guess constantly switching between processes takes a non-negligible amount of time, right?
|
I ended up doing the test myself. So -P 0 does try to run all processes at the same time (the limit being around the maximum number of threads allowed by the OS, which can be several thousand, I think), but it is a very bad idea to use it on CPU-intensive tasks, as it makes the system really slow.
I tested with around 200 pdflatex processes: with -P 0 the overall running time was actually higher (2:05 min instead of 1:10 min) than with -P 16 for the same 200 processes (I guess the additional time spent switching between processes explains this). Moreover, with -P 0 the whole OS is laggy and hard to use, which is not the case with -P 16. (My CPU has 8 threads.)
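For a quick feel of the flag (the numbers are arbitrary), this runs four trivial jobs at most two at a time; with -P 0, xargs would instead start all of them at once:

```shell
# -P 2 caps concurrency at two processes; -n 1 gives each job one argument.
# Output order is nondeterministic, so sort it for display.
result=$(seq 1 4 | xargs -P 2 -n 1 echo | sort)
printf '%s\n' "$result"
```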
| What is `xargs -P 0` exactly doing, and is it a good idea to use it? |
1,326,796,718,000 |
I have a "sites" folder with a number of site folders named:
bu.my-url.com
dud.myurl.com
[must-be-identical_string].myurl.com
etc.
In each site folder, I'd like to check whether the themes/amu_[must-be-identical_string] subfolder matches the site folder name:
/sites/[must-be-identical_string].myurl.com/themes/amu_[must-be-identical_string]
Is there a command for that?
|
You could do:
#! /bin/sh -
cd /path/to/sites || exit
ret=0
for file in */themes/amu_*; do
[ -e "$file" ] || continue
dir=${file%%/*}
theme=${file##*/amu_}
case $dir in
("$theme".*) ;; # OK
(*) printf>&2 '%s\n' "Mismatch: $file"
ret=1;;
esac
done
exit "$ret"
Or, with GNU find:
cd /path/to/sites &&
find . -mindepth 3 -maxdepth 3 -regextype posix-basic \
-path './*/themes/amu_*' \
! -regex '\./\(.*\)\..*/themes/amu_\1' \
-fprintf /dev/stderr 'Mismatch: %P\n'
| Finding on each folder if a subfolder respect the convention name according to the folder name? |
1,326,796,718,000 |
There is Herunterfahren(DE)/Shut Down and Neustarten (DE)/Reboot:
Is it possible to execute these GUI entries from the command line? If so, what exactly are the commands?
I already checked XFCE's docs page for the power manager, but as far as I understood it, these commands aren't listed there.
|
From https://askubuntu.com/a/771187/158442:
I think what you want is xfce4-session-logout (online manpage).
Excerpt from the manpage (reformatted, filtered):
The xfce4-session-logout command allows you to programmatically logout
from your Xfce session. It requests the session manager to display the
logout confirmation screen, or, if given one of the command-line
options below, causes the session manager to take the requested action
immediately.
OPTIONS:
--logout Log out without displaying the logout dialog.
--halt Halt without displaing the logout dialog.
--reboot Reboot without displaying the logout dialog.
--suspend Suspend without displaying the logout dialog.
--hibernate Hibernate without displaying the logout dialog.
--fast Do a fast shutdown. This instructs the session manager not to
save the session, but instead to quit everything quickly.
So to shut down: xfce4-session-logout --halt
To reboot: xfce4-session-logout --reboot
To get the dialogue where one can pick an action manually, run it without arguments:
xfce4-session-logout
| Are there commands to execute XFCE's menu entries to reboot or power off/shut down?
1,326,796,718,000 |
Usually, a double dash separates options from filenames, but xdg-open does not care:
❯ xdg-open -headlinesAfter.epub
xdg-open: unexpected option '-headlinesAfter.epub'
Try 'xdg-open --help' for more information.
❯ xdg-open -- -headlinesAfter.epub
xdg-open: unexpected option '--'
Try 'xdg-open --help' for more information.
Is there any other way?
|
You can open the file by adding ./:
xdg-open ./-headlinesAfter.epub
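The ./ trick works for any command that does its own option parsing; here is the same idea demonstrated with ls on a scratch copy of the file (created just for the demo), since xdg-open needs a desktop session:

```shell
dir=$(mktemp -d) && cd "$dir" || exit
touch -- '-headlinesAfter.epub'   # -- lets touch create the dashed name
ls ./-headlinesAfter.epub         # ./ sidesteps option parsing entirely
cd / && rm -rf "$dir"
```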
| How to open a file starting with dash via xdg-open |
1,326,796,718,000 |
How can I import FreeCAD from the python console?
I'm trying to write a script that can manipulate a given FreeCAD file, but I can't even get FreeCAD imported into the python console on a system where FreeCAD is installed.
user@disp7637:~$ sudo dpkg -l | grep -i freecad
ii freecad 0.19.1+dfsg1-2+deb11u1 all Extensible Open Source CAx program
ii freecad-common 0.19.1+dfsg1-2+deb11u1 all Extensible Open Source CAx program - common files
ii freecad-python3 0.19.1+dfsg1-2+deb11u1 amd64 Extensible Open Source CAx program - Python 3 binaries
ii libfreecad-python3-0.19 0.19.1+dfsg1-2+deb11u1 amd64 Extensible Open Source CAx program - Python 3 library files
user@disp7637:~$
user@disp7637:~$ python3
Python 3.9.2 (default, Feb 28 2021, 17:03:44)
[GCC 10.2.1 20210110] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import FreeCAD
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'FreeCAD'
>>> import freecad
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'freecad'
>>>
I am running Debian 11
user@disp7637:~$ cat /etc/issue
Debian GNU/Linux 11 \n \l
user@disp7637:~$
How can I import FreeCAD in the python console?
|
Ah, I think the post you're referring to is overselling things a bit when it says
There is also possibility to import FreeCAD as a Python module but this is more complex.
FreeCAD in itself embeds python to give access to its internal state to scripts running within the FreeCAD process.
So, things like import order start to matter. Anyways, here we go:
Because the modules are so interlinked with FreeCAD, they're not installed to python standard module paths; instead, you will find them in /usr/lib/freecad-python3/lib on debian. So,
from sys import path as syspath
syspath.append("/usr/lib/freecad-python3/lib")
syspath.append("/usr/share/freecad/Mod/")
import FreeCAD
import Draft
By the way, other distros more stubbornly use architectural names for these subfolders, so on Fedora and related distros, the path happens to be /usr/lib64/freecad/lib64, at least on x86_64.
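An alternative to appending to sys.path in every script is to export PYTHONPATH before starting Python (using the Debian paths from above; adjust per distro). FreeCAD itself is not imported below, so the snippet runs even on machines where it isn't installed:

```shell
export PYTHONPATH=/usr/lib/freecad-python3/lib:/usr/share/freecad/Mod
# PYTHONPATH entries land on sys.path whether or not the import would work:
python3 -c 'import sys; print("/usr/lib/freecad-python3/lib" in sys.path)'
# prints: True
```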
| How to `import FreeCAD` in CLI (python) |
1,326,796,718,000 |
I have a text file contains about 50 song titles.
I'm searching for a CLI utility that can take each title from the text file, search for it on YouTube, get the top 4 video links then download the audio of the link.
So far I have tried youtube-dl; there is a search feature, but it only returns metadata and I can't seem to find the video links in there.
|
WAV is not a modern audio container format; I'm not quite sure why you want a container format that only supports constant-rate and constant-blocksize audio codecs. But since that is a really awkward thing for a streaming service to deliver, you'll have to download the audio and transcode it to some such codec and put it in a WAV container. Luckily yt-dlp -x --audio-format wav does that for you (yt-dlp is a fork of youtube-dl; I haven't checked whether youtube-dl can do the same). Again, fix whatever you're doing with that audio if it needs WAV, rather than transcode to a codec that fits in WAV, which very often means gigabytes of uncompressed audio.
yt-dlp ytsearchN:Search string will get the N top results for "Search string" (youtube-dl doesn't currently have that feature, you need yt-dlp)
putting it all together, yt-dlp -x --audio-format wav ytsearch:YOURTERMSHERE does what you want, for each YOURTERMSHERE search term.
To read a full file line by line, you would just pipe it to the stdin of yt-dlp and use --batch-file -:
grabsound.sh:
#!/usr/bin/env zsh
sed 's/^/ytsearch4:/' < $1 | yt-dlp --batch-file - -x --audio-format wav
(untested)
chmod 755 grabsound.sh allows you to run the program directly, and you can then just do /path/to/grabsound.sh yourtextfile.txt.
Again, I strongly advise you not to use WAV. Even if you plan to transcode to a different format later on, what you're ending up doing here is not necessary, since you'd be better off (and faster) just decoding the original audio on the fly instead of transcoding to (mostly uncompressed PCM) WAV.
| CLI utility to search and download YouTube videos in wav format |
1,326,796,718,000 |
For every program/utility on Unix, I would type the name of the program followed by --version to check its version, like so
program --version
If I understand correctly, the double dash -- is used to specify a single option named version instead of -version, which would mean 7 options v,e,r,s,i,o,n.
Why is it then that for java and javac I have to use -version with a single dash? java --version does not even work.
Can someone please explain this to me? Thank you in advance.
|
The underlying issue is that every application implements its own argument parsing. From there it follows that each person/organisation might standardise on a format, but you can't convince everyone to follow a single standard.
There are several pieces of historical baggage which make the situation worse:
The BSD tools and POSIX generally only support the compact -v format.
GNU tools have expanded on POSIX to also support the human-readable --version format. They can't support -version since it's ambiguous.
Microsoft standardised on slash as the leading character instead of a hyphen. Since they developed all the core tools for Windows they basically dictated the argument parsing there, which means it's much more uniform.
Some organisations only support human-readable options, so they can use a single hyphen as the prefix to save typing.
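You can watch the POSIX-style bundling happen with the shell's own getopts; the option string below is just the seven letters of "version", not a real java interface:

```shell
set -- -version
opts=
while getopts 'version' opt; do
  opts="$opts$opt"   # collect each one-letter option as it is parsed
done
echo "$opts"         # prints: version (seven separate options, run together)
```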
| Confused about java -version [duplicate] |
1,326,796,718,000 |
$ highlight -l -s clarity -S sh -O ansi some_file
No matter what I try, highlight always shows the same theme. And it is supposed to create a file 'highlight.css' but it doesn't. What am I doing wrong?
|
With -O ansi, the output will consist of ANSI escape sequences which would colorize the output in your shell.
The highlight.css file is created for HTML, XHTML, and SVG outputs.
Using -O html should get you your desired results:
$ highlight -l -s clarity -S sh -O html some_file
(Note that you could use the -o flag to save the output in a file and use the -I flag to include the styles in the output as opposed to in a separate highlight.css file)
| highlight command refuses to change theme |
1,326,796,718,000 |
When using psql I always have to issue set role ... first.
Could this be executed automatically before accepting interactive commands?
E.g.:
psql -h HOST -U USER -c "set role 'ROLE';" -f -
This almost does what I want except that it reads the input directly (without readline).
I can't use -U ROLE because the role is not allowed to log in.
|
Unless it is passed an -X option, psql attempts to read and execute commands from the system-wide startup file (psqlrc) and then the user's personal startup file (~/.psqlrc), after connecting to the database but before accepting normal commands.
and:
ENVIRONMENT PSQLRC Alternative location of the user's .psqlrc file. Tilde (~) expansion is performed.
So you can just create a set-role.sql file:
set role 'ROLE';
and then run
PSQLRC=set-role.sql psql -h HOST -U USER
psql will execute the set command and then show a prompt (with autocomplete).
| Running psql with a role |
1,326,796,718,000 |
If you run git clone --help on the command line, your result will include something like the following:
git clone [--template=<template_directory>]
[-l] [-s] [--no-hardlinks] [-q] [-n] [--bare] [--mirror]
[-o <name>] [-b <name>] [-u <upload-pack>] [--reference <repository>]
[--dissociate] [--separate-git-dir <git dir>]
[--depth <depth>] [--[no-]single-branch] [--no-tags]
[--recurse-submodules[=<pathspec>]] [--[no-]shallow-submodules]
[--jobs <n>] [--] <repository> [<directory>]
Please anyone explain the meaning of the equal sign used in some options like --template.
I made an extensive search in Google and found this and this, but none of them explain what I want.
Please note that this question is not about Git, but about the syntax or convention used to describe linux commands.
|
In --template=<template_directory> the character = is literal. Whatever you substitute for <template_directory> shall be appended to --template= and together they shall form one argument passed to git in the array of arguments. = is not special to the shell, it may be escaped or quoted in a shell. The argument may be quoted as a whole.
For comparison, in --depth <depth> there's a space character. You may perceive the space literally like = (i.e. say to yourself: I need = after --template but I need a space after --depth, it's just a different character, no big deal), but technically the mechanism is different. git expects the shell to split --depth <depth> to two arguments because of this space. It doesn't matter if you use one or more spaces (or tabs) when typing in a shell. What matters is the option --depth and whatever you substitute for <depth> shall be two arguments passed to git in the array of arguments. Therefore the space(s) must not be escaped nor quoted in a shell.
When there is no shell (i.e. when you craft an explicit array of arguments for execve(2) or similar), you should pass --template=<template_directory> as one argument, but --depth and <depth> as two.
Why git uses different conventions for different options is another matter. There may or may not be some rationale for the discrepancy in this specific case (i.e. in git).
In general a utility may interpret its arguments in its own way. Even if it follows one convention for --foo, it is by no technical means obliged to stick to the same convention with --bar.
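You can see the splitting directly by letting a tiny helper print the argument array it receives (showargs is made up for this demo):

```shell
showargs() { printf '%d argument(s):' "$#"; printf ' <%s>' "$@"; echo; }

showargs --template=/tmp/tpl   # 1 argument(s): <--template=/tmp/tpl>
showargs --depth 3             # 2 argument(s): <--depth> <3>
showargs --depth      3        # still 2: extra blanks vanish in splitting
showargs "--depth 3"           # 1 argument(s): quoting defeats splitting
```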
| Syntax or convention used to describe Linux commands |
1,326,796,718,000 |
Is there some command I can use to monitor changes in /proc/interrupt?
For example, using head -4 I can see that the file is changing, but only if I run head again and again:
> head -4 /proc/interrupts
CPU0 CPU1
0: 451325 0 IO-APIC 2-edge timer
1: 0 3445 IO-APIC 1-edge i8042
4: 0 3055 IO-APIC 4-edge ttyS0
> head -4 /proc/interrupts
CPU0 CPU1
0: 451559 0 IO-APIC 2-edge timer
1: 0 3451 IO-APIC 1-edge i8042
4: 0 3063 IO-APIC 4-edge ttyS0
Is there a way to display these lines as they are updated by the system?
Notice that the solutions like tail -f <file> proposed in
Output file contents while they change
do not work, because the change is not due to some text being appended.
|
How about watch cat /proc/interrupts? That seems to work on my Ubuntu server.
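watch re-runs the command every two seconds by default; -n changes the interval and -d highlights what changed between refreshes. If watch(1) is unavailable, a small loop approximates it; refresh below is an ad-hoc helper invented for this sketch, not a standard command:

```shell
# with procps' watch: refresh every second and highlight differences
#   watch -n 1 -d 'head -4 /proc/interrupts'

refresh() {   # refresh <count> <cmd...>: rerun <cmd> <count> times, 1s apart
  n=$1; shift
  i=0
  while [ "$i" -lt "$n" ]; do
    "$@"
    i=$((i + 1))
    if [ "$i" -lt "$n" ]; then sleep 1; fi
  done
}
refresh 3 head -4 /proc/interrupts   # three snapshots, one second apart
```

Unlike watch, this just appends each snapshot instead of redrawing the screen.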
| Output text file contents (/proc/interrupts) as they change [duplicate] |
1,326,796,718,000 |
This question must have been asked a million times, but I could not find a proper answer.
user is member of group adm
I created as root
# touch /tmp/keyboard-backlight.on
# chmod 666 /tmp/keyboard-backlight.on
# chgrp adm /tmp/keyboard-backlight.on
# chgrp adm /tmp/
# echo "text" > /test.txt
# chmod 0666 /test.txt
as user
user@host ~ $ rm /tmp/keyboard-backlight.on
rm: cannot remove '/tmp/keyboard-backlight.on': Operation not permitted
user@host ~ $ rm /test.txt
rm: cannot remove '/test.txt': Permission denied
Why can't I remove these files?
|
Deleting and creating files require write permissions for the directory containing the file.
For /, it's owned by root, and it has no "write" permissions for group or others. So only root could delete files there, regardless of the permissions of the file.
$ ls -ld /
drwxr-xr-x 24 root root 4096 Nov 3 19:21 /
Regarding /tmp, this folder usually has the sticky bit enabled. See Linux permissions: SUID, SGID, and sticky bit:
The last special permission has been dubbed the "sticky bit." This
permission does not affect individual files. However, at the directory
level, it restricts file deletion. Only the owner (and root) of a file
can remove the file within that directory. A common example of this is
the /tmp directory:
[tcarrigan@server article_submissions]$ ls -ld /tmp/
drwxrwxrwt. 15 root root 4096 Sep 22 15:28 /tmp/
The permission set is noted by the lowercase t, where the x would
normally indicate the execute privilege.
| Linux: Delete file as other and as group |
1,326,796,718,000 |
I have an interactive shell (assume dash) running under a GNU screen session. Is it possible to rename the "current" window via commands issued to the interactive shell? If so, then how?
By contrast, if I wanted to accomplish the same thing via GNU screen keybindings, then I would type CTRL+a followed by A to bring up the Set window's title to: prompt.
|
You can run any screen command (the ones that are or can be bound to keys) with screen -X.
So:
screen -X title 'New title'
Would set the title of the current window, same as Ctrl+a, A or Ctrl+a, : titleEnter followed by the new title.
See info screen title, info screen options, info screen 'Command Index' for details.
Ctrl+a, ? will tell you what command is bound to each key.
To set the title of another window, see the at command.
| GNU screen: How to rename current window via shell commands? |
1,326,796,718,000 |
My log file nohup.out is owned by the root user, and I'm trying to rotate the logs as a user that has privileged access via sudo.
I have written the below script to rotate logs.
cat rotatelog.sh
cp /var/www/html/nohup.out /var/www/html/nohup.out_$(date "+%Y.%b.%d-%H.%M.%S");
sudo tee /var/www/html/nohup.out;
The issue is that when I run rotatelog.sh it does the job, but control does not return to the terminal.
I tried > /var/www/html/nohup.out but I get a Permission denied error.
How can I get the logs rotated and return to the command-line?
|
tee will block waiting for standard input.
If your system provides the truncate command, you can try
sudo truncate -s 0 /var/www/html/nohup.out
Otherwise, you could do something like
: | sudo tee /var/www/html/nohup.out
to supply tee with an empty stdin.
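Both approaches can be tried safely on a scratch file; no sudo is needed here because we own the file:

```shell
f=$(mktemp)
echo 'old log data' > "$f"

: | tee "$f" > /dev/null   # empty stdin: tee truncates and writes nothing
wc -c < "$f"               # prints: 0

echo 'more log data' > "$f"
truncate -s 0 "$f"         # with GNU coreutils' truncate
wc -c < "$f"               # prints: 0

rm -f "$f"
```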
| Trying to rotate logs however tee command fails to return after execution |
1,326,796,718,000 |
I'm looking for a command-line utility that allows me to check what service (eg: http/ftp/ssh) is running on a specific port of a remote machine.
An example of how I imagine a program like this would operate:
kess@KG-PC:~$ portcheck google.com:80
Port 80 of google.com is running a(n) "http" server
|
The simplest way to do such recognition is to establish a connection to the port and grab the banner. The banner can (usually) tell you if this is for example Apache httpd, OpenSSH and so on. The list of possible banners can be quite big. You can also try some commands like GET / HTTP/1.0 to check if the service answers them. For plain-text protocols telnet can be enough. For encrypted (SSL/TLS) ones you may need to use openssl s_client.
AFAIK nmap can do such things, so you can download the source and check how it is done there.
If you want to use just a tool you can test:
nmap -A google.com -p 80
| Check what service is running on a specific tcp/udp port |
1,326,796,718,000 |
I would like to edit some preferences of xubuntu-desktop (xfce4), but 100% via Terminal.
In ubuntu-desktop (gnome) I use, for example:
# Prevent suspend and lock the screen
gsettings set org.gnome.desktop.screensaver lock-enabled false
gsettings set org.gnome.desktop.screensaver ubuntu-lock-on-suspend false
# Set performance settings
gsettings set org.gnome.desktop.interface enable-animations false
gsettings set org.gnome.shell.extensions.dash-to-dock animate-show-apps false
# Set personal configs
gnome-extensions enable [email protected]
gnome-extensions enable desktop-icons@csoriano
gnome-extensions enable [email protected]
gnome-extensions enable [email protected]
gsettings set org.gnome.desktop.privacy remember-recent-files false
gsettings set org.gnome.SessionManager logout-prompt false
In xubuntu-desktop (xfce4), I can set all these preferences via the GUI, but I couldn't find a way to do the same tasks via the command line.
Just adding infos for more details:
OS: Ubuntu 20.04
Types of Access: Remote Desktop via xrdp and SSH
Which preferences to change?
Prevent system suspension due to inactivity
Disable screensaver
Disable animations
Disable logout confirmation
Disable "dock"
Change panel position
References I: similar commands to gsettings set ... and gnome-extensions enable ... from ubuntu-desktop (gnome) to perform changes
References II: similar commands to gsettings list-schemas and gsettings list-keys ... — also from ubuntu-desktop (gnome) — to list the available preference settings
|
Solution:
The command to perform the changes: xfconf-query.
Listing Available Channels for Change
xfconf-query -l
Listing the Properties per Channel
xfconf-query -c $PROPERTY -l -v
# For example, the property "xfce4-desktop":
xfconf-query -c xfce4-desktop -l -v
-v: displays the value of the properties.
Each / is a subproperty.
Monitoring Changes in Real Time
xfconf-query -c $PROPERTY -m
# For example, the property "xfce4-desktop":
xfconf-query -c xfce4-desktop -m
For example, if the workspace0 wallpaper is changed, it will display the full path of the updated property: /backdrop/screen0/monitorrdp0/workspace0/last-image.
You can start monitoring and making changes via the GUI, where all the properties that have been changed will be displayed in the terminal for later use via the command line.
Creating or Updating a Property
xfconf-query -c $CHANNEL -np $PROPERTY -t 'bool' -s 'true';
# For example, the channel "xfce4-panel" and the property "/panels/dark-mode":
xfconf-query -c xfce4-panel -np '/panels/dark-mode' -t 'bool' -s 'true';
-n: ensures that if the property doesn't exist, it will be created.
You must enter the type of the property value:
[ 'string', 'int', 'bool', 'double' ]
-s: sets the value of the property.
To insert an array with multiple elements, just insert the type and value in sequence:
-t int -s 0 -t int -s 1 -t int -s 2 #...
To force a single item as an array:
-t int -s 0 -a
Removing a Property
xfconf-query -c $CHANNEL -p $PROPERTY -r -R;
# For example, removing "Panel 2" completely:
xfconf-query -c xfce4-panel -p '/panels/panel-2' -r -R;
-r: indicates the removal.
-R: ensures that all subproperties are deleted along with the property.
Xfce Terminal
You can edit the Xfce Terminal preferences into ~/.config/xfce4/terminal/terminalrc.
You can edit via GUI and copy the file for later use.
Just close and reopen terminal to see the changes.
Whisker Menu
If you use the Whisker Menu, you can edit preferences into ~/.config/xfce4/panel/whiskermenu-**.rc.
Replace ** with the order of plugin:
Look for the plugin whiskermenu in the xfce4-panel/plugins property to see the plugin number.
For example, if Whisker Menu is plugin-19, then: ~/.config/xfce4/panel/whiskermenu-19.rc.
You can edit via GUI and copy the file for later use.
Considerations:
Most changes that affect the front end require logging out and logging in again to view the changes, especially in the panels.
The xfconf-query command only works with the display active.
Below is the script with the complete solution to the problem at hand:
#!/bin/sh
# Check the display's availability
if [ -z $DISPLAY ]; then exit 1; fi;
# Prevent suspend and lock the screen
xfconf-query -c xfce4-screensaver -np '/lock/enabled' -t 'bool' -s 'false';
xfconf-query -c xfce4-screensaver -np '/lock/saver-activation/enabled' -t 'bool' -s 'false';
xfconf-query -c xfce4-screensaver -np '/saver/enabled' -t 'bool' -s 'false';
xfconf-query -c xfce4-power-manager -np '/xfce4-power-manager/inactivity-on-ac' -t int -s 0;
xfconf-query -c xfce4-power-manager -np '/xfce4-power-manager/blank-on-ac' -t int -s 0;
xfconf-query -c xfce4-power-manager -np '/xfce4-power-manager/dpms-on-ac-sleep' -t int -s 0;
xfconf-query -c xfce4-power-manager -np '/xfce4-power-manager/dpms-on-ac-off' -t int -s 0;
xfconf-query -c xfce4-power-manager -np '/xfce4-power-manager/lock-screen-suspend-hibernate' -t 'bool' -s 'false';
xfconf-query -c xfce4-power-manager -np '/xfce4-power-manager/dpms-enabled' -t 'bool' -s 'false';
# Remove dock
xfconf-query -c xfce4-panel -p '/panels/panel-2' -r -R;
xfconf-query -c xfce4-panel -np '/panels' -t int -s 1 -a;
# Removing wallpaper
xfconf-query -c xfce4-desktop -np '/backdrop/screen0/monitorrdp0/workspace0/color-style' -t int -s 0;
xfconf-query -c xfce4-desktop -np '/backdrop/screen0/monitorrdp0/workspace0/image-style' -t int -s 0;
xfconf-query -c xfce4-desktop -np '/backdrop/screen0/monitorrdp0/workspace0/rgba1' -t double -s 0.184314 -t double -s 0.207843 -t double -s 0.258824 -t double -s 1.000000;
# Personal settings
xfconf-query -c xfce4-desktop -np '/desktop-icons/tooltip-size' -t 'double' -s 48.000000;
xfconf-query -c xfce4-desktop -np '/desktop-icons/gravity' -t int -s 0;
xfconf-query -c xfwm4 -np '/general/workspace_count' -t int -s 1;
# Put menu in bottom
xfconf-query -c xfce4-panel -np '/panels/dark-mode' -t 'bool' -s 'true';
xfconf-query -c xfce4-panel -np '/panels/panel-1/position' -t 'string' -s 'p=10;x=0;y=0';
xfconf-query -c xfce4-panel -np '/plugins/plugin-1/show-tooltips' -t 'bool' -s 'true';
# Grouping tasklist
xfconf-query -c xfce4-panel -np '/plugins/plugin-2/grouping' -t int -s 1;
# Logout settings
xfconf-query -c xfce4-session -np '/shutdown/ShowSuspend' -t 'bool' -s 'false';
xfconf-query -c xfce4-session -np '/shutdown/LockScreen' -t 'bool' -s 'false';
xfconf-query -c xfce4-session -np '/shutdown/ShowHibernate' -t 'bool' -s 'false';
xfconf-query -c xfce4-session -np '/general/PromptOnLogout' -t 'bool' -s 'false';
# Logout to save changes
xfce4-session-logout --logout;
Sources:
https://docs.xfce.org/start
https://docs.xfce.org/xfce/xfconf/xfconf-query
https://wiki.xfce.org/settings4.6
| Edit Xubuntu preferences via Command Line |
1,326,796,718,000 |
I can't understand the purpose of the -p option in useradd.
Let's create a user
useradd -m -p 'pass1' user1
after running the command above, when trying to log in using su - user1
authentication fails.
Another problem is that the password is not encrypted in the /etc/passwd file; if I run cat /etc/passwd | grep user1 I get user1:pass1:19196:0:99999:7:::.
|
You are supposed to provide the encrypted password to the -p option. From the useradd(8) man page:
-p, --password PASSWORD
The encrypted password, as returned by crypt(3). The
default is to disable the password.
Note: This option is not recommended because the password
(or encrypted password) will be visible by users listing
the processes.
The value you provide is used verbatim in the passwd file, so of course if you provide an unencrypted value you'll never be able to log in using that password.
Something like this works:
useradd -m \
-p "$(python -c 'import crypt; print(crypt.crypt("pass1"))')" \
testuser
(But do note the warning in the man page about potentially exposing the password to other system users.)
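Note that Python's crypt module has been deprecated in recent Python releases; if it is unavailable, openssl can produce a crypt(3)-style hash as well. The salt below is made up for the example (omit -salt to get a random one):

```shell
# emits a SHA-512 crypt string beginning with $6$examplesalt$
openssl passwd -6 -salt 'examplesalt' 'pass1'
```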
| useradd -p option [duplicate] |
1,654,992,595,000 |
About the less command, and according to:
Less command
Linux / Unix Colored Man Pages With less Command
these sources indicate the following:
f ^F ^V SPACE * Forward one window (or N lines).
b ^B ESC-v * Backward one window (or N lines).
z * Forward one window (and set window to N).
w * Backward one window (and set window to N).
Enabling line numbers (with -N), for example on man less itself, I can see that b/f behaves the same as w/z with respect to the amount of content/lines moved up/down, by either window or page.
Question
What is the difference between b/f and w/z?
Normally I use the first pair, but when should I use the second pair?
Extra Question
What does "and set window to N" mean?
I am assuming this is what makes w/z different from b/f.
|
I'll try my best to explain with an example.
Open a long text file with less, something with obvious lines.
Now type 4z, and you will see that 4 lines have shifted down.
Type z and another 4 lines have moved.
That 4z has told less that you want the window size to be set to 4.
Once you have set the window size, all options (f,b,z or w) will now use that as the window size when moving through the text.
The difference is when f and b are used like this, they do not set the window size, they only move by that N number of lines.
Summing up with an example:
8f: Move through the document 8 lines.
9b: Move backwards through the document 9 lines.
f or z: Move one "window size" through the document.
b or w: Move backwards one "window size" through the document.
6z: Move through the document 6 lines and set the "window size" to 6 lines. Using f,b,z or w after this will shift the document 6 lines.
3w: Move backward through the document 3 lines and set the "window size" to 3 lines. Using f,b,z or w after this will shift the document 3 lines.
To reset the window size, you can type -+z (then enter).
Hope that helps.
| less command: b/f vs w/z
1,654,992,595,000 |
Say that
https://example.nosuchtld
https://example.net
https://example.org
https://example.willfail
is the content of urls.txt. I want to run <command> <url> for every URL/line of urls.txt —where <command> is, let's say, curl; so,
cat urls.txt | xargs -n1 curl
or
<urls.txt xargs -n1 curl
for instance. I want every URL/line which was unsuccessfully curled (so, the first and last ones) to
be removed from urls.txt; and
be appended to another file —let's say nope.txt— to be created if it doesn't already exist
leaving urls.txt as
https://example.net
https://example.org
and nope.txt as
https://example.nosuchtld
https://example.willfail
I know that the exit status of every command run by the shell is made available via the variable $?, that 0 represents successful execution of a command, and that all other integers represent failure. I'm unsure, though, of how to construct a composite command that incorporates this, deletes lines from the file being read from, and appends 'em to a different file.
|
With zsh, you could do:
while IFS= read -ru3 url; do
curl -- $url
print -ru $(( $? ? 4 : 5 )) -- $url
done 3< urls 4> bad 5> good
That way, the bad and good files are opened only once and only if urls itself can be opened for reading.
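The same idea works in plain sh/bash with numbered file descriptors. In the sketch below, curl is replaced by a stub called check so the example is self-contained; in real use, swap the check call for curl -fsS -- "$url" (the file and function names here are invented):

```shell
dir=$(mktemp -d) && cd "$dir" || exit
printf '%s\n' https://example.nosuchtld https://example.net \
              https://example.org https://example.willfail > urls

# stand-in for the real fetch; fails for the two bad URLs
check() { case $1 in *nosuchtld*|*willfail*) false ;; *) true ;; esac; }

while IFS= read -r url <&3; do
  if check "$url"
  then printf '%s\n' "$url" >&5   # success: keep
  else printf '%s\n' "$url" >&4   # failure: goes to nope.txt
  fi
done 3< urls 4> nope.txt 5> good.txt

mv good.txt urls   # urls now holds only the URLs that worked
cat nope.txt       # the two failing URLs
cd / && rm -rf "$dir"
```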
| What's a clean way to run a specific command C for each line L of a given file F and then move every L where C(L) ran unsuccessfully?
1,654,992,595,000 |
Some potential duplicates may be the following:
How to ignore specific lines from being redirected
BUT I can't extrapolate from that to solve my problem. So, I am asking here instead.
So, here is what I got so far:
sbcl --noinform --non-interactive --eval "(ql:quickload :lambda-calculus-compiler)" < test.lisp > x.txt
I am trying to evaluate a Lisp file: read a text file test.lisp and use it as the input code to (ql:quickload :lambda-calculus-compiler). When loading this, a function gets called which reads the contents of test.lisp, and this evaluation is then redirected to x.txt.
Here are the contents of x.txt after this operation:
To load "lambda-calculus-compiler":
Load 1 ASDF system:
lambda-calculus-compiler
; Loading "lambda-calculus-compiler"
(((LAMBDA (N) (LAMBDA (M) (LAMBDA (F) (LAMBDA (Z) ((M F) ((N F) Z))))))
(LAMBDA (F) (LAMBDA (Z) (F (F (F Z))))))
(LAMBDA (F) (LAMBDA (Z) (F (F (F Z))))))
I want to ignore or get rid of
To load "lambda-calculus-compiler":
Load 1 ASDF system:
lambda-calculus-compiler
; Loading "lambda-calculus-compiler"
which is the output of (ql:quickload :lambda-calculus-compiler).
Any help will be appreciated. Please don't mark this a duplicate. As I mentioned I can't extrapolate from answers like the one I linked above.
Thanks.
|
One option is to pipe the command through sed before redirecting to the file:
sbcl --noinform --non-interactive --eval "(ql:quickload :lambda-calculus-compiler)" < test.lisp |
sed '1,/Loading "lambda-calculus-compiler"/ d' > x.txt
This removes everything from line 1 through the line containing that phrase, inclusive.
But I suspect there may be other options specific to sbcl as I mentioned in the comments.
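The 1,/re/ address range is easy to try on a stand-in transcript; the text below just mimics the quickload chatter:

```shell
printf '%s\n' \
  'To load "lambda-calculus-compiler":' \
  '  Load 1 ASDF system:' \
  '    lambda-calculus-compiler' \
  '; Loading "lambda-calculus-compiler"' \
  '(LAMBDA (F) (LAMBDA (Z) (F Z)))' |
sed '1,/Loading "lambda-calculus-compiler"/ d'
# prints only: (LAMBDA (F) (LAMBDA (Z) (F Z)))
```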
| How to redirect while ignoring some of the text |
1,654,992,595,000 |
I have a static html site at localhost. When I open Chromium and manually enter localhost into the address bar, the site appears and behaves correctly.
However, when I type the following from the terminal...
chromium-browser localhost
...Chromium opens, localhost appears in the address bar, but the page never loads and I end up with "Do you want to wait."
I'm running Raspberry Pi
lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 11 (bullseye)
Release: 11
Codename: bullseye
cat /proc/version
Linux version 5.15.32-v8+ (dom@buildbot) (aarch64-linux-gnu-gcc-8 (Ubuntu/Linaro 8.4.0-3ubuntu1) 8.4.0, GNU ld (GNU Binutils for Ubuntu) 2.34) #1538 SMP PREEMPT Thu Mar 31 19:40:39 BST 2022
How do I launch the page from command line?
ETA:
I tried the following URLs after chromium-browser
localhost (as described above)
http://localhost
www.google.com
https://www.google.com
Each had the same result.
In the terminal window the following error messages appear (I'm exercising my google-fu to see if these messages guide me to a solution):
libEGL warning: DRI2: failed to authenticate
[6181:6181:0420/125426.351921:ERROR:gpu_init.cc(454)] Passthrough is not supported, GL is egl, ANGLE is
[6181:6181:0420/125426.457777:ERROR:viz_main_impl.cc(188)] Exiting GPU process due to errors during initialization
libEGL warning: DRI2: failed to authenticate
[6312:6312:0420/125429.016432:ERROR:gpu_init.cc(454)] Passthrough is not supported, GL is egl, ANGLE is
[6312:6312:0420/125429.154421:ERROR:viz_main_impl.cc(188)] Exiting GPU process due to errors during initialization
[6358:6358:0420/125429.257975:ERROR:egl_util.cc(74)] Failed to load GLES library: /usr/lib/chromium-browser/libGLESv2.so: /usr/lib/chromium-browser/libGLESv2.so: cannot open shared object file: No such file or directory
[6358:6358:0420/125429.298887:ERROR:viz_main_impl.cc(188)] Exiting GPU process due to errors during initialization
[6367:6367:0420/125429.388139:ERROR:gpu_init.cc(454)] Passthrough is not supported, GL is disabled, ANGLE is
[6129:6186:0420/125436.969258:ERROR:chrome_browser_main_extra_parts_metrics.cc(227)] START: ReportBluetoothAvailability(). If you don't see the END: message, this is crbug.com/1216328.
[6129:6186:0420/125436.969526:ERROR:chrome_browser_main_extra_parts_metrics.cc(230)] END: ReportBluetoothAvailability()
|
I found this post: https://forums.raspberrypi.com/viewtopic.php?t=330711#p1990605
I blindly tried its suggestion, which was to execute the following:
echo 'export CHROMIUM_FLAGS="$CHROMIUM_FLAGS --use-gl=egl"' | sudo tee /etc/chromium.d/egl
(that is, it creates a file /etc/chromium.d/egl which contains the command to set that envar).
It worked. Someday I may even try to understand why.
| chromium-browser - can't launch site from command line |
1,654,992,595,000 |
I'm trying to easily move some data with rsync from one server to another without actually connecting manually and doing all that, but only giving the IPs as arguments.
# -- Variables
my_key="my_key"
new_ct="${2}"
old_ct="${1}"
# -- SSH key generation on the localhost
mkdir /tmp/keys/
cd /tmp/keys
ssh-keygen -t ed25519 -f /tmp/keys/id_ed25519 -q -N ""; \
# -- Copy the keys on the old_ct
scp -P 2222 -o StrictHostKeyChecking=no -i ${HOME}/.ssh/${my_key} \
/tmp/keys/id_* root@${old_ct}:~/.ssh/
# -- Copy the key to new_ct and write it to authorized_keys file
scp -P 2222 -o StrictHostKeyChecking=no -i ${HOME}/.ssh/${my_key} \
/tmp/keys/id_ed25519.pub root@${new_box}:~/.ssh/
ssh -o StrictHostKeyChecking=no root@${new_ct} -p 2222 -i ${HOME}/.ssh/${my_key} \
"cat ~/.ssh/id_ed25519.pub >> ~/.ssh/authorized_keys"
# -- Lastly, start the rsync transfer on the old_ct in a detached screen session
ssh -o StrictHostKeyChecking=no root@${old_ct} -p 2222 -i ${HOME}/.ssh/${my_key} \
"
screen -dmLS "migrating.localdata.to.newCT" \
bash -c "rsync -azvhHSP --stats -e \
'ssh -p 2222 -o StrictHostKeyChecking=no' \
/home/user root@${new_ct}:/home"
"
# -- Remove the keys
rm -rf /tmp/keys
The last part of the script, the part with rsync is the one that doesn't work. The rest works flawlessly.
I do need the double quotes "" wrapping the entire bash command that will run inside the screen session, as well as the single quotes '' around the ssh options that rsync needs.
My question is how to put them all in such a way that it all works?
|
You have double quotes inside the double quotes (for instance, "migrating.localdata.to.newCT"). You need to escape the internal double quotes for them to be treated literally.
ssh -o StrictHostKeyChecking=no root@${old_ct} -p 2222 -i ${HOME}/.ssh/${my_key} \
"
screen -dmLS \"migrating.localdata.to.newCT\" \
bash -c \"rsync -azvhHSP --stats -e \
'ssh -p 2222 -o StrictHostKeyChecking=no' \
/home/user root@${new_ct}:/home\"
"
By the way, you don't have to run bash -c inside screen. You can just put the command and its arguments after screen, and it will run them. This saves you a level of nested quotes and escaping.
ssh -o StrictHostKeyChecking=no root@${old_ct} -p 2222 -i ${HOME}/.ssh/${my_key} \
"
screen -dmLS 'migrating.localdata.to.newCT' \
rsync -azvhHSP --stats -e \
'ssh -p 2222 -o StrictHostKeyChecking=no' \
/home/user root@${new_ct}:/home
"
| How to initiate rsync transfer from one server to another through ssh? |
1,654,992,595,000 |
I am on a server, whoose network is set statically:
auto eth0
iface eth0 inet static
address 10.1.212.103
netmask 255.255.255.0
gateway 10.1.212.1
How can I, from the commandline, pretend I am DHCP client, and ask DHCP server for network info?
I don't actually want to change the network settings, but I want to see what DHCP info the server would send back.
Specifically, I have nameservers set statically in /etc/resolv.conf and they don't work. I want to see what nameservers the DHCP server would send me, if I set my interface dynamically
I tried dhcping but that does not really work. I don't know the IP address of the DHCP server on my network. Without any parameters:
# dhcping
dhcping -c ciaddr -g giaddr -h chaddr -r -s server -t maxwait -i -v -q
I only know the gateway, but specifying gateway does not work:
dhcping -g 10.1.212.1
no answer
|
For me, dhcping works:
$ sudo dhcping -v -s 192.168.177.1
Got answer from: 192.168.177.1
You can use -V to see the packets exchanged. However, as it's not a real request, I got a NACK, and no nameserver information.
It doesn't work without the server address for me, apparently it doesn't do a broadcast?
As for -g, see the man dhcping:
-g gateway-IP-address
Use this IP address for the gateway IP address in the DHCP packet. This option is currently broken.
You could also use dhclient, but that will change your network configuration, so you'll have to restore it manually afterwards. Still, it's the best way I know of to get actual information.
There is also dhcpdump, which will show the DHCP packets on the network interface. Its output includes nameserver information, but it needs something to trigger the exchange.
The gateway address is often the same as the address of the server where DHCP runs.
| get DHCP info from commandline |
1,654,992,595,000 |
Currently I am having a bash script in which I accept input from the command line, but the input is with spaces and the bash script is not reading the word after the space. The script is something like this
#!/bin/bash
var1=$1
var2=$2
echo $var1
echo $var2
Suppose I save this file as test.sh. Now my input is something like this -
./test.sh hi check1,hello world
and the output is -
hi
check1,hello
but I need the output as
hi
check1,hello world
PS: I cannot provide the inputs in double quotes so I need some other solution where I can read the word with spaces
|
That's not possible. The word splitting of the arguments happens before the script is run, so when it starts, the arguments have already been split into words. Read about "word-splitting" in man bash to learn more about the details.
If you know there will be 2 arguments and the first one will never contain spaces, you can work around it somewhat with
#! /bin/bash
first=$1
shift
rest="$*"
printf '<%s>\n' "$first" "$rest"
But it will still shrink multiple spaces into one.
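For example, saving the sketch above as /tmp/demo.sh (the path is arbitrary) and calling it the way the question does shows the remaining arguments glued back together:

```shell
# Write the sketch to a temporary file and invoke it with unquoted,
# space-containing input, exactly as in the question.
cat > /tmp/demo.sh <<'EOF'
#! /bin/bash
first=$1
shift
rest="$*"
printf '<%s>\n' "$first" "$rest"
EOF
bash /tmp/demo.sh hi check1,hello world
# prints:
# <hi>
# <check1,hello world>
```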
| How to accept input with spaces from command line in bash script |
1,654,992,595,000 |
I use incremental backup using
tar --create --file=/home/blueray/Documents/backup/dest/$(date +%Y-%m-%d-%H-%M-%S).tar --listed-incremental=/home/blueray/Documents/backup/dest/usr.snar /home/blueray/Documents/backup/src
But the problem is that it creates too many .tar files, as I back up multiple times a day, for example 2021-11-23-23-34-38.tar and 2021-11-23-23-34-43.tar. I am not sure whether to merge or delete the older tar files.
How will I extract the tar files with minimum effort (ideally with a single command)?
How do I find a specific file to extract?
What can be the solution to this problem? Is it even worth investing time in tar, given my requirements?
|
It is normal to have multiple backup versions; if you delete an old tar file, you can no longer extract/restore from it.
In order to extract, you can issue the command below:
tar --extract --verbose --verbose --listed-incremental=/dev/null --file=2021-11-23-23-34-38.tar
(Here I assume you want to restore from the file 2021-11-23-23-34-38.tar.)
Regarding your 3rd point, in order to find a specific file to extract, you can list the archive contents:
tar --list --verbose --verbose --listed-incremental=/home/blueray/Documents/backup/dest/usr.snar --file=2021-11-23-23-34-38.tar
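You can also restore just one file by naming its archive member after the extract options. A small self-contained sketch with throwaway paths (made up for the demo, so nothing in your real backups is touched):

```shell
# Create a tiny level-0 incremental archive, then pull a single file out of it.
mkdir -p /tmp/tardemo/src
echo hello > /tmp/tardemo/src/a.txt
tar --create --file=/tmp/tardemo/full.tar \
    --listed-incremental=/tmp/tardemo/usr.snar -C /tmp/tardemo src
mkdir -p /tmp/tardemo/restore
tar --extract --listed-incremental=/dev/null \
    --file=/tmp/tardemo/full.tar -C /tmp/tardemo/restore src/a.txt
cat /tmp/tardemo/restore/src/a.txt
```

The -C options keep the member names relative, so you can name src/a.txt directly when extracting.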
| How to manage tar files when creating incremental backups |
1,654,992,595,000 |
I have a script that reads a line from a file into variable foo. The line is in fact a long command that I often use as a template. I am trying to edit and re-use this long command. How can I place $foo on the command line ready to edit? I do not want to copy and paste the line, nor do I want to use any new apps that need installing such as xdotool. I only want to use the basic Linux commands if possible.
The method with partial success is to append the line to the .bash_history file and then recall it in the usual way. But to do this I need to close the terminal and re-open it (so that the latest copy of the history file is loaded to RAM).
|
To reload bash's history file into the current shell (i.e. without logging out and logging in again), run:
history -n
From help history:
-n read all history lines not already read from the history file
| how to place a text in the command line |
1,654,992,595,000 |
Is there a Linux command that can remove the first character on the first line for every file in a folder, and then save it?
Many online resources cover doing this for every line instead of just the first one, or cover filenames, or they don't save the files. I tried modifying one of these commands to do what I wanted, but it actually made the files blank instead.
|
If you have GNU sed :
find PATH -type f -exec sed -s -i '1s/.//' {} +
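The 1 address restricts the s/.// substitution to the first line of each file, and -i writes the change back in place. A quick check on a throwaway file (path is illustrative):

```shell
printf 'abc\ndef\n' > /tmp/sample.txt   # two-line sample file
sed -i '1s/.//' /tmp/sample.txt         # delete first character of line 1 only
cat /tmp/sample.txt
# prints:
# bc
# def
```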
| Is there a Linux command that removes just the first character of every file? |
1,654,992,595,000 |
I have a badly formatted text file which has a product name, a website location and a quantity. I just want to produce the product name, the number (extracted from the URL) and the quantity.
Input file:
rawfile.txt
Component name Link Quantity
Ba Test Con - Red https://kr.element14.com/multicomp/a-1-126-n-r/banana-plug-16a-4mm-cable-red/dp/1698969 25
Ban Te Con - Black https://kr.element14.com/multicomp/a-1-126-n-b/plug-16a-4mm-cable-black/dp/1698970 25
Ban Te Con - Black https://kr.element14.com/hirschmann-testmeasurement/930103700/socket-4mm-black-5pk-mls/dp/1854599 15
Expected output:
Ba Test Con - Red 1698969 25
Ban Te Con - Black 1698970 25
Ban Te Con - Black 1854599 15
My code:
For product name:
# extract product name
grep '.*?(?=https://)' rawfile.txt
# extract product number
grep -Po '\b[0-9]{6,7}\t\b' rawfile.txt
# extract quanity
grep -Po '\t[0-9]{1,3}' rawfile.txt
# Now combining the last two functions into one ; this works
# grep -Po '(number argument)(quantity argument)' rawfile.txt
grep -Po '(\b[0-9]{6,7}\t\b)(\t[0-9]{1,3})' rawfile.txt
1698969 25
1698970 25
1854599 15
# Now combining the three functions into one and producing an output text file; this works
# grep -Po '(product name argument)(number argument)(quantity argument)' rawfile.txt
grep -Po '(.*?(?=https://))(\b[0-9]{6,7}\t\b)(\t[0-9]{1,3})' rawfile.txt
Present output:
>> grep -Po '(.*?(?=https://))(\b[0-9]{6,7}\t\b)(\t[0-9]{1,3})' rawfile.txt
>> # no output
|
Something simple like this would do (it can be improved, but you get the idea):
$ cat test.txt
Ba Test Con - Red https://kr.element14.com/multicomp/a-1-126-n-r/banana-plug-16a-4mm-cable-red/dp/1698969 25
Ban Te Con - Black https://kr.element14.com/multicomp/a-1-126-n-b/plug-16a-4mm-cable-black/dp/1698970 25
Ban Te Con - Black https://kr.element14.com/hirschmann-testmeasurement/930103700/socket-4mm-black-5pk-mls/dp/1854599 15
$ sed 's#https://.*/##' test.txt
Ba Test Con - Red 1698969 25
Ban Te Con - Black 1698970 25
Ban Te Con - Black 1854599 15
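It works because sed's .* is greedy: the pattern https://.*/ matches from the scheme through the last / on the line, deleting everything up to the final path component, which is the product number. The # delimiter just avoids escaping all the slashes. On a single line:

```shell
# Strip everything from https:// through the last slash on the line.
printf '%s\n' 'Ba Test Con - Red https://kr.element14.com/multicomp/a-1-126-n-r/banana-plug-16a-4mm-cable-red/dp/1698969 25' |
  sed 's#https://.*/##'
# prints: Ba Test Con - Red 1698969 25
```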
| Extract Product, number and quantity from a text file |
1,654,992,595,000 |
I'm attempting to use sed to remove some lines from JSON files. As you can probably tell, I'm very much an amateur, but I think I have the regex correct. However, sed throws various errors, including "unterminated address" and "unterminated `s' command".
The text I'm trying to remove is:
{
"trait_type": "Accessories",
"value": "None"
}
And here is the regex I have to remove this block (trait_type differs across the files, so I need to remove the entire block based on the value being "None"):
(\{)([\r\n].+)(\r|\n|)(.*(?:None).*)([\r\n].*)
Using online regex testers, the above seems to work perfectly.
The command (and a couple of variations of) that I'm using is:
sed -e 's/\{([\r\n].+)(\r|\n|)(.*(?:None).*)([\r\n].*)//g' 2.json
Could anyone assist at all?
Many thanks
|
Assuming this is part of a JSON file, the following jq command would find all objects with a trait_type key and a value key and delete all of those objects where the value key has the value None.
jq 'del( .. |
select(
type == "object" and
has("trait_type") and has("value") and
.value == "None"
) )' file
The command above would write the result of the operation to standard output. Redirect the output to a new file to save it.
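For instance, on a small made-up sample (the attributes wrapper is only for illustration), the object whose value is "None" disappears while its sibling is kept:

```shell
# Recursive descent (..) visits every value; del() removes the matching objects.
printf '%s' '{"attributes":[{"trait_type":"Accessories","value":"None"},{"trait_type":"Hat","value":"Cap"}]}' |
  jq -c 'del( .. |
          select(
            type == "object" and
            has("trait_type") and has("value") and
            .value == "None"
          ) )'
# prints: {"attributes":[{"trait_type":"Hat","value":"Cap"}]}
```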
| Delete text from files using regex with sed |
1,654,992,595,000 |
I'm using the following command to unzip epub files recursively inside a folder:
find -iname \*.epub -exec unzip {} \;
It works. But the terminal asks me this each time a file is being extracted:
replace mimetype? [y]es, [n]o, [A]ll, [N]one, [r]ename:
Is there a flag that I can add to the command so it automatically selects [A]ll?
|
If you want to overwrite existing files without prompting, use unzip’s -o option:
find -iname \*.epub -exec unzip -o {} \;
If you only want to avoid being prompted, a safer option would be -n — it skips files which already exist, instead of overwriting them, again without prompting.
| How to automatically replace mimetype when unzipping? |