date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,495,530,814,000 |
I am facing an issue in my project. We support an app which is hosted on a Unix server.
Partners access it from many regions. Now some users in a region, say "R", are complaining about slowness, and we have reason to believe it might be due to a network issue at their end.
What are some commands I can run in their terminal to prove to them that it is a network issue at their end?
Also, are there commands which can show that other web applications have also been slow from their systems in the past few minutes?
I am very new to Unix. Thanks in advance.
|
Inherent limitations
The issue could be variable (e.g. congested link to ISP or congestion within ISP). It could also be horrible ("firewall" or even "anti-virus" doing deep packet inspection); the tools below might not show any problem at all. They're worth having, but there is a limit to how much you can achieve just typing commands into a terminal.
2 tests you should know
Use ping to measure round-trip latency to servers over ICMP/IP. You can also run traceroute or tracepath to your server and check how much round-trip latency there is to the first few hops. You're mainly trying to check for symptoms of bufferbloat, so be aware that bufferbloat only shows up when the link is fully used (a "latency under load" measure).
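As a concrete sketch of that first test: the summary line below is sample iputils-style output, not a live measurement; a real check would be something like `ping -c 20 $SERVER`.

```shell
# Pull the average round-trip time out of ping's closing summary line.
# Splitting on "/" makes the avg value field 5 in this format.
rtt='rtt min/avg/max/mdev = 10.123/12.456/20.789/2.000 ms'
avg=$(printf '%s\n' "$rtt" | awk -F'/' '{print $5}')
echo "average RTT: ${avg} ms"   # -> average RTT: 12.456 ms
```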
You can check available web download bandwidth (single-stream) just by using wget or curl --remote-name to download a file. If you're uninspired, I suggest downloading a Linux distribution image :-). Find the download link and use "copy link location" from the right-click menu. You probably don't have to let the download run to completion, because the current download rate is shown as it goes; use Ctrl+C to cancel it. You could test a mirror in the same region as your server (this is potentially significant). I guess if you're considering the terminal, it's good to know wget exists. Personally I'd prefer to use http://testmy.net/mirror.
That's basically it, from the information you've given. There's a caveat with one of the results from ping, which I've highlighted below.
ping is excellent for initial testing. traceroute is an expert tool. I only suggest traceroute as a way to try and illustrate bufferbloat, if that's what ping seems to show... it may actually be better to use ping on the routers you see in traceroute.
Low download rates are easily over-estimated as a direct cause. Webapps don't need to serve much data to respond to user requests unless there are uncached images. E.g. unix.stackexchange.com is 75K and takes about 0.2s to download at 4Mb/s. But it's easy to run a test, and it provides a little data point to fit into the puzzle.
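A back-of-envelope check of that figure, assuming a 75 KB page on a 4 Mb/s link (protocol overhead is what pushes the real time toward 0.2s):

```shell
# Transfer time = size in bits / link rate in bits per second.
awk 'BEGIN { printf "%.2f s\n", (75 * 1024 * 8) / 4000000 }'   # -> 0.15 s
```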
How much packet (ping) loss is too much?
Any noticeable packet loss rate can limit the download rate, particularly over trans-continental distances.
Unfortunately, the effect of loss on short transactions is a bit more complicated than that. It looks like a single loss probably won't cause more than a 100% increase in time for transfers around ~20KB. Unless the first packet from the server (or client) is dropped, in which case it won't recover until a full "receive time out" of 3 seconds.
There's an issue/caveat when measuring loss, in that it could be affected by packet size. When measuring loss with ping, you should notice that it uses small packets by default. This is similar to the first packets from client and server (SYN / SYN-ACK respectively). Putting this together, if you see 5% loss when running ping $SERVER without options, you wouldn't expect a perfect experience using that web app. (I.e. out of 20 user actions, expect 1 of them to take 3 seconds before anything happens at all. It won't be mitigated by persistent connections given common web server configurations)
You can check statistics for full-size packets e.g. ping -s 1400 on unix. In principle there could be yet more factors ("prioritization" at the router aka QoS) and what you specifically want are TCP retransmit details from the specific application, gathered from the kernel or a packet trace.
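A sketch of reading the loss figure out of such a run; the summary text below is a sample, whereas a live measurement would come from e.g. `ping -c 100 -s 1400 $SERVER`:

```shell
# Extract the loss percentage from ping's summary line.
summary='100 packets transmitted, 95 received, 5% packet loss, time 99163ms'
loss=$(printf '%s\n' "$summary" | grep -oE '[0-9]+(\.[0-9]+)?%' | head -n1 | tr -d '%')
echo "packet loss: ${loss}%"   # -> packet loss: 5%
```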
Note that from an endpoint, it's very difficult to distinguish whether a link is congested versus physically unreliable. Packet loss is how routers tell TCP to slow down; more congested links will have more packet loss. I think the best you could expect is to identify ("prove") a link with high packet loss, and ask someone with access to it for investigation or monitoring.
| Network slowness [closed] |
1,495,530,814,000 |
One of my applications won't open so I'm trying the first thing to do on this website:
http://www.cnet.com/news/tutorial-what-to-do-when-a-mac-os-x-application-will-not-launch/
But I am getting an error for this command that I enter. Why?
sudo update_prebinding -root / -force
|
You are receiving that error because the program "update_prebinding" is not in your PATH, possibly because it is not installed on your system.
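A quick way to confirm which of the two it is, with command -v; the command name below is a stand-in for illustration (on the Mac in question you would test update_prebinding):

```shell
# command -v exits non-zero and prints nothing when the name is not in $PATH.
if command -v some_missing_command_xyz >/dev/null 2>&1; then
    echo "found"
else
    echo "not installed or not in PATH"
fi
```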
| Why am I receiving the error: "sudo: update_prebinding: command not found" on the command line? |
1,495,530,814,000 |
I would like to compare the first column of file1 with the second column of file2, and if they match, display the matching lines of file2 as the output. The columns are separated by |.
file1:
syfar03040k16.audc1.oraclecloud.com |
syfar03040m02.audc1.oraclecloud.com |
syfar03040m04.audc1.oraclecloud.com |
syfar03040n11.audc1.oraclecloud.com |
syfar03040n01.audc1.oraclecloud.com |
syfar03040n02.audc1.oraclecloud.com |
syfar03040n03.audc1.oraclecloud.com |
syfar03040n05.audc1.oraclecloud.com |
syfar03040n07.audc1.oraclecloud.com |
syfar03040o11.audc1.oraclecloud.com |
syfar03040o01.audc1.oraclecloud.com |
syfar03040o02.audc1.oraclecloud.com |
syfar03040o03.audc1.oraclecloud.com |
syfar03040o13.audc1.oraclecloud.com |
syfar03040o05.audc1.oraclecloud.com |
syfar03040o04.audc1.oraclecloud.com |
syfar03040o16.audc1.oraclecloud.com |
file2:
| LDAP | syfar03040o11.audc1.oraclecloud.com |
| OIM | syfar03040o01.audc1.oraclecloud.com |
| AUTHOHS | syfar03040o02.audc1.oraclecloud.com |
| APPOHS | syfar03040o03.audc1.oraclecloud.com |
| BI | syfar03040o04.audc1.oraclecloud.com |
| ADMIN | syfar03040o05.audc1.oraclecloud.com |
| PRIMARY | syfar03040o06.audc1.oraclecloud.com |
| SECONDARY | syfar03040o07.audc1.oraclecloud.com |
| APPOHS_HA1 | syfar03040o13.audc1.oraclecloud.com |
| PRIMARY_HA1 | syfar03040o16.audc1.oraclecloud.com |
| SECONDARY_HA1 | syfar03040o17.audc1.oraclecloud.com |
| OSN | syfar03040o09.audc1.oraclecloud.com |
File3 (expected output):
| LDAP | syfar03040o11.audc1.oraclecloud.com |
| OIM | syfar03040o01.audc1.oraclecloud.com |
| AUTHOHS | syfar03040o02.audc1.oraclecloud.com |
| APPOHS | syfar03040o03.audc1.oraclecloud.com |
| BI | syfar03040o04.audc1.oraclecloud.com |
| ADMIN | syfar03040o05.audc1.oraclecloud.com |
| APPOHS_HA1 | syfar03040o13.audc1.oraclecloud.com |
| PRIMARY_HA1 | syfar03040o16.audc1.oraclecloud.com |
|
awk '
NR == FNR {
file1[$1] = 1;
next;
}
$4 in file1 {
print $0;
}
' file1 file2
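A self-contained run of the same script with shortened stand-in hostnames. With awk's default whitespace splitting, the host is $1 in file1 and $4 in file2's "| NAME | host |" layout, which is why no -F'|' is needed:

```shell
# Tiny stand-in input files mirroring the question's layout.
printf 'hostA |\nhostB |\n' > file1
printf '| LDAP | hostA |\n| OIM | hostC |\n' > file2
awk 'NR == FNR { file1[$1] = 1; next } $4 in file1' file1 file2
# -> | LDAP | hostA |
```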
| Column comparison using awk [duplicate] |
1,495,530,814,000 |
I just started learning Linux commands. I was experimenting with the > operator which, as far as I understand, makes the command before it write its output to the file named after the sign. My perception, however, seems to differ from the behavior of time > time.txt:
nagy@nagy-VirtualBox ~/Dokumentumok/random $ date
2016. márc. 20., vasárnap, 18.14.58 CET
nagy@nagy-VirtualBox ~/Dokumentumok/random $ time
real 0m0.000s
user 0m0.000s
sys 0m0.000s
nagy@nagy-VirtualBox ~/Dokumentumok/random $ date > date.txt
nagy@nagy-VirtualBox ~/Dokumentumok/random $ time > time.txt
real 0m0.000s
user 0m0.000s
sys 0m0.000s
nagy@nagy-VirtualBox ~/Dokumentumok/random $ cat date.txt
2016. márc. 20., vasárnap, 18.15.21 CET
nagy@nagy-VirtualBox ~/Dokumentumok/random $ cat time.txt
nagy@nagy-VirtualBox ~/Dokumentumok/random $
So, as it seems, while date > date.txt works as expected, the >, oddly enough for me, fails with time > time.txt. Can anybody explain this?
This happened on a Linux Mint 14.3 virtual machine hosted by Windows 10.
|
The time command does not do what you seem to think it does. In fact, since its purpose is to time the running of another command, running it without arguments (without a command to time) doesn't make much sense. (Apparently it still runs without complaint though!)
The specific effect you are seeing here is that time outputs its statistics on standard error, not on standard output (in order to avoid interfering with the output of whatever it is that you are timing). Redirecting the standard output has no effect on the standard error channel. Try this instead:
{ time ls; } 2>time.txt
...where you are redirecting the standard error, not the standard output. (With bash's time keyword, a bare time ls 2>time.txt would redirect only the standard error of ls; the braces make the timing output itself go to the file.) The output of ls is displayed as usual (standard output is not redirected), but the output of time on standard error goes to the file, which is what I think you were trying to achieve.
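A quick way to convince yourself (bash invoked explicitly, since time here is the bash keyword):

```shell
# The braces group the timed command so the timing lines land in the file.
bash -c '{ time true; } 2> time.txt'
grep -c 'real' time.txt   # -> 1
```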
| "time > time.txt" refuses to work properly |
1,495,530,814,000 |
How do I do this on the command line?
Below is code I saw imported into phpMyAdmin, but I don't use any interface...
CREATE TABLE IF NOT EXISTS `products` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`product_code` varchar(60) NOT NULL,
`product_name` varchar(60) NOT NULL,
`product_desc` tinytext NOT NULL,
`product_img_name` varchar(60) NOT NULL,
`price` decimal(10,2) NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `product_code` (`product_code`)
) AUTO_INCREMENT=1 ;
|
Assuming this is for MySQL (or MariaDB), and you have the above as the contents of a file named table_create.sql:
$ mysql --user=$USERID --password=$PASSWD --database=$DB < table_create.sql
You have to know what your MySQL user ID and password are, and know the database name. Substitute those strings as appropriate for the apparent "shell variables" in the above command.
It's probably advisable to have your table creation in a file, so as to be able to repeat that creation when you screw up, but if you must do it at the command line:
$ mysql -u $USER -p
Enter password:
...
MariaDB [(none)]> use yourdatabase
Database changed
MariaDB [yourdatabase]> CREATE TABLE IF NOT EXISTS `products` (
-> `id` int(11) NOT NULL AUTO_INCREMENT,
-> `product_code` varchar(60) NOT NULL,
-> ...
-> );
Query OK, 0 rows affected (0.24 sec)
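A middle ground is a here-document: the SQL stays in one script but still goes through a file you can re-run. The mysql line is commented out here because it needs a live server and real credentials, and the column list is abbreviated:

```shell
# Write the DDL to a file via a quoted here-document (no shell expansion).
cat > table_create.sql <<'SQL'
CREATE TABLE IF NOT EXISTS `products` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `product_code` varchar(60) NOT NULL,
  PRIMARY KEY (`id`)
);
SQL
# mysql --user="$USERID" --password="$PASSWD" --database="$DB" < table_create.sql
grep -c 'CREATE TABLE' table_create.sql   # -> 1
```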
| how to create table with fields in command line Centos |
1,495,530,814,000 |
Assume a multitail call like the following:
multitail -s 2 -l "long-running-command" -l "short-running-command"
Now, I would like to have both windows remain open, even after the commands are finished. However, multitail will just close the "short-running-command" window once it exits, which makes it kinda useless for my use case.
I know there are workarounds like outputting to files and tailing those instead, but I wonder if there is a way to keep the windows open in multitail even after the process is finished, without creating new files.
|
There are a couple of approaches I can think of.
First, if you don't mind having multitail close when the longer running command finishes, you can pipe the shorter running command to it and display stdin:
short-running-command | multitail -s 2 -l "long-running-command" -j
Second, you can add a long delay after running the commands:
multitail -s 2 -l "long-running-command; sleep 120" -l "short-running-command; sleep 3600"
| Possible to prevent multitail from closing "finished" command windows? |
1,495,530,814,000 |
After reading the man pages I am unable to find an explanation of what this does (the -e option):
usermod -L -e 1 username
-e 1
Does this mean one day after the Unix epoch, Jan 1 1970? How would anyone know this, since it doesn't seem to be documented anywhere?
|
You are right. Its value is in days.
From the usermod(8) man page:
-e, --expiredate EXPIRE_DATE
The date on which the user account will be disabled. The date is specified in the format YYYY-MM-DD.
But there is more information in the shadow(5) man page:
account expiration date
The date of expiration of the account, expressed as the number of days since Jan 1, 1970.
Note that an account expiration differs from a password expiration. In case of an account expiration, the user shall not be allowed to login. In case of a password expiration, the user is not allowed to login using her password.
An empty field means that the account will never expire.
The value 0 should not be used as it is interpreted as either an account with no expiration, or as an expiration on Jan 1, 1970.
You can confirm this by reading the usermod.c source code:
case 'e':
if ('\0' != *optarg) {
user_newexpire = strtoday (optarg);
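You can also confirm the interpretation from the shell: -e 1 corresponds to one day after the epoch (GNU date assumed for the -d @SECONDS syntax).

```shell
# One day = 86400 seconds after the Unix epoch.
date -u -d @$((1 * 86400)) +%Y-%m-%d   # -> 1970-01-02
```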
| usermod question about option e |
1,495,530,814,000 |
GNU bash, version 4.3.42(2)-release-(i686-pc-msys)
Usage: bash [GNU long option] [option] ...
bash [GNU long option] [option] script-file ...
GNU long options:
--debug
--debugger
--dump-po-strings
--dump-strings
--help
--init-file
--login
--noediting
--noprofile
I notice there are two options for debugging my bash script. Since I'm new to bash, I really want a debugger or something like that.
I have searched the official documentation for bash 4.3, but I still don't know what the --debug long option is supposed to do.
|
Looking at the source code, --debug sets a debugging flag (in shell.c), but I can't find anywhere that flag is used. So it appears to not do anything.
--debugger activates support features for the bash debugger; the latter is probably what you're looking for.
| What the long option " --debug" suppose to do in bash? |
1,495,530,814,000 |
Some flac files apparently have a “cuesheet metadata block”. I know how to split flacs files with shnsplit when I have a separate cuesheet at hand (cf. “How do I split a flac with a cue?”), but how do I split a flac when the cuesheet is stored inside a metadata block of the flac file?
Command-line preferred.
|
By exporting the cue-sheet to a file first. For example, metaflac has an --export-cuesheet-to=FILE option.
From man metaflac:
Export CUESHEET block to a cuesheet file, suitable for use by CD
authoring software. Use '-' for stdout. Only one FLAC file may be
specified on the command line.
For example:
f='file.flac'
bn=$(basename "$f" .flac)
cue="$bn.cue"
[ ! -e "$cue" ] && metaflac --export-cuesheet-to="$cue" "$f"
shnsplit -f "$cue" -t '%n-%t' -o flac "$f"
| Splitting a flac from a cuesheet metadata block |
1,495,530,814,000 |
I need to run the same command every time I boot.
I'm running Linux Mint 17.2 Cinnamon.
I have added a startup application with the command under preferences "startup applications".
However the command does not run although it runs and works perfectly when I run it myself.
Details
The command is:
echo 'adminpassword' | sudo -S modprobe -v ndiswrapper
For some reason I need to run that every time I boot or ndiswrapper will forget I installed a windows wifi driver.
|
You do not need to run a program. You just need to configure your system to load the ndiswrapper module at startup.
For that you edit the file /etc/modules with the command:
sudo vi /etc/modules
And append to the end of it the line:
ndiswrapper
The file /etc/modules configures which loadable modules are automatically loaded.
For more details please read:
https://help.ubuntu.com/community/Loadable_Modules
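If you prefer doing the edit from a script, an idempotent append looks like this; it is shown against a scratch copy, since on the real system the target is /etc/modules and the append needs root:

```shell
modules=./modules.scratch
printf 'loop\n' > "$modules"   # pretend this is /etc/modules
# -x matches the whole line, -F takes the name literally; append only if absent.
grep -qxF 'ndiswrapper' "$modules" || echo 'ndiswrapper' >> "$modules"
grep -qxF 'ndiswrapper' "$modules" || echo 'ndiswrapper' >> "$modules"   # re-run: no duplicate
cat "$modules"
```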
| How do I enable a kernel module on startup? |
1,495,530,814,000 |
I'm planning to write a data cruncher command line tool, I'm considering Go, Rust and Javascript. But I'm afraid of jeopardising the project by using something not stable enough or painful to deploy.
Are there examples of widespread packages carried by many distributions written in Go?
And if I want to either write in javascript or evaluate javascript from another language, what are my options? I know there's node.js, but it looks like a huge dependency. Are there examples of UNIX tools written in Javascript?
Am I right to assume that Rust is not ready yet in the same sense?
|
I'm not aware of widely-used tools written in any of those languages. Node.js is not any bigger a dependency than perl or python, though perhaps not as commonly installed. I actually just installed node 0.12 yesterday on a Windows system with no trouble.
My question is - why not C, python, perl, php, awk, bash script, or tcl? All have long histories and are widely available.
| Languages to use for console tools - Go, Rust, Javascript? [closed] |
1,442,690,062,000 |
I tried to remove Blender on the command line, in order to install it anew because of flawed selection in edit mode. This is what I typed and the error message I got; I do not understand the error message.
sudo apt-get remove blender
[sudo] password for terazer:
E: Could not get lock /var/lib/dpkg/lock - open (11: Resource temporarily unavailable)
E: Unable to lock the administration directory (/var/lib/dpkg/), is another process using it?
|
As for understanding the message, the lock file is used to stop more than one package manager trying to modify the system at once. If this weren't here the system could easily become damaged beyond repair.
Therefore this normally means that another package manager is running at the same time. Do you have a graphical front-end open, like the Ubuntu Software Center or synaptic? As a general rule, you can't have the graphical apps open and use the terminal to install software at the same time. Try seeing if there is a dpkg utility running by running pgrep dpkg - if you see a list of process IDs that probably means dpkg is currently modifying the packages on the system.
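The locking idea itself can be sketched with flock(1) on a scratch file (illustrative only; apt/dpkg use their own fcntl-based lock on /var/lib/dpkg/lock rather than flock):

```shell
lock=./demo.lock
# Take the lock on file descriptor 9; -n fails immediately if another
# process already holds it, much like the apt error you saw.
( flock -n 9 && echo "lock acquired" ) 9> "$lock"
```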
| How to remove Blender in the command line ? |
1,442,690,062,000 |
I have a directory looks like:
dhcp-18-189-47-44:CE-06-new-stuctures_backup wenxuanhuang$ ls
DFT-00001 DFT-00004 DFT-00007 DFT-00010 DFT-00013 DFT-00016 DFT-00019 DFT-00022 DFT-00025 DFT-00028 DFT-00031 DFT-00034
DFT-00002 DFT-00005 DFT-00008 DFT-00011 DFT-00014 DFT-00017 DFT-00020 DFT-00023 DFT-00026 DFT-00029 DFT-00032
DFT-00003 DFT-00006 DFT-00009 DFT-00012 DFT-00015 DFT-00018 DFT-00021 DFT-00024 DFT-00027 DFT-00030 DFT-00033
And inside each folder there is a file called Li?Fe?O?_0; however, some of the names might overlap, for example:
dhcp-18-189-47-44:CE-06-new-stuctures_backup wenxuanhuang$ ls DFT-00001/
Li1Fe5O6_0
dhcp-18-189-47-44:CE-06-new-stuctures_backup wenxuanhuang$ ls DFT-00002/
Li1Fe5O6_0
dhcp-18-189-47-44:CE-06-new-stuctures_backup wenxuanhuang$ ls DFT-00010/
Li2Fe4O6_0
Now, I want to extract the subfolder out into another directory, The first attempt I try is:
find `pwd` -mindepth 1 -maxdepth 1 -type d -exec sh -c "echo {}; cd {}; ls; cp -r * /Users/wenxuanhuang/Desktop/software/CASM_NEW/LiFeO_from_Alex_2015_08_25/LiFeO2-CE/02-refinement/CE-06-new-stuctures_extracted" \;
However, due to naming conflicts, some of them will overwrite each other. What I want is: if they conflict, rename the file to something non-conflicting and copy it in.
Ideally, suppose Li1Fe5O6_0 is already in the new folder and I am going to copy another Li1Fe5O6_0 into it; I would like to rename the new one Li1Fe5O6_1 and copy that Li1Fe5O6_1 inside (in the future, we might have Li1Fe5O6_1, Li1Fe5O6_2, Li1Fe5O6_3, etc.). But if this version of the code is too tedious, then it doesn't matter...
|
This should do:
#!/bin/bash
# this is the crucial setting: replace a glob pattern that matches zero files
# with nothing (the default is to *not* replace the pattern at all)
shopt -s nullglob
destination=/some/directory
unique_filename() {
local root=${1%_*}_
local files=( "$destination/$root"* )
echo "$destination/${root}${#files[@]}"
}
cd /wherever/you/need/to/go
for f in */Li?Fe?O?_0; do
echo mv "$f" "$(unique_filename "$(basename "$f")")"
done
It works by counting the number of files in the destination directory matching, for example, "Li1Fe5O6_*". If there are none, use "Li1Fe5O6_0". If "Li1Fe5O6_0" is already present, the $files array will have one element, so the unique filename will then be "Li1Fe5O6_1"
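A quick check of that counting logic in a scratch directory (bash is needed for the array and nullglob features; ${#files[@]} is the element count):

```shell
bash -c '
  shopt -s nullglob
  destination=./dest; mkdir -p "$destination"
  unique_filename() {
    local root=${1%_*}_
    local files=( "$destination/$root"* )
    echo "$destination/${root}${#files[@]}"
  }
  unique_filename Li1Fe5O6_0            # no match yet  -> ./dest/Li1Fe5O6_0
  touch "$destination/Li1Fe5O6_0"
  unique_filename Li1Fe5O6_0            # one match now -> ./dest/Li1Fe5O6_1
'
```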
| extract subfolders outside and rename if necessary |
1,442,690,062,000 |
I would like to run multiple processes where each process runs another one, so it will create a long process chain.
I've tried:
$ bash -c '(bash -c "sleep 20"); sleep 20' &
$ pstree $(pgrep -fn bash)
which creates a parent and one child process, but is there any trick or easier way of generating a further 10-20 processes down the chain without struggling with the syntax too much?
|
You can create a recursive script, e.g. in the file /tmp/run:
#!/bin/bash
depth=${1:-5}
f(){
let depth--
if [ $depth -gt 0 ]
then $0 $depth
else sleep 10
fi
}
f
then chmod +x /tmp/run and do /tmp/run 10.
| How to run bunch of hierarchical dummy shell processes (process of another process, etc.)? |
1,442,690,062,000 |
Given a large list of files, containing the following:
FILE1.doc
FILE1.pdf
FILE2.doc
FILE3.doc
FILE3.pdf
FILE4.doc
Is there a terminal command that would allow me to remove all files that do not have a duplicate name in the list? In this case...FILE2.doc and FILE4.doc?
|
Using bash, this will remove all files that don't have another file with the same name but different extension:
for f in *; do same=("${f%.*}".*); [ "${#same[@]}" -eq 1 ] && rm "$f"; done
This approach is safe for all file names, even those with white space in their names.
How it works
for f in *; do
This starts a loop over all files in the current directory.
same=("${f%.*}".*)
This creates a bash array with the names of all files with the same basename.
$f is the name of our file. ${f%.*} is the name of the file without its extension. If, for example, the file is FILE1.doc, then ${f%.*} is FILE1. "${f%.*}".* is all the files with the same basename but any extension. ("${f%.*}".*) is a bash array of those names. same=("${f%.*}".*) assigns the array to the variable same.
[ "${#same[@]}" -eq 1 ] && rm "$f"
If there is only one file with this basename, we delete it.
"${#same[@]}" is the number of files in the array same. [ "${#same[@]}" -eq 1 ] is true if there is only one such file.
&& is logical-and. It causes the statement which follows, rm "$f" to be executed only if the statement which precedes it returns logical true.
done
This marks the end of the for loop.
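Here is the loop run against the question's file list in a scratch directory (bash is required for the array syntax):

```shell
bash -c '
  mkdir -p demo && cd demo
  touch FILE1.doc FILE1.pdf FILE2.doc FILE3.doc FILE3.pdf FILE4.doc
  for f in *; do same=("${f%.*}".*); [ "${#same[@]}" -eq 1 ] && rm "$f"; done
  ls   # FILE2.doc and FILE4.doc are gone; the paired names survive
'
```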
| Delete all files that DON'T have duplicate names? |
1,442,690,062,000 |
I have two related questions about Bash.
(Q1) Consider tail -f SomeFile | wc, a fictitious command-line, where tail is used to simulate a command (C1) which runs for a long time, with some output from time to time, and wc is used to simulate a command (C2) which processes that output when C1 finishes. After waiting for a long time (longer than usual) I want to see what output has been generated till now, so I want C1 to terminate. But pressing Ctrl-C will kill this whole pipeline. How can I kill only C1 ( or even a component of C1, if that is itself a compound command ) ?
If C1 had been looping over many files and grepping some text, but one file was from some hung nfs server, then I want to kill only that grep process.
(for F in A B C D E ; do grep sometext $F ; done) | wc
Here Ctrl-C will kill the whole command-line, but I want to kill only the currently running (or hung) process and continue with remaining files.
One solution I have, is to open a new connection, get the ps output, and "kill" it. I was wondering if there was a solution from Bash itself, such that some strange key-combination kills only the current process ?
(Q2) While trying to make examples for this question, I made this command-line, where, if I press Ctrl-C, I get an extra line of output, like this:
# echo `ping 127.0.0.1` | wc
^C
#
When backticks (``) are not used, the extra line is not there:
# tail -f SomeFile | wc
^C
#
Am I correct in thinking that since backticks (``) are handled by bash itself and, when the sub-process is killed, it is still considered as "empty output", so that is printed as the extra line ?
|
In bash you can run:
cmd1 | cmd2 | (trap '' INT; cmd3)
And a Control-C will only kill cmd1 and cmd2, but not cmd3.
Example:
$ while sleep .1; do echo -n 1; done | (trap '' INT; tr 1 2)
^C222222222
$ while sleep .1; do echo -n 1; done | tr 1 2
^C
This takes advantage of the fact that a signal disposition of "ignore" is inherited by subprocesses -- the trap '' INT will also affect the tr command. But of course, some commands install their own SIGINT handlers, which will break this assumption.
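The inheritance can be checked in isolation: the child sends SIGINT to itself and survives, because it inherited the parent's "ignore" disposition.

```shell
# The outer shell ignores SIGINT; the inner shell inherits that disposition,
# so its self-addressed kill -INT has no effect.
bash -c 'trap "" INT; bash -c "kill -INT \$\$; echo survived SIGINT"'
# -> survived SIGINT
```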
Unfortunately, this doesn't work in ksh93 because of a stupid bug. A workaround there could be:
ksh93$ while sleep .1; do echo -n 1; done | sh -c 'trap "" INT; exec tr 1 2'
^C222222222ksh93$
| How to kill only current process and continue with shell pipe-line? |
1,442,690,062,000 |
I was reading this link from MySQL:
https://dev.mysql.com/doc/refman/5.6/en/mysql-cluster-install-linux-binary.html
shell> cp support-files/mysql.server /etc/rc.d/init.d/
shell> chmod +x /etc/rc.d/init.d/mysql.server
shell> chkconfig --add mysql.server
In the mysql.server file the content says:
PATH=$PATH:/usr/local/SomeDir/mysql/bin
export PATH
But checking the $PATH variable, /usr/local/SomeDir/mysql/bin was not added.
Now, I was looking for the proper solution of this.
I found these links:
https://stackoverflow.com/questions/10235125/linux-custom-executable-globally-available
Edit your .bashrc to add the desired directory on the PATH environmental variable.
export PATH=/usr/local/google_app_engine/bin:$PATH
then, either start new terminal or do,
source ~/.bashrc
Now try to run the script from anywhere.
How can I make a program executable from everywhere
If you just export PATH=$PATH:. at the command line it will only last for the length of the session though.
If you want to change it permanently add export PATH=$PATH:. to your ~/.bashrc file (just at the end is fine).
https://rajesh9333.wordpress.com/2013/09/12/mysql-binary-installation-on-linux-redhat-and-centos-servers/
Create a file with the name of mysql.sh at the path /etc/profile.d/
# vi /etc/profile.d/mysql.sh
#!/bin/sh
PATH=$PATH:/usr/local/mysql/bin
export PATH
http://sgdba.blogspot.com/2013/08/install-mysql-56-binary-on-centos-64-64.html
[root@CentOS ~]# echo "export PATH=$PATH:/usr/local/mysql/bin" >>/etc/profile
[root@CentOS ~]# source /etc/profile
Question
In CentOS 6.x what is the proper place (path or location) to put this file with instruction?
Comment: Maybe the solutions above work, but my question is how I MUST place these instructions. My question is about style...
Thank you
|
The first part of this question refers to a how-to which is NOT about making a program available from everywhere; it is about making the program available from that particular init script, which is the correct solution for such a task.
The second part lists the correct solution to make it available from everywhere. If you look at the manual page for bash(1), you can see the difference in the descriptions:
/etc/profile
The systemwide initialization file, executed for login shells
~/.bashrc
The individual per-interactive-shell startup file
So the difference is
when the file is loaded: shell startup OR login
if it is available for the specific user OR for all users
The profile.d version differs only in that you write it in a separate file; it has the same meaning as putting it into /etc/profile.
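What a profile.d drop-in does can be simulated by hand: login shells source every *.sh file in /etc/profile.d/, so this sketch writes the drop-in to a scratch file and sources it directly:

```shell
# The drop-in, as in the question (scratch location instead of /etc/profile.d/).
cat > mysql.sh <<'EOF'
PATH=$PATH:/usr/local/mysql/bin
export PATH
EOF
. ./mysql.sh
# Confirm the directory is now on PATH.
case ":$PATH:" in
    *':/usr/local/mysql/bin:'*) echo 'PATH updated' ;;
esac
```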
| How must (Not how can) a program be available from everywhere in CentOS/Fedora |
1,442,690,062,000 |
I want to write a Script that picks a random English word from /usr/share/dict/words, translates it into German, display both of them for a certain amount of time and repeat the process. I only know the beginning part and do not know how to use a word to word translation in the shell:
watch -n5 sh -c 'cat /usr/share/dict/words | shuf -n1 | .....'
|
Download Translate Shell
cd
wget https://github.com/soimort/google-translate-cli/archive/master.tar.gz
tar -xvf master.tar.gz
cd google-translate-cli-master/
Install
use make
sudo make install
OR
use checkinstall
sudo apt-get install checkinstall
sudo checkinstall
If you see this: 3 - Version: [ master ], then
Press 3
Enter a number, e.g. 20150330
Press Return
Translate with
cat /usr/share/dict/words | shuf -n1 | tee >( xargs -0 trs {=de} | xargs echo) | xargs -0 echo -n
Your command
watch -n5 'bash -c "cat /usr/share/dict/words | shuf -n1 | tee >( xargs -0 trs {=de} | xargs echo) | xargs -0 echo -n"'
| How can I translate in the CLI an English word into a German one? |
1,442,690,062,000 |
sudo apt-get install anjuta
This is the error I get while installing the IDE. What is this error about? I am, however, able to successfully install the IDE:
Unknown media type in type 'all/all'
Unknown media type in type 'all/allfiles'
Unknown media type in type 'uri/mms'
Unknown media type in type 'uri/mmst'
Unknown media type in type 'uri/mmsu'
Unknown media type in type 'uri/pnm'
Unknown media type in type 'uri/rtspt'
Unknown media type in type 'uri/rtspu'
|
It is caused by a KDE library adding custom MIME types.
Fix it with:
sudo mv /usr/share/mime/packages/kde.xml /usr/share/mime/packages/kde.xml.bak
sudo update-mime-database /usr/share/mime
| Error I get when I try install IDE Anjuta |
1,442,690,062,000 |
Assume someone searches for files that match some criteria using the find command and pipes the result into the rm command.
find / -type f -name "*.jpg" | xargs rm
First, I am not sure whether the command above will fail if the number of found files is very big, because I guess there is a limit to the number of arguments a command like rm can take.
Then, I know that I can set how many files are passed as arguments at once to rm. xargs with -n3 would, for instance, pass three arguments to rm, execute the command, and repeat until all files have been passed as arguments.
Now, does it make any difference what value I choose for the -n option of xargs? Do these command lines differ in some aspect, like, for instance, the duration of execution:
find / -type f -name "*.jpg" | xargs rm
find / -type f -name "*.jpg" | xargs -n1 rm
find / -type f -name "*.jpg" | xargs -n2 rm
find / -type f -name "*.jpg" | xargs -n3 rm
.
.
.
find / -type f -name "*.jpg" | xargs -n999 rm
|
The limitation you mentioned is generally about the exec buffer used, and is not specific to individual commands. The purpose of xargs is exactly to address that problem; xargs will take as many arguments as possible to feed to the command. This yields the fewest command invocations and thus good performance. Reducing the number of arguments per invocation by explicitly specifying -n will not gain anything in this respect. You can use -n e.g. in cases where the command expects a specific number of arguments, with the special case that the command can be called once for each argument. (Note that find also has an -exec option with a special terminator + to facilitate a similar behaviour.)
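You can watch the batching with echo standing in for rm: five names in batches of two means three invocations.

```shell
# Each echo run shows one batch of arguments handed over by xargs.
printf '%s\n' a.jpg b.jpg c.jpg d.jpg e.jpg | xargs -n2 echo
# a.jpg b.jpg
# c.jpg d.jpg
# e.jpg
```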
| Makes it a difference whether I use a different -n flag value for xargs in combination with find command ? |
1,442,690,062,000 |
I know the > and >> operators, > puts something into a file by deleting the old content, >> appends something onto the end of the file's content.
When I only use
cat > foo
I am able to write several lines into foo interactively until I hit Ctrl+Z; then it ends.
Now, if I add the append operator in the reverse order with some word of my choice
cat > foo << "wordofchoice"
I can do the same as in the first case, but this time a > prompt appears at the beginning of each line and I cannot end the interactive input with Ctrl+Z, but only by typing wordofchoice (without the double quote signs).
Why is this so? What is the logic behind this expression? The << operator does not seem to have its appending meaning here.
|
$PS2
Each time the user enters a \newline prior to completing a command line in an interactive shell, the value of this variable shall be subjected to parameter expansion and written to standard error. The default value is >.
You see the > because that is the default value for $PS2 - which is the prompt string the shell will print when it cannot properly delimit a command string that contains a newline.
Your command string cannot be delimited because you have not closed a quoted string:
The various quoting mechanisms are the escape character, single-quotes, and double-quotes. The here-document represents another form of quoting; see Here-Document.
The shell's parser tokenizes input as it reads it - line by line. You have sent it a \newline - which also means your terminal has just sent it a copy of the command so far - but it detects at least one open-ended quoted string - in other words, an incomplete command - and so prompts you for the rest of it.
If the current character is \backslash, 'single-quote, or "double-quote and it is not quoted, it shall affect quoting for subsequent characters up to the end of the quoted text. The rules for quoting are as described in Quoting. During token recognition no substitutions shall be actually performed, and the result token shall contain exactly the characters that appear in the input (except for \newline joining), unmodified, including any embedded or enclosing quotes or substitution operators, between the quotation-mark and the end of the quoted text. The token shall not be delimited by the end of the quoted field.
The here-document is a special form of quoting in that it is also a redirection - the shell redirects your input, as delimited by the open token <<heredoc_delimiter_string\n and the expected close token \nheredoc_delimiter_string\n, to the command's standard input.
The here-document shall be treated as a single word that begins after the next \newline and continues until there is a line containing only the delimiter and a \newline, with no blank characters in between. Then the next here-document starts, if there is one. The format is as follows:
[n]<<word
here-document
delimiter
The shell behaves similarly if you submit any other shell token which implies a second, closing token to the shell's parser but hand it a \newline before that end token arrives. $PS2 will print, for example, if you start a for loop and enter a \newline without having entered done, or open one of the quotes already mentioned.
Another way to put a \newline in a command is to quote it with the terminal - you can usually literally quote the next input character with CTRL+V. If you do CTRL+V then CTRL+J (or \newline) you can typically enter a literal \newline into the command string without the terminal sending the shell a copy of your input immediately - because usually the terminal sends it line-by-line, but when you CTRL+V quote the newline delimiter the shell won't yet receive it and so won't prompt you for more input.
This does not imply, however, that the terminal quoted newline is also properly quoted for the shell - you'd need to do that with shell quotes as applicable - but it will prevent the command string being read, at least, until you're ready to submit it.
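Run non-interactively (so no $PS2 prompts appear), the whole exchange looks like this; everything between the << token and the delimiter line becomes cat's standard input:

```shell
cat > foo << "wordofchoice"
first line
second line
wordofchoice
cat foo    # prints the two lines that were redirected into it
```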
| What is the logic behind the expression cat > foo << "wordofchoice" [duplicate] |
1,442,690,062,000 |
After editing context menu entries (by editing files in /usr/share/contractor) I have to restart the file manager to see the changes.
How to restart Pantheon Files with a command so that I don't have to log out and in?
|
Run the command pkill pantheon-files in the terminal. The pkill command sends a signal to terminate the process named pantheon-files; a process is a running instance of a program. Reopen the file manager by clicking its icon on the dock or in the application menu; clicking the icon starts a new process for the program.
| Command to restart Pantheon Files manager (elementary OS)? |
1,442,690,062,000 |
I have create a gerber file and it has lines inside which look like this:
X90095Y15350
X100095Y15350
X110095Y15350
X120095Y15350
X120095Y25350
X110095Y25350
X100095Y25350
X90095Y25350
X80095Y25350
X160095Y25350
These are coordinates of the drill path - mind the X and Y. Number after the X and Y varies - it is sometimes a 6-digit and sometimes 5-digit.
Now I need a script which would first identify if the line starts with an X and then use sed or any other Linux tool to convert this into a format where X and Y are both 4-digits long so that line:
X160095Y25350
becomes:
X1600Y2535
Can anyone help me? I am a bit too weak in regular expressions to solve this myself... If there is any other command which would do this, that would also be fine.
ADD
The answer provided is working OK and I could use it to reformat my file. After uploading it into my application I noticed that we did a wrong conversion.
I am sorry for this, but now I know what needs to be done. So If a line starts with an "X" we only have to remove the last digit of numbers after X and Y. For example if I have:
X90095Y15350
X100095Y15350
X110095Y15350
I need to get the last digit of every position (X and Y) removed:
X9009Y1535
X10009Y1535
X11009Y1535
Could someone help me with this one. =/
|
Try this:
sed 's/\(^X[[:digit:]]\{4\}\)[[:digit:]]*\(Y[[:digit:]]\{4\}\)[[:digit:]]*/\1\2/'
If the line starts with X (^X), then all digits that are not captured inside the parentheses \(\) (everything beyond the first four digits after X and after Y) are deleted.
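For the updated requirement in the ADD section (drop only the last digit after X and after Y), a sketch along the same lines, written with sed -E for readability (use the escaped \(...\) form if your sed lacks -E):

```shell
sed -E '/^X/ s/^X([0-9]+)[0-9]Y([0-9]+)[0-9]$/X\1Y\2/' <<'EOF'
X90095Y15350
X100095Y15350
EOF
# -> X9009Y1535
#    X10009Y1535
```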
| Coordinates X237595Y81600 to X2375Y8160 |
1,442,690,062,000 |
I'm using diff -rq to compare directories on a Mac OS X 10.6 server. In folder1, we have 'filename'. In folder2, we have 'filename.archive' because folder2 is created by a file sync app that also archives the files it syncs.
We've just pulled folder1 from a backup and I want to see what, in folder2, is more recent and could be copied over.
How would I do this? Specifically I'm asking how to do the diff in such a way that 'file' is evaluated equivalent to 'file.archive', and preferably by file content or checksum, not filename.
|
You cannot do this with "just" diff as the filenames do not match in both directories and diff doesn't have anything built in to do the mapping.
What you can do is change to the "new" directory (the one with the .archive names) and do:
for i in *.archive; do diff "$i" /path/to/other/dir/"${i%.archive}"; done
or, change to the restored backup dir (plain names) and do:
for i in *; do diff "$i" /path/to/other/dir/"$i".archive; done
If this extends to a whole directory structure this becomes somewhat more complex. If the originals are under a and the backup restored temporarily under backup/b:
.
├── a
│ ├── p
│ │ ├── x
│ │ ├── y
│ │ └── z
│ └── q
│ ├── 1
│ ├── 2
│ └── 3
└── backup
└── b
├── p
│ ├── x.archive
│ ├── y.archive
│ └── z.archive
└── q
├── 1.archive
├── 2.archive
└── 3.archive
You can save the following script as todiff.py and run it with python todiff.py a backup/b > todiff
(the todiff is not listed in the above tree structure).
import sys
import os

base = sys.argv[1]
if base[-1] != '/':
    base += '/'  # the trailing / gets stripped together with base by replace() below
backup = sys.argv[2]

for root, directory_names, file_names in os.walk(base):
    for file_name in file_names:
        full_name = os.path.join(root, file_name)
        bup_file_name = os.path.join(backup, full_name.replace(base, '', 1))
        bup_file_name += '.archive'
        # adjust the diff options to your need in the following line
        print('diff -u "{}" "{}"'.format(full_name, bup_file_name))
This will generate a file of the form:
diff -u "a/q/3" "backup/b/q/3.archive"
diff -u "a/q/1" "backup/b/q/1.archive"
diff -u "a/q/2" "backup/b/q/2.archive"
.
.
which you can then source with: source ./todiff
| Diff: slightly different filenames, same file |
1,442,690,062,000 |
I am developing an API in Unix environment for virtual machines. Most of the modules are developed in python. I have few questions on this.
I want to make this as an installable API, I mean to install through apt-get/ yum install. In terms of yum I have to create a repository and place the module in a FTP location. But I didn't get the complete picture about that. What are the steps I need to do to achieve this or some reference URL's would be helpful.
|
You should approach this as a two step process.
First make your API installable using pip by creating a setup.py file for the project and using setuptools. There is quite a lot to go through, and I recommend you follow some of the examples out there to get through the steps before you start tweaking to make your own project installable this way. Extensions in e.g. C that need compilation can be included.¹
Once that works correctly, you can use stdeb to build a Debian installable package (.deb) from it.
The facilities for building an rpm are built-in, but require the rpm utility to be available.²
¹ If you get confused about setuptools, distutils, distribute, etc., welcome to the club. Read this stackoverflow answer for some comparison and history, and put this under your pillow
² It takes some care to get this to work, it is possible to have a working setup.py that doesn't work (well) as .rpm or .deb. Start with the working examples and go from there
| Make python module as an installable API |
1,442,690,062,000 |
After performing a series of actions in Linux's terminal, I am supposed to perform a "listing" of them.
I couldn't find it online. All I can guess is that I am supposed to make the terminal print out a list of recently performed actions.
What command is used for doing something like this?
|
An “action in Linux's terminal” is presumably a command that you typed at a shell prompt.
Typical shells record a history of past commands. In bash, the commands fc and history both display the history of recent commands; see the manual for available options. You can also navigate among past commands by pressing Up and Down, and search with Ctrl+R and Ctrl+S.
| What is (and how to perform) action listing? |
1,442,690,062,000 |
Suppose I have a file. This file is stored in some complex location (within a hundred subfolders), but I nevertheless use it a lot. Is there some way to assign this file a set of keywords (i.e., "my favorite file") and then to input those keywords later on in some natural language processor (eiher a command line interface or a voice recognition software) to open that file? Like I might type "open favorite file" into the command line, or I might say "Open my favorite file" in the voice recognition software.
Does such a service exist?
|
Yes it does:
Create a link to it as explained by @emory.
Store the path in a shell variable. Add this line to your shell's initialization file (~/.bashrc if you're using bash):
myfile="/absurdly/long/path/that/you/would/rather/not/type/every/time"
Then, from the commandline, you can use $myfile as though it were the actual file name:
$ echo $myfile
/absurdly/long/path/that/you/would/rather/not/type/every/time
$ cat > $myfile
This is so much easier now!
$ cat $myfile
This is so much easier now!
If you use the file for a specific purpose, for example, you simply cat it to your terminal, then you could also set up an alias that does the same thing. Add this to your shell's initialization file:
alias myfile='cat /absurdly/long/path/that/you/would/rather/not/type/every/time'
Then, just run it:
$ myfile
This is so much easier now!
| Assigning files key words to easily find them later |
1,442,690,062,000 |
I executed the minimal building directions of xcape:
$ sudo apt-get install git gcc make pkg-config libx11-dev libxtst-dev libxi-dev
$ mkdir xcape
$ cd xcape
$ git clone https://github.com/alols/xcape.git .
$ make
But when I type xcape it says xcape: command not found. It errors even when I'm in the xcape folder, which contains a program that seems to be called xcape. Why is this?
|
Have you tried ./xcape? You have to execute it this way because the current directory is probably not in your $PATH variable.
| Installing Xcape (question involving the "make" command) |
1,442,690,062,000 |
I have been using, studying and developing for Linux for quite a few years now, and over time, I have successfully convinced a few people to join the cause, and make the switch to Linux.
However, when these people went discovering this brand new world, every one of them eventually came to me, asking what the command was to do this or that:
What's the command to install a piece of software?
What's the command to list the current directory contents?
What's the command to open my web browser?
...
To me though, it feels like they are asking the wrong questions. Indeed, I would actually expect them to ask:
What program do we use to manage software installed on the system?
What program can list current directory contents?
What are the available web browsing programs?
...
It would seem to me that the term command is a very abstract word used to describe pretty much everything you type after your shell prompt. With this thinking, several of the newcomers I brought started to believe that:
Linux is made of a huge amount of commands.
There is a command for everything, and every command does one thing specifically.
By learning more commands, I'll become more capable as a Linux user/administrator.
By running ./something I don't run a command, but a program stored in my current directory, probably something I compiled myself.
...
Due to that, I tend not to use the word command, and prefer to use program or executable file. I also explain that these executable files can be found automatically, without typing their absolute paths, thanks to the $PATH mechanism. Thanks to that, typing ls executes /bin/ls, which is a program stored in /bin, a directory registered within the $PATH variable.
Still, I come across the word command in many books, tutorials, guides, ... which eventually brought this question to my mind: does the word command actually have a meaning in the UNIX/Linux vocabulary? Does it refer to an actual concept other than "whatever you type in your terminal"?
|
The POSIX shell spec refers to simple commands, compound commands, command position, and the command command.
The first two reference how/when a shell command is parsed and executed. For instance:
#one compound command
{ simple command; simple command && simple command; }
The compound command is how much the shell must read in as input before evaluating and executing its constituent simple commands.
command position is the first word in each simple command and it has special significance in that it determines how the rest of the command is to be parsed. Certain reserved words, (such as if and while), when in command position, can indicate to the parser that what follows should be interpreted as a compound command and not merely a simple command.
The command command can serve to move a command out of command position. For instance, these two would be executed identically, but potentially parsed differently:
command simple command
simple command
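A concrete illustration of the difference this makes: a function (or alias) found in command position shadows the utility of the same name, while command forces a plain utility lookup:

```shell
ls() { echo "shadowed"; }   # a function now occupies the name "ls"
ls                          # command position resolves to the function: prints "shadowed"
command ls > /dev/null      # bypasses the function and runs the real ls utility
unset -f ls                 # clean up
```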
| What is the concept hidden behind the term "command"? |
1,442,690,062,000 |
It sounds really simple, but many times I was stuck at the beginning of learning a new command. Tutorial writers think the starting point is obvious, but it's not.
This is the tutorial I've been reading:
http://code.tutsplus.com/tutorials/scheduling-tasks-with-cron-jobs--net-8800
After reading that, I started something like:
* * * * * /usr/bin/sh t.sh
And I got this error:
bxg: command not found
bxg is an empty file I created on the directory to debug, the tree graph is like this:
.
├── bxg
└── t.sh
Inside t.sh, the content is:
a=$(date)
mkdir "$a"
Obviously, bash didn't recognize this as a cron command.
Then I searched for the cron process to see if it is already started:
ps aux | grep cron
root 942 0.0 0.0 3056 908 ? Ss 10:34 0:00 cron
zen 4924 0.0 0.0 4652 568 pts/1 T 11:54 0:00 grep --color=auto cron
zen 9722 0.0 0.0 4656 568 pts/1 S+ 12:05 0:00 grep --color=auto cron
It seems to be running, but another tutorial says the process is called crond.
I tried the followed:
service crond start
crond: unrecognized service
Maybe the process I grepped out with "ps aux" isn't the authentic process:
service cron start
start: Job is already running: cron
OK, I totally got no ideas now, friends help me!
|
Run crontab -e.
It will open your crontab file in an editor; add your line at the end:
* * * * * /usr/bin/sh t.sh
Cron lines belong in the crontab file, not at the shell prompt. Typed directly into bash, each * is glob-expanded against the current directory, so the first word became your bxg file, which is why bash reported bxg: command not found.
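Also note that cron runs jobs with a minimal environment and not from the directory where you edited the files, so absolute paths plus a log redirection make the entry much easier to debug. A hypothetical variant of that line (paths are examples):

```
* * * * * /bin/sh /home/zen/t.sh >> /home/zen/cron.log 2>&1
```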
| How to start a cron task? |
1,442,690,062,000 |
In case there's a better solution to the problem, my goal is to be able to quickly analyze ALL recent apache activity in /usr/local/apache/domlogs. The usefulness of this is when the server load is going through the roof I want to quickly see which website(s) are getting hammered (and what URLs) without having to individually grep/awk everything in domlogs.
For example, I know I can get a list of the very recently active apache access domlogs using this command within the domlogs directory:
find -regex '.*\.\(com\|org\|net\|biz\|info\)' -mmin -1
Now what I'd like to do is concatenate the tail 1000 lines of each of those files into a new file so I can quickly analyze it and see where all the traffic is coming from.
|
You can use the -exec option:
find -regex '.*\.\(com\|org\|net\|biz\|info\)' -mmin -1 \
    -exec tail -n 1000 {} + >> logs.txt
Now the last 1000 lines of each matching file in domlogs are written to logs.txt; tail prints a ==> filename <== header before each file's output.
-exec command {} + tells find to run command with the files found; the command line is built by appending each filename at the end. This option works like a combination of find piped to xargs.
See more:
POSIX find documentation
| For a given directory, how do I concatenate the tail end of recently modified files to a new file? |
1,442,690,062,000 |
Is it possible to start an xfreerdp session into Microsoft windows from a command-line only install of Linux?
The command I use from a full blown Linux install is this:
$ sudo xfreerdp /v:farm.company.com /d:company.com \
/u:oshiro /p:oshiro_password /g:rds.company.com
This command works fine. However, when I run the same command from a command-line install of Linux, I get the following error message:
Warning xf_GetWindowProperty (client /X11/xf_window.c:178): Property 340 does not exist
|
If you're just logged into a system that doesn't have an X desktop running then no, you won't be able to make use of xfreerdp or any such application that requires a GUI.
Remember that the X desktop is driving the video card & monitor locally and is providing a basis (the X protocol) through which other graphical applications can display their GUIs as well. Without it, applications such as xfreerdp have no way to access the display.
If you're familiar with the DOS/Windows model then think of trying to run a Windows application directly from DOS. This wouldn't be possible here either. There are libraries and services that Windows provides APIs to, which applications then utilize.
This is the tradeoff one makes when developing an application for a given environment vs. developing it as a standalone entity that can interact with a given system's hardware directly.
| Logging into an RDS session from the command-line |
1,442,690,062,000 |
I use to establish a connection to my VPN PPTP server by typing:
pon MyVPN persist
I want to know the reconnection timing details, but I have not found documentation enough about it. Specifically:
- If there is a disconnection, does the PPTP client attempts to reconnect immediately?
- What is the frequency of reconnection attemps (5 mins, 15 mins, variable... etc)?
- The client stop trying to reconnect after some too long period (give-up time) without succeeding (6 hours, 1 day, etc)?
|
As far as I can see the reconnection is immediate (although it sometimes takes some time for pppd to detect that the connection is broken).
I believe there's no delay between retries, but you can set one with holdoff n, where n is the number of seconds between the disconnection and the next attempt to reconnect.
As for the number of retries, it defaults to 10, but you can change it with the maxfail option.
You can find all the options by looking at man pppd.
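For reference, the same options can be set in the peer file read by pon (the values below are examples; see man pppd for the exact semantics):

```
# /etc/ppp/peers/MyVPN (excerpt)
persist      # keep trying to reopen the link after a disconnect
holdoff 30   # wait 30 seconds before each reconnection attempt
maxfail 0    # 0 = retry forever; the default is 10 consecutive failures
```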
| What is the reconnection policy of "persist" switch on "pon" command from pptp-linux package? |
1,387,291,572,000 |
When I cd to a/b/c, sometimes I want to see what are the children dirs of c. Normally, I would do it like this
cd a/b/c
ls
child1 child2
cd child1
Is there any way to investigate the children without stopping to do the ls?
|
Do cd a/b/c and hit Tab several times. It works with Bash if you enable the completion.
| Is there any way to see the options for the next directory on a cd command? |
1,387,291,572,000 |
I started recently to learn using Linux. I want to write a script in bash shell to run a serial job on a cluster. I have been searching for hints and instructions on how to write such a script. I managed to write the following:
#!/bin/bash
#PBS -l nodes=1:ppn=8
#PBS -l walltime=1:00:00
#PBS -e test.err
#PBS -o test.log
cd /home/myuser/
echo "Running on host `hostname`"
echo "Time is `date`"
echo "Directory is `pwd`"
/home/myuser/comsol4.3b/bin/comsol batch -inputfile DF.mph -outputfile output.mph -batchlog out.log
The last line calls the program (it is called COMSOL) to run. One of the things that are missing in my code is the command “qsub”; I don’t know how to incorporate it with the rest of commands in my code. Can anyone please instruct me how to modify the line that calls the program I am using by “qsub” command please?
Sorry if my question is so simple, I spent a lot of time trying to make it work but I couldn’t unfortunately.
EDIT: To summarize, if I typed the last line of the code ONLY it will be run on the main node only, I don't want that. What I want is to modify the last line of the code such that it uses the resources I allocated to it using PBS lines, how should I do that?
For the record, the command qhost shows there are 250 nodes with varying cores per node from 4 cores/node to 16 cores/node
|
You don't specifically say which clustering software you're using, but based on the fact you're asking about qsub, I know that both GridEngine (and derivatives) along with PBS use that particular command, so let's start with those.
I'm most familiar with GridEngine (and derivatives) so to submit a command using that package you'd do something like this.
Example
Here's a sample script, we'll call it sample.bash.
#!/bin/bash
echo "Working directory is $PWD"
To submit this script, you do the following:
$ qsub sample.bash
To target specific nodes within the cluster you'll need to include attributes that are unique to a set of these nodes, so that the GridEngine scheduling software can pick one of these nodes, and run your job on one of them.
| Script for running sequential job |
1,387,291,572,000 |
I want to create dump file of sqlite database from java but I can't run the below command from linux. I got the help of windows and I tried with linux but it appears one terminal nothing more than that. So can anyone help me how to write this command in linux?
Process p = Runtime.getRuntime().exec("cmd /c start /b sqlite3 db.sqlite .dump > dump.txt");
My implementation in Linux is below:
String command= "/usr/bin/xterm";
Runtime rt = Runtime.getRuntime();
Process p = rt.exec(new String[]{command , "/home/ubuntu/test.sqlite .dump > /home/ubuntu/newsql111.sql"});
Can anyone tell me where is my mistake?
|
I found the solution:
Process p = rt.exec(new String[]{"/bin/sh", "-c", "sqlite3 /home/ubuntu/testingdb.sqlite .dump > /home/ubuntu/success11.sql"});
It gives full dump file of sqlite database..
| how to run .dump command in linux? [closed] |
1,387,291,572,000 |
I'm trying to get an itemized list of files transferred by rsync. Using the -i and --out-format options, I'm instead ending up with output that looks like this:
3HMP1MO_001.tif
974.68K 100% 9.58MB/s 0:00:00 (xfer#1, to-check=8/10)
3HMP1MO_002.tif
974.68K 100% 2.27MB/s 0:00:00 (xfer#2, to-check=7/10)
3HMP1MO_003.tif
974.67K 100% 1.33MB/s 0:00:00 (xfer#3, to-check=6/10)
3HMP1MO_004.tif
974.65K 100% 1.03MB/s 0:00:00 (xfer#4, to-check=5/10)
3HMP81O_005.tif
974.66K 100% 834.19kB/s 0:00:01 (xfer#5, to-check=4/10)
3HMP81GS1MO_006.tif
974.66K 100% 2.35MB/s 0:00:00 (xfer#6, to-check=3/10)
3HMP1MO_007.tif
974.66K 100% 1.60MB/s 0:00:00 (xfer#7, to-check=2/10)
3HMP1MO_008.tif
974.66K 100% 1.21MB/s 0:00:00 (xfer#8, to-check=1/10)
3HMP8MO_009.tif
974.65K 100% 1009.34kB/s 0:00:00 (xfer#9, to-check=0/10)
The full command is this:
rsync -irltShz --modify-window=2 --progress --exclude 'Thumbs.db' --out-format "%n" --log-file=current.log rsync://myserver/UserData/user1/ /mnt/bup/user1 2>/dev/null >filechanges.log
If I use -q (quiet), there is no output at all.
Using the -i option doesn't seem to make any difference.
I could grep the output, but surely I'm just doing something wrong?
It's `rsync version 3.0.6 protocol version 30` on RHEL 6.2
|
surely I'm just doing something wrong?
I'm guessing you've spotted this yourself by now (from man rsync):
--progress show progress during transfer
I wonder how it will do that? ;P Drop --progress from the command and only the --out-format "%n" filenames will be written to stdout.
| Get a clean list of files transferred from rsync |
1,387,291,572,000 |
I am trying to understand the correct use of »atomic grouping« a.k.a. »independent subexpressions« a.k.a. »non-backtracking subpattern« [this are the terms I found for the same thing, which doesn't make it less complicate]
egrep '123(?>fruit|juiceomatic)machine' means: look for
123fruitjuiceomaticmachine
BUT stop trying to match [the rest of]
the round brackets' content if 123 is followed by anything other than fruit!
Is this correct? Did I miss something important?
I ask this because I got to the expected results most of the time but I am not 100% sure if this is because I did it right or I just made lucky guesses.
|
This is not an extended regular expression (ERE). egrep will interpret this as 123 followed by either >fruit or juiceomatic (the ? is ignored) followed by machine. This means it will match on:
123>fruitmachine
123>juiceomaticmachine
but not:
123fruitjuiceomaticmachine
The (?>...) notation is from PCRE and means non-capturing atomic group, see pcresyntax(3). You can use those with GNU grep and the -P switch.
If I understand you correctly you require 123 to be followed by fruit which in turn may be followed by juiceomatic or nothing. In that case I think 123fruit(|juiceomatic)machine is the ERE you want.
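If your grep is built with PCRE support (GNU grep's -P), the difference between a backtracking group and an atomic one is easy to demonstrate (the fruit/fr pattern below is a contrived example, not from the question):

```shell
# (?:...) may backtrack: when "fruit" followed by "uitmachine" fails,
# the engine retries the alternative "fr" and the match succeeds.
echo 123fruitmachine | grep -P '123(?:fruit|fr)uitmachine'
# (?>...) is atomic: once "fruit" is consumed it is never given back,
# so "uitmachine" cannot match and the pattern fails on the same input.
echo 123fruitmachine | grep -P '123(?>fruit|fr)uitmachine' || echo 'no match'
# The plain ERE suggested above matches the string from the question:
echo 123fruitjuiceomaticmachine | grep -E '123fruit(|juiceomatic)machine'
```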
| (e)grep and atomic grouping [comprehension question] |
1,387,291,572,000 |
Why don't the commands have a & between the setting of DISPLAY and running unity or compiz.
I understand that both work (re-starting unity or compiz), but fail to see why are they the way they are.
|
Because DISPLAY=:0 unity --replace is a single command, not two: the var=value prefix sets DISPLAY in the environment of that one execution of the command only, so there is nothing for a & (or ;) to separate.
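The scoping is easy to verify in a terminal (MYVAR is just an arbitrary demo name):

```shell
MYVAR=bar sh -c 'echo "inside: $MYVAR"'   # prints: inside: bar
echo "after: ${MYVAR:-unset}"             # prints: after: unset
```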
| Syntax of DISPLAY=:0 unity --replace or DISPLAY=:0 compiz --replace |
1,387,291,572,000 |
I have installed the packages tinymce and tinymce2. It's an HTML Editor.
sudo apt-get install tinymce tinymce2
but when I launch the command tinymce the system tells me there's no such command. Same for tinymce2. I have noticed that they don't seem to be in /usr/bin/
I have tried man tinymce and man tinymce2 and the system says there's no man entry for them.
What can I do to find the command line to launch the application ?
|
tinymce is a JavaScript library for HTML editing, not a standalone program, so there is no command to launch: you need to include it in an HTML page and open that page in a web browser.
The JavaScript files are located in /usr/share/tinymce/www; you can create a link to them in the document root of your web server.
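A minimal test page might look like the sketch below; the exact script filename and path depend on the packaged version, so verify them against the contents of /usr/share/tinymce/www first:

```html
<!DOCTYPE html>
<html>
<head>
  <!-- path/filename are assumptions; check /usr/share/tinymce/www -->
  <script src="/usr/share/tinymce/www/tiny_mce.js"></script>
  <script>tinyMCE.init({ mode: "textareas" });</script>
</head>
<body>
  <textarea>Hello, TinyMCE!</textarea>
</body>
</html>
```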
| How to find the command line for tinymce |
1,387,291,572,000 |
I'm using yamlfix with ale in vim.
I followed these steps because I wanted to remove the automatic "---" adding on top of the file every time I save my work (and some others default configurations).
For some reason, the file is correctly fixed, but my configuration is skipped..
So I decided to try with CLI in order to test my config.
yamlfix exits without error, fixes my file, but is completely skipping my configuration..
The configuration is in ~/pyproject.toml:
# pyproject.toml
[tool.yamlfix]
explicit_start = false
The command is
yamlfix -c ~/pyproject.toml file.yaml
Do I miss something ? Do I need something more ?
|
When running yamlfix directly from the command line (or via a shell spawned from Vim, and not via e.g. maison), your TOML configuration file should contain no section headers:
$ cat myconfig.toml
explicit_start = false
You may then run yamlfix from the command line like so:
$ cat test.yml
---
test: some test data
$ yamlfix -c myconfig.toml test.yml
[+] YamlFix: Fixing files
[+] Fixed test.yml
[+] Checked 1 files: 1 fixed, 0 left unchanged
$ cat test.yml
test: some test data
This is described in the documentation.
You may also use an environment variable to trigger the same behaviour without needing a separate configuration file. This may be what you may want to do if removing the YAML document start marker is the only setting you want to change from the default:
$ cat test.yml
---
test: some test data
$ YAMLFIX_EXPLICIT_START=false yamlfix test.yml
[+] YamlFix: Fixing files
[+] Fixed test.yml
[+] Checked 1 files: 1 fixed, 0 left unchanged
$ cat test.yml
test: some test data
| yamlfix not using configuration + (neo)vim usage |
1,387,291,572,000 |
The key combination Alt + 6 for copy in nano does not work in tilix
Does anyone know how to fix this? I had a look through all the key commands, but did not find any entry for Alt + 6 being already in use.
|
In Tilix version 1.9.6 you can manually change key bindings: disable the Alt+6 shortcut, which by default is set to switch to terminal number 6.
| The key combination Alt + 6 for copy in nano does not work in GNOME terminal emulator Tilix |
1,697,393,844,000 |
After a fresh Ubuntu Server install, the console (no GUI) output is not aligned to the left side of my 4k screen. Instead, each line starts at a third of the screen width away from the right side, and then overflows to the left side of the screen.
This doesn't seem to be a problem with lower-resolution screens (tested on 1440p). My GPU is an AMD Radeon HD 8490.
I have a picture here:
|
Your card might not be able to drive that monitor. 4K seems to be a common barrier, even for some 3-y.o. cards.
| Console output not aligned on 4k screen |
1,697,393,844,000 |
I am trying to create a bash function vim_run which operates as follows:
user pipes command output into vim_run
user can edit output
user exits vim and the contents of that buffer are now executed via source file
This flow is useful for editing a list of files one wants to remove. I have it currently implemented as
function vim_run() {
local tmpfile=$(mktemp)
cat > "$tmpfile"
vim "$tmpfile" --not-a-term
source "$tmpfile"
rm "$tmpfile"
}
however, I cannot figure out how to make vim behave properly with respect to stdin and the terminal. Whenever I pipe output into this function, e.g.
echo 'echo foo' | vim_run
after editing the contents and exiting vim, it breaks my terminal: it doesn't show characters when I type. Only when I invisibly type stty sane does it go back to normal. I vaguely understand that vim needs access to stdin in order to work properly; however, after the cat > $tmpfile finishes, shouldn't the remaining stdin be available to vim?
|
This fixes it for me:
function vim_run() {
local tmpfile=$(mktemp)
cat > "$tmpfile"
vim "$tmpfile" < /dev/tty
source "$tmpfile"
rm "$tmpfile"
}
We must just redirect the controlling TTY of the session to be Vim's stdin. In the original function, Vim's stdin was the (already exhausted) pipe rather than the terminal, so it could not read keystrokes from, or correctly restore the settings of, your terminal, and it exited leaving echo disabled.
| Calling vim in subprocess after running cat |
1,697,393,844,000 |
I have a cli output as below,
GROUP DEFINITIONS SERVICE DIMENSION
RESULTSBYTIME False
KEYS AWS CloudShell
BLENDEDCOST 0.0000026589 USD
KEYS AWS CloudTrail
BLENDEDCOST 0.000001 USD
KEYS AWS Config
BLENDEDCOST 46.921 USD
KEYS AWS Glue
BLENDEDCOST 355.70735552 USD
KEYS AWS Key Management Service
BLENDEDCOST 0.10418545 USD
KEYS AWS Lambda
BLENDEDCOST 0.0000002605 USD
KEYS AWS Migration Hub Refactor Spaces
BLENDEDCOST 0 USD
KEYS AWS Secrets Manager
BLENDEDCOST 2.0496788951 USD
KEYS AWS Security Hub
BLENDEDCOST 0.028892556 USD
KEYS AWS Service Catalog
BLENDEDCOST 0 USD
KEYS AWS Step Functions
BLENDEDCOST 0.0000000031 USD
KEYS AWS Support (Business)
BLENDEDCOST 246.2376324993 USD
KEYS AWS Systems Manager
BLENDEDCOST 0.000351 USD
KEYS AWS Transfer Family
BLENDEDCOST 208.8 USD
KEYS Amazon EC2 Container Registry (ECR)
BLENDEDCOST 0.2636971622 USD
KEYS EC2 - Other
BLENDEDCOST 325.4630384796 USD
KEYS Amazon Elastic Compute Cloud - Compute
BLENDEDCOST 694.4962624953 USD
KEYS Amazon Elastic Container Service for Kubernetes
BLENDEDCOST 69.890509083 USD
KEYS Amazon Elastic File System
BLENDEDCOST 0.0000002652 USD
KEYS Amazon Elastic Load Balancing
BLENDEDCOST 73.2040001769 USD
I'm expecting the output as below: the duplicate BLENDEDCOST label should be removed and each KEYS entry should appear on one row together with its cost. Is there a way to convert these rows into columns and remove the duplicate entries?
AWS CloudShell 0.0000026589 USD
AWS CloudTrail 0.000001 USD
AWS Config 46.921 USD
AWS Glue 355.7073555 USD
AWS Key Management Service 0.10418545 USD
AWS Lambda 0.0000002605 USD
AWS Migration Hub Refactor Spaces 0 USD
AWS Secrets Manager 2.049678895 USD
AWS Security Hub 0.028892556 USD
AWS Service Catalog 0 USD
AWS Step Functions 0.0000000031 USD
AWS Support (Business) 246.2376325 USD
AWS Systems Manager 0.000351 USD
AWS Transfer Family 208.8 USD
Amazon EC2 Container Registry (ECR) 0.2636971622 USD
EC2 - Other 325.4630385 USD
Amazon Elastic Compute Cloud - Compute 694.4962625 USD
Amazon Elastic Container Service for Kubernetes 69.89050908 USD
Amazon Elastic File System 0.0000002652 USD
Amazon Elastic Load Balancing 73.20400018 USD
Amazon Elastic MapReduce 2.28898852 USD
Amazon Glacier 0.0000000025 USD
Amazon GuardDuty 7.367077065 USD
Amazon Inspector 1 USD
Amazon Location Service 0 USD
Amazon Relational Database Service 388.1651428 USD
Amazon Route 53 0.508976 USD
Amazon Simple Notification Service 0.2022960904 USD
Amazon Simple Queue Service 0 USD
Amazon Simple Storage Service 3.338266835 USD
Amazon Simple Workflow Service 0.0000000015 USD
Amazon Virtual Private Cloud 147.4110373 USD
AmazonCloudWatch 60.12971039 USD
CloudWatch Events 0.000048 USD
CodeBuild 29.71339194 USD
|
With GNU sed:
sed -z 's/\nBLENDEDCOST//g'
or perl:
perl -0777pe 's/\nBLENDEDCOST//g'
To match your output, you will also need to add a BLENDEDCOST column to the header, delete the second line, and remove the trailing USD, e.g. by piping the output to:
sed '1s/$/\tBLENDEDCOST/; 2d; s/[[:blank:]]\+USD$//'
In total:
some_command | perl -0777pe 's/\nBLENDEDCOST//g;' | sed '1s/$/\tBLENDEDCOST/; 2d; s/[[:blank:]]\+USD$//'
Output:
GROUP DEFINITIONS SERVICE DIMENSION BLENDEDCOST
KEYS AWS CloudShell 0.0000026589
KEYS AWS CloudTrail 0.000001
KEYS AWS Config 46.921
KEYS AWS Glue 355.70735552
KEYS AWS Key Management Service 0.10418545
KEYS AWS Lambda 0.0000002605
KEYS AWS Migration Hub Refactor Spaces 0
KEYS AWS Secrets Manager 2.0496788951
KEYS AWS Security Hub 0.028892556
KEYS AWS Service Catalog 0
KEYS AWS Step Functions 0.0000000031
KEYS AWS Support (Business) 246.2376324993
KEYS AWS Systems Manager 0.000351
KEYS AWS Transfer Family 208.8
KEYS Amazon EC2 Container Registry (ECR) 0.2636971622
KEYS EC2 - Other 325.4630384796
KEYS Amazon Elastic Compute Cloud - Compute 694.4962624953
KEYS Amazon Elastic Container Service for Kubernetes 69.890509083
KEYS Amazon Elastic File System 0.0000002652
KEYS Amazon Elastic Load Balancing 73.2040001769
Very easy alternative with output that should be fairly easy to fix:
paste - -
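For illustration, a minimal sketch of that paste approach: after deleting the two header lines, each KEYS line is immediately followed by its BLENDEDCOST line, so paste - - joins them pairwise with a tab.

```shell
# sketch: two sample KEYS/BLENDEDCOST pairs piped through paste - -
printf '%s\n' \
  'KEYS AWS CloudShell' \
  'BLENDEDCOST 0.0000026589 USD' \
  'KEYS AWS CloudTrail' \
  'BLENDEDCOST 0.000001 USD' |
  paste - -
# KEYS AWS CloudShell<TAB>BLENDEDCOST 0.0000026589 USD
# KEYS AWS CloudTrail<TAB>BLENDEDCOST 0.000001 USD
```

From there, stripping the KEYS/BLENDEDCOST labels is a simple sed or cut job.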
| Is there option to convert outputs of row in to new column in linux and delete duplicate entry in row |
1,697,393,844,000 |
I have the following dir structure:
relative_path/
app1/
data
config
trash
other_app/
inputs
outputs
garbage
anything_else/
v1/
v1.5
v2/
I need to figure out the disk usage of each dir only on the second level (e.g. relative_path/anything_else/v1, but not relative_path/anything_else nor relative_path/anything_else/v1/v1.5)
If I do a du -d2 relative_path it won't list the dirs on 3rd+ levels, but will still list the 1st-level dir.
How can I filter out ONLY 2nd level dirs?
|
Get the directories at the target depth with GNU find or a shell loop, and use
du -s (same as -d0) to display a total for each argument. This would
output relative_path/anything_else/v1 but show the total for that directory together with its subdirectories.
If you instead want to exclude the sizes of subdirectories, add -S as well.
find relative_path -mindepth 2 -maxdepth 2 -type d -exec du -sS {} \;
or
for dir in relative_path/*/*/; do
du -sS "$dir"
done
| How to find the disk usage ONLY of 2nd level dirs? |
1,697,393,844,000 |
ascii command in Linux is fast and great. It allows us to search for a character or for a code point and returns all relevant results for a given search. Is there something similar for ASCII extended (e.g.: ISO-8859-1) and/or for Unicode characters?
|
The unicode tool provides similar functionality to ascii:
$ unicode -d ..255
.0 .1 .2 .3 .4 .5 .6 .7 .8 .9 .A .B .C .D .E .F
000.
001.
002. ! " # $ % & ' ( ) * + , - . /
003. 0 1 2 3 4 5 6 7 8 9 : ; < = > ?
004. @ A B C D E F G H I J K L M N O
005. P Q R S T U V W X Y Z [ \ ] ^ _
006. ` a b c d e f g h i j k l m n o
007. p q r s t u v w x y z { | } ~
008.
009.
00A. ¡ ¢ £ ¤ ¥ ¦ § ¨ © ª « ¬ ® ¯
00B. ° ± ² ³ ´ µ ¶ · ¸ ¹ º » ¼ ½ ¾ ¿
00C. À Á Â Ã Ä Å Æ Ç È É Ê Ë Ì Í Î Ï
00D. Ð Ñ Ò Ó Ô Õ Ö × Ø Ù Ú Û Ü Ý Þ ß
00E. à á â ã ä å æ ç è é ê ë ì í î ï
00F. ð ñ ò ó ô õ ö ÷ ø ù ú û ü ý þ ÿ
It can be used to map other code pages (see its --fromcp option):
$ unicode --fcp cp437 -d 200
U+255A BOX DRAWINGS DOUBLE UP AND RIGHT
UTF-8: e2 95 9a UTF-16BE: 255a Decimal: ╚ Octal: \022532
╚
Category: So (Symbol, Other); East Asian width: A (ambiguous)
Unicode block: 2500..257F; Box Drawing
Bidi: ON (Other Neutrals)
It can also be used to search for characters by name:
$ unicode acute
U+00B4 ACUTE ACCENT
UTF-8: c2 b4 UTF-16BE: 00b4 Decimal: ´ Octal: \0264
´
Category: Sk (Symbol, Modifier); East Asian width: A (ambiguous)
Unicode block: 0080..00FF; Latin-1 Supplement
Bidi: ON (Other Neutrals)
Decomposition: <compat> 0020 0301
U+00C1 LATIN CAPITAL LETTER A WITH ACUTE
UTF-8: c3 81 UTF-16BE: 00c1 Decimal: Á Octal: \0301
Á (á)
Lowercase: 00E1
Category: Lu (Letter, Uppercase); East Asian width: N (neutral)
Unicode block: 0080..00FF; Latin-1 Supplement
Bidi: L (Left-to-Right)
Decomposition: 0041 0301
…
| Command similar to ascii for ascii extended and/or for unicode? |
1,697,393,844,000 |
I'd like to create a video C with the audio from video A but the video from video B.
Video A and video B have nearly the same length in seconds.
Since the videos are a couple of GBs, I guess it would be slower if I first extracted the audio from video A and then merged that audio into video B (I mean, one step should be faster than two steps, right?). However, if this is the only way (extract audio, then merge), then I will do it.
Actually, video B was generated from video A, and it took 4 consecutive days of video processing to generate it (but I lost the audio due to a mistake); that is why I am initially seeking a solution to do everything "at once".
|
I've been using this for a while:
ffmpeg -y \
-i "$videofile" \
-i "$audiofile" \
-c:v copy -c:a aac \
-map 0:v:0 -map 1:a:0 \
"$outfile"
I haven't tried it with files that are only "nearly" the same length, though.
| how to insert audio from video A into video B (no audio)? |
1,668,467,325,000 |
I'm trying to build opendingux from github repo/source. https://github.com/OpenDingux/buildroot
OpenDingux is an embedded Linux distribution focused on (retro) gaming.
I cloned the repo and then ran the commands below.
cd ./buildroot;
export CONFIG='gcw0'; bash ./rebuild.sh;
The output from the above command was pretty much a wall of text, too long to post in this question as it is 22301 lines long. The full output is available here https://paste.ee/p/UInYW
I have snipped the error I am getting however below.
/bin/bash: line 2: 186552 Killed build/genautomata ../../gcc/common.md ../../gcc/config/mips/mips.md insn-conditions.md > tmp-automata.c
make[3]: *** [Makefile:2459: s-automata] Error 137
make[3]: *** Waiting for unfinished jobs....
rm gcc.pod
make[2]: *** [Makefile:4415: all-gcc] Error 2
make[1]: *** [package/pkg-generic.mk:270: /home/vagrant/buildroot/output/gcw0/build/host-gcc-initial-11.1.0/.stamp_built] Error 2
make: *** [Makefile:84: _all] Error 2
|
/bin/bash: line 2: 186552 Killed build/genautomata ../../gcc/common.md ../../gcc/config/mips/mips.md insn-conditions.md > tmp-automata.c
make[3]: *** [Makefile:2459: s-automata] Error 137
means that bash exited with exit code 137 (128 + 9), which means the process was killed with signal 9, SIGKILL. The most common cause for that on Linux systems (apart from bugs) is when the system runs out of memory and the OOM killer kills a process. So it’s likely that genautomata is using too much memory and is being killed.
This should show up in the kernel logs, which you can see with dmesg.
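For example, a sketch of what to search for (the exact wording varies with kernel version, but "Out of memory" and "Killed process" are the usual markers):

```shell
# may require root on some systems; prints the fallback if nothing is logged
dmesg | grep -Ei 'out of memory|killed process' || echo 'no OOM entries found'
```

If OOM kills show up, reducing the build's memory pressure (less parallelism, more swap, or more RAM) is the usual remedy.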
| error trying to build/make from source |
1,668,467,325,000 |
lets say I go to this file on linux CentOS to setup some startup commands
sudo vi /etc/rc.local
in this case lets say I want to startup uwsgi
so normally in the command line I might type something like this:
[linuxuser@localhost ~]$ systemctl start uwsgi
==== AUTHENTICATING FOR org.freedesktop.systemd1.manage-units ====
Authentication is required to start 'uwsgi.service'.
Multiple identities can be used for authentication:
1. admin Support (Administrator)
2. linuxuser (linuxuser)
Choose identity to authenticate as (1-2): 2
Password:
How would I put the identity and the password in the rc.local file in order to get uwsgi to run upon startup?
Something like this?
#!/bin/bash
# THIS FILE IS ADDED FOR COMPATIBILITY PURPOSES
#
# It is highly advisable to create own systemd services or udev rules
# to run scripts during boot instead of using this file.
#
# In contrast to previous versions due to parallel execution during boot
# this script will NOT be run after all other services.
#
# Please note that you must run 'chmod +x /etc/rc.d/rc.local' to ensure
# that this script will be executed during boot.
touch /var/lock/subsys/local
systemctl start uwsgi
2
mypassword1$
I don't think that is right..
|
It wants to authenticate because you're not root, so starting it from rc.local will work, as that runs as root. But this is not the way to go: uwsgi is a service, so it can simply be enabled to start at boot anyway:
sudo systemctl enable uwsgi.service
| Running a few commands (that then ask questions) on startup |
1,668,467,325,000 |
I would appreciate having your advice on the below.
Having a folder name "ABC" with thousands of .xml files inside. The core structure of XMLs is the same:
<product abcd…>
<category>
...
</category>
</product>
Some XML files can be considered valid, as they contain the required <category> tag; some are invalid, as the required <category> tag is completely missing (they don't even have a closing </category>).
So the goal is to find, via the terminal, those 'invalid' XMLs without a </category> tag among the files placed in the "ABC" folder.
Any chance?
|
Assuming that all XML files are well-formed: Using xmlstarlet, the following prints the input filename of any file that does not have a category node as a direct child node under product:
xmlstarlet sel -t --if '/product/category' --else -f -nl ABC/*.xml
If you just want to detect files without any category nodes anywhere:
xmlstarlet sel -t --if '//category' --else -f -nl ABC/*.xml
In both these commands, xmlstarlet evaluates the given XPath expression. If the expression evaluates to a set of at least one found node, then the --if test is true and nothing else happens. Otherwise, the --else branch is evaluated and -f -nl causes the current filename to be outputted with a trailing newline character.
Assuming you want to do something to the files that lack the category node, the following sets up a loop that allows you to process the relevant files:
for xml in ABC/*.xml; do
if ! xmlstarlet sel -t --if '/product/category' -nl "$xml" >/dev/null
then
# process "$xml" here
fi
done
Installing xmlstarlet on macOS is best done via Homebrew. The Homebrew package is called xmlstarlet, and the command will be called xml rather than xmlstarlet.
| search of .xml files without certain tag through mac os terminal |
1,668,467,325,000 |
I have a script that has the following steps:
1 mirror a remote server with lftp
open ftps://'[name]':'[pwd]'@[remote_host]
set ssl:check-hostname no
mirror --delete-first --only-newer /ExchangeSets
/home/sandbox_autopilot/Exchangesets
exit
2 next I sort the files based on the start of the filename using the find command, and copying them to a folder.
find /home/sandbox_autopilot/Exchangesets -iname '1R4*.000' -exec cp -u --target-directory /home/sandbox_autopilot/1R4/ {} \;
find /home/sandbox_autopilot/Exchangesets -iname '1R5*.000' -exec cp -u --target-directory /home/sandbox_autopilot/1R5/ {} \;
find /home/sandbox_autopilot/Exchangesets -iname '1R6*.000' -exec cp -u --target-directory /home/sandbox_autopilot/1R6/ {} \;
3 I do i whole bunch of GIS alterations to the file in the folders: 1R4, 1R5, 1R6 that are not relevant to my question.
significant things:
after mirroring the remote server, the folder /home/sandbox_autopilot/Exchangesets contains subfolders whose names start with 4 digits; folders containing newer files start with a higher 4-digit number than older folders (see example below).
There are multiple versions of the same file in the folder structure in /home/sandbox_autopilot/Exchangesets. Requested behaviour from the "find -exec cp" command is that the newest version of the available file is put in the target directory.
Example of multiple files in the folder /home/sandbox_autopilot/Exchangesets structure:
find . -name 1R65Y842.000
./5704b_[projectname]/ENC_ROOT/1R/6/1R65Y842/1R65Y842.000
./5721a_[projectname]/ENC_ROOT/1R/6/1R65Y842/1R65Y842.000
./5673b_[projectname]/ENC_ROOT/1R/6/1R65Y842/1R65Y842.000
./5618_[projectname]/ENC_ROOT/1R/6/1R65Y842/1R65Y842.000
./5802b_[projectname]/ENC_ROOT/1R/6/1R65Y842/1R65Y842.000
./5646a_[projectname]/ENC_ROOT/1R/6/1R65Y842/1R65Y842.000
note: the [projectname] are all different but blanked-out in this example because of privacy.
The problem:
The "find -exec cp" command does not give me the newest file in the 1R6 folder.
I think this is what happening.
In the original folder structure the date on the file is correctly passed by the "lftp mirror" command. So the newest file has the latest date.
When the "find -exec cp" command finds a file and copies it to the relevant 1R folder. the filedate is set to now(). Then when the "find -exec cp" command finds a file with the same name that is newer it won't copy, because the date in de target directory is newer (now()) then the filedate of the file that needs to overwrite the file in the target directory, thus rendering the -u function on cp useless.
The solutions that I am thinking of:
can I prevent the cp command of changing the date of the file when it copies it to the target directory? So the cp -u can evaluate the correct date and put the latest version of the file in the target directory?
Would it help to symlink instead of making a actual copy
Is there an option on find that it evaluates the found versions of the file, and only executes the copy command on the latest version of the file?
My humble request:
Can anybody help me in the right direction?
|
The cp command can preserve file attributes, including the timestamps.
The easiest way is archive mode: with cp -a you preserve all attributes.
From manual:
--preserve[=ATTR_LIST]
preserve the specified attributes (default: mode,ownership,timestamps), if possible additional at‐
tributes: context, links, xattr, all
and
-a, --archive
same as -dR --preserve=all
| find command, only execute when file is newer |
1,668,467,325,000 |
The description for the wall command states that it sends a message to all logged in users. However, the man page describes a flag, -g --group, which allows the sender to limit messages to a specified group:
-g, --group group Limit printing message to members of group defined as a group argument. The argument can be group name or GID.
However, I am only able to send messages to all logged in users. I have tried this command with:
-g my_group,
--group my_group,
-g "my_group",
--group "my_group",
--group=my_group
In addition, I have tried all of the above by replacing "my_group" (the group name) with the group ID to no avail.
I have also tried placing the flags after the message. None of this works to limit the messages to a given group. All messages go to all users. Am I misunderstanding the flag? The syntax? Is this command broken? Or is the man page simply incorrect? Please do not offer alternative commands, I am aware of their existence. I want to know how to use a listed option or why the option is not working correctly. I am using Ubuntu 20.04
|
Looked at the group membership checking code. Had a round of gdb --args wall -g root foobarbaz,
break is_gr_member
run
{wait for breakpoint to be hit}
finish
print utmpptr->ut_user # this prints my own user name
step # this shows we're actually taking the branch as if I was a member of group root
So, yeah, it seems that's a bug in wall's checking of group membership. Feel free to open a bug report on its bug tracker; however, honestly, the pressure to fix this is probably very low - I doubt there are many legitimate users of this feature. Fixing it yourself would still be pretty valuable, so if you have the C skills, please do that.
| How to limit messages from the wall command to a specific group? |
1,668,467,325,000 |
I call a python script with some command line arguments like:
python3 script.py --run 1 --filepath "this/file/dir"
I now try to parse the arguments from a config file with:
grep -v '^#' ${THIS_FILE_DIR}/model.conf | sed 's/=/ /' | xargs -I% echo "--%"
This yields my desired string:
--run 1
--filepath this/file/dir
Is there a way to input this string dynamically into my python command, something along the lines of:
python3 script.py $(grep -v '^#' ${THIS_FILE_DIR}/model.conf | sed 's/=/ /' | xargs -I% echo "--%")
Conf File for references
# vars
run=1
filepath=this/file/dir
|
You have multiple choices; since you already use sed, I would suggest:
python3 script.py $(sed -e '/^#/d' -e 's/=/ /' -e 's/^/--/' ${THIS_FILE_DIR}/model.conf )
where
-e is used to specify multiple sed commands
'/^#/d' deletes lines starting with #
's/=/ /' replaces the first = on each line with a space
's/^/--/' prepends -- to the start of each line
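For illustration, an end-to-end sketch with the sample config written to a hypothetical /tmp/model.conf:

```shell
# recreate the example config, then turn it into command-line flags
printf '%s\n' '# vars' 'run=1' 'filepath=this/file/dir' > /tmp/model.conf
sed -e '/^#/d' -e 's/=/ /' -e 's/^/--/' /tmp/model.conf
# --run 1
# --filepath this/file/dir
```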
| Dynamically generate arguments for python script input |
1,668,467,325,000 |
I want to make it so that instead of a single hostname specified on the command line, it reads in a list of multiple target IP addresses from a file.
#!/bin/bash -
# bannergrab.sh
function isportopen ()
{
(( $# < 2 )) && return 1 # <1>
local host port
host=$1
port=$2
echo >/dev/null 2>&1 < /dev/tcp/${host}/${port} # <2>
return $?
}
function cleanup ()
{
rm -f "$SCRATCH"
}
ATHOST="$1"
SCRATCH="$2"
if [[ -z $2 ]]
then
if [[ -n $(type -p tempfile) ]]
then
SCRATCH=$(tempfile)
else
SCRATCH='scratch.file'
fi
fi
trap cleanup EXIT # <3>
touch "$SCRATCH" # <4>
if isportopen $ATHOST 21 # FTP <5>
then
# i.e., ftp -n $ATHOST
exec 3<>/dev/tcp/${ATHOST}/21 # <6>
echo -e 'quit\r\n' >&3 # <7>
cat <&3 >> "$SCRATCH" # <8>
fi
if isportopen $ATHOST 25 # SMTP
then
# i.e., telnet $ATHOST 25
exec 3<>/dev/tcp/${ATHOST}/25
echo -e 'quit\r\n' >&3
cat <&3 >> "$SCRATCH"
fi
if isportopen $ATHOST 80 # HTTP
then
curl -LIs "https://${ATHOST}" >> "$SCRATCH" # <9>
fi
cat "$SCRATCH" # <10>
The file containing the list looks like this:
10.12.13.18
192.15.48.3
192.168.45.54
...
192.114.78.227
But how and where do I put a command like set target file:/home/root/targets.txt? Or does it need to be done in another way?
|
You seem to want to have "$1" represent a file containing a list of targets, instead of being just 1 target.
So you will need to enclose the main part in a loop:
ATHOSTFILE="$1"
SCRATCH="$2"
for ATHOST in $( cat "$ATHOSTFILE" ); do
... # (the rest of the actions here)
done
Note that the $( cat "$ATHOSTFILE" ) part will be replaced by the content of $ATHOSTFILE and read "element by element", each element being split using $IFS (usually any run of spaces, tabs and newlines acts as a separator).
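Since that unquoted expansion also performs globbing and splits on all whitespace, a slightly more robust sketch reads the file line by line (here printf stands in for the real actions):

```shell
ATHOSTFILE="$1"
while IFS= read -r ATHOST; do
    [ -n "$ATHOST" ] || continue      # skip blank lines
    printf 'scanning %s\n' "$ATHOST"  # (the rest of the actions here)
done < "$ATHOSTFILE"
```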
There are quite a few other things to say about the syntax, and structure, but this should lead you in the right direction.
| Instead of a single hostname specified on the command line, it reads in a list of multiple target IP addresses from a file |
1,668,467,325,000 |
How can I get the position of an argument by using its value?
For example:
myScript.sh hello world
echo "$1"
hello
How can I get the position of 'hello', which is 1 in this case?
|
While it wouldn't surprise me if there were quicker, easier ways, something like this should work:
#!/bin/bash
pos=0
found=no
for arg ; do
let pos++
if [[ "$arg" == "hello" ]] ; then
found=yes
break
fi
done
if [[ "$found" == "yes" ]] ; then
echo "Found hello at position $pos"
else
echo "hello not found"
fi
| Get argument position given its value in a bash script |
1,668,467,325,000 |
I have this data
photo for Kurtis Hardy" data-
photo for Paul Hoven" data-sr
I would like filter this to only get the name in the middle.
I used this command to get those line
cat File| grep -E -o 'photo for.{0,20}'
|
No need for cat here. Leave the cat alone.
I'm assuming every name starts right after the "photo for " and ends with the first " as in your example. Let me know if that is not the case. Otherwise we will need to think more about what counts as a "name".
grep -E 'photo for [^"]{0,20}"' File | sed 's/photo for \([^"]\{0,20\}\)".*/\1/'
Or even simpler:
sed -n 's/photo for \([^"]\{0,20\}\)".*/\1/p' File
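For illustration, running that second command on the sample lines from the question:

```shell
printf '%s\n' \
  'photo for Kurtis Hardy" data-' \
  'photo for Paul Hoven" data-sr' |
  sed -n 's/photo for \([^"]\{0,20\}\)".*/\1/p'
# Kurtis Hardy
# Paul Hoven
```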
| String processing: remove a prefix and capture all characters before a double-quote |
1,642,514,714,000 |
I can't use environment variables with chmod command. Is there any workaround to run this?
The way I'm trying to run in a sh file
export PEM_FILE="~/.ssh/xxx.pem"
chmod 400 $PEM_FILE
## Output
chmod: ~/.ssh/xxx.pem: No such file or directory
However, it work perfectly fine without the variable like this: chmod 400 ~/.ssh/xxx.pem
P.S: I am on macOS. Kindly let me know if this works fine in windows or linux.
Thanks!
|
Unfortunately, tilde expansion does not happen inside quoted strings, so the quoted ~ is taken literally. Instead, use a variable like $HOME to point to the home directory, or leave the ~ outside the quotes.
So you would do something like:
export PEM_FILE="$HOME/.ssh/xxx.pem"
or export PEM_FILE=~/.ssh/xxx.pem.
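A quick sketch showing the difference:

```shell
quoted="~/.ssh/xxx.pem"   # tilde inside quotes stays a literal character
unquoted=~/.ssh/xxx.pem   # tilde outside quotes expands at assignment time
echo "$quoted"    # prints ~/.ssh/xxx.pem
echo "$unquoted"  # prints /home/you/.ssh/xxx.pem (your actual home)
```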
| Why can't I use environment variable with chmod command? [duplicate] |
1,642,514,714,000 |
I'm trying to set up a simple script that would clean up my working folder.
The working folder is structured like that:
Project 1
Project 1 [edited]
Project 2
Project 2 [edited]
Project 3
Project 3 [edited]
...
All project folders contain only files. The objective is to get rid of all folders within working folder that do not have [edited] in name.
|
With GNU find, something like
find . -maxdepth 1 -mindepth 1 ! -name "*edit*"
should match the files in the current directory that don't have "edit" in their names, and print the names. Of course you could add e.g. -type d -iname "*project*" to match only directories with "project" in the name. If the output looks correct, you could add -delete to have find remove them.
With Bash and shopt -s extglob in effect, you could also use
echo rm -r !(*edit*)/
where the trailing slash would have it match only directories and echo just prints the command, instead of running it.
| Delete all folders in specified path that do not have word "edit" in them? |
1,642,514,714,000 |
I execute a set of handy commands I need to do often on a standard type of file using the -c (or equivalently, the +) option from command line using vim. However, after an update to the remote system's OS, commands beyond the first one are interpreted as different file names and results in the first command being executed and multiple buffers being opened rather than multiple commands being executed on one file.
The command I use is vim -c ':11' -c ':norm wllv,dZZ' myfile (go to line 11, move over a few characters, select the current position, and replace using a leader command, and then save and exit).
With the change on the remote system, this now results in two buffers being opened, one is wllv,dZZ and the other is myfile
Vim also throws this error:
Error detected while processing command line:
E471: Argument required: :norm
Furthermore, if I try vim -c ":11" -c ":21" myfile, both commands work and no extra buffer is opened, which indicates the error is perhaps somewhere with :norm, but I'm not sure why as this was working just fine very recently.
Current version of vim is 7.4, in case that helps.
Any help restoring the old behavior or understanding where the issue is coming from would be greatly appreciated, thanks!
|
Answer here: https://vi.stackexchange.com/a/31833/37021
The issue was a bug in a wrapper function on the remote system that was passing the command line arguments incorrectly to the vim executable.
The fix was to alias vim to run the executable directly and file a bug report to the sys admins about the wrapper function.
| Cannot string together multiple "-c" options using vim from the terminal |
1,642,514,714,000 |
When I tried to run this command line
cp -a /home/root1/.Xauthority .Xauthority
this error appears. What is the reason?
cp: cannot stat '/home/root1/.Xauthority': No such file or directory
Please, I need help with this problem.
|
I got this to work for Brave on my Linux Mint 20.1 system:
xhost SI:localuser:root # give root access to the display.
sudo su - # switch to root user's shell
env DISPLAY=:0.0 brave-browser --no-sandbox
I did not need to copy the .Xauthority file.
| When I try to make Firefox and Brave run as root and transfer Xauthority, this error appears |
1,619,357,947,000 |
New to Stack Overflow and Linux usage. I have a NAS set up on an HC4 and am currently trying to set up a Steam cache. After installing Docker I was trying to install network-manager, which led me down a rabbit hole because it returned errors such as:
W: Failed to fetch http://deb.debian.org/debian/dists/stable/InRelease Could not resolve 'deb.debian.org'
W: Failed to fetch http://deb.debian.org/debian/dists/buster-updates/InRelease Could not resolve 'deb.debian.org'
W: Failed to fetch http://deb.debian.org/debian-security/dists/buster/updates/InRelease Could not resolve 'deb.debian.org'
W: Failed to fetch http://ftp.debian.org/debian/dists/buster-backports/InRelease Could not resolve 'ftp.debian.org'
W: Failed to fetch https://download.docker.com/linux/debian/dists/buster/InRelease Could not resolve 'download.docker.com'
W: Failed to fetch http://packages.openmediavault.org/public/dists/erasmus/InRelease Could not resolve 'packages.openmediavault.org'
W: Failed to fetch https://openmediavault-plugin-developers.github.io/packages/debian/dists/usul/InRelease Could not resolve 'openmediavault-plugin-developers.github.io'
W: Failed to fetch http://packages.openmediavault.org/public/dists/usul/InRelease Could not resolve 'packages.openmediavault.org'
W: Failed to fetch http://ppa.linuxfactory.or.kr/dists/buster/InRelease Could not resolve 'ppa.linuxfactory.or.kr'
W: Some index files failed to download. They have been ignored, or old ones used instead.
W: Target Packages (stable/binary-all/Packages) is configured multiple times in /etc/apt/sources.list:24 and /etc/apt/sources.list.d/docker.list:1
W: Target Translations (stable/i18n/Translation-en) is configured multiple times in /etc/apt/sources.list:24 and /etc/apt/sources.list.d/docker.list:1
W: Target Packages (stable/binary-arm64/Packages) is configured multiple times in /etc/apt/sources.list.d/docker.list:1 and /etc/apt/sources.list.d/omvextras.list:2
W: Target Packages (stable/binary-all/Packages) is configured multiple times in /etc/apt/sources.list:24 and /etc/apt/sources.list.d/omvextras.list:2
W: Target Translations (stable/i18n/Translation-en) is configured multiple times in /etc/apt/sources.list:24 and /etc/apt/sources.list.d/omvextras.list:2
now I get those errors with apt-get update, and apt-get install network-manager returns
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package network-manager is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
E: Package 'network-manager' has no installation candidate
and because I messed with this a lot, here is my sources.list:
#------------------------------------------------------------------------------#
# OFFICIAL DEBIAN REPOS
#------------------------------------------------------------------------------#
###### Debian Main Repos
deb http://deb.debian.org/debian stable main
deb-src http://deb.debian.org/debian stable main
deb http://deb.debian.org/debian buster-updates main
deb-src http://deb.debian.org/debian buster-updates main
deb http://deb.debian.org/debian-security/ buster/updates main
deb-src http://deb.debian.org/debian-security/ buster/updates main
deb http://ftp.debian.org/debian buster-backports main
deb-src http://ftp.debian.org/debian buster-backports main
#------------------------------------------------------------------------------#
# UNOFFICIAL REPOS
#------------------------------------------------------------------------------#
###### 3rd Party Binary Repos
###Docker CE
deb [arch=amd64] https://download.docker.com/linux/debian buster stable
###openmediavault
deb http://packages.openmediavault.org/public erasmus main
deb-src http://packages.openmediavault.org/public erasmus main
any help or a direction to point me in would be greatly appreciated
|
Alright, I've been messing around with it, and the issue seems to have been that the OMV config for eth0 and the network-manager config were set differently; after manually setting both to the same settings, it was able to resolve hosts.
| Debian HC4 NAS can't resolve any host |
1,619,357,947,000 |
Installed qemu kvm, created Windows10 vm and I can start and open it from the GUI. Exited (quit, not close) KVM GUI.
# virsh list --all
Id Name State
------------------------
- win10 shut off
Then I ran:
# virsh start win10
Domain win10 started
Again listed status:
virsh list --all
Id Name State
-----------------------
3 win10 running
It's running.
Host system is Debian 10.
How to open it from CLI?
|
Ok, my bad.
Forgot about remote-viewer.
I opened it with
remote-viewer spice://127.0.0.1:5900
| How to open (not only start) virtual machine from CLI |
1,619,357,947,000 |
I have several hundred directories currently in this form :
Band Name - Album Name (year)
I would like to rename them all (in one batch operation) to
Band Name (Year) Album Name
Almost all of my directories follow this naming rule (with a few exceptions).
Not being an expert in bash scripting, I'm here to ask for your help. Thank you in advance for your time.
Thank you!
|
With zsh:
autoload zmv
zmv -n '(*) - (*) (\(<->\))(#q/)' '$1 $3 $2'
Remove -n (dry-run) when happy.
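If zsh isn't available, a plain bash sketch of the same rename (it assumes the exact "Band - Album (year)" layout; the echo makes it a dry run, like -n above, so remove it when happy):

```shell
# match "Band Name - Album Name (1999)" and reorder the three pieces
re='^(.+) - (.+) (\([0-9]{4}\))$'
for d in */; do
  d=${d%/}
  if [[ $d =~ $re ]]; then
    echo mv -- "$d" "${BASH_REMATCH[1]} ${BASH_REMATCH[3]} ${BASH_REMATCH[2]}"
  fi
done
```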
| Change the position of a word in directory name |
1,619,357,947,000 |
redshift is a package that enables blue light filter, so people can avoid eye strain.
This software is pretty good, but it lacks controls to adjust the filter color temperature, incrementally: no sliders, buttons or commands.
When I say "incrementally", what I mean is that I need a command to set color temperature according to the previous value. The package xbacklight has a good example:
xbacklight +10 #increases monitor brightness by 10%
or...
xbacklight -10 #decreases monitor brightness by 10%
Hence, if brightness value was 70%: now, it would become 60%.
What I need is a command like this:
temperature +10 #increases color temperature by 10%
what redshift already provides
With redshift, you can manually set color temperature like this:
redshift -O 3000K
Although, there's no built-in way to increase this value by 10%.
Therefore, if you need to increase the value, you need to do this:
redshift -x #reset the previous value
redshift -O 3300K
Notice that: not only I had to calculate the new value manually, I also had to reset the previous value, first.
why do I need to do this
I study all day, using my laptop. So, I need to have blue light filter on, in order to save my eyes and be more productive.
I have some ideas on how to make a simple shell script that could do this, but I have no idea how I would store the previous value, or where to put such a script properly.
|
I figured it out and made a simple package called tempcolor.
Now, I'm able to even create keyboard shortcuts to change color temperature incrementally.
install
place the contents of the repo wherever you want to;
make tempcolor executable: chmod +x ./tempcolor;
feel free to create a symbolic link for tempcolor.
be it in /usr/bin;
or be it in $HOME/.local/bin.
usage
incrementally change color temperature
tempcolor -inc <percent_value>
tempcolor -dec <percent_value>
reset color temperature (turn off)
tempcolor -x
set value using one shot mode (-O)
tempcolor <value>
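For reference, the core of such a wrapper can be very small. The sketch below is not the tempcolor package itself; the state-file location and the 6500K default are my assumptions, and the redshift call is guarded so the logic can be inspected without redshift installed:

```shell
#!/bin/sh
# Sketch of an incremental color-temperature wrapper around redshift.

next_temp() {   # next_temp CURRENT +PERCENT | -PERCENT | ABSOLUTE
    cur=$1 step=$2
    case $step in
        +*) echo $(( cur + cur * ${step#+} / 100 )) ;;
        -*) echo $(( cur - cur * ${step#-} / 100 )) ;;
        *)  echo "$step" ;;
    esac
}

# Remember the last value between invocations (location is an assumption).
state="${XDG_CACHE_HOME:-$HOME/.cache}/tempcolor_state"
cur=$(cat "$state" 2>/dev/null || echo 6500)   # assume 6500K if unknown
new=$(next_temp "$cur" "${1:-+0}")

mkdir -p "${state%/*}" && printf '%s\n' "$new" > "$state"
# Apply the value only when redshift is actually available:
if command -v redshift >/dev/null 2>&1; then
    redshift -x && redshift -O "${new}K"
fi
```

Called as script +10 it raises the stored temperature by 10%, -10 lowers it, and a bare number sets it absolutely.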
| How to create a command to incrementally increase redshift's color temperature? |
1,619,357,947,000 |
I have a log file that contains a date and a username, something like below
Nov 05 14:36:03.752146 server.com [2020-11-05T14:36:03.752Z] [C 7f7e597fa700] R=6ssssdsdsd 91,CN= user1 drop: UP 10.11.100.100 TO 10.20.20.139 ICMP 8:0:1:23249 SIZE 60
How can I grep for the date (Nov 05) together with user1 on the command line?
Thanks
|
Maybe this will help you get the details. You can run the command below on an online shell such as https://rextester.com/l/bash_online_compiler :
#!/bin/bash
str='Nov 05 14:36:03.752146 server.com [2020-11-05T14:36:03.752Z] [C 7f7e597fa700] R=6ssssdsdsd 91,CN= user1 drop: UP 10.11.100.100 TO 10.20.20.139 ICMP 8:0:1:23249 SIZE 60'
dt=`echo "$str" | cut -f1 -d":"| awk '{print $1 " " $2}'`
echo "Date : $dt"
Usr=`echo "$str" | cut -d',' -f2 | cut -d':' -f1 | awk '{print $1 " " $2}'`
echo $Usr
Output :
Date : Nov 05
CN= user1
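A simpler route, if the goal is just to filter the log rather than extract fields, is to chain two greps, or to let awk do both the matching and the extraction in one pass. The field numbers below assume the whitespace-separated format of the sample line:

```shell
# Sample line from the question, standing in for the real log file:
logline='Nov 05 14:36:03.752146 server.com [2020-11-05T14:36:03.752Z] [C 7f7e597fa700] R=6ssssdsdsd 91,CN= user1 drop: UP 10.11.100.100 TO 10.20.20.139 ICMP 8:0:1:23249 SIZE 60'

# Keep whole lines matching both the date and the user:
printf '%s\n' "$logline" | grep 'Nov 05' | grep -w 'user1'

# Or pull out just the date and user fields with awk
# ($1=$2 is the date, $10 is the username in this format):
printf '%s\n' "$logline" | awk '/Nov 05/ && /user1/ {print $1, $2, $10}'
```

Against a real file, replace the printf with the file name as the last argument to grep or awk.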
| grep multiple words that have space |
1,619,357,947,000 |
How may I change my doas prompt? For example, to change sudo prompt you just run
export SUDO_PROMPT="Prompt: "
Is there an equivalent for doas?
|
The OpenBSD doas(1) utility does not allow you to change its prompt in the same way that sudo does. This is by design, to keep it as simple as possible (which is why it was written in the first place).
| Changing "doas" prompt |
1,619,357,947,000 |
I am creating a custom shell using C language and I am successful with parsing, fork and exec, pipes, redirection etc. I noticed one particular type of command which seems to be throwing my shell off.
In bash shell , the following command works.
bash> echo "abc" >> tempFile
bash> sed s/a/b/g tempFile
bash> sed 's/a/b/g' tempFile
In my custom shell,
mysh> sed s/a/b/g tempFile
mysh> sed 's/a/b/g' tempFile
The first sed command works, the second fails as
sed: 1: "'s/a/b/g'": invalid command code '
This is how I fork and execute the command in my shell.
execvp(qualifiedPath, arguments)
In both the above command the qualifiedPath is "/usr/bin/sed" and the arguments are NULL terminated array of character pointer like
[0] = "sed",
[1] = "s/a/b/g",
[2] = "TEMP",
[3] = (char*)NULL
for the first command and
[0] = "sed",
[1] = "'s/a/b/g'",
[2] = "TEMP",
[3] = (char*)NULL
for the second command, respectively.
Does anyone know why the single quote causes exec to fail, and also the same behavior is observed for
mysh> sed "s/a/b/g" tempFile
sed: 1: "'s/a/b/g'": invalid command code '
|
Thank you to @iFreilicht and @Andy Dalton. It was very silly of me. The Bash shell consumes the quotes, and depending on whether the argument was enclosed in ' or ", the shell will interpolate the arguments. This stackoverflow thread offers a more detailed explanation.
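The difference is easy to see by printing what the shell actually passes in argv. A shell that performs quote removal hands sed the bare pattern; a custom shell that skips that step hands over the quotes themselves, which sed then rejects:

```shell
# printf with a <...> wrapper shows exactly what the program receives:
printf '<%s>\n' s/a/b/g       # argv[1] is s/a/b/g
printf '<%s>\n' 's/a/b/g'     # same: the shell consumed the quotes
printf '<%s>\n' "'s/a/b/g'"   # argv[1] keeps the inner single quotes
```

So the fix for the custom shell is to strip one level of quoting during tokenization, before the argument array is handed to execvp.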
| Understanding command line arguments in custom shell and the effect of using quote |
1,619,357,947,000 |
I installed Debian 10 with Xfce Desktop Environment. And I installed a minimum set of browsers like this:
$ sudo apt -y install firefox chromium
For development purposes I also want to install opera, yandex-browser and google-chrome. Previously I used a browser for this.
How can I do it from console?
|
Create a script to install the browsers, consisting of two parts: first add the browser repositories, then install the browsers themselves from those repositories. browsers.deb.sh:
wget -qO - https://deb.opera.com/archive.key | sudo apt-key add
sudo tee /etc/apt/sources.list.d/opera-stable.list <<DEBREPO
# This file makes sure that Opera Browser is kept up-to-date
# as part of regular system upgrades
deb https://deb.opera.com/opera-stable/ stable non-free #Opera Browser (final releases)
DEBREPO
sudo tee /etc/apt/sources.list.d/opera-developer.list <<DEBREPO
# This file makes sure that Opera Browser is kept up-to-date
# as part of regular system upgrades
deb https://deb.opera.com/opera-developer/ stable non-free #Opera Browser (final releases)
DEBREPO
wget -qO - https://repo.yandex.ru/yandex-browser/YANDEX-BROWSER-KEY.GPG | sudo apt-key add
sudo tee /etc/apt/sources.list.d/yandex-browser-beta.list <<DEBREPO
### THIS FILE IS AUTOMATICALLY CONFIGURED ###
# You may comment out this entry, but any other modifications may be lost.
deb [arch=amd64] http://repo.yandex.ru/yandex-browser/deb beta main
DEBREPO
wget -qO - https://dl-ssl.google.com/linux/linux_signing_key.pub | sudo apt-key add
sudo tee /etc/apt/sources.list.d/google-chrome.list <<DEBREPO
### THIS FILE IS AUTOMATICALLY CONFIGURED ###
# You may comment out this entry, but any other modifications may be lost.
deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main
DEBREPO
sudo apt update
sudo apt -y install opera-stable opera-developer yandex-browser-beta google-chrome-stable
Manually adding the Opera APT repository
Installing and updating the Yandex Browser beta
Linux Software Repositories
| How to install a browser from the command line? |
1,619,357,947,000 |
Basically I have an application server and would like to allow the owners/admins of the application to run any sudo command on the directories related to the application without a password. For example, if the application directory is /opt/my-application, the user would not be prompted for a password when running "sudo vim /opt/my-application/some-file", but would be when running "sudo vim /var/some-file".
I thought the right way to go about it would be to determine which commands they frequently use in the application's directories, but I feel that would still allow them too much power elsewhere in the system.
Edit September 11th:
Seeing some of the answers, I figure I should clarify my question. At this moment, the users have sudo access. I'm not opposed to them having sudo access to do the job they need. What I am curious about is a potential solution I can put in place to make it easier on them to work in the very specific directories they would regularly modify. My hope was I could enable passwordless sudo for any command they would run on those specific directories. The answers thus far have helped me implent passwordless sudo for a few system commands they would need, such as systemctl to be able to manage the application.
|
In your sudoers file:
user ALL=(root) NOPASSWD: /usr/bin/vim /opt/my-application/some-file
This line will allow only that specific command to run without a password.
But could you not instead change the permissions of the directory (chmod or setfacl), or change the default umask value?
| Is it possible to allow passwordless sudo for a specific directory |
1,599,246,125,000 |
Taking a random word as input from the user, can anyone suggest a logic in shell scripting to check if any permutation of that word is actually an executable command or not?
|
Try generating the permutations of the word and piping each one through the compgen builtin, which can list all available command names.
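A bash sketch of that idea follows. Both perms and check_word are hypothetical helper names of mine, not standard commands; perms recursively emits every permutation of its argument, and compgen -c lists command names completing a given prefix (so an exact-match grep confirms the word is a real command):

```shell
# Emit every permutation of $1, one per line ($2 is the accumulator).
perms() {
    local word=$1 prefix=$2 i
    if [ -z "$word" ]; then
        printf '%s\n' "$prefix"
        return
    fi
    for (( i = 0; i < ${#word}; i++ )); do
        perms "${word:0:i}${word:i+1}" "$prefix${word:i:1}"
    done
}

# Report which permutations of $1 name an available command.
check_word() {
    perms "$1" '' | sort -u | while read -r p; do
        if compgen -c "$p" | grep -qxF "$p"; then
            printf '%s is a command\n' "$p"
        fi
    done
}
```

For example, check_word sl should report that ls is a command. Note this is factorial in the word length, so it is only practical for short words.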
| Check if a user given random word is an executable linux command or not? |
1,599,246,125,000 |
My computer turned itself off to protect itself from a power-surge. This has happened before many times, with nothing weird happening the next time I turned on the computer. I would press the power button, wait a minute, enter my password, and then be right back at my Desktop. This time however, something different happened. When I press the power button, I am taken to the BusyBox command line.
To be completely honest, I am way out of my depth. My only guess for what is happening is that before, there was a file that BusyBox automatically ran when the command line was opened at start-up, and now it does not. This seems pretty absurd to me though, because the problem started after a power-surge, which primarily affects hardware, so the software should be fine. Even if this is what happened, I have no idea what file is supposed to be run, where in the directory tree it would be located, or the syntax that BusyBox uses.
|
Your disk probably has been at least partially corrupted.
There are a variety of good answers on this post. I would post this as a comment, but I am 4 points away from being able to comment:
https://askubuntu.com/questions/137655/boot-drops-to-a-initramfs-prompts-busybox
| Stop BusyBox Command Line for Loading on Start Up |
1,599,246,125,000 |
I'm trying to find all lines of a file not being after a specific pattern.
For some time I had an issue with my history using GNU bash (version 4 and 5) where commands appeared in duplicates. I assumed this was due to the fact that in my .bashrc I had the following line:
PROMPT_COMMAND="history -a; history -n; $PROMPT_COMMAND"
and since I'm using terminal multiplexers (screen and/or tmux) the above mentioned command gets executed several times (therefore echo $PROMPT_COMMAND results in history -a; history -n; history -a; history -n;).
In some situations (especially when doing stuff concomitantly on different panes/windows/frames/buffers) the last command I entered was stored twice or even more often in my ~/.bash_history. This led to entries like the following:
#1596110297
yadm list -a | xargs -t ls -l
yadm list -a | xargs -t ls -l
Needless to say, this is pretty annoying. I just (hopefully) found a fix for the history issue (by changing the command to PROMPT_COMMAND="history -a; history -n"), but correction: this did NOT solve the issue with duplicated entries in the history.
Now I'd like to get rid of the duplicated entries.
Therefore I'm currently trying to find a regular expression to mark everything except lines starting with # and one line after that. My first idea was to combine grep -v (to invert the selection) and grep -A 1 (to get additionally one line after the matching pattern). But
grep -v "^#" -A 1 ~/.bash_history
did not yield the result I hoped for.
Therefore my question: does anyone have a good idea on how to do that using grep? If not: how could I accomplish this with other tools (sed, awk, ...)?
|
As far as I understand grep -v "^#" -A 1 means to print the lines that don't start with a hash sign, and one line after each. But don't you want the opposite, print the lines that do start with a hash sign, and one line after?
Given a test file:
#123
echo this
echo this
#456
echo that
echo that
echo that
#789
echo third
grep -A1 '^#' history.txt | grep -vxFe -- prints:
#123
echo this
#456
echo that
#789
echo third
The second grep is to get rid of the group separators grep -A prints.
Alternatively uniq history.txt should work to print just one of each set of consecutive identical lines.
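To actually clean the history file (rather than just view it), an awk one-liner can keep each timestamp line and only the first copy of each consecutively repeated command within an entry. This is a sketch of mine, not part of the answer above, and you should back up ~/.bash_history before rewriting it:

```shell
# Drop consecutive duplicate command lines, resetting at each '#' timestamp.
dedup_history() {
    awk '/^#/ {print; prev=""; next} $0 != prev {print; prev=$0}'
}
# e.g.:  dedup_history < ~/.bash_history > cleaned && mv cleaned ~/.bash_history
```

The prev variable is cleared on every timestamp line, so identical commands run at different times are preserved; only the duplicates written within a single entry are removed.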
| grep for lines not after a pattern |
1,599,246,125,000 |
I would like to write a script which add/create the file (like touch does). But, something own. First lines will be variables. Where:
echo -n "Choose your folder to save the file: "
read dir
echo -n "What is the name of your file: "
read name
etc.
But in this case I should type a full path, but I want to use TAB button that complete queries.
I just want to make a TAB button works in my script. Like it works with cat, cd and with so many other commands.
Thanks in advance.
|
You want tab-completion to work when entering input to Bash's read? Then read -e is what you're looking for (manual):
Options:
-e use Readline to obtain the line in an interactive shell
e.g.
/tmp/foo$ touch abcdef abcghi
/tmp/foo$ read -e -p 'enter a filename: '
enter a filename: ab[tab]
abcdef abcghi
Though you really should make the script accept input as command line arguments instead. That way, it's easier to run it from other scripts, or from shell loops etc.
Something like this, $1 is the first argument to the script, $# holds the number of arguments.
#!/bin/bash
if [ "$#" != 1 ]; then
echo "Usage: $0 <filename>"
exit 1
fi
echo "Doing something with '$1'..."
| Make 'complete' built-in command works for 'read' command |
1,599,246,125,000 |
I have a large text file with multiple occurrences of a tag containing a URL:
[tag]https://example.com/222389/link/11835457224168404[/tag]
I need to reformat the tags as follows:
[new-tag]11835457224168404[/new-tag]
(capture just the part of the url after 'link' (the 'id') and modify the tag to 'new-tag':
There can be multiple tags per line;
The tag locations are not uniform - they are found in random positions throughout the file;
The tag content can have a space at the start (' http'), use 'http://" or 'https://' and sometimes use 'www';
The tag occasionally has content or space at the end (after the 'id') such as follows:
[tag]https://example.com/222389/link/11835457224168404/qwertyiop[/tag]
or
[tag]https://example.com/222389/link/11835457224168404?link=11835457224168401 [/tag]
There are sometimes occurrences of '[tag]' on their own (without the closing [/tag] or 'http') that need to be ignored.
How can I do this with sed or alternatives?
|
Strategy
Whilst it is possible to write regular expressions which don't match multi-character strings, they get complicated. We will use a trick to convert [tag] and [/tag] into single characters and then use negated character classes. In this script I will use control-a and control-b. It is critical that these characters don't appear in the file. As these are hard to type, I will use a couple of variables s and e for the start and end tags. I use notend to represent any sequence of characters which is not the end tag.
#!/bin/bash
s=$'\001' # control-a, for the start tag
e=$'\002' # control-b, for the end tag
notend="[^$e]*" # expression for not the end tag.
# Program, Change the tags into single characters
# change matched pairs of tags into new form
# convert any unmatched tags back to original form
prog='
s:\[tag]:'"$s"':g
s:\[/tag]:'"$e"':g
s:'"$s$notend"'/link/\([0-9]*\)'"$notend$e"':[new-tag]\1[/newtag]:g
s:'"$s"':[tag]:g
s:'"$e"':[/tag]:g'
# run sed, passing any parameters
sed -e "$prog" "$@"
usage
Save this script, make it executable, then run it giving the datafile as a parameter and redirecting the output to a temp file. examine the temp file.
| Sed change a tag and keep part of the contents |
1,599,246,125,000 |
I'm working with Apache Geode nodes, all of the work is being done via the Linux CLI. Normally when I edit the config files (with VIM) the text looks like this:
"nodes": {"long_UUID_here": {"node_uuid": "another_long_string_here", pair_uuid:" null,
"is_admin_node": false, "ip_address": "ip-or-fqnd.domain.com", "preferred_addresses": {},
"node_name": shortname.domain.com", "membership_state": null, "region": null}
...and that's just one node. There are often several of them, and the text is just all globbed together in a big hard-to-read mess. What would make this a ton easier is if I could view and edit the file like so:
"nodes": {
"long_UUID_here": {
"node_uuid": "another_long_string_here",
"pair_uuid:" null,
"is_admin_node": false,
"ip_address": "ip-or-fqnd.domain.com",
"preferred_addresses": {},
"node_name": shortname.domain.com",
"membership_state": null,
"region": null
}
I know this is possible and I've seen it before, but I don't know how it's done.
Update: I have to point out, this is a SLES Appliance (often with no internet access) so I cannot install any app, such as jq
|
A colleague of mine from DevOps showed me how to do this with sed + regEx + json.tool:
sed -nre "/clusterMembership/ s/^[^']+'([^']+)','([^']+)'.*/\2/p" /path/to/file/here/node.file.config | python -m json.tool
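The key insight here is that json.tool ships with the stock Python interpreter, so nothing needs installing on the appliance. Any JSON fragment can be pretty-printed this way (use python instead of python3 on systems that only have Python 2):

```shell
# Pretty-print a JSON blob without jq; json.tool is in the stdlib.
printf '%s' '{"nodes": {"uuid1": {"is_admin_node": false, "region": null}}}' \
    | python3 -m json.tool
```

The sed in the answer above just extracts the JSON portion from the surrounding config line before handing it to json.tool.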
| How can I list out bracketed text in Linux CLI in a readable format? Apache Geode node names & info |
1,599,246,125,000 |
I want to repetitively untar a file which I often download in new versions; for example:
cd download_directory
myFile-1.99.tar.gz # Version 2.x might be released next day or next week or next month;
I have tried a similar command with shellglob/wildcard which failed:
cd download_directory
tar -xzv myFile-*.tar.gz
I also tried the following command after reading the manual about --wildcards but it also failed:
cd download_directory
tar -xzv myFile-*.tar.gz --wildcards
How to untar in a version agnostic way?
|
cd download_directory
tar -xzf "$(find . -maxdepth 1 -name 'myFile-*.tar.gz' | sort -V | tail -n 1)"
(sort -V orders version numbers, so the newest archive is picked even when several versions are present; plain find output order is not guaranteed.)
| How to untar in a version agnostic way? |
1,599,246,125,000 |
I want to have some extra binaries / commands available when I'm in a terminal, triggered by the CWD.
Let following be the directory structure as one example
├── P1
| ├── mybin
| │ └── cmd1
| ├── S1
| │ └── mybin
| │ └── cmd2
| ├── S2
└── P2
└── S3
Then in this directory structure
cmd1 and cmd2 are available in P1/S1.
only cmd1 is available in P1 or P1/S2.
neither is available in P2 or P2/S3
In general, I want this to work with any directory structure. This is similar to how git detects if you're in a git repository. It's equivalent to putting ./mybin, ../mybin, ../../mybin on $PATH.
How should I modify my PATH so that this works? I'm using the fish shell, but I'll be happy to port a solution from any other shell to mine.
|
I understand that you're not happy with just
PATH="./mybin:../mybin:../../mybin:../../../mybin:$PATH"
until some limit like 8 or so sub-directories.
Instead of polluting and breaking your regular command lookup, I suggest you use a short wrapper, such that the recursive lookup will only be done for the commands prefixed by it.
Eg. k foo will try ./mybin/foo, ../mybin/foo, etc. but simply foo will be looked up in $PATH, as usual.
But I'm not using fish, and I have no idea how that could be written in the fish shell's language. With bash / ksh / zsh, it could be something like:
function k {
typeset p=. cmd=$1; shift
while
typeset e=$p/mybin/$cmd
if [ -x "$e" ]; then "$e" "$@"; return; fi
[ ! "$p" -ef / ]
do
p=../$p
done
echo >&2 "k: not found: $cmd"; return 1
}
If that works, you could make it into a standalone executable script, instead of trying to translate it to fish:
#! /bin/bash
p=. cmd=$1; shift
while
e=$p/mybin/$cmd
if [ -x "$e" ]; then exec "$e" "$@"; fi
[ ! "$p" -ef / ]
do
p=../$p
done
echo >&2 "k: not found: $cmd"; exit 1
| Recursive lookup of a binary in current directory's parents through hierarchical $PATH |
1,599,246,125,000 |
I was trying to find all the users in my Gnu/Linux system that have access to one particular folder.
I tried
$ ls -ld */
dr-xr-xr-x. 2 root root 28672 Mar 20 21:33 bin/
dr-xr-xr-x. 4 root root 4096 Mar 16 16:02 boot/
I tried ls -ldd */, but it lists only one row per folder, showing the user it was created with.
But I was expecting a listing where each folder shows all the user names that can access it.
Eg:
$
dr-xr-xr-x. 2 user1 user1 28672 Mar 20 21:33 bin/
dr-xr-xr-x. 2 user2 user2 28672 Mar 20 21:33 bin/
Is there any way to do this?
|
Quick and dirty, and only tested on a Mac (which doesn't use getent); I replaced getent with the Mac command dscl . -ls /Users to test.
chkgrp=$1
for userid in $(getent passwd | cut -d: -f1); do
    if id -Gn "$userid" | grep -wq "$chkgrp"; then
        echo "$userid is in the $chkgrp group"
    fi
done
(Note: the passwd entries must be cut down to user names first; iterating over the raw getent passwd output word-splits any GECOS field that contains spaces.)
| Gnu/Linux command to list all users who has access to one folder |
1,599,246,125,000 |
I was reading about the echo command in UNIX and I found the -e option, which is used to enable the interpretation of backslash escapes. One of these escape sequences is \\, but when I used the two commands:
echo "hello\\\world"
and
echo -e "hello\\\world"
I get the same output: hello\\world. So what is the difference?
|
The default is to expand escape sequences, the -e option is non-standard.
If your echo implementation does not include -e in the echo output, it is non-compliant.
The problem here is that there are implementations that only follow the minimal POSIX requirements and ignore the fact that a platform that is allowed to be called UNIX-compatible needs to implement support for the POSIX XSI enhancements.
BTW: bash on MacOS X and Solaris is compiled to always expand escape sequences. Bash in the default compile variant is not POSIX XSI compliant.
POSIX compliant shells like ksh93, bosh or dash expand escape sequences by default as required.
If you like to check the behavior, your test string is not a good idea. A better test would be:
echo '\tfoo'
and to check whether foo is prepended by a tab character in the output.
The reason, why you may had problems is that you used foo\\bar and that the shell treats backslashes in double quoted strings already. If you like to check the echo behavior, you need to use single quoted strings as arguments.
| Unix echo command with -e option |
1,599,246,125,000 |
Hi, I have a log file that contains a lot of information and it is pretty difficult to spot what I am looking for, so I came up with this command that shows me only what I want to see in the log. It acts as a listener: when the pattern matches, it shows only the search results.
tail -f file.log | GREP_COLOR='01;36' egrep --color=always "\"stringOneExample\""
And it works OK; the problem is when I pipe another grep:
tail -f file.log | GREP_COLOR='01;36' egrep --color=always "\"jsonKeyOne\"" | GREP_COLOR='01;31' egrep --color=always "\"jsonKeyTwo\""
I think it does not work because, when I pipe one into the other, the result of the first does not contain the pattern of the second, so nothing is shown. I want both (or more) greps to operate on the whole file and just give each string a different color, in order to spot the differences more easily.
NOTE: if I add |$ to the end, it acts as a regular tail and shows me a lot of extra info that is not what I want:
tail -f file.log | GREP_COLOR='01;36' egrep --color=always "\"stringOneExample\":|$"
|
Preferably use:
grep -e 'jsonKeyOne' -e 'jsonKeyTwo'
…to OR your terms. Depending on your flavor of grep, -E 'jsonKeyOne|jsonKeyTwo' is possible too. This is the fastest option and shows only the lines containing the terms.
Different colors work by chaining greps: the first pass colors only its own term but lets every other line through (that is what the |$ alternation is for), the next pass does the same with a different color, and so on until the last term.
Either let grep do the coloring or use a real syntax highlighter; better not both.
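Putting those pieces together for the original use case: filter once so only interesting lines survive, then add one colorizing pass per term, each matching "term or end-of-line" so non-matching lines pass through unchanged. In the demo below a printf stands in for tail -f file.log so the pipeline can be run as-is; swap it back for live use (GREP_COLOR is the legacy spelling; newer greps prefer GREP_COLORS):

```shell
# Demo input stands in for `tail -f file.log`:
printf '%s\n' 'x "jsonKeyOne" 1' 'y "jsonKeyTwo" 2' 'z other' \
    | grep --line-buffered -E '"jsonKeyOne"|"jsonKeyTwo"' \
    | GREP_COLOR='01;36' grep --color=always --line-buffered -E '"jsonKeyOne"|$' \
    | GREP_COLOR='01;31' grep --color=always -E '"jsonKeyTwo"|$'
```

The --line-buffered flags matter with tail -f: without them grep buffers output in blocks and matches appear only in bursts.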
| How to tail on demand based on a search and colorized the result |
1,599,246,125,000 |
I saw other posts with the same problem, but I can't figure it out since everyone does different things. I'm new to shell scripting as well, so my limited knowledge gets even more confused when I read those posts.
I have this script working perfectly well on another server, but somehow on the new server I need to check different service names and the script returns the error below. I hope you can help me point out what I'm doing wrong here.
The error is: line 9: test: too many arguments
Here is the script:
#!/usr/bin/bash
GREEN=0
YELLOW=1
RED=2
pid=`/bin/ps -eo fname,pid | /usr/bin/awk '{if ($1 == "sshd") print $2}'`
if test $pid
then
message="sshd Server is running. PID: $pid"
status=$GREEN
else
message="sshd Server is stopped."
status=$RED
fi
echo $message
exit $status
|
It seems you have a lot of ssh connections, so $pid expands to several words and the unquoted test $pid receives too many arguments.
Try this, which collects the PIDs into an array and tests its length instead:
#!/usr/bin/bash
GREEN=0
YELLOW=1
RED=2
pid=(`/bin/ps -eo fname,pid | /usr/bin/awk '{ if ($1 == "sshd") print $2}'`)
if [ ${#pid[@]} -gt 0 ]
then
message="sshd Server is running. PID: ${pid[@]}"
status=$GREEN
else
message="sshd Server is stopped."
status=$RED
fi
echo $message
exit $status
if you just want only the PID of ssh service, try the below
#!/usr/bin/bash
GREEN=0
YELLOW=1
RED=2
pid=`cat /var/run/sshd.pid`
if test -n "$pid"
then
message="sshd Server is running. PID: $pid"
status=$GREEN
else
message="sshd Server is stopped."
status=$RED
fi
echo $message
exit $status
| test: too many arguments returns |
1,581,700,593,000 |
Post applying the following OS patch SUSE-SLE-Module-Public-Cloud-12-2020-251 (Security Update for aws-cli) for the month of Feb 2020, AWS CLI was throwing the error SSL validation failed. The command was working fine before applying the OS security patch
Does anybody know how to fix this issue?
aws s3 ls s3://
SSL validation failed for https://s3.us-east-2.amazonaws.com/ [Errno 2] No such file or directory
My OS details:
cat /etc/os-release
NAME="SLES"
VERSION="12-SP4"
VERSION_ID="12.4"
PRETTY_NAME="SUSE Linux Enterprise Server 12 SP4"
aws cli version:
aws --version aws-cli/1.16.297 Python/2.7.13
Linux/4.12.14-95.45-default botocore/1.12.213
Your help would be appreciated.
|
An upgrade of aws-cli fixed the issue. Below commands, I ran and got the issue resolved.
zypper in python-pip
pip install --upgrade awscli
aws --version
aws-cli/1.17.17 Python/2.7.13 Linux/4.12.14-95.6-default botocore/1.14.17
| SSL validation fails, after applying SUSE-SLE-Module-Public-Cloud-12-2020-251 |
1,581,700,593,000 |
I am trying to clean the apt cache. I am running the Linux container (penguin, kernel 4.19.79) on a Chromebook. I have run apt clean and apt-get clean to no avail; I get permission denied. I have attached a screenshot of the error I get. I'm the root user.
apt clean permission denied
I hope somone can help me. Thank you.
|
When running any command in the 'open' terminal (actually chroot'd inside its own container) you have to use sudo apt clean, because the privileges you have are granted to you from... well, Google I presume. It took me a while to catch on to that; I have the same on my Chromebook. Also, as Mark Stewart said, try running su first and then try it. If you run into a snag there, for whatever reason, you can use sudo sudo /bin/bash, which runs another instance of bash, effectively making you root. Then run passwd to set a new root password and exit to return to the normal user mode. After that, when you run su, the password you just set should work.
| How to fix access denied even as owner when cleaning the cache? |
1,581,700,593,000 |
Doing echo -e "\uDDAA" (which is not a valid Unicode codepoint; it is a surrogate) in bash prints ���.
How can I make it not print anything if its not a valid codepoint?
What I'm trying to do is add, in front of every codepoint in NamesList.txt, the character it represents. Right now I have it as:
sed -e 's/\<\([0-9A-F]\{4,6\}\)\>/\\U\1 \1/g' < NamesList.txt | while read -r line;do echo -e "$line"; done | sponge NamesList.txt
If there's a better way of doing it that completely avoids the issue, then please post that solution.
|
You should not create those sequences in the first place. This will print spaces for control characters (\pC) and give the marks (\pM) a space to ride on:
perl -CO -pe 's{^([0-9A-F]+)\b}{$x=$1,$c=chr hex $x;if($c=~/\pC/){$c=" "}elsif($c=~/\pM/){$c=" $c"}"$c $x"}e' NamesList.txt
(use -i NamesList.txt if you want to edit the file in place)
See Unicode Character Properties. Surrogates, bidi marks and other controls that you do NOT want displayed are in the category "Other" (\pC). Accents and other combining marks are in the category "Marks" (\pM).
| stop bash from printing invalid unicode sequences |
1,581,700,593,000 |
How can I check, from the command line, that the audio line output (of a virtual device created with ALSA plugins) has an audio signal? And check the strength of the signal?
|
I've found a solution using the "sox" tool (sox man). I can use the method proposed in this answer (Send sound output to application and speaker) to get the signal in an alsa virtual device call "Loopback" while it's also send to the output device and then use sox to find if there is a signal and it's strength:
sox -b 16 -t alsa hw:Loopback,1,0 -r 48000 -n stat
(-b 16 -> 16-bit signal, -t alsa hw:Loopback,1,0 -> the virtual device that gives me the signal, -r 48000 -> sampling frequency, and -n stat -> analyze the signal)
this command gives an output like that:
Input File : 'hw:Loopback,1,0' (alsa)
Channels : 2
Sample Rate : 48000
Precision : 16-bit
Sample Encoding: 16-bit Signed Integer PCM
In:0.00% 00:00:02.47 [00:00:00.00] Out:115k [-=====|=====-] Hd:3.9 Clip:0
Samples read: 229376
Length (seconds): 2.389333
Scaled by: 2147483647.0
Maximum amplitude: 0.630951
Minimum amplitude: -0.630981
Midline amplitude: -0.000015
Mean norm: 0.159916
Mean amplitude: -0.004383
RMS amplitude: 0.198459
Maximum delta: 1.176422
Minimum delta: 0.000000
Mean delta: 0.223984
RMS delta: 0.278537
Rough frequency: 10721
Volume adjustment: 1.585
when you get signal and like that when there is no signal:
Input File : 'hw:Loopback,1,0' (alsa)
Channels : 2
Sample Rate : 48000
Precision : 16-bit
Sample Encoding: 16-bit Signed Integer PCM
In:0.00% 00:02:23.70 [00:00:00.00] Out:6.89M [ | ] Clip:0
Samples read: 13787136
Length (seconds): 143.616000
Scaled by: 2147483647.0
Maximum amplitude: 0.000000
Minimum amplitude: 0.000000
Midline amplitude: 0.000000
Mean norm: 0.000000
Mean amplitude: 0.000000
RMS amplitude: 0.000000
Maximum delta: 0.000000
Minimum delta: 0.000000
Mean delta: 0.000000
RMS delta: 0.000000
Rough frequency: 0
The meaning of the "-n stat" tool can be found at (Sox man page)
| Raspbian: Check the sound output |
1,581,700,593,000 |
I am looking at KeePassXC on my Linux Mint Cinnamon system. The installation and initial usage all went well, following the built-in tutorial; but there is a problem with Firefox integration.
After adding KeePassXC-Browser to Firefox, I had an error from the add-on:
Cannot connect to KeePassXC. Check that browser integration is enabled
in KeePassXC settings.
The browser integration is enabled. I guess that the error occurs because I am running Firefox under firejail, so I plan to run KeePassXC under firejail also.
I am having problems running KeePassXC under firejail. I can't even run KeePassXC from the command line. Initially, I got an error that the executable was missing. I found it under /var/lib/flatpak/app and created a link. Then I got an error that the shared library libqrencode.so.4 was not found.
This question is not about shared libraries, I know about $LD_LIBRARY_PATH, but I don’t know why the installation did not set up those things. There is nothing relevant in /etc/ld.so.conf.d or in $LD_LIBRARY_PATH. It almost seems that the installation is incomplete, but the GUI Software Manager reported no errors.
How do I run KeePassXC from the command line? Is my guess valid that firejail is isolating Firefox from KeePassXC? Can I run KeePassXC with firejail? Does anyone have any tips about running these three together?
(I am running the latest versions of Mint and the applications.)
Any help would be appreciated.
|
Since the executable is found under /var/lib/flatpak/app, I am assuming you have installed keepassxc as a flatpak app. As of firejail v0.9.60, firejail does not have flatpak/snap support. See the release notes:
firejail (0.9.60) baseline; urgency=low ...
* drop support for flatpak/snap packages
If you want to sandbox keepassxc using firejail, you need to install it via the deb package, compile it from the source, or get the appimage. Since you are on Linux Mint, the most straightforward way is to install it from the repository, you can do it via:
sudo apt-get update
sudo apt-get install keepassxc
Also, make sure that /etc/firejail/keepassxc.profile has the line noblacklist ${HOME}/.mozilla, which prevents the firefox directory from being blacklisted, so that keepassxc have access to the browser extension.
As a side note, flatpak and snap apps have their own way of sandboxing. For example, default flatpak build options result in:
No access to any host files except the runtime, the app and ~/.var/app/$APPID. Only the last of these is writable.
No access to the network.
No access to any device nodes (apart from /dev/null, etc).
No access to processes outside the sandbox.
Limited syscalls. For instance, apps can’t use nonstandard network socket types or ptrace other processes.
Limited access to the session D-Bus instance - an app can only own its own name on the bus.
No access to host services like X11, system D-Bus, or PulseAudio.
In addition, flatpak version is not officially supported by KeepassXC team, see here.
References
https://firejail.wordpress.com/download-2/release-notes/
https://docs.flatpak.org/en/latest/sandbox-permissions.html#
https://keepassxc.org/download/#linux
| How to use KeePassXC with Firefox and firejail |
1,581,700,593,000 |
I have installed the processing-3.5.3 IDE but really want to use the Sublime Text editor as my go-to for Processing sketches. What I have done so far:
Install the "Processing" package in sublime.
Installed processing-java in /home/testuser/Downloads/Compressed/processing-3.5.3-linux64/processing-3.5.3
I believe there must be an issue with my PATH to processing-java, but no matter what I do I am unable to get it to work. I also tried "processing-java --help", but it always returns "processing-java: command not found" in the terminal.
OS: Pop!OS
Thanks
Kluivert
|
It looks like you're right: you need to add processing-java to your PATH environment variable. See this detailed guide on how to achieve this, but it should be something like this (though I'd recommend moving the binary elsewhere):
export PATH="$PATH:/home/testuser/Downloads/Compressed/processing-3.5.3-linux64/processing-3.5.3"
Also take note that you can use Processing with pure Java.
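To see why adding the directory to PATH fixes the "command not found" error, here is a self-contained illustration (the /tmp/fakebin directory and the mytool name are made up for the demo):

```shell
# Create a made-up tool in a directory that is not on PATH yet:
mkdir -p /tmp/fakebin
printf '#!/bin/sh\necho hello-from-tool\n' > /tmp/fakebin/mytool
chmod +x /tmp/fakebin/mytool
# Once the directory is on PATH, the shell can resolve the command by name:
PATH="$PATH:/tmp/fakebin" mytool   # hello-from-tool
```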
| How do I fix the "processing-java: command not found" error in the terminal |
1,581,700,593,000 |
So here's an excerpt of my directory tree:
|-- 20070214_014700.a
| |-- info
| |-- processed
| |-- HH.EL..BHZ
| |-- AZ.AS..HHZ
| |-- (hundreds more)
| |-- raw
| |-- resp
|-- 20100737_055560.a
| |-- info
| |-- processed
| |-- raw
| |-- resp
|-- 20190537_028750.a
| |-- info
| |-- processed
| |-- raw
| |-- resp
I have ~13,000 directories (ending in .a) and each directory has a 'processed' subdirectory which has files I'd like to copy from every processed/ directory into a single directory. Some of these files may have the same filename so I'd also like to rename them based on their parent directory. I'm not too picky but something similar to:
20070214_014700_HH.EL..BHZ
The whole dataset is 3 TB so I've been testing on just a few directories using 'find':
find . -name processed -exec cp -r '{}' 'test/{}' \;
For some reason this dumps some files into test/ but also creates another processed/ directory inside of that. I'm not sure how to include a copy command and renaming function into find at the same time so any advice would be great. Thanks for the help.
|
find . -type f -path "./*.a/processed/*" -exec sh -c '
for path; do
prefix=${path%%.a/processed*}
cp "$path" "test/${prefix##*/}_${path##*processed/}"
done
' sh {} +
Option -type f searches for regular files in the given path and the -exec option starts a shell script with find's result as arguments ({} +).
In the for-loop each argument is assigned to the path variable.
Example: If variable path is ./20070214_014700.a/processed/AZ.AS..HHZ, then
prefix=${path%%.a/processed*} removes the suffix -> ./20070214_014700
${prefix##*/} removes the prefix to the first / -> 20070214_014700
${path##*processed/} also removes the prefix and leaves the filename -> AZ.AS..HHZ
The resulting target filename of the cp command is test/20070214_014700_AZ.AS..HHZ.
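The three expansions can also be checked in isolation, using the sample path from above:

```shell
# Tracing the parameter expansions on the sample path:
path=./20070214_014700.a/processed/AZ.AS..HHZ
prefix=${path%%.a/processed*}                     # ./20070214_014700
echo "test/${prefix##*/}_${path##*processed/}"    # test/20070214_014700_AZ.AS..HHZ
```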
| Copying files from specific subsubdirectories based on subdirectory name into single directory then renaming |
1,581,700,593,000 |
all experts
I have two types of files in the same directory
ex1) record1.txt (record2, record3, record4 ...)
11111 absda qwedc
11112 uiyds dqeds
11113 eqwev jfsec ...
ex2) Summary1.txt (Summary2, Summary3, Summary4 ...)
----some data is written----
.....
.....
***RESULT 111.114 30.344 90.3454*** OTHERNUMBER#1 OTHERNUMBER#2 .....
.....
.....
All I want to do is extract the RESULT X (number), Y (number), Z (number) values from each Summary#.txt.
And then put those values into the corresponding record#.txt, adding some information, like this:
X Y Z
111.114 30.344 90.3459
11111 absda qwedc
11112 uiyds dqeds
11113 eqwev jfsec ...
So, I want my final file, record#.txt, to look above.
I tried sed and cat...
all failed.
Thanks in advance!
|
If I understand you correctly, this is my proposal (note that the RESULT line lives in the Summary file, so that is the file to grep; the extracted values are then prepended to the matching record file):
for i in record*.txt; do
    xyz=$(grep -oP "(?<=RESULT ).*(?=\*\*\*)" "Summary${i//record/}")
    sed -i "1 iX Y Z\n$xyz" "$i"
done
Loop through the files named record*.txt
for i in record*.txt; do
Capture the string between RESULT and *** in the matching Summary file (Summary1.txt for record1.txt, and so on)
xyz=$(grep -oP "(?<=RESULT ).*(?=\*\*\*)" "Summary${i//record/}")
Insert X Y Z as the first line, followed by the captured values, at the top of the record file
sed -i "1 iX Y Z\n$xyz" "$i"
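A self-contained run of the extract-and-prepend idea (reading the values from Summary1.txt and inserting them at the top of record1.txt), on sample data taken from the question. GNU grep -P, GNU sed, and bash are assumed, and everything is sandboxed in /tmp:

```shell
# Recreate a record/Summary pair with contents from the question:
mkdir -p /tmp/xyzdemo && cd /tmp/xyzdemo
printf '11111 absda qwedc\n11112 uiyds dqeds\n' > record1.txt
printf 'some data\n***RESULT 111.114 30.344 90.3454*** OTHER\n' > Summary1.txt
for i in record*.txt; do
    # grab everything between "RESULT " and "***" in the matching Summary file:
    xyz=$(grep -oP "(?<=RESULT ).*(?=\*\*\*)" "Summary${i//record/}")
    # prepend the header line and the captured values (GNU sed):
    sed -i "1 iX Y Z\n$xyz" "$i"
done
head -n 2 record1.txt   # X Y Z / 111.114 30.344 90.3454
```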
| Extracting some texts from txt file and adding to an existing file bash |
1,581,700,593,000 |
We have scripts to reinstall RHEL 5.x to RHEL 7.x and install our application, but I need to update timezone according to old one. So how to restore timezone from RHEL 5.x to RHEL 7.x using /etc/localtime file (We are taking backup) and in command line?
One of the specific environment is Red Hat Enterprise Linux Server release 5.8 (Tikanga) and Red Hat Enterprise Linux Server release 7.5 (Maipo)
|
I figured it out. In RHEL 5.x there is a file, /etc/sysconfig/clock, that contains the timezone. Using sed in the script to get the timezone from that file worked out. Sample output of this file:
ZONE="America/Chicago"
UTC=false
ARC=false
Thanks!
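A sketch of what that sed step can look like. The backup location /tmp/clock.bak and the file contents below are assumptions for the demo; on the real RHEL 7 box you would follow up with timedatectl set-timezone:

```shell
# Recreate the backed-up RHEL 5 file for the demo (path is hypothetical):
clock=/tmp/clock.bak
printf 'ZONE="America/Chicago"\nUTC=false\nARC=false\n' > "$clock"
# Extract the value between the quotes on the ZONE= line:
zone=$(sed -n 's/^ZONE="\(.*\)"$/\1/p' "$clock")
echo "$zone"                      # America/Chicago
# On RHEL 7 you would then apply it (needs root):
# timedatectl set-timezone "$zone"
```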
| How to restore timezone from RHEL 5.x to RHEL 7.x? |
1,581,700,593,000 |
I have to repeat this command a lot of times a day, is there any way to execute it multiple times (aprox. 25 times)
cd ../folderName && ../../tools/clone.py
What I need to do is execute the clone script inside every folder I specify.
|
Add this to your shell profile or just try this one:
myclone () { cd "$1" && /full/path/to/clone.py; }
And you can do
myclone ../folderName
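For the "repeat about 25 times" part, a loop sketch follows. The directory names are placeholders, and echo stands in for the real clone.py call; the subshell keeps each cd local, so the loop keeps running from the starting directory:

```shell
# Placeholder folders for the demo:
mkdir -p /tmp/clones/folderA /tmp/clones/folderB
for d in /tmp/clones/folderA /tmp/clones/folderB; do
    # subshell: the cd does not leak into the calling shell
    (cd "$d" && echo "cloning in $d")   # replace echo with ../../tools/clone.py
done
```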
| Repeat commands with changes |
1,561,493,692,000 |
Below I run what I expected to be an invalid command: var=3 date, which in fact isn't.
$ var=3 date
Sun May 26 17:10:22 UTC 2019
$ echo $?
0
But the variable wasn't assigned the value 3:
$ echo $var
$
I expected it to say that var=3 wasn't a valid command. What am I missing?
|
You are setting var to 3 as an environment variable in the environment of the date command, and not in the environment of the bash shell itself (the calling/parent process).
For reference, see the Bash manual at https://www.gnu.org/software/bash/manual/html_node/Environment.html
and specifically:
The environment for any simple command or function may be augmented temporarily by prefixing it with parameter assignments, as described in Shell Parameters. These assignment statements affect only the environment seen by that command.
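This is easy to verify: the assignment is visible inside the child process, but not in the calling shell afterwards:

```shell
# The child process sees var; single quotes delay expansion until the child runs:
var=3 sh -c 'echo "child sees: $var"'   # child sees: 3
# The calling shell never had var set:
echo "parent sees: ${var:-unset}"       # parent sees: unset
```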
| What happens when we run var=3 command [duplicate] |
1,561,493,692,000 |
I have a bunch of folders on an external HDD and I want to copy a part of them. The folders have the following structure:
A001A
A003A
A004A
etc...
...and all the folders contain similar directories e.g:
HHZ
HH1
HH2
LHZ
LH1
LH2
I need to copy all the directories (A001A, A002A, ...) with their subdirectories (HHZ, HH1, HH2), but only the subdirectories whose names start with H (including every file in them).
How can I do that?
|
This should do the trick (assuming all the directories in the current folder are A*** directories):
cp -r --parents */H* destination/
You should obviously replace destination/ with your actual target.
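A self-contained check of what --parents does (a GNU cp option), sandboxed in /tmp with one made-up station directory:

```shell
# One A*** directory with an H* and a non-H* subdirectory:
mkdir -p /tmp/cpdemo/src/A001A/HHZ /tmp/cpdemo/src/A001A/LHZ /tmp/cpdemo/dest
touch /tmp/cpdemo/src/A001A/HHZ/sample.dat
cd /tmp/cpdemo/src
# --parents recreates the A001A parent under the destination:
cp -r --parents */H* /tmp/cpdemo/dest/
ls /tmp/cpdemo/dest/A001A          # HHZ only; LHZ was not copied
```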
| Copy specified subdirectories |
1,561,493,692,000 |
I need to write a script in AWK which will count all the files and their weight in the "/home" directory for each month and display the list in the terminal.
The output should look like this:
|
I wrote a script in awk which uses the system commands ls to list files and stat to get info about each file. The script then prints the number of files and their total size in bytes for each month.
#!/usr/bin/awk -f
BEGIN {
dir = "/home/matej" #change default directory
if(ARGC == 2){ #check for command line arguments
dir = ARGV[1]
}
printf("Listing directory: %s\n", dir)
cmd = "ls " dir
m_names[1] = "January"
m_names[2] = "February"
m_names[3] = "March"
m_names[4] = "April"
m_names[5] = "May"
m_names[6] = "June"
m_names[7] = "July"
m_names[8] = "August"
m_names[9] = "September"
m_names[10] = "October"
m_names[11] = "November"
m_names[12] = "December"
while((cmd | getline filename) > 0 ){
"stat --printf=\"%Y %s\" \"" dir "/" filename "\"" | getline info #use %W instead of %Y if your system supports date of birth
#FS = " "
split(info, arr, " ")
time = arr[1]
size = arr[2]
month = strftime("%m", time) + 0 #+ 0 is for converting string to int and removing the leading 0
months[month] = months[month] + 1
sizes[month] = sizes[month] + size
}
close(cmd)
#pretty print
printf("%-11s %-20.18s %s\n", "Month", "Number of files", "Total size of files (in bytes)")
for(a = 1; a <= 12; a ++){
printf("%-9s: %-20s %s\n", m_names[a], months[a], sizes[a])
}
}
Modify two things in this script:
dir = "/home/matej" (change the default directory)
"stat --printf=\"%Y %s\" \"" dir "/" filename "\"" | getline info (use %W instead of %Y if your system supports birth time)
To run script:
chmod +x script.awk
./script.awk or, with an argument, ./script.awk /home/user
Output in my system looks like:
Listing directory: /home/matej
month number of files total size of files
January : 7 163860
February : 1 4096
March : 1 4096
April : 1 764
May : 1 4096
June : 3 12288
July : 2 13142852623
August : 2 8192
September: 1 16
October : 8 10975459334
November : 4 44067
December : 10 49152
| AWK- script which lists number and weight of all files created in certain month [closed] |
1,561,493,692,000 |
I noticed while removing the httpd package that the dependencies installed along with it were not removed:
# yum remove httpd
How can I remove a package together with the dependencies that were installed along with it, without affecting other services (e.g., when some dependencies are being used by other services)?
|
yum autoremove
will remove any unneeded dependencies from your system.
You need to add clean_requirements_on_remove=1 to yum.conf
| Remove Package and dependencies installed along with it |
1,561,493,692,000 |
I have a directory that contains quite a lot of files with names made up of artirst's name and album e.g.:
Now the task is to go through each of the files, create a directory structure named after the artist and the album taken from the file's name, and move the file into that directory.
The final structure should look like this:
How would I go about doing this with only basic shell commands?
|
This will mostly do what you want. You are asking for a lot, though, so let me know if there are any parts you are confused about and I will try to explain them:
#!/usr/bin/env bash
song_dir="$HOME/tmp/songs"
out_dir="$HOME/tmp/org_songs"
[[ ! -d "$out_dir" ]] && mkdir -p "$out_dir"
get_artist () {
local a=($(tr '_' ' ' <<<"$1"))
for i in "${a[@]}"; do
if [[ $i =~ artiste.* ]]; then
printf '%s\n' "${i#*=}"
break
fi
done
}
get_album () {
local a=($(tr '_' ' ' <<<"$1"))
for i in "${a[@]}"; do
if [[ $i =~ album.* ]]; then
printf '%s\n' "${i#*=}"
break
fi
done
}
get_song () {
local a=($(tr '_' ' ' <<<"$1"))
for i in "${a[@]}"; do
if [[ $i =~ song.* ]]; then
printf '%s\n' "${i#*=}"
break
fi
done
}
for song in "${song_dir}/"*.mp3; do
bname=$(basename "$song")
artist=$(get_artist "$bname")
album=$(get_album "$bname")
sname=$(get_song "$bname")
[[ ! -d "${out_dir}/${artist}/${album}" ]] && mkdir -p "${out_dir}/${artist}/${album}"
cp "$song" "${out_dir}/${artist}/${album}/${sname}"
done
In use:
Before:
$ tree
.
├── script.sh
└── songs
├── artiste=linkin-park_album=meteora_id=02_song=Don't-stay.mp3
├── artiste=linkin-park_album=meteora_id=02_song=Session.mp3
├── artiste=linkin-park_album=meteora_id=02_song=Somewhere-I-Belong.mp3
├── artiste=linkin-park_album=minutes-of-midnight_id=04_song=Bleed-It-Out.mp3
├── artiste=linkin-park_album=minutes-of-midnight_id=04_song=Given-Up.mp3
├── artiste=linkin-park_album=minutes-of-midnight_id=04_song=Leave-out-All-The-Rest.mp3
├── id=01_artiste=eminem_album=recovery_song=cold-wind-blows.mp3
├── id=01_artiste=eminem_album=recovery_song=on-fire.mp3
└── id=01_artiste=eminem_album=recovery_song=talking-2-myself-(feat-kobe).mp3
1 directory, 10 files
After:
$ tree
.
├── org_songs
│ ├── eminem
│ │ └── recovery
│ │ ├── cold-wind-blows.mp3
│ │ ├── on-fire.mp3
│ │ └── talking-2-myself-(feat-kobe).mp3
│ └── linkin-park
│ ├── meteora
│ │ ├── Don't-stay.mp3
│ │ ├── Session.mp3
│ │ └── Somewhere-I-Belong.mp3
│ └── minutes-of-midnight
│ ├── Bleed-It-Out.mp3
│ ├── Given-Up.mp3
│ └── Leave-out-All-The-Rest.mp3
├── script.sh
└── songs
├── artiste=linkin-park_album=meteora_id=02_song=Don't-stay.mp3
├── artiste=linkin-park_album=meteora_id=02_song=Session.mp3
├── artiste=linkin-park_album=meteora_id=02_song=Somewhere-I-Belong.mp3
├── artiste=linkin-park_album=minutes-of-midnight_id=04_song=Bleed-It-Out.mp3
├── artiste=linkin-park_album=minutes-of-midnight_id=04_song=Given-Up.mp3
├── artiste=linkin-park_album=minutes-of-midnight_id=04_song=Leave-out-All-The-Rest.mp3
├── id=01_artiste=eminem_album=recovery_song=cold-wind-blows.mp3
├── id=01_artiste=eminem_album=recovery_song=on-fire.mp3
└── id=01_artiste=eminem_album=recovery_song=talking-2-myself-(feat-kobe).mp3
7 directories, 19 files
Also note I'm using cp to copy the files instead of mv to move them. I recommend just doing a copy first and then removing the old files afterwards as long as everything worked out right. Otherwise you risk making a mess of or losing some data.
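As a design note, the three get_* functions above differ only in the key they look for, so they could be collapsed into one parameterized helper, sketched here (bash; the filename below is one of the examples from the question):

```shell
# get_field KEY FILENAME: print the value of KEY= in an underscore-separated name
get_field () {
    local key=$1 part
    # split the name on underscores, then look for the key=value part
    for part in $(tr '_' ' ' <<<"$2"); do
        if [[ $part == "$key"=* ]]; then
            printf '%s\n' "${part#*=}"
            return
        fi
    done
}
get_field artiste "id=01_artiste=eminem_album=recovery_song=on-fire.mp3"   # eminem
```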
| Move all files into subdirectories named after part of file name [duplicate] |
1,561,493,692,000 |
I use Debian 5 and I had to update my GCC to 4.9.4, so I built and installed it myself. Now I need to update Python, but I can't do that because I receive this error:
configure: error: no acceptable C compiler found in $PATH
Can I link the compiler that I installed to solve this problem?
Also, I tried the commands which gcc, which cc and which g++, and only the last one returned a path (to g++); the other commands returned nothing.
I downloaded the GCC archive, built it and installed it:
../gcc-4.9.4/configure --prefix=/opt/gcc49/ --disable-libsanitizer --disable-libcilkrts
make
make install
I just need python 2.6.
|
/opt/gcc49/bin is not in your PATH. You need to add it (for example, export PATH="/opt/gcc49/bin:$PATH"), or use stow.
| configure: error: no acceptable C compiler found in $PATH error on Debian 5 while installing Python |
1,561,493,692,000 |
I'm trying to figure out what this (obretschlt2-w7) is, from my command-line prompt. I'm using a conda environment, with the name µ_env, with my Username mu. However I can't figure out where the 2nd field is coming from. I'm logged into a secure server VPN, through my work, but I don't ever recall seeing this. What is this name and where is it coming from?
(µ_env) obretschlt2-w7:~ mu$ pwd
/Users/mu
Output from echo $PS1, as asked for (in comment).
(µ_env) \h:\W \u\$
|
Is that not your hostname on your system?
Try:
cat /etc/hostname
Edit:
$ echo $PS1
(µ_env) \h:\W \u\$
\h means hostname
Source: wiki.ubuntuusers.de/Bash/Prompt
Running the hostname command may also show your hostname.
| What is this random name in my command-line prompt? [duplicate] |
1,561,493,692,000 |
Is there a way/tool for the linux command-line (bash) to get the overall audio playtime of a certain directory?
Something like:
playtime --all --recursive /Music/DrumAndBass/
output: 1:35:06
|
I found a simple way using the tool mp3info together with awk.
Unfortunately, this only works for MP3 files.
mp3info -p "%m:%s\n" directory/*.mp3 |
awk -F: '{a+=$1*60+$2}END{printf"%d:%02d:%02d",a/3600,a%3600/60,a%3600%60}'
or as tiny bash-script
#!/bin/bash
dir="$1"
mp3info -p "%m:%s\n" "$dir/*.mp3" |
awk -F: '{a+=$1*60+$2}END{printf"%d:%02d:%02d",a/3600,a%3600/60,a%3600%60}'
thanks to @steve for the awk-line
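The awk part can be exercised on its own by feeding it minute:second pairs (two made-up track lengths here):

```shell
# 3:30 + 4:45 = 495 seconds, which the awk line reports as H:MM:SS:
printf '3:30\n4:45\n' |
awk -F: '{a+=$1*60+$2} END{printf "%d:%02d:%02d\n", a/3600, a%3600/60, a%3600%60}'
# 0:08:15
```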
Update:
sndfile-info or sox can get the playtime for different file formats like flac or wav:
sndfile-info audio.flac | awk '/^Duration/ { print $3 }'
or the sox solution:
sox -V3 audio.flac -n |& awk '/^Duration/ { print $3;exit }'
(the exit in the awk command limits the output to only print the first match)
| Getting the audio-playtime of a folder from the command line |