| date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,561,493,692,000 |
I need to find the last 15 users that logged in to the system.
I know there are commands like 'w', 'who' or 'users', but as far as I know those commands only cover users currently logged in.
But I need the user names of the 15 last active users - even if they already logged out...
The man page of users says:
users - print the user names of users currently logged in to the current host
Which command(s) do I need to find data about users that are not active right now?
|
The last command lists the last logged-in users. The data comes from /var/log/wtmp, and the output can be limited to n lines with the -n <number> option. Other options allow one to select records "since" or "until" particular login times.
If the wtmp file doesn't exist, no logging occurs. To create the file if it doesn't exist, touch the file and set the ownership to "root:utmp" with 664 permissions.
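The recreation step can be sketched safely on a stand-in path first; on the real system, point wtmp at /var/log/wtmp and run the touch, chown and chmod as root:

```shell
wtmp=/tmp/demo-wtmp        # stand-in for /var/log/wtmp
touch "$wtmp"              # create the file if it does not exist
chmod 664 "$wtmp"          # rw-rw-r--, as recommended above
# chown root:utmp "$wtmp"  # needs root; uncomment for the real file
stat -c '%a' "$wtmp"       # prints 664
```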
| find last 15 users that logged in - even if some of them already logged out |
1,561,493,692,000 |
I am trying to find all the files in my directory that do not contain the letters a, b or c; why does this command not work?
ls *[!abc]*
Example: MATCH: xyz, dkh, file, foo; NOT MATCH: bar, bxc, azi, csk
|
Your command does something very similar to what you want: it expands to the list of filenames that contain at least one character other than a, b or c. If you test, you see:
$ touch a b c d
$ ls *[!abc]*
d
But if you create another file and re-test:
$ touch argh
$ ls *[!abc]*
argh d
To exclude file names that contain those characters anywhere, use:
$ shopt -s extglob
$ ls !(*[abc]*)
d
Another way of doing it (again with extended globs) would be to explicitly list out the patterns:
$ ls !(*a*|*b*|*c*)
d
| Find all the filenames in this directory that do not contain 'a' 'b' or 'c' in their name [duplicate] |
1,561,493,692,000 |
I was updating my laptop when an electrical blackout happened. I restarted and tried to run sudo dnf update again, but this happened:
The downloaded packages were saved in cache until the next successful transaction.
You can remove cached packages by executing 'dnf clean packages'.
Error: Transaction check error:
file /usr/share/doc/glibc/INSTALL from install of glibc-2.27-30.fc28.i686 conflicts with file from package glibc-2.27-19.fc28.x86_64
file /usr/share/doc/glibc/NEWS from install of glibc-2.27-30.fc28.i686 conflicts with file from package glibc-2.27-19.fc28.x86_64
file /usr/share/licenses/glibc/LICENSES from install of glibc-2.27-30.fc28.i686 conflicts with file from package glibc-2.27-19.fc28.x86_64
file /usr/share/kpackage/Purpose/Twitter/metadata.json from install of kf5-purpose-twitter-5.48.0-1.fc28.x86_64 conflicts with file from package kf5-purpose-5.47.0-1.fc28.x86_64
file /usr/share/gcc-8/python/libstdcxx/v6/__pycache__/printers.cpython-36.opt-1.pyc from install of libstdc++-8.1.1-5.fc28.i686 conflicts with file from package libstdc++-8.1.1-1.fc28.x86_64
file /usr/share/gcc-8/python/libstdcxx/v6/__pycache__/printers.cpython-36.pyc from install of libstdc++-8.1.1-5.fc28.i686 conflicts with file from package libstdc++-8.1.1-1.fc28.x86_64
file /usr/share/gcc-8/python/libstdcxx/v6/printers.py from install of libstdc++-8.1.1-5.fc28.i686 conflicts with file from package libstdc++-8.1.1-1.fc28.x86_64
file /usr/share/gcc-8/python/libstdcxx/v6/printers.pyc from install of libstdc++-8.1.1-5.fc28.i686 conflicts with file from package libstdc++-8.1.1-1.fc28.x86_64
file /usr/share/gcc-8/python/libstdcxx/v6/printers.pyo from install of libstdc++-8.1.1-5.fc28.i686 conflicts with file from package libstdc++-8.1.1-1.fc28.x86_64
file /usr/share/licenses/libcom_err/NOTICE from install of libcom_err-1.44.2-0.fc28.i686 conflicts with file from package libcom_err-1.43.8-2.fc28.x86_64
file /usr/share/locale/de/LC_MESSAGES/elfutils.mo from install of elfutils-libelf-0.173-1.fc28.i686 conflicts with file from package elfutils-libelf-0.172-2.fc28.x86_64
file /usr/share/locale/en@boldquot/LC_MESSAGES/elfutils.mo from install of elfutils-libelf-0.173-1.fc28.i686 conflicts with file from package elfutils-libelf-0.172-2.fc28.x86_64
file /usr/share/locale/en@quot/LC_MESSAGES/elfutils.mo from install of elfutils-libelf-0.173-1.fc28.i686 conflicts with file from package elfutils-libelf-0.172-2.fc28.x86_64
file /usr/share/locale/es/LC_MESSAGES/elfutils.mo from install of elfutils-libelf-0.173-1.fc28.i686 conflicts with file from package elfutils-libelf-0.172-2.fc28.x86_64
file /usr/share/locale/ja/LC_MESSAGES/elfutils.mo from install of elfutils-libelf-0.173-1.fc28.i686 conflicts with file from package elfutils-libelf-0.172-2.fc28.x86_64
file /usr/share/locale/pl/LC_MESSAGES/elfutils.mo from install of elfutils-libelf-0.173-1.fc28.i686 conflicts with file from package elfutils-libelf-0.172-2.fc28.x86_64
file /usr/share/locale/uk/LC_MESSAGES/elfutils.mo from install of elfutils-libelf-0.173-1.fc28.i686 conflicts with file from package elfutils-libelf-0.172-2.fc28.x86_64
file /usr/share/man/man5/cert8.db.5.gz from install of nss-3.38.0-1.0.fc28.i686 conflicts with file from package nss-3.37.3-1.1.fc28.x86_64
file /usr/share/man/man5/cert9.db.5.gz from install of nss-3.38.0-1.0.fc28.i686 conflicts with file from package nss-3.37.3-1.1.fc28.x86_64
file /usr/share/man/man5/key3.db.5.gz from install of nss-3.38.0-1.0.fc28.i686 conflicts with file from package nss-3.37.3-1.1.fc28.x86_64
file /usr/share/man/man5/key4.db.5.gz from install of nss-3.38.0-1.0.fc28.i686 conflicts with file from package nss-3.37.3-1.1.fc28.x86_64
file /usr/share/man/man5/pkcs11.txt.5.gz from install of nss-3.38.0-1.0.fc28.i686 conflicts with file from package nss-3.37.3-1.1.fc28.x86_64
file /usr/share/man/man5/secmod.db.5.gz from install of nss-3.38.0-1.0.fc28.i686 conflicts with file from package nss-3.37.3-1.1.fc28.x86_64
file /usr/share/doc/gstreamer1/NEWS from install of gstreamer1-1.14.2-7.gitafb3d1b.fc28.i686 conflicts with file from package gstreamer1-1.14.1-7.gitcba2c7d.fc28.x86_64
file /usr/share/doc/gstreamer1/RELEASE from install of gstreamer1-1.14.2-7.gitafb3d1b.fc28.i686 conflicts with file from package gstreamer1-1.14.1-7.gitcba2c7d.fc28.x86_64
file /usr/lib64/qt5/plugins/discover/flatpak-backend.so from install of plasma-discover-flatpak-5.13.3-3.fc28.x86_64 conflicts with file from package plasma-discover-libs-5.12.5.1-3.fc28.x86_64
file /usr/share/locale/cs/LC_MESSAGES/pulseaudio.mo from install of pulseaudio-libs-12.2-1.fc28.i686 conflicts with file from package pulseaudio-libs-11.1-18.fc28.1.x86_64
file /usr/share/locale/fr/LC_MESSAGES/pulseaudio.mo from install of pulseaudio-libs-12.2-1.fc28.i686 conflicts with file from package pulseaudio-libs-11.1-18.fc28.1.x86_64
file /usr/share/locale/id/LC_MESSAGES/pulseaudio.mo from install of pulseaudio-libs-12.2-1.fc28.i686 conflicts with file from package pulseaudio-libs-11.1-18.fc28.1.x86_64
file /usr/share/locale/lt/LC_MESSAGES/pulseaudio.mo from install of pulseaudio-libs-12.2-1.fc28.i686 conflicts with file from package pulseaudio-libs-11.1-18.fc28.1.x86_64
file /usr/share/locale/pl/LC_MESSAGES/pulseaudio.mo from install of pulseaudio-libs-12.2-1.fc28.i686 conflicts with file from package pulseaudio-libs-11.1-18.fc28.1.x86_64
file /usr/share/locale/sv/LC_MESSAGES/pulseaudio.mo from install of pulseaudio-libs-12.2-1.fc28.i686 conflicts with file from package pulseaudio-libs-11.1-18.fc28.1.x86_64
file /usr/share/locale/uk/LC_MESSAGES/pulseaudio.mo from install of pulseaudio-libs-12.2-1.fc28.i686 conflicts with file from package pulseaudio-libs-11.1-18.fc28.1.x86_64
file /usr/share/locale/zh_TW/LC_MESSAGES/pulseaudio.mo from install of pulseaudio-libs-12.2-1.fc28.i686 conflicts with file from package pulseaudio-libs-11.1-18.fc28.1.x86_64
file /usr/share/doc/libpcap/CHANGES from install of libpcap-14:1.9.0-1.fc28.i686 conflicts with file from package libpcap-14:1.8.1-10.fc28.x86_64
file /usr/share/doc/libpcap/CREDITS from install of libpcap-14:1.9.0-1.fc28.i686 conflicts with file from package libpcap-14:1.8.1-10.fc28.x86_64
file /usr/share/man/man7/pcap-filter.7.gz from install of libpcap-14:1.9.0-1.fc28.i686 conflicts with file from package libpcap-14:1.8.1-10.fc28.x86_64
file /usr/share/man/man7/pcap-linktype.7.gz from install of libpcap-14:1.9.0-1.fc28.i686 conflicts with file from package libpcap-14:1.8.1-10.fc28.x86_64
file /usr/share/man/man7/pcap-tstamp.7.gz from install of libpcap-14:1.9.0-1.fc28.i686 conflicts with file from package libpcap-14:1.8.1-10.fc28.x86_64
Error Summary
-------------
I have tried to read solutions to this and used commands like: sudo dnf remove --duplicates, sudo dnf autoremove, sudo dnf update --skip-broken, sudo dnf update --refresh --allowerasing, but none of them works.
|
If you have a 64-bit system, you should not install 32-bit packages. Looking at your output, the duplicate .i686 packages are what is creating the conflicts; removing them (for example with dnf remove '*.i686' and then dnf distro-sync) should let the update complete.
| How do I repair a bad update in Fedora 28? [closed] |
1,561,493,692,000 |
I need to reset my router, and the only access I have is SSH to a Raspberry Pi on the router's local network. I'm trying the elinks and w3m browsers, but I can't see any option, maybe because the router's control panel is a JavaScript page...
Can you recommend any way to reach the restart option in the router's own web control panel?
Edit: the router is provided by the Movistar Internet operator, and Movistar offers access to the router through the Alejandra web management portal. My problem is resolved.
Thanks
|
I had a similar problem before and fixed it by creating an SSH tunnel using PuTTY.
| How to reset router through command line browser? [closed] |
1,561,493,692,000 |
I am trying to find a command that copies files only if the destination's (not the source's) copy of the file has not been modified in the last hour.
|
I know of no command that will precisely match your requirement. Something like this should work (remove the --dry-run when you're sure you're happy with the result; replace the --verbose with --quiet if you want it to run more silently):
src=/path/to/source
dst=/path/to/target
comm -z -23 \
<(find "$src" -type f -printf '%P\0' | sort -z) \
<(find "$dst" -type f -mmin -60 -printf '%P\0' | sort -z) |
rsync --dry-run --verbose --archive --from0 --files-from - "$src" "$dst"
It assumes relatively recent utilities that understand how to handle NUL-terminated lines. If necessary, and provided that you can guarantee that no filenames contain newlines, you could remove the three -z flags and rsync's --from0 and replace the \0 in the find commands with \n.
| Copy files from one directory to another, ignoring files where the destination's file has been modified in the last hour? |
1,561,493,692,000 |
How can I handle the attributes (read, write, hidden ...) of an executable program of Windows (*.exe) from the Linux terminal (command line)?
thanks in advance
Update:
For further clarification: suppose I have a hidden executable on a Windows (NTFS) partition. I boot a Linux LiveCD, mount the NTFS partition, and want to remove the read-only and hidden attributes from the executable (.exe). (Just an example.)
|
When the filesystem is mounted with NTFS-3G, the setfattr command should let you change extended attributes, which are stored in system.ntfs_attrib_be.
First, query the existing attributes with getfattr:
$ getfattr -n system.ntfs_attrib_be -e hex file.txt
# file: file.txt
system.ntfs_attrib_be=0x00000022
Then set the new value, removing the one(s) you don't want. According to https://msdn.microsoft.com/en-us/library/cc246322.aspx , ATTR_HIDDEN = 0x2, ATTR_ARCHIVE = 0x20. So to remove only the Hidden bit:
setfattr -n system.ntfs_attrib_be -v 0x00000020 file.txt
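Rather than working out the new hex value by hand, the Hidden bit can be cleared with shell arithmetic. This is a sketch; the final setfattr line (commented out) only applies to a file on an NTFS-3G mount:

```shell
old=0x00000022                             # value read with getfattr above
new=$(printf '0x%08x' $(( old & ~0x2 )))   # clear ATTR_HIDDEN (0x2)
echo "$new"                                # prints 0x00000020
# setfattr -n system.ntfs_attrib_be -v "$new" file.txt
```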
There is a wrapper script called ntfs_attr.sh that may simplify this.
| changing attributes of Windows programs from Linux |
1,528,387,444,000 |
As in: how can I get the *.foreground color value obtained with xrdb -query assigned to a variable in the script?
I'm trying to get both background and foreground colors into two variables that will be passed to another program as parameters.
|
Figured it out, kinda. When using pywal we can import the colours from its cache, like this:
#!/bin/sh
. "${HOME}/.cache/wal/colors.sh"
fg=$color7
bg=$color2
...
The catch is that this obviously won't work if wal's cache doesn't exist. But then one just needs to reset the wallpaper with pywal to get the cache rebuilt.
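Without pywal, the values can also be parsed straight out of xrdb -query with awk. A sketch, assuming the resources are named *foreground/*background; sample text stands in for the real query here, and some_program is a hypothetical consumer:

```shell
#!/bin/sh
# On a real X session, replace the sample with: query=$(xrdb -query)
query=$(printf '*foreground:\t#c5c8c6\n*background:\t#1d1f21\n')

# Split on ':' and strip whitespace from the value field
fg=$(printf '%s\n' "$query" | awk -F: '/\*foreground/ {gsub(/[ \t]/,"",$2); print $2}')
bg=$(printf '%s\n' "$query" | awk -F: '/\*background/ {gsub(/[ \t]/,"",$2); print $2}')

echo "$fg $bg"                        # prints: #c5c8c6 #1d1f21
# some_program --fg "$fg" --bg "$bg"  # hypothetical consumer
```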
| How to get background and foreground color values from xrdb in shell script? |
1,528,387,444,000 |
Is there a way I can test the connection between two servers from my personal computer?
From my PC, I want to test the connection Server1 -> Server2.
|
You could use ping(1) (uses ICMP ECHO REQUEST/REPLY). If you suspect some sort of firewall might interfere, traceroute(8) will show the path followed to the destination (tcptraceroute(8) is a handy wrapper here). You can specify what protocol and port to use, and a slew of exotic options.
| Test connection between two servers [closed] |
1,528,387,444,000 |
I need a find command that generates output exactly as ls -p would.
With find /path/to/ -mindepth 1 -maxdepth 1 -exec basename {} \; the directories don't have a trailing slash; I need the output folder names to have a trailing slash.
sample output:
folder 1/
my-file-1.sh
find command to list directory contents without full path and folders with a trailing slash
|
$ find /path/to -mindepth 1 -maxdepth 1 -exec sh -c '
[ -d "$1" ] && printf "%s/\n" "${1##*/}" || printf "%s\n" "${1##*/}" ' _ {} \;
aDirectory/
afile
Explanations:
[ -d "$1" ] checks whether the argument is a directory; if yes, the following printf runs:
printf "%s/\n" "${1##*/}"
else, the printf below runs:
printf "%s\n" "${1##*/}"
${1##*/}: this removes the longest match of everything up to and including the last slash /, starting from the beginning of the file/directory path, leaving only the last path component.
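A quick stand-alone illustration of that expansion:

```shell
p=/backup/archive/afile
echo "${p##*/}"        # prints afile
p=/backup/archive/aDirectory
echo "${p##*/}"        # prints aDirectory
```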
| find command equivalent to ls -p |
1,528,387,444,000 |
I've been using cmake and creating a build folder for my cmake code and I want to easily cd to the build directory. I've been naming my build directory in this format:
/parent/codeandsuch
/parent/codeandsuch_build
I've tried the following alias in my bashrc but it's not working. I've copied the name of the current directory into a string, appended _build to it, and tried to cd, but it's not working. Any ideas? Thanks
alias cdbuild='DIR=${PWD##*/} || DIR = DIR + "_build"|| echo DIR || cd ../DIR'
|
You can't concatenate strings with + in bash, and an assignment must not have spaces around the =. Also, you need to prepend a $ to the variable name to use its value. So, instead of:
DIR = DIR + "_build"
use:
DIR="${DIR}_build"
Note also that || only runs the next command if the previous one failed; to chain the commands use && (or ;). The whole thing becomes:
alias cdbuild='DIR=${PWD##*/} && DIR="${DIR}_build" && echo "$DIR" && cd "../$DIR"'
Alternatively:
alias cdbuild='cd "$(pwd)_build"'
| Alias CD to a directory that's name is included in current directory |
1,528,387,444,000 |
A little background:
I am running Windows 10, have installed git bash, and created the .bashrc file.
Right now in my .bashrc, I have the following line:
PS1='\w\> '
So suppose I am on my desktop, and there is a folder called test. In git bash, it would show this:
~/Desktop/Test\> (enter command here)
In my cmd, it would show:
C:\Users\John\Desktop\Test> (enter command here)
I like the cmd path that it shows, and was wondering if it is possible to show this path instead of the one git bash shows, by modifying the PS1.
|
That doesn't seem to be possible with \w or \W, but you can do this:
PS1='$PWD> '
| Change PS1 in .bashrc so that the following directory path is shown: |
1,528,387,444,000 |
I am working on an assignment that requires me to accept user input, search for current users on the server and, if the entered user is online, output that user's name. If the user is not logged in, I need to respond accordingly. I have been working on this code for days and can't seem to figure it out. I can search for the user, but when comparing my user-input string with the variable holding the logged-in user's name, I keep getting an error that says "too many arguments". Please see my code below:
#!/bin/bash
read -p "Please enter the user name you would like to search: " userName
name=$(who | grep "${userName}" | awk '{print $1}');
if [ [ $name == *"$userName"* ] ];
then
echo $name
else
printf "That user is not logged in.\n";
fi
|
You're close, but the main problem is that the bash [[ ]] construct can't have spaces between the brackets. Bash thinks you're trying to run multiple [ commands ([ is the POSIX test command). If you fix that, it works, but if the user has multiple ttys open it will print their name once for every one. If you want to use grep, you can do this:
#!/bin/bash
read -p "Please enter the user name you would like to search: " userName
if who | awk '{print $1}' | grep -wq "$userName"
then
echo "$userName is logged in."
else
echo "That user is not logged in.";
fi
exit
| Bash User Input Search For Who is On Server If Statement Error Too Many Arguments |
1,528,387,444,000 |
My global preference, alias cp='cp -iv', is ignored by sudo while using zsh.
I'm setting up a new system, and I'm trying out zsh for my user account. The root user still has bash. In /etc I have:
/etc/bash.bashrc
/etc/zsh/zshenv
Both of these have the above alias, alias cp='cp -iv'.
In the user's directories, neither of these contains the commands in the /etc global configs.
~/.zshenv
/root/.bashrc
If I switch to the root user su - and try to clobber a file with copy, I get the correct prompt, cp: overwrite 'fruits/apple.txt?'. The same for the home user. However, if I sudo the copy command for the home user while in zsh, the file is overwritten! Using bash, I've not experienced this problem before, so I don't have a clue where else to look.
|
Workaround to make sudo work with your aliases, e.g. sudo cp ...:
alias sudo='sudo '
Because the alias value ends in a space, bash also checks the word after sudo for alias expansion, so cp still expands to cp -iv.
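A quick self-contained demonstration of the trailing-space behaviour; wrap and greet are made-up aliases for the demo:

```shell
bash -c '
shopt -s expand_aliases    # aliases are off by default in non-interactive shells
alias wrap="echo "         # note the trailing space
alias greet="hello world"
wrap greet                 # greet gets alias-expanded too
'
# prints: hello world
```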
| Why would `sudo cp src dst` ignore no clobber rule in /etc? [duplicate] |
1,528,387,444,000 |
I've downloaded some files. I want to check whether these file names are present in file.txt, one by one. I tried the following command without success. I want to do it using xargs.
ls | xargs grep file.txt
How do I do that?
|
Though I'm not sure what exactly you mean by "one by one", here is a short ls + grep approach:
ls -1 | grep -wf - file.txt
-1 - ls option, lists one file per line
-f file - grep option, obtains patterns from file(or STDIN in case of passing - argument), one per line
The same could be written as:
grep -wf <(ls -1) file.txt
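If you specifically want xargs, each file name can be used as a grep pattern, one at a time. A sketch with a self-contained demo setup (-F treats the name as a fixed string, -w matches it as a whole word):

```shell
# Demo setup: two downloaded files and the list to check them against
dir=$(mktemp -d) && cd "$dir"
mkdir downloads && touch downloads/foo downloads/bar
printf 'foo\nbar\nbaz\n' > file.txt

# Ask grep about each downloaded file name in turn
(cd downloads && ls) | xargs -I{} grep -wF {} file.txt
# prints the names found in file.txt: bar, then foo
```

Since xargs returns non-zero if any grep invocation found nothing, the overall exit status tells you whether every name was present.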
| Searching all downloaded files in a file list |
1,528,387,444,000 |
Is there an option in SoX effects processing to mix the wet and dry signals instead of only outputting the wet?
For example, say my effects chain is overdrive into pitch shift:
sox in.wav out.wav overdrive 0.5 gain -0.5 pitch 700
Except I don't want the final file to be just the shifted signal. I want a mix of the distorted, shifted signal and the distorted, unshifted signal.
Does SoX support this somehow?
|
If I understand you, there is an option especially for mixing two signals: -m.
sox -m in.wav out.wav mix.wav
play mix.wav
You can probably do the effect and mix in one command, but I'm just a beginner with sox.
You can pipe the output of sox into another sox using the |command input
filename syntax, and adding -p to make sox output to a pipe in "sox format".
It has the advantage of not using temporary intermediate files.
sox -m '|sox -v 0.5 in.wav -p overdrive 0.5 gain -0.5' \
'|sox -v 0.5 in.wav -p overdrive 0.5 gain -0.5 pitch 700' \
mix.wav
| SoX - Mix original signal with effected signal |
1,528,387,444,000 |
How can I have surfraw use google as a default search engine?
Or how could I shorten surfraw google foo to surfraw g foo?
|
Set SURFRAW_customsearch_provider=google.
Then use: sr S foo
If setting via a shell variable, make sure to export it.
| Use google by default in surfraw |
1,528,387,444,000 |
I have an EM7455 connected to a single-board computer (NanoPC-T2) through USB.
I added the AndroidRIL_5.1.11 binary files to my Android and kernel sources, built them, and ran the result on my Android device. The error messages below appear in the Android log:
# dmesg | grep sierra
<6>[ 1.184000] usbcore: registered new interface driver sierra_net
<6>[ 40.552000] usbcore: registered new interface driver sierra
<6>[ 40.564000] sierra: v.1.7.42_android_generic_2:USB Driver for Sierra Wireless USB modems
<3>[ 47.308000] sierra_net: module is already loaded
<6>[ 87.844000] sierra 1-1.1:1.0: Sierra USB modem converter detected
<4>[ 87.872000] sierra_remove_sysfs_attrs
usb_get_serial_port_data(port_tmp) = 0xd8c90000
<4>[ 87.880000] kfree at sierra_remove_sysfs_attrs
<6>[ 87.884000] sierra ttyUSB0: Sierra USB modem converter now disconnected from ttyUSB0
<6>[ 87.892000] sierra 1-1.1:1.0: device disconnected
<6>[ 90.920000] sierra 1-1.1:1.0: Sierra USB modem converter detected
<4>[ 90.948000] sierra_remove_sysfs_attrs
usb_get_serial_port_data(port_tmp) = 0xda6b8d80
<4>[ 90.956000] kfree at sierra_remove_sysfs_attrs
<6>[ 90.960000] sierra ttyUSB0: Sierra USB modem converter now disconnected from ttyUSB0
<6>[ 90.968000] sierra 1-1.1:1.0: device disconnected
Does anyone know how to solve this error? Please help.
Thanks!
|
I'm not positive, but this may be a bug that was fixed by a patch added to the Linux kernel.
Also make sure you have the Sierra Wireless Serial USB, Sierra Wireless WWAN, and Qualcomm QMI drivers built into your kernel.
| Sierra Wireless GSM modem EM7455 + Android Integration Error |
1,528,387,444,000 |
I installed powerline by this way:
pip install --user powerline-status
pip install --user git+git://github.com/powerline/powerline
After that I uninstalled it, but I get this error every time I open a terminal:
/Library/Python/2.7/lib/python/site-packages/powerline/bindings/bash/powerline.sh: No such file or directory
How can I fix that?
|
As nikolas was saying, it is very likely your ~/.profile or ~/.bashrc file that is calling powerline.sh.
Try to do the following:
Locate where powerline.sh is called:
#> grep "powerline.sh" .bash* .profile
.bashrc:POWERLINE_BASH=/usr/share/powerline/bindings/bash/powerline.sh
.bashrc: powerline-daemon -q;
Here, it is located in the .bashrc.
Open the file where it is located with a text editor and remove the line.
That's all, folks.
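If you prefer to automate the removal, a sed one-liner can delete the offending lines. Sketched here on a throwaway copy rather than your real ~/.bashrc; always keep a backup before editing the real file:

```shell
rc=$(mktemp)                       # stand-in for ~/.bashrc
printf '%s\n' 'powerline-daemon -q' \
  '. /usr/share/powerline/bindings/bash/powerline.sh' \
  'alias ll="ls -l"' > "$rc"
cp "$rc" "$rc.bak"                 # back up first
sed -i '/powerline/d' "$rc"        # drop every line mentioning powerline
cat "$rc"                          # only the alias line remains
```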
| powerline error in command line |
1,494,543,840,000 |
I have a long report in the form of a tab-delimited text file (meaning that if I open it in an Excel spreadsheet, every word and number fits in its own cell). In this report there are key words that I am looking for, and I want to print the four numbers next to each key word. For example, let's say this line is in the report and the key word is "number":
The following number 00 02 25 226 is my card ID.
I can use the command "grep" to search for the exact word "number", but how can I grep this word and the next four cells (i.e. digits)?
|
grep -Eo 'number( [0-9]+)+' input
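A quick runnable check against the sample line; the {4} interval pins the match to exactly the next four numeric fields:

```shell
printf 'The following number 00 02 25 226 is my card ID.\n' |
  grep -Eo 'number( [0-9]+){4}'
# prints: number 00 02 25 226
```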
| Search for word and the next few words |
1,494,543,840,000 |
I have a command like this:
if [ $battery_level -le 6 -a $STATUS = $NOT_CHARGING ] #Battery Low 1
then
/usr/bin/notify-send -i "$ICONL" "Battery critically low!" "Battery level is ${battery_level}%!"
paplay /usr/share/sounds/freedesktop/stereo/dialog-information.oga
It gives a message and a sound at a certain battery level, but it repeats too often.
I can make it sleep and run again by repeating the same command separated by a line like sleep 120s, but I do not know exactly how many times it should be run.
I need it to run at an interval of two minutes, indefinitely (as at some point system sleep or hibernation is triggered by a completely different script).
Looking here: How to Run or Repeat a Linux Command Every X Seconds Forever, I see that an option is sleep and the other is watch.
watch seems to work, but only in the form
watch -n 120 <my_line>, and that changes the output a bit (the battery level is not shown anymore).
|
Just use a while loop:
while [ "$battery_level" -le 6 ] && [ "$STATUS" = "$NOT_CHARGING" ]
do
    # the notify-send and paplay commands from the question go here;
    # re-read battery_level and STATUS each pass, or the condition never changes
    sleep 120
done
| Repeat command after a period of time |
1,494,543,840,000 |
I want to put a text file in an encrypted zip archive. I would currently use:
echo "<my text>" > file.txt
zip --encrypt myarchive.zip file.txt
Is there a way to do the same thing in one single command without having to save a file?
|
zip --encrypt myarchive.zip -j -FI <(echo message)
| How to add a text file to a zip in one single command? [duplicate] |
1,494,543,840,000 |
In the command paste - - - -, the number of - arguments equals the number of output columns. In my case I would need 55,000 columns, and I don't want to type 55,000 dashes. What can I use instead?
Example:
title1:A1
title2:A2
title3:A3
title4:A4
title5:A5
title1:B1
title2:B2
title3:B3
title4:B4
title5:B5
title1:C1
title2:C2
title3:C3
title4:C4
title5:C5
title1:D1
title2:D2
title3:D3
title4:D4
title5:D5
I want to transform this file (A) into following:
title1 A1 title1 B1 title1 C1 title1 D1 title2 A2
title2 B2 title2 C2 title2 D2 title3 A3 title3 B3
title3 C3 title3 D3 title4 A4 title4 B4 title4 C4
title4 D4 title5 A5 title5 B5 title5 C5 title5 D5
For this, I used :
$ sed 's/:/ /; /^$/d' sample.txt \
| sort | paste - - - - -
But my real file will need a large column count: 55,000, or 400, columns.
What else can I use in place of all the dashes?
|
String multiplication trick based on this answer: https://stackoverflow.com/a/5349772/4082052
Use command substitution to insert a programmable number of dashes:
paste $(for i in {1..400}; do echo -n '- '; done)
or
paste $(printf -- "- %.s" {1..400})
To know why printf -- was used: Dashes in printf
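A self-contained run with a small column count; seq is used instead of the bash-only {1..n} so it also works in plain sh, and sample.txt here holds just two records of two fields:

```shell
printf 'title1:A1\ntitle2:A2\ntitle1:B1\ntitle2:B2\n' > sample.txt
sed 's/:/ /; /^$/d' sample.txt | sort | paste $(printf -- '- %.s' $(seq 4))
# prints one tab-separated line: title1 A1  title1 B1  title2 A2  title2 B2
```

The %.s conversion prints nothing (precision 0) but still consumes one argument, so the format "- %.s" emits one dash per number produced by seq.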
| Command "paste - - - - ", how to specify a big column number |
1,494,543,840,000 |
What is the cleanest way to require the user to enter a password in order to run a particular program, without the use of third-party applications? For example, if I type firefox to launch it from the terminal, it will prompt for a password and only run if the correct password is entered. Something akin to that effect, perhaps using user permissions.
|
Create exampleuser and set a password for it.
Then change firefox's permissions to 700 and its owner to exampleuser; after this, only root or exampleuser can run firefox, via the sudo or su command.
For example:
sudo useradd exampleuser
sudo passwd exampleuser
sudo chown exampleuser:exampleuser ../firefox
sudo chmod 700 ../firefox
test:
$ ../firefox
bash: ../firefox: Permission denied
$ su - exampleuser -c ../firefox
Password: #<-- type exampleuser password
or run with root user:
$ sudo ../firefox
[sudo] password for username: #<- type your own (sudo) password
| Cleanest way to require the user to enter password to run a program? |
1,494,543,840,000 |
I am trying to extract the compilation date of a binary with a Linux command (or from C++, which would be fine too). I am using:
stat -c %z ./myProgram.bin
However, if I copy myProgram.bin to another place, via ssh for example, the stat command basically gives me the date of the copy.
How can I get the real compilation date?
Thanks.
|
Thomas Dickey's answer addresses the issue in general, for any (ELF) binary. Given the way your question's phrased, you might find the __DATE__ and __TIME__ predefined macros useful; they allow the compilation date and time to be referred to within a program (so a program knows its own compilation date and time).
Here's a quick example:
#include <stdio.h>
int main(int argc, char **argv) {
printf("This program was compiled on %s at %s.\n", __DATE__, __TIME__);
return 0;
}
| Get compilate date [duplicate] |
1,494,543,840,000 |
I have a service in '/etc/init.d'. In that service, I run a command on a remote machine using ssh as a user. Currently I do this in the following way:
sudo -u user bash -c "ssh [email protected] 'source ~/.envrc ; (cd /catalog; ./bin/catalog start &)'"
This is the start command of that service and the service name is catalog.
When I do sudo service catalog start, the command runs successfully, i.e. it properly SSHes into the target machine [email protected] as the user user, but it does not start the service.
Can anyone tell me how to tweak this so that it runs successfully?
|
So I fixed it by using the nohup command:
sudo -iu user ssh [email protected] "nohup bash -c 'source ~/.envrc ; (cd /catalog; ./bin/catalog start &)'"
| How to run a command on remote host from a service? |
1,494,543,840,000 |
I have the following folder structure:
/backup
/backup/copy.sh
/backup/archive/
/backup/20160405_logs/
/backup/20160405_logs/sql.log
/backup/20160405_logs/bak.log
I want to move the folder 20160405_logs into /backup/archive/. If I run mv /backup/20160405_logs /backup/archive from the CLI (manually type and run) it works perfectly. However, if I run that command from copy.sh I get the following error for each file within 20160405_logs:
copy.sh: line x: file_path: No such file or directory, where x is an incorrect line number for the mv call in copy.sh.
All the files and their parent folder are moved though. So it's not like the move is failing...
What am I missing!?
Thanks in advance :)
|
Jeff Schaller's second comment pointed me in the right direction.
My backup script looks like this:
tar source_folder dest_file >> /backup/20160405_logs/bak.log
mv /backup/20160405_logs /backup/archive
echo "Backup complete" >> /backup/20160405_logs /backup/archive
The missing files that are being reported are log files that I am trying to write to after I run the mv command.
As mentioned in my comment above, if there were a badge for nitwits, I'd own one!
Sorry for wasting everyone's time.
| mv misbehaves in shell script [closed] |
1,494,543,840,000 |
I have a command, but I want to get the results into a .txt file I can open. How do I alter my command so that the results are put into a text file? I plan to transfer this .txt file to my local desktop.
my command | grep stackoverflow
I have tried echo my command | grep stackoverflow > ex.txt, but nothing shows up in the .txt file.
Thanks.
|
Well, basically use output-redirection
my command|grep stackoverflow > file #writes output to <file>
my command|grep stackoverflow >> file #appends <file> with output
my command|grep stackoverflow|tee file #writes output to <file> and still prints to stdout
my command|grep stackoverflow|tee -a file #appends <file> with output and still prints to stdout
The pipe takes everything from stdout and gives it as input to the command that follows. So:
echo "this is a text" # prints "this is a text"
ls # prints the contents of the current directory
grep will now try to find a matching regular expression in the input it gets.
echo "my command" | grep stackoverflow #will find no matching line.
echo "my command" | grep command #will find a matching line.
I guess "my command" stands for a command, not for the message "my command"
| Unix, create file based on command [duplicate] |
1,494,543,840,000 |
I tried this:
ifconfig br0 192.168.0.1 broadcast 192.168.0.255 netmask 255.255.255.0 network 192.168.0.0 gateway 192.168.0.1 up
But it did not work I am trying to use all of these parameters:
address 192.168.0.1
broadcast 192.168.0.255
netmask 255.255.255.0
network 192.168.0.0
gateway 192.168.0.1
How Would I achieve converting all of my interface parameters to a command?
|
I found my own result:
ifconfig br0 192.168.0.1 broadcast 192.168.0.255 netmask 255.255.255.0 up
route add default gw 192.168.0.1
| how would I convert this interface block into a command |
1,494,543,840,000 |
I am writing a simple unix script to automate reading of a log file.
The following does not give any output to the terminal. It just asks for the password for buser and then it just hangs. I understand that this is because the commands in -c of su are executed in the background. But the logfile had some logs and I wanted to output this to the terminal. Is there any way to do this? Please note I don't have the option of using the sudo command.
ssh -t [email protected] "ssh -t aserver "su - buser -c "tail -f /logfile
|
This is becoming a very common issue in these days, try it in this way:
ssh -t [email protected] ssh -t aserver 'su - buser -c \"tail -f /logfile\"'
| command line in su - and background |
1,494,543,840,000 |
I'm pretty new - came across this problem. My webserver terminal is looking weird, there are some code echoes and then rows and rows of tilde. Where I type would be at the green square, but nothing I type commands or otherwise do anything. Pressing enter jumps down the square and clears a tilde but does not make any difference.
Tried to run clear screen commands ( ctrl-l and such ) doesn't do anything. Searching about tildes just brings up the meaning of a tilde, not whatever this is. Any way to kill what is happening?
|
You seem to be in Vi Editor editing mode (look down below the screen: -- INSERT --, is for insert mode).
To change editing mode, you need to press ESC key first. And then to quit/exit vi altogether without saving you can type :q! and press ENTER.
Vi is a widely used command-line text editor that comes with almost every Linux distro. You should take some time to learn it: How to Use the vi Editor
| Linux server terminal - rows of tildes, can't write commands? |
1,494,543,840,000 |
I have a triple boot of Ubuntu, Haze OS, and Kali Linux, and I would like to run an app installed on another operating system from my Primary OS. Like running chrome installed in Ubuntu while using Haze OS.
Is this possible? And how can I do this if it is?
|
It's rather simple to do something like this using the chroot tool. If you mount say Ubuntu to some location, then you can run the programs from that OS. The process would look something like this.
mount /dev/sdu1 /mnt/otheros #Mount the / of the foreign OS to some directory
chroot /mnt/otheros #You should now be able to run all the programs of the foreign OS.
cat /etc/*-release
Note: chroot can only be run as root. You may want to look into fakechroot/fakeroot as an alternative.
| Share apps across multiple Linux distributions? |
1,494,543,840,000 |
using the mpc search command I can find the filename of a song like so:
mpc search title 'Two Weeks'
The output would look like this: Grizzly Bear - Two Weeks.mp3
Since I also know the location of the mp3 (MPD Music directory), I know how I could delete the file with another command. But I want to have a one-line command that finds and deletes a music file in my MPD directory.
|
On the assumption that only one filename comes back, you can use something like:
rm "/path/to/MPD/Music/directory/$(mpc search title 'Two Weeks')"
If there are multiple results, this will break.
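A sketch that copes with multiple results by deleting each returned file in turn. Since this example assumes no real mpd setup, the mpc output is simulated with printf and the music directory is a throwaway temp directory; in real use you would pipe mpc search into the loop instead:

```shell
# Simulate "mpc search title ..." output with printf; in real use,
# replace the printf with the actual mpc search command.
MUSICDIR=$(mktemp -d)                       # stand-in for the MPD music dir
touch "$MUSICDIR/Grizzly Bear - Two Weeks.mp3"
printf '%s\n' 'Grizzly Bear - Two Weeks.mp3' |
while IFS= read -r f; do
    rm -- "$MUSICDIR/$f"                    # delete each returned file
done
ls "$MUSICDIR"                              # directory is now empty
```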
| Delete file returned by "mpc search" with single command |
1,494,543,840,000 |
FileA.txt:
ATGCATGC
GGGGGGTT
TTTTT
AAAA
FileB.txt:
asdfasdf
blah2
ATGCATGC
blah3
blah4
delte-me-too
GGGGGGTT
blah5
blah5
....
I want to compare the each line from FileA.txt and check if it is in FileB.txt. If it is in FileB, I want to delete the following:
Matched Line
One line above
Two lines below
and output into a new file.
NOTE: There will be 500,000 lines in FileA. I would like to do this in a way which we do not hardcode the patterns.
I currently have something to delete the lines, but I'm getting tripped up about the looping through FileA to create a new pattern for this awk expression:
awk '/$VARIABLE_REGEX/{for(x=NR-2;x<=NR+2;x++)d[x];} {a[NR]=$0}
END{for(i=1;i<=NR;i++)
if(!(i in d))
print a[i]}' FileB.txt
|
Note: No error checking. Also, assumption is that the input in 2nd file follows the pattern mentioned exactly.
awk 'NR == FNR { a[$0] = $0; next }
     {
         if (!($0 in a)) { b[count++] = $0 }
         else { count--; if (count > 0) delete b[count]; getline; getline }
     }
     END { for (i = 0; i < count; i++) print b[i] }' 1 2
inputs are in 1 and 2
1
ATGCATGC
GGGGGGTT
TTTTT
AAAA
2
asdfasdf
blah2
ATGCATGC
blah3
blah4
delte-me-too
GGGGGGTT
blah5
blah5
foo
foo-delete
AAAA
bar-delete
bar-delete
bar-ok
Output
asdfasdf
foo
bar-ok
| How to loop through a file and make each line a new regular expression into an awk statement? |
1,494,543,840,000 |
I want to filter grep results by using grep -v option.
But the output does not differ when using a particular pattern.
contents of log.log:
ERROR
error
EXCEPTION
exception
<STATUS>ERROR</STATUS>
<MessageType>ERROR</MessageType>
When I run the command:
egrep -wi 'error|exception' /temp/log.log | grep -v 'error'
gives output:
ERROR
EXCEPTION
exception
<STATUS>ERROR</STATUS>
<MessageType>ERROR</MessageType>
But if I run the command:
egrep -wi 'error|exception' /temp/log.log | grep -vi '<STATUS>ERROR</STATUS>'
It still gives the output as:
ERROR
EXCEPTION
exception
<STATUS>ERROR</STATUS>
<MessageType>ERROR</MessageType>
Whereas I am expecting:
ERROR
EXCEPTION
exception
<MessageType>ERROR</MessageType>
Why is this happening?
|
The problem occurs because the first egrep highlights its matches: on many systems grep/egrep is aliased to use --color=always, which inserts invisible ANSI colour escape sequences before and after each matched pattern (ie. error or exception) in the grep result.
It is as Harald mentioned. The '<STATUS>ERROR</STATUS>' pattern did not match because the 'ERROR' between the STATUS tags in egrep's output had those escape sequences wrapped around it.
Hence when the egrep's result was sent to grep -v command, it failed to match the pattern. Forcing --color=never on the first grep (or bypassing the alias) fixes it.
| Grep -v filter not working [closed] |
1,494,543,840,000 |
Merge Two files depending on a column, nth occurrence of a string in column1 of file1 should be merged with nth occurrence of the same string in column1 of file2. I tried join but the results are not as expected.
join <(sort file1) <(sort file2)| column -t | tac | sort | uniq > file3
file1
CAAX-MC oracle.log.ApplicationScript 1 7 CRM
CAAZ-TEST-MC oracle.log.ApplicationScript 1 7 CRM
DAA oracle.log.ApplicationScript 1 7 CRM
DJF oracle.log.ApplicationScript 1 6 CRM
DJF oracle.apps.appslogger 5 6 CRM
file 2
CAAX-MC CRMDomain
CAAZ-TEST-MC CRMDomain
DJF CRMDomain
DJF CommonDomain,CRMDomain,FinancialDomain
file 3 -- desired output:
CAAX-MC oracle.log.ApplicationScript 1 7 CRM CRMDomain
CAAZ-TEST-MC oracle.log.ApplicationScript 1 7 CRM CRMDomain
DAA oracle.log.ApplicationScript 1 7 CRM
DJF oracle.log.ApplicationScript 1 6 CRM CRMDomain
DJF oracle.apps.appslogger 5 6 CRMCommonDomain,CRMDomain,FinancialDomain
|
Use awk:
awk 'FNR==NR{a[NR-1]=$0}
FNR!=NR{for(i in a){split(a[i],x," ");
if(x[1]==$1){$0=$0" "x[2];delete a[i];break}}print;}' file2 file1
Notice the argument order: file2 is before file1.
FNR==NR: applies only to the first file (in the arguments list): file2.
a[NR-1]=$0: fills an array a with the lines of file2.
FNR!=NR: applies to file1.
for(i in a): loop through the previously created array a
split(a[i],x," "): split the value (line of file2) at the space and store it in the new array named x.
if(x[1]==$1): if the first element of x (x[1]) is equal to the first field ($1) of file1 (if the first field is found in the array) then:
$0=$0" "x[2] set the line to be printed with the new value at the end x[2].
delete a[i];break since you want the next occurence of that index when it appears again in file1 (for example DJF), we need to delete that element of the array a and fall out of the for loop (break).
print: whether the element is found in the array or not doesn't matter, the line (of file1) should be printed anyway.
The output:
CAAX-MC oracle.log.ApplicationScript 1 7 CRM CRMDomain
CAAZ-TEST-MC oracle.log.ApplicationScript 1 7 CRM CRMDomain
DAA oracle.log.ApplicationScript 1 7 CRM
DJF oracle.log.ApplicationScript 1 6 CRM CRMDomain
DJF oracle.apps.appslogger 5 6 CRM CommonDomain,CRMDomain,FinancialDomain
| Merge Two files depending on a Column, nth occurrence of a string in the column of file 1 to be merged with nth occurrence |
1,494,543,840,000 |
I am using a Raspberry Pi 2 running raspbian.
This little guy is going to basically be a media server.
I would like to be able to push messages when I ssh into the thing, but when I try to echo "message" |write, it doesn't show up on startx.
Is there away to create a popup that will flash in the corner on the television when its being used?
|
Plenty of ways. Like this simple one:
echo "message" | DISPLAY=:0.0 xmessage -geometry -0+0 -timeout 5 -file -
Any solution requires you have access to the X server, though. So perhaps switch to the user owning the display first?
| pushing messages as root to user using startx |
1,494,543,840,000 |
I am wondering if there is way to get the state of the CD drive in the command line if there is an audio cd inserted. Can you play a track on the CD and then get back somehow from the system the 'playing' status? Is there any record of the state of the audio cd in the system? (playing/stopped/idle/paused/etc)
|
Today, nobody tells the CD-Rom to play a CD but rather to read a CD and to send the read data to the audio interface of the computer.
When a CD is actually read, you cannot send/execute another SCSI command to the same drive, so there is no way to tell what you like.
What you can do is call cdrecord -minfo to get the state of the currently inserted medium, and cdrecord -toc to get the table of contents.
| Is there a way to get the status of the audio CD (idle/playing)? |
1,494,543,840,000 |
I have thousands of image files that have a 10-digit number appended to the beginning of the filenames. Immediately following each string of 10 numbers is an underscore. They look like this:
1318487644_IMG_2158.jpg
I need to remove the 10dig number and the underscore, without disturbing what follows, the result of which should look like this:
IMG_2158.jpg
I'm using this command to find/replace other unwanted stuff in the filenames:
ls -1 | while read file; do new_file=$(echo $file | sed s/foo/bar/g); mv "$file" "$new_file"; done
How can I edit the above command to remove the leading 10dig+underscore combo(s) without altering the rest of the filename(s)?
|
Parsing ls output is considered bad practise by some. So it could look something like this (assuming posix shell):
for file in /path/to/file/*
do
    base=${file##*/}                # work on the filename, not the full path
    part_to_remove=$(printf '%s\n' "$base" | grep -Eo '^[[:digit:]]{10}_')
    if [ -n "$part_to_remove" ]; then
        new_file="${file%/*}/${base#"$part_to_remove"}"
        mv "$file" "$new_file"
    fi
done
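A self-contained way to try the idea on a throwaway directory, using only shell pattern matching (a variant of the approach, not the answer's exact code):

```shell
dir=$(mktemp -d)
touch "$dir/1318487644_IMG_2158.jpg" "$dir/keepme.jpg"
for file in "$dir"/*; do
    base=${file##*/}
    case $base in
        # exactly ten digits followed by an underscore
        [0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]_*)
            mv "$file" "$dir/${base#??????????_}" ;;
    esac
done
ls "$dir"    # IMG_2158.jpg  keepme.jpg
```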
| Remove leading 10-digit number from filenames |
1,494,543,840,000 |
So I want to create an alias called changeAllPermissions that accepts one argument, in such a way that when changeAllPermissions argument is called, both Group and Other do not have access to read, write or execute argument. If argument is a directory, then the permissions will be changed on argument as well as everything inside it, recursively.
Here is what I know
I know how to create an alias, for example
alias myAlias=ls
I also know how to list files recursively
ls -R
To change the permissions as stated by my problem, I would do
chmod go-rwx
But I´m having a hard time putting all this together.
Any help is appreciated.
Thanks friends.
|
chmod already has a recursive flag (-R). From the manpage:
-R, --recursive
change files and directories recursively
So, if you wanted a function to do this for you, you could write something like
function myFunc() {
chmod -R go-rwx -- "$1"
}
Or an alias:
alias myAlias='chmod -R go-rwx'
| change permissions to a file and everything inside , recursively |
1,494,543,840,000 |
I have multiple folders named as follow:
Name1
Name2
...
Name9
Name10
Name11
...
I need to rename them using mv command into:
Name01
Name02
...
Name09
Name10
Name11
...
Any ideas?
|
You seem to be actually renaming only 1-9, so that simplifies things tremendously:
for f in `seq 1 9`
do
mv Name${f} Name0${f}
done
If you start moving into triple digits, things get a bit more complicated, but not insurmountably so:
for f in `seq 0 95`
do
g=`printf %03d $f`
mv Name${f} Name${g}
done
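The same idea, runnable end-to-end in a temporary directory (only renaming files that actually exist):

```shell
dir=$(mktemp -d)
for n in 1 2 9 10 11; do touch "$dir/Name$n"; done
for f in 1 2 3 4 5 6 7 8 9; do
    # zero-pad single-digit suffixes with printf
    [ -e "$dir/Name$f" ] && mv "$dir/Name$f" "$dir/Name$(printf %02d "$f")"
done
ls "$dir"    # Name01  Name02  Name09  Name10  Name11
```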
| rename multiple directories by adding one character [duplicate] |
1,494,543,840,000 |
eg:
CREATE VIEW AIPKEYITEM.SEASONGROUPNETSALES (
CALENDARID ,
PRODUCTGROUPID FOR COLUMN PRDGRPID ,
NETSALESDOLLARS FOR COLUMN NETSA00001 ,
NETSALESUNITS FOR COLUMN NETSA00002 )
AS
SELECT
(SELECT MIN(CALENDARID)
FROM AIPKEYITEM.KEYIT00002
WHERE FISCALYEAR = CAL.FISCALYEAR
AND FISCALSEASON = CAL.FISCALSEASON) CALENDARID
, PRODUCTGROUPID
, SUM(NETSALESDOLLARS) NETSALESDOLLARS
, SUM(NETSALESUNITS) NETSALESUNITS
FROM AIPKEYITEM.KEYIT00002 CAL
INNER JOIN AIPKEYITEM.WEEKG00001 WEEKGROUPORGUNITUNITSDOLLARS
ON CAL.CALENDARID = WEEKGROUPORGUNITUNITSDOLLARS.CALENDARID
GROUP BY CAL.FISCALYEAR, CAL.FISCALSEASON, PRODUCTGROUPID ;
In the above example I want to remove entire word starting from FOR COLUMN in which line it is present.
|
How about:
sed 's/FOR COLUMN.*$//' myscript > myscript.filtered
when myscript is the name of the file which is shown above? Is that what you wish to achieve?
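The substitution can be sanity-checked on a single sample line before touching the real file (note that everything from FOR COLUMN to the end of the line is removed, including any trailing comma):

```shell
printf '%s\n' 'PRODUCTGROUPID FOR COLUMN PRDGRPID ,' |
    sed 's/FOR COLUMN.*$//'
# -> "PRODUCTGROUPID "  (trailing space kept, rest of the line removed)
```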
| I want to delete from particular words in all the line if that particular word is present |
1,437,621,284,000 |
I want to use command line to show only the port numbers after ":" only
This is what I'm trying to do
sudo netstat -ant |grep LISTEN|grep :|sort -n|cut -c 45-
It shouldn't list any tcp6 info
|
With sed:
sudo netstat -4tln | sed '1d;2d;s/[^:]*:\([0-9]\+\).*/\1/' | sort -n
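You can verify the sed expression against canned netstat-style output without root (two header lines dropped, then each local-address port extracted; the `\+` repetition is GNU sed syntax):

```shell
printf '%s\n' \
  'Active Internet connections (only servers)' \
  'Proto Recv-Q Send-Q Local Address Foreign Address State' \
  'tcp 0 0 0.0.0.0:631 0.0.0.0:* LISTEN' \
  'tcp 0 0 127.0.0.1:22 0.0.0.0:* LISTEN' |
  sed '1d;2d;s/[^:]*:\([0-9]\+\).*/\1/' | sort -n
# -> 22
#    631
```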
| how to show port numbers that are listening for the incoming connections under TCPv4? |
1,437,621,284,000 |
I've loaded a custom driver to linux via mknod. However, now any programs I write to use it need to be run using sudo, which is difficult to do when I'm using an IDE. Is there any way to adjust the driver privileges so I can run my program in my IDE?
|
It should work if you change the ownership and permissions of the device node you've created:
chown kalak:kalak node
chmod 600 node
(assuming your username is kalak, your main group is kalak and your device node is called node).
| Linux driver privileges |
1,437,621,284,000 |
I recently learned here how to open a program from the command line.
my question is, how can i make a set of programs open simultaneously with one command entered?
|
You can run a program simply by typing its name (and Enter, of course).
To run a program "in the background", giving you control of your terminal again, you can append &.
So:
gvim /etc/hosts # Runs gvim and waits until it's finished
But:
gvim /etc/hosts & # Runs gvim and returns control to the terminal
Therefore, this can be used to start three programs, one after the other:
kontact &
rekonq &
something_else &
If you can type fast enough they'll appear to start simultaneously. Or you can put all three commands on the same line, like this, so that the commands are executed only once you hit Enter:
kontact & rekonq & something_else &
| how to open sets of programs simultaneously with command line |
1,437,621,284,000 |
The screen command is a nice program for running processes in the background, but I found Ctrl+a w does not show the screen window list in xshell (Xmanager component), and Ctrl+a k does not kill the screen terminal for me. However, Ctrl+a d to detach the session works! So what's wrong with Ctrl+a w to list windows?
More seriously, how do I know whether I am in a screen window or a normal bash window? Many times when I try to detach a screen session, I get logged out after Ctrl+a d. Very embarrassing, isn't it? So is there any hint to show me whether I am in a screen window or just a normal tty terminal?
|
If you're getting logged out when you type ^A d I'm guessing you're still holding down the control key when you're pressing d. ^A ^D is bound to "detach" as well as ^A d. For ^A k and ^A w, try letting go of the control key before pressing the k or the w.
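As for the second part of the question (telling whether you're inside screen, which the answer above doesn't cover): GNU screen exports the STY environment variable into its windows, and usually sets TERM to "screen", so a quick check is:

```shell
# STY is set by GNU screen inside its windows.
if [ -n "$STY" ]; then
    echo "inside screen (session: $STY)"
else
    echo "not inside screen"
fi
```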
| Running background process with screen command in xshell |
1,437,621,284,000 |
I am running Ubuntu 14.04 on my machine. I am feeling lost when I see different command line commands (or what exactly are they called?) like sudo, apt-get, mkdir, -R, -n etc. while installing different software like Node JS, Mongo DB etc.
What are some good resources where I can find what do different Linux/Unix commands exactly mean?
|
man pages are your friend. Whenever you see a command you've never used, run man [name of command]
For example, man sudo will tell you:
NAME
sudo, sudoedit - execute a command as another user
and, lower down:
DESCRIPTION
sudo allows a permitted user to execute a command as the superuser or another user, as specified in the sudoers file. The real and effective uid and gid are set to
match those of the target user as specified in the passwd file and the group vector is initialized based on the group file (unless the -P option was specified). If
the invoking user is root or if the target user is the same as the invoking user, no password is required. Otherwise, sudo requires that users authenticate
themselves with a password by default (NOTE: in the default configuration this is the user's password, not the root password). Once a user has been authenticated, a
time stamp is updated and the user may then use sudo without a password for a short period of time (5 minutes unless overridden in sudoers).
If the wording is too complicated, that's when you do a Google search such as "Linux what does sudo do?"
| What are the meanings of different Unix commands? [closed] |
1,437,621,284,000 |
I've tried numerous things such as uname -a, env, findsmb, amongst others, but they only give the hostname, which is not the server where the files are hosted... I get that there is an NFS server somewhere that is sharing the files to the host I am connected to, but I am not sure which command would determine its name or even its IP.
I'll be blunt: this is for a lab that I was unfortunately sick for, and I'm still waiting for an email response from my lab instructor with the same question. With the lab being due in a few hours, I'd say this individual isn't going to respond in time, but I digress. Ultimately, a gentle push in the right direction would be preferred as opposed to a direct answer (if possible).
|
To determine where a file is hosted this is what I do:
cd /path/to/folder
df -k .
The host name where the file is sourced from appears at the beginning of the second line of the output.
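To grab just that host/device field programmatically, df -P keeps each entry on a single line, which makes the output safe to parse:

```shell
# Print only the filesystem source for the current directory.
df -P . | awk 'NR==2 {print $1}'
```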
| how to find out the name of the server on which the files are hosted on? [closed] |
1,437,621,284,000 |
I need to monitor a windows host using command line in Nagios.
As we can monitor Remote Linux host by NRPE (check_nrpe)using command line as :
/usr/local/nagios/check_nrpe -H localhost -c somecommand -t 30
What is the command in Linux to monitor windows host using check_nt plugin? I can monitor successfully by graphical method given in Nagios Core Documentation, but I want to parse its output to some code for further processing.
I have written a simple shell script to monitor by nagios for those Linux systems.
Now I have given this script to the developer who can use this script in his QT C++ code and produced output in format suggested to him.
But now we can only monitor Linux systems as we haven't found any command yet to append in the script, so that we can process in our code to produce output as we want.
|
Actually after searching for long I found this solution:
/usr/local/nagios/libexec/check_nt -H <host> -p <port> -v <command> -l <value>
So I have used this in my script as :
/usr/local/nagios/libexec/check_nt -H $myHost -p 12489 -v CPULOAD -l 5,80,90,10,80,90
/usr/local/nagios/libexec/check_nt -H $myHost -p 12489 -v USEDDISKSPACE -l c
/usr/local/nagios/libexec/check_nt -H $myHost -p 12489 -v MEMUSE
| Nagios : How to monitor windows host from linux "by command line"? |
1,437,621,284,000 |
I succesfully installed Picasa 3.9 under Crunchbang following this webupd8-tutorial. I also installed libwine-cms:i386. Everything works well and Picasa launches after installation.
The problem: once I close it, I cannot get it to relaunch. I tried the following and neither works:
picasa
Picasa
Picasa39
wine picasa
wine Picasa39
On the wine-commands I get wine: cannot find L"C:\\windows\\system32\\picasa.exe", so I tried copying the .exe from "Program Files" to 'system32' but that also does not work.
|
Ok, this Rosetta-Stone question gave me the clue:
cd ~/.wine/drive_c/Program\ Files/Google/Picasa3
wine Picasa3.exe
| Launch Picasa under CrunchBang Waldorf |
1,437,621,284,000 |
How to start rexecd daemon/server on ubuntu-11.10 ?
If I try to run the command /usr/sbin/in.rexecd then an error appears as rexecd: getpeername: Socket operation on non-socket
rexecd is remote execution server
|
rexecd is supposed to be run by the inetd, tcpd or xinetd daemons. You need to check which one is installed or install one. I would recommend xinetd as it allows more flexible configuration.
| rexecd daemon for Ubuntu |
1,437,621,284,000 |
I want to archive family videos, films, etc. I'm currently using:
mencoder INPUT.avi -o OUTPUT.avi -ovc x264 -x264encopts bitrate=750 pass=1 nr=2000 -oac mp3lame -lameopts cbr:br=128
I want to retain good quality, but obtain the smallest video size. Can I do better than this? (Command-line applications only please.)
|
The trade-off between size and quality is a personal, subjective one. What you are doing there is transcoding an already-lossy-compressed video from one format to another. An analogy would be recording from one VHS tape to another: the source will have some imperfections in it which will be recorded onto the destination, in addition to new imperfections.
How big are your source videos? For archiving, I would recommend storing the best quality you have.
Other considerations are the longevity and availability of software to play back the format you choose. x.264 and mp3 (your choices, above) are not bad, on that score.
A useful set of articles for this subject are Mark Pilgrim's "A gentle introduction to video encoding": http://diveintomark.org/tag/give
| Video archiving method? |
1,437,621,284,000 |
I just read in a book (from 2000) how a (unix/linux) server would listen on a port waiting for connections associated with a utility.
An example in the book is about the finger utility.
The netstat output on the server is like this:
proto recv-q send-q local address foreign address state user
[..]
tcp 0 0 *.finger *:* listen root
[..]
So, finger is listening on the finger-port (which is 79).
A user then may connect (with telnet is this case) to the server at port 79, input a user name and see the output from the finger command:
... Connected to server
Escape character is '^]'.
guest                                  <--- user input
Login: guest        Name: guest        <--- output from finger(?)
Directory: /dev/null        Shell /dev/null
Never logged in.
No mail.
No plan.
Does anyone have any good explanation for how this is done?
My approach would be to connect to ssh on the server and then run finger from the command line interface.
Thanks!
|
To answer myself; inetd was what I was looking for; launch a program to process the connection, e.g. netstat.
Linux – Create custom inetd service
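For reference, the classic inetd.conf entry that wires port 79 to the finger daemon looks roughly like this (paths and daemon name vary by system; this is an illustrative fragment, not something I've tested):

```
finger  stream  tcp  nowait  nobody  /usr/sbin/in.fingerd  in.fingerd
```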
Thank you for helping me.
| Binding a utility to an ip/port [closed] |
1,437,621,284,000 |
I have the following answer for How to remove a single line from history?. When I do the following, the line ( echo hello_world) is not saved into history. Please note that I am using the zsh shell.
prompt$ echo hello_world
> # ^ extra space
$ history | tail -n1
1280 history | tail -n
$ echo hello_world
hello_world
$ history | tail -n1
1280 history | tail -n
But when I run a command having a space at the beginning and right after do Ctrl+P, I can see it in the shell history, even though it is not saved in history. Is it possible to prevent it? With the bash shell, this works when setting HISTCONTROL=ignorespace.
$ echo hello_world
$ # Press `CTRL-p` => " echo hello_world" shows up again
Setup: I have following configuration for the zsh shell:
## Save only one command if 2 common are same and consistent
setopt HIST_IGNORE_DUPS
setopt HIST_IGNORE_ALL_DUPS
## Delete empty lines from history file
setopt HIST_REDUCE_BLANKS
## Ignore a record starting with a space
setopt HIST_IGNORE_SPACE
## Do not add history and fc commands to the history
setopt HIST_NO_STORE
## Add timestamp for each entry
setopt EXTENDED_HISTORY
|
The behaviour that you're observing, i.e. that pressing Ctrl+P brings back the previous command even if it starts with a space and HIST_IGNORE_SPACE is set, is documented (my emphasis):
HIST_IGNORE_SPACE (-g)
Remove command lines from the history list when the first
character on the line is a space, or when one of the expanded
aliases contains a leading space. Only normal aliases (not
global or suffix aliases) have this behaviour. Note that the
command lingers in the internal history until the next command
is entered before it vanishes, allowing you to briefly reuse or
edit the line. If you want to make it vanish right away without
entering another command, type a space and press return.
The workaround, according to the manual, is to type a single space and press Enter to prevent Ctrl+P from accessing that command again.
| Is it possible to remove previous command from the zsh shell history when it starts with a space? |
1,437,621,284,000 |
I have a command that I run to try and list PID's and their CPU usage. I am using ps -ef.
Is there a (better) way to do this using top? Also, I had a question about my awk statement. My variable $vGRP is a regular expression. How do I test if $2 is in $vGRP? If so, this can cut out one of my grep calls.
I initially wrote this as a "one-liner" that I can just paste into a terminal session, so please forgive the formatting:
clear;
printf "Please enter process name: "; read vPNAME;
for i in $(pgrep "$vPNAME");
do vGRP="$vGRP$i|";
done;
vGRP="${vGRP::-1}";
printf "Seaching for processes: $vGRP\n PID\tUSAGE\n-------\t-------\n";
ps -ef | egrep "$vGRP" | egrep "$vPNAME" | awk '{print $2, "\t", $4 }';
vGRP=""; vPNAME="";
Ideally, I would like something a little cleaner, but I'm not as familiar with bash and I want awk to check for field 2 in string vGRP if possible.
ps -ef | awk -v vGRP="$vGRP" '$vGRP~/$2/ {print $2, "\t", $4 }';
However, this does not provide output because I assume that awk does not read external variables.
|
Reinventing the wheel apparently.
ps -o pid,pcpu -p $(pgrep "$vPNAME")
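A quick way to see the output format without prompting for a process name is to run the same ps invocation against a process you start yourself:

```shell
sleep 30 &                    # start a throwaway background process
pid=$!
ps -o pid=,pcpu= -p "$pid"    # "=" suppresses the column headers
kill "$pid"
```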
| Locating a process ID by CPU usage (Ubuntu) using AWK |
1,587,144,341,000 |
I'm trying to mask some sensitive data in a log file.
I first need to filter out specific lines from the file with a matching pattern and then for those specific lines I need to replace any text that is inside double quotes but leave alone any text that is not.
In the file, for every line that matches the pattern and contains double quotes, anything inside the double quotes needs to be replaced so that any A-Z gets replaced by X, any a-z by x and any digit 0-9 by 0.
In one line, there can be multiple quoted strings. Inside quotes can be also special characters, like ',', '-', '.', '@' and those should be preserved as-is.
An example file contents (filtering word in this case is 'KEYWORD'):
2020-04-18 15:01:12 [EVENT] :log-event-with-KEYWORD: {:entry1 {:entry2 {:value "Replace This"}}} -> {:entry1 {:entry2 {:value "Replace ALSO this."}}}
2020-04-18 15:01:13 [EVENT] :log-event-with-KEYWORD: {:entry1 {:entry2 {:value "REplace. THIS 12345"}}}
2020-04-18 15:01:15 [EVENT] :this_has--the-KEYWORD: {:entry1 {:entry2 {:value "[email protected]"}}} -> {:entry1 {:entry2 {:value "[email protected]"}}}
2020-04-18 15:01:18 [EVENT] :log-event-without-keyword: {:entry1 {:entry2 {:value "Do NOT replace this."}}} -> {:entry1 {:entry2 {:value "Do-NoT replace this either"}}}
That file as input would be processed into this output:
2020-04-18 15:01:12 [EVENT] :log-event-with-KEYWORD: {:entry1 {:entry2 {:value "Xxxxxxx Xxxx"}}} -> {:entry1 {:entry2 {:value "Xxxxxxx XXXX xxxx."}}}
2020-04-18 15:01:13 [EVENT] :log-event-with-KEYWORD: {:entry1 {:entry2 {:value "XXxxxxx. XXXX 00000"}}}
2020-04-18 15:01:15 [EVENT] :this_has--the-KEYWORD: {:entry1 {:entry2 {:value "[email protected]"}}} -> {:entry1 {:entry2 {:value "[email protected]"}}}
2020-04-18 15:01:18 [EVENT] :log-event-without-keyword: {:entry1 {:entry2 {:value "Do NOT replace this."}}} -> {:entry1 {:entry2 {:value "Do-NoT replace this either"}}}
The changed lines need to be updated in the file, or the whole file with these modifications should be written to standard output (the lines that did not have the keyword(s), the line order, and other details should be preserved).
Is it possible to accomplish this using bash scripting/command-line tools like grep and/or sed?
|
awk '/KEYWORD/{
n=split($0,a,"\"")
for(i=2;i<=n;i=i+2){
gsub(/[A-Z]/,"X",a[i])
gsub(/[a-z]/,"x",a[i])
gsub(/[0-9]/,"0",a[i])
}
sep=""
for (i=1;i<=n;i++){
printf "%s%s",sep,a[i]
sep="\""
}
printf "\n"
next
}
1' file
For example, on your updated input file
2020-04-18 15:01:12 [EVENT] :log-event-with-KEYWORD: {:entry1 {:entry2 {:value "Replace This"}}} -> {:entry1 {:entry2 {:value "Replace ALSO this."}}}
2020-04-18 15:01:13 [EVENT] :log-event-with-KEYWORD: {:entry1 {:entry2 {:value "REplace. THIS 12345"}}}
2020-04-18 15:01:15 [EVENT] :this_has--the-KEYWORD: {:entry1 {:entry2 {:value "[email protected]"}}} -> {:entry1 {:entry2 {:value "[email protected]"}}}
2020-04-18 15:01:18 [EVENT] :log-event-without-keyword: {:entry1 {:entry2 {:value "Do NOT replace this."}}} -> {:entry1 {:entry2 {:value "Do-NoT replace this either"}}}
This awk produces the desired output
2020-04-18 15:01:12 [EVENT] :log-event-with-KEYWORD: {:entry1 {:entry2 {:value "Xxxxxxx Xxxx"}}} -> {:entry1 {:entry2 {:value "Xxxxxxx XXXX xxxx."}}}
2020-04-18 15:01:13 [EVENT] :log-event-with-KEYWORD: {:entry1 {:entry2 {:value "XXxxxxx. XXXX 00000"}}}
2020-04-18 15:01:15 [EVENT] :this_has--the-KEYWORD: {:entry1 {:entry2 {:value "[email protected]"}}} -> {:entry1 {:entry2 {:value "[email protected]"}}}
2020-04-18 15:01:18 [EVENT] :log-event-without-keyword: {:entry1 {:entry2 {:value "Do NOT replace this."}}} -> {:entry1 {:entry2 {:value "Do-NoT replace this either"}}}
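A condensed, self-contained check of the same split-on-quotes idea on a single made-up line:

```shell
printf '%s\n' 'x KEYWORD {:value "Ab 12"}' |
awk '/KEYWORD/ {
    n = split($0, a, "\"")
    for (i = 2; i <= n; i += 2) {   # even-numbered pieces are inside quotes
        gsub(/[A-Z]/, "X", a[i]); gsub(/[a-z]/, "x", a[i]); gsub(/[0-9]/, "0", a[i])
    }
    s = ""; for (i = 1; i <= n; i++) { printf "%s%s", s, a[i]; s = "\"" }
    print ""; next
} 1'
# -> x KEYWORD {:value "Xx 00"}
```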
| Replace specific characters inside quotes |
1,587,144,341,000 |
POSIX and GNU have their syntax styles for options.
For all the commands that I have seen, they accept option-like inputs as command line arguments.
Is it uncommon for a program to accept option-like inputs from stdin (and therefore to use getopt to parse option-like stdin inputs) ? Something like:
$ ls -l
-rw-rw-r-- 1 t t 31232 Jan 7 13:38 fetch.png
-rw-rw-r-- 1 t t 69401 Feb 6 14:35 proxy.png
$ myls
> -l
-rw-rw-r-- 1 t t 31232 Jan 7 13:38 fetch.png
-rw-rw-r-- 1 t t 69401 Feb 6 14:35 proxy.png
> -l fetch.png
-rw-rw-r-- 1 t t 31232 Jan 7 13:38 fetch.png
Why is it uncommon for stdin inputs to have option-like inputs, while common for command line arguments?
From expressive power point of view (similar to regular languages vs context free languages), are nonoption like inputs and option like inputs equivalent?
Let's not emphasize shell expansions, because we never expect (or need) something like shell expansion to happen on stdin input (non-option-like).
Thanks.
Similar questions for nonoption like inputs: https://stackoverflow.com/questions/54584124/how-to-parse-non-option-command-line-arguments-and-stdin-inputs (The post is removed due to no response and downvotes. but still accessible if you have sufficient reputations)
|
tl;dr This is very similar to wanting to use ls in scripts (1, 2), quoting only when necessary, or crossing the streams (which is almost literally what this does, by using stdin for two completely orthogonal things). It is a bad idea.
There are several problems with such an approach:
Your tool will have to handle all of the expansions which a shell already handles for you:
brace expansion
tilde expansion
parameter and variable expansion
command substitution
arithmetic expansion
word splitting
filename expansion, including globbing as @StephenHarris points out
If your tool does not handle any expansions (as you suggest, contrary to the example you've given which clearly has to do word splitting to not treat -l fetch.png as a single parameter) you will have to explain to developers at very great length why none of -l "$path", -l ~/Downloads and -l 'my files' do what they expect.
When your tool handles expansions differently from shells (which it will do, because shells handle expansions differently and nobody can afford to detect which shell you're running in and supporting all of them forever), you've just added a new syntax which needs to be learned, remembered and used by everyone using your tool.
Standard input is no longer just the tool input. Based on The Unix Philosophy, it can no longer trivially work together with other tools, because those other tools will have to do special things to pass input to your tool.
There is already a convention used by every major tool. Breaking convention in such a fundamental way would be a really bad idea in terms of usability.
Mixing options and input cannot be done safely for arbitrary input, unless you go to the considerable extra trouble of encoding or escaping one of them.
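The last point is also why the established convention for feeding file lists (not options) through stdin is NUL-delimited streams, which keep option-looking data inert. A small sketch with made-up file names:

```shell
d=$(mktemp -d); cd "$d" || exit 1
touch -- '-l' 'my file'
# With -print0/-0 the names travel as pure data; even the file literally
# named "-l" is never parsed as an option by ls (note the "--" guard too).
find . -type f -print0 | xargs -0 ls -ld --
```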
| Why is it uncommon for stdin inputs to have option-like inputs, while common for command line arguments? |
1,587,144,341,000 |
I am doing some challenges; this is one of them. I am trying to brute-force a 4-digit PIN together with the password to get my desired answer. After connecting to the port it prompts me to enter the password, then a space, then the 4-digit PIN. I tried to brute-force the PIN using the script:
#!/bin/bash
nc localhost 30002
sleep 2
for i in {0000..9999};
if [[ $(echo 'UoMYTrfrBFHyQXmg6gzctqAwOmw1IohZ $i' </dev/stdin) = ^Wrong*]];
then
continue
echo '[+] Pincode Cracked! Pincode = $i'
fi
done
but it seems that this doesn't input the pass and pin to stdin, before i tried doing something like this -> if [[ $(echo 'UoMYTrfrBFHyQXmg6gzctqAwOmw1IohZ $i') = ^Wrong* ]]; What am I doing wrong here?
UPDATE:
Okay, so after researching around. I wrote this:
for i in {0000..9999}
do
if [ (echo "UoMYTrfrBFHyQXmg6gzctqAwOmw1IohZ $i" | nc localhost 30002 | grep -o Wrong) == "Wrong" ]
then
sleep 0.1
continue
fi
echo "- - - - - - - - - - - - - - - - - - - - - - - - [$i]"
done
This might even work, but as you can see it opens a new connection on every loop iteration, which makes it really slow and exhausts the system.
|
That's because you're not telling your script to write anything to nc's standard input. Your script starts netcat, waits for it to terminate, and then sleeps for two seconds before executing the for loop. You probably want a construct such as:
for i in {0000..9999}; do
: stuff
done | nc localhost 30002
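Building on that, here is a sketch that keeps a single connection open: generate all 10000 candidate lines up front and pipe them into one nc. The password string and port are the ones from the question; the function name is made up for illustration:

```shell
# Generate "password PIN" candidates, zero-padded to 4 digits:
gen_candidates() {
    for i in $(seq -w 0 9999); do
        printf 'UoMYTrfrBFHyQXmg6gzctqAwOmw1IohZ %s\n' "$i"
    done
}
# Feed every guess over a single connection and filter out the failures:
#   gen_candidates | nc localhost 30002 | grep -v Wrong
gen_candidates | head -n 3   # first few candidates
```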
| Brute-force 4 digit pin with pass using shell script |
1,587,144,341,000 |
I'm trying to create an achive:
$ cd /tmp
$ tar -czf test1.tar.gz -C ~/Downloads/dir1 -C ~/Documents/dir2 -C ~/dir3/dir4/dir5
... which is supposed not to preserve the full path of the directories in it, hence -C
Result
tar: Cowardly refusing to create an empty archive
Try 'tar --help' or 'tar --usage' for more information.
Why? How to fix it?
|
Your command is conceptually equivalent to this:
tar -czf test1.tar.gz -C ~/Downloads/dir1 "" -C ~/Documents/dir2 "" -C ~/dir3/dir4/dir5 ""
In human terms:
tar -czf test1.tar.gz: "Make a gzipped tar archive named test1.tar.gz to the current working directory, as follows:..."
-C ~/Downloads/dir1 "": "...First, switch to directory ~/Downloads/dir1, and archive nothing from there..."
-C ~/Documents/dir2 "": "...then, switch to directory ~/Documents/dir2 and archive nothing from there..."
-C ~/dir3/dir4/dir5 "": "...and finally, switch to ~/dir3/dir4/dir5 and also archive nothing from there."
You probably want something like this instead:
tar -czf test1.tar.gz -C ~/Downloads/dir1 . -C ~/Documents/dir2 . -C ~/dir3/dir4/dir5 .
This replaces the "archive nothing" part with "archive the contents of the current directory", which has just been switched to using the -C option.
Note that when you extract the resulting archive, the archived contents of dir1, dir2 and dir5 will all be extracted to whatever is the current directory when extracting. If that's what you want, it's well and good, but it is a slightly unusual use case. (Such an archive is known as a tar bomb and generally disliked when encountered unexpectedly, as it won't neatly extract into a single directory created by the extraction process.)
For the sake of explanation, here is a slightly different command:
tar -czf test2.tar.gz -C ~/Downloads dir1 -C ~/Documents dir2 -C ~/dir3/dir4 dir5
When test2.tar.gz is extracted, it will result in:
./dir1
./dir2
./dir5
... and their contents.
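To make the behaviour concrete, here is a self-contained run with throw-away directories (names are hypothetical), including a "contained" extraction that avoids the tar-bomb problem:

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/dir1" "$tmp/dir2" "$tmp/out"
echo a > "$tmp/dir1/a.txt"
echo b > "$tmp/dir2/b.txt"
# Archive the CONTENTS of each directory (the "." after each -C):
tar -czf "$tmp/test.tar.gz" -C "$tmp/dir1" . -C "$tmp/dir2" .
# Contain the "tar bomb" by extracting into a dedicated directory:
tar -xzf "$tmp/test.tar.gz" -C "$tmp/out"
ls "$tmp/out"
```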
| Creating a tar.gz archive of multiple directories of different locations -- "tar: Cowardly refusing to create an empty archive" |
1,587,144,341,000 |
I want to find all the files that have the words
"Who", "What", "Why", "How" and "When": all of the words, in any order, case-insensitive.
I tried:
grep -rl --include='*.adoc' "Who" . | xargs grep -l "What" | xargs grep -l "Why" | xargs grep -l "How" | xargs grep -l "When"
It is giving error like:
grep: Walkthrough/datatable/extras/Scroller/media/data/2500.adoc: No such file or directory
|
The problem you are having is that some of your filenames contain spaces. xargs will split that into multiple "filenames".
Add the -0 option to the xargs to make them split on NULs instead of whitespace, and the --null or -Z option to the grep command line to make it use NULs instead of newlines. (but omit the --null on the final grep if you want to read the output...). So putting it all together:
grep -r -iwlZ --include='*.adoc' 'who' . |
xargs -r0 grep -iwlZ 'what' |
xargs -r0 grep -iwlZ 'why' |
xargs -r0 grep -iwlZ 'how' |
xargs -r0 grep -iwl 'when'
Alternatively, eliminate the whitespace and other shell special characters from your filenames.
Otherwise, your solution is basically correct, though the answer by @James is correct that you need the -i option for case insensitive.
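A minimal reproduction showing that the NUL-delimited pipeline survives file names with spaces (GNU grep/xargs assumed; the file names are made up):

```shell
d=$(mktemp -d)
printf 'Who What Why How When\n' > "$d/all words.adoc"
printf 'Who What\n'              > "$d/two words.adoc"
# Only the file containing all five words should survive the chain:
found=$(
  grep -rilwZ --include='*.adoc' 'who' "$d" |
    xargs -r0 grep -ilwZ 'what' |
    xargs -r0 grep -ilwZ 'why' |
    xargs -r0 grep -ilwZ 'how' |
    xargs -r0 grep -ilw  'when'
)
echo "$found"
```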
| Recursively search for txt files that has all the search strings |
1,587,144,341,000 |
I have a bunch of files and I want to rename them using regular expressions. I want to do this in a Linux Command line.
I have the files
report1.txt
report2.txt
report3.txt
How can I rename these so that they state
myreport1.txt
myreport2.txt
myreport3.txt
Please let me know, I've searched the internet and they have many complex examples.
|
To answer OP's question in comment: "Is there a way to do it without using a for loop?"
I usually use a dedicated utility for this: mmv
mmv 'report?.txt' 'myreport#1.txt'
(there are several other similar tools around, like rename)
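For reference, the plain shell loop the question was trying to avoid is short enough to show as the baseline (throw-away directory):

```shell
d=$(mktemp -d); cd "$d" || exit 1
touch report1.txt report2.txt report3.txt
# Prefix each matching file with "my":
for f in report*.txt; do
    mv "$f" "my$f"
done
ls
```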
| How do I rename these files at once? [duplicate] |
1,587,144,341,000 |
Have following files.
test.tar.gz.part00
test.tar.gz.part01
test.tar.gz.part02
…
test.tar.gz.part99
Need to replace .gz by .lz4… ideally without using dependencies (such as rename) that do not ship with Debian.
Thanks for helping out!
|
You can use rename for that. Note that on Debian, rename is the Perl-based File::Rename script, so it takes a substitution expression:
rename 's/\.gz\./.lz4./' ./*part*
(On systems that ship the util-linux rename instead, the equivalent would be rename .gz. .lz4. ./*part*.)
Without rename:
for file in ./*part*; do mv "$file" "${file//gz/lz4}"; done
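The ${var//pattern/replacement} substitution is a bash/ksh/zsh feature; a strictly POSIX sh variant can be built from prefix/suffix stripping. A self-contained sketch (throw-away directory):

```shell
d=$(mktemp -d); cd "$d" || exit 1
touch test.tar.gz.part00 test.tar.gz.part01
for file in ./*part*; do
    # POSIX parameter expansion: drop everything from ".gz." on for the
    # prefix, and everything up to ".gz." for the suffix, then rebuild.
    mv "$file" "${file%.gz.*}.lz4.${file##*.gz.}"
done
ls
```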
| How to rename subset of filename of all files in directory? |
1,587,144,341,000 |
I'm checking whether a file is present with the find command, like the following:
find ${pwd} | grep 'Test.*zip'
This command returns output with relative path like -
./ReleaseKit/Installable/Test-5.2.0.11.zip
Is there a way to get absolute path of found file using find command?
|
The problem with your
find ${pwd} | grep 'Test.*zip'
is that you don't have a variable called pwd. So this is the same as find | grep 'Test.*zip'. You want to give the current directory as the starting point.
Either use $(pwd) or $PWD instead of ${pwd}. $(pwd) runs the pwd program whilst $PWD uses the variable that bash and other POSIX shells maintain to give the current directory. Not all shells are POSIX. You should also quote the variable or the command substitution to defend against unusual characters in the directory path, so you end up with
find "$PWD" | grep 'Test.*zip'
| How to get absolute path of found file using 'find' command in Linux? |
1,587,144,341,000 |
I want to join these 2 files : File 1 (1 million lines) and File 2 (10,000 lines) in new File 3 (should be 1 million lines) using an awk command
File 1 :
471808241 29164840 1 10001 156197396
471722917 21067410 1 31001 135961856
471941441 20774160 1 7001 180995072
471568655 29042630 1 15001 157502996
471524711 20716360 1 4001 180226817
471873918 29583520 1 2001 128567298
471568650 29042631 1 15002 157502910
File 2
610146 156197396
531101 135961856
704011 180226817
502216 128567298
707012 180995072
615246 157502996
685221 157502910
Desired output :
471808241 29164840 1 10001 156197396 610146
471722917 21067410 1 31001 135961856 531101
471941441 20774160 1 7001 180995072 707012
471568655 29042630 1 15001 157502996 615246
471524711 20716360 1 4001 180226817 704011
471873918 29583520 1 2001 128567298 502216
471568650 29042631 1 15002 157502910 685221
|
I want to join these 2 files
So use the join command after sorting the files into key order:
sort -b -k 5 file1 > sorted-file1
sort -b -k 2 file2 > sorted-file2
join -1 5 -2 2 -o 1.1,1.2,1.3,1.4,2.2,2.1 sorted-file1 sorted-file2
Further reading
"Utilities: join". Shell Command Language. Single UNIX Specification. Issue 7.
IEEE 1003.1. 2016. The Open Group.
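Since the question asked for awk: the classic two-file awk idiom does the same lookup without any sorting, and preserves File 1's line order. A sketch using the first two records from the question:

```shell
d=$(mktemp -d); cd "$d" || exit 1
printf '%s\n' '471808241 29164840 1 10001 156197396' \
              '471722917 21067410 1 31001 135961856' > file1
printf '%s\n' '610146 156197396' '531101 135961856' > file2
# First pass (NR==FNR) loads file2 into an array keyed on its 2nd field;
# second pass appends the matching value to each line of file1.
awk 'NR==FNR { id[$2] = $1; next } { print $0, id[$5] }' file2 file1
```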
| Merge files using a common column value [closed] |
1,587,144,341,000 |
I am following a tutorial
yes > /dev/null & top
Output
I do not understand what this line is doing.
Top only
It seems that I have one process less.
Why?
|
The significant part is the first line in the list of processes show by top. When you run
yes > /dev/null & top
you end up with a yes process using all the CPU it can get. The command above is equivalent to
yes > /dev/null &
top
because & not only puts a process in the background, it also acts as a command separator. So you’re running yes in the background, redirected to /dev/null, and top.
yes with no arguments outputs y followed by a newline continuously; since it’s redirected to /dev/null it can do so as fast as possible.
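Note that the backgrounded yes keeps burning CPU after top exits; capturing its PID lets you stop it cleanly. A small sketch:

```shell
yes > /dev/null &
pid=$!
echo "yes is running in the background as PID $pid"
kill "$pid"    # stop it once you are done observing it in top
```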
| Redirecting with yes > /dev/null & top |
1,587,144,341,000 |
The command line utilities in Linux accept for example:
tail file.log -fn0
But the utils in macOS don't, the options must be first arg:
tail -fn0 file.log
Is it possible to change this?
|
But the utils in macOS don't, the options must be first arg:
This is very likely because macOS is a BSD-derivative which means that the common utilities (like grep, tail...) are the of BSD-variant, and not the GNU versions which are used on Linux
This means that there are some slight (and sometimes huge) variations in functionality, usage...
Is it possible to change this?
Yes, you can use Homebrew to install the GNU versions of the tools
Homebrew will not replace the default utilities (by default), but put them in your PATH and for most you can access them by prepending g (for GNU) to the tool name, in your case that would be gtail
For the relevant info and commands, see this Apple.SE answer
| Command line utilities in macOS only accept options at the first args |
1,587,144,341,000 |
I have a centos and an ubuntu VM in my PC, and whenever I run ls command in centos VM it prints out text with light font weight. But in ubuntu, the same command prints out text with bolder font. Why?
|
As others have noted, the colorization is controlled by the --color flag. Many Linux distributions put an alias for ls in ~/.bashrc or another shell startup file, and the alias includes this flag.
The colors themselves are controlled by the $LS_COLORS environment variable. This is usually also set up in one of your shell startup scripts by calling the dircolors command which reads a configuration file telling it how to construct $LS_COLORS. On CentOS and other RHEL derivatives, the default configuration file is /etc/DIR_COLORS. It just happens that CentOS and Ubuntu chose different color schemes for ls output.
| What is the difference between centos ls and ubuntu ls command? |
1,587,144,341,000 |
I used the * command and I saw this error:
bash: boot: command not found
Why did this error occur?
|
The first word that you type on the command line will be interpreted by your shell as the name of a command.
The shell will expand the * filename globbing character to all visible names in the current directory. The names will be sorted in lexicographical order.
You are in a directory in which the name boot is the name that sorts first. This means that typing just * in that particular directory will be the same as trying to run a command called boot with all other names from that directory as command line arguments.
On your system, there is no command called boot in your current $PATH, so the shell complains that it can't find it.
That's what happens.
Example on my system (running the zsh shell rather than bash, but it works the same in this respect):
% cd /
% ls
altroot bsd bsd.sp home sbin usr
bin bsd.booted dev mnt sys var
boot bsd.rd etc root tmp
% *
zsh: command not found: altroot
When I use just *, the shell tries to run a command called altroot, because the name altroot (which happens to be the name of a directory) sorts first in the expansion of the * filename globbing pattern in the directory I was in.
Selecting commands to run by means of filename globbing patterns is error prone and dangerous, and therefore best avoided.
As a somewhat related anecdote, I believe I've seen users create a file called -i in directories with important files. So they may have something like
$ ls -l
total 0
-rw-r--r-- 1 myself wheel 0 Apr 16 18:49 -i
-rw-r--r-- 1 myself wheel 0 Apr 16 18:49 important-file-1.txt
-rw-r--r-- 1 myself wheel 0 Apr 16 18:49 important-file-2.txt
-rw-r--r-- 1 myself wheel 0 Apr 16 18:49 important-file-3.txt
Notice how -i sorts first? This means that when they do rm -rf * (by mistake) in this directory, this happens:
$ rm -rf *
remove important-file-1.txt? n
remove important-file-2.txt? n
remove important-file-3.txt? n
That is, the -i name is inserted as an option to rm -rf, which makes rm ask for confirmation before removing any files. They then get a chance to abort the operation.
This is a fun little trick, but not at all the right solution to the issue of accidentally deleting files. The correct solution to that issue is to make regular backups.
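To see which command a bare * would select, without actually executing anything, you can let the same glob fill the positional parameters (throw-away file names; which name "runs" depends purely on sort order):

```shell
d=$(mktemp -d); cd "$d" || exit 1
touch bin boot etc
set -- *    # the same expansion a bare `*` command line would get
# The first expanded name is what the shell would try to execute:
echo "would try to run: $1 (with arguments: $2 $3)"
```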
| Why is the output of the “*” command boot? |
1,587,144,341,000 |
I'm trying to parse the Sublime Text 3 session file: Session.sublime_session. It consists of what looks like JSON-formatted data.
Using:
cat Session.sublime_session | grep -A13 "\"file\":"
I can easily get a list (repeated for each file) like this:
"file": "/F/myHW/check_usb_switch.sh",
"semi_transient": false,
"settings":
{
"buffer_size": 873,
"regions":
{
},
"selection":
[
[
872,
872
]
--
How can I get a list like this:
/F/myHW/check_usb_switch.sh:872
...
(Are there other or more suitable tools for this? E.g. jq, etc.?)
Requested info:
# Start of file:
{
"folder_history":
[
],
"last_version": 3176,
"last_window_id": 9,
"log_indexing": false,
"settings":
{
"new_window_height": 912.0,
"new_window_settings":
{
"auto_complete":
{
"selected_items":
[
[
"input",
"input_stream"
],
...
},
"windows":
[
{
"auto_complete":
{
"selected_items":
[
[
"file",
"fileName"
...
[
"json",
"json_response"
]
]
},
"buffers":
[
{
"file": "/F/xxxx.sh",
"settings":
{
"buffer_size": 7040,
"encoding": "UTF-8",
"line_ending": "Unix"
}
},
{
"file": "/C/xxxx.txt",
Request-2:
{
"buffer": 1,
"file": "/C/Users/xxxx/Desktop/tmp/xxxx.txt",
"semi_transient": false,
"settings":
{
"buffer_size": 6529,
"regions":
{
},
"selection":
[
[
3569,
3569
]
],
"settings":
{
"syntax": "Packages/Text/Plain text.tmLanguage",
"word_wrap": false
},
"translation.x": 0.0,
"translation.y": 0.0,
"zoom_level": 1.0
},
"stack_index": 46,
"type": "text"
},
|
jq -r '.windows[]|.buffers[]|.file' Session.sublime_session
This would use the JSON parser jq to parse out all the file nodes from each buffer of each window recorded in the Sublime Text 3 sessions file.
To get the file info together with the first integer of the selection bit, you will have to look elsewhere in the data:
jq -r '.windows[]|.groups[].sheets[]| "\(.file):\(.settings.selection[0][0])"' Session.sublime_session
Note that the file field is taken from a totally different place in the document compared to the first command.
On a small example file I played around with, this may generate
/Users/kk/hello:18
as output.
(tested on a session file on macOS where I worked on a file called hello in my home directory)
Unfortunately, I have not found any documentation on the schema used for these JSON files, so the parsing here is totally ad-hoc.
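To try the command without a real Sublime session, here is a minimal stand-in document with the same windows/groups/sheets shape (the field values are made up):

```shell
d=$(mktemp -d); cd "$d" || exit 1
cat > session.json <<'EOF'
{"windows":[{"groups":[{"sheets":[{"file":"/Users/kk/hello",
 "settings":{"selection":[[18,18]]}}]}]}]}
EOF
# Prints "file:first-selection-offset" for every sheet:
jq -r '.windows[]|.groups[].sheets[]| "\(.file):\(.settings.selection[0][0])"' session.json
```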
| How to get just two items of a json like file |
1,587,144,341,000 |
I have the file like this. ex.(test.txt)
$$BATCHCTRL=TEST-012017
$$STATE=CA AZ
$$FROM_DATE=01/10/2017
$$TO_DATE=01/30/2017
All I need to do is replace this $$STATE=CA AZ with first TWO bytes of this value.
i.e(CA).
The output file should be
$$BATCHCTRL=TEST-012017
$$STATE=CA
$$FROM_DATE=01/10/2017
$$TO_DATE=01/30/2017
|
I am assuming that the two characters after STATE= are capital letters. If not, replace [A-Z] with [A-Za-z].
You can use this simple command:
sed -Ei 's/^\$\$STATE=([A-Z]{2}) ([A-Z]{2})/\$\$STATE=\1/g' test.txt
It will match lines starting with $$, such as $$STATE=AB CD, and replace them with $$STATE= followed by the first subexpression \1.
Edit: If you want the value wrapped in single quotes, use:
sed -Ei 's/^\$\$STATE=([A-Z]{2}) ([A-Z]{2})/\$\$STATE='"'"'\1'"'"'/g' test.txt
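A quick self-contained check of the pattern (without -i, so the result just goes to stdout and the input file is untouched):

```shell
d=$(mktemp -d); cd "$d" || exit 1
cat > test.txt <<'EOF'
$$BATCHCTRL=TEST-012017
$$STATE=CA AZ
$$FROM_DATE=01/10/2017
EOF
# Keep only the first two-letter code after $$STATE=:
sed -E 's/^\$\$STATE=([A-Z]{2}) ([A-Z]{2})/\$\$STATE=\1/' test.txt
```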
| Find String and Replace with SUBSTRING |
1,587,144,341,000 |
Say I'll start writing
sudo cp /etc/var/more/evenMORE/fileABC
and as the target of the copying, I'd like to define the same folder, except just a different filename (whether it is a new name like "randomNAME" or an iteration of the file "fileABC-backup01" shall be irrelevant). How can I reuse the input file so that I do not have to write it out again? Is this even possible?
|
sudo cp /etc/var/more/evenMORE/file{ABC,DEF} will copy fileABC as fileDEF in the same folder.
In general cp /xyz/{file1,file2} will copy /xyz/file1 as /xyz/file2. Basically, put anything that is common outside the {} and separate the source and destination names with a , in the {}.
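A self-contained demonstration (directory names are hypothetical; brace expansion needs bash/zsh/ksh, so it is invoked through bash explicitly here):

```shell
d=$(mktemp -d)
mkdir -p "$d/evenMORE"
echo data > "$d/evenMORE/fileABC"
# Brace expansion happens before the command runs, so
# file{ABC,ABC-backup01} becomes "fileABC fileABC-backup01":
( cd "$d/evenMORE" && bash -c 'cp file{ABC,ABC-backup01}' )
ls "$d/evenMORE"
```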
| How can I address the path/file I just specified again? [duplicate] |
1,587,144,341,000 |
I have one production server. On it I have a directory for a particular object, which keeps piling up files gathered from different network nodes. It has files in subdirectories going back to May 2021. It generally creates hourly subdirectories for each day and pushes the files into those subdirectories.
Subdirectory structure -
I have used the following command find . -type f -mtime +2 | xargs ls -ltr to list out the files which are 2 days older to get them deleted.
but when I ran the command to check I found that they are sorted in unexpected way.
As you can see above, ideally it should sort the 10-03 files before the 11-03 ones, but it's doing exactly the opposite. Another thing is that it also lists the current day's files (18-03).
So can someone shade some light on it?
Please note that the directory size is 11G, and files keep piling up every minute, so does that have any effect on this?
Distro - Red Hat Enterprise Linux Server release 7.6 (Maipo)
|
find . -ctime +2 reports the files whose last status change time is 3 days old or older (where the difference between the time find was started and the ctime of the file, rounded down to an integer number of days, is strictly greater than 2).
The ctime, which you can print with ls -lc, is updated any time anything about the file is modified (except when it's just its access time): when its entry in any directory is renamed, when it's (un)linked (from)to a new directory, when its contents or permissions or other metadata change...
The timestamp that ls -l shows by default and that ls -t sorts on by default is the last modification time. That one is updated only when the contents is modified (though can also be set arbitrarily like with the touch command). That can be seen as the creation time of the contents of the file.
Beside those and the last access time, on some systems, there's also a birth time aka creation time, though it's generally less useful than the last modification time. With recent versions of GNU ls (not on your RHEL7 system), it can be displayed with ls -l --time=birth or ls -l --time=creation.
To find regular files that were last modified over 2 days ago, and pass them to ls so it lists them from oldest to newest, you'd do:
find . -type f -mtime +1 -exec ls -lrtd {} +
Don't use xargs which cannot process the output of find (other than with find -print0 | xargs -r0...).
Like xargs though, find -exec cmd {} + may run more than one instance of cmd, which would result in several independently sorted batches of files on output.
To avoid that, you could use zsh and do:
autoload zargs
zargs -- ./**/*(D.m+1Om) -- ls -Uld
Or with GNU xargs to do the splitting:
print -rNC1 -- **/*(ND.m+1Om) | xargs -r0 ls -Uld --
(-U being a GNU ls extension to not sort (unnecessary as zsh already sorted them). You could replace with -rt with other ls implementations).
You could always try without splitting at all, but might run into the limit on the length or arguments+environment that can be passed to a command and see a argument list too long error:
ls -lrtd -- **/*(D.m+1oN)
If you can't install zsh, with GNU implementations of the find, sort, sed, xargs and ls utilities (as found on RHEL7) you could do something like:
find . -type f -mtime +1 -printf '%T@:%p\0' |
LC_ALL=C sort -zn |
LC_ALL=C sed -z 's/^[^:]*://' |
xargs -r0 ls -Uld
Where we sort the files by mtime by hand by letting find print it as a number with %T@, use sort -n to sort it, sed to remove it and xargs to pass the list to as many ls invocations as needed each one being told not to bother sorting with -U. All done with NUL-delimited records so it can work with arbitrary file paths.
In any case, to remove those files, with GNU find, you'd just need to use its -delete predicate:
find . -type f -mtime +1 -delete
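To see the -mtime +N cutoff concretely, back-date a file with GNU touch -d (assumed available) and run the same test:

```shell
d=$(mktemp -d)
touch -d '3 days ago' "$d/old"    # GNU touch: back-date the mtime
touch "$d/new"
# +2 means "strictly more than 2 whole days old", so only "old" qualifies:
find "$d" -type f -mtime +2
```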
| Strange behavior "find command" while using +mtime , +mmin options |
1,587,144,341,000 |
I am taking a course in which I suddenly saw this. I understood it up to the pipe, but the options used after the pipe for the cut command are a bit confusing.
|
With -c, cut selects only specified characters or ranges of characters separated by commas:
N N'th byte, character or field, counted from 1
N- from N'th byte, character or field, to end of line
N-M from N'th to M'th (included) byte, character or field
-M from first to M'th (included) byte, character or field
so cut -c1-11,50- will print characters 1 to 11 and 50 to end of line from every line printed by ls -l.
So you will get the file permissions (the first 11 characters), and the rest depends on the length of your username, the size of the files, etc., but I assume the idea might be to print the name of the file (using cut -f might be better if that was the goal, but generally parsing ls is not a good idea).
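A tiny demonstration of -c on a known string, so the selected positions are easy to count:

```shell
# Select characters 1-3 plus character 12 to end of line:
printf '%s\n' 'abcdefghijklmnop' | cut -c1-3,12-
# -> abclmnop
```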
| ls -l | cut -c1-11,50- Can Someone Explain the 2nd part after the pipe? |
1,587,144,341,000 |
Can someone explain to me why this doesn't partial match macaddress $mac?
#!/bin/sh
mac="f0:79:60:0f:d3:0e"
if [[ "$($mac)" = 'f0:79:60*' ]]
then
echo "true"
else
echo "false"
fi
As a note, I need to call "$($mac)" inside the if statement otherwise it will not substitute the variable.
|
First, $($mac) has to be fixed to $mac, as Jea said before.
Also, just use a case statement; double brackets are a ksh extension (later adopted by bash and zsh), but the shebang says /bin/sh, not bash.
Here are two solutions:
Replace 1st line with #!/bin/bash, #!/usr/bin/bash, or whatever depending on your environment to clarify you are using bashism, or,
Just use case statement as it's the most portable alternative, like this:
#!/bin/sh
mac="f0:79:60:0f:d3:0e"
case "$mac" in ('f0:79:60'*)
echo "true"
;;(*)
echo "false"
;;esac
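For completeness, the [[ ]] test from the question does work once the script runs under bash and the pattern is left unquoted (quoting the pattern makes the comparison literal). A sketch, invoking bash explicitly so it works regardless of the calling shell:

```shell
mac="f0:79:60:0f:d3:0e"
# $0 receives the MAC; the unquoted right-hand side is a glob pattern:
bash -c '[[ $0 == f0:79:60* ]] && echo true || echo false' "$mac"
```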
| partial match string in /bin/sh |
1,587,144,341,000 |
I computed a clock on a file exerciceIV_2.C with the gedit command of unix,
And, by a night of september, I decided to use sleep 1000 in order to stop the clock.
I wasn't able to use my device for 1000 seconds!
hopefully I called a friend who told me to use CTRL+C.
How should we do in order to still stop the clock while having control?
|
while syntax is
while ( cond ) expr ;
you cannot add a ! before (
On a side note, this question would have been a better fit for Stack Overflow.
| how to still have control with sleep? [closed] |
1,587,144,341,000 |
Is there a way to know the meaning of the output of a command?
Example given: If I type ls -l, I get this ouput:
[root@localhost junk]# ls -l
total 8
-rw-r--r-- 1 root root 1862 2012-08-25 16:20 a
-rw-r--r-- 1 root root 0 2012-08-25 15:41 a.c
-rw-r--r-- 1 root root 1907 2012-08-25 16:18 b
Now I want to know here what all these fields (e.g. -rw-r--r--, 1862) stand for.
Is there a way to do that using man?
|
You can use the info command to get more details about any command in coreutils.
Here is a portion of info ls explaining the -l option:
`-l'
`--format=long'
`--format=verbose'
In addition to the name of each file, print the file type, file
mode bits, number of hard links, owner name, group name, size, and
timestamp (*note Formatting file timestamps::), normally the
modification time. Print question marks for information that
cannot be determined.
........
| man command: is there a way to know the meaning of the output of a command > |
1,587,144,341,000 |
I want someone on Windows to log in to my server using SSH, so he can edit files and install things. Is there a step-by-step guide for how to do it? I need to:
Create his user account.
Configure it, giving him access to a single folder and nothing else (how?)
Generate a public key for him on Windows (how?)
Add his public key to authorized_keys correctly.
Tell him the command he needs to use to actually log in from Window's terminal.
I pretty much only know how to create the account. How to accomplish the latter steps?
|
(2) You may configure sshd to chroot() for this user. See man 5 sshd_config, search for ChrootDirectory and ForceCommand.
(3) You must create a key pair. The public key is used on the server, the private key is used by the client. See ssh-keygen. You may need ssh-keygen -e ... for converting the key so that it is usable by putty but maybe putty can do this conversion itself meanwhile.
(4) This is basically adding a line to a text file:
cat public_key_file >>/path/to/authorized_keys
(5) Your user will have to download the Windows SSH program putty and configure it to use the private key you supplied.
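For step (2), here is a sketch of the sshd_config stanza; the user name and path are hypothetical, and note that the chroot directory itself must be owned by root and not writable by group or others:

```
Match User winuser
    ChrootDirectory /srv/winuser
    ForceCommand internal-sftp
    AllowTcpForwarding no
```

ForceCommand internal-sftp limits the account to file transfer only; drop that line if the user genuinely needs a shell inside the chroot.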
| How to enable someone to SSH to my server from Windows? [closed] |
1,587,144,341,000 |
I would like to install the pdf viewer Okular on a SUSE Linux Enterprise Server 11. I am familiar with apt under Ubuntu, but on SUSE, I only found rpm to install .rpm files. I am wondering how Okular (or essentially any other program) can be installed? I know Okular is KDE, but I saw it here so I hope it is possible to install it (I hope this is not a ridiculous question).
|
In SLES zypper is the equivalent to apt in Debian and yum on RHEL. You can install Okular with the following:
zypper in okular
Another option is to use the YaST interface.
| How to install Okular under SUSE (on a server)? |
1,587,144,341,000 |
What is the syntax to delimit the arguments of a C program.
For example, if I type :
./myprogram 1 2 3 | grep result
The | grep result will be interpreted as arguments (and passed as argv). So how to terminate the arguments after 3 ?
|
I don't think that's true. The shell is the one interpreting the command line arguments and passing them to the corresponding commands as it (the shell) is parsing them.
So your C program, when it finally gets executed, will only see the arguments 1, 2, and 3. The pipe and everything after it is the responsibility of the shell, and will not get passed in as arguments to the C program.
Here's an example, using a bash shell and bash shell script.
Example (shell script)
A sample script, test.bash:
#!/bin/bash
file=somefile
[ -f $file ] && rm $file
for var in "$@"
do
echo "$var" >> $file
done
cat $file
Now run the script, results are store in a file, somefile:
$ ./test.bash 1 2 3 '4 5'
1
2
3
4 5
Run it with additional command line arguments:
$ ./test.bash 1 2 3 '4 5' | echo hi
hi
$ cat somefile
1
2
3
4 5
In both cases the script, test.bash only saw the arguments leading up to the pipe (|). The Bash shell was responsible for parsing the commands, and so it never presented anything after, including the pipe (|).
Example (c program)
In case there's any questions about using a shell script, here's a c program that takes the command line arguments and you can see the same behavior with it as well.
The c program is called testc.c:
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char *argv[])
{
printf("Program name: %s\n", argv[0]);
while ((argc > 1))
{
printf("%s\n",&argv[1][0]);
++argv;
--argc;
}
return (0);
}
Compile it like so:
gcc testc.c -o testc
Use it like so:
$ ./testc 1 2 3 '4 5'
Program name: ./testc
1
2
3
4 5
$ ./testc 1 2 3 '4 5' | echo hi
hi
You can see in the above that only the first 4 arguments are presented to my c program, testc.
| Delimiters of program arguments? |
1,587,144,341,000 |
run-parts executes all programs and scripts in a directory -- but when do I need it?
What are some common uses for it?
|
If you manage a server farm, or otherwise a number of similar systems, and e.g. have an application that requires a number of environment settings on each system, you could use run-parts to create a drop-in directory to which you could add and remove script snippets as required, so that each file would hold a group of settings associated for a particular servlet or other application component or configuration goal.
Your application start-up script (or equivalent) would then use run-parts to read all the settings from the drop-in directory before executing the actual application.
Managing the settings in the form of a drop-in directory can make it easier to synchronize and update the settings across a number of systems, especially when using tools such as Ansible, Salt or Puppet.
Even when managing just a single instance of a complex application, grouping the individual settings into files by purpose can make it easier to keep track what is going on, especially if the names of the script snippets are chosen descriptively.
When setting up a drop-in directory, it is a well-known practice to prefix the names of the scripts with a two- or three-digit number, to make the desired execution order explicit: run-parts will run scripts or programs in the directory in the "C" locale sort order (i.e. in the same order as LC_ALL=C /bin/ls will list them).
By using run-parts instead of rolling your own loop to execute all scripts in a specified directory, you can easily apply some common drop-in file naming rules (or use your own filename validation regex) to avoid processing editor backup files or other inappropriate files that may have been accidentally left in the drop-in directory.
If your distribution contains directories like /etc/profile.d/ or /etc/X11/Xsession.d/, their contents are most likely executed by using run-parts. The contents of system-wide cron job directories /etc/cron.(hourly|daily|weekly|monthly)/ are normally also executed using run-parts, either directly from /etc/crontab or indirectly via anacron's /etc/anacrontab.
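A minimal hands-on sketch (run-parts may not be installed everywhere, hence the guard; the file names follow the numeric-prefix convention described above):

```shell
command -v run-parts >/dev/null 2>&1 || exit 0   # not installed everywhere
d=$(mktemp -d)
printf '#!/bin/sh\necho first\n'  > "$d/10-first"
printf '#!/bin/sh\necho second\n' > "$d/20-second"
chmod +x "$d"/*
# Scripts run in C-locale sort order, so 10-first runs before 20-second:
run-parts "$d"
```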
| What are some common uses for `run-parts`? [closed] |
1,587,144,341,000 |
I've recently upgraded to Fedora 38 and when I try to sudo dnf update, I get the following error:
Problem: package libheif-freeworld-1.15.2-1.fc38.x86_64 requires libheif(x86-64) = 1.15.2, but none of the providers can be installed
- cannot install both libheif-1.16.1-1.fc38.x86_64 and libheif-1.15.2-1.fc38.x86_64
- cannot install the best update candidate for package libheif-freeworld-1.15.2-1.fc38.x86_64
- cannot install the best update candidate for package libheif-1.15.2-1.fc38.x86_64
==========================================================================================================================================
Package Architecture Version Repository Size
==========================================================================================================================================
Skipping packages with conflicts:
(add '--best --allowerasing' to command line to force their upgrade):
libheif x86_64 1.16.1-1.fc38 updates 288 k
Transaction Summary
==========================================================================================================================================
Skip 1 Package
Nothing to do.
Complete!
I've tried dnf install libheif from Fedora_docs but that didn't work.
|
The libheif-freeworld package has simply not yet been pushed to rpmfusion-free-updates repository at the time of writing. It's still in rpmfusion-free-updates-testing and enabling that repository will let dnf resolve the dependencies and complete the transaction:
sudo dnf --enablerepo=rpmfusion-free-updates-testing update libheif
In fact, the RPM Fusion FAQ section has the above answer for exactly this kind of issue.
| Problem with sudo dnf update on Fedora 38 (recently upgraded) |
1,587,144,341,000 |
I constantly see IFS="value" command being referred to as a means of changing the IFS (Internal Field Separator) temporarily, only for the duration of the command provided, yet there are many situations where what looks to me to be similar usage fails.
I am sure these are errors on my part, but in trying to figure out what my misunderstanding was/is, I was completely unable to find any official documentation even speaking of the existence of this feature.
Perhaps this is a true case of simply not knowing which term to search for, as perhaps there is a name for this feature that I am unaware of and was unable to find through quite a reasonable amount of searching.
I've tried searching through the BASH manual, searching for relevant sections of the posix spec, asking Bing chat and more. The only results I seem to be able to find are people referring to the features existence but never any official documentation. Outside of knowing that it must exist because it clearly works for myself and others, it feels as if this feature, or rather official documentation for this feature simply does not exist.
My question therefore to StackExchange, is what in the universe is this feature called and where is it officially documented?
I am looking for any official documentation at this point whether that be for BASH, Posix, Linux, Unix, anything.
|
This is documented in the "Environment" section of the bash manual, which says:
The environment for any simple command or function may be augmented temporarily by prefixing it with parameter assignments, as described in Shell Parameters. These assignment statements affect only the environment seen by that command.
That means that the syntax you're asking about isn't specific to IFS; it holds for any variable. E.g., to set the variable HOME for a single command, we can run:
HOME=/tmp/somedir mycommand
This syntax is also documented in the bash(1) man page, which says (in the "SHELL GRAMMAR" section):
A simple command is a sequence of optional variable assignments followed by blank-separated words and redirections, and terminated by a control operator.
And in the "SIMPLE COMMAND EXPANSION" section:
When a simple command is executed, the shell performs the following expansions, assignments, and redirections, from left to right, in the following order.
The words that the parser has marked as variable assignments (those preceding the command name) and redirections are saved for later processing.
The words that are not variable assignments or redirections are expanded. If any words remain after expansion, the first word is taken to be the name of the command and the remaining words are the arguments.
Redirections are performed as described above under REDIRECTION.
The text after the = in each variable assignment undergoes tilde expansion, parameter expansion, command substitution, arithmetic expansion, and quote removal before being assigned to the variable.
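A minimal demonstration (the variable name is chosen arbitrarily): the prefixed assignment is visible only to the one command, and the shell's own value is untouched afterwards:

```shell
GREETING=original

# The assignment prefix applies only to this sh invocation:
GREETING=temporary sh -c 'echo "inside: $GREETING"'
# → inside: temporary

echo "outside: $GREETING"
# → outside: original
```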
| `IFS="value" command` is a popular feature of many shells, but official documentation is hard to find. Where is it documented officially? [duplicate] |
1,587,144,341,000 |
I was looking at Most straightforward way of getting a raw, unparsed HTTPS response to make a GET request to a url over HTTPS, and receive the raw, unparsed response.
the following works:
echo 'GET / HTTP/1.1
Host: google.com
' | openssl s_client -quiet -connect google.com:443 2>/dev/null
However, I want to put the request in a text file and cat it to the command.
So I created raw-http.txt
GET / HTTP/1.0
Host: google.com
Just to be clear, there is a blank line after Host: google.com.
Now when I try:
cat raw-http.txt | openssl s_client -quiet -connect google.com:443 2>/dev/null
it just freezes for a long time and then responds with ^X%.
Why does echo work here but cat does not? What can I do about this?
|
there is a blank line after
There should be two.
$ echo '"'; cat raw.txt; echo '"'
"
GET / HTTP/1.0
Host: google.com
"
$ cat raw.txt | openssl s_client -quiet -connect google.com:443 2>/dev/null
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
<A HREF="https://www.google.com/">here</A>.
</BODY></HTML>
As for your echo example: the blank line inside the quoted string, plus the newline that echo itself appends, supply the required empty line, so this works:
echo 'GET / HTTP/1.1
Host: google.com
' | openssl s_client -quiet -connect google.com:443 2>/dev/null
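To verify a request file before sending it, od -c makes the line endings visible; what matters is that the file ends with an empty line, i.e. two consecutive line terminators (this sketch uses bare LF line endings, which most servers tolerate, although HTTP formally specifies CRLF):

```shell
printf 'GET / HTTP/1.0\nHost: google.com\n\n' > raw-http.txt

# the dump should end with two \n characters in a row
od -c raw-http.txt
```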
| why does `echo` works but `cat` does not when pipeing to `openssl` |
1,587,144,341,000 |
I am following instructions in a tutorial.
Download the CIFAR-10 dataset and generate TFRecord files using the provided script. The script and associated command below will download the CIFAR-10 dataset and then generate a TFRecord for the training, validation, and evaluation datasets.
The command that I am supposed to run is:
python generate_cifar10_tfrecords.py --data-dir=${PWD}/cifar-10-data
For the ${PWD} portion, am I supposed to keep it as ${PWD}, or am I supposed to change it to a working directory of my choosing?
|
The ${PWD} is a variable substitution of the shell and instructs the shell to insert, instead of this string, the value of the "environment variable" PWD which is always the absolute path of the directory you are currently in and therefore contains the same string you get when running
user@host$ pwd
on the command-line.
If your data is (to be) located in a sub-directory cifar-10-data under the directory you are running the command from, you can keep it literally. If not, you should instead replace it with the path to the data you want to apply your script to (or the path you want the data to be downloaded to; you should find the exact meaning of the path in the documentation of the script).
In principle, prepending a path with ${PWD}/ should not be necessary unless the command you are invoking requires absolute pathnames (which, of course, may be true in your case).
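A quick way to see the substitution in action:

```shell
cd /tmp
pwd
# → /tmp

echo "${PWD}/cifar-10-data"
# → /tmp/cifar-10-data
```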
| What is --data-dir=${PWD} in my tutorial instructions? |
1,587,144,341,000 |
In the ~/.bash_profile I have added some echo statements:
echo "omg!!"
echo "$(date) welcome to $HOME"
When I run a command like sudo su - foo -c 'ls'
Output:
omg!!
Thu Oct 3 13:44:41 IST 2019 welcome to /home/foo
1.sh 2.sh 1.out 2.out
Actually, I want the output to be just 1.sh 2.sh 1.out 2.out.
How can I do that without changing .bash_profile and without any tail/head operations?
|
Don't start a login shell:
sudo -u foo ls
or, if you have to go via the root account,
sudo su foo -c ls
The .bash_profile file is sourced for login shells, but not for non-login shells.
| do not source bash_profile or do not echo statements |
1,587,144,341,000 |
I have two big files. The first file contains some intervals, with 85K rows:
head data.intervals
id id_uniq numberA numberB
1 g1 5 20
1 g2 6 29
1 g3 17 35
1 g4 37 46
1 g5 50 63
1 g6 70 95
1 g7 87 93
2 g8 3 15
2 g9 10 33
2 g10 60 77
2 g11 90 132
the second file contains some positions with over 2 million rows:
head data.posiitons
id number
1 4
1 19
1 36
1 49
1 90
2 1
2 20
2 89
2 93
2 120
What I want to do is this: for each value in the "number" column of the positions file, check whether it is equal to or between ANY of the "numberA" and "numberB" pair values in the data.intervals file.
Additionally, the "id" of that "numberA"/"numberB" pair must match the "id" in the positions file. If this is all true, then I want to insert the respective "id_uniq" from data.intervals into a new column of the respective row in the data.posiitons file.
There is another problem here as well: some of these intervals overlap with each other, and a position may fall within the range of two or more intervals. In that case I want the position assigned to each matching interval.
Here is the final output that I want (NA means the position does not fall within the range of any interval):
id number assigned1
1 4 NA
1 19 g1,g2,g3
1 36 NA
1 49 NA
1 90 g6,g7
2 1 NA
2 20 g9
2 89 NA
2 93 g11
2 120 g11
Is there any way to do this with a bash or Perl script?
|
You could do this with Perl using the following method:
$ perl -lane '
my($id, $uniq_id, $lower, $upper) = @F;
$h{$id}{$uniq_id}{MIN} = $lower;
$h{$id}{$uniq_id}{MAX} = $upper;
push @{$order{$id}}, $uniq_id;
}{
while(<STDIN>) {
chomp;
my($id, $number) = split;
print join "\t", $id, $number,
join(",", grep { $h{$id}{$_}{MIN} <= $number and $h{$id}{$_}{MAX} >= $number } @{$order{$id}})
|| "NA";
}
' data.intervals < data.posiitons
Output:
1 4 NA
1 19 g1,g2,g3
1 36 NA
1 49 NA
1 90 g6,g7
2 1 NA
2 20 g9
2 89 NA
2 93 g11
2 120 g11
How it works:
Read the intervals file first and build a hash keyed on the ID and unique ID, holding the range endpoints.
The %order hash stores the order in which the unique IDs were encountered, so the results can be printed in input order; otherwise, Perl's hash ordering is unpredictable.
Next, read the positions file and unpack each record (line) into the $id and $number scalars.
grep selects the unique IDs whose interval contains the number; if none match, "NA" is printed instead.
1,587,144,341,000 |
I'm trying to download a zip file from https://download.sysinternals.com/files/ProcessExplorer.zip (no curl and no wget). I want to do that with netcat, so I used this command:
echo -e "GET
https://download.sysinternals.com/files/ProcessExplorer.zip HTTP/1.1\r\nHost: download.sysinternals.com\r\n\r\n" | nc download.sysinternals.com 80 > q.zip
The file is written to the HDD, but when I try to open it, it's corrupted.
|
So, as far as I know, netcat cannot use HTTPS, but in your code you were connecting to port 80, which means HTTP, not HTTPS.
After the GET you should add the relative address, not the full one.
Something like this will work:
echo -e "GET /files/ProcessExplorer.zip HTTP/1.1\r\nHost: download.sysinternals.com\r\n\r\n" | nc download.sysinternals.com 80 > q.temp
It will not close when the transfer is finished; you'll have to close it manually.
At this point the q.temp file also has the HTTP headers included; you'll have to remove them. You can check the line number where the binary content starts with:
nl q.temp | less
In this case the binary content starts at line 16, so you can remove the header with:
tail -n +16 q.temp > q.zip
And there you have your zip file!
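Instead of locating the boundary by eye with nl, you can also find the first blank (CRLF-terminated) line programmatically and cut there; the mock response below stands in for the real q.temp:

```shell
# a mock response standing in for the captured q.temp
printf 'HTTP/1.1 200 OK\r\nContent-Type: application/zip\r\n\r\nPAYLOAD\n' > q.temp

# the first blank line marks the end of the HTTP headers
hdr=$(awk '/^\r?$/ { print NR; exit }' q.temp)

# keep everything after it
tail -n +"$((hdr + 1))" q.temp > q.zip

cat q.zip
# → PAYLOAD
```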
| Download zip file from remote HTTPS host using netcat |
1,587,144,341,000 |
I am trying to do both integer and string comparison in a statement as follows:
$ TimeHr=$(date +%_H)
$ Time=Night
$ echo $TimeHr
1
$ if ((TimeHr>18 || TimeHr<5 )) && [ Time == "Night" ]; then echo "Night Time"; else echo "Day Time"; fi
Day Time
$ if ((TimeHr>18 || TimeHr<5 )) && [[ Time == "Night" ]]; then echo "Night Time"; else echo "Day Time"; fi
Day Time
But it is not printing the correct if-branch. How should I modify it?
Edit:
I prefer to use (( for numerical comparisons as the code looks more understandable.
|
I would combine the two conditionals into a single one
if [[ ( $TimeHr -gt 18 || $TimeHr -lt 5 ) && $Time == "Night" ]]; then echo "Night Time"; else echo "Day Time"; fi
Night Time
However your original test has a simple error; inside the [ and [[ tests you need to use $variable and not just variable.
So
if ((TimeHr>18 || TimeHr<5 )) && [[ $Time == "Night" ]]; then echo "Night Time"; else echo "Day Time"; fi
| Bash Conditional String and Integer together |
1,538,656,457,000 |
I can redirect output to logger like this:
nohup bin/mytask | logger
But the process hangs, and my cursor doesn't return to the terminal after the command is sent. (I would have to return to the terminal with Ctrl-C, and I don't want to quit the process.)
So I try this command:
nohup bin/mytask & | logger
But I get this error:
bash: syntax error near unexpected token `|'
How can I redirect the output to logger and then return to the terminal?
|
nohup bin/mytask | logger &
& is a command separator, just like ; and |, and you have to background a whole pipeline, not just one command in it.
| Use nohup and return to terminal and pipe to logger |
1,538,656,457,000 |
I have a string that is separated by spaces, and I need to concatenate the 2nd and 3rd "word"/field, making sure that if more than one space separates the words/fields it is handled properly.
The following works fine:
tr -s " " |cut -d ' ' -f2 -f3 | tr " " "-"
I was wondering is there an even more succinct way of doing this?
|
awk will by default use any run of whitespace as the field separator, so your issue can be solved by the single awk invocation
awk '{ printf("%s-%s\n", $2, $3) }'
with the data passed to the standard input of awk.
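For example, feeding it a line with runs of spaces:

```shell
printf 'one   two   three   four\n' | awk '{ printf("%s-%s\n", $2, $3) }'
# → two-three
```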
Doing the same thing in the shell (which also, by default, will split the input on whitespaces in pretty much the same way as awk does):
read -r number first second therest
printf '%s-%s\n' "$first" "$second"
with the data passed to the standard input of read.
If you want to only use tr and cut, I believe you already have the most compact solution for that.
| Concat specific fields separated by space(s) |
1,538,656,457,000 |
I have some output that is formatted like so:
foo: /some/long/path_with_underscores-and-hyphens/andnumbers1.ext exists in filesystem
foo: /another/one.ext exists in filesystem
bar: /another/path/and/file.2 exists in filesystem
I need to remove those files. How can I extract each path and remove the file? I know that awk can capture the path since it's always the second element in the line, but I don't know where to start to grab them all and feed them into a command like rm.
|
With awk and its system() function.
awk '{ system("echo rm "$2) }' list
The above works when there is no whitespace in the file names; if there is, try the one below instead.
awk '{gsub(/^.*: | exists in filesystem$/,""); system("echo rm \"" $0"\"")}' list
Note that neither of the above can handle newlines in file names, if any occur.
Remove the echo to get rid of the dry run.
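An alternative sketch that avoids building shell commands inside awk is to extract the paths with sed and hand them to xargs, keeping echo as the dry run as above. The -d '\n' option is a GNU xargs extension that keeps spaces in file names intact (it still cannot cope with newlines in names):

```shell
# sample input standing in for the file called "list"
printf '%s\n' \
    'foo: /another/one.ext exists in filesystem' \
    'bar: /another/path/and/file.2 exists in filesystem' > list

# strip the leading "name: " and the trailing " exists in filesystem",
# then pass one path per line to xargs
sed 's/^[^:]*: //; s/ exists in filesystem$//' list |
    xargs -d '\n' echo rm
# → rm /another/one.ext /another/path/and/file.2
```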
| How to use grep and/or awk to select multiple pathnames in file and remove those files? |
1,538,656,457,000 |
cat telephone.txt | cat | cat | sed -e "s/t/T/" | tee cible | wc -l
|
When you have a command like that, try each piece to see what it does.
For example, run each of these to see what they do:
cat telephone.txt
cat telephone.txt | cat
cat telephone.txt | cat | cat
cat telephone.txt | cat | cat | sed -e "s/t/T/"
cat telephone.txt | cat | cat | sed -e "s/t/T/" | tee cible
cat telephone.txt | cat | cat | sed -e "s/t/T/" | tee cible | wc -l
Once you do that, you'll see that:
cat telephone.txt <-- reads file
cat <-- reads stdin and prints to stdout (from comments)
cat <-- reads stdin and prints to stdout (from comments)
sed -e "s/t/T/" <-- replaces the first lower case t on each line with an upper case T
tee cible <-- reads stdin and prints to stdout and also writes it to a file called "cible"
wc -l <-- counts the lines of stdout from above
| What does this command do? |
1,538,656,457,000 |
I need to delete a folder, its subfolders, and its files, if it exists. I am trying to do the following:
if [ ! -d folder ]; then rm -rf folder; fi
However, it doesn't work. How can I accomplish this?
|
The if [ ! -d folder ] part is wrong: it is false whenever the directory exists, empty or not. The exclamation mark is the logical NOT operator: you're checking that the directory does not exist before you delete it.
Remove that exclamation mark.
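With the exclamation mark removed, the check reads "if the directory exists, remove it"; and since rm -f already ignores a missing target, the test is in fact optional:

```shell
# set up a directory with contents to delete
mkdir -p folder/subfolder
touch folder/subfolder/file

if [ -d folder ]; then rm -rf folder; fi   # note: no "!"

[ -d folder ] || echo "gone"
# → gone

rm -rf folder   # equivalent shorthand: a missing folder is a no-op
```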
| Delete folder if it exists [closed] |
1,538,656,457,000 |
Only letters count as characters of a word; any other character is a delimiter.
I want to exchange the first word with the third word.
sed -E 's/([A-Za-z]+) [^A-Za-z] ([A-Za-z]+) [^A-Za-z] ([A-Za-z]+)/\3 \2 \1/' filename
I wrote this, but it doesn't work correctly.
Example:
I 4want5to%change
This string I want to change to:
to 4want5I%change
Any idea?
|
Using character class [[:alpha:]] to match uppercase and lowercase and the negation [^[:alpha:]] to match all others:
sed -r 's/^([^[:alpha:]]*)([[:alpha:]]+)([^[:alpha:]]+[[:alpha:]]+[^[:alpha:]]+)([[:alpha:]]+)([^[:alpha:]]*)/\1\4\3\2\5/' file.txt
Example:
$ sed -r 's/^([^[:alpha:]]*)([[:alpha:]]+)([^[:alpha:]]+[[:alpha:]]+[^[:alpha:]]+)([[:alpha:]]+)([^[:alpha:]]*)/\1\4\3\2\5/' <<<'I 4want5to%change'
to 4want5I%change
$ sed -r 's/^([^[:alpha:]]*)([[:alpha:]]+)([^[:alpha:]]+[[:alpha:]]+[^[:alpha:]]+)([[:alpha:]]+)([^[:alpha:]]*)/\1\4\3\2\5/' <<<'4I 4want5to%change'
4to 4want5I%change
$ sed -r 's/^([^[:alpha:]]*)([[:alpha:]]+)([^[:alpha:]]+[[:alpha:]]+[^[:alpha:]]+)([[:alpha:]]+)([^[:alpha:]]*)/\1\4\3\2\5/' <<<'Spring&summer^winter'
winter&summer^Spring
| How to swap two words with sed and with multiple delimiters? |
1,538,656,457,000 |
I was using udisks to unmount and detach USB devices with the following commands, which work just fine on Ubuntu 10.04:
udisks --unmount /dev/sdb1
udisks --detach /dev/sdb
Because udisks is not available in Ubuntu 14.04, I was trying to use udisksctl. It works for unmount:
udisksctl unmount --block-device /dev/sdb1
But when I use udiskctl or umount to detach the device as:
udiskctl power-off -p /dev/sdb
or
umount -p /dev/sdb
it gives following error:
(udisksctl unmount:17787): GLib-GIO-CRITICAL **: g_dbus_object_manager_get_object: assertion 'g_variant_is_object_path (object_path)' failed
How can I detach the device in Ubuntu 14.04 with other existing commands, if any?
|
The problem may be that you are passing the device path to -p, which expects a D-Bus object path rather than a device node. Specify the block device with -b instead.
Try the next command:
udisksctl power-off -b /dev/sdb
With -b you specify the path to the block device.
Source:
https://askubuntu.com/questions/342188/how-to-auto-mount-from-command-line
| Unable to detach a USB device on Ubuntu 14.04 |
1,538,656,457,000 |
Is there a better way on Ubuntu to execute a program than the terminal command "./"?
I have put PhpStorm into a folder, and when I want to execute PhpStorm, I must cd bin/ and execute it with this command: "./phpstorm.sh". I don't think that is the official or best way.
|
./ is not a command. It's a directory (current directory). This just means that you run a file ./phpstorm.sh (file named phpstorm.sh that is in the current directory). Every command that you write is first searched in all the directories in $PATH environment variable. This is why, for instance, ls works and you don't have to write /bin/ls. Write
echo "$PATH"
to see what directories are searched.
If the directory containing phpstorm.sh were in your PATH, phpstorm.sh would work no matter which directory you are in (no ./). Alternatively, if you want to run files straight from whatever directory you are currently in, you add . (the current directory) to the existing path, usually by putting
export PATH=".:$PATH"
in your .profile file. This is not default but I always do it (if you write any scripts or applications by yourself, it's very inconvenient to always specify current directory directly). However, don't do it for root, it's a bit too powerful and you may break something by mistake.
The other "special" directory is .. which refers to the parent directory. This is why cd .. goes "up". cd . wouldn't do anything because you are already there.
| Is there on Ubuntu a best way to execute a program than terminal command "./" |
1,538,656,457,000 |
Pipes (|) and redirections (<, <<, >, >>) both use the standard streams (stdin, stdout, stderr), yet only a pipe can keep sudo privileges. Why?
Works:
sudo echo "hello" | tee /root/test
Doesn't work:
sudo echo "hello" > /root/test
|
Pipes (|) and redirections (<, <<, >, >>) both use the standard streams (stdin, stdout, stderr), yet only a pipe can keep sudo privileges. Why?
That's simply not true. You must be mixing things up.
sudo echo "hello" | tee /root/test
here echo is run as root, but tee is run as your current user, which doesn't seem to be root.
This would be different
echo "hello" | sudo tee /root/test
here, the tee program is executed as root, and hence gets write access to /root/test. In your failing example, the > /root/test redirection is set up by your own, unprivileged shell before sudo even runs, which is why it is refused.
| Why pipe keep sudo and redirection not? [duplicate] |
1,538,656,457,000 |
There are a lot of "Linux command-line cheat sheets" on the internet. But often they only list the commands, sometimes sort and describe them.
What I am looking for is something I would call a "task based" cheat sheet, where I can "ctrl+f" for what I want to do and find the corresponding command, since beforehand I don't know which command performs the task.
Could someone provide a link or search terms?
Explanation:
When trying to do something on the command-line, I normally use google to find a solution. Depending on the complexity of the task, this takes some (unreasonably high) effort and often combining multiple solutions. Also internet access is mandatory for this to work.
I afterwards write this down in a text file and attach some search terms.
Expecting to find similar files on the internet, I search for: linux task OR action OR work based cheat sheet, linux howto collection common tasks. Those don't return what I look for.
|
This is a well-structured pdf with basic tasks on the command line:
Linux command line for you and me
Search term:
linux where to find good command line documentation
Further reading:
A list of sites which provide Unix documentation, for cases where man-pages are unsuitable.
A parser for GitHub wikis; those are not indexed by search engines and turned out to be a rich source of information while researching.
Of those, matching the question best (in my opinion):
The Linux Documentation project
Arch Wiki (Arch is a Linux distribution)
GitHub Wiki by Ninna994 provides some how tags
Having asked the question in person to a few Linux users, I learned most people write this kind of information in a personal wiki.
| Task focused command-line cheat sheet for linux |
1,538,656,457,000 |
I'm using NordVPN version 3.12 on an Ubuntu 18.04.6 media server and I've run into many connection issues. The Linux CLI version is very unstable compared to the desktop version, with the VPN getting stuck in a reconnecting loop. When trying to reconnect, it will run a loading animation until I quit the SSH session and rejoin the server.
After SSHing back into the server, the VPN status will be displayed as "Reconnecting". Trying to run systemctl restart nordvpn.service or systemctl restart nordvpnd.service will restart the service, but Nord will still get stuck in a connection loop. Trying to use kill -9 [ID] will stop the service, but after restarting the service it will throw this error when running nordvpn connect: Whoops! Cannot reach System Daemon. The only reliable way I've found to get the VPN started again is to restart the system. It seems these issues start after a few days of letting the system run on the VPN connection.
|
I ended up ditching the NordVPN cli client and going with Openpyn. It's an open-source Python script for OpenVPN and some Nord API services. Link to the project: https://github.com/jotyGill/openpyn-nordvpn
Command used: sudo openpyn ca -f -d -r -t 10 --allow [Ports to Allow] --silent --p2p
While not perfect, in the event that a server disconnects, it will automatically try to find another server to connect to. I've had Openpyn running in daemon mode for a few days now and the connection still works.
| Issues with NordVPN on Ubuntu 18.04 |
1,538,656,457,000 |
So I'm starting to get addicted to POSIX standards and simplicity, yet I hate not having autocompletion or arrow keys to move through the command line, and I want a cool shell prompt.
So is there any way that I could have something like fish or zsh or even bash on the front end, so I get autocompletion, a nice and informative prompt, and the ability to move forward and backward in the command with the arrow keys, but when I press Enter the command itself is run by dash, unless the script asks for bash, fish, etc. in its shebang?
I just want to feel comfortable when typing the command; the rest can be harsh and simple. I'm using Konsole, although I'm thinking of changing to a terminal that supports Überzug (yes, I'm aware that GUIs exist for a reason; it's just for LF). I also forgot to mention that although sh points to dash, in Konsole I told it to use bash.
|
I thought I might have misunderstood your question, because what you seem to be asking for is pretty much the way most everyone operates anyway. We use an interactive shell for, well, interaction. But we expect that scripts are often run by a different shell.
This typically "just works", since the vast majority of the scripts you will execute will include a shebang line. That way the script itself can request the interpreter closest to the feature set it needs. If it needs POSIX compatibility, it should specify either:
#!/usr/bin/sh
or
#!/usr/bin/env sh
That may be dash (on Debian/Ubuntu flavors), or it could be BusyBox's ash (on Alpine), or something else. It's up to the distribution itself to make sh point to the most POSIX compliant shell that it provides by default. In the distant past, in some (most?) cases, this was even bash.
In the rare event that someone leaves out the shebang, well, be suspicious of the rest of the code-quality for starters. But running it isn't a problem -- Just sh scriptname.sh.
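A tiny demonstration of that mechanism: the shebang, not your interactive shell, selects the interpreter:

```shell
cat > hello.sh <<'EOF'
#!/bin/sh
echo "hello from a POSIX sh"
EOF
chmod +x hello.sh

./hello.sh   # runs under /bin/sh no matter what shell you typed this in
# → hello from a POSIX sh
```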
But no one expects you to actually operate under today's basic shells interactively. Personally, I use fish, which is the furthest of the ones you mentioned from any form of POSIX compliance. But I can write my scripts in fish (preferably, since it has a much more modernized scripting language) or POSIX sh, if I need portability. And it doesn't matter which shell I type in, since the shebang handles that.
The one area where this isn't the case is for scripts that need to be sourced, since they change the environment of the current shell instance. These do not get executed with the shebang-specified interpreter, but are executed by the interactive shell itself. I rarely find this to be a problem myself, since most all of the software that I use provides a fish config file if needed. But in those rare instances when you really do need to source a POSIX file, there are utilities such as bass and others to help you do this.
For bash and zsh, this isn't even a problem, since most all config files are written to a lowest-common-denominator that they can be sourced in the "close-enough-to-POSIX-to-not-matter" Zsh and bash.
| Having an interactive shell experience but running everything through dash |
1,538,656,457,000 |
Assume I run a custom app on an arbitrary port on a linux box - let's say 7890.
This is a Go-language web server. It runs an app on top of HTTP. No HTTPS. It runs as a dedicated but "normal" user with no sudo rights. The firewall rule for this port allows "everyone (world)" to access it.
All ssh access is secured and firewall rules allow only access via a set of known IPs. SSH access is configured to allow only key-based access.
Is this enough information to answer the following questions:
Can an attacker ever gain access to the command line via that exposed port?
If the previous question can be answered with "yes, if...", what are the most important things I need to take care of so that the app can't be exploited to get access to the box?
The assumption would be that there are no other known security holes.
Obviously I am no security expert. I hope this is enough information to give an informed statement. I assume there are myriads of related issues and no "absolute" yes/no can be given, but maybe some rough idea is enough for me to get an idea.
Thanks.
|
Can an attacker ever gain access to the command line via that exposed port?
If there are vulnerabilities in your app, yes.
what are the most important things I need to take care of so that the app can't be exploited to get access to the box?
That's a very difficult question.
Go is a pretty safe language, so ostensibly the app can only be hacked if you have issues with the application logic
To add an extra layer of security you can use 1) systemd features to limit what the daemon can do 2) write a SeLinux/Apparmor policy 3) run the daemon in chroot/virtualized environment 4) run it under docker or any other lightweight userspace container/hyperviser
| Can an open port be hacked to get access to the command line? |