1,477,679,498,000
This is my first question on this forum. I'm currently using Linux Mint 17.2 Cinnamon and trying to run Aircrack-ng to show the password of my own Wi-Fi connection; I got the handshake too. The problem is that when I tried to run

aircrack-ng CrackFile.cap -w /pentest/passwords/wordlists/darkc0de

an error shows that there are no such directories on my system. I know this question has already been asked many times, but no one provides a good solution, at least for me. Does anyone know how to fix this? Thanks!
The reason is that you probably copied that line from a hacking article based on BackTrack, which may have had such a path; since you said you are using Linux Mint, you don't have it. So it is exactly as the error says: that file does not exist on your system.

But you don't need that particular wordlist to test whether aircrack-ng works on your own Wi-Fi. It is your own Wi-Fi, so you know the password. You could just make a text file of your own, where each line is simply a password you make up that you want aircrack-ng to try, and be sure to include your actual Wi-Fi password in there. For example, edit with

nano ~/mywordlist.lst

and inside you could just have

abc
123
xyz
myactualpassword

It doesn't have to be nano; use whatever text editor you are comfortable with. It doesn't have to have the .lst extension either; that just makes it easier for you to recognize it as a list in the future. When you are done saving it, run aircrack-ng again, only this time point it to the wordlist you just made:

aircrack-ng CrackFile.cap -w ~/mywordlist.lst
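A minimal sketch of the steps above (the path /tmp/mywordlist.lst and the candidate passwords are arbitrary examples; replace "myactualpassword" with your real Wi-Fi password):

```shell
# Build a small wordlist, one candidate password per line
printf '%s\n' abc 123 xyz myactualpassword > /tmp/mywordlist.lst
cat /tmp/mywordlist.lst

# Then point aircrack-ng at it (not run here):
#   aircrack-ng CrackFile.cap -w /tmp/mywordlist.lst
```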
Aircrack-ng: No /pentest/passwords/wordlists/darkc0de directories
1,477,679,498,000
I'm trying to set up a lab environment using VirtualBox with a stand-alone network. To this end, I've set up one VM to communicate with the outside world, with two NICs: one for the Internal Network and one in Bridged mode to download packages and such. The aim is to learn to install Linux services like Apache httpd, MySQL, DNS, FTP, NFS, Squid, and mail servers, after which I'd like to proceed to more complex areas like iptables and Nginx, and try out other services like Varnish, Docker, memcached, Puppet/Chef/Salt, and much more.

My questions are:

1. In a production environment, how are these installed? Are they compiled from source with custom install locations, or are they installed using package managers (like yum)?

2. How do I go about testing these services? For instance, if I finish installing and configuring Apache httpd, should I test it from a client VM created within the Internal Network, and/or should I add another NIC to test it from the main machine?

The idea is to create an environment that closely resembles a production environment, in order to learn to install and configure these services as it should be done at a workplace (as opposed to simply doing a yum install). Any further feedback/suggestions about how to go about learning/setting this up would also be appreciated.
There are a few things you can do. My first recommendation would be to visit the OpenSCAP page and scan your system using the most recent security guideline configurations available for it. Go through your system and try to get it to at least 90% compliance, focusing on such things as firewalls and SELinux. While OpenSCAP and its related security guides are more focused on a government/DoD environment, they are generally good security guidelines. With those done as a baseline, any software you install may run into problems related to the hardening, which will give you a better idea of the challenges involved in setting up software in a secure production environment.

For something like Apache, I would recommend just trying to set up a simple WordPress site that uses HTTPS if possible. That will also give you some experience in setting up a SQL database. Again, once those are done, try to find some security guidelines to harden them, see what breaks, and learn how to fix it. Learn what allowances have to be made. (In a production environment, the only 100% secure system is a system that doesn't do anything, so different configurations require different security allowances. That is one of the reasons it is best in a production environment not to have all of your services on one machine: dividing services among many servers means that not only does your infrastructure not go down in one attack, but there are fewer openings in any one server to allow an attack.)

As far as your direct questions go:

1. In my production environment I use yum install for anything and everything that is available to me. That ensures that my patches are all managed/tested by Red Hat/CentOS/Oracle, depending on the distro being used, so there is a higher likelihood that things will NOT break when patched.

2. Set up a client machine on the same network as your server, and see if the service can be accessed from there. For a truer test you'd configure it to be accessible from your local machine too, but that will involve more work than strictly necessary.

As for Puppet, Chef, or Salt: set up two or three VMs that will be clients/minions, and go about writing states/recipes/whatever Puppet calls their things, to enforce the security settings I recommended you apply above. That will give you good experience setting up systems as they would naturally be configured, as well as making sure that after security is applied, your Salt/Puppet/Chef servers can still communicate with the clients.

Also, a lot of what you want to learn is actually covered in pretty good lab scenarios in many study guides for the Red Hat Certified System Administrator and Red Hat Certified Engineer exams; looking up labs for those might be a good place to find information too.
Setting up a lab environment in VirtualBox
1,477,679,498,000
When tar-ing up large directories (e.g. a home folder for a backup/OS reinstall), it is often okay to exclude certain large files such as multi-GB videos. However, due to the all-encompassing nature of a home folder, it is often unrealistic to remember each and every file that may be worth excluding (with --exclude) before starting. I am looking for some sort of input that I can give to tar to tell it to quit whatever file it's on and move on to the next, leaving the skipped file out of the archive. Perhaps like a Ctrl-C, but instead of stopping the entire process, it would simply stop the current file. Specifically, I am referring to a long-running tar -cvf or tar -cvzf. As both of these commands contain -v, it is easy to see which file tar is currently on. Using any sort of GUI tool is not an option, as tar is often run in a minimal (CLI-only) environment on a broken system before a reinstall. This is the specific case I am asking about.
I don't think that is possible, but you might simply exclude large files from your tar automatically. For example:

find mydir ! -type f -o \( -type f -size -1000k \) | tar cv --no-recursion -T - -f /tmp/tar

which does not save files bigger than 1000k.

Here's a variant that asks interactively, so an "n" reply stops a big file from being archived:

find mydir \( -type f -size +1000k -exec /tmp/biggie {} \; \) -o -print | tar c --no-recursion -T - -f /tmp/tar

where /tmp/biggie is the script

#!/bin/bash
if ! read -t 10 -n 1 -p "$1 ok ?" reply || [ n != "$reply" ]
then
    echo >&2
    echo "$1"
else
    echo " ignoring $1" >&2
fi

which does a bash-specific read with a timeout of 10 seconds (-t 10) of one character (-n 1), with the filename as the prompt (-p). If you type "n" within 10 seconds, the file is ignored.
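A runnable sketch of the size-filter approach on a throwaway directory (paths and sizes are examples; GNU find/tar assumed):

```shell
# Create a demo tree with one small and one large file
mkdir -p /tmp/tardemo/mydir
echo small > /tmp/tardemo/mydir/small.txt
dd if=/dev/zero of=/tmp/tardemo/mydir/big.bin bs=1k count=2000 2>/dev/null

cd /tmp/tardemo
# Archive directories, plus only those files smaller than 1000k
find mydir ! -type f -o \( -type f -size -1000k \) |
    tar c --no-recursion -T - -f /tmp/tardemo/out.tar

tar tf /tmp/tardemo/out.tar   # lists mydir/ and small.txt, but not big.bin
```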
Remove Specific File From Tar Archival Process After the Process Starts
1,477,679,498,000
When you run yum history (or dnf history), it gives you the list of actions starting from the last one, with the most recent action at the top of the list. If you have a lot of history steps, you need to do a lot of scrolling back to get to the recent actions. Is there a way to list yum/dnf history in ascending order, so that the latest step is at the bottom of the results?
Yes. As mentioned in the comments, the tac command does what was asked. Simply run:

dnf history | tac
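The reversal itself is plain tac, which can be sanity-checked on any multi-line output:

```shell
# tac reverses line order: last line comes out first
seq 3 | tac
# 3
# 2
# 1

# Applied to package history (not run here):
#   dnf history | tac
```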
List `yum/dnf` history in ascending (reversed) order
1,477,679,498,000
Many times there are slight inconsistencies between two similar Linux machines, where (for example) tmux supports colours on machine M1 but not on machine M2, or vi adds comments and formatting automatically on M1 but not on M2, or the bash prompt has line-wrapping on or off, or ssh options don't match. Usually we can try strace bash, strace ssh, or ssh -vvv, and man vi, and look for standard files being accessed. But in nonstandard installations (including customised compilations) these locations may not be complete. In some cases we cannot even use strace or pass verbose options, e.g. for login shells, or when some script calls other scripts which call the tool in question. So my question is: is there any standard method and tool that can help find all configuration files accessed by some tool?

Specific example: on M1, bash has line-wrapping, while on M2 it does not, even though all relevant parameters/files (.bashrc/.inputrc) are as expected and the same on both M1 and M2.
Based on a comment by @0xC0000022L, I found http://security.blogoverflow.com/2013/01/a-brief-introduction-to-auditd/ which seems useful, so I am adding it as a community-wiki answer in case other folks search for such questions; I would not want them to simply move on, thinking that there was no answer here.
How to generate a list of configuration files accessed by a tool (eg bash or vi)?
1,426,697,407,000
There are two messages in /var/mail/test.

mail
Mail version 8.1.2 01/15/2001.  Type ? for help.
"/var/mail/test": 2 messages 2 new
>N  1 test@test  Tue Feb 17 15:07  18/628  *** SECURITY information fo
 N  2 test@test  Tue Feb 17 15:25  18/628  *** SECURITY information fo

How do I get the whole subject line of message 1 displayed in my console? How do I get the body of message 1 displayed in my console?
If the mailbox contains only one email, you can read its subject with:

echo p | mail | grep -A 1 "^Subject:"

To read the first email (change the number to read a different message):

echo 'type 1' | mail | grep -A 1 "^Subject:"
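The grep -A 1 '^Subject:' part can be sketched on a fake message, independent of mail (the sample headers below are made up):

```shell
# Simulate the headers that `mail` would print for a message;
# a long subject is folded onto a continuation line
printf 'From: test@test\nSubject: *** SECURITY information for host ***\n continuation line\n\nBody here\n' > /tmp/msg.txt

# Print the Subject line plus one following line (catches folded subjects)
grep -A 1 '^Subject:' /tmp/msg.txt
```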
How to display entire subject line in `mail`?
1,426,697,407,000
I recently installed Voyager Linux, a Linux distribution based on Xubuntu. The problem is that I can't boot it anymore. I have a dual-boot system with Windows 8.1, but that's not the problem. I installed the latest build of WPS Office (a16, I think), but during installation things got weird, and apps from the dock started to disappear. I wasn't able to restart, so I turned off the notebook. The next time, it wouldn't boot, staying at this screen: I already did some things, but with no results. I checked whether any packages were missing using this guide, with no effect. I also tried apt-get install xfce4, because I thought the desktop files might be missing, and it installed some packages (50 MB). But after starting the OS again, no results; same problem. I don't know what is going on. I researched a lot, but found nothing related to this.
It sounds like installing the "latest build of WPS Office a16 I think" caused the problems. Can you remove (purge?) it and see if anything is fixed? Your screenshot indicates that LightDM failed; reinstalling/reconfiguring that might help too. It's not normally a part of XFCE, but maybe Voyager uses it (XFCE uses the XFWM4 window manager), so installing/reinstalling/reconfiguring xfwm4 might also help.
Can't boot Voyager Linux based on Xubuntu
1,426,697,407,000
I'm running a terminal application via "su" in this fashion: su -c "/path/to/app --args" username This is done from a root context, and 'username' is a less privileged user in the system. The application has signal handlers for CTRL-C and CTRL-Z (SIGINT and SIGTSTP, respectively). One odd problem I've come across is that CTRL-Z does not appear to propagate to the application when run via 'su' in this fashion. If I remove the 'username' from the end of the command above, then it works fine. Only when there is a user switch using su does this signal not work. Is there a way to allow TSTP to propagate through to a program being executed as a different user? Tested using CentOS 6.4; GNU Coreutils 8.4
Using 'sudo' instead of 'su' solved the problem, as suggested by mdpc:

sudo -u username /path/to/app --args
How to propagate TSTP while running a program as a different user
1,426,697,407,000
Short story: I'm looking for the command to enter the first foo-something directory found, like

cd foo-*

but without using a wildcard (or other special shell characters).

Long story: As part of a remote drush build script, I'm trying to find a way to enter a folder whose name could change, but which has a common prefix. For example, drush -y dl ads or drush -y dl ads --dev downloads either ads-7.x-1.0-alpha1 or ads-7.x-1.x-dev. To make things trickier, the command can't contain a wildcard or an escaped semicolon, because drush heavily escapes shell aliases. So ls * is escaped into ls '\''*'\''' and ends up with the error Command ls '*' failed. I've also tried using find, but I can't use the -exec primary, because the semicolon needs to be escaped, and drush double-escapes it into '\''\;'\''. Therefore I'm looking to enter a foo-* folder without using a wildcard (or any other special characters, parameter expansion, command substitution, arithmetic expansion, etc.) if possible. I believe the logic of the shell escaping is here, and it is intended to work the same way that escapeshellarg() does on Linux: it escapes each parameter.
Does drush munge backticks and vertical bars? If not, you could use

cd `ls | grep foo- | head -n 1`

If backticks don't work, but |, $, ( and ) do, then you could change the above to

cd $(ls | grep foo- | head -n 1)

If | doesn't work, but $, ( and ) do, then you could do

cd $(myprog)

where myprog is a script that you write to determine the directory name.

Also, I don't understand how you might be able to use find to help you do a cd, but can you end your -exec with a + instead of a semicolon?
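A quick check of the ls | grep | head idea, using a throwaway foo- directory (the directory name is an example):

```shell
mkdir -p /tmp/grepdemo/foo-7.x-1.0-alpha1
cd /tmp/grepdemo

# Enter the first directory whose name starts with "foo-",
# without any shell wildcard in the command itself
cd "$(ls | grep foo- | head -n 1)"
pwd
# /tmp/grepdemo/foo-7.x-1.0-alpha1
```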
How to enter foo-* folder without actually using wildcard or shell expansions?
1,426,697,407,000
I have been trying to get adb, fastboot, avd and others to work with the latest version of Linux Mint (16 "Petra") I originally got this error when trying to run ADB: ➜ platform-tools ./adb ./adb: error while loading shared libraries: libstdc++.so.6: cannot open shared object file: No such file or directory After looking around I found suggestions saying that you need to install 32-bit compatibility libraries. So, I tried to install the package ia32-libs, which, however, failed horribly. ➜ ~ sudo apt-get install ia32-libs Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: ia32-libs : Depends: bluez-alsa:i386 but it is not going to be installed Depends: gstreamer0.10-plugins-base:i386 but it is not going to be installed Depends: gstreamer0.10-plugins-good:i386 but it is not going to be installed Depends: gtk2-engines:i386 but it is not going to be installed Depends: gtk2-engines-murrine:i386 but it is not going to be installed Depends: gtk2-engines-oxygen:i386 but it is not going to be installed Depends: gtk2-engines-pixbuf:i386 but it is not going to be installed Depends: gvfs:i386 but it is not going to be installed Depends: ibus-gtk:i386 but it is not going to be installed Depends: libacl1:i386 but it is not going to be installed Depends: libao4:i386 but it is not going to be installed Depends: libasound2:i386 Depends: libasound2-plugins:i386 but it is not going to be installed Depends: libasyncns0:i386 but it is not going to be installed Depends: libattr1:i386 but it is not going to be installed Depends: libaudio2:i386 but it is not going to be installed Depends: libcanberra-gtk-module:i386 
but it is not going to be installed Depends: libcap2:i386 but it is not going to be installed Depends: libcapi20-3:i386 but it is not going to be installed Depends: libcups2:i386 but it is not going to be installed Depends: libcupsimage2:i386 but it is not going to be installed Depends: libcurl3:i386 but it is not going to be installed Depends: libdbus-glib-1-2:i386 but it is not going to be installed Depends: libesd0:i386 but it is not going to be installed Depends: libfontconfig1:i386 but it is not going to be installed Depends: libfreetype6:i386 but it is not going to be installed Depends: libgail-common:i386 but it is not going to be installed Depends: libgconf-2-4:i386 but it is not going to be installed Depends: libgdbm3:i386 but it is not going to be installed Depends: libgettextpo0:i386 but it is not going to be installed Depends: libglapi-mesa:i386 but it is not going to be installed Depends: libglu1-mesa:i386 but it is not going to be installed Depends: libgphoto2-6:i386 but it is not going to be installed Depends: libgphoto2-port10:i386 but it is not going to be installed Depends: libgtk2.0-0:i386 but it is not going to be installed Depends: libmpg123-0:i386 but it is not going to be installed Depends: libncursesw5:i386 but it is not going to be installed Depends: libnspr4:i386 but it is not going to be installed Depends: libnss3:i386 but it is not going to be installed Depends: libodbc1:i386 but it is not going to be installed Depends: libopenal1:i386 but it is not going to be installed Depends: libpulse-mainloop-glib0:i386 but it is not going to be installed Depends: libpulsedsp:i386 but it is not going to be installed Depends: libqt4-dbus:i386 but it is not going to be installed Depends: libqt4-network:i386 but it is not going to be installed Depends: libqt4-opengl:i386 but it is not going to be installed Depends: libqt4-qt3support:i386 but it is not going to be installed Depends: libqt4-script:i386 but it is not going to be installed Depends: 
libqt4-scripttools:i386 but it is not going to be installed Depends: libqt4-sql:i386 but it is not going to be installed Depends: libqt4-svg:i386 but it is not going to be installed Depends: libqt4-test:i386 but it is not going to be installed Depends: libqt4-xml:i386 but it is not going to be installed Depends: libqt4-xmlpatterns:i386 but it is not going to be installed Depends: libqtcore4:i386 but it is not going to be installed Depends: libqtgui4:i386 but it is not going to be installed Depends: libqtwebkit4:i386 but it is not going to be installed Depends: librsvg2-common:i386 but it is not going to be installed Depends: libsane:i386 but it is not going to be installed Depends: libsdl-image1.2:i386 but it is not going to be installed Depends: libsdl-mixer1.2:i386 but it is not going to be installed Depends: libsdl-net1.2:i386 but it is not going to be installed Depends: libsdl-ttf2.0-0:i386 but it is not going to be installed Depends: libsdl1.2debian:i386 but it is not going to be installed Depends: libsqlite3-0:i386 but it is not going to be installed Depends: libssl0.9.8:i386 but it is not going to be installed Depends: libssl1.0.0:i386 but it is not going to be installed Depends: libstdc++5:i386 but it is not going to be installed Depends: libstdc++6:i386 but it is not going to be installed Depends: libxaw7:i386 but it is not going to be installed Depends: libxml2:i386 but it is not going to be installed Depends: libxp6:i386 but it is not going to be installed Depends: libxslt1.1:i386 but it is not going to be installed Depends: libxss1:i386 but it is not going to be installed Depends: libxtst6:i386 but it is not going to be installed Depends: odbcinst1debian2:i386 but it is not going to be installed Depends: xaw3dg:i386 but it is not going to be installed Depends: libgl1-mesa-dri:i386 but it is not going to be installed Depends: libgl1-mesa-glx:i386 but it is not going to be installed Depends: libpam-winbind:i386 but it is not going to be installed E: 
Unable to correct problems, you have held broken packages.

Any help would be very much appreciated.

EDIT: After following Rmano's suggestion to run

sudo apt-get install libc6:i386 libgcc1:i386 gcc-4.6-base:i386 libstdc++5:i386

I got this error about unmet dependencies:

➜ ~ sudo apt-get install libc6:i386 libgcc1:i386 gcc-4.6-base:i386 libstdc++5:i386 libstdc++6:i386
[sudo] password for insanity:
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation:

The following packages have unmet dependencies:
 libgcc1 : Breaks: libgcc1:i386 (!= 1:4.8.1-10ubuntu9) but 1:4.8.1-10ubuntu8 is to be installed
 libgcc1:i386 : Depends: gcc-4.8-base:i386 (= 4.8.1-10ubuntu8) but it is not going to be installed
                Breaks: libgcc1 (!= 1:4.8.1-10ubuntu8) but 1:4.8.1-10ubuntu9 is to be installed
 libstdc++6:i386 : Depends: gcc-4.8-base:i386 (= 4.8.1-10ubuntu8) but it is not going to be installed
E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages.

EDIT 2: It seems that I would have to downgrade a few core utilities to older versions; however, I'm not sure that is a safe idea.
I reinstalled Linux in the end. However, downgrading the packages worked fine.
Error Installing "ia32-libs" to run ADB and Fastboot Linux Mint 16 "Petra"
1,426,697,407,000
I made an alias to search for and display all of the processes associated with a specific user account. About 15 of them seem to auto-initiate every time I log in, and through a process of elimination I found the parent process. Basically I want the alias to display just the parent process and not the whole list. I know I will have to pipe, but beyond that I'm not sure. Example:

ps -u *someuser* | grep <parent process name/PID>

EDIT #1: This is not exactly the process tree I'm referring to, but I opened a man page so I could paste the associated processes:

966 man pidof
969 sh -c (cd '/usr/local/share/man' && (echo ".ll 12.8i"; echo ".nr LL 12.8i"; /usr/bin/gunzip -c '/usr/local/share/man/
970 sh -c (cd '/usr/local/share/man' && (echo ".ll 12.8i"; echo ".nr LL 12.8i"; /usr/bin/gunzip -c '/usr/local/share/man/
975 sh -c (cd '/usr/local/share/man' && (echo ".ll 12.8i"; echo ".nr LL 12.8i"; /usr/bin/gunzip -c '/usr/local/share/man/
977 /usr/bin/less -is

Note how kill 966 kills all the rest.
You can try using the Unix command pstree to get a list of the process names in a tree structure.

Example:

$ pstree
init-+-NetworkManager-+-dhclient
     |                `-2*[{NetworkManager}]
     |-abrtd
     |-acpid
     |-atd
     |-auditd-+-audispd-+-sedispatch
     |        |         `-{audispd}
     |        `-{auditd}
     |-autossh---ssh---ssh
     |-avahi-daemon---avahi-daemon
     |-bonobo-activati---2*[{bonobo-activat}]
     |-chrome-+-3*[chrome]
     |        |-chrome-sandbox---chrome-+-chrome-+-25*[chrome---3*[{chrome}]]
     |        |                         |        |-4*[chrome---4*[{chrome}]]
     |        |                         |        `-chrome---6*[{chrome}]
     |        |                         `-nacl_helper_boo
     |        `-31*[{chrome}]
...

You can also provide a username if you just want processes related to a particular user.

Example:

$ pstree saml
autossh---ssh---ssh
bonobo-activati---2*[{bonobo-activat}]
chrome-+-3*[chrome]
       |-chrome-sandbox---chrome-+-chrome-+-25*[chrome---3*[{chrome}]]
       |                         |        |-4*[chrome---4*[{chrome}]]
       |                         |        `-chrome---6*[{chrome}]
       |                         `-nacl_helper_boo
       `-31*[{chrome}]
clock-applet---{clock-applet}
...
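If only the parent is wanted rather than the whole tree, ps can walk one level up (a sketch; $$ below stands in for whatever child PID you have identified):

```shell
# PID of this shell's parent process
ppid=$(ps -o ppid= -p $$)

# Command name of that parent
ps -o comm= -p "$ppid"
```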
Make terminal print name of parent process OSX [closed]
1,426,697,407,000
So this works:

foo -a -b -c "path/file.ext"

And this works too if I want to pass all files from a directory:

foo -a -b -c path/*

But if I add the quotes:

foo -a -b -c "path/*"

it doesn't work anymore: it says "no such file...". And I think I need to add the quotes in order to escape arguments (I'm using PHP and escapeshellarg).
The escapeshellarg docs say it turns its input into "a single safe argument", but you want it to be interpreted as multiple arguments. Try doing the expansion with glob() first, then escape each resulting filename individually.
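The difference is easy to see in the shell itself: quoting stops glob expansion, so the program receives a literal * instead of the file list (demo in a scratch directory):

```shell
mkdir -p /tmp/globdemo/path
touch /tmp/globdemo/path/a.ext /tmp/globdemo/path/b.ext
cd /tmp/globdemo

echo path/*      # shell expands: path/a.ext path/b.ext
echo "path/*"    # no expansion:  path/*
```

This is why "path/*" produces "no such file": the program gets the unexpanded pattern and looks for a file literally named *.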
How to pass wildcards in command line [closed]
1,426,697,407,000
I am trying to install Sublime Text 2 on Linux Mint (Mate) from this tutorial and I'm stuck on: Next, to create a menu icon press Alt+F2 and type: gksu gedit /usr/share/applications/sublime.desktop When I press Alt+F2 nothing happens; is there another way I can run this command?
You need to understand that the following command is just one way to create/open a file with an editor (here gedit) using root permissions:

gksu gedit /usr/share/applications/sublime.desktop

Apparently, the tutorial you're following assumes you have these utilities installed, and that's why you're having trouble with that command. To do the exact same thing using more common tools, you could run the following in a terminal:

sudo nano /usr/share/applications/sublime.desktop

This will open the file in nano, so you can insert the text, save it with Ctrl+O, and exit with Ctrl+X.
ALT+F2 doesn't work in Linux Mint Mate
1,426,697,407,000
I need to do this for over 600 folders with varying names, and the .nfo file inside doesn't necessarily have the same name as the folder. Here are just a few of them:

m0j0@unity ~/files/TV.TL/TEST $ ls -lr
total 28
drwxrwx--- 2 m0j0 m0j0  162 Nov 30 19:57 G.S01E07.720p.AMZN.WEB-DL
drwxrwx--- 2 m0j0 m0j0  164 Nov 30 19:57 G.S01E07.1080p.AMZN.WEB-DL
drwxrwx--- 2 m0j0 m0j0  148 Nov 30 19:57 G.S01E06.S.1080p.AMZN.WEB-DL
drwxrwx--- 3 m0j0 m0j0 4096 Nov 30 19:57 G.S01E06.HDTV
drwxrwx--- 3 m0j0 m0j0 4096 Nov 30 19:57 G.S01E06.720p.WEB
drwxrwx--- 3 m0j0 m0j0 4096 Nov 30 19:57 G.S01E06.720p.HDTV
drwxrwx--- 3 m0j0 m0j0 4096 Nov 30 19:57 G.S01E05.HDTV
drwxrwx--- 3 m0j0 m0j0 4096 Nov 30 19:57 G.S01E05.720p.WEB
drwxrwx--- 3 m0j0 m0j0 4096 Nov 30 19:57 G.S01E05.720p.HDTV
drwxrwx--- 3 m0j0 m0j0 4096 Nov 30 19:57 G.S01E05.1080p.WEB

m0j0@unity ~/files/TV.TL/TEST $ find . -iregex '.*\.\(nfo\)' -printf '%Tc %f\n'
Mon 13 Nov 2017 10:02:05 AM +08 g.s01e06.720p.hdtv.nfo
Wed 22 Nov 2017 08:17:40 AM +08 G.S01E07.1080p.AMZN.WEB-DL.nfo
Wed 22 Nov 2017 08:17:12 AM +08 G.S01E07.720p.AMZN.WEB-DL.nfo
Tue 14 Nov 2017 02:47:07 AM +08 G.S01E06.1080p.AMZN.WEB-DL.nfo
Mon 06 Nov 2017 09:58:54 AM +08 g.s01e05.1080p.web.nfo
Mon 06 Nov 2017 10:01:02 AM +08 g.s01e05.hdtv.nfo
Mon 13 Nov 2017 10:02:23 AM +08 g.s01e06.hdtv.nfo
Mon 06 Nov 2017 09:57:15 AM +08 g.s01e05.720p.web.nfo
Mon 06 Nov 2017 10:01:27 AM +08 g.s01e05.720p.hdtv.nfo
Mon 13 Nov 2017 09:57:36 AM +08 g.s01e06.720p.web.nfo
I was able to do it with the following loop, which stamps each directory with the timestamp of the .nfo file inside it:

for i in */; do touch -r "$i"*.nfo "$i"; done
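A runnable sketch of that loop on a throwaway directory (GNU touch/stat assumed; names and dates are examples):

```shell
mkdir -p /tmp/nfodemo/Show.S01E01.720p
# Give the .nfo an old timestamp
touch -d '2017-11-13 10:02:05' /tmp/nfodemo/Show.S01E01.720p/show.nfo

cd /tmp/nfodemo
# Stamp each directory with the mtime of the .nfo it contains
for i in */; do touch -r "$i"*.nfo "$i"; done

# The folder's mtime now matches the .nfo's
stat -c %y Show.S01E01.720p
```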
How to change folder's creation date to match the creation date of the .nfo file inside?
1,479,135,807,000
I have a folder with more than 10000 text files. The files can be of two types.

Type 1, called "DNA":
Format: a header line starting with ">"; line 2 onwards contains only the letters A, T, G, C, N.
Example (filename "ABC123.tab"):

>DNA1_example
TGTTGTTGTTGTTGCTGCTGTTGTTGCTGCTGTTGTTGTTGTTGTTGCTGCTGTTGTTGTTGTTGTTGCTGCTGCTGTTGTTGCTGTTGTCTTTGAGGTTGGAGATTAGGACGATTCGGCATGTTGTTGTTCCATGATCCGATCCCAACACCAGGACTAGGCTGTCCTTGCAAACTGATACCGGGACTCGATCTGGCACCAACTCCTGGCTGCGGAGAAAGTTGGGATCCGTGTTGTTGTTGTTGAAAACCTTGTGGAGGTGGTCCTATGCGAGGCGACACTTGAGCCGAATTAAACGGTGATAGCCGAGAAGATGGACCTCCAGGAGCAAAATTATTGCCGTTGTTGTTATTGACAATTTGTGCCTGAGGGCTTTGATTGTAGTTGCCACTATTGGCCGTGCTCAAACTGCTCATCGGACCGTGAGGTGAAAAAGGTGGTTGCATTGGGCGCTGACTGGGGGAGATTTGAGACGCTAGTGGCCCGCTACCTATTGGACTGC

Type 2, called "protein":
Format: a header line starting with ">"; line 2 onwards contains only the letters G, A, L, M, F, W, K, Q, E, S, P, V, I, C, Y, H, R, N, D, T.
Example (filename "DEF123.tab"):

>Protein1_example
MRCVLCYKNIAGNKLARFCVFSTSILLSLLSTQAQLSIIPQDELLAAEKMVNSSRWRLLD

What I would like to do is:
1) Open the file.
2) Skip the line beginning with ">".
3) Check whether any of the letters L, M, F, W, K, Q, E, S, P, V, I, Y, H, R, D occur in the other lines.
4) If yes, print "Protein"; else print "DNA".
In case someone is interested in the future, here is my quick and dirty way of doing it using Perl:

#!/usr/bin/perl
use warnings;
use strict;

open(FILE, "ABC123.fa");
my $line_ = <FILE>;
$line_ = readline(*FILE) if $line_ =~ />/;
close(FILE);

if ($line_ =~ /L|M|F|W|K|Q|E|S|P|V|I|Y|H|R|D/) {
    print "Protein\n";
} else {
    print "Nucleotide\n";
}

I execute it using:

perl format_tester.pl

Before running this code each time, I just replace "ABC123.fa" with "DEF123.fa" using sed:

sed -i 's/ABC123.fa/DEF123.fa/g' format_tester.pl
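For anyone who prefers plain shell, the same test can be sketched with grep, skipping header lines and looking for any protein-only letter from the list in the question (file names and sample sequences below are shortened examples):

```shell
# Print "Protein" if any non-header line contains a protein-only letter,
# otherwise print "DNA"
classify() {
    if grep -v '^>' "$1" | grep -q '[LMFWKQESPVIYHRD]'; then
        echo Protein
    else
        echo DNA
    fi
}

printf '>DNA1_example\nTGTTGTTGCTGCN\n'     > /tmp/ABC123.tab
printf '>Protein1_example\nMRCVLCYKNIAGN\n' > /tmp/DEF123.tab

classify /tmp/ABC123.tab   # DNA
classify /tmp/DEF123.tab   # Protein
```

Unlike the Perl script above, this checks every sequence line, not just the first one after the header.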
What Type Of Data does the Input File Contain?
1,479,135,807,000
We want to add the following lines:

export KAFKA_HEAP_OPTS="-Xmx8g -Xms8g"

and:

export KAFKA_JVM_PERFORMANCE_OPTS=" -XX:MetaspaceSize=96m -XX:+UseG1GC-XX:MaxGCPauseMillis=20 - XX:InitiatingHeapOccupancyPercent=35 -XX:G1HeapRegionSize=16M-XX:MinMetaspaceFreeRatio=50 - XX:MaxMetaspaceFreeRatio=80"

to the "content" line. Can we get a suggestion on how to edit the file with an awk, sed, or perl one-liner, etc.?

Example of the JSON before the update:

{ "href" : "http://master02:8080/api/v1/clusters/HDP/configurations?type=kafka-env&tag=version1527250007610", "items" : [ { "href" : "http://master02:8080/api/v1/clusters/HDP/configurations?type=kafka-env&tag=version1527250007610", "tag" : "version1527250007610", "type" : "kafka-env", "version" : 8, "Config" : { "cluster_name" : "HDP", "stack_id" : "HDP-2.6" }, "properties" : { "content" : "\n#!/bin/bash\n\n# Set KAFKA specific environment variables here.\n\n# The java implementation to use.\nexport JAVA_HOME={{java64_home}}\nexport PATH=$PATH:$JAVA_HOME/bin\nexport PID_DIR={{kafka_pid_dir}}\nexport LOG_DIR={{kafka_log_dir}}\nexport KAFKA_KERBEROS_PARAMS={{kafka_kerberos_params}}\nexport JMX_PORT=9997\n# Add kafka sink to classpath and related depenencies\nif [ -e \"/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\" ]; then\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/lib/*\nfi\n\nif [ -f /etc/kafka/conf/kafka-ranger-env.sh ]; then\n. 
/etc/kafka/conf/kafka-ranger-env.sh\nfi", "is_supported_kafka_ranger" : "true", "kafka_log_dir" : "/var/log/kafka", "kafka_pid_dir" : "/var/run/kafka", "kafka_user" : "kafka", "kafka_user_nofile_limit" : "128000", "kafka_user_nproc_limit" : "65536" } } ] expected output ( example of the json after update ) { "href" : "http://master02:8080/api/v1/clusters/HDP/configurations?type=kafka-env&tag=version1527250007610", "items" : [ { "href" : "http://master02:8080/api/v1/clusters/HDP/configurations?type=kafka-env&tag=version1527250007610", "tag" : "version1527250007610", "type" : "kafka-env", "version" : 8, "Config" : { "cluster_name" : "HDP", "stack_id" : "HDP-2.6" }, "properties" : { "content" : "\n#!/bin/bash\n\n# Set KAFKA specific environment variables here.\n\n# The java implementation to use.\nexport JAVA_HOME={{java64_home}}\nexport PATH=$PATH:$JAVA_HOME/bin\nexport PID_DIR={{kafka_pid_dir}}\nexport LOG_DIR={{kafka_log_dir}}\nexport KAFKA_KERBEROS_PARAMS={{kafka_kerberos_params}}\nexport JMX_PORT=9997\n# Add kafka sink to classpath and related depenencies\nif [ -e \"/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\" ]; then\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/lib/*\nfi\n\nif [ -f /etc/kafka/conf/kafka-ranger-env.sh ]; then\n. 
/etc/kafka/conf/kafka-ranger-env.sh\nfi\nexport KAFKA_HEAP_OPTS=\"-Xmx8g -Xms8g\"\nKAFKA_JVM_PERFORMANCE_OPTS=\"-XX:MetaspaceSize=96m -XX:+UseG1GC-XX:MaxGCPauseMillis=20 - XX:InitiatingHeapOccupancyPercent=35 -XX:G1HeapRegionSize=16M-XX:MinMetaspaceFreeRatio=50 - XX:MaxMetaspaceFreeRatio=80\n"", "is_supported_kafka_ranger" : "true", "kafka_log_dir" : "/var/log/kafka", "kafka_pid_dir" : "/var/run/kafka", "kafka_user" : "kafka", "kafka_user_nofile_limit" : "128000", "kafka_user_nproc_limit" : "65536" } } ] other example of the content line after update "content" : "\n#!/bin/bash\n\n# Set KAFKA specific environment variables here.\n\n# The java implementation to use.\nexport JAVA_HOME={{java64_home}}\nexport PATH=$PATH:$JAVA_HOME/bin\nexport PID_DIR={{kafka_pid_dir}}\nexport LOG_DIR={{kafka_log_dir}}\nexport KAFKA_KERBEROS_PARAMS={{kafka_kerberos_params}}\nexport JMX_PORT=9997\n# Add kafka sink to classpath and related depenencies\nif [ -e \"/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\" ]; then\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/lib/*\nfi\n\nif [ -f /etc/kafka/conf/kafka-ranger-env.sh ]; then\n. /etc/kafka/conf/kafka-ranger-env.sh\nfi\nexport KAFKA_HEAP_OPTS=\"-Xmx8g -Xms8g\"\nKAFKA_JVM_PERFORMANCE_OPTS=\"-XX:MetaspaceSize=96m -XX:+UseG1GC-XX:MaxGCPauseMillis=20 - XX:InitiatingHeapOccupancyPercent=35 -XX:G1HeapRegionSize=16M-XX:MinMetaspaceFreeRatio=50 - XX:MaxMetaspaceFreeRatio=80\n"",
Not a one-liner, but ... $ new_lines='\\nexport KAFKA_HEAP_OPTS=\\"-Xmx8g -Xms8g\\"\\nexport KAFKA_JVM_PERFORMANCE_OPTS=\\" -XX:MetaspaceSize=96m -XX:+UseG1GC-XX:MaxGCPauseMillis=20 - XX:InitiatingHeapOccupancyPercent=35 -XX:G1HeapRegionSize=16M-XX:MinMetaspaceFreeRatio=50 - XX:MaxMetaspaceFreeRatio=80\\"' $ new_content=$( jq '.items[0].properties.content' file.json | sed 's/"$/'"$new_lines"'"/') $ jq '.items[0].properties.content = '"$new_content" file.json { "href": "http://master02:8080/api/v1/clusters/HDP/configurations?type=kafka-env&tag=version1527250007610", "items": [ { "href": "http://master02:8080/api/v1/clusters/HDP/configurations?type=kafka-env&tag=version1527250007610", "tag": "version1527250007610", "type": "kafka-env", "version": 8, "Config": { "cluster_name": "HDP", "stack_id": "HDP-2.6" }, "properties": { "content": "\n#!/bin/bash\n\n# Set KAFKA specific environment variables here.\n\n# The java implementation to use.\nexport JAVA_HOME={{java64_home}}\nexport PATH=$PATH:$JAVA_HOME/bin\nexport PID_DIR={{kafka_pid_dir}}\nexport LOG_DIR={{kafka_log_dir}}\nexport KAFKA_KERBEROS_PARAMS={{kafka_kerberos_params}}\nexport JMX_PORT=9997\n# Add kafka sink to classpath and related depenencies\nif [ -e \"/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\" ]; then\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/lib/*\nfi\n\nif [ -f /etc/kafka/conf/kafka-ranger-env.sh ]; then\n. 
/etc/kafka/conf/kafka-ranger-env.sh\nfi\nexport KAFKA_HEAP_OPTS=\"-Xmx8g -Xms8g\"\nexport KAFKA_JVM_PERFORMANCE_OPTS=\" -XX:MetaspaceSize=96m -XX:+UseG1GC-XX:MaxGCPauseMillis=20 - XX:InitiatingHeapOccupancyPercent=35 -XX:G1HeapRegionSize=16M-XX:MinMetaspaceFreeRatio=50 - XX:MaxMetaspaceFreeRatio=80\"", "is_supported_kafka_ranger": "true", "kafka_log_dir": "/var/log/kafka", "kafka_pid_dir": "/var/run/kafka", "kafka_user": "kafka", "kafka_user_nofile_limit": "128000", "kafka_user_nproc_limit": "65536" } } ] } To verify the new content readably: $ printf "$new_content\n" " #!/bin/bash # Set KAFKA specific environment variables here. # The java implementation to use. export JAVA_HOME={{java64_home}} export PATH=$PATH:$JAVA_HOME/bin export PID_DIR={{kafka_pid_dir}} export LOG_DIR={{kafka_log_dir}} export KAFKA_KERBEROS_PARAMS={{kafka_kerberos_params}} export JMX_PORT=9997 # Add kafka sink to classpath and related depenencies if [ -e "/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar" ]; then export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/lib/* fi if [ -f /etc/kafka/conf/kafka-ranger-env.sh ]; then . /etc/kafka/conf/kafka-ranger-env.sh fi export KAFKA_HEAP_OPTS="-Xmx8g -Xms8g" export KAFKA_JVM_PERFORMANCE_OPTS=" -XX:MetaspaceSize=96m -XX:+UseG1GC-XX:MaxGCPauseMillis=20 - XX:InitiatingHeapOccupancyPercent=35 -XX:G1HeapRegionSize=16M-XX:MinMetaspaceFreeRatio=50 - XX:MaxMetaspaceFreeRatio=80""
awk/sed/perl one liner to edit json file
1,479,135,807,000
Is there any way to compact the following UNIX command: chmod 755 scriptA.ksh | chmod 755 scriptB.ksh | chmod 755 scriptC.ksh | chmod 755 scriptD.ksh The above command makes every KornShell (ksh) script executable, so the compacted command will be shorter in length and still make every ksh script executable.
You can send as many files as you want to chmod; you don't have to individually chmod each file if they all are being set to the same permission set. You have many options here: chmod 0755 *.ksh # If you want to set these permissions on all *.ksh files chmod 0755 script?.ksh # If you want to set these permissions on all files named script[any single character].ksh chmod 0755 scripta.ksh scriptb.ksh scriptc.ksh # The plainest form - simply list the files chmod 0755 script{a..c}.ksh # Use brace expansion In short, this is a cat with many skins to filet. What you do not need to do, though, is pipe the output from one chmod into the standard input of the next.
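A quick sanity check in a scratch directory (file names made up to match the question); one chmod call covers every file, no pipes involved:

```shell
# One chmod invocation covers all the files -- no pipes needed.
cd "$(mktemp -d)"
touch scripta.ksh scriptb.ksh scriptc.ksh scriptd.ksh
chmod 0755 scripta.ksh scriptb.ksh scriptc.ksh scriptd.ksh
# equivalently, let the shell build the list for you:
#   chmod 0755 script?.ksh        (glob)
#   chmod 0755 script{a..d}.ksh   (brace expansion in bash/ksh/zsh)
ls -l script?.ksh
```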
How to compact a chmod UNIX command
1,479,135,807,000
I have a Music folder, in which I have some music tracks, which aren't organized too well. Thus, if I want to find a particular track, I usually type: ls Music | grep <keyword>, where <keyword> is some keyword I expect the filename to have. Then my command line will return the name of the file for which I am looking, <name>, and then to open the music file, I will type vlc <name> &. My question is whether it would be possible to streamline all of this into one operation? I tried using ls Music | grep <keyword> | vlc, but this was unsuccessful. How would I go about doing this? (More generally, how would I use the pipeline for this sort of purpose?)
You can use xargs for this. It will take what is piped in on stdin and use it as arguments to a subcommand. So in your case it might look like this: ls Music | grep <keyword> | xargs vlc & Now, this command sequence will probably still have some issues, notably whitespace. By default, xargs will split its input on any whitespace, so if you had a file named Artist Name - Track Name.mp3, then xargs would send 5 separate arguments to vlc: Artist, Name, -, Track, Name.mp3. Luckily there's a way around this. If you use the -0 option to xargs, it will use null \0 to split its input into arguments to the sub command. And as it turns out the find command supports writing out file name matches with a null separator (and find is a better tool for finding files than ls | grep anyway). So this may be a better pipeline: find Music -iname '*<keyword>*' -print0 | xargs -0 vlc &
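To see the whitespace problem and the -print0/-0 fix end to end, here is a throwaway demo (printf stands in for vlc so the argument boundaries are visible):

```shell
cd "$(mktemp -d)"
mkdir Music
touch 'Music/Artist Name - Track Name.mp3'
# With -print0/-0 the spaced name arrives as a single argument:
find Music -iname '*track*' -print0 | xargs -0 printf '[%s]\n'
# prints: [Music/Artist Name - Track Name.mp3]
```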
Using Pipeline to Direct Files to Program that Opens Them
1,479,135,807,000
From bash manual about shebang Most versions of Unix make this a part of the operating system’s command execution mechanism. If the first line of a script begins with the two characters‘ #!’, the remainder of the line specifies an interpreter for the program. Thus, you can specify Bash, awk, Perl, or some other interpreter and write the rest of the script file in that language. The arguments to the interpreter consist of a single optional argument following the interpreter name on the first line of the script file, followed by the name of the script file, followed by the rest of the arguments. Bash will perform this action on operating systems that do not handle it themselves. Note that some older versions of Unix limit the interpreter name and argument to a maximum of 32 characters. Does "a optional argument" mean an argument to an option, or an argument which may be there or might not be? Why "a single optional argument"? does it not allow multiple "optional arguments"? If a script looks like #! /path/to/interpreter --opt1 opt1-arg --opt2 opt2-arg --opt3 nonopt-arg1 nonopt-arg2 ... when I run the script in bash as $ myscript arg1 arg2 arg3 what is the actual command being executed in bash? Is it $ /path/to/interpreter --opt1 opt1-arg --opt2 opt2-arg --opt3 nonopt-arg1 nonopt-arg2 myscript arg1 arg2 arg3 Thanks.
The arguments to the interpreter in this case are the arguments constructed after interpretation of the shebang line, combining the shebang line with the script name and its command-line arguments. Thus, an AWK script starting with #! /usr/bin/awk -f named myscript and called as ./myscript file1 file2 results in the actual arguments /usr/bin/awk -f ./myscript file1 file2 The single optional argument is -f in this case. Not all interpreters need one (see /bin/sh for example), and many systems only allow at most one (so your shebang line won’t work as you expect it to). The argument can contain spaces though; the whole content of the shebang line after the interpreter is passed as a single argument. To experiment with shebang lines, you can use #! /bin/echo (although that doesn’t help distinguish arguments when there are spaces involved). See How programs get run for a more detailed explanation of how shebang lines are processed (on Linux).
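You can watch the argument list being constructed by using /bin/echo as the interpreter, as suggested above (Linux shebang handling; the script name and arguments here are made up):

```shell
cd "$(mktemp -d)"
printf '#!/bin/echo one-optional-argument\n' > myscript
chmod +x myscript
./myscript file1 file2
# prints: one-optional-argument ./myscript file1 file2
```

The kernel builds argv as interpreter, then the single optional argument, then the script path, then the script's own arguments, which is exactly what echo prints.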
What is the actual command executed when running a script with a shebang as its name with same arguments?
1,479,135,807,000
I cannot figure out what this command does. It produces 3 numbers in my terminal but with no explanation of what those numbers are. I understand ls -l lists all files in long list format and wc is word count but those numbers don't seem to match anything. Any help?
Whenever you don't understand a command, read its manual. In this case, man wc would show you that: DESCRIPTION Print newline, word, and byte counts for each FILE, and a total line if more than one FILE is specified. [...] A word is a non-zero-length sequence of characters delimited by white space. So, the three numbers are i) the number of lines; ii) the number of words; and iii) the number of bytes. Therefore, if I run it in this directory: $ ls -l total 0 -rw-r--r-- 1 terdon terdon 0 Oct 19 14:29 file1 -rw-r--r-- 1 terdon terdon 0 Oct 19 14:29 file2 -rw-r--r-- 1 terdon terdon 0 Oct 19 14:29 file3 It will return: $ ls -l | wc 4 29 152 That's because, as you can see above, there are 4 lines of output, which contain 29 "words" (a word is defined by whitespace) and a total of 152 bytes (note that this includes the newline (\n) character at the end of each line). For a simpler example, try: $ echo "foo" | wc 1 1 4 The command echo "foo" actually prints foo\n (the \n is the newline character), so that's one line, one word and 4 bytes. Beware that the third field is the number of bytes, not characters. This is particularly important in locales where characters can be made of several bytes, as when using UTF-8 (which tends to be the norm nowadays). $ echo fée | wc 1 1 5 In UTF-8, the é character is made of two bytes. You can use the -m option to get the number of characters (m is for multibyte characters). $ echo fée | wc -m 4
What does ls -l | wc do?
1,479,135,807,000
This is how I'm extracting all the files in a folder (recursively): find -iname \*.epub -exec unzip -o {} \; But the extracted files end up all in the parent folder: Parent (Extracted Epub files) Child (Epub files) Child (Epub files) How to change that command, so that they are extracted in their own folders? Parent Child (Epub files and extracted Epub files) Child (Epub files and extracted Epub Files)
If you’re using GNU find, use its -execdir action: find -iname \*.epub -execdir unzip -o {} \; This will run unzip from each directory where files are found, ensuring that the files are extracted in the appropriate subdirectory. If you specify the start directory this will also work on at least some BSDs (OpenBSD in particular): find /path/to/start -iname \*.epub -execdir unzip -o -- {} \;
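As a sketch of the difference, pwd can stand in for unzip; run from a throwaway tree it shows that -execdir changes into each match's directory first (GNU find assumed, directory names made up):

```shell
cd "$(mktemp -d)"
mkdir -p Parent/Child1 Parent/Child2
touch Parent/Child1/a.epub Parent/Child2/b.epub
# pwd stands in for unzip; each match is handled from its own directory
find Parent -iname '*.epub' -execdir pwd \; | sort
```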
How to extract files recursively but keep them in their own folders?
1,479,135,807,000
I am trying to remove all ruby gems before uninstalling ruby. The command I am using is: sudo gem uninstall --all to achieve this. However, I have to type y continually to remove things. Is there a way I could achieve the same thing without having to type y to remove all of the dependencies?
Use yes: yes | sudo gem uninstall --all
Is there a way to use this command without having to type 'y' all the time?
1,479,135,807,000
I have a file named -ksh.l.15092015.log. To delete this file I run: rm -rf -ksh.l.15092015.log but I get this error: rm: Not a recognized flag: k Usage: rm [-firRe] [--] File... I also tried this: rm -rf *ksh* but I get the same error. Why? Thanks for your help!
Execute this: rm -rf ./-ksh.l.15092015.log The leading ./ stops the file name from beginning with a dash, so rm no longer tries to parse it as options. Your usage message (rm [-firRe] [--] File...) shows your rm also accepts --, which marks the end of options: rm -rf -- -ksh.l.15092015.log Your glob rm -rf *ksh* failed the same way because the expansion still starts with -ksh.l.15092015.log.
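Both spellings, reproduced against a throwaway file with the name from the question:

```shell
cd "$(mktemp -d)"
touch ./-ksh.l.15092015.log
rm -f ./-ksh.l.15092015.log     # ./ hides the leading dash from rm
touch ./-ksh.l.15092015.log
rm -f -- -ksh.l.15092015.log    # -- marks the end of options
ls -A                           # directory is empty again
```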
can't delete a file starting with a '-' [duplicate]
1,479,135,807,000
Do any of the common Unix/Linux platforms provide a way to query the system administrator contact information from the command line. Obviously a system admin might not include this information when they set up the server, but is there a way to ask for it when it is there (e.g., name or email)?
Assuming there's no organizational procedure, you can't ask your coworkers, and you don't know the company who runs the server... You could see if there's any contact information when you log in, e.g. in /etc/issue, /etc/motd Or you could try emailing root@<this hostname>. Or you could look at who is in the root or wheel group, and contact them. Either they are the system administrator, or they know who is. You can get their info with e.g. getent group 0, getent passwd <username>, and maybe finger.
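For the group and user lookups mentioned above, the commands look like this (Linux with glibc's getent; output fields vary by system):

```shell
getent group 0              # the GID-0 group and its members
getent passwd root          # root's entry; the GECOS field may name a person
cat /etc/motd 2>/dev/null   # any login-time contact notes, if present
```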
Get contact information of system administrator from the terminal
1,479,135,807,000
As for now, I use my CLI (Command Line Interface) with either rbash, bash, dash, or sh. Given this fact, one can assume that the CLI is not shell dependent, and that even if we were to delete all of these shells, we could use some primal/basic/ultralimited CLI. My question If I delete all the aforementioned shells in my GUIless operating system, will I still have a primal CLI of some sort? Notes I assume that the CLI won't be part of the kernel, because as I understand it, the kernel is usually accessible only via a proxy, like a shell. I was thinking about tmux and screen too but removed them from the headline and the question.
No. Your premise that these different shells are all running on top of some more basic CLUI, because they are all fairly similar, is incorrect. Each shell is separately implementing a CLI interface to the kernel, which all look somewhat similar (because they are all 'Unix' shells, which conform more or less rigidly to an accepted standard, and they all run on the same sort of terminal device). The CLUI is coded into each shell program separately - they are all independent and are not sharing some underlying CLUI. If you delete all the shells, then you will have no CLUI. That makes Tux cry :(
Using a CLI after deleting all shells (rbash, bash, dash, and sh)
1,479,135,807,000
I have a file like: A B C D E F I can get line 2 to 4 using sed -n 2,4p How can get all the lines except 2 to 4?
Your sample command is indeed the inverse of what you want. Read the man page and note that -n disables sed's default behaviour, which is to print each line that is processed. You disable the printing of lines, and then explicitly print only lines in the range 2,4. One solution would be to enable the default printing of lines, but tell sed to delete lines within your range: $ sed 2,4d << EOF > A B C D E F > EOF A E F
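The same inversion can also be written with -n plus the ! (negation) address; both forms below print everything except lines 2–4:

```shell
# d deletes the addressed lines; everything else is printed by default
printf '%s\n' A B C D E F | sed '2,4d'
# equivalent: suppress default output, print only the negated range
printf '%s\n' A B C D E F | sed -n '2,4!p'
```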
get all lines except x to y
1,479,135,807,000
When I type echo "${PATH}" | tr -s ':' '\n' | nl from inside a bash script and at the terminal I get the same result: 1 /home/nikhil/Documents/Git/Cs/Architecture/bin 2 /home/nikhil/.local/bin 3 /home/nikhil/opt/.nvm/versions/node/v16.13.0/bin 4 /home/nikhil/opt/bin 5 /usr/local/sbin 6 /usr/local/bin 7 /usr/sbin 8 /usr/bin 9 /sbin 10 /bin 11 /usr/games 12 /usr/local/games 13 /snap/bin 14 /home/linuxbrew/.linuxbrew/bin 15 /home/linuxbrew/.linuxbrew/sbin 16 /home/nikhil/.cargo/bin 17 /home/nikhil/.cabal/bin 18 /home/nikhil/opt/go/bin 19 /home/nikhil/.ruby/bin 20 /home/linuxbrew/.linuxbrew/opt/fzf/bin But when I type the following inside a bash script and on terminal I get different results: # From Terminmal $ type pandoc pandoc is aliased to `/usr/bin/pandoc' pandoc is /usr/bin/pandoc pandoc is /home/linuxbrew/.linuxbrew/bin/pandoc pandoc is /home/nikhil/.cabal/bin/pandoc # From inside bash script pandoc is /usr/bin/pandoc Why does type have different output from inside the bashscript and from the terminal? How can I make the bash script type output to be the same as terminal's?
It looks like you have type aliased to type -a. Aliases are not inherited by any shell scripts you run from your terminal, and scripts are run in non-interactive mode by default. Because scripts are run in a non-interactive shell, ~/.bashrc won't be sourced when bash runs the script, so aliases defined there won't be loaded. Without -a, type will "indicate how it would be interpreted if used as a command name" - i.e. it will show you what would actually be run. With -a, it will show you all possible matches - executables in $PATH (both direct and via following symlinks), aliases, functions) e.g. on my system, grep is aliased: $ type grep grep is aliased to `grep --directories=skip --binary-files=without-match' $ type -a grep grep is aliased to `grep --directories=skip --binary-files=without-match' grep is /bin/grep $ type -P grep /bin/grep My aliases aren't inherited if I run type in a (non-interactive) instance of bash: $ bash -c 'type grep' grep is /bin/grep If I force bash to be run in interactive mode, it will source ~/.bashrc (which, in turn, sources my ~/.bash-aliases file). $ bash -i -c 'type grep' grep is aliased to `grep --directories=skip --binary-files=without-match' NOTE: it's not a good idea to just make your scripts use bash -i as their interpreter. Instead, define any aliases or functions needed in your script in the script itself, or source them from another file. Or just use the command with whatever options are needed in the script - aliases are a convenience to minimise repetitive typing, which isn't really needed in a script. BTW, type's -P option is generally the most useful option in a script. See help type: type: type [-afptP] name [name ...] Display information about command type. For each NAME, indicate how it would be interpreted if used as a command name. 
Options: -a display all locations containing an executable named NAME; includes aliases, builtins, and functions, if and only if the `-p` option is not also used -f suppress shell function lookup -P force a PATH search for each NAME, even if it is an alias, builtin, or function, and returns the name of the disk file that would be executed -p returns either the name of the disk file that would be executed, or nothing if `type -t NAME` would not return `file` -t output a single word which is one of `alias`, `keyword`, `function`, `builtin`, `file` or ``, if NAME is an alias, shell reserved word, shell function, shell builtin, disk file, or not found, respectively Arguments: NAME Command name to be interpreted. Exit Status: Returns success if all of the NAMEs are found; fails if any are not found.
`type` command inside bash script, is not showing all paths
1,479,135,807,000
Let's say I have the following file: A random Title 1 BLOCK 1- a block of text that can contain any character and it also can contain multiple lines BLOCK A random Title 2 BLOCK 2- a block of text that can contain any character and it also can contain multiple lines BLOCK A random Title 3 BLOCK 3- a block of text that can contain any character and it also can contain multiple lines BLOCK This file can have multiple blocks of text like these ones. I'd like to distribute the parameters from this text on the following JSON: [ { "title": "A random Title 1", "body": "1- a block of text that can contain any character\nand it also can contain multiple lines" }, { "title": "A random Title 2", "body": "2- a block of text that can contain any character\nand it also can contain multiple lines" }, { "title": "A random Title 3", "body": "3- a block of text that can contain any character\nand it also can contain multiple lines" } ] I know I could solve this problem by creating a loop that goes char by char inside this file and then I could create the logic to divide all the variables properly inside the JSON. But I'm wondering if there's a more simple solution using the command line. Can I use AWK for distributing parameters that I get inside a file on a JSON output? Or am I missinterpreting the functionality of AWK in this case?
In the TXR language, we could do it like this: $ txr data.txr data [{"title":"A random Title 1","body":"1- a block of text that can contain any character\nand it also can contain multiple lines"}, {"title":"A random Title 2","body":"2- a block of text that can contain any character\nand it also can contain multiple lines"}, {"title":"A random Title 3","body":"3- a block of text that can contain any character\nand it also can contain multiple lines"}] Where the code in data.txr is: @(bind vec @(vec)) @(repeat) @title BLOCK @(collect) @lines @(until) BLOCK @(end) @(cat lines "\n") @(do (vec-push vec #J^{"title" : ~title, "body" : ~lines})) @(end) @(do (put-jsonl vec)) We build up a vector of hashes: the underlying data structure corresponding to the desired JSON. The #J prefix indicates a JSON literal embedded in Lisp. Here, we have a ^ which indicates the literal is being quasiquoted; ~ characters indicate the unquotes which insert values into the template: the title, and the expression that calculates the body from the collected lines catenating strings with a newline. put-jsonl means put-json, with a newline after it. By default, on the *stdout* stream. 
Indentation is recommended, which looks like this: @(bind vec @(vec)) @(repeat) @ title BLOCK @ (collect) @ lines @ (until) BLOCK @ (end) @ (cat lines "\n") @ (do (vec-push vec #J^{"title" : ~title, "body" : ~lines})) @(end) @(do (put-jsonl vec)) It could be done with a sort of Awk; the Awk macro in TXR Lisp: $ txr data.tl data [{"title":"A random Title 1","body":"1- a block of text that can contain any character\nand it also can contain multiple lines"}, {"title":"A random Title 2","body":"2- a block of text that can contain any character\nand it also can contain multiple lines"}, {"title":"A random Title 3","body":"3- a block of text that can contain any character\nand it also can contain multiple lines"}] Code: (awk (:set rs "\n\n" fs "\n") (:let (vec (vec))) ((and (equal [f 1] "BLOCK") (equal [f -1] "BLOCK")) (vec-push vec #J^{"title":~[f 0], "body":~(cat-str [f 2..-1])}) (next)) (t (error "bad data")) (:end (put-jsonl vec))) The (:set ...) block is for initializations, and we use that to set up the record separator rs and field separator fs, which are analogous to the original Awk RS and FS. With a field separator that is a newline and a record separator that is a double newline, we get each information block as a record whose fields look like this: "title" "BLOCK" "body1" "body2" ... "bodyn" "BLOCK" In the awk macro, the fields are available as the list named f. The main logic is a (condition action) pair. The condition is: (and (equal [f 1] "BLOCK") (equal [f -1] "BLOCK")) which is true if the second element of f and the last element are the string "BLOCK". If that is true, the action is executed, which extracts the pieces and adds an item to vec with the help of a JSON quasiquote like in the first program. We also execute (next) to move to the next record to avoid hitting the next condition-action pair. The next condition-action pair, (t (error ...)) always executes because t is true, and throws an exception. We print the JSON in the (:end ..) 
block, which is like END { ... } in classic Awk. Speaking of error checking, the first program tolerates bad data to some extent; there are ways to fine tune it to reject bad inputs. For instance there can be junk between the records which is silently skipped, and if the last closing BLOCK is missing, that's okay.
Can I use AWK to distribute parameters inside a JSON file?
1,479,135,807,000
I'm looking for a quick and not-so CPU intensive solution to convert 100,000+ lines of text into decimal format. # random ascii string='QPWOEIRUTYALSKDJFHGZMXNCBV,./;[]75498053$#!@*&^%(*' convert () { for ((b=0; b<${#string}; b++ )) do # convert to dec, append comma character, add to array arr+=$(printf '%d,' "'${string:$b:1}"); done; # show array contents printf '%s' "${arr[@]::-1}" } time convert The above works well for short lines, the task is completed in less than a second: $ ./stackexchange.sh 81,80,87,79,69,73,82,85,84,89,65,76,83,75,68,74,70,72,71,90,77,88,78,67,66,86,44,46,47,59,91,93,55,53,52,57,56,48,53,51,36,35,33,64,42,38,94,37,40,42 real 0m0.059s user 0m0.032s sys 0m0.016s But it's not a viable solution for files that contain many characters. The below function causes my CPU to spike and basically never completes the task. Well, I press Ctrl+c to stop it after several minutes. Here's the same script with a modified string variable. # random ascii string="$(cat /tmp/100000-characters.txt)" convert () { for ((b=0; b<${#string}; b++ )) do arr+=$(printf '%d,' "'${string:$b:1}"); done; printf '%s' "${arr[@]::-1}" } time convert I tried a while loop, as well. It managed to convert the 100,000 characters file but still takes a long time to complete. string="$(cat /tmp/100000-characters.txt)" convert () { # iterate through each line while read -r -n1 char; do arr+=$(printf '%d,' "'$char"); done <<< "$string" printf '%s' "${arr[@]::-3}" } time convert Is there an elegant/simple solution to convert a massive text file into comma-separated decimal values?
Perl to the rescue! perl -nE 'say join ",", map ord, split //' < file -n reads the input line by line and runs the code for each line split on an empty regex // splits the input to individual characters map maps each character to its ord join creates a string back from the characters, inserting commas between them say outputs the result More tweaking might be needed if you don't want to process the input line by line.
Bash: convert 100,000+ characters to decimal format?
1,479,135,807,000
Call me a dreamer, but imagine a world where "every" CLI tool we use had an option to produce a stable output, say in JSON.  Programmatic use of CLI tools like ls, free, df, fdisk would be a breeze.  The way GNU standardized argument syntax conventions, can it standardize the output along the lines of "--json produces a tool-specific report formatted according to JSON spec"?  Has this been attempted and rejected perhaps?  If not, how do we push for something like this?
You would advocate for this on the mailing lists dedicated to the specific tools you are interested in. The available GNU mailing lists are available here: https://lists.gnu.org/mailman/listinfo/ If one or other of the tools you are interested in is not represented by any GNU mailing list, then you would have to investigate who's maintaining it and whether there's an associated mailing list that they maintain. Note that feature requests to open source projects have a much higher chance of getting accepted if you can provide a patch of the source code that implements the feature and that works.
How to advocate for GNU to add a "--json" parameter for all CLI commands to be compliant? [closed]
1,479,135,807,000
w lists all logged in users. Is there any way to get the working directory for the logged in users?
The current working directory is a property of each process, not of users. On Linux, you can get the current working directory of a process of id $pid by doing a readlink() on /proc/$pid/cwd for instance by using the readlink/realpath command or the :a/:A/:P glob qualifiers in zsh. Unless you're superuser, that only works for your own processes though (the current working directory, like the other files a process is currently accessing, is potentially sensitive information). $ ps PID TTY TIME CMD 9467 pts/1 00:00:00 zsh 14074 pts/1 00:00:00 ps $ readlink /proc/9467/cwd /usr/local $ printf '%s\n' /proc/9467/cwd(:P) /usr/local More portably, you can use lsof: $ lsof -ap 9467 -d cwd COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME zsh 9467 chazelas cwd DIR 253,0 4096 786604 /usr/local Then you can combine it with -u user instead of -p pid to get the cwd of all the processes running as that user: sudo lsof -au user -d cwd On some systems, like FreeBSD, sudo (to run the command with superuser privileges) is not required as access to that information is not restricted there.
Get working directory of logged in users
1,479,135,807,000
I have a file that looks like this: Heading1,Heading2 value1,value2 And another one that looks like this: Row1 Row2 How can I combine the two to become: Row1,Heading1,Heading2 Row2,value1,value2 Effectively appending a column in the place of the first column?
Job for paste: paste -d, f2.txt f1.txt -d, sets the delimiter as , (instead of tab) With awk: awk 'BEGIN {FS=OFS=","} NR==FNR {a[NR]=$0; next} {print a[FNR], $0}' f2.txt f1.txt BEGIN {FS=OFS=","} sets the input and output field separators as , NR==FNR {a[NR]=$0; next}: for first file (f2.txt), we are saving the record number as key to an associative array (a) with values being the corresponding record {print a[FNR], $0}: for second file, we are just printing the record with the value of record number-ed key from a prepended Example: % cat f1.txt Heading1,Heading2 value1,value2 % cat f2.txt Row1 Row2 % paste -d, f2.txt f1.txt Row1,Heading1,Heading2 Row2,value1,value2 % awk 'BEGIN {FS=OFS=","} NR==FNR {a[NR]=$0; next} {print a[FNR], $0}' f2.txt f1.txt Row1,Heading1,Heading2 Row2,value1,value2
Append first column to file
1,479,135,807,000
I want to change my root password every day based on the date. The password will be like a combination of a string and the date. The below code is working fine. echo -e "pass"$(date +"%d%m%Y")"\n""pass"$(date +"%d%m%Y") | passwd root But how do I call it each time the system starts and at midnight when the date changes (if the system is on)?
I'm not sure why you would want to do that. If you're concerned about security, if someone discovers your password on 1 July, they'll know it on 31 July or 15 September... To answer your question, if you want to ensure that the password update is done either at a scheduled time or when the system restarts, you want to install anacron. It can do periodic scheduling without assuming the system is on all the time. I'm not sure what distribution you're using, but it should be in your package archives. Alternatively, you can use a mixture of traditional cron (changing the password at midnight) and an init script (to handle the case of rebooting) to ensure that the password is always up-to-date. In either case, put the commands to change the password into a script (say, /usr/local/sbin/rootpass.sh) and then call that script using cron or anacron and from your init script.
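A minimal sketch of the pieces (the script path and the "pass" prefix are taken from the question; the passwd line is commented out so the sketch is safe to run):

```shell
# Hypothetical /usr/local/sbin/rootpass.sh
p="pass$(date +%d%m%Y)"
printf 'candidate password: %s\n' "$p"
# printf '%s\n%s\n' "$p" "$p" | passwd root    # uncomment to apply
```

A crontab entry such as 0 0 * * * root /usr/local/sbin/rootpass.sh covers midnight; calling the same script from an init script, or handing it to anacron, covers the reboot case.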
Dynamic change of Linux root password everyday
1,479,135,807,000
I have a.txt,b.txt,c.txt. Each has different numbers as below: a.txt: 12 14 111 1 15 2 b.txt 12 18 22 23 1 2 c.txt 12 14 15 16 17 1200 The output should contain all the numbers from each file, but without any duplication. Is there a command to export such a thing into a text file? The actual text files include hundreds of rows.
If there are many files, you could do it like this: grep '' *.txt | cut -d: -f2 | sort -u > output.txt
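Walking through it with the question's data: when grep is given more than one file it prefixes each line with filename:, cut strips that prefix, and sort -u drops duplicates. (sort -u a.txt b.txt c.txt alone does the same job; add -n for numeric order.)

```shell
cd "$(mktemp -d)"
printf '12\n14\n1\n'    > a.txt
printf '12\n18\n1\n'    > b.txt
printf '12\n14\n1200\n' > c.txt
grep '' *.txt | cut -d: -f2 | sort -u
# lexical order: 1 12 1200 14 18 (use sort -un for numeric order)
```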
how to export all numbers that are unique in a few text files into another file?
1,479,135,807,000
How can I delete all 'nohup.out' files within a directory recursively from my terminal? I'm using CentOS.
There can't be multiple files named nohup.out in a single directory, so I assume you mean that you want to remove it recursively: find . -name nohup.out -exec rm {} + If you are using GNU find, you can use -delete: find . -name nohup.out -delete In bash4+, you can also use globstar: shopt -s globstar dotglob rm -- **/nohup.out Note, however, that globstar traverses symlinks when descending the directory tree, and may break if the length of the file list exceeds the limit on the size of arguments.
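A throwaway tree confirms that only the matching files are removed (GNU find's -delete assumed):

```shell
cd "$(mktemp -d)"
mkdir -p a/b c
touch a/nohup.out a/b/nohup.out c/keep.txt
find . -name nohup.out -delete
find . -type f
# prints: ./c/keep.txt
```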
Delete all 'nohup.out' within a directory recursively
1,479,135,807,000
is there a mysql command that will show me the tables and how many rows they have in them?
Starting in MySQL 5 you can query the virtual table information_schema, which contains metadata about the tables within your MySQL database. To find out the number of rows for every table across every database:

$ mysql -u root -p \
    -e "select table_schema,table_name,table_rows from information_schema.tables;"
+---------------------+---------------------------------------+------------+
| table_schema        | table_name                            | table_rows |
+---------------------+---------------------------------------+------------+
| information_schema  | CHARACTER_SETS                        |       NULL |
| information_schema  | COLLATIONS                            |       NULL |
| information_schema  | COLLATION_CHARACTER_SET_APPLICABILITY |       NULL |
...
...
| arrdb01             | active_part                           |         24 |
| arrdb01             | audit_record                          |         19 |
| arrdb01             | code                                  |          8 |
| arrdb01             | part_obj                              |          0 |
| arrdb02             | active_part                           |         24 |
| arrdb02             | audit_record                          |         14 |
| arrdb02             | code                                  |          9 |
| arrdb02             | part_obj                              |          1 |
| cacti               | cdef                                  |          8 |
| cacti               | cdef_items                            |         22 |
| cacti               | colors                                |        215 |
...
...

The above command is selecting 3 columns from the information_schema table:

- table_schema (database name)
- table_name
- table_rows

To see all the fields that it contains you can use the describe command:

$ mysql -u root -p -e "describe information_schema.tables"
+-----------------+--------------+------+-----+---------+-------+
| Field           | Type         | Null | Key | Default | Extra |
+-----------------+--------------+------+-----+---------+-------+
| TABLE_CATALOG   | varchar(512) | YES  |     | NULL    |       |
| TABLE_SCHEMA    | varchar(64)  | NO   |     |         |       |
| TABLE_NAME      | varchar(64)  | NO   |     |         |       |
| TABLE_TYPE      | varchar(64)  | YES  |     | NULL    |       |
| ENGINE          | varchar(64)  | YES  |     | NULL    |       |
| VERSION         | bigint(21)   | YES  |     | NULL    |       |
| ROW_FORMAT      | varchar(10)  | YES  |     | NULL    |       |
| TABLE_ROWS      | bigint(21)   | YES  |     | NULL    |       |
| AVG_ROW_LENGTH  | bigint(21)   | YES  |     | NULL    |       |
| DATA_LENGTH     | bigint(21)   | YES  |     | NULL    |       |
| MAX_DATA_LENGTH | bigint(21)   | YES  |     | NULL    |       |
| INDEX_LENGTH    | bigint(21)   | YES  |     | NULL    |       |
| DATA_FREE       | bigint(21)   | YES  |     | NULL    |       |
| AUTO_INCREMENT  | bigint(21)   | YES  |     | NULL    |       |
| CREATE_TIME     | datetime     | YES  |     | NULL    |       |
| UPDATE_TIME     | datetime     | YES  |     | NULL    |       |
| CHECK_TIME      | datetime     | YES  |     | NULL    |       |
| TABLE_COLLATION | varchar(64)  | YES  |     | NULL    |       |
| CHECKSUM        | bigint(21)   | YES  |     | NULL    |       |
| CREATE_OPTIONS  | varchar(255) | YES  |     | NULL    |       |
| TABLE_COMMENT   | varchar(80)  | NO   |     |         |       |
+-----------------+--------------+------+-----+---------+-------+

References

- Overview of MySQL information_schema Database With Practical Examples
- Chapter 20. INFORMATION_SCHEMA Tables
- MySQL command to show list of databases on server
What mysql command can show me the tables in a database and how many rows there are? [closed]
1,479,135,807,000
I'm using a line like this to search for a bunch of files:

find . -name "page.php"

The results are hundreds of lines and I can't see them all. (I'm trying to just copy/paste it into Excel to analyze it.) I tried this:

find . -name "index1.php" | less

That did something, but I was in a screen that I couldn't figure out how to exit out of. I had to close PuTTY and open it up again. What is the best way to limit the results to the viewable area so I can copy, then hit return and get the next group? Or is there a way to make PuTTY not truncate the results? Thanks in advance.
You want to get this into Excel? Why copy and paste?

find . -name "index1.php" > out.txt

Copy out.txt to your Excel machine (SCP is the easiest way), open it up. (And for next time: the screen you were stuck in was the less pager; press q to quit it.)
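As a hedged sketch of the redirect-to-file approach (the directory and file names below are made up):

```shell
mkdir -p /tmp/finddemo/sub
touch /tmp/finddemo/index1.php /tmp/finddemo/sub/index1.php
# Redirect find's output to a file instead of fighting a pager in PuTTY.
find /tmp/finddemo -name "index1.php" > /tmp/finddemo/out.txt
wc -l < /tmp/finddemo/out.txt   # number of matches saved to the file
```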
Limit find results in SSH
1,479,135,807,000
I found out that to look for file size in bytes, I use 'c'. So I can look for a file size of 1000 bytes by using:

find . -size 1000c

But what about different kinds of sizes, such as MB, GB or even bits? What characters or letters do I need to use?
POSIX only specifies no suffix or a c suffix. With no suffix, values are interpreted as 512-byte blocks; with a c suffix, values are interpreted as byte counts, as you've determined. Some implementations support more suffixes; for example GNU find supports

b   for 512-byte blocks
c   for bytes
w   for 2-byte words
k   for kibibytes
M   for mebibytes
G   for gibibytes
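A quick sketch of these suffixes in action (GNU find assumed; the paths and file sizes are made up for the example):

```shell
mkdir -p /tmp/sizedemo
# A 2 MiB file and a 10-byte file.
dd if=/dev/zero of=/tmp/sizedemo/big.bin bs=1M count=2 2>/dev/null
printf '0123456789' > /tmp/sizedemo/small.bin
find /tmp/sizedemo -type f -size +1M    # matches big.bin only
find /tmp/sizedemo -type f -size -100c  # matches small.bin only
```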
What are the file size options for "find . -size" command?
1,479,135,807,000
I have a CSV file with only 2 columns (but lots of rows) and the occasional irregular row which always starts with an asterisk (*) character and may span more than two columns. Using just the Linux command line, the intended behavior is:

- If 3 or more consecutive data rows have the same value for column two, delete the middle rows. Start and end rows are retained.
- Retain the irregular rows that begin with an asterisk.

For example, if I have a CSV with this content:

0,Apple
1,Apple
2,Apple
* Checkpoint
* Another checkpoint
3,Apple
4,Apple
5,Box
6,Box
7,Citrus
8,Box
9,Apple
10,Apple
11,Apple
12,Dove
13,Citrus
* Sudden checkpoint,
* Leftover checkpoint note 1,
* Leftover checkpoint note N
14,Citrus
15,Citrus
16,Citrus
17,Apple
18,Citrus

It should look like below after:

0,Apple
* Checkpoint
* Another checkpoint
4,Apple
5,Box
6,Box
7,Citrus
8,Box
9,Apple
11,Apple
12,Dove
13,Citrus
* Sudden checkpoint,
* Leftover checkpoint note 1,
* Leftover checkpoint note N
16,Citrus
17,Apple
18,Citrus

In the example above, lines 1 to 3, 10, and 14 to 15 were removed. Thank you very much in advance for any answers. Cheers
Using awk:

BEGIN { FS = "," }

/^[*]/ { print; next }

{
    if (NR > 1 && $2 == word) {
        tail = $0
        ++count
    } else {
        if (count) print tail
        word = $2; count = 0
        print
    }
}

END { if (count) print tail }

This awk script unconditionally prints all lines that start with *. If the line is not such a line, and if the word in the second field is a word that we have remembered, store the record in the variable tail ("tail" as in the last record of a run of records with the same word in the second field). If the second field is not the same as in the previous, then print the tail record if there were more than one record in the previous run of records, then remember the new word and print the current record (the first record in a new run of one or more records with the same word in the second field).

Testing it on the data provided and assuming it is simple CSV (meaning no embedded delimiters or newlines etc.):

$ awk -f script file
0,Apple
* Checkpoint
* Another checkpoint
4,Apple
5,Box
6,Box
7,Citrus
8,Box
9,Apple
11,Apple
12,Dove
13,Citrus
* Sudden checkpoint,
* Leftover checkpoint note 1,
* Leftover checkpoint note N
16,Citrus
17,Apple
18,Citrus

Similar to the above but using Miller (mlr), which is CSV-aware and would be able to handle CSV records with complex quoted strings:

if (is_not_null(@word) && $2 == @word) {
    @tail = $*;
    false            # omit this record for now
} else {
    is_not_null(@tail) {
        emit @tail   # emit the tail record
    }
    @word = $2;
    @tail = null;
    true             # emit this record
}
end {
    is_not_null(@tail) {
        emit @tail
    }
}

This is an expression for Miller's filter sub-command to include or omit records from the input data set using very similar logic to the awk code above. We can make Miller pass through the lines starting with the character * by using --pass-comments-with='*' on the command line. Using --csv with -N treats the input as header-less CSV.

$ mlr --pass-comments-with='*' --csv -N filter -f script file
0,Apple
* Checkpoint
* Another checkpoint
4,Apple
5,Box
6,Box
7,Citrus
8,Box
9,Apple
11,Apple
12,Dove
13,Citrus
* Sudden checkpoint," * Leftover checkpoint note 1"," * Leftover checkpoint note N"
16,Citrus
17,Apple
18,Citrus
Process CSV file through command line: Remove only middle rows between consecutive row entries if consecutive entries have the same 2nd column value
1,479,135,807,000
When we run !! in a shell session, it prints and runs the last executed command. I was wondering if it is an alias for another, longer-written bash built-in command, and would like to know where it is defined. I also know we can interact with the shell history with the fc command. So, what exactly is !!?
Bash natively provides a 'history' feature. Commands typed in on the shell are saved to a file, which allows each command to be recalled and executed at a later point.

Two built-in commands are used to work with this feature. The fc command is used to select one or more commands from history, modify them if required, and then execute them. The history itself is managed using the history command, which includes options to save or clear the command history.

Another part of this feature is history expansion, which is a way to re-use commands/arguments from history as part of input. To use history expansion, you would specify the history expansion character ('!' by default) followed by an identifier. This identifier can consist of three components, each separated by a colon (':'). The components are:

- Event designators, which identify the target line in the history list.
- Word designators, which identify the target word.
- Modifiers, which alter the expanded word.

Since event designators refer to specific lines, one of the forms in which they can be specified is !n - to refer to the nth line of history. Similarly, !-n refers to the nth line from the end of the file. !! is defined as a synonym for !-1, which refers to the last line of history and therefore the last executed command.

For more details, see the Bash manual's section on Using History Interactively.
!! bash command
1,479,135,807,000
Create files:

touch a1.txt a2.txt a3.txt
touch s1.mp3 s2.mp3 s3.mp3

Then I do

find . -name "*.txt" -or -type f -print

and it's showing only s1.mp3 s2.mp3 s3.mp3. Why is it not showing the .txt files?
Because of the precedence of the operators: the implicit AND (-a) between -type f and -print has higher precedence than the OR (-o); your command is similar to

find . \( -name "*.txt" \) -or \( -type f -print \)

while you probably want

find . \( -name "*.txt" -or -type f \) -print

to print all the files.
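Reproducing the question's setup in a scratch directory (path made up) shows the effect of the grouping:

```shell
mkdir -p /tmp/precdemo && cd /tmp/precdemo
touch a1.txt a2.txt a3.txt s1.mp3 s2.mp3 s3.mp3
find . -name "*.txt" -or -type f -print        # only the three .mp3 files
find . \( -name "*.txt" -or -type f \) -print  # all six files
```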
Find command output doesn't give desire result
1,479,135,807,000
I am trying to save the list of all files, including size, in a text file. Using the command

find * > yourfilename.csv

I can export only the names of the files. How can I also add the file size?
With the bash shell on Linux (or on a system with a compatible stat implementation):

shopt -s globstar dotglob nullglob

for pathname in ./**/*; do
    if [[ -f $pathname ]] && [[ ! -h $pathname ]]; then
        stat -c $'%s\t%n' "$pathname"
    fi
done >outfile.csv

For bash on OpenBSD (or on some other BSD systems with a similar implementation of stat), use

shopt -s globstar dotglob nullglob

for pathname in ./**/*; do
    if [[ -f $pathname ]] && [[ ! -h $pathname ]]; then
        stat -f '%z%t%N' "$pathname"
    fi
done >outfile.csv

Both of these would iterate over all names in or below the current directory. For each regular file, stat is called and the apparent file size in bytes and the file's pathname is printed with a tab character in-between.

Using only the zsh shell:

zmodload -F zsh/stat b:zstat
zstat -n +size ./**/*(D.)

This makes the built-in zstat command available in the shell and then uses that to get the pathname and apparent file size in bytes of each regular file in or below the current directory (including files with hidden names). The output might look something like this for a shallow directory with only a few small files:

./file1 136
./file2 136
./somedir/file3 136
...

so not very CSV-like. This could also be done in a simple loop, which would allow us to format it a bit more precisely:

zstat -L -A data -n +size ./**/*(D.)

for name size in $data; do
    printf '%s\t%s\n' $size $name
done >outfile.csv

This would call zstat for all the regular files in or below the current directory, store the result in the array data, and then loop over the entries of that array (will be alternating pathnames and file sizes). For each pathname/size pair, print the size in bytes first, followed by a tab and the pathname. Here, the output is put into outfile.csv in the current directory.

A shortened version that gets rid of the loop but pays for it in somewhat more obscure code (unless you're familiar with printf format strings):

zstat -L -A data -n +size ./**/*(D.)
printf '%2$s\t%1$s\n' $data >outfile.csv

The printf format %2$s\t%1$s\n specifies that two arguments should be outputted as strings, but in reverse order (the data array stores pairs of pathnames and file sizes, in that order), with a tab in-between them. The format will be reused for each pair of pathname and file size until the data array is exhausted.

None of the above variations takes care to CSV-quote pathnames that contain tabs or newlines.
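If GNU find is available, a similar tab-separated size/name listing can be produced without any shell loop; a sketch with made-up files (the -printf directives %s and %p are GNU extensions):

```shell
mkdir -p /tmp/sizelist/somedir
printf 'hello\n' > /tmp/sizelist/file1        # 6 bytes
printf 'hi\n' > /tmp/sizelist/somedir/file2   # 3 bytes
# %s = size in bytes, %p = pathname; the output file lives outside the
# searched tree so find does not pick it up.
find /tmp/sizelist -type f -printf '%s\t%p\n' > /tmp/sizelist.out
cat /tmp/sizelist.out
```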
save in a text file list of files with the size of each file
1,479,135,807,000
Let's say I want to search a file for a string that begins with a dash, say "-something":

grep "-something" filename.txt

This throws an error, however, because grep and other executables, as well as built-ins, all want to treat this as a command-line switch that they don't recognize. Is there a way to prevent this from happening?
For grep use -e to mark regex patterns:

grep -e "-something" filename.txt

For utilities in general, use --, which marks the "end of options" for most standards-compliant option parsers (including GNU grep's):

grep -- "-something" filename.txt
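A quick sketch of both spellings against a throwaway file (path made up):

```shell
printf '%s\n' -something other > /tmp/dashdemo.txt
grep -e "-something" /tmp/dashdemo.txt   # -something
grep -- "-something" /tmp/dashdemo.txt   # -something
```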
Stop executables and built-ins from interpreting a string argument starting with - as a switch? [duplicate]
1,479,135,807,000
I have a list of files named EX5_##.bak. I want to put each one in a directory named EX5_##. Example:

EX5_01.bak
EX5_02.bak
EX5_03.bak

and I want to put them in directories, so when I type ls -l I get:

EX5_01
EX5_02
EX5_03

and so forth, where those are directory names and the files of the same name are in the directory. How do I go about this? Is there a single command or Bash script that I can write to achieve this?
A simple shell loop:

#!/bin/sh

for file in ./EX5_??.bak; do
    dir=${file%.bak}
    mkdir -p "$dir" && mv -i "$file" "$dir"
done

This would iterate over all your EX5_??.bak files in the current directory (? matches a single character). For each file, it creates a directory name by stripping the .bak suffix off from the filename (this is what ${file%.bak} does). It then creates the directory if it did not already exist and, if there was no issue with creating the directory, moves the file over into it.

If you need to be more precise with the selection of files, you may want to use ./EX5_[0-9][0-9].bak as the pattern to iterate over. This could be useful if you also have files like EX5_AA.bak that you don't want to include in the loop.

The -p option to mkdir makes the utility not treat it as an error that the directory already exists (it also makes it create intermediate directories, but that's not really used in this instance). The -i option to mv makes it ask for confirmation before overwriting any files in the target directory. We use it here as a safety catch.
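A sketch of the loop at work in a scratch directory (the -i flag is dropped here so it runs unattended; the paths are made up):

```shell
mkdir -p /tmp/bakdemo && cd /tmp/bakdemo
touch EX5_01.bak EX5_02.bak
for file in ./EX5_??.bak; do
    dir=${file%.bak}              # ./EX5_01.bak -> ./EX5_01
    mkdir -p "$dir" && mv "$file" "$dir"
done
ls EX5_01   # EX5_01.bak
```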
Name a series of directories after file names
1,479,135,807,000
I just got a script that uploads image files to a web host; at the end I get a file with all the links (one link per line), and I would like to add [img] at the beginning of each link and [/img] at the end.
One way, with the stream editor, sed:

sed -e 's/^/[img]/' -e 's!$![/img]!' < input > output

Here I've changed the delimiter for the second search & replacement from / to ! so that the forward-slash in the replacement text doesn't need to be escaped.

GNU sed would allow an in-place edit with the -i option:

sed -i -e 's/^/[img]/' -e 's!$![/img]!' input

Alternatively, you could edit the file in-place with ed:

ed -s input <<< $'1,$s/^/[img]/\n1,$s!$![/img]!\nw\nq'
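Trying the sed command on a couple of made-up links (file path is arbitrary):

```shell
printf '%s\n' 'http://example.com/a.png' 'http://example.com/b.png' > /tmp/links.txt
sed -e 's/^/[img]/' -e 's!$![/img]!' /tmp/links.txt
# [img]http://example.com/a.png[/img]
# [img]http://example.com/b.png[/img]
```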
Script to add balise [img][/img] on each line of text linux? [duplicate]
1,479,135,807,000
I have a problem: I'm trying to output a list of movies, without the directory names, into a file, but I get an error about a missing argument to -exec. Below is the code:

$ find . -name "*.avi" -o -name "*.mkv" -exec basename \{} \ > ~/Bash/test/rm/films.txt
Try this instead:

$ find . \( -name "*.avi" -o -name "*.mkv" \) -exec basename {} \; > ~/Bash/test/rm/films.txt
Not found argument in -exec [duplicate]
1,479,135,807,000
I have a file named ~/myScripts/assignments.sh which contains various assignments, such as variables and aliases. Here's a taste from that file:

drt="/var/www/html"
rss="/etc/init.d/php*-fpm restart && systemctl restart nginx.service"

alias drt="cd ${drt}"
alias rss="${rss}"

I use these assignments frequently from the moment I finish installing my operating system, especially to write neater scripts for installation, configuration and maintenance of my webserver and adjacent software. Thus, it's vital that this file will always be exported, that its data will always be available in all Bash sessions, immediately after any Bash session has started (also after a reboot). To achieve that, I thought of the following lousy script:

source ~/myScripts/assignments.sh # Immediate availability;
printf "\n%s" "source ~/myScripts/assignments.sh" >> ~/.profile
cat > "cron_daily.sh" <<< "source ~/myScripts/assignments.sh"
crontab <<-"CRONTAB"
	0 0 * * * ~/myScripts/cron_daily.sh # Permanent availability (after the one minute gap);
CRONTAB

What would be a good approach to achieve the state I described above?

Update

The reason I'd rather avoid sourcing the file and then adding source ~/myScripts/assignments.sh inside bash.bashrc is that I've seen some devops reluctant to source bash.bashrc in general, although when the file isn't customized, or has just such a small change, it is generally not a problem.
If the assignments are necessary for "all" bash sessions, simply put the file somewhere like /etc/assignments and source it globally from /etc/bash.bashrc. Append this to /etc/bash.bashrc:

source /etc/assignments

That way, you have all your definitions available in all bash sessions, for every user, and can maintain the information in a separate file.
Keep file data always available to Bash, without repeated execution (also after reboot)
1,479,135,807,000
I'm trying to learn linux commands, but the output wasn’t what I expected.
ls -1 (one) and ls -l (ell) both work, but produce different listings. Your textbook shows ls -l; in your terminal you have typed ls -1. man ls on your local system for an explanation of all the available options.
Why doesn't the “ls -1” command work correctly? [closed]
1,479,135,807,000
I need to run the same command separately for each file, at the same time. I've tried

for file in *.txt; do ./script <"$file"; done

but it starts the first one and waits until it gets finished, then goes to the next one. I need to start them all together.
If script doesn't require any input from the user, you could use the shell's job-control features to run it in the background:

for file in *.txt; do ./script <"$file" & done

When you append & to a command, it's run in the background. Look up job control in the man page for bash (or your preferred shell) for details.
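A sketch with a stand-in for ./script (here wc -c; the file names and paths are made up), plus wait, which blocks until every background job has finished:

```shell
mkdir -p /tmp/jobdemo && cd /tmp/jobdemo
printf 'a\n' > 1.txt
printf 'bb\n' > 2.txt
for file in *.txt; do
    wc -c < "$file" > "$file.out" &   # each file is processed in parallel
done
wait    # do not proceed until all background jobs are done
cat 1.txt.out 2.txt.out
```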
run same command for multiple file at the same time [duplicate]
1,465,302,621,000
How can I include spaces as part of a variable used in an svn command for RHEL bash scripting? Or, if there's something else wrong with the following, please advise. The SVN URL variable has no spaces, and this section is working:

svn checkout $svnUrl $checkoutDir --username $username --password $password --non-interactive --depth immediates --no-auth-cache

But the SVN update command that works when hard-coded is not working as a variable:

updateSubProject="&& svn update --set-depth infinity --no-auth-cache --username $username --password $password --non-interactive"

cd project-dir
$updateSubProject
cd ../another-project
$updateSubProject
Better would be to make a function to do it, like

updateSubProject() {
    pushd "$1"
    svn checkout "$svnUrl" "$checkoutDir" --username "$username" --password "$password" --non-interactive --depth immediates --no-auth-cache
    popd
}

updateSubProject project-dir
updateSubProject path/to/another-project

this way you aren't trying to store code in a variable, and you'll avoid a lot of the word-splitting issues.
What's the proper way to use a variable with spaces in part of a shell script command? [duplicate]
1,465,302,621,000
With bash, how do I count new lines appended to a file in the last 1 minute? What would be smartest if you want to count new lines for the last minute in multiple files simultaneously and get output? I have tried a few things, but I cannot find a good solution. I might try to use a programming language instead of Bash.
Use wc -l twice and subtract the results.

before=$(wc -l < yourfile)
sleep 60
after=$(wc -l < yourfile)
let dif=after-before
echo "$dif"

You may also just print the last $dif lines:

tail -n$dif yourfile

Although more lines could have been appended in the meantime, none of the operations are atomic here.

If you want to track incremental changes (at least the number of added lines), just log the output of wc -l every minute. However, there is no way to do this without knowing in advance that you will need this. Unless you have timestamps on every line, you can't know what part of the file was added when.
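The same subtraction can be tried instantly by simulating the appends instead of sleeping (the file name is made up):

```shell
printf 'one\ntwo\n' > /tmp/growing.log
before=$(wc -l < /tmp/growing.log)
printf 'three\nfour\nfive\n' >> /tmp/growing.log   # lines "appended" meanwhile
after=$(wc -l < /tmp/growing.log)
echo $((after - before))                           # 3
tail -n "$((after - before))" /tmp/growing.log     # the three new lines
```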
Count new lines in file last 1 minute
1,465,302,621,000
What is 1? What is 3? What are the numbers called, and is there a full list with explanations?

$ whatis nvim
nvim (1)             - edit text
$ whatis printf
printf (3)           - formatted output conversion
printf (1)           - format and print data

Thanks :)
These are section numbers; according to man man:

The table below shows the section numbers of the manual followed by the types of pages they contain.

1   Executable programs or shell commands
2   System calls (functions provided by the kernel)
3   Library calls (functions within program libraries)
4   Special files (usually found in /dev)
5   File formats and conventions, e.g. /etc/passwd
6   Games
7   Miscellaneous (including macro packages and conventions), e.g. man(7), groff(7), man-pages(7)
8   System administration commands (usually only for root)
9   Kernel routines [Non standard]

Ex:

man 3 printf   # C Linux Programmer's Manual
man 1 printf   # User Commands

man -k printf
[...]
sprintf (3)          - formatted output conversion
swprintf (3)         - formatted wide-character output conversion
vasprintf (3)        - print to allocated string
[...]
What is the number inside the parenthesis on a Linux command? [duplicate]
1,465,302,621,000
I want to create a hash of more than one source in Bash. I am aware that I can:

echo -n "STRING" | sha256sum

or

sha256sum [FILE]

What I need is:

- STRING + FILE
- FILE + FILE
- STRING + STRING
- STRING + FILE + STRING

For example, STRING + FILE:

- Save the hash of STRING in a variable and the hash of the [FILE] in a variable. Compute and create a hash of the sum.
- Save the hash of the STRING in a file and the hash of the [FILE] in the same file and create a hash of this file.

Can I create a hash using a single command? For example:

echo "STRING" + [FILE] | sha256sum

How can I accomplish this, and what is the recommended or correct method?

UPDATE

With Romeo Ninov's answer,

EXAMPLE 1:

echo -n "STRING" && cat [FILE] | sha256sum

When I do:

EXAMPLE 2:

echo $(echo -n "STRING" | sha256sum) $(sha256sum [FILE]) | sha256sum

What should I use? I'm getting different results. What is the correct method to achieve this?
You could create a script like this to hash multiple files, and then hash the concatenation of their hashes. Hashing in two parts like this instead of concatenating all data first should work to prevent mixups where the concatenation loses information on the borders between the inputs (e.g. ab+c != a+bc).

#!/bin/bash

# function to get the hashes
H() {
    sha256sum "$@" | LC_ALL=C sed '
        s/[[:blank:]].*//;  # retain only the hash
        s/^\\//;            # remove a leading \ that GNU sha256sum at least
                            # inserts for file names where it escapes some
                            # characters (such as CR, LF or backslash).'
}

# workaround for command substitution removing final newlines
hashes=$(H "$@"; echo .)
hashes=${hashes%.}

# just for clarity
printf "%s\n" "----"
printf "%s" "$hashes"
printf "%s\n" "----"

# hash the hashes
final=$(printf "%s" "$hashes" | H)

echo "final hash of $# files: $final"

An example with two files:

$ echo hello > hello.txt
$ echo world > world.txt
$ bash hash.sh hello.txt world.txt
----
5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03
e258d248fda94c63753607f7c4494ee0fcbe92f1a76bfdac795c9d84101eb317
----
final hash of 2 files: 27201be8016b0793d29d23cb0b1f3dd0c92783eaf5aa7174322c95ebe23f9fe8

You could also use process substitution to insert a string instead, this should give the same output:

$ bash hash.sh hello.txt <(echo world)
[...]
final hash of 2 files: 27201be8016b0793d29d23cb0b1f3dd0c92783eaf5aa7174322c95ebe23f9fe8

Giving the same input data (hello\nworld\n) with a different separation gives a different hash:

$ bash hash.sh <(printf h) <(printf "ello\nworld\n")
[...]
final hash of 2 files: 0453f1e6ba45c89bf085b77f3ebb862a4dbfa5c91932eb077f9a554a2327eb8f

Of course, changing the order of the input files should also change the hash.

The part between the dashes in the output is just for clarity here, it shows the data that goes to the final sha256sum. You should probably remove it for actual use.

Above, I used sed to remove the filename(s) from the output of sha256sum. If you remove the | sed ... part, the filenames will be included and e.g. hash.sh hello.txt world.txt would instead hash the string

5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03  hello.txt
e258d248fda94c63753607f7c4494ee0fcbe92f1a76bfdac795c9d84101eb317  world.txt

The sub-hashes are the same, but the input to the final hash is different, giving f27b5175dec88c76dc6a7b368167cd18875da266216506e10c503a56befd7e14 as the result. Obviously, changing the filenames, including going from hello.txt to ./hello.txt, would change the hash. Also, using process substitution would be less useful here, as they'd show up with odd implementation-dependent filenames (like /dev/fd/63 with Bash on Linux).

In the above, the input to the final hash is the hex encoding of the hashes of the input elements, with newlines terminating each. I don't think you need more separation than that, and could technically even drop the newlines as the hashes have a fixed length anyway (but we get the newlines for free and they make it easier to read for a human).

Though note that sha256sum gives just plain hashes. If you're looking for something to generate authentication tags, you should probably look into HMAC or such, and be wary of length-extension attacks (which a straightforward H(key + data) may be vulnerable to) etc. Depending on your use-case, you might want to consider going to security.SE or crypto.SE, or hiring an actual expert.
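The border mixup mentioned above (ab+c vs a+bc) can be demonstrated directly: naive concatenation hashes both splits to the same value, while hashing the per-part hashes tells them apart:

```shell
# Naive concatenation: the part boundary is lost.
naive1=$( { printf 'ab'; printf 'c'; } | sha256sum)
naive2=$( { printf 'a'; printf 'bc'; } | sha256sum)
[ "$naive1" = "$naive2" ] && echo "naive: ab+c and a+bc collide"

# Two-stage (hash of hashes): the boundary is preserved.
two1=$( { printf 'ab' | sha256sum; printf 'c' | sha256sum; } | sha256sum)
two2=$( { printf 'a' | sha256sum; printf 'bc' | sha256sum; } | sha256sum)
[ "$two1" != "$two2" ] && echo "two-stage: the splits differ"
```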
How can I create a hash or sha256sum in Bash using multiple sources or inputs? What is the recommended method?
1,465,302,621,000
I have a csv file approximately 16,000 rows long, with two fields. The first field contains a numeric value, and the second field contains a list of first and last names delimited by semicolons, e.g.

3, Jack Mackie; Hanna Jones; Mike Freeland; Ollie Downs; Farrah Anderson; Judy John
9, Jewel Woodley; Jean Sullivan; Marcia Robin; Kerry Morton; Joelle Armour; Zakiya Pulwarty; Karen Thornhill; Shurm Ahmet; Ed Aslan; Adam Condell; Zeliha Manners; Joan Johnson
5, Haydn Smart; Andre Henry; Tamara Brownbill; Kelly Withers; Eden Anderson; Naomi Casa; Azaria Amritt; Jamile Newton; Nabahe Durand

The name listed in the second field which corresponds to the numeric position in the first field is the team leader, e.g. the team leader in the first row is Mike Freeland (position 3), in the second row is Ed Aslan (position 9), and in the third row is Eden Anderson (position 5). I need to extract all the names of the team leaders.

I'm trying to write a shell script to extract all the names of the team leaders, run it against my csv file, then output it to a new file. I have been researching how to use 'grep', or 'awk' plus 'FS' (FS to specify the semicolon as the delimiter instead of whitespace) to find the information, but I don't know how to incorporate the value in the first field as the selection criteria. Every example that I have seen uses these commands to search for a known value or string. In this case, however, I only know the position of the value (first and last name). Am I looking at the right commands? I have not been able to come up with a script. How do I extract the names of the team leaders?
$ awk -F, '{split($2,names,";"); print names[$1]}' file.csv
Mike Freeland
Ed Aslan
Eden Anderson
Find a value in a specific position when the only information you have is the position
1,465,302,621,000
Greetings Stack Exchange,

My goal: Execute ls to search the entire directory structure and grep to search for cats.py. Use cat to read the file cats.py. I know that sounds like Gnu/Linux inception. I am currently new to bash and not really familiar with Stack Exchange, so forgive me if this is formatted terribly.

I execute the following

ls -R -l -t -r | grep cats

This returns the following:

-rw-r--r-- 1 user user 2179 Mar 18 08:53 cats.py

I have tried to use cat to read the file returned above, but I am having issues with how to assign the placeholder for the results of the grep command. This is what I have tried, and the results:

ls -R -l -t -r | grep cats | cat cats.py
cat: cats.py: No such file or directory

I believe the issue is the way I am executing the cat function. Should this look something like:

ls -R -l -t -r | grep cats | cat '{}'

or

ls -R -l -t -r | grep cats && cat cats.py
Use the find command. It can find the file, and run a command on it. ls has problems, especially if the -l option is given, as then you have a lot more data than you need (file-mode, date, owner, ... ). i.e. (substitute the bits in the «») find «directory» -name '«file-name-glob-pattern»' -exec «command» {} \; e.g. find . -name 'cats.py' -exec cat {} \;
Question about Gnu/Linux command line interface, with grep, ls, and cat
1,465,302,621,000
(self-migrated from ask-ubuntu because it's linux-related, not ubuntu, and my os isn't ubuntu) I'm trying to make a grep that looks like this: grep -r 2019 | grep -riv FAILED | grep -rl DSL I want to get filenames (-l) of files containing 2019 in them, AND NOT (-v) containing FAILED AND containing DSL. Here, only the last grep is executed. I understand it's because of the -r, so each grep greps on all files instead of the previous result. But I can't figure out how to make it work without -r. Maybe there's another way to use multiple patterns on a grep but with "positive" and "negative" match I haven't found anything.
The last grep in the pipeline would be reading from the previous grep (if it hadn't used the -r option, see later), so it would have no idea what file the data came from, which in turn means it can't report the pathname of the file.

Instead, consider using find like so:

find . -type f \
    -exec grep -q 2019 {} \; \
    -exec grep -q DSL {} \; \
    ! -exec grep -qi FAILED {} \; \
    -print

This would take each regular file from the current directory and any subdirectory (recursively) and test whether it contains the strings 2019, DSL, and FAILED (the last one case-insensitively). It would print the pathnames of files that contain the first two strings but that do not contain the third. If a file does not contain 2019 the other two tests will not be carried out, and if it does not contain DSL, the last test will not be carried out, etc.

Note that instead of grep -v -qi FAILED I'm using a negation of grep -qi FAILED as the third test. I'm not interested in whether the file contains lines not containing FAILED, I'm interested in whether the file contains FAILED, and in that case I'd like to skip this file.

Related: Understanding the -exec option of `find`

The issue with your pipeline,

grep -r 2019 | grep -riv FAILED | grep -rl DSL

is that the last grep will look recursively in all the files in the current directory and below and will ignore the input from the previous stages of the pipeline. The two initial grep invocations may produce some data, but they would fail to forward this through the pipeline and will eventually be killed when the last grep is done.

Also, as I already noted above, the middle grep would not find files that do not contain FAILED, it would find files that contain lines with things other than FAILED. Incidentally, it would also ignore the input from the preceding grep.
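For comparison, a pipeline-only sketch is possible with grep's -l (list files with a match) and -L (list files without a match); this assumes GNU grep and xargs, and filenames without whitespace (the sample files below are made up):

```shell
mkdir -p /tmp/grepdemo && cd /tmp/grepdemo
printf '2019 DSL ok\n' > keep.log
printf '2019 DSL failed\n' > drop1.log   # contains FAILED, case-insensitively
printf '2019 only\n' > drop2.log         # no DSL
# -r only on the first grep; xargs -r skips the next grep when nothing survives
grep -rl 2019 . | xargs -r grep -l DSL | xargs -r grep -Li FAILED
# ./keep.log
```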
Piped greps for looking inside files
1,465,302,621,000
If I run

nrolland@mactoasty ~ $ type -p skhd
skhd is /usr/local/bin/skhd

I can't compose it nicely with other commands, like

nrolland@mactoasty ~ $ la `type -p skhd`
ls: is: No such file or directory
ls: skhd: No such file or directory
lrwxr-xr-x 1 nrolland admin 29B Jun 4 09:35 /usr/local/bin/skhd -> ../Cellar/skhd/0.2.2/bin/skhd

What is the cleanest way to get only the second part? (I am using zsh, if that's any help.)
Use command -v skhd instead:

ls -l "$( command -v skhd )"

The command utility is a POSIX standard utility, and by using its -v flag it will output the path to the given utility, if it is found in $PATH, unless it's a function, alias or shell built-in utility.
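A quick sketch with commands that are likely installed everywhere (sh and cd stand in for skhd, which may not be present):

```shell
command -v sh    # prints a path such as /bin/sh
command -v cd    # a shell built-in: prints just "cd"
ls -l "$(command -v sh)"   # composes cleanly, unlike the output of type
```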
getting only the path out of `type -p prog` command
1,465,302,621,000
I think the position of the parameters of any command is not fixed. For example, cp -r ./abc ./def and cp ./abc ./def -r are the same, and grep -rnH hello . and grep hello . -rnH are the same...

However, today when I was using ldd, I found that I was wrong, because ldd -r x.so and ldd x.so -r are not the same. The second command gave me an error:

ldd: ./-r: No such file or directory

Why can't we change the position of the parameters of ldd?
Some GNU utilities silently reorganise the command line parameters so that the options and option-arguments come before the operands. This is not standard behaviour. Standard-compliant utilities expect options and option-arguments to come first, and when the command line parser finds the first non-option argument, the rest of the arguments are treated as operands:

cp -i file1 file2

In the above, the first argument is an option while the last two arguments are operands.

cp file1 file2 -i

The above has three operands, and a non-GNU implementation of cp would copy file1 and file2 into the directory called -i (or give an error message if no such directory existed). GNU cp, on the other hand, treats -i as an option and asks whether I would want to overwrite file2 if that file exists.

This behaviour is remedied by setting the environment variable POSIXLY_CORRECT:

$ cp file1 file2 -i
cp: overwrite 'file2'? n
$ POSIXLY_CORRECT=1 cp file1 file2 -i
cp: target '-i' is not a directory

Or you could use -- to explicitly mark the end of options (which would work whether the command parses its options the GNU way or not):

$ cp -- file1 file2 -i
cp: target '-i' is not a directory

That's something to keep in mind in things like:

grep 'PATTERN' *.txt

which with GNU grep you need to write

grep -- 'PATTERN' *.txt

or

grep -e 'PATTERN' -- *.txt

in case the PATTERN or the name of some of the .txt files starts with -.

Your ldd (which on GNU systems is a bash script that parses options by hand, not using the GNU getopt_long() API) does not parse its command line arguments in the "GNU way", which (IMHO) it's doing correctly. From the GNU documentation of getopt_long(3):

    By default, getopt() permutes the contents of argv as it scans, so that eventually all the nonoptions are at the end. [...] If the first character of optstring is + or the environment variable POSIXLY_CORRECT is set, then option processing stops as soon as a nonoption argument is encountered.
The position of a command's parameters
1,465,302,621,000
I'm trying to compose myself a bash script with the intent of it being used commonly in a terminal. It should start an application as a background process and discard its stderr output. Here's what I got: for app in $@; do $app 2>/dev/null done It seemed to work just fine with bare applications started without parameters, like script.sh firefox gedit but failed to do the following: script.sh "vlc video.mp4" My question is: How can I enhance this basic script to handle applications which take parameters/files as their input? Maybe there already is a tool I can use?
There are two issues: The use of $@ without quotes. This would make the loop iterate over vlc and video.mp4 as two separate items, even if these were within the same quoted string in the invocation of the script. Using a command in a variable. If the command is anything more complicated than a single simple command, then this won't work. You would have to eval the given string instead. Taking this into account, your script could look like #!/bin/sh for cmd do # or: for cmd in "$@"; do eval "$cmd" 2>/dev/null done Calling this as ./script 'echo "hello world"' 'vim "$HOME/.profile"' 'tr a-z A-Z <"$HOME/.profile" | grep -c EXPORT' would first run echo "hello world", and when that finishes, it would open vim for editing the named file in that second command. The last command is more complex but handled by the fact that we use eval (it just changes all alphabetic characters to uppercase and counts the number of times the string EXPORT occurs in a file). It is run as soon as the vim session exits. With this, you could even do ./script 'd=$HOME' 'ls "$d"' i.e., set variables that are used in later commands. This works because the commands invoked by eval are run in the same environment as the script. This would not work if you start the commands as background tasks though, as the title of your question suggests.
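A quick run of that script shows eval handling both a simple command and a pipeline (a sketch; the two commands are just examples):

```shell
cd "$(mktemp -d)"
cat > script <<'EOF'
#!/bin/sh
for cmd do
    eval "$cmd" 2>/dev/null
done
EOF
chmod +x script

# A plain command, then a pipeline - each is one single-quoted argument
./script 'echo hello' 'echo ab | tr a-z A-Z'
# prints:
# hello
# AB
```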
Create an abbreviation for "2>/dev/null &"
1,465,302,621,000
We all have (or know someone who has) unintentionally misused the destroy-disk (dd) command. What ways exist (if any) to change the command in the following or a similar way: If /dev/sda is given as the output file (of=/dev/sda) the command doesn't run or prompts for confirmation with something like "are you sure about that"? Could you achieve something like that using your .bashrc file? Is there a way, in general, to stop certain commands from running when certain arguments are passed? Edit: The command is run as root.
As Arkadiusz said, you can create a wrapper: dd() { # Limit variables' scope local args command output reply # Basic arguments handling while (( ${#} > 0 )); do case "${1}" in ( of=* ) output="${1#*=}" ;; ( * ) args+=( "${1}" ) ;; esac shift || break done # Build the actual command command=( command -- dd "${args[@]}" "of=${output}" ) # Warn the user printf 'Please double-check this to avoid potentially dangerous behavior.\n' >&2 printf 'Output file: %s\n' "${output}" >&2 # Ask for confirmation IFS= read -p 'Do you want to continue? (y/n): ' -r reply # Check user's reply case "${reply}" in ( y | yes ) printf 'Running command...\n' >&2 ;; ( * ) printf 'Aborting\n' >&2 return ;; esac # Run command "${command[@]}" } Example: $ dd if=/dev/urandom of=file.txt bs=4M count=5 Please double-check this to avoid potentially dangerous behavior. Output file: file.txt Do you want to continue? (y/n): y Running command... 5+0 records in 5+0 records out 20971520 bytes (21 MB, 20 MiB) copied, 0.443037 s, 47.3 MB/s Modify it to fit your needs (make it POSIX-compliant, check for other conditions, etc).
Can you make the `dd` command safe?
1,465,302,621,000
I have two BASE64 encoded strings and I would like to get the BASE64 encoding of the binary concatenation of the two strings using just the command line. Example: > $ echo -n "\x01\x02" |base64 AQI= > $ echo -n "\x03\x04" |base64 AwQ= > $ echo -n "\x01\x02\x03\x04" |base64 AQIDBA== So the input values to my problem would be AQI= and AwQ=, the desired output is AQIDBA==
Probably easiest to decode the inputs and encode again: $ echo "AQI=AwQ=" | base64 -d | base64 AQIDBA== (Or just run the decoder separately for each string if reading the string past the = padding offends your sensibilities.) $ (echo "AQI=" |base64 -d ; echo "AwQ=" |base64 -d) | base64 AQIDBA==
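The same round trip can be written without relying on base64 reading past the = padding, by decoding each input separately (a sketch using the values from the question; assumes a base64 that accepts -d, e.g. GNU coreutils):

```shell
a='AQI='   # bytes 0x01 0x02
b='AwQ='   # bytes 0x03 0x04

# Decode both inputs, concatenate the raw bytes, re-encode
result=$( { printf '%s' "$a" | base64 -d
            printf '%s' "$b" | base64 -d; } | base64 )
printf '%s\n' "$result"
# prints: AQIDBA==
```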
Concatenate 2 binary strings in base64 form
1,465,302,621,000
I have the Eclipse Platform (the programming environment, see https://eclipse.org/) on my system. It can be run by typing "eclipse" into the terminal. Now I installed eclipse prolog (see http://www.eclipseclp.org/ ). I followed the instructions from http://eclipseclp.org/Distribution/Current/6.1_224_x86_64_linux/Readme.txt and now I want to start it. In these instructions they say that it can be run by typing "eclipse" into the terminal. But if I do that, only the Eclipse programming environment starts, not the eclipse prolog thingy. What do I do now? I am using Linux Mint 17, 64 bit.
Figure out where the new eclipse is installed, and don't just enter eclipse but the full path: /where/the/new/eclipse/is/installed/bin/eclipse If this new eclipse becomes your first choice, you may want to define an alias in your startup files (e.g. .profile for sh): alias eclipse=/where/the/new/eclipse/is/installed/bin/eclipse Now, if you enter eclipse, the new one will be run. To execute the old one, you will have to specify its full path. You can even define two aliases, one for each eclipse: alias eprolog=/where/the/new/eclipse/is/installed/bin/eclipse alias eplatform=/where/the/old/eclipse/is/installed/bin/eclipse ... and enter either eprolog or eplatform at the shell prompt.
How to run a program via terminal if it shares its name with another program
1,465,302,621,000
How can we replace the following command netstat -nat | awk '{print $6}' | sort | uniq -c | sort -n by our special command like this: ab1 What I mean is I want to use my command ab1 instead of netstat -nat | awk '{print $6}' | sort | uniq -c | sort -n How can I do this?
You could use an alias, or install an executable script in some directory included in your $PATH Assuming bash, a "global" alias could be defined in /etc/bash.bashrc (or ~/.bashrc if a single user need this alias to be defined). Assuming ksh, it would be in /etc/ksh.kshrc (or ~/.kshrc). The alias definition can be done adding a line such as: alias ab1='netstat -nat | awk "{print \$6}" | sort | uniq -c | sort -n'
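Note that aliases are only expanded in interactive shells by default; a shell function works in scripts as well and avoids the nested-quoting trouble around $6. A sketch, here fed hypothetical sample lines instead of live netstat output so it can be tried anywhere:

```shell
# Same pipeline as the alias, wrapped in a function that reads stdin
count_states() {
    awk '{print $6}' | sort | uniq -c | sort -n
}

# Hypothetical sample standing in for "netstat -nat" output
printf 'a b c d e LISTEN\na b c d e ESTABLISHED\na b c d e LISTEN\n' |
    count_states
```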
How to replace a sequence of commands by a single command in linux?
1,465,302,621,000
When I run: cat filename | cut -f3 | head -1 I get the following result: apple However, when I save this to a file by using: cat filename | cut -f3 | head -1 > newfile I then open this using php with the following: $variable = file_get_contents("newfile"); echo $variable; // PRINTS "apple" But when I do the following the output is 6!!! echo strlen($variable); // PRINTS 6 WHEN I EXPECT 5! $variable = "apple"; echo strlen($variable); // NOW PRINTS 5 Any idea how to avoid this? I need to use this variable in a lookup function and it won't match my lookup due to the extra character which I cannot identify. When I echo the following: $variable = file_get_contents("newfile"); echo "TEST1"; echo "TEST2"; echo $variable; echo "TEST3"; echo "TEST4"; I get the following output: TEST1TEST2apple TEST3TEST4 So it must be printing a new line somehow....?!?
Indeed, that's your \n, and that is counted by strlen. In PHP, you have rtrim (http://php.net/manual/fr/function.rtrim.php) to remove all spaces, \n, \t, \r, \0 & \x0B characters from the right end of your string.
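The trailing newline can also be removed on the shell side before PHP ever reads the file: $(...) command substitution strips trailing newlines, and printf '%s' adds none back. A sketch:

```shell
cd "$(mktemp -d)"

# head always leaves a final newline, so "apple" is stored as 6 bytes
printf 'apple\n' > newfile
wc -c < newfile              # 6

# Command substitution strips the trailing newline; printf adds none back
printf '%s' "$(cat newfile)" > newfile
wc -c < newfile              # 5
```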
cat filename | cut -f2 | head -1 > newfile contains more characters than expected
1,465,302,621,000
I have a file with data as follows: "google1|yoo|dummy|yes|wow|/" + VARIABLE + "/" "google2|hub|lab|dummy|yes|/" + VARIABLE + "/" "google3|short|lab|yoo|/" + VARIABLE + "/" "google4|hello|good-guy|bad-girl|lol|dummy|/" + VARIABLE + "/" "google5|good-guy|a4-123|yoo|/" + VARIABLE + "/" "google6|bad-girl|b4-124|hub|/" + VARIABLE + "/" Now, I want to get a list of strings between delimiter "|" (pipe). Output should be as yoo dummy yes wow hub hello good-guy bad-girl a4-123 b4-124 dummy lol short lab Basically, I want to have unique values from the list of strings after delimiter filter. I tried using awk as awk -F"|" '{gsub(/\).*/,"",$2);print $2}' file But, I get wrong data.
If you have grep with pcre option: $ grep -oP '\|\K[^|]+(?=\|)' ip.txt | sort -u a4-123 b4-124 bad-girl dummy good-guy hello hub lab lol short wow yes yoo -o print only matching pattern -P use pcre regex \|\K positive lookbehind to see if | is there before our string to be extracted similarly, (?=\|) positive lookahead to see if there is | after our string to be extracted [^|]+ string to be extracted - simply negate | and get one or more of such character sort -u to get unique value If you want to preserve order in which these strings are found: $ grep -oP '\|\K[^|]+(?=\|)' ip.txt | awk '!seen[$0]++' yoo dummy yes wow hub lab short hello good-guy bad-girl lol a4-123 b4-124
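If PCRE grep isn't available, plain awk gives the same order-preserving result by splitting on | and skipping the first and last fields, which are the only ones not enclosed by two delimiters (a sketch assuming the input format shown in the question; only the first three lines are reproduced here):

```shell
cd "$(mktemp -d)"
cat > ip.txt <<'EOF'
"google1|yoo|dummy|yes|wow|/" + VARIABLE + "/"
"google2|hub|lab|dummy|yes|/" + VARIABLE + "/"
"google3|short|lab|yoo|/" + VARIABLE + "/"
EOF

# Fields 2..NF-1 are exactly the strings between two | delimiters;
# seen[] keeps only the first occurrence, preserving order
awk -F'|' '{ for (i = 2; i < NF; i++) if (!seen[$i]++) print $i }' ip.txt
# prints (one per line): yoo dummy yes wow hub lab short
```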
filtering data based on delimiter in shell
1,465,302,621,000
I have a *.sh script that's missing the shebang from the first line. Can I fix it with sed?
Insert (i) the shebang with sed, in place operation: sed -i '1 i #!/bin/bash' file.sh With backing up the original file with a .bak extension: sed -i.bak '1 i #!/bin/bash' file.sh Replace #!/bin/bash with actual shebang you want. Example: % cat foo.sh echo foobar % sed '1 i #!/bin/bash' foo.sh #!/bin/bash echo foobar
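If the script might already have a shebang, guarding the insertion keeps the operation idempotent (a sketch; assumes GNU sed for -i and the one-line i command):

```shell
cd "$(mktemp -d)"
printf 'echo foobar\n' > foo.sh

# Only prepend when the first line is not already a shebang
if ! head -n 1 foo.sh | grep -q '^#!'; then
    sed -i '1 i #!/bin/bash' foo.sh
fi
head -n 1 foo.sh
# prints: #!/bin/bash
```

Running the snippet a second time leaves the file unchanged.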
Sed Usage: Add shebang missing from first line of script [duplicate]
1,465,302,621,000
I am trying to run a command using su -s to start a process, since I do not want the root user to own the process. I try to do this by issuing the command su -s "$CATALINA_HOME/bin/catalina.sh run" tomcat which returns su: /opt/apache-tomcat/bin/catalina.sh run: No such file or directory How can I run the su -s command along with arguments without generating this error?
If you're running su as root, you can use -s to specify a different shell (running as root is necessary here since your tomcat user doesn't have a valid shell), and -c to specify the command to run: su -s /bin/sh -c "$CATALINA_HOME/bin/catalina.sh run" tomcat You might find start-stop-daemon useful; it has a whole slew of options to specify the user and group to use, how to start the daemon etc. The tomcat8 initscript used in Debian might provide useful inspiration. Or you could look at writing a systemd unit or whatever is appropriate for your system's init.
Run su -s with arguments
1,465,302,621,000
In some circumstances, I need to make an official alias for a set of commands, which acts just the same as the original software. Like: alias ipython3="source /Users/zen1/miniconda2/bin/activate py3k; ipython; source /Users/zen1/miniconda2/bin/deactivate;" But as I am lazy, I'd like to use another alias for the alias ipython3; let it be ipy3. I could do it by just copying ipython3's content, but that would be too clumsy. Is there some magic_func so that alias ipy3=magic_func("ipython3") can get the alias content of ipython3 when using ipy3? PS: ipython3 should be kept because it is the official command to trigger ipython for python 3, which can't be installed properly on my computer.
Use this: alias ipy3='ipython3'
Can I make alias for alias in bash?
1,465,302,621,000
I'm using Linux Mint 17.2 Cinnamon. Somehow, the exit command isn't working.
Nothing's wrong. You were logged in as the root user. When you exited, that session was closed - but you opened it from your user account, so you get back to your user account. It's kind of like opening a full-screen game - you open it from your desktop, and it looks like it took over your PC - yet when you close it, you go back to the desktop - the place of origin - because you only closed your game, not the whole desktop.
What's wrong with exit command on my terminal?
1,465,302,621,000
I am pretty green when it comes to bash scripts and completely new to command line functionality in bash. I tried my hand at a script which is supposed to be usable both with command line arguments as well as manual setting of variable values, if the user prefers to edit the code directly. The general idea is this: Define command line arguments for different functions using the while/case/getopt structure. Set the value of the variables of each option inside the respective case. Later, check if the command line argument was actually provided using an if case. If not, set the value to a default parameter there. This way one can either use myscript.sh -i somestring, or just set the variable associated with -i inside the if case manually. This way the script can also be run just by doing ./myscript.sh. I have figured out that the if cases I have in my code don't actually do anything, or at least not what I want them to. When I run the script without command line arguments and then echo out the default values that should be set by the if cases, they are empty. This means that the for loop later on in the script cannot work, since these variables haven't been set to anything. The line that the script gets stuck on is slurm_file_string=$(cat $slurm_template_file | sed "s/INPUT_FILE/$gs_file/") So I don't know how to implement what I'm trying to achieve and how to fix this "getting stuck issue". I need to change the if cases somehow so that the default values inside the if cases actually do something, but I don't know how. Here's my code: #!/bin/bash # TODO: # Need to figure out if I can launch all slurm jobs with "&" and let slurm handle the rest. # Add scratch removal logic to slurm file template. Makes more sense to do it per run in the file that actually runs gaussian. 
# Add command line options for: # input folder (-i) # verbose mode (-v) # extension name (-x) # slurm template name (-t) # Define function be_verbose() which wraps the logic for printing additional info if -v is set to true. # INFO: Script for running a batch of gaussian jobs using the SLURM scheduler. # General program flow: # The script finds all gaussian input files in inps_folder (needs to be manually specified) # The input files are identified by the file extension in "extension" # The script iterates over all gaussian input files. # For each file a slurm file is created based on the template: submit_gaussian_template.slurm # Inside the template the string "INPUT_FILE" is replaced by the current input file. # The new slurm file is copied to the input files folder # The script changes directories to the common pwd of the slurm file and gaussian input file # The slurm file is executed and then the next iteration starts. # The cleanup is handled by the shell code inside the slurm file (deleting temp files from /scratch) # IMPORTANT: # This script is designed for a certain folder structure. # It is required that all your gaussian input files are located in one directory. # That folder must be in the same dir as this script. # There must be a template slurm file in the directory of this script # There must be a string called INPUT_FILE inside the template slurm file ################################################################################ # this implements command line functionality. If you do not wish to use it, look for MANUAL_VARS_SPEC below to specify your variables that way. while getopts "i:vx:t:" opt; do case $opt in i) inps_folder="$OPTARG" ;; v) verbose=true ;; x) extension="$OPTARG" echo "$extension" ;; t) slurm_template_file="$OPTARG" ;; \?) 
echo "Usage: $0 [-i input_folder] [-s value_s] [-k value_k]" exit 1 ;; esac done # MANUAL_VARS_SPEC: Change the variables to the appropriate values if you do not wish to use the command line options. # These are essentially the default settings of the script. # folder for input files if [ -n "$inps_folder" ]; then # "if the command line option is an empty string (not set), set the variable to this value." inps_folder="testinps" fi # verbose mode if [ -n "$verbose" ]; then verbose=0 fi # file extension of your gaussian input files if [ -n "$extension" ]; then echo "AFASGSG" extension="gin" fi # slurm template file name if [ -n "$slurm_template_file" ]; then slurm_template_file="submit_gaussian_template.slurm" fi # HELPER FUNCTIONS function be_verbose(){ local print_string="$1" # set the first argument provided to the function to the local var "print_string". if [ $verbose = true ]; then echo "$print_string" fi } echo "$inps_folder" echo "$verbose" echo "$extension" echo "$slurm_template_file" #### START OF MAIN LOOP. files="${inps_folder}/*.${extension}" # iterable for all gaussian input files. for file in $files; do gs_file=$(basename $file) # get the file without the preceding path gs_file_name="${gs_file%.*}" # get the file name without the extension # Make a new slurm file for the current job based on the template slurm file in the pwd. slurm_file_string=$(cat $slurm_template_file | sed "s/INPUT_FILE/$gs_file/") # get template and replace INPUT_FILE with gaussian input file. FAIL!!!! slurm_file="${gs_file_name}.slurm" echo "$slurm_file_string" > "$slurm_file" # write the string of the new slurm file to a new file mv "$slurm_file" "${inps_folder}/${slurm_file}" # move the new slurm file to the inps_folder cd "$inps_folder" # change directories so that slurm files can be executed echo "Is running ${gs_file}" #PUT HERE WHATEVER THE COMMAND FOR RUNNING A SLURM FILE IS & cd .. done Thanks in advance for any input.
Generally it's easier to help if there's a "minimum verifiable example" of code - something just sufficient to demonstrate the problem. However, here I think I can see the problem. When parsing options you have code like this: while getopts 't:' opt do case "$opt" in (t) slurm_template_file="$OPTARG" ;; esac done But later on you have this, which says "if the variable has a value, set it to a fixed value": if [ -n "$slurm_template_file" ] then slurm_template_file='submit_gaussian_template.slurm' fi I would have expected this code to use -z instead of -n so that if the variable was unset or empty you assigned a default value. My approach would be to set variables to have their default values and then override them with command-line switches. If necessary for later decisions I would also set a flag to indicate that the user set the value explicitly, but in my experience this isn't often needed. slurm_template_file='submit_gaussian_template.slurm' slurm_template_file_isset=false while getopts 't:' opt do case "$opt" in (t) slurm_template_file="$OPTARG" slurm_template_file_isset=true ;; esac done if "$slurm_template_file_isset" then echo "User overrode the default" >&2 fi echo "The working value is: $slurm_template_file" >&2 Alternatively you can set default values at the top of the code and use them whenever the working value is unset. With long variable names the code is a little more unwieldy but with care it can be clearer: template_default='submit_gaussian_template.slurm' template= while getopts 't:' opt do case "$opt" in (t) template="$OPTARG" ;; esac done if [ -n "$template" ] then echo "User overrode the default: $template" >&2 fi echo "The working value is: ${template:-$template_default}" >&2
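The same set-defaults-after-parsing pattern can be written more compactly with ${var:-default} parameter expansion, which only substitutes when the variable is unset or empty (a sketch modeled on the script in the question):

```shell
#!/bin/bash
while getopts 'i:x:t:v' opt; do
    case $opt in
        i) inps_folder=$OPTARG ;;
        x) extension=$OPTARG ;;
        t) slurm_template_file=$OPTARG ;;
        v) verbose=true ;;
    esac
done

# Fall back to defaults only where the user supplied nothing
inps_folder=${inps_folder:-testinps}
extension=${extension:-gin}
slurm_template_file=${slurm_template_file:-submit_gaussian_template.slurm}
verbose=${verbose:-false}

echo "$inps_folder $extension $slurm_template_file $verbose"
```

Run with no arguments it echoes all the defaults; run with, say, -x gjf only the extension changes.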
Bash script with command line options gets stuck and doesn't set default values for variables
1,465,302,621,000
I have the following file. //TESTCASES=3 //MARK=9 [runscript] nc dec s10 s11 [/runscript] [checks] [/checks] [testcase] // List: 1, 2, 3, 5, 0xA, -1 .global LIST .data LIST: .word 1, 2, 3, 5, 0xA, -1 [/testcase] I am trying to get the text between [runscript] and [/runscript] using grep and regex. I have verified that the regex works on its own. (?<=\[runscript\]\n)(.|\n)*(?=\[\/runscript\]) However, grep returns nothing. Is it an issue of the options? I have tried many of them alone and combined. -P, -e, -E, -w, -o What am I missing?
I wouldn't use grep but rather awk: awk ' $0=="[runscript]" {rs++; next} $0=="[/runscript]" {rs--} rs {print} ' file Output nc dec s10 s11 If you really want to use grep, this will work with PCREs and NUL-delimited data. But I would suggest it's harder for people to understand (and maintain) than the awk version, and less portable too: grep -zoP '(?<=\[runscript\]\n)(.|\n)*?(?=\[\/runscript\])' file Output (with an invisible trailing NUL) nc dec s10 s11 I've tweaked your RE to cope with multiple matches should there be any.
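A third route is sed: print the inclusive range between the markers, then strip the marker lines themselves (a sketch using a shortened copy of the file from the question):

```shell
cd "$(mktemp -d)"
cat > file <<'EOF'
//TESTCASES=3
[runscript]
nc dec s10 s11
[/runscript]
[checks]
[/checks]
EOF

# First sed selects [runscript]..[/runscript] inclusive;
# second sed drops the first and last line of that selection
sed -n '/^\[runscript\]$/,/^\[\/runscript\]$/p' file | sed '1d;$d'
# prints: nc dec s10 s11
```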
Grep with regex in CLI returns nothing
1,465,302,621,000
I am completely stupefied by this. I am running a weekly script to move log files into a directory. mkdir 2022-04-30 mv *.* 2022-04-30/ When I do so - only files with an extension are moved but not ones without. If I try *. it says no files found. If I try * it does what I want, but it also tries to move the directory 2022-04-30 into itself and prints an error. It doesn't affect the script, but I would like to understand what's going on. I am pretty sure * means zero or more characters, so *.* should move all the files and not directories. What's going on, and how do I make a script which does what I need without moving directories? CentOS 9 / bash EDIT: file names example 2022-04-29abc.cde 2022-04-29 App
* means zero or more of any character, . means the single character dot. So *.* means any names with a(t least one) dot in there somewhere. On Unix-likes, the dot is just a regular character in filenames, and also there's no difference between files and directories in a regular glob. (If you wanted to match directories only, you could use */.) Here, it looks like the names of your files contain letters while the directories don't, so to target on that, you could use *[[:alpha:]]* to match names that have at least one letter. That would still match directories, but would avoid names like 2022-04-30. Alternatively, if what you're doing is something like mv 2022-04-29* 2022-04-29 (i.e. move files starting with the date to a matching directory), you could instead use mv 2022-04-29?* 2022-04-29 The ? matches exactly one character, so ?* requires at least one character after the date. In zsh (but not in Bash), you could use *(.) to match just regular files (and similarly *(/) to match just directories). In all shells, */ would match only directories, but there's no equivalent for matching just regular files. (The slight difference is that with */ the names are produced with the trailing slash.)
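A quick sketch shows how many names each glob matches against the filenames from the question:

```shell
cd "$(mktemp -d)"
touch 2022-04-29abc.cde '2022-04-29 App'
mkdir 2022-04-29

# The bare glob matches the directory too
set -- 2022-04-29*
echo "2022-04-29*  matches: $#"    # 3

# ?* demands at least one character after the date,
# so the bare directory name is excluded
set -- 2022-04-29?*
echo "2022-04-29?* matches: $#"    # 2
```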
How to use wildcard for files without extension
1,465,302,621,000
I was using the term CLI for describing commands like ls. While updating my list of useful commands, I wondered about one thing. How do we call a shell-like program like mysql, for example? Doesn't CLI stand for Command Line Interface or Command Line Interpreter? Isn't it more logical to use this term for shell-like programs? After some research, I was even more confused. Some websites used the CLI term to describe the commands, some others the shell, some both, some were making a difference between Command Line Interface and Command Line Interpreter, and some were just even more confusing. So, what exactly is a CLI? What is the term to describe a shell-like program? And a command program? Why is the exact definition so blurry? Why isn't everyone in agreement on this?
Wikipedia has a good definition: A Command-Line Interface (CLI) processes commands to a computer program in the form of lines of text. The program which handles the interface is called a command-line interpreter or command-line processor. The term CLI is used in opposition to GUI (Graphical User Interface) which denotes a way for the user to interact with a system through a point-and-click interface, icons, windows, and other graphical components. Both terms are very wide. To throw in some example, in the Linux/UNIX world, shells (sh, bash, zsh, ...) are CLIs while X Windows environments (GNOME, KDE, ...) are GUIs. Git can be used both through a CLI (git command) and a GUI client. To answer your question, the term CLI can be used to describe the commands typed on the terminal, and also the shell. As said, it's a broad term.
Is "CLI" describing the shell or the commands? [closed]
1,465,302,621,000
Using AWK I am trying to fill down an HTML table (rows/columns) based on the previous value, similar to Excel. eg: table user$ csv2html.awk table.csv rowing | fast | good | fast | good swim | | | | slow | | increase | late | golf | red | bad I want this html table to then become the following: table rowing | fast | good rowing | fast | good swim | fast | good swim | fast | slow swim | fast | increase swim | late | increase golf | red | bad The table can have any number of columns/rows and the values can vary across many different words. I am simply trying to understand how to parse the html and then fill down the value that I find in each column/row. Output must be to a new html file that keeps formatting. UPDATE: <html><body><table> <tr> <th>Column1</th> <th>Column2</th> <th>Column3</th> </tr> <tr> <td>rowing</td> <td>fast</td> <td>good</td> </tr> <tr> <td></td> <td>fast</td> <td>good</td> </tr> <tr> <td>swim</td> <td></td> <td></td> </tr> </table></body></html>
You need to add something like this to your awk script: for(i=1;i<=NF;i++){ if($i==""){$i=last[i]} last[i]=$i } As you don't show us your script, you have to do that yourself. Beginning with your resulting table, it could look like this: $ awk -F ' *\| *' ' BEGIN{OFS="|"} { for(i=1;i<=NF;i++){ if($i==""){$i=last[i]} last[i]=$i }$1=$1 }1' table rowing|fast|good rowing|fast|good swim|fast|good swim|fast|slow swim|fast|increase swim|late|increase golf|red|bad However! I would suggest you do the whole thing with a proper html parser instead of awk. I can recommend python's beautifulsoup module. Or even better, use a proper data analysis tool, e.g. pandas, which provides exactly this functionality with its ffill method: ffill: propagate last valid observation forward to next valid #!/usr/bin/env python3 import pandas as pd with open('file.html') as f: html = f.read() df = pd.read_html(html)[0] df = df.ffill() df.head() Output: Column1 Column2 Column3 0 rowing fast good 1 rowing fast good 2 swim fast good See here.
Fill down multiple columns in HTML file with AWK
1,465,302,621,000
Consider the following file layout: . ├── dir_a │   └── file_1 └── file_2 Invoking find . \( -name dir_a -prune \) -a -print gives ./dir_a but invoking find . \( -name dir_a -prune \) -o -print gives ./file_2 Why logical OR (-o) does not include results from the logical AND (-a)?
From Find's specification (GNU Find manual has a similar wording): expression -o expression Alternation of primaries; the OR operator. The second expression shall not be evaluated if the first expression is true. dir_a Since the -name matches and -prune is always true, \( -name dir_a -prune \) is true, thus Find doesn't get to -print for dir_a. file_1 Not considered by Find, because dir_a is pruned, so not printed. file_2 The -name test does not match, thus \( -name dir_a -prune \) is false and Find reaches the -print primary.
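Reproducing the layout from the question makes the short-circuiting visible (a sketch; this is standard POSIX find behaviour):

```shell
cd "$(mktemp -d)"
mkdir dir_a
touch dir_a/file_1 file_2

# -a: dir_a makes the left side true, so -print runs for it;
# everything else fails -name, making the whole AND false
find . \( -name dir_a -prune \) -a -print
# prints: ./dir_a

# -o: for dir_a the left side is already true, so -print is skipped;
# everything else fails -name and falls through to -print
find . \( -name dir_a -prune \) -o -print
# prints: . and ./file_2 (file_1 is never visited - its directory was pruned)
```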
GNU find logical operators with -print
1,465,302,621,000
I've run into situations where symlinks point to other symlinks, causing me to have to run multiple ls commands to try to trace where the base file is stored. For example, if I want to know the location of the java program I run on CentOS, I usually start with /bin/java and have to go 3-4 symlinks deep before I find the actual file's location. Is there a simple, clean command that will trace along all symlinks until it finds a real file and give me the location of the base file?
Use realpath, it will expand all symbolic links until a file target. For example, for me: > ls -al /usr/bin/java lrwxrwxrwx 1 root root 22 Jul 13 2019 /usr/bin/java -> /etc/alternatives/java > realpath /usr/bin/java /usr/lib64/jvm/java-11-openjdk-11/bin/java Also readlink alone would give the direct target, while readlink -f will give the final file: > readlink /usr/bin/java /etc/alternatives/java > readlink -f /usr/bin/java /usr/lib64/jvm/java-11-openjdk-11/bin/java
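A chain of symlinks shows the difference between one hop and full resolution (a sketch; the filenames are made up):

```shell
cd "$(mktemp -d)"
touch real_file
ln -s real_file link1
ln -s link1 link2
ln -s link2 link3

readlink link3       # link2 - only one level resolved
readlink -f link3    # absolute path to real_file - every level resolved
realpath link3       # same final path as readlink -f
```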
How to find the original file through multiple symlinks? [duplicate]
1,465,302,621,000
If I issue ln -s source.txt symlink.txt and symlink.txt does not already exist, is a link file automatically created called symlink.txt, or is the command a noop? If it is a noop, and I just create a blank symlink.txt (touch symlink.txt) and then run the previous command, will the operation work as planned? Thanks for the help
Well, that's easy to test: $ mkdir test; cd test test$ ln -s source.txt symlink.txt test$ ls -l total 0 lrwxrwxrwx 1 ilkkachu ilkkachu 10 Oct 23 18:24 symlink.txt -> source.txt test$ cat symlink.txt cat: symlink.txt: No such file or directory (Representing that output as text doesn't do justice to GNU ls, and the coloring support it has.) The ln -s commands creates the symlink symlink.txt regardless of if source.txt exists. Trying to access the file through the symlink doesn't work, though, since the pointed-to file doesn't exist. With output coloring, ls would show the link name and target as red (or whatever the setting is, something other than a live link, anyway.) If symlink.txt exists, ln -s source.txt symlink.txt gives an error, predictably. Use ln -sf to overwrite the target file.
Symbolic Link command behavior ln -s
1,465,302,621,000
I have a directory that contains some subdirectories. I know there are two types of files, such as *A*.txt and *B*.txt, that contain an "oldString" under that directory. I want to replace it with a "newString". Can I do this in one command? That is, can I add the "*B*.txt" somewhere in this command: find . -type f -name "*A*.txt" -exec sed -i 's/oldString/newString/g' {} \;
You can try: find . -type f \( -name "*A*.txt" -o -name "*B*.txt" \) \ -exec sed -i 's/oldString/newString/g' {} + (here also using + instead of ; to avoid running one sed invocation per file; also has the benefit of returning a non-zero exit status if any of the sed invocations return with a non-zero exit status).
can I replace a string in two different files with one command?
1,465,302,621,000
Question: Suppose there is a folder with files log1.txt, log2.txt, log3.txt, etc. I would like to find the smallest integer N such that log<N>.txt does not exist. Is there a simple command/way to achieve this? Example: if the folder is empty, the command should return log1.txt. If the folder has log1.txt, log2.txt, the command should return log3.txt.
As bash script: #!/bin/bash i=1 while [ -f "log${i}.txt" ]; do ((i++)) done echo "log${i}.txt" The while-loop increments variable $i as long as file log${i}.txt exists. The echo outputs the non-existing filename with the next number.
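The same loop also works with POSIX arithmetic instead of bash's ((i++)); a quick run against two existing log files (a sketch):

```shell
cd "$(mktemp -d)"
touch log1.txt log2.txt

i=1
while [ -f "log${i}.txt" ]; do
    i=$((i + 1))        # POSIX equivalent of bash's ((i++))
done
echo "log${i}.txt"
# prints: log3.txt
```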
Finding the smallest index such that the indexed log file does not exist in a particular directory
1,465,302,621,000
I want to rename files with zero-padded numbers while keeping the extension. Example a.abc b.cde c.xyz to be renamed as 001.abc 002.cde 003.xyz :~/x$ rename -n -v 's/.+/our $i; sprintf("%03d.jpg", 1+$i++)/e' * #output> rename(a.abc, 001.jpg) rename(b.cde, 002.jpg) rename(c.xyz, 003.jpg) #then :~/x$ echo "a.abc" a.abc :~/x$ echo ${_##*.} #output> abc so I tried> :~/x$ rename -n -v 's/.+/our $i; sprintf("%03d.${_##*.}", 1+$i++)/' * Global symbol "$i" requires explicit package name (did you forget to declare "my $i"?) at (user-supplied code). Missing right curly or square bracket at (user-supplied code), within string syntax error at (user-supplied code), at EOF Any suggestions using the "rename" command?
rename -n -v 'our $n; my $zn=sprintf("%03d", ++$n); s/[^.]*/$zn/' * This would probably do what you intended. Instead of putting the Perl code inside the substitution, we run it before the substitution. The regular expression [^.]* would match any length string up to (but not including) the first dot in the filename. To match up to the last dot, use .*\. instead, and insert the dot on the replacement side: rename -n -v 'our $n; my $zn=sprintf("%03d", ++$n); s/.*\./$zn./' * Note that this would also rename directories. Alternatively, using a simple shell loop, assuming you would want to enumerate the files in the order they are expanded by the * shell glob, and that you use bash: n=1 for filename in *; do [ ! -f "$filename" ] && continue zn=$( printf '%03d' "$n" ) mv -i -- "$filename" "$zn.${filename##*.}" n=$(( n + 1 )) done This additionally skips any name that does not refer to a regular file (or a symbolic link to one). Apart from that, it follows very closely the Perl rename variation above in that it keeps a counter (n) and a zero-filled variant of the counter (zn). The variable n is a simple counter, and $zn has the same value as $n, but as a zero-filled three-digit number. The value of $zn.${filename##*.} would expand to the zero-filled number, followed by a dot and the final filename suffix of the original filename. If more than one dot is present in the original filename, everything up to the last dot will be replaced by the zero-filled number. Change ## to # to replace up to the first dot. This assumes that you run the loop on files in the current directory only.
Rename files with zero padded numbers while keeping extension using "rename" command
1,465,302,621,000
    $ cat test15.sh
    #!/bin/bash
    # extracting command line options as parameters
    #
    echo
    while [ -n "$1" ]
    do
        case "$1" in
            -a) echo "Found the -a option" ;;
            -b) echo "Found the -b option" ;;
            -c) echo "Found the -c option" ;;
            *) echo "$1 is not an option" ;;
        esac
        shift
    done
    $
    $ ./test15.sh -a -b -c -d

    Found the -a option
    Found the -b option
    Found the -c option
    -d is not an option
    $

-d represents debug or delete as a command line option. So why is it not an option when we included it in the options on the command line for this script?
-d represents whatever it is programmed to represent, which will not necessarily be delete or debug. In curl, for example, -d is the option for data. In your script -d is not a valid option. Your options are -a, -b, and -c, all of which essentially do nothing:

    while [ -n "$1" ]
    do
        case "$1" in
            -a) echo "Found the -a option" ;;
            -b) echo "Found the -b option" ;;
            -c) echo "Found the -c option" ;;
            *) echo "$1 is not an option" ;;
        esac
        shift
    done

To add support for -d you must add it to your case statement like the following:

    while [ -n "$1" ]
    do
        case "$1" in
            -a) echo "Found the -a option" ;;
            -b) echo "Found the -b option" ;;
            -c) echo "Found the -c option" ;;
            -d) echo "Found the -d option" ;;
            *) echo "$1 is not an option" ;;
        esac
        shift
    done

A better way to handle command line options would be with getopts, which would look something like the following:

    while getopts abcd opt; do
        case $opt in
            a) echo "Found the -a option";;
            b) echo "Found the -b option";;
            c) echo "Found the -c option";;
            d) echo "Found the -d option";;
            *) echo "Error! Invalid option!" >&2;;
        esac
    done

Here abcd is the list of expected option letters:

    a     - check for option -a without a parameter; gives an error on unsupported options.
    a:    - check for option -a with a parameter; gives an error on unsupported options.
            The parameter is stored in the OPTARG variable.
    abcd  - check for options -a, -b, -c, -d; gives errors on unsupported options.
    :abcd - check for options -a, -b, -c, -d; silences errors on unsupported options.

opt is the variable the current option letter will be set to (also used in the case statement).
Command Line Options for a script
1,465,302,621,000
I know I can use & at the end of a command to make it run in background and && to connect multiple commands at the same line. However, when I use them together it seems not work. Below is an example. json-server --watch db.json & && python -m http.server 7777 The error message is: bash: syntax error near unexpected token `&&' Is it possible to make them work together? Thanks.
It seems you want this:

    json-server --watch db.json & python -m http.server 7777

&& does not simply join two commands on one line; that is what ; does. && is a logical AND: it runs the second command only if the first one succeeded, so the shell has to wait for the first command to exit and see its exit status before it can decide. That is why a command terminated by & (which does not wait) cannot be followed by &&.
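A minimal sketch of the pattern, with sleep standing in for the long-running server:

```shell
sleep 1 &              # background job; the shell continues immediately
bg=$!
echo "foreground work runs right away"
wait "$bg"             # block until the background job is done
echo "background job finished"
```

If you do want to stop everything when one part fails, check exit statuses after wait instead of trying to chain & with &&.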
How to run multiple command in one line and some commands in background at the same time?
1,465,302,621,000
I am trying to take a directory and create an archive of it in my home directory from another location. I know that the -C option can be used for this, but tar seems to be ignoring it. I've tried $(basename $DEST)_$(date +%F_%H%M%S).tar.gz -C $HOME $(basename $DEST) where $DEST is the path to the directory I want to archive. Instead of creating the archive in my home directory, tar keeps creating it in the directory I'm executing it from. Is there anything wrong with the way I'm executing tar? Everything else seems to work properly, it's just that the -C flag is being completely ignored. I'm on Linux Mint 18.3 XFCE edition. Edit: The full command is tar czf Pictures_$CREATION_TIMESTAMP.tar.gz -C /home/$USER Pictures. I was executing it from /home/$USER/coding/python_code/.
-C doesn't affect where the archive is created. It only affects which files are added to the archive. So, for example, given tar cvf foo.tar a -C /b c -C /d e, tar will add a from the current directory, switch to /b and add c, switch to /d and add e. foo.tar itself will be created in the current directory (where a was). If no files are given on the command line for adding, but -C /some/dir is used, then tar will switch to /some/dir and add everything in it to the archive. (Correspondingly, when extracting, -C doesn't affect where tar looks for the archive file. It only affects where the extracted files go to.) So: tar czf Pictures_$CREATION_TIMESTAMP.tar.gz -C /home/$USER Pictures from /home/$USER/coding/python_code/ will always create the archive in /home/$USER/coding/python_code/, with the Pictures directory from /home/$USER. If you want the Pictures directory from /home/$USER in an archive created in /home/$USER, you'd have to either cd to /home/$USER and create the archive: cd "/home/$USER"; tar czf "Pictures_$CREATION_TIMESTAMP.tar.gz" Pictures Or, specify the path to the archive: tar czf "/home/$USER/Pictures_$CREATION_TIMESTAMP.tar.gz" -C "/home/$USER" Pictures
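A self-contained sketch of this behaviour, using throwaway paths under mktemp (the layout is made up to mirror the question):

```shell
# A made-up directory layout standing in for /home/$USER
tmp=$(mktemp -d)
mkdir -p "$tmp/home/Pictures" "$tmp/elsewhere"
touch "$tmp/home/Pictures/photo.jpg"

cd "$tmp/elsewhere"
# -C changes where members are read from, not where the archive is written
tar czf archive.tar.gz -C "$tmp/home" Pictures

ls                       # archive.tar.gz lands here, in the current directory
tar tzf archive.tar.gz   # members were taken relative to $tmp/home
```

The member names in the archive start at Pictures/, confirming that -C only affected the files being added.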
Tar ignores --directory option
1,465,302,621,000
My question is basically a follow up question on this topic. I have a file like this: 1000 | line1 100 | line2 10 | line3 I want to do something on $2 if $1 is greater than 20. I wrote something to mimic the 2nd answer but it doesn't work: for a, b in $(cat file.text|cut -d"|" -f 1,2); do if ($1>20) echo $2 done; How can I achieve this? Thanks!
    awk -F'|' '$1 > 20 { system("/path/to/another/script.sh " $2) }' < file.text

This tells awk to split the input up into fields based on the pipe symbol. Any first field whose value is greater than 20 triggers the system call to ... anything you want (the script path here is just a placeholder). Keep in mind that the argument (here $2, but it could be $0 or any other calculation you do in awk) is passed to the script via a shell call, so if those values can contain shell-special characters, carefully quote it. I'll refer to one of Stéphane's answers for an example of how to do that:

    awk 'function escape(s) {
           gsub(/'\''/, "&\\\\&&", s)
           return "'\''" s "'\''"
         }
         { system("/path/to/another/script.sh " escape($2)) }'
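If you would rather stay in the shell, the same filter can be written as a read loop. This sketch just prints the second column for matching rows; the question's sample data is inlined via a here-document (in practice you would redirect from file.text):

```shell
while IFS=' |' read -r num rest; do
    # IFS of space and | makes " | " act as one delimiter
    if [ "$num" -gt 20 ]; then
        printf '%s\n' "$rest"
    fi
done <<'EOF'
1000 | line1
100 | line2
10 | line3
EOF
```

This prints line1 and line2; the 10 row is filtered out.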
looping through a file of two columns
1,516,282,436,000
When I save a command output which contains several lines to a variable directly into my terminal, I have the following results : $ dirs=$(ls -1d /mnt/*/) $ echo $dirs /mnt/ext4/ /mnt/local/ /mnt/remote/ /mnt/test/ $ echo "$dirs" /mnt/ext4/ /mnt/local/ /mnt/remote/ /mnt/test/ However when using it from a posix shell script the result is different. Here is the script #!/bin/sh dirs=$(ls -1d $1) echo "inline" echo $dirs echo "multiline" echo "$dirs" And here is the script output $ ./test.sh /mnt/*/ inline /mnt/ext4/ multiline /mnt/ext4/ This happens even if I use bash instead of sh. Does anyone know how I could save the output of ls -1d /mnt/*/ into a variable keeping the full output ? I would like to parse each of the four directory paths inside a for loop.
    $ ./test.sh /mnt/*/

The glob here expands to the directory names as separate arguments, same as if you wrote them out manually on the command line:

    $ ./test.sh /mnt/ext4/ /mnt/local/ /mnt/remote/ /mnt/test/

However, in the script itself, you refer to only the first argument, $1:

    dirs=$(ls -1d $1)

If you want to refer to all the arguments, use "$@" (with the quotes). However, all you're doing with the directory names here is to run ls on them, which just prints the same names out (though on separate lines). So, if you just want the list of the arguments as a string, you don't need to run the ls command at all, just assign dirs="$@". Nevertheless, in most, if not all, cases it's better to just keep the list of file names in the positional parameters, and loop over them with

    for f in "$@"; do ...

or

    for f do ...

Note that file names might contain white space, and when you concatenate the names to a single string, you lose the meaning of the whitespace within the names. foo bar doo might be three files, or the two files foo bar and doo.
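A sketch of why "$@" keeps each name intact (the paths are made up; set -- simulates the script's arguments):

```shell
set -- /mnt/ext4/ /mnt/local/ '/mnt/name with spaces/'
for f in "$@"; do
    printf '<%s>\n' "$f"    # each argument stays one word, spaces and all
done
```

The quoted "$@" expands to one word per original argument, so the name containing spaces survives as a single item, which a concatenated string could not guarantee.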
Posix shell script - Save multi line command output to variable
1,516,282,436,000
I have a very large text list and need a way to extract lines beginning with the same 2 characters, then save those lines to separate files named after those 2 characters. Example List: abWEye7kgw7 abff34ZSrZf abke8mzMyma b2R5mPZGbCb b2zhhCeLZzZ b2q2T5rkACp k9ekzbc8nUh k9QzXBUrNT7 k92RtdXntZ3 vrTtR9GmbWG vraVM9QXWzY vrME9QnksBf Desired Output: ab* > ab.txt b2* > b2.txt k9* > k9.txt vr* > vr.txt The list is rather large and there are lots of first 2 character combinations.
    $ awk '{ f = substr($0,1,2) ".txt"; print >f }' file.in
    $ ls
    ab.txt  b2.txt  file.in  k9.txt  vr.txt
    $ cat ab.txt
    abWEye7kgw7
    abff34ZSrZf
    abke8mzMyma

This can obviously be solved in the shell too, but awk is better suited for parsing text files. The substr() picks out the first two characters of each line in the input file, and this is assigned to the variable f with .txt added to the end. The print will output the current line to the file whose name is in f.

I believe you can do away with the f variable and use the substr() expression directly after >, but not in the awk implementation that I'm using on OpenBSD (this is possibly a bug).

If the number of different combinations of two first characters is too large, you may have issues with too many open files. The following variation will take care of that:

    awk '{ f = substr($0,1,2) ".txt"; print >>f; close(f) }' file.in
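For comparison, a plain-shell sketch of the same split, using parameter expansion to grab the first two characters (a shortened sample list is inlined; in practice you would read your real file):

```shell
cat >list.txt <<'EOF'
abWEye7kgw7
abff34ZSrZf
b2R5mPZGbCb
EOF

while IFS= read -r line; do
    prefix=${line%"${line#??}"}         # first two characters of the line
    printf '%s\n' "$line" >>"$prefix.txt"
done <list.txt
```

This is noticeably slower than the awk version on large lists, since it reopens the output file for every single line.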
How to extract lines starting with the same first 2 characters, then output to separate files?
1,516,282,436,000
I want to search for bash commands in the bash itself. When I forget the name of a command I want a fast way to find it. For example "search for file" should suggest "find".
The closest thing you can get is via one of these commands: man -k search apropos search These will return all manpages whose description contains the word "search". You can restrict the search to pages in section 1 (user commands) and 8 (admin commands) with the (non-standard) -s option: man -ks1,8 search That would omit pages about programming APIs or concepts, file formats...
Search for bash command
1,516,282,436,000
I'd like to write a script that reads a file and passes every line as options (or "option arguments") to a command, like this: command -o "1st line" -o "2nd line" ... -o "last line" args What's the simplest way of doing this?
Here is one possibility:

    $ cat tmp
    1st line
    2nd line
    3rd line
    4th line
    $ command $(sed 's|.*|-o "&"|' tmp | tr '\n' ' ')

As glennjackman points out in the comments, word splitting can be circumvented by wrapping in eval, though the security implications of doing so should be appreciated:

    $ eval "command $(sed 's|.*|-o "&"|' tmp | tr '\n' ' ')"

Edit: Combining my suggestion of using sed to assemble arguments with glenn jackman's mapfile/readarray approach gives the following concise form:

    $ mapfile -t args < <(sed 's|.*|-o\n&|' tmp) && command "${args[@]}"

As a trivial demonstration, consider the aforementioned tmp file, the command grep, and the file text:

    $ cat text
    some text 1st line and
    a 2nd nonmatching line
    some more text 3rd line end
    $ mapfile -t args < <(sed 's|.*|-e\n&|' tmp) && grep "${args[@]}" text
    some text 1st line and
    some more text 3rd line end
    $ printf "%s\n" "${args[@]}"
    -e
    1st line
    -e
    2nd line
    -e
    3rd line
    -e
    4th line
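The mapfile step can be checked in isolation. This sketch (bash-only, since it relies on mapfile and process substitution) prints each collected argument bracketed so the word boundaries are visible:

```shell
printf '%s\n' 'first line' 'second line' >opts.txt
mapfile -t args < <(sed 's|.*|-o\n&|' opts.txt)
printf '[%s]\n' "${args[@]}"
# [-o]
# [first line]
# [-o]
# [second line]
```

Each input line becomes two array elements, -o and the line itself, and embedded spaces never get a chance to split anything.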
How to pass every line of a file as options to a command?
1,516,282,436,000
I have a file similar to the following: random,test123,MyCompany, Inc. hello,12345,TestCompany, LLC I want to remove the commas from the third column so I'd have something like this: random,test123,MyCompany Inc. hello,12345,TestCompany LLC How would I do this?
This is easy with sed: sed 's/,//3' file Try it online! If you want to directly apply the modifications in your input file, then run: sed -i 's/,//3' file
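A quick check against the question's sample data (echoed inline rather than read from a file):

```shell
printf '%s\n' 'random,test123,MyCompany, Inc.' 'hello,12345,TestCompany, LLC' |
    sed 's/,//3'    # delete only the 3rd comma on each line
```

The numeric flag makes the substitution apply to the third match only, leaving the first two field-separating commas alone.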
Remove character from column of CSV
1,516,282,436,000
I am running the below directly in bash command line prompt: $ PIDS= ;while read name; do (cd $name; npm install) & echo started install; PIDS+=($!); done < <(\ls); for pid in ${PIDS[@]}; do wait $pid; done; And I get this: -bash: !: event not found I assume the ! symbol is being used to do command history substitution instead of seeing "$!" as a variable first. How do I get pid of last background process if running directly on bash command line?
I haven't been able to reproduce your problem with bash 4.3 and 4.4, so here is a generic answer.

Your problem is triggered by the ! in PIDS+=($!), ! being the start of history substitution (which is enabled by default with an interactive shell). Either disable history substitution with set +H, or quote the ! (not desirable here because of the preceding $), or add a space after !. The bash manual reads:

    !   Start a history substitution, except when followed by a blank, newline,
        carriage return, = or ( (when the extglob shell option is enabled using
        the shopt builtin)

Your command line would then become:

    PIDS=()
    while read name; do
        (cd "$name"; npm install) &
        echo started install
        PIDS+=( $! )
    done < <(\ls)
    wait "${PIDS[@]}"

Notes:

    You are not limited to one line of code here
    I have added proper quotes around variables
    You can wait for several PIDs at once; I fixed that
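A minimal sketch of capturing $! — in a script, history expansion is off, so the ! needs no special care:

```shell
sleep 1 &
pid=$!                       # PID of the most recent background job
echo "background pid: $pid"
wait "$pid"
echo "background job reaped"
```

$! must be read right after launching the job; starting another background job overwrites it.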
$! doesn't work on command line
1,516,282,436,000
I have a list of files that are space separated and I want to use the touch command to update their timestamps in that order. But when I supply the filenames as arguments, the timestamps get updated in a different order. touch 1.txt 2.txt 3.txt 4.txt 5.txt 6.txt 7.txt 8.txt 9.txt 10.txt 11.txt 12.txt After running the command above and running ls -t (sorting by time modified) I get the following: 1.txt 10.txt 11.txt 12.txt 2.txt 3.txt 4.txt 5.txt 6.txt 7.txt 8.txt 9.txt Does supplying arguments to commands not guarantee the execution order? If not, how can I update the timestamps of those files in that specific order?
With no time specified, touch sets the timestamp of each file to the current time, processing the arguments in the order given on the command line. The calls happen so quickly, however, that several (often all) of the files end up with identical timestamps within the resolution of the file system; you can verify this by running stat on all the touched files. When timestamps are equal, ls -t typically falls back to sorting by name, which is why you see 1.txt, 10.txt, 11.txt, 12.txt, 2.txt, and so on. To get the result you want, you need to loop and touch each file individually, with some delay:

    for file in {1..12}.txt; do touch "$file"; sleep 0.1; done

(with more or less delay depending on the timestamp resolution of the underlying file system). Note that ls -t lists files sorted by descending timestamp; to see increasing times you need to use ls -rt.
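If sleeping between files feels too slow or fragile, you can instead assign explicit, strictly increasing timestamps with touch -t; a sketch with three made-up files:

```shell
i=1
for f in a.txt b.txt c.txt; do
    # YYMMDDhhmm.SS -- same minute, increasing seconds
    touch -t "2401010000.$(printf '%02d' "$i")" "$f"
    i=$((i + 1))
done
ls -rt a.txt b.txt c.txt    # oldest first: a.txt b.txt c.txt
```

Since every file gets a distinct second, the ordering no longer depends on the file system's timestamp resolution.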
Execution order of touch command arguments
1,516,282,436,000
I'm aware that this is a really simple question, but I'm struggling to find a solution to this . I need to automatically find and replace in the /etc/aliases file the following section: # Person who should get root's mail #root: marc And it needs to look something like: # Person who should get root's mail root: [email protected] And I haven't been able to find a solution. Can you guys jump in with some suggestions? Doesn't need to be sed.
On a GNU system, you can use this: sed -i '/^#[[:blank:]]Person/{n;s/#root:[[:blank:]]\+marc/root:\[email protected]/;}' file It searches for a line beginning with # Person. Then switches to the next line and replaces #root:<blanks>marc with root:<tab> .... The -i flag edits the file inplace. -i, \+ and \t are GNU extensions. The standard equivalent of \+ is the wordier \{1,\}. To edit the file in place portably, you'd need to resort to a temporary file. The standard equivalent of \t is to insert a tab character literally.
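You can dry-run the substitution on a throwaway copy first; the sample below reconstructs the relevant two lines of /etc/aliases from the question, and runs sed without -i so nothing is edited in place:

```shell
cat >aliases.sample <<'EOF'
# Person who should get root's mail
#root:          marc
EOF

# [[:blank:]]* keeps it portable (no GNU \+ or \t needed)
sed '/^# Person/{n;s/#root:[[:blank:]]*marc/root:          [email protected]/;}' aliases.sample
```

Once the output looks right, add -i (or redirect to a temporary file and move it into place) to edit the real file.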
SED - Find and replace with special characters (#, , % )
1,516,282,436,000
I have a script that updates some remote systems with my laptop's IP and CIDR on the WAN, how the Internet sees me, in other words. I use the following to get my IP on the WAN in bash: dig +short myip.opendns.com @resolver1.opendns.com How can I get the CIDR for my IP on the WAN in bash as well?
In general, the short answer is that this is not practically feasible.

In the very specific case where you have access to your firewall, the firewall has a public IP address on its outside interface, and you can get a shell prompt on it, you can do something like this:

    ssh [email protected] 'eval $(ipcalc -np `ifconfig eth1 | sed -n "s/inet addr:\([^ ]*\).*Mask:\([^ ]*\).*/\1 \2/p"`) ; echo $NETWORK/$PREFIX'

This command is specific to Red Hat 6. Remember, this information is available from the firewall only because a network administrator entered the information provided by the ISP when configuring the outside interface. Therefore the only reliable way to get this information is to ask your ISP. Even if you can deduce the CIDR block for your ISP from DNS and whois records, you can't deduce how the ISP has subdivided the available IPs internally. This is an administrative function and must be solved administratively.
Get CIDR of my IP on the WAN
1,516,282,436,000
Can you please describe and explain each part of the command prompt pi@raspberrypi ~ $ This is what I saw when I first logged in to my Linux computer.
You can set this with the PS1 shell variable. In your prompt:

    pi is the username.
    raspberrypi is the name of the server.
    ~ is the current directory (and means 'home dir').
    $ is the prompt; $ denotes a non-privileged user (# denotes root).

PS1 is probably set to:

    PS1='\u@\h \w \$'
Describe the prompt I see when I first logged into the Linux computer
1,516,282,436,000
    Marias-MacBook-Air:~ mariasharapova$ ls -a /home
    .    ..
    Marias-MacBook-Air:~ mariasharapova$ ls -a-1 /home
    ls: illegal option -- -
    usage: ls [-ABCFGHLOPRSTUWabcdefghiklmnopqrstuwx1] [file ...]
    Marias-MacBook-Air:~ mariasharapova$ ls -a-l /home
    ls: illegal option -- -
    usage: ls [-ABCFGHLOPRSTUWabcdefghiklmnopqrstuwx1] [file ...]
You don't have a space between -a and -1, so ls is trying to interpret the - as an option, not as signifying an option. Add a space, or just use ls -a1 and it should work properly. Although I have to admit, -1 seems a rather unusual option - I usually ls -al as opposed to ls -a1.
Why does my "ls -a-l /home" not work?
1,516,282,436,000
I come from the windows world trying to switch to Linux. Sorry for the naïve question but is there something like "common Linux commands"? For example in Windows, the command line is pretty limited, but it is common between all windows. So if you know what the dir or mkdir command does and what switches it takes, you can use any version of Windows and be sure that your BATCH files work. In Linux however, if I understand correctly there are many ways to do the same thing. For example for editing files you may use EMACS or VIM. Even the shells have a wide variety (is bash the de facto standard?) From what I understand so far, the command line is way more flexible (and versatile) than Windows CMD. Something like Busybox promises to pack a whole lot of commands in one project. Anyway, this is all too overwhelming, so I was wondering if there is a small set of Linux commands that are common between all systems and I can carry on my daily tasks on every Linux machine regardless of its distro.
The beautiful thing about *nix, and open source in general, is that you have no shortage of resources. Most *nix CLIs will behave similarly, though there are outliers. Don't worry about them for now. Get bash figured out. It'll be your interpreter 99% of the time. Learn vim, and know right now that most distros only include vim-tiny, so install the package 'vim' as soon as you land at the CLI.

// Here's a good cheat sheet. There are many others; just search "bash cheat sheet": https://github.com/NisreenFarhoud/Bash-Cheatsheet

// Here's the comprehensive beginners' guide. Very much worth reading: http://www.tldp.org/LDP/Bash-Beginners-Guide/html/Bash-Beginners-Guide.html

// Read the Components section: http://en.wikipedia.org/wiki/Unix

Once you get used to the basics, check out some other people's bash shell scripts. No Starch Press's Wicked Cool Shell Scripts is a good read. Here's one more: http://www.commandlinefu.com/

This one's mine:

    find ~/ -mtime $(echo $(date +%s) - $(date +%s -d"Dec 31, 2009 23:59:59") | bc -l | awk '{print $1 / 86400}' | bc -l)
Is there something like "common Linux commands"? [duplicate]
1,516,282,436,000
After I learned that the $$ special variable holds the process ID of the shell, I tried to kill the shell simply with kill <pid obtained by echo $$>; however, this did not work. I also tried some variations, but they also had no effect.
You can, but the shell tries hard not to die unless it's absolutely sure that's what is required. SIGHUP works (as does SIGKILL), and you can try this - kill -HUP $$ (If you prefer numeric signal identifiers the HUP can be replaced with 1.) The reason that SIGHUP works is that this is the signal that would have been sent when a serial line connection over a modem was terminated - for instance if the phone line was hung up.
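You can see the effect safely by letting a throwaway child shell kill itself, rather than experimenting on your login shell; a sketch:

```shell
bash -c 'kill -HUP $$'               # the child shell sends SIGHUP to itself
echo "child exited with status $?"   # 128 + 1 (signal number of HUP) = 129
```

An exit status above 128 means the process died from a signal, the signal number being status minus 128.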
Why can't I kill the shell using the PID obtained from the $$ special variable? [duplicate]
1,516,282,436,000
Here is the sample input:

    AbbigailAbieAbbyAbbi

Using sed, I have separated it into groups of 4 characters, but it seems I have a problem, because I need to keep the first 8 characters together and only then add a space after every 4 characters.

    sed 's/.\{4\}/& /g'

That is the code I used for sed. Any help? The output should be

    Abbigail Abie Abby Abbi
With GNU sed: $ echo AbbigailAbieAbbyAbbi | sed -e 's/.\{4\}/& /2g' Abbigail Abie Abby Abbi
Add space after 8 characters then add space after every 4 characters
1,516,282,436,000
I'm trying pass a URL to mpv for it to play it as a network stream. This can be done in bash with the following syntax: $ mpv http://myvideosite.com However, zsh wants to evaluate the URL as (presumably) a file path. Running % mpv http://myvideosite.com gets the following response: zsh: no matches found: http://myvideosite.com and a return code of 1. Running % mpv "http://myvideosite.com" executes as expected. Does zsh not treat arguments as strings?
I assume you've inadvertently trimmed the important part of your command lines out here: the URLs in question contain a ? character (or a *). ? and * are special glob matching characters to the shell. ? matches a single character in a filename, and * matches many. When zsh says: zsh: no matches found: http://myvideosite.com?video=123 it's telling you that there's no file called http://myvideosite.com?video=123 accessible from the current directory. In zsh, by default, a failed expansion like this is an error, but in Bash it isn't: the failed pattern is just left as an argument exactly as it was written. zsh's behaviour is safer, in that you can't write a command that secretly doesn't do what you meant because a file was missing, but you can change it to have the Bash behaviour if you want: setopt nonomatch The NOMATCH option is on by default, and causes the errors you were seeing. If you disable it with setopt nonomatch then any failed glob expansions will be left intact on the command line: $ echo foo?bar zsh: no matches found: foo?bar $ setopt nonomatch $ echo foo?bar foo?bar This will resolve your original use case. In general it will be better to quote arguments with special characters, though, in order to avoid any mistakes where a file happens to exist with a corresponding name, or doesn't exist when you thought it did.
Must pass urls in quotes
1,516,282,436,000
Say I'd like to use the file -f - command inside a script, and send some input using the stdin, when I'm finished I'd like to terminate it gracefully, but unfortunately I haven't found any way to do this (and yes I did read the man page :) ).
You only need to send an EOF when you are finished. To do that, press Ctrl+D:

    $ file -f -
    test.py
    test.py: a /usr/bin/python script, ASCII text executable
    test.pl
    test.pl: a /usr/bin/perl script, UTF-8 Unicode text executable
    <Ctrl+D here>

Or you can feed file names to it inside a shell script via a here-document:

    file -f - <<EOF
    test.pl
    test.py
    test.awk
    EOF
file(1) command termination when using the -f - option