1,512,206,558,000
I have set up fprintd and added a fingerprint profile, but now I am stuck: how do I get this to let me log in? I have added

    auth sufficient pam_unix.so try_first_pass likeauth nullok
    auth sufficient pam_fprintd.so

at the beginning of /etc/pam.d/sddm as suggested here, which did not change anything. (I used a tab between auth and sufficient, and between sufficient and pam_..., instead of spaces, but this seems more consistent with the other entries in the file. I did use spaces between pam_unix.so, try_first_pass, likeauth and nullok. Could this affect anything?) I added them as the first lines in the file. The page also says

    To make it work in KDE's lock screen, also add the same line at the beginning of /etc/pam.d/kde

but I have no such file! I was directed to that wiki entry from this one, but I also don't have an /etc/pam.d/system-local-login file, and my attempts to add the code to the sudo file to test the waters have not worked so far. I think I might be adding the line in the wrong place in the file, or using spaces where I should use tabs. Does this sound plausible? Thank you in advance!
I found the answer myself! Here it is. In case that link goes dead, here is the text:

Install the applications needed:

    sudo apt install -y fprintd libpam-fprintd
    sudo pam-auth-update

Once the install finishes, open /etc/pam.d/common-auth for editing:

    $ sudoedit /etc/pam.d/common-auth

The file already contains this line:

    auth [success=1 default=ignore] pam_unix.so nullok_secure

Modify the file by adding the pam_fprintd.so line, making sure the order of these lines is the same as shown here:

    auth [success=2 default=ignore] pam_fprintd.so max_tries=1 timeout=10
    auth [success=1 default=ignore] pam_unix.so nullok_secure

Save. Finally, enroll your fingerprint with the following command:

    fprintd-enroll $USER

After running the command, swipe your finger across the reader 3 times to enroll your fingerprint. But note: the login screen still behaves incorrectly.
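Independent of the PAM changes, enrollment and matching can be sanity-checked from a terminal first; these commands ship with the fprintd package (exact output wording varies between versions):

```shell
fprintd-list "$USER"     # show which fingers are currently enrolled
fprintd-enroll "$USER"   # enroll a finger (several swipes/touches)
fprintd-verify "$USER"   # ask the reader for a match; useful before blaming PAM
```

If fprintd-verify cannot match the enrolled finger here, no PAM configuration will make fingerprint login work.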
How do I implement `fprintd` into login in Kubuntu?
1,512,206,558,000
The command ssh-keygen -lf /etc/ssh/ssh_host_rsa_key.pub prints the 128-bit fingerprint of the RSA key. What is the command to get the 160-bit fingerprint of an RSA key?
The key fingerprint is a hash of the key material. In a public key file, the key material is the second whitespace-separated field on the line, encoded in base64. The display format for the fingerprint depends on the hash that's being used. The 128-bit fingerprint uses MD5 and is displayed in hexadecimal. For example, the following commands display the same fingerprint, with different punctuation and surrounding material:

    ssh-keygen -f /etc/ssh/ssh_host_rsa_key.pub -l -E md5
    </etc/ssh/ssh_host_rsa_key.pub awk '{print $2}' | base64 -d | md5sum

The SHA256 fingerprint (256 bits) is displayed in Base64. Again, here are two commands to display the fingerprint:

    ssh-keygen -f /etc/ssh/ssh_host_rsa_key.pub -l -E sha256
    </etc/ssh/ssh_host_rsa_key.pub awk '{print $2}' | base64 -d | openssl sha -sha256 -binary | base64

If you need a 160-bit fingerprint, it's using SHA-1, which was never commonly supported (I think SHA-1 wasn't introduced as an alternative to MD5 until a time when SHA-1 itself was deprecated). Current versions of OpenSSH don't support it, but you can use either of the alternative methods above with sha1 instead of md5 or sha256, depending on whether you need the hex or base64 format.
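The same fingerprints can be computed without ssh-keygen at all; a minimal Python sketch, assuming the base64 key material (the second field of the .pub line) has already been extracted:

```python
import base64
import hashlib


def md5_fingerprint(b64_key: str) -> str:
    """Hex MD5 fingerprint with colon separators, as old ssh-keygen printed it."""
    digest = hashlib.md5(base64.b64decode(b64_key)).hexdigest()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))


def sha256_fingerprint(b64_key: str) -> str:
    """Unpadded base64 SHA256 fingerprint, as modern ssh-keygen prints it."""
    digest = hashlib.sha256(base64.b64decode(b64_key)).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")


def sha1_fingerprint(b64_key: str) -> str:
    """160-bit SHA-1 fingerprint in hex; not offered by current ssh-keygen."""
    digest = hashlib.sha1(base64.b64decode(b64_key)).hexdigest()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))
```

For the hex formats, swapping hashlib.md5 for hashlib.sha1 is the entire difference between the 128-bit and the 160-bit fingerprint.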
160-bit fingerprint of RSA key
1,512,206,558,000
$ ssh 192.168.29.126 The authenticity of host '192.168.29.126 (192.168.29.126)' can't be established. ECDSA key fingerprint is SHA256:1RG/OFcYAVv57kcP784oaoeHcwjvHDAgtTFBckveoHE. Are you sure you want to continue connecting (yes/no/[fingerprint])? What is the "fingerprint" it is asking for?
The question asks whether you trust and want to continue connecting to the host that SSH does not recognise. It gives you several ways of answering:

    yes: you trust the host and want to continue connecting to it.
    no: you do not trust the host, and you do not want to continue connecting to it.
    [fingerprint]: you may paste in the fingerprint, i.e. the hash of the host's key, as the reply to the question. If the pasted fingerprint is the same as the host's fingerprint (as discovered by SSH), then the connection continues; otherwise, it's terminated.

The fingerprint that answers the question in the affirmative is the exact string shown in the actual question (SHA256:1RG/OFcYAVv57kcP784oaoeHcwjvHDAgtTFBckveoHE in your case). If you have stored this fingerprint elsewhere, it's easier to paste it in from there than to compare the long string by eye. In short: the third answer alternative provides a convenient way to verify that the fingerprint for the host is what you think it should be. Using the fingerprint to answer the question was introduced in OpenSSH 8.0 (in 2019). The commit message reads:

    Accept the host key fingerprint as a synonym for "yes" when accepting an unknown host key. This allows you to paste a fingerprint obtained out of band into the yes/no prompt and have the client do the comparison for you. ok markus@ djm@
What is the fingerprint ssh is asking for?
1,512,206,558,000
I am using a fingerprint scanner with the python-validity driver. According to fprintd(1), fingerprint data is stored in /var/lib/fprint/ after enrolling a fingerprint with fprintd-enroll. But this directory is empty on my machine, although scanning works properly. What I want is to share fingerprint data between two users, because one of them always gets his fingerprint recorded better. If fingerprints were stored as files, I could just create a symlink between them, but that doesn't seem to be the case. So where can I find the fingerprint data for a given user, and how can I copy it to be used by another user?

    OS: Linux 5.11.11-arch1-1 x86_64
    Fingerprint reader: ID 06cb:009a Synaptics, Inc. Metallica MIS Touch Fingerprint Reader

    $ fprintd-list $USER
    found 1 devices
    Device at /net/reactivated/Fprint/Device/0
    Using device /net/reactivated/Fprint/Device/0
    Fingerprints for user <USER> on DBus driver (press):
    - #0: WINBIO_ANSI_381_POS_RH_INDEX_FINGER
I found your question on Google while trying to solve the same problem. fprintd does in fact store fingerprints in /var/lib/fprint/, though maybe it stores fingerprints differently for your fingerprint sensor. With regard to copying a registered fingerprint to a different user: that is deliberately not supported by fprintd, as not all fingerprint sensors support it, and the developers intend for a fingerprint to uniquely identify the user. This is explained in a code comment here. With that said, I wanted to be able to use the same finger to authenticate multiple users, so I went and found a way to copy my fingerprint to a separate user account on my computer. However, it's not as simple as just copying the fingerprint file: the way the data format works, you need to register your username inside the fingerprint data file. It's not possible to just symlink the file, and libfprint does not provide any sort of command-line tool for this. To solve this, I have written a Python script that interfaces with the fprintd library in order to copy fingerprint files. Try it and let me know if it works for you.
Where does fprint store fingerprints
1,512,206,558,000
In SSH server and client authentication, key fingerprints are presented in different ways, even using the same command ssh-keygen -lf (on different hosts or for different keys).

Representation 1:

    $ ssh-keygen -lf /etc/ssh/ssh_host_ed25519_key.pub
    256 SHA256:3RE3UrGaTAec8H4YnZG7JTlfXpKvl89iexdqzLCyffY root@hostname1 (ED25519)

Representation 2:

    $ ssh-keygen -lf /etc/ssh/ssh_host_ed25519_key.pub
    d0:21:3e:ec:52:ff:19:a9:e7:71:b5:7f:63:23:57:f7

(example from this page)

Representation 3:

    AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHVo5+sYnRQxerJjG/DmUzQFso+CGzcnGT/SDa457qQqh6WIquvWOIXIY5gNPZoOByAoriK+WRxgTT39hYFmpXE=

from

    $ ssh-keygen -H -F hostname2
    |1|/DmY6Hm8TdZogykndJOUacp2NaM=|uM+t3vLw3KRySPUeXNqBLCxaGtY= ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHVo5+sYnRQxerJjG/DmUzQFso+CGzcnGT/SDa457qQqh6WIquvWOIXIY5gNPZoOByAoriK+WRxgTT39hYFmpXE=

which is the line in the .ssh/known_hosts file corresponding to hostname2. What is the difference between them? And, if they are equivalent, how does one get each representation from the others? Representations 1 and 3 have been obtained using OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017 on Ubuntu 18.04.
Older versions of the ssh-keygen utility from OpenSSH displayed only MD5 hashes; the utility now defaults to displaying a SHA256 hash, although you can still select an MD5 hash using the -E option:

    user@host:~/.ssh$ ssh-keygen -E md5 -l -f samplekey
    2048 MD5:e6:1f:73:0f:14:cb:9a:71:2f:3b:31:b7:3f:58:1c:52 user@host (RSA)
    user@host:~/.ssh$ ssh-keygen -E sha256 -l -f samplekey
    2048 SHA256:Oyt9H15ZBmITbhljpSiE/BLreo/+j+6lsC3gClGI97U user@host (RSA)
    user@host:~/.ssh$ ssh-keygen -B -l -f samplekey
    2048 xomiz-lozad-ruzin-lasuz-vibic-fydar-hecoh-mapuv-vytus-futah-maxox user@host (RSA)

In addition, you can add the -v (visual) flag on either the MD5 or SHA256 hash to get an ascii-art comparison image in addition to an alphanumeric hash:

    user@host:~/.ssh$ ssh-keygen -E sha256 -l -v -f samplekey
    2048 SHA256:Oyt9H15ZBmITbhljpSiE/BLreo/+j+6lsC3gClGI97U user@host (RSA)
    +---[RSA 2048]----+
    | . .. =.. |
    |.. +. + * |
    |o o .+. . O . |
    | o . .o... o o . |
    |. ..E.S o|
    | . . . . + |
    |. . o..o . . o |
    | . o +=.*.. o |
    | .. o+BXo..o |
    +----[SHA256]-----+

Your third representation is not a fingerprint, but the public key, base-64 encoded, as it will be stored in a samplekey.pub file or in the known_hosts file on a system accepting that key. There is no way to determine the key from the hash; to obtain the hash from the key, use the ssh-keygen utility either with its default options, or using the -E, -B, and/or -v options to get the output style you prefer. To obtain the fingerprint of a key in a known_hosts file (rather than in the original public key file, as per the examples above), you could pipe a string containing the key type and the key directly to ssh-keygen:

    $ echo "ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHVo5+sYnRQxerJjG/DmUzQFso+CGzcnGT/SDa457qQqh6WIquvWOIXIY5gNPZoOByAoriK+WRxgTT39hYFmpXE=" | ssh-keygen -l -f -
    256 SHA256:wOxOBgRQp1qQcnTIjgmE/GB8+3fm8ahyDXuL/2GzgIo no comment (ECDSA)
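Representation 3 is not opaque, by the way: the first length-prefixed field of the decoded blob is the algorithm name itself (an RFC 4251 "string"), so both the key type and the SHA256 fingerprint can be recovered in a few lines of Python. A sketch:

```python
import base64
import hashlib
import struct


def describe_key_blob(b64_blob):
    """Return (key_type, sha256_fingerprint) for a base64 public-key blob.

    The decoded blob is a sequence of RFC 4251 'string' fields, each a
    big-endian uint32 length followed by that many bytes; the first field
    is the algorithm name, e.g. ecdsa-sha2-nistp256.
    """
    raw = base64.b64decode(b64_blob)
    (length,) = struct.unpack(">I", raw[:4])       # length of the first field
    key_type = raw[4:4 + length].decode("ascii")
    digest = base64.b64encode(hashlib.sha256(raw).digest()).decode().rstrip("=")
    return key_type, "SHA256:" + digest
```

Running it on the blob from the question returns the type ecdsa-sha2-nistp256 and the same SHA256 fingerprint that ssh-keygen prints for that key.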
Several representations for key fingerprint
1,512,206,558,000
I tried to use the fingerprint reader on my laptop yesterday, installing fprintd; it didn't work. I then removed all those packages. I then realized GNOME (and non-graphical sessions) doesn't ask for my password anymore at the login screen, whether I set it to ON or OFF in the GUI parameters. Could anyone tell me how to change this setting in a terminal?
Run the following command:

    sudo pam-auth-update

Select the 4 options, then validate:

    [*] Unix authentication
    [*] Register user sessions in the systemd control group hierarchy
    [*] GNOME Keyring Daemon - Login keyring management
    [*] Inheritable Capabilities Management
Debian: how to set login behaviour (ask password)
1,512,206,558,000
I have a Lenovo Thinkpad T440s, and everything is working pretty much out-of-the-box on Fedora 25, except for the fingerprint scanner. It is detected automatically, and flashes when necessary, and seems to "work", however it never matches my fingerprint (it always says that it is not a match). Is there anything I could try? I also saw that this scanner (by Validity) is not officially supported by fprint.
The current release of libfprint (0.6.0) does not work well with the sensors used in recent Lenovo laptops. On my T550, it only asks me to swipe once when enrolling a finger and then fails to recognize it 9 times out of 10. If I update to the version at git HEAD, it asks me to swipe my finger 5 times, and recognition works much better. So you should probably try to compile git HEAD yourself and try again. I would also suggest you open a bug on the Fedora bug tracker.
Fingerprint scanner is detected and seems to be functional, but never matches fingerprint
1,512,206,558,000
I use Fedora 34. I love what fprint is doing when it allows me to authenticate using my fingerprints on my ThinkPad's built-in fingerprint reader, but when I'm in my office, I like to dock my laptop and close the lid. I will use my external monitor, keyboard, and mouse during that time. However, there are times that I need to authenticate, and it asks for my fingerprint. Since the fingerprint reader isn't easily accessible, I'd rather simply switch to typing my password. Do I have to actually disable fprint in order to make authentication functional when I don't have access to the fingerprint reader, or is there a way to get it to fallback to password typing?
Sorry to disappoint you but, paraphrasing /usr/share/doc/fprintd-pam/README:

    Known issues:
    * pam_fprintd doesn't support entering either the password or a fingerprint
How can I type a password when fprint is enabled?
1,512,206,558,000
When I use lsusb I can find the fingerprint device; its name is "Generic Goodix fingerprint device", and my laptop is a Lenovo ThinkPad 530s. Considering that lsusb can find it, why can't fprint find it? I just want to use this fingerprint reader for login authentication. I am using Parrot Security.
In the USB standards, there is a well-defined way to get Vendor ID and Product ID numbers from any USB device. lsusb just looks them up in a big table and displays the human-readable texts associated with those entries. The table is typically located at /usr/share/misc/usb.ids or /var/lib/usbutils/usb.ids. fprint has a much more complicated task: since there doesn't seem to be a standard USB protocol for fingerprint readers, it must know exactly what model-specific messages to send to the reader and how to interpret the answers it gets. So it only looks for devices it knows how to talk to. Unfortunately, it looks like the Goodix fingerprint device is not at all related to the devices currently supported by fprint, and uses a different protocol. But it looks like Antonio Ospite is in the process of analyzing the protocol of at least some Goodix fingerprint reader(s). You might also check this GitHub page and the links mentioned on it for similar reverse-engineering efforts. If you're at all familiar with programming, you might try compiling Antonio's test program from the GitLab page linked above and seeing if it works with your fingerprint reader. You could also contact the developer of that program and offer your help: at the very least, you could test Antonio's new versions with your hardware and capture USB traffic for comparative analysis. Perhaps by working together with Antonio (and maybe others that may have done the same) you might figure out the protocol so that fprint support can then be implemented.
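The lookup lsusb performs really is just a table scan, and can be imitated in a few lines. A sketch that parses the usb.ids format (vendor lines start in column 0, device lines are indented with one tab), demonstrated here on a tiny inline sample rather than the real file; the product ID and its name in the sample are hypothetical, only the Goodix vendor ID is a real entry:

```python
def parse_usb_ids(text):
    """Parse usb.ids-format text into a dict.

    Keys are (vendor_id, product_id) for devices and (vendor_id, None)
    for vendors; values are the human-readable names.
    """
    table = {}
    vendor = None
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue
        if line.startswith("\t\t"):          # interface/protocol sub-entries
            continue
        if line.startswith("\t"):            # device line under current vendor
            if vendor is not None:
                dev_id, name = line.strip().split(None, 1)
                table[(vendor, dev_id)] = name
        else:                                # vendor line, or trailing class tables
            vendor, name = line.split(None, 1)
            if len(vendor) != 4:             # e.g. "C 00  ..." class section: stop
                break
            table[(vendor, None)] = name
    return table


# Tiny excerpt in usb.ids format; 27c6 is Goodix, the device entry is made up.
SAMPLE = (
    "27c6  Shenzhen Goodix Technology Co.,Ltd.\n"
    "\tabcd  Example Fingerprint Device\n"
)
table = parse_usb_ids(SAMPLE)
```

This is exactly why lsusb can "find" the device while fprint cannot: naming a device needs only this table, while driving it needs a protocol implementation.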
fprint doesn't find my laptop's fingerprint device but lsusb does?
1,512,206,558,000
I would like to fingerprint some Unix (mostly Debian) systems. By fingerprinting I mean running a script that collects hardware identifiers, system version, etc., in order to be able to: accurately identify machines; detect hardware and software modifications across machines. I know that I can use a couple of commands to track down meta-information, such as udev, uname, etc. My questions are: are there packages that perform such actions? If not, what must I collect in order to achieve this accurately?
On Debian-derived systems, for hardware information use lshw, hwinfo, udevadm, hdparm, inxi (this one needs installing first), etc. To accurately identify machines you may try to use the system serial number, vendor, model, the MAC addresses of the network controllers, the serial number of the hard disk, etc. (You may only try, because some machines are virtual...) There is a tutorial with screenshots at http://www.binarytides.com/linux-commands-hardware-info/ and generally Google is not short of suggestions. You may want to record hostname, uname -a, lsb_release -cdr and /etc/machine-id. Also df may come in handy. For installed packages use dpkg-query; for example,

    dpkg-query --list | grep '^ii' | awk '{print $2 " " $3 " (" $4 ")"}'

produces a list beginning with

    a11y-profile-manager-indicator 0.1.10-0ubuntu3 (amd64)
    account-plugin-facebook 0.12+16.04.20160126-0ubuntu1 (all)
    account-plugin-flickr 0.12+16.04.20160126-0ubuntu1 (all)
    account-plugin-google 0.12+16.04.20160126-0ubuntu1 (all)
    accountsservice 0.6.40-2ubuntu11.3 (amd64)
    acl 2.2.52-3 (amd64)
    ... etc. etc. ...

Other Linux variants have their own mechanisms to collect system information; for example, Red Hat Enterprise Linux has sosreport. Most if not all proprietary UNIX systems have dedicated tools to collect system information, including hardware configuration and installed software. For example, HP-UX has a nice /opt/ignite/bin/print_manifest command.
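As a minimal sketch of the "what to collect" part, the identifiers mentioned above can be concatenated and hashed into one comparable value per machine. The chosen sources are examples, not a standard, and each read is allowed to fail quietly since VMs and containers lack some of them:

```shell
#!/bin/sh
# Reduce a machine's stable identifiers to a single comparable fingerprint.
{
  uname -srm                                            # kernel, release, arch
  uname -n                                              # hostname
  cat /etc/machine-id 2>/dev/null                       # systemd machine ID
  cat /sys/class/dmi/id/product_serial 2>/dev/null      # may require root
  ip link 2>/dev/null | awk '/link\/ether/ {print $2}'  # MAC addresses
} | sha256sum | awk '{print $1}'
```

Store the hash per host; a changed hash flags a hardware or configuration change, and the raw field list (kept alongside) tells you which one.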
How to fingerprint a Unix System [closed]
1,351,250,142,000
I find this a highly annoying "feature" on a widescreen monitor: my most-used apps, terminal and gedit, always open directly under the top-left corner of my screen, and I have to drag them to eye position every single time. I have tried installing the CompizConfig Settings Manager and using the feature to position windows in the centre, but this has had no effect; the force feature here isn't working for me either: Window Management -> "place windows" -> Fixed Window Placement -> Windows with fixed positions, for example: gedit 200 200, keep-in-work-area set to yes. I can use e.g. gnome-terminal --geometry=140x50+50+50 for the terminal, but this doesn't work for gedit. Any ideas? Thanks
Since GNOME 3.30 there is a visible option in GNOME Tweaks, which makes this much easier to enable: just select "Center New Windows" under "Windows". Before that, I found a solution for GNOME without Compiz: either use this extension if you have gnome-shell < 3.14, or open dconf and set center-new-windows in org.gnome.mutter to true.
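The same mutter key can also be flipped from a terminal with gsettings, which avoids opening a dconf editor (run inside the GNOME session whose setting you want to change):

```shell
gsettings set org.gnome.mutter center-new-windows true
gsettings get org.gnome.mutter center-new-windows   # confirm the new value
```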
Gnome - windows always open top left
1,351,250,142,000
To open a file to edit in gedit I run gedit sample.py &. But with Sublime Text it is simply subl sample.py. This opens the file to edit and it doesn't keep running in my shell's foreground. How would I do that with gedit? I tried exec /usr/bin/gedit "$@" (copied from /usr/bin/subl) but it works like gedit &. Or an alias like ged="gedit $file &" should do. What can I substitute for $file in the alias?
You could use this function:

    gedit() { /usr/bin/gedit "$@" & disown ;}

It:

    makes a function which can be called as gedit;
    launches gedit (using the full path /usr/bin/gedit), passing along all the arguments/files given to it via "$@" (quoted, so file names containing spaces survive);
    & sends it to the background, and disown detaches it from the terminal/shell.
How to launch gedit from terminal and detach it (just like "subl" command works)?
1,351,250,142,000
I want to have a shortcut key for duplicating the currently selected line in gedit. Many other editors use Ctrl+D or Ctrl+Shift+D for that, but gedit is different. Here is the default behaviour:

    Ctrl+D: removes a line
    Ctrl+Shift+D: opens the GTK inspector

I am fine with both current behaviours as long as the other hotkey does the thing I actually want. I saw this answer where it is shown that you can actually patch the gedit binary. However, I don't want to do this, as patching binaries is probably the worst kind of workaround you can do (think of updates and binary changes). Additionally, in that question only the "delete line" shortcut was removed, and the "duplicate line" shortcut was added with a plugin which does not exist anymore. So how can I get the "duplicate this line" behaviour into gedit?
The plugin mentioned in the comments and the other answer has recently been updated, and after installing it you should be able to use Ctrl+Shift+D to duplicate either a line or a selection. I've tested it in gedit 3.18.3 on Ubuntu 16.04, but it should work in any version >= 3.14.0. That's somewhat uncertain, though, because the gedit devs are not shy about introducing breaking changes in minor versions (or they follow something other than semantic versioning), and there seems to be no up-to-date documentation for plugin development.
How to get a "duplicate line" hotkey in gedit?
1,351,250,142,000
I am having a dilemma whether to edit a javascript file or not. When I open it with gedit, it shows the following warning: The file you opened has some invalid characters. If you continue editing this file you could corrupt this document. You can also choose another character encoding and try again. The current encoding is UTF-8. As the file has over 100,000 lines of code, is there a quick way to scan for the invalid characters?
As the file is UTF-8, you could run isutf8, from the moreutils package. It gives you the line, character and byte offset of bad bytes. Then use xxd, hexdump or the like to analyze further. Unfortunately it stops at the first invalid byte; but then again, it depends on the file — could be there is only one bad byte ;) I have some C code that does a similar analysis for an entire file; it is on a disk somewhere, long forgotten. I could try to find it if needed. Otherwise, yes, the quick and not-that-dirty way would be to do a diff against a copy saved with gedit, as proposed by the good mr. @vonbrand.
How do I scan for invalid characters on gedit?
1,351,250,142,000
Gedit 3.14 in Debian 8 has no window-manager decoration, and the window cannot be resized. Do I need to install an additional package to make it work, or has Gedit become unusable outside of the GNOME desktop? I use the window manager Blackbox. Edit: window resizing works in Openbox.
Pluma is a Gedit fork without client-side decorations, which means it includes the usual window borders and title bar.

    # apt-get install pluma

Below is a screenshot with the window manager Blackbox.
Cannot resize Gedit window
1,351,250,142,000
I'm searching for an editor with the ability to spell-check two languages at the same time (German and English). Gedit can't do it out of the box, but I want to use Gedit. It should be possible by merging the English and German dictionaries and selecting the created file under Tools -> Set Language...

Edit: I've almost got it (some warnings remain in step 6, and e.g. zzgl. and other words with a dot at the end are not spell-checked) :) Thanks to your post, Kevin Atkinson :) (I may add that for English I used aspell6-en-7.1-0.tar.bz2 (already the updated version), but for German I used http://extensions.services.openoffice.org/project/dict-de_DE_frami because it's more up to date. Extract this .oxt using unzip. In de_DE_frami are the two needed affix and dictionary files, de_DE_frami.aff and de_DE_frami.dic. Rename de_DE_frami.aff to qed_affix.dat and de_DE_frami.dic to de.txt. Note: for German there is also an extension, http://extensions.services.openoffice.org/de/project/DFEW, which I'm going to merge later too.)

In step 6 I had to use

    $ cat de_en | aspell --encoding=utf-8 create master -l ./qed ./qed.rws

because all üäöß... were skipped and were not in the created dictionary, but now they are there:

    $ aspell -d ./qed.rws -a

(Testing: typing e.g. "Käse" or "zweiunddreißig" prints an *, so it finds these words now.)

File sizes:

    $ du -b qed*
    18725 qed_affix.dat
    103 qed.dat
    12 qed.multi
    6763456 qed.rws

Lines:

    $ wc -l qed* de_en
    6 qed.dat
    1 qed.multi
    54403 qed.rws
    717 qed_affix.dat
    334974 de_en

Download of this package in case someone needs it. Most warnings I still see at step 6 look like this (last 5 lines):

    $ cat de_en | aspell --encoding=utf-8 create master -l ./qed ./qed.rws
    ...(many warnings appear)
    Warning: Removing invalid affix 'o' from word zytostatika.
    Warning: Removing invalid affix 'z' from word zytostatika.
    Warning: Removing invalid affix 'o' from word zytostatikum.
    Warning: Removing invalid affix 'z' from word zytostatikum.
    Warning: The word "zzgl." is invalid. The character '.' (U+2E) may not appear at the end of a word. Skipping word.

Now I removed aspell-en and aspell-de ($ sudo apt-get remove aspell-en aspell-de) and put all four files (qed.dat, qed.multi, qed.rws and qed_affix.dat) into /usr/lib/aspell. My /var/lib/aspell is empty, BTW. The new package can now be selected: gedit -> Tools -> Set Language... -> Unknown (qed). Regarding the warnings in step 6: zzgl.-style words with a dot at the end are not spell-checked. I tried playing with the qed.dat special ' -*- line by adding a ., because de.dat uses a . there too, but it unfortunately didn't work.

Edit: solved the spell-checking of words with a dot at the end by grepping those words only and adding them to the spell-checking white-list:

    $ cat de_en | sed -n '/\./{s/[./]\+[^./]*$//;p}' >> /home/<username>/.config/enchant/qed.dic
Aspell author here. As I said in an earlier answer, you can't just combine dictionaries from different languages and expect it to work: you need to create a new language that combines the features of the two original languages. Fortunately, for English and German this is fairly easy; however, suggestion quality will suffer for English words, since we will disable the use of soundslike lookup.

1. Install the aspell-en and aspell-de dictionary packages.

2. Go to an empty directory to keep everything clean. Also, to avoid any charset issues, change the locale to "C" by setting LC_ALL=C.

3. Dump the English and German dictionaries into plain word lists:

    aspell dump master en > en.txt
    aspell dump master de > de.txt

4. Combine en.dat and de.dat, which you can generally find in /usr/lib/aspell. The English dictionary uses soundslike lookup, but that won't work with the German dictionary (it is English-specific and, more importantly, incompatible with affix compression), so we will disable it. The English dictionary doesn't use affix compression, but the German dictionary does, so we will just use the affix file from the German dictionary (this avoids having to expand the German dictionary and thus increase its size). We will call the language qed: 'q' since very few languages start with q, 'e' for English, and 'd' for German (Deutsch). (The language name should generally be 2 to 3 letters, but aspell doesn't really care, so en-de or some other name might work; a 2- or 3-letter name is guaranteed to work.) The file will be named qed.dat and contain the following:

    name qed
    charset iso8859-1
    special ' -*-
    soundslike none
    affix qed
    affix-compress true

5. Copy de_affix.dat into the current directory and rename it qed_affix.dat.

6. Create the combined dictionary:

    cat en.txt de.txt | aspell create master -l ./qed ./qed.rws

7. Create the file qed.multi containing:

    add qed.rws

8. Test the dictionary by using -d ./qed; the ./ is needed to force aspell to search the current directory.

9. Install qed.dat, qed.rws, qed.multi and qed_affix.dat somewhere where aspell will find them. Please see the manual for info on how aspell searches for dictionary and language data files.

10. Done. Everything should work now.

A more sophisticated solution would enable some form of soundslike lookup for better suggestion quality, but that requires special care when used with affix compression (see the Aspell manual for details). As an alternative, the German dictionary can be expanded and the English soundslike lookup used, but that might not work so well on German words.

The case of combining English and German was easy because both use the same charset (iso-8859-1) and because only one language used affix compression. Combining other languages will take more work, but it is possible once you know what you are doing. I spelled out the steps here in detail to give readers some idea of how Aspell works, so that a similar approach can be used for other language combinations. If both languages use affix compression, either the affix files will need to be combined so that there are no conflicting flags, or one of the dictionaries will need to be expanded. If the two languages use different 8-bit charsets, then a compatible charset that can support both languages needs to be used; if a standard one doesn't exist, a new one can be created. To avoid confusion, the wordlists should be converted to utf-8, and Aspell should be instructed to expect all input and output in utf-8 instead of the charset used internally, which for historical reasons is the default.
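For readers who just want to replay the recipe, the steps above can be sketched as one script. This is a sketch under assumptions: aspell plus the aspell-en and aspell-de dictionary packages are installed, and de_affix.dat lives in /usr/lib/aspell (the path varies between distributions):

```shell
#!/bin/sh
set -e
export LC_ALL=C                 # step 2: avoid charset surprises
mkdir -p ~/qed-build && cd ~/qed-build

# step 3: dump both installed dictionaries to plain word lists
aspell dump master en > en.txt
aspell dump master de > de.txt

# step 4: combined language definition (soundslike disabled,
# German affix table reused under the new language name)
cat > qed.dat <<'EOF'
name qed
charset iso8859-1
special ' -*-
soundslike none
affix qed
affix-compress true
EOF

# step 5: assumed location of the German affix file
cp /usr/lib/aspell/de_affix.dat qed_affix.dat

# step 6: compile the combined dictionary
cat en.txt de.txt | aspell create master -l ./qed ./qed.rws

# step 7: the .multi file just points at the compiled dictionary
echo 'add qed.rws' > qed.multi

# step 8: smoke test; "aspell list" prints only unknown words,
# so empty output means the word was found
echo hello | aspell -d ./qed list
```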
Gedit or an other non-commandline editor with the ability to spell-check two languages at the same time
1,351,250,142,000
I need to add a new language to Gedit. The problem is: it is now included in Gedit's menu of languages, but its syntax is not highlighted, and Gedit is not able to identify the language just from the file suffix. I've created both a .lang file and an XML file describing the MIME type:

    LANG file: /usr/share/gtksourceview-3.0/language-specs/test.lang
    MIME-type file: /usr/share/mime/packages/test.xml

After creating them I've updated the MIME database:

    sudo update-mime-database /usr/share/mime

Next attempts:

    1) I've tried even copying the test.xml file to the /usr/share/mime/applications folder instead of /usr/share/mime/packages, but it had no effect.
    2) I've tried to put the MIME type into /etc/mime.types as text/x-test test, and it had no effect either.

test.lang:

    <?xml version="1.0" encoding="UTF-8"?>
    <language id="test" _name="Test" version="1.0" _section="Source">
      <metadata>
        <property name="mimetypes">text/x-test</property>
        <property name="globs">*.test</property>
        <property name="line-comment-start">//</property>
        <property name="block-comment-start">/*</property>
        <property name="block-comment-end">*/</property>
      </metadata>
      <styles>
        <style id="comment" _name="Comment" map-to="def:comment"/>
        <style id="keyword" _name="Keyword" map-to="def:keyword"/>
      </styles>
      <definitions>
        <context id="if0-comment" style-ref="comment">
          <start>\%{preproc-start}if\b\s*0\b</start>
          <end>\%{preproc-start}(endif|else|elif)\b</end>
          <include>
            <context id="if-in-if0">
              <start>\%{preproc-start}if(n?def)?\b</start>
              <end>\%{preproc-start}endif\b</end>
              <include>
                <context ref="if-in-if0"/>
                <context ref="def:in-comment"/>
              </include>
            </context>
            <context ref="def:in-comment"/>
          </include>
        </context>
        <context id="keywords" style-ref="keyword">
          <keyword>hello</keyword>
          <keyword>hi</keyword>
        </context>
        <!--Main context-->
        <context id="test">
          <include>
            <context ref="if0-comment"/>
            <context ref="keywords"/>
          </include>
        </context>
      </definitions>
    </language>

test.xml:

    <?xml version="1.0" encoding="utf-8"?>
    <mime-info xmlns="http://www.freedesktop.org/standards/shared-mime-info">
      <mime-type type="text/x-test">
        <sub-class-of type="text/plain"/>
        <comment xml:lang="en">TEST language document</comment>
        <comment xml:lang="cs">Dokument v jazyce TEST</comment>
        <glob pattern="*.test"/>
      </mime-type>
    </mime-info>
Finally I've figured it out. According to the GtkSourceView language definition reference, the version attribute in the <language> tag should be 2.0, and it really works. So the proper <language> tag is this:

    <language id="test" _name="Test" version="2.0" _section="Source">
Adding new language to Gedit
1,351,250,142,000
I'm using RHEL. I want to insert an image in gedit, but I'm not able to do so. Also, I'm unable to install LibreOffice because they are asking for a subscription. Is there an alternative to gedit, or can we insert an image in gedit?
No: gedit is strictly a text editor, and does not provide a method for inserting or embedding images within its files. I think you're getting confused on the subscription details. The subscription prompt is probably because you're using RHEL; LibreOffice is a free product, and you can download the RPMs directly from the project's website and install them manually. See this page for the RPMs: http://www.libreoffice.org/download/?version=4.1.4&lang=en-US&type=rpm-x86_64 For downloading and installation, take a look at the extensive documentation that already exists on the LibreOffice website; it'll take you through the various installation steps: https://wiki.documentfoundation.org/Installing_LibreOffice_on_Linux
Is it possible to insert image in gedit in Linux?
1,351,250,142,000
I was trying to edit my sources.list in order to add local mirror information. I am not comfortable with command line editors, so I tried using sudo mousepad /etc/apt/sources.list. I got the following error report:

No protocol specified
(mousepad:4942): Mousepad-ERROR **: Cannot open display:

I tried several other editors such as gedit, kwrite etc., but I get similar error reports:

No protocol specified
** (gedit:4957): WARNING **: Could not open X display
No protocol specified
Unable to init server: Could not connect: Connection refused
(gedit:4957): Gtk-WARNING **: cannot open display: :0

I am on a local 64-bit system running Debian Jessie.
You shouldn’t run an editor as root to edit system files, you should use sudoedit (especially since you have sudo set up already). That will make a copy of the file that you can edit, open it in the editor of your choice, wait for you to finish editing it, and, if you make changes to it, copy it back over the system file. In a little more detail, you’d run something like:

SUDO_EDITOR="gedit -w" sudoedit /etc/apt/sources.list

This will:

1) check that you’re allowed to edit the file (according to the sudo configuration in /etc/sudoers; yours should be OK already);
2) copy /etc/apt/sources.list to a temporary file and make it editable for you;
3) start gedit with the temporary file;
4) wait for you to close the file (this is why we need the -w option);
5) check whether you made changes to the temporary file, and if so, copy it over the original file.

You can set SUDO_EDITOR up permanently in your shell’s startup files (e.g. ~/.bashrc). If it’s not defined, sudoedit will also check VISUAL and EDITOR. You can specify any editor you like, as long as it’s capable of waiting for an editing session to finish.
Cannot open GUI editors in superuser mode
1,351,250,142,000
Moving from OS X & Textmate to Ubuntu & gedit, the one feature of Textmate I am missing is its command-line tool. With mate I was able to open a folder as a Textmate project using mate . from within the required directory. This is enormously useful as it speeds up my system navigation considerably. Is there a way of doing the same or similar with gedit?
This was answered on Superuser, Gedit open current directory from terminal - Ubuntu 10.10. Looks like the answers are still relevant.
open whole folder in gedit
1,351,250,142,000
How does one find and replace text in all open files with gedit?
This is not possible with a stock gedit; there's an open Ubuntu Brainstorm idea for adding the ability. However, there are plugins that add it, such as advanced-find. If you install that, one of the sections on the "Advanced Find/Replace" dialog is "Scope"; choose "All Opened Documents".
How does one find and replace text in all open files with gedit?
1,351,250,142,000
I almost always use the Insert spaces instead of tabs feature in gedit. The one exception is when writing a Makefile which requires tabs. I don't suppose there is a way to make this option dependent on the syntax being used? I.e. automatically switch back to tabs when Makefile is detected.
There seem to be several ways to handle this.

Modelines

gedit has a modeline plugin. If you enable it you can use the Emacs modeline option indent-tabs-mode (or any other supported modeline option with the same effect). By setting that option to true you can make gedit indent with tabs for the file in question. So, to enable tab indentation in a Makefile, add the following line to it:

# -*- indent-tabs-mode:t; -*-

Makefiletab

There is a gedit plugin named Makefiletab which is said to "force the option spaces-instead-of-tabs off for all Makefiles." I do not know if it works though, as I have not tried it.
gedit: tabs or spaces dependent on syntax
1,351,250,142,000
I have installed Debian Linux, but gedit is not installed along with it. When I attempt to install gedit using apt-get, it prompts to download many packages, up to 370 MB, which is strange. I only want to install gedit in my GNOME environment, and do not want to reinstall packages that are already installed, like xorg-server, libwebkit, libboost, ... Do you have any idea how I can install only gedit?
The packages Debian is trying to install are dependencies of gedit: without them, gedit cannot run. It looks like apt-get wants to download the whole desktop environment, i.e. the GUI. Evidently you have a minimal install of Debian (without a GUI) and now it's asking you for the additional packages: gedit is a graphical editor, so it needs the X Window System to run. Most probably you don't want to install X, so you should rather use nano, vim, or another CLI text editor.
gedit installation size is too large
1,351,250,142,000
I'm trying to add the Python plugin trailsave to the text editor Pluma* (which is a Gedit fork), but the plugin doesn't show up in the "Active plugins" list in the Pluma preferences. Also, I have compiled Pluma with Python support. Any ideas?

$ cat ~/.config/pluma/plugins/trailsave.pluma-plugin
[Pluma Plugin]
Loader=python
Module=trailingspaces
IAge=2
Name=Trailing Spaces
Description=Makes trailing spaces very visible
Authors=Gustavo Noronha Silva <[email protected]>
Copyright=Copyright © 2012 Gustavo Noronha Silva
Website=https://gitorious.org/gedit-trailing-spaces

To my understanding, as long as the plugin configuration file above is correct, the plugin should be displayed in the list. My next task will be to rewrite the Python code for Pluma.

*The reason I switched to Pluma is that the client-side decorations used in Gedit in Debian 8 (stable) do not work with my window manager of choice (Blackbox).
After printing out the plugin path from the source I found out that Pluma looks for plugins in ~/.local/share/pluma/plugins. The .pluma-plugin configuration file is correct however.
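To illustrate, here is a minimal sketch of installing the plugin into the directory Pluma actually scans. The descriptor name is taken from the question; trailingspaces.py is an assumed file name, inferred from the Module=trailingspaces line:

```shell
# Create the per-user plugin directory that Pluma scans
mkdir -p ~/.local/share/pluma/plugins

# Copy the plugin descriptor and its Python module there
# (trailingspaces.py is assumed from the Module= line in the descriptor)
cp trailsave.pluma-plugin trailingspaces.py ~/.local/share/pluma/plugins/
```

After restarting Pluma, the plugin should appear in the "Active plugins" list.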
How do I make Python plugins work in Pluma?
1,351,250,142,000
When I open files by double-clicking a file with the mouse I always get one additional "Unsaved Document X", which is very annoying, because I have to close them all and click "Close without saving" every time... This happens in Dolphin, Nautilus and Krusader (those are the ones where I tried it, so I guess it's not because of a particular file manager). When I try opening a file from the terminal using gedit filename the problem is not there. It also does not happen if I open files from within gedit. Any hints on how to fix this? It started happening, I think, somewhere around the time when GNOME 3 came into the Arch official repos. (I use up-to-date Arch and KDE 4.6.)
Felrood from the Arch Linux forums provided a solution, and I would like to share it here and close this question. Gedit seems to display data from stdin in a new "Unsaved Document". For example:

echo "foobar" | gedit

What can be done is this: right-click the Kmenu button -> edit applications -> find gedit there (for me that is "utilities") -> put gedit $1 < /dev/null in gedit's command field -> save.

For me that solved the problem no matter whether I use Krusader, Dolphin, Alt+F2 or something else.
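A quick way to see the behaviour and the fix from a terminal. The wrapper function is a sketch applying the same /dev/null trick as the menu-entry edit, not something gedit ships itself:

```shell
# gedit turns any data arriving on stdin into a new "Unsaved Document":
echo "foobar" | gedit    # opens an extra unsaved tab containing "foobar"

# Redirecting stdin from /dev/null means there is never piped data to show.
# A shell wrapper that applies the same fix for terminal use:
gedit() { command gedit "$@" < /dev/null; }
```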
Gedit opening an "Unsaved document" on opening files with mouse
1,351,250,142,000
I don't like gedit and always use geany or vim or something else. Removing it from my current Debian testing install also removes the cinnamon-desktop-environment meta package because it depends on gedit. This meta package: depends on all programs needed to have a fully fledged desktop environment. Install this if you want a complete cinnamon desktop, including graphical apps that allow users to perform almost all everyday tasks. After removing it, apt shows that 209 packages are no longer necessary (because nothing depends on them) including Pidgin, LibreOffice, Gnome System Monitor, etc. Next time I want to run autoremove for another uninstalled application, it would remove all of them. I guess the solution is to manually apt-get install the packages I actually want to have (i.e. Pidgin, LibreOffice, System Monitor, etc.), but there are also a bunch of packages of which I have no idea what they do. Many of them are probably dependencies of the aforementioned, but I don't know. I'd have to check them all manually. Some look unrelated, like t1utils (apt-cache rdepends doesn't show anything I recognize). Another "solution" is to apt-get install the whole list of packages that it would otherwise autoremove, but that's an ugly hack because it would no longer know which packages were installed because of dependencies and which were installed because I want them to be installed. I could also modify cinnamon-desktop-environment (e.g. by creating my own .deb) in some way that it doesn't depend on gedit while maintaining the other dependencies. I'm not sure how exactly, but it doesn't sound too hard. The issue with this is that it probably does not update anymore when there are updates because I installed a custom version of it. Thinking about creating my own package, I looked into cinnamon-desktop-environment's dependencies. Surprise: it depends on firefox, which is not even available in Debian. And it depends on iceweasel, which I already uninstalled with no trouble. 
I don't understand. How do I go about removing gedit without messing up my desktop environment?
Removing a program just because you don't use it shows a misguided sense of priorities. Disk space is cheap. Gedit takes less than 2MB of disk space. Even at SSD RAID-1 prices, that costs less than ½¢. At the minimum wage in my country, it takes less than 2s to earn that much. It'll take you far more than 2 seconds to do this. The gains from removing the package are negligible — only the network and disk bandwidth when the package is installed. That being said, here's how you can do it. The cinnamon-desktop-environment package depends on the applications that are officially part of the Cinnamon desktop environment. Gedit is one of them. If you want to remove Gedit but keep the rest of Cinnamon, mark the dependencies of cinnamon-desktop-environment except gedit as manually installed, then remove cinnamon-desktop-environment. You take the responsibility of adding any component that might be added to Cinnamon in the future. You can use aptitude search to list the packages that Cinnamon depends on. aptitude unmarkauto $(aptitude -F %p search '~i ~Rcinnamon-desktop-environment !^gedit$') apt-get remove gedit Alternatively, you could make a fake gedit package that exists solely to resolve the dependency but doesn't contain the gedit binary. You can use equivs to make such fake packages. Note that some Cinnamon configuration may still believe that Gedit is present and attempt to call the nonexistent binary.
Removing gedit without removing all of cinnamon-desktop-environment
1,351,250,142,000
In the regex plugin of gedit, I use one regex to match/search and another for substitution. In the matching regex, I have only one group. In the substitution regex, I use \1 to refer to that group, and I would also like to add a zero right behind \1, but \10 is taken to mean the 10th group of the matching regex. How can I solve this problem? For example, in my original text there are cases where 0 was mistyped as o, such as 12o which should be 120. My matching regex is (\d+)o, and my substitution regex is \10, which is not interpreted the way I intend.
Assuming that plugin uses the same syntax as the Python regexp engine: use \g<1>0 as the replacement text.
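If the plugin does hand the pattern to Python's re engine, the difference between \10 and \g<1>0 is easy to check from a terminal (python3 assumed available):

```shell
# \10 would mean "group 10" (which doesn't exist here), while \g<1>0
# unambiguously means "group 1, followed by a literal 0":
python3 -c "import re; print(re.sub(r'(\d+)o', r'\g<1>0', 'lengths 12o and 45o'))"
# prints: lengths 120 and 450
```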
How to match group 1 in a regex followed by a 0 rather than matching group 10
1,351,250,142,000
I want to remove unnecessary whitespace from my CSS file. I am using grep with the following command:

$ grep -rn "[[:space:]]$"

Surprisingly, it returns a hit on every line in the file. I searched for instances of \t\n, \r\n and \n but could not find anything. How do I go about identifying the invisible whitespace and removing it?
Don't be surprised that your regexps with a \n don't match: The \n is the line separator, it's not in the line. Every line in your file ends with \n-- by definition.* You'll never find a \n inside a line. One possibility is that you're looking at a Windows file on Unix, and your mystery character is \r (NB not \r\n), which your grep is not recognizing as part of the EOL. To find out what your lines actually look like, use od -c. *Footnote for the nitpickers: Except possibly for the final line, and on very old Mac OS systems, etc., etc.
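A sketch of the diagnosis and cleanup on a synthetic sample file (grep -P and sed -i assume GNU tools, as found on most Linux systems):

```shell
# Simulate a CSS file saved with Windows (\r\n) line endings
printf 'body { color: red; }\r\np { margin: 0; }\r\n' > sample.css

# od -c makes the hidden \r visible right before each \n
od -c sample.css

# grep -P can match the bare carriage return at end of line
grep -cP '\r$' sample.css    # prints 2: both lines end in \r

# Strip all trailing whitespace, \r included, in place
sed -i 's/[[:space:]]*$//' sample.css
```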
How to identify and remove invisible whitespace characters on gedit?
1,351,250,142,000
I want to edit /etc/inittab in order to get a login prompt on the serial console once the system boots. By default, the inittab file complains about being read-only. I tried both gksudo gedit /etc/inittab and sudo vi /etc/inittab, and everything seemed to be properly configured. However, when I opened the file with gedit afterwards, I saw no difference. Any ideas?
Try this: open a terminal, then type su and enter your root password. After this:

vi /etc/inittab

In my case this works, but I'm using CentOS.
How to edit /etc/inittab?
1,351,250,142,000
I'm using gedit 2.28.4, on Centos 6.4. When I click on Edit > Preferences, I can't modify anything, as everything is grayed out. Also, I could not re-size the preferences window to locate the 'Edit' button as suggested on other forums.
It turns out the gedit configuration files were missing completely. Reinstalling fixed the problem.
gedit preferences grayed out
1,351,250,142,000
I use gedit to modify/create files on my system, and sometimes I see that after editing, a duplicate file is created with the name samename~, with just that ~ extra. Why does this happen? Is there any significance to this file, or is it okay if I delete it (I usually do)?
Some editors back up the original file with a suffix, usually ~ but sometimes .bak, when saving the new file. Vim, for example, does this if the backup option is enabled. With Vim, you may also modify the suffix used for the backup files:

set backup
set backupext=.bak

See also :help backup in Vim, and refer to the documentation for your particular text editor.
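If the backup files pile up, they are safe to delete once you no longer need the previous versions; a small sketch for clearing them out of a directory tree:

```shell
# List editor backup files (names ending in ~) under the current directory
find . -name '*~' -type f

# Remove them once you are sure nothing in the list is still needed
find . -name '*~' -type f -delete
```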
Why is a duplicate file sometimes created after editing a file? [duplicate]
1,351,250,142,000
I'm trying to save some work by editing multiple files with the same name located in different directories:

$ mkdir -p directory_{0..10}/results

How may I create files with the same name in all directories? For example:

$ vim directory_*/results/output.txt    (or kwrite)
You almost have it with your second command. You did it correctly with the first; just use the same shell brace expansion:

vim directory_{0..10}/results/output.txt

You should see something in the shell about opening 11 files. Then you can use vim to iterate through each one.
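For instance, the same brace expansion that created the directories also creates the files (this is a bash/zsh feature, so it won't work in plain sh):

```shell
mkdir -p directory_{0..10}/results          # 11 directories
touch directory_{0..10}/results/output.txt  # same-named file in each
ls directory_*/results/output.txt | wc -l   # prints 11
vim directory_{0..10}/results/output.txt    # open all 11 (:next to cycle)
```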
Editing multiple files simultaneously with vim/gedit
1,351,250,142,000
I do most of my coding in gedit, which highlights integers and other syntax. When I use an integer range in Ruby, written as 0..3 for example, the integers are not properly highlighted and are instead the normal text colour. I checked out /usr/share/gtksourceview-3.0/language-specs/ruby.lang, but, alas, the solution appears to be much more complicated than I had anticipated. How can I fix this problem?
Well, right now I can only suggest a 'brute force' solution. This task is all about knowing regular expressions. Here it is. First of all I decided to define a new regular expression that matches the whole range, instead of redefining decimal, but reuses the same style. There are 3 steps. By the way, this is a guide about language specs for gedit.

Styles

Let's define styles first. In the <styles> section, insert before decimal:

<style id="range" _name="Range" map-to="def:decimal"/>

Matching

Then in the <definitions> section, insert before decimal:

<context id="range" style-ref="range">
  <match>(?&lt;![\w\.])(([1-9](_?[0-9])*|0)\s*\.\.\.?\s*([1-9](_?[0-9])*|0))(?![\w\.])</match>
</context>

(The context references the range style defined above.) This regular expression matches decimal-only ranges (there is always room for improvement) such as:

3..7
3...7
3 .. 7
3 ... 7

All the regular expressions used are PCRE (Perl compatible). The best way, to my mind, would be skimming through the Perl Regular Expressions docs. So you may invent your own and match whatever you want.

Invoke matching

To force syntax highlighting to use this expression, we must reference it before decimal in the <include> section:

<context id="ruby" class="no-spell-check">
  <include>
    ...
    <context ref="range"/>
    <context ref="decimal"/>
    ...
  </include>
</context>

And restart Gedit!
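You can sanity-check the <match> expression outside Gedit before restarting it, for example with GNU grep's PCRE mode. Note that the .lang file needs &lt; escaped inside the XML, but on the command line you use the raw regex:

```shell
# The three range lines match; "foo" and the float "1.5" do not
printf '3..7\n3...7\n3 .. 7\nfoo\n1.5\n' \
  | grep -cP '(?<![\w\.])(([1-9](_?[0-9])*|0)\s*\.\.\.?\s*([1-9](_?[0-9])*|0))(?![\w\.])'
# prints: 3
```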
How can I syntax highlight Ruby range bounds in gtksourceview3.0?
1,351,250,142,000
Whenever I save a gedit file in a directory, two copies of it get saved with identical content. For example, if the gedit file is named file_name and saved in the Home directory, then when you ls the Home directory you get both file_name and file_name~ in the list. When the file command is run against them I get ASCII text, with very long lines for both of them, and when I view their contents using the less command they seem to contain largely identical content. The ~ file is a copy of the file as it was when it was saved the penultimate time. Can someone please help me understand why such a file (the one with a trailing ~ in its name) is created?
These are backup files that gedit creates by default. You can disable this feature by going to Preferences → Editor and unchecking the line Create a backup copy of files before saving
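If you prefer the command line (or want to script it), the same preference is exposed through gsettings. The schema and key names below are from gedit 3's settings schema and may differ on other versions:

```shell
# Check the current value, then turn backup copies off
gsettings get org.gnome.gedit.preferences.editor create-backup-copy
gsettings set org.gnome.gedit.preferences.editor create-backup-copy false
```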
Regarding identical copies of gedit files
1,351,250,142,000
I'm often using gedit to print text documents. One thing I don't like is that it prints the full path to the document inside the page header, which sometimes overlaps what's in the top right corner; I'd like it to print the file name only. Can that be done?
The header text comes from the function gedit_document_get_uri_for_display, whose code is

return g_file_get_parse_name (location);

To print only the file name, that line would need to be changed and gedit rebuilt, or a PR created upstream.
Can I change how gedit prints its page header?
1,351,250,142,000
I use the default gEdit on Cinnamon on Linux Mint, but I find that when opening files from within it, I cannot navigate the open-file dialog solely with the keyboard. It hasn't been intuitive how to use Tab and the direction keys.

My config, for curious readers in the future:

cat /etc/linuxmint/info
RELEASE=17.2
CODENAME=rafaela
EDITION="Cinnamon 64-bit"
DESCRIPTION="Linux Mint 17.2 Rafaela"
DESKTOP=Gnome
TOOLKIT=GTK
NEW_FEATURES_URL=http://www.linuxmint.com/rel_rafaela_cinnamon_whatsnew.php
RELEASE_NOTES_URL=http://www.linuxmint.com/rel_rafaela_cinnamon.php
USER_GUIDE_URL=help:linuxmint
GRUB_TITLE=Linux Mint 17.2 Cinnamon 64-bit

Virtual Machine Host:

uname -a
Linux ZOXFLI-FQXGR32 3.13.0-37-generic #64-Ubuntu SMP Mon Sep 22 21:28:38 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

I host the above from within an Oracle VirtualBox VM running on Windows 7 (SP1). I do not have any known file conflicts from within the VM that are inhibiting this.
I have found I can press Alt+Up/Down to navigate up and down the current file path. I haven't quite got the next part exactly right, but if I want to navigate the files within the selected path, I can usually press Up or Down. Sometimes this behaves unexpectedly, and I try different direction keys or combinations with the Alt key until I get what I'm after. I will update once I have it down pat.
Using open file dialog within gEdit using keyboard only
1,435,262,063,000
I have a .txt file with quite long text content. I noticed that whenever I read the content up to a certain line and close the file, if I re-open the file later (right away, hours later, or days later), it always opens at the same place where I ended my previous reading and closed the file. Do you have any explanation of this behavior (any hints, implementation details...)? I open the file with Gedit.
Many text editors store your last cursor position in files you have edited. The information is often only stored for files you have saved with the editor, and different editors may behave differently in this regard. But it is not stored in the file you edited. It is stored somewhere else instead, because inside a plain text file, there is no good place to put it. Gedit uses gvfs to store and retrieve this information for each file. As explained in CrazyApe84's own answer to Where does gedit store the last cursor position? (on Ask Ubuntu), gedit remembers where you were in each file by writing your cursor positions to ~/.local/share/gvfs-metadata/home. It reads them back when you reopen the file. Rather than containing explicit code to open, write, and read that file, gedit uses gvfs, and that file is where these particular gvfs data happen to be stored. The information is stored in the metadata::gedit-position gvfs attribute of each file you edit. The reason that is not a contradiction is that a file's gvfs attributes aren't part of the file or even its regular filesystem metadata. They are stored in files in the gvfs-metadata directory, like home. If you want to view--or even edit--this information yourself, install the gvfs-bin package:

sudo apt update
sudo apt install gvfs-bin

(That's for Debian and its derivatives, such as Ubuntu, which you are using. Also, depending on what desktop environment you are using, commands like gvfs-info may be installed already. But gedit does not use these command-line utilities itself, and installing gedit on a non-GNOME desktop environment does not install gvfs-bin.) Then you can use the gvfs-info command to view the attribute. Suppose the file you edited was called your-file and it is in the current directory.
Then run:

gvfs-info -a metadata::gedit-position your-file

Or, as CrazyApe84's answer explains, you could grep for metadata::gedit-position (or just position, if you're willing to have a few extra lines) in the output of gvfs-info your-file:

gvfs-info your-file | grep metadata::gedit-position

There is a popular fork of gedit called pluma, which comes with the MATE desktop environment (and which, like gedit, can be installed separately if you use a different desktop environment). If you're using pluma instead of gedit, the attribute is metadata::pluma-position rather than metadata::gedit-position. In case you've ever wondered why gedit and pluma don't share information about where you were in a file, that's why.
Why text file always opens at the same point I closed it?
1,435,262,063,000
This is the sequence of commands; gedit starts, but it cannot be killed from its process ID:

$ gedit&
$ t=$!
$ echo $t
4824
$ kill $t
bash: kill: (4824) - No such process

It works just fine for a sleep process:

$ sleep 999&
[1] 4881
$ t=$!
$ echo $t
4881
$ kill $t
$ ps -p $t
[1] Terminated sleep 999

What's the difference? How can the gedit process be terminated?
The gedit process is already terminated. Remember how Windows applications mainly worked back in the Win16 days before Win32 came along and did away with it: where there were hInstance and hPrevInstance, attempting to run a second instance of many applications simply handed things over to the first instance, and this made things difficult for command scripting tools (like Take Command) because one would invoke an application a second time, it would visibly be there on the screen as an added window, but as far as the command interpreter was concerned the child process that it had just run immediately exited? Well GNOME has brought the Win16 behaviour back for Linux. With GIO applications like gedit, the application behaves as follows: If there's no registered "server" named org.gnome.gedit already on the per-user/per-login Desktop Bus, gedit decides that it's the first instance. It becomes the org.gnome.gedit server and continues to run. If there is a registered "server" named org.gnome.gedit already on the per-user/per-login Desktop Bus, gedit decides that it's a second or subsequent instance. It constructs Desktop Bus messages to the first instance, passing along its command line options and arguments, and then simply exits. So what you see depends from whether you have the gedit server already running. If you haven't, you'll be in sebvieira's shoes and wondering why you aren't seeing the behaviour described. If you have, you'll be in your shoes and seeing the gedit process terminating almost immediately, especially since you haven't given it any command-line options or arguments to send over to the "first instance". Hence the reason that there's no longer a process with that ID. Much fun ensues when, as alluded to above, the per-login Desktop Bus is switched to the "new" style of a per-user Desktop Bus, and suddenly there's not a 1:1 relationship between a Desktop Bus and an X display any more. 
Single user-bus-wide instance applications suddenly have to be capable of talking to multiple X displays concurrently. Further hilarity ensues when people attempt to run gedit as the superuser via sudo, as it either cannot connect to a per-user Desktop Bus or connects to the wrong (the superuser's) Desktop Bus. There's a proposal to give gedit a command-line option that makes the process that is invoked just be the actual editor application, so that gedit would be useful as the editor pointed-to by the EDITOR environment variable (which it isn't for many common usages of EDITOR, from crontab to git, when it just exits immediately). This proposal has not become reality yet. In the meantime, people have various ways of having a simple second instance of a "lightweight text editor", such as invoking a whole new Desktop Bus instance, private to the invocation of gedit, with dbus-run-session. Of course, this tends to spin up other GNOME Desktop Bus servers on this private bus as they are in turn invoked by gedit, making it not "lightweight" at all. The icing on the cake is when you've followed this recommendation or one like it and interposed a shell function named gedit that immediately removes the gedit process from the shell's list of jobs. Not only does the process terminate rapidly so that you don't see it later with kill or ps, but the shell doesn't even monitor it as a shell-controlled job. Further reading Apps/Gedit/NewSingleInstance. GNOME wiki. 2013. "Description". GApplication. GNOME Developers' wiki. https://stackoverflow.com/questions/7553452/ Stefan Löffler (2011-05-04). doesn't reuse running instance when started from nautilus. Bug #777292. Launchpad.
can't kill gedit process from its PID
1,435,262,063,000
Sometimes, after I’ve been editing a text file in Gedit, the letters clobber together. Why is this? How can I stop it from happening?
It seems to be an effect of this bug: https://bugzilla.gnome.org/show_bug.cgi?id=127731. The bug is triggered when you have a very long line (something over 500k characters). You can stop it from happening by inserting some line breaks. If you really need the long line without line breaks, then you will have to use another editor until the bug is fixed.
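If the file is machine-generated and too long to reflow by hand, fold can insert the line breaks for you. Note this rewraps at a fixed column, so only do it where hard line breaks are acceptable:

```shell
# Break every line longer than 500 characters into 500-character chunks
fold -w 500 huge.txt > wrapped.txt
```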
What is it that clobbers my letters together in Gedit?
1,435,262,063,000
Pardon me if this is something that "everybody knows," but gedit (the gnome gui text editor) has a dependency on libbluray. I've googled and read the release notes for gedit, with no luck, and I can't for the life of me think of a legitimate reason for this. I'm using RHEL7 Workstation, but I suspect any gnome DE will have this same strangeness.
The gedit package doesn’t depend on libbluray directly; it depends on Gvfs, the virtual file system, and that can use libbluray to retrieve Blu-ray metadata. The RHEL package is built with this support in place, which is why you end up needing libbluray to install the gedit package (indirectly).
Why does gedit need libbluray?
1,435,262,063,000
I know how to make certain programs open at startup, but I'd really like to get gedit to open a number of files by default every time as well. It sometimes takes me a couple of minutes to find/remember all of them, and this would make it a lot faster. Is this possible?
Just add this to your startup applications:

gedit ~/"file 1" ~/Documents/"file 2" ~/Desktop/"file 3"

(Keep the ~ outside the quotes, otherwise the shell won't expand it to your home directory.) This passes the file paths as arguments to gedit, which causes gedit to open these files.
Can I get gedit to open on startup with a list of certain files?
1,435,262,063,000
In gedit, Ctrl+D deletes a line. Is there a shortcut for inserting/adding a line?
End+Enter will insert a line after the current one. You can use Home+Enter to insert one before the current line.
Is there a shortcut in gedit to insert a line, i.e. the exact opposite of deleting a line?
1,435,262,063,000
Say I want to remove all digits from a text file using gedit 3.10.4 with advanced_find 3.6.0. I selected the Advanced Find/Replace plugin to get regular-expression support, based on an internet search. It seemed to stand out for simplicity, accessibility and users' endorsements, and the version is claimed to be suitable for gedit 3.8 and later. The plugin has been successfully installed and activated in the Edit>Preferences>Plugin list. However, if I launch a regex substitution query for the pattern [:digit:], I get the message "[:digit:]" not found, while there are digits aplenty. The same occurs for small variations like [:digit:]* or ([:digit:]*). Strangely, if I do precisely the same on the same document with LibreOffice Writer, the commands are executed flawlessly. What is the problem with gedit here? Is there a compatibility issue between the host application and the plugin? Does the regular expression need to be typed in gedit according to specific rules? Did I miss anything obvious? Help to get the same capabilities with gedit as with LibreOffice Writer much appreciated.
I am not fully aware of LibreOffice's internal regex syntax, but the gedit plugin is written in Python, and Python's regex engine does not understand a bare POSIX class like [:digit:]. You will have to use the digit symbol \d instead.
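Assuming the plugin does use Python's re engine, you can preview the substitution from a terminal before running it in gedit:

```shell
# \d matches a single digit in Python regex; replacing with '' removes all digits
python3 -c "import re; print(re.sub(r'\d', '', 'abc123def456'))"
# prints: abcdef
```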
Substitution with Regular Expressions in gedit
1,435,262,063,000
I've been searching for a plugin that can wrap words with specified text. For example, I double-click this word to highlight it: word, and using a particular shortcut it becomes <b>word</b> or myFunction(word); or whatever else the user defines. I'm currently using Gedit v3. It would be really helpful if someone knew a place where I can get this type of plugin.
The "Snippets" plugin will do exactly that. Depending on your platform and version of Gedit, it should already be included, in which case you can simply enable it by going to Edit -> Preferences -> Plugins tab. If it is not present, you may need to upgrade Gedit, as it is a default plugin distributed with Gedit and I don't know of any way to obtain it separately. See http://projects.gnome.org/gedit/plugins.html. To manage snippets, go to Tools -> Manage Snippets. One of the snippets already created for you is "Wrap selection in open/close tag." The snippet markup itself is: <${1:p}>$GEDIT_SELECTED_TEXT</${1}> and the trigger is Shift+Alt+W. You can easily copy this snippet to a new one and replace the tag and trigger to customize to your needs: <${1:em}>$GEDIT_SELECTED_TEXT</${1}> will wrap the selected text in "em" tags. You can even trigger snippets for tab completion, such as this one included for Java: public static void main(String[] args) { ${1:System.exit(0)}; } Simply typing "main" and pressing the Tab key will create the function body.
Search plugin for wrapping words with text in Gedit v3
1,435,262,063,000
I am using the Linux (beta) environment on a Chromebook, and gedit with plugins. I want to make the gedit theme all dark. As you can see in the screenshot, I managed to change the theme, but the file browser and the terminal inside gedit won't turn dark. I have already changed the colors in Preferences → Font & Color.
LXAppearance has the fewest dependencies, and it configures GTK 2/3 themes. Other tools: https://wiki.archlinux.org/index.php/GTK#Configuration_tools. You can also configure themes manually by editing text files (this does not always work): https://wiki.archlinux.org/index.php/GTK#Basic_theme_configuration. As to which dark/black themes are available: that is specific to your distro.
How do I apply darkTheme in gedit?
1,435,262,063,000
I locked my Fedora 18 computer (set to Polish). Later I couldn't log in (the password was accepted, but I didn't see my desktop). I used the pkill -KILL -u tymon command in root mode (Ctrl+Alt+F2). After this, I don't see (my native) Polish characters in gedit (I see only garbled "bushes"). I don't know why; I didn't install any update, and a restart didn't help. What should I do?

PS: When I ran gedit --list-encodings, UTF-8 wasn't in the list:

ISO-8859-1 ISO-8859-2 ISO-8859-3 ISO-8859-4 ISO-8859-5 ISO-8859-6 ISO-8859-7 ISO-8859-8 ISO-8859-9 ISO-8859-10 ISO-8859-13 ISO-8859-14 ISO-8859-15 ISO-8859-16 UTF-7 UTF-16 UTF-16BE UTF-16LE UTF-32 UCS-2 UCS-4 ARMSCII-8 BIG5 BIG5-HKSCS CP866 EUC-JP EUC-JP-MS CP932 EUC-KR EUC-TW GB18030 GB2312 GBK GEORGIAN-ACADEMY IBM850 IBM852 IBM855 IBM857 IBM862 IBM864 ISO-2022-JP ISO-2022-KR ISO-IR-111 JOHAB KOI8R KOI8-R KOI8U SHIFT_JIS TCVN TIS-620 UHC VISCII WINDOWS-1250 WINDOWS-1251 WINDOWS-1252 WINDOWS-1253 WINDOWS-1254 WINDOWS-1255 WINDOWS-1256 WINDOWS-1257 WINDOWS-1258

PPS: journalctl -b gives me:

-- Logs begin at nie 2013-05-12 15:57:17 EDT, end at nie 2013-05-12 14:01:04 EDT. --
maj 12 15:57:17 localhost.localdomain systemd-journal[454]: Allowing runtime journal files to grow to 196.8M.
maj 12 15:57:17 localhost.localdomain systemd-journald[96]: Received SIGTERM
maj 12 15:57:17 localhost.localdomain kernel: SELinux: 2048 avtab hash slots, 96259 rules.
maj 12 15:57:17 localhost.localdomain kernel: SELinux: 2048 avtab hash slots, 96259 rules.
maj 12 15:57:17 localhost.localdomain kernel: SELinux: 9 users, 15 roles, 4389 types, 246 bools, 1 sens, 1024 cats
maj 12 15:57:17 localhost.localdomain kernel: SELinux: 83 classes, 96259 rules
maj 12 15:57:17 localhost.localdomain kernel: SELinux: Permission attach_queue in class tun_socket not defined in policy.
maj 12 15:57:17 localhost.localdomain kernel: SELinux: the above unknown classes and permissions will be allowed
maj 12 15:57:17 localhost.localdomain kernel: SELinux: Completing initialization.
maj 12 15:57:17 localhost.localdomain kernel: SELinux: Setting up existing superblocks. maj 12 15:57:17 localhost.localdomain kernel: SELinux: initialized (dev sysfs, type sysfs), uses genfs_contexts maj 12 15:57:17 localhost.localdomain kernel: SELinux: initialized (dev rootfs, type rootfs), uses genfs_contexts maj 12 15:57:17 localhost.localdomain kernel: SELinux: initialized (dev bdev, type bdev), uses genfs_contexts maj 12 15:57:17 localhost.localdomain kernel: SELinux: initialized (dev proc, type proc), uses genfs_contexts maj 12 15:57:17 localhost.localdomain kernel: SELinux: initialized (dev tmpfs, type tmpfs), uses transition SIDs maj 12 15:57:17 localhost.localdomain kernel: SELinux: initialized (dev devtmpfs, type devtmpfs), uses transition SIDs maj 12 15:57:17 localhost.localdomain kernel: SELinux: initialized (dev sockfs, type sockfs), uses task SIDs maj 12 15:57:17 localhost.localdomain kernel: SELinux: initialized (dev debugfs, type debugfs), uses genfs_contexts maj 12 15:57:17 localhost.localdomain kernel: SELinux: initialized (dev pipefs, type pipefs), uses task SIDs maj 12 15:57:17 localhost.localdomain kernel: SELinux: initialized (dev anon_inodefs, type anon_inodefs), uses genfs_contexts maj 12 15:57:17 localhost.localdomain kernel: SELinux: initialized (dev devpts, type devpts), uses transition SIDs maj 12 15:57:17 localhost.localdomain kernel: SELinux: initialized (dev hugetlbfs, type hugetlbfs), uses transition SIDs maj 12 15:57:17 localhost.localdomain kernel: SELinux: initialized (dev mqueue, type mqueue), uses transition SIDs maj 12 15:57:17 localhost.localdomain kernel: SELinux: initialized (dev selinuxfs, type selinuxfs), uses genfs_contexts maj 12 15:57:17 localhost.localdomain kernel: SELinux: initialized (dev sysfs, type sysfs), uses genfs_contexts maj 12 15:57:17 localhost.localdomain kernel: SELinux: initialized (dev securityfs, type securityfs), uses genfs_contexts maj 12 15:57:17 localhost.localdomain kernel: SELinux: initialized 
(dev tmpfs, type tmpfs), uses transition SIDs maj 12 15:57:17 localhost.localdomain kernel: SELinux: initialized (dev tmpfs, type tmpfs), uses transition SIDs maj 12 15:57:17 localhost.localdomain kernel: SELinux: initialized (dev tmpfs, type tmpfs), uses transition SIDs maj 12 15:57:17 localhost.localdomain kernel: SELinux: initialized (dev cgroup, type cgroup), uses genfs_contexts maj 12 15:57:17 localhost.localdomain kernel: SELinux: initialized (dev cgroup, type cgroup), uses genfs_contexts maj 12 15:57:17 localhost.localdomain kernel: SELinux: initialized (dev cgroup, type cgroup), uses genfs_contexts maj 12 15:57:17 localhost.localdomain kernel: SELinux: initialized (dev cgroup, type cgroup), uses genfs_contexts maj 12 15:57:17 localhost.localdomain kernel: SELinux: initialized (dev cgroup, type cgroup), uses genfs_contexts maj 12 15:57:17 localhost.localdomain kernel: SELinux: initialized (dev cgroup, type cgroup), uses genfs_contexts maj 12 15:57:17 localhost.localdomain kernel: SELinux: initialized (dev cgroup, type cgroup), uses genfs_contexts maj 12 15:57:17 localhost.localdomain kernel: SELinux: initialized (dev cgroup, type cgroup), uses genfs_contexts maj 12 15:57:17 localhost.localdomain kernel: SELinux: initialized (dev cgroup, type cgroup), uses genfs_contexts maj 12 15:57:17 localhost.localdomain kernel: SELinux: initialized (dev rpc_pipefs, type rpc_pipefs), uses genfs_contexts maj 12 15:57:17 localhost.localdomain kernel: SELinux: initialized (dev dm-2, type ext4), uses xattr maj 12 15:57:17 localhost.localdomain systemd[1]: Successfully loaded SELinux policy in 289ms 940us. maj 12 15:57:17 localhost.localdomain systemd[1]: Relabelled /dev and /run in 19ms 778us. maj 12 15:57:17 localhost.localdomain kernel: SELinux: initialized (dev autofs, type autofs), uses genfs_contexts maj 12 15:57:17 localhost.localdomain systemd-journal[454]: Journal started maj 12 15:57:15 localhost.localdomain systemd[1]: systemd 197 running in system mode. 
(+PAM +LIBWRAP +AUDIT +SELINUX +IMA +SYSVINIT +LIBCRYP maj 12 15:57:15 localhost.localdomain systemd[1]: Set hostname to <localhost.localdomain>. maj 12 15:57:16 localhost.localdomain systemd[1]: Started Apply Kernel Variables. maj 12 15:57:17 localhost.localdomain systemd[1]: Started Setup Virtual Console. maj 12 15:57:17 localhost.localdomain systemd[1]: Started udev Kernel Device Manager. maj 12 15:57:17 localhost.localdomain systemd-udevd[446]: starting version 197 maj 12 15:57:17 localhost.localdomain systemd[1]: Started udev Coldplug all Devices. maj 12 15:57:17 localhost.localdomain systemd[1]: Starting udev Wait for Complete Device Initialization... maj 12 15:57:17 localhost.localdomain systemd[1]: Starting Show Plymouth Boot Screen... maj 12 15:57:17 localhost.localdomain systemd[1]: Mounted Temporary Directory. maj 12 15:57:17 localhost.localdomain systemd[1]: Mounted POSIX Message Queue File System. maj 12 15:57:17 localhost.localdomain systemd[1]: Mounted Huge Pages File System. maj 12 15:57:17 localhost.localdomain systemd[1]: Mounted Debug File System. maj 12 15:57:17 localhost.localdomain kernel: SELinux: initialized (dev tmpfs, type tmpfs), uses transition SIDs maj 12 15:57:17 localhost.localdomain kernel: SELinux: initialized (dev hugetlbfs, type hugetlbfs), uses transition SIDs maj 12 15:57:18 localhost.localdomain systemd-modules-load[451]: Inserted module 'uinput' maj 12 15:57:18 localhost.localdomain kernel: EXT4-fs (dm-2): re-mounted. Opts: (null) maj 12 15:57:18 localhost.localdomain systemd[1]: Started Load Kernel Modules. maj 12 15:57:18 localhost.localdomain systemd[1]: Started Remount Root and Kernel File Systems. maj 12 15:57:18 localhost.localdomain systemd[1]: Starting Local File Systems (Pre). maj 12 15:57:18 localhost.localdomain systemd[1]: Reached target Local File Systems (Pre). maj 12 15:57:18 localhost.localdomain systemd[1]: Starting Configure read-only root support... 
maj 12 15:57:18 localhost.localdomain systemd[1]: Started Import network configuration from initramfs. maj 12 15:57:18 localhost.localdomain systemd[1]: Starting Load Random Seed... maj 12 15:57:18 localhost.localdomain systemd[1]: Mounting Configuration File System... maj 12 15:57:18 localhost.localdomain systemd[1]: Mounted FUSE Control File System. maj 12 15:57:18 localhost.localdomain systemd[1]: Mounted Configuration File System. maj 12 15:57:18 localhost.localdomain kernel: SELinux: initialized (dev configfs, type configfs), uses genfs_contexts maj 12 15:57:18 localhost.localdomain systemd[1]: Started Load Random Seed. maj 12 15:57:18 localhost.localdomain systemd[1]: Started Configure read-only root support. maj 12 15:57:18 localhost.localdomain kernel: kvm: disabled by bios maj 12 15:57:18 localhost.localdomain systemd[1]: Started Load legacy module configuration. maj 12 15:57:19 localhost.localdomain systemd[1]: Started Show Plymouth Boot Screen. maj 12 15:57:19 localhost.localdomain systemd[1]: Starting Forward Password Requests to Plymouth Directory Watch. maj 12 15:57:19 localhost.localdomain systemd[1]: Started Forward Password Requests to Plymouth Directory Watch. maj 12 15:57:19 localhost.localdomain systemd[1]: Started Dispatch Password Requests to Console Directory Watch. 
maj 12 15:57:19 localhost.localdomain kernel: asus_wmi: ASUS WMI generic driver loaded maj 12 15:57:19 localhost.localdomain kernel: asus_wmi: Initialization: 0x0 maj 12 15:57:19 localhost.localdomain kernel: asus_wmi: BIOS WMI version: 0.9 maj 12 15:57:19 localhost.localdomain kernel: asus_wmi: SFUN value: 0x0 maj 12 15:57:19 localhost.localdomain kernel: input: Eee PC WMI hotkeys as /devices/platform/eeepc-wmi/input/input6 maj 12 15:57:19 localhost.localdomain kernel: asus_wmi: Disabling ACPI video driver maj 12 15:57:19 localhost.localdomain kernel: i801_smbus 0000:00:1f.3: SMBus using PCI Interrupt maj 12 15:57:19 localhost.localdomain kernel: mei 0000:00:16.0: setting latency timer to 64 maj 12 15:57:19 localhost.localdomain kernel: mei 0000:00:16.0: irq 48 for MSI/MSI-X maj 12 15:57:19 localhost.localdomain kernel: ACPI Warning: 0x0000000000000540-0x000000000000054f SystemIO conflicts with Region \_SB_.PCI0. maj 12 15:57:19 localhost.localdomain kernel: ACPI: If an ACPI driver is available for this device, you should use it instead of the native maj 12 15:57:19 localhost.localdomain kernel: ACPI Warning: 0x0000000000000530-0x000000000000053f SystemIO conflicts with Region \_SB_.PCI0. maj 12 15:57:19 localhost.localdomain kernel: ACPI: If an ACPI driver is available for this device, you should use it instead of the native maj 12 15:57:19 localhost.localdomain kernel: ACPI Warning: 0x0000000000000500-0x000000000000052f SystemIO conflicts with Region \_SB_.PCI0. 
maj 12 15:57:19 localhost.localdomain kernel: ACPI: If an ACPI driver is available for this device, you should use it instead of the native maj 12 15:57:19 localhost.localdomain kernel: lpc_ich: Resource conflict(s) found affecting gpio_ich maj 12 15:57:19 localhost.localdomain kernel: iTCO_vendor_support: vendor-support=0 maj 12 15:57:19 localhost.localdomain kernel: iTCO_wdt: Intel TCO WatchDog Timer Driver v1.10 maj 12 15:57:19 localhost.localdomain kernel: iTCO_wdt: Found a Cougar Point TCO device (Version=2, TCOBASE=0x0460) maj 12 15:57:19 localhost.localdomain kernel: cfg80211: Calling CRDA to update world regulatory domain maj 12 15:57:19 localhost.localdomain kernel: iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) maj 12 15:57:19 localhost.localdomain kernel: snd_hda_intel 0000:00:1b.0: irq 49 for MSI/MSI-X maj 12 15:57:19 localhost.localdomain kernel: ALSA sound/pci/hda/hda_auto_parser.c:306 autoconfig: line_outs=1 (0x1c/0x0/0x0/0x0/0x0) type:l maj 12 15:57:19 localhost.localdomain kernel: ALSA sound/pci/hda/hda_auto_parser.c:310 speaker_outs=0 (0x0/0x0/0x0/0x0/0x0) maj 12 15:57:19 localhost.localdomain kernel: ALSA sound/pci/hda/hda_auto_parser.c:314 hp_outs=1 (0x1d/0x0/0x0/0x0/0x0) maj 12 15:57:19 localhost.localdomain kernel: ALSA sound/pci/hda/hda_auto_parser.c:315 mono: mono_out=0x0 maj 12 15:57:19 localhost.localdomain kernel: ALSA sound/pci/hda/hda_auto_parser.c:318 dig-out=0x20/0x0 maj 12 15:57:19 localhost.localdomain kernel: ALSA sound/pci/hda/hda_auto_parser.c:319 inputs: maj 12 15:57:19 localhost.localdomain kernel: ALSA sound/pci/hda/hda_auto_parser.c:323 Rear Mic=0x1a maj 12 15:57:19 localhost.localdomain kernel: ALSA sound/pci/hda/hda_auto_parser.c:323 Front Mic=0x1e maj 12 15:57:19 localhost.localdomain kernel: ALSA sound/pci/hda/hda_auto_parser.c:323 Line=0x1b maj 12 15:57:19 localhost.localdomain kernel: input: HDA Intel PCH Line as /devices/pci0000:00/0000:00:1b.0/sound/card0/input7 maj 12 15:57:19 localhost.localdomain kernel: 
input: HDA Intel PCH Front Mic as /devices/pci0000:00/0000:00:1b.0/sound/card0/input8 maj 12 15:57:19 localhost.localdomain kernel: input: HDA Intel PCH Rear Mic as /devices/pci0000:00/0000:00:1b.0/sound/card0/input9 maj 12 15:57:19 localhost.localdomain kernel: input: HDA Intel PCH Front Headphone as /devices/pci0000:00/0000:00:1b.0/sound/card0/input10 maj 12 15:57:19 localhost.localdomain kernel: input: HDA Intel PCH Line Out as /devices/pci0000:00/0000:00:1b.0/sound/card0/input11 maj 12 15:57:19 localhost.localdomain kernel: ALSA sound/pci/hda/hda_intel.c:2848 0000:01:00.1: Handle VGA-switcheroo audio client maj 12 15:57:19 localhost.localdomain kernel: ALSA sound/pci/hda/hda_intel.c:3040 0000:01:00.1: Using LPIB position fix maj 12 15:57:19 localhost.localdomain kernel: snd_hda_intel 0000:01:00.1: irq 50 for MSI/MSI-X maj 12 15:57:19 localhost.localdomain kernel: ALSA sound/pci/hda/hda_intel.c:1716 0000:01:00.1: Enable sync_write for stable communication maj 12 15:57:19 localhost.localdomain kernel: input: HD-Audio Generic HDMI/DP,pcm=3 as /devices/pci0000:00/0000:00:01.0/0000:01:00.1/sound/c maj 12 15:57:19 localhost.localdomain kernel: microcode: CPU0 sig=0x206a7, pf=0x2, revision=0x14 maj 12 15:57:20 localhost.localdomain systemd[1]: Starting Sound Card. maj 12 15:57:20 localhost.localdomain systemd[1]: Reached target Sound Card. 
maj 12 15:57:21 localhost.localdomain kernel: ieee80211 phy0: Selected rate control algorithm 'minstrel_ht' maj 12 15:57:21 localhost.localdomain mtp-probe[569]: checking bus 2, device 3: "/sys/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2" maj 12 15:57:21 localhost.localdomain mtp-probe[568]: checking bus 1, device 4: "/sys/devices/pci0000:00/0000:00:1a.0/usb1/1-1/1-1.2" maj 12 15:57:21 localhost.localdomain mtp-probe[565]: checking bus 1, device 3: "/sys/devices/pci0000:00/0000:00:1a.0/usb1/1-1/1-1.1" maj 12 15:57:21 localhost.localdomain mtp-probe[566]: checking bus 1, device 5: "/sys/devices/pci0000:00/0000:00:1a.0/usb1/1-1/1-1.3" maj 12 15:57:21 localhost.localdomain mtp-probe[565]: bus: 1, device: 3 was not an MTP device maj 12 15:57:21 localhost.localdomain mtp-probe[566]: bus: 1, device: 5 was not an MTP device maj 12 15:57:21 localhost.localdomain mtp-probe[568]: bus: 1, device: 4 was not an MTP device maj 12 15:57:21 localhost.localdomain mtp-probe[569]: bus: 2, device: 3 was not an MTP device maj 12 15:57:21 localhost.localdomain kernel: sd 4:0:0:2: [sde] Unhandled sense code maj 12 15:57:21 localhost.localdomain kernel: sd 4:0:0:2: [sde] maj 12 15:57:21 localhost.localdomain kernel: Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE maj 12 15:57:21 localhost.localdomain kernel: sd 4:0:0:2: [sde] maj 12 15:57:21 localhost.localdomain kernel: Sense Key : Medium Error [current] maj 12 15:57:21 localhost.localdomain kernel: sd 4:0:0:2: [sde] maj 12 15:57:21 localhost.localdomain kernel: Add. 
Sense: Unrecovered read error maj 12 15:57:21 localhost.localdomain kernel: sd 4:0:0:2: [sde] CDB: maj 12 15:57:21 localhost.localdomain kernel: Read(10): 28 00 00 00 33 00 00 00 08 00 maj 12 15:57:21 localhost.localdomain kernel: blk_update_request: 417 callbacks suppressed maj 12 15:57:21 localhost.localdomain kernel: end_request: critical target error, dev sde, sector 13056 maj 12 15:57:21 localhost.localdomain kernel: quiet_error: 423 callbacks suppressed maj 12 15:57:21 localhost.localdomain kernel: Buffer I/O error on device sde, logical block 1632 maj 12 15:57:21 localhost.localdomain kernel: sd 4:0:0:2: [sde] Unhandled sense code maj 12 15:57:21 localhost.localdomain kernel: sd 4:0:0:2: [sde] maj 12 15:57:21 localhost.localdomain kernel: Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE maj 12 15:57:21 localhost.localdomain kernel: sd 4:0:0:2: [sde] maj 12 15:57:21 localhost.localdomain kernel: Sense Key : Medium Error [current] maj 12 15:57:21 localhost.localdomain kernel: sd 4:0:0:2: [sde] maj 12 15:57:21 localhost.localdomain kernel: Add. Sense: Unrecovered read error maj 12 15:57:21 localhost.localdomain kernel: sd 4:0:0:2: [sde] CDB: maj 12 15:57:21 localhost.localdomain kernel: Read(10): 28 00 00 00 33 00 00 00 08 00 maj 12 15:57:21 localhost.localdomain kernel: end_request: critical target error, dev sde, sector 13056 maj 12 15:57:21 localhost.localdomain kernel: Buffer I/O error on device sde, logical block 1632 maj 12 15:57:21 localhost.localdomain kernel: sd 4:0:0:2: [sde] Unhandled sense code maj 12 15:57:21 localhost.localdomain kernel: sd 4:0:0:2: [sde] maj 12 15:57:21 localhost.localdomain kernel: Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE maj 12 15:57:21 localhost.localdomain kernel: sd 4:0:0:2: [sde] maj 12 15:57:21 localhost.localdomain kernel: Sense Key : Medium Error [current] maj 12 15:57:21 localhost.localdomain kernel: sd 4:0:0:2: [sde] maj 12 15:57:21 localhost.localdomain kernel: Add. 
Sense: Unrecovered read error maj 12 15:57:21 localhost.localdomain kernel: sd 4:0:0:2: [sde] CDB: maj 12 15:57:21 localhost.localdomain kernel: Read(10): 28 00 00 00 33 d0 00 00 08 00 maj 12 15:57:21 localhost.localdomain kernel: end_request: critical target error, dev sde, sector 13264 maj 12 15:57:21 localhost.localdomain kernel: Buffer I/O error on device sde, logical block 1658 maj 12 15:57:21 localhost.localdomain kernel: sd 4:0:0:2: [sde] Unhandled sense code maj 12 15:57:21 localhost.localdomain kernel: sd 4:0:0:2: [sde] maj 12 15:57:21 localhost.localdomain kernel: Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE maj 12 15:57:21 localhost.localdomain kernel: sd 4:0:0:2: [sde] maj 12 15:57:21 localhost.localdomain kernel: Sense Key : Medium Error [current] maj 12 15:57:21 localhost.localdomain kernel: sd 4:0:0:2: [sde] maj 12 15:57:21 localhost.localdomain kernel: Add. Sense: Unrecovered read error maj 12 15:57:21 localhost.localdomain kernel: sd 4:0:0:2: [sde] CDB: maj 12 15:57:21 localhost.localdomain kernel: Read(10): 28 00 00 00 33 d0 00 00 08 00 maj 12 15:57:21 localhost.localdomain kernel: end_request: critical target error, dev sde, sector 13264 maj 12 15:57:21 localhost.localdomain kernel: Buffer I/O error on device sde, logical block 1658 maj 12 15:57:21 localhost.localdomain kernel: sd 4:0:0:2: [sde] Unhandled sense code maj 12 15:57:21 localhost.localdomain kernel: sd 4:0:0:2: [sde]
OK, I found the solution. When I opened my file, the encoding was set to "Automatycznie wykryte", which means "automatically detected". When I changed it to UTF-8, it worked correctly. I don't know why, but it's a solution.
Polish chars in gedit (Fedora 18)
1,435,262,063,000
I have been trying to install a gedit plugin called RunC to compile C programs using the text editor. Although the instructions tell me to use Ubuntu 9 or 10 for this, I'm currently running Fedora 16. I thought that I would not have many problems, but after running the shell script with the sh command to install the plugin, gedit showed no signs of the plugin being installed. I've made sure that gedit is updated to the current release. Is there something I'm missing from this installation? Additionally, does the file structure of various distributions differ greatly from each other? Will scripts that work for one distro be incompatible with another? Thanks for indulging me with my silly questions. Attached below is the RunC plugin for gedit: http://plg1.uwaterloo.ca/~gvcormac/RunC/
I didn't manage to install it because RunC wasn't compatible with other distros. I confirmed this by installing Ubuntu 10, and it worked perfectly.
Installing a gedit plugin on Fedora
1,435,262,063,000
I'm installing and configuring ssh on CentOS 8 in a VirtualBox machine. I installed both packages with the following command: sudo yum install openssh-server openssh-clients Then I started the service with: sudo systemctl enable sshd sudo systemctl start sshd sudo systemctl status sshd And it's running normally. When I try to edit the sshd_config file with the command: sudo gedit /etc/ssh/sshd_config I get the following error: No protocol specified Unable to init server: Could not connect: Connection refused (gedit:5680): Gtk-WARNING **: 21:59:00.071: cannot open display: :0 Could someone help me, please?
You're likely running X under your own account, but trying to open gedit as root with the sudo command, so root is denied access to your display. Try xhost local: to allow any user from the local host to access the display, then try the command again. If that works, you can refine it further to allow only root to access the display with xhost +SI:localuser:root
How to run sudo gedit without connection refusal?
1,435,262,063,000
I am not sure if it is supported but cannot find it. OS: Debian 8.5
It's supported by the Code Comment plugin (package gedit-plugins in Debian). Make sure it's installed, then in the gedit preferences, enable "Code Comment" in the "Plugins" tab. Open a TeX or LaTeX document, select some text, and hit Ctrl+M to comment the corresponding lines. Ctrl+Shift+M uncomments.
How to comment out LaTeX code in Gedit? [closed]
1,435,262,063,000
Note: I already know the answer, I'm only sharing my experience. Some gedit plugins don't work on Lubuntu; the most notable is the Snippets one. I get the following error when I click on Tools->Manage Snippets: Traceback (most recent call last): File "/usr/lib/i386-linux-gnu/gedit/plugins/snippets/document.py", line 95, in do_deactivate self.disconnect_signals(self.view) File "/usr/lib/python3/dist-packages/gi/_gobject/propertyhelper.py", line 214, in __get__ value = instance.get_property(self.name) TypeError: unknown type (null)
It seems gedit has a large number of undeclared required dependencies, which surface when you try to use it in environments other than Unity or GNOME. To get rid of this particular problem, you only have to install: sudo apt-get install python3-gi-cairo Source: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=720724
"Snippets" plugin doesn't work on gedit
1,435,262,063,000
I am trying to use a regex such as ^ to match the start of a line in the gedit editor on Ubuntu 22.04, but it doesn't work; the same goes for $ as the end of the line. Other regular expressions such as \n, \d, \s work well.
^ and $ work in gedit's Find and replace provided you tick the Regular expression box, but gedit skips empty matches. ^$ or ^ alone match an empty string, so they won't be replaced. x* matches 0 or more x's, but gedit won't replace the occurrences matching 0 x's, so it's equivalent to using x+. If you want to use Find and replace to insert something at the start of the line, instead of replacing ^ with something, which won't work, you can replace ^. (the first character of the line) with something\0 (something followed by what was matched), bearing in mind that it won't work for empty lines where ^. doesn't match. You can however replace (?s)^. with something\0. By enabling the s flag, . will also match a newline, so it will also work on empty lines. The only case where I find it doesn't work is on the last line if it happens to be empty, presumably because gedit doesn't include the line delimiter of the last line in the buffer its regexp matches against. And to replace empty lines with something, replace ^\n with something\n instead of ^$ with something. Again, that doesn't work for the last line.
Start of the line with regex in Ubuntu 22.04's gedit text editor
1,435,262,063,000
I was planning on building a current logging device with an Arduino, and tried starting off with the basic SD-card write example from the Arduino IDE sketchbook. The Serial console showed no errors, so I assumed everything worked perfectly. However... when I inserted the card in my computer (Ubuntu 18.04) and opened it with gedit, the file was blank? Vim had the same behaviour: it was blank. But... when I used cat on the file, it DID show the contents?! Anyone have a clue what I did wrong here? EDIT: in response to Bodo's question: output of cat: ... TEST TEST TEST TEST TEST TEST 372,345,324 342,340,330 331,332,328 327,325,324 322,320,318 317,315,313 313,310,309 ... (This is what's supposed to be in there) output of ls -l: total 16 -rw-r--r-- 1 myname myname 15161 Jan 1 2000 DATALOG.TXT output of od -c -tx1 DATALOG.TXT | head -1 : 0000000 \r \n \r \n \r \n \r \n \r \n \r \n \r \n \r \n This last one answered the question for me... It shows a bunch of newlines. But really... A LOT... This explains why gedit and vim showed empty files. I have to scroll down for ages before seeing the actual data. When using cat, I only saw the trailing lines, so it showed the contents. Probably something that went wrong when saving the data to the SD-card. Thanks Bodo! You solved my problem merely by posing a question :D
As already shown in the edit, here's the answer: executing od -c -tx1 DATALOG.TXT | head -1 gave me this: 0000000 \r \n \r \n \r \n \r \n \r \n \r \n \r \n \r \n This hinted to me that the file might have a long run of newlines at the beginning, with the contents trailing at the end. This turned out to be the case. That is why the file appeared empty in gedit and vim, but not with cat: cat scrolled straight past the blank lines and left only the last lines visible in the terminal.
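The symptom is easy to reproduce without an Arduino. A minimal sketch (using a hypothetical temporary file, not the original DATALOG.TXT) that builds a file of the same shape — a long run of CRLF pairs followed by the real data — and inspects it the same way:

```shell
#!/bin/sh
# Build a file that starts with many blank CRLF "lines" and ends with data.
f=$(mktemp)
i=0
while [ "$i" -lt 100 ]; do
    printf '\r\n' >>"$f"          # 100 empty CRLF lines before the data
    i=$((i + 1))
done
printf 'TEST\n372,345,324\n' >>"$f"

od -c "$f" | head -1              # top of the file: nothing but \r \n pairs
tail -n 1 "$f"                    # the real data is still there at the end
```

Opened in an editor, such a file shows a screenful of blank lines at the top, while cat scrolls straight past them to the data.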
cat shows file contents, gedit and vim show blank file
1,435,262,063,000
I am reading a tutorial that wants me to place a script file called script.sh into a folder called /etc/profile.d/. However, when I try to save the script.sh file in that directory, the gedit tool gives me an error stating that I do not have privileges to save in that folder. So I saved script.sh on the desktop temporarily. I cannot even view the contents of the /etc/ folder through the GUI. (Unless it is empty and I am seeing truly empty contents.) I can run the terminal as root by typing su - root, but what do I type to either move the script.sh file from the desktop to /etc/profile.d/script.sh or to open gedit in a way that lets me save it to /etc/profile.d/script.sh?
If you have the sudo package, try gksudo nautilus; otherwise use sudo mv -v /home/username/Desktop/script.sh /etc/profile.d/script.sh For more, try man mv sudo temporarily elevates the command following it, to perform tasks like the one you described.
moving a file to a folder with root privileges in CentOS 7
1,359,459,458,000
Currently I'm writing a Bash script which has the following requirements: it should run on a wide variety of Unix/Linux platforms it should support both short and (GNU) long options I know that getopts would be the preferred way in terms of portability, but AFAIK it doesn't support long options. getopt supports long options, but the BashGuide recommends strongly against it: Never use getopt(1). getopt cannot handle empty arguments strings, or arguments with embedded whitespace. Please forget that it ever existed. So, there still is the option of manual parsing. This is error-prone, produces quite a lot of boilerplate code, and I need to handle errors myself (I guess getopt(s) do error handling by themselves). So, what would be the preferred choice in this case?
If it has to be portable to a range of Unices, you'd have to stick to POSIX sh. And AFAIU there you just have no choice but to roll argument handling by hand.
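A sketch of what hand-rolled parsing can look like in POSIX sh, assuming it only needs to cover the common --opt, --opt=value, --opt value and short forms; the option names and the parse_args wrapper are illustrative, not a definitive implementation:

```shell
#!/bin/sh
# Hand-rolled option loop covering -v/--verbose, -f/--file NAME and
# --file=NAME, plus the conventional "--" end-of-options marker.
parse_args() {
    verbose=0 file= remaining=
    while [ "$#" -gt 0 ]; do
        case $1 in
            -v|--verbose)
                verbose=1 ;;
            --file=*)
                file=${1#--file=} ;;              # value glued on with '='
            -f|--file)
                [ "$#" -ge 2 ] || { echo "missing argument for $1" >&2; return 1; }
                file=$2
                shift ;;                          # consume the value
            --)
                shift
                break ;;                          # explicit end of options
            -*)
                echo "unknown option: $1" >&2
                return 1 ;;
            *)
                break ;;                          # first operand ends parsing
        esac
        shift
    done
    remaining=$*                                  # whatever is left over
}
```

The error handling is indeed on you: the two echo branches are the part getopts would otherwise do for free.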
getopt, getopts or manual parsing - what to use when I want to support both short and long options?
1,359,459,458,000
Are there some built-in tools that will recognize -x and --xxxx as switches (flags or "boolean options", rather than "ordinary arguments"), or do you have to go through all the arguments, test for dashes, and then parse the rest thereafter?
Use getopts. It is fairly portable as it is in the POSIX spec. Unfortunately it doesn't support long options. See also: the Small getopts tutorial (in the Bash Hackers wiki) another question on StackOverflow. If you only need short options, typical usage pattern for getopts (using non-silent error reporting) is: # process arguments "$1", "$2", ... (i.e. "$@") while getopts "ab:" opt; do case $opt in a) aflag=true ;; # Handle -a b) barg=$OPTARG ;; # Handle -b argument \?) ;; # Handle error: unknown option or missing required argument. esac done
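To make the pattern runnable end to end, here is the same loop wrapped in a function so it can be driven with sample argument lists (the parse name and rest variable are illustrative); note that OPTIND has to be reset before re-parsing:

```shell
#!/bin/sh
# The getopts short-option pattern, wrapped so it can be called repeatedly.
parse() {
    OPTIND=1                      # getopts keeps its position in OPTIND
    aflag=false barg= rest=
    while getopts "ab:" opt; do
        case $opt in
            a) aflag=true ;;      # -a is a boolean switch
            b) barg=$OPTARG ;;    # -b takes a required argument
            *) return 1 ;;        # unknown option or missing argument
        esac
    done
    shift $((OPTIND - 1))         # drop the parsed options
    rest=$*                       # remaining ordinary arguments
}
```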
How do I handle switches in a shell script?
1,359,459,458,000
I have this code - #getoptDemo.sh usage() { echo "usage: <command> options:<w|l|h>" } while getopts wlh: option do case $option in (w) name='1';; (l) name='2';; (h) name='3';; (*) usage exit;; esac done print 'hi'$name When I run bash getoptDemos.sh (without any option) it prints hi instead of calling the function usage. It calls usage when options other than w, h and l are given. So why doesn't it work when no options are specified? I have tried using ?, \?, : in place of *, but I can't achieve what I wanted to. I mean, all the docs on getopts say to use ?. What am I doing wrong?
When you run this script without any options, getopts will return false, so it won't enter the loop at all. It will just drop down to the print - is this ksh/zsh? If you must have an option, your best bet is to test $name after the loop. if [ -z "$name" ] then usage exit fi But make sure $name was empty before calling getopts (as there could have been a $name in the environment the shell received on startup) with unset name (before the getopts loop)
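A runnable sketch of that fix, with unset name before the loop and the -z test after it (the run wrapper, and echo in place of ksh's print, are illustrative):

```shell
#!/bin/sh
# Detect "no options given" by clearing $name first and testing it after.
run() {
    OPTIND=1
    unset name                    # don't inherit a stray $name from the environment
    while getopts wlh opt; do
        case $opt in
            w) name=1 ;;
            l) name=2 ;;
            h) name=3 ;;
            *) echo "usage: <command> options:<w|l|h>" >&2; return 2 ;;
        esac
    done
    if [ -z "$name" ]; then       # getopts never entered the loop
        echo "usage: <command> options:<w|l|h>" >&2
        return 1
    fi
    echo "hi$name"
}
```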
How can I detect that no options were passed with getopts?
1,359,459,458,000
In the option string used with getopts, from http://wiki.bash-hackers.org/howto/getopts_tutorial If the very first character of the option-string is a : (colon), which would normally be nonsense because there's no option letter preceding it, getopts switches to "silent error reporting mode". In productive scripts, this is usually what you want because it allows you to handle errors yourself without being disturbed by annoying messages. I was wondering what the following mean: "silent error reporting mode" "it allows you to handle errors yourself without being disturbed by annoying messages"? Could you maybe give some examples?
If the very first character of optstring is a colon, getopts will not produce any diagnostic messages for missing option arguments or invalid options. This could be useful if you really need to have more control over the diagnostic messages produced by your script or if you simply don't want anything to appear on the standard error stream if the user provides wonky command line options. In silent reporting mode (with the initial :), if you want to alert the user of an invalid option, you will have to look for ? in the variable passed to getopts. Likewise, for missing option arguments, it's a :. These are the two errors usually handled by getopts itself, but to do your own error reporting to the user, you will need to catch these separately to be able to give the correct diagnostic message. In non-silent reporting mode, getopts does its own error reporting on standard error and you just have to catch a * for "any error". Compare these two examples: #!/bin/bash while getopts 'a:b:' opt; do case "$opt" in a) printf 'Got a: "%s"\n' "$OPTARG" ;; b) printf 'Got b: "%s"\n' "$OPTARG" ;; *) echo 'some kind of error' >&2 exit 1 esac done The * case catches any kind of command line parsing error. $ bash script.sh -a script.sh: option requires an argument -- a some kind of error $ bash script.sh -c script.sh: illegal option -- c some kind of error #!/bin/bash while getopts ':a:b:' opt; do case "$opt" in a) printf 'Got a: "%s"\n' "$OPTARG" ;; b) printf 'Got b: "%s"\n' "$OPTARG" ;; :) echo 'missing argument!' >&2 exit 1 ;; \?) echo 'invalid option!' >&2 exit 1 esac done The : case above catches missing argument errors, while the ? case catches invalid option errors (note that ? needs to be escaped or quoted to match a literal ? as it otherwise matches any single character). $ bash script.sh -a missing argument! $ bash script.sh -b missing argument! $ bash script.sh -c invalid option!
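One detail the examples rely on implicitly: in silent mode, getopts also stores the offending option letter in OPTARG for both error cases, which is what lets a script name the bad option or the option that is missing its argument. A small sketch (the classify wrapper is illustrative):

```shell
#!/bin/sh
# In silent mode, '?' means invalid option and ':' means missing argument;
# either way OPTARG holds the option letter that caused the error.
classify() {
    OPTIND=1 result=
    while getopts ':a:b:' opt; do
        case $opt in
            a|b) result="got-$opt:$OPTARG" ;;
            :)   result="missing-arg:$OPTARG" ;;  # OPTARG = option letter
            \?)  result="bad-option:$OPTARG" ;;   # OPTARG = option letter
        esac
    done
}
```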
What is the purpose of the very first character of the option-string of getopts being a : (colon)?
1,359,459,458,000
I have a bash script as below in a file nepleaks_upd.sh, that I want to run as ./nepleaks_upd.sh bootstrap --branch off. Couldn't make it to take --branch , but what it works with is ./nepleaks_upd.sh bootstrap -b off. usage() { echo "Usage: $0 [prepare | up | down] [-b <on/off>]" 1>&2; exit 1; } case "$1" in bootstrap) while getopts ":b:" o; do case "${o}" in b) b=${OPTARG} if [ ${b} == "off" ]; then echo "git clone https://github.com/iPrayag/dotfiles.git" ## logic fi ;; *) echo ${o} usage ;; esac done shift $((OPTIND-1)) echo "option1 = ${o}" echo "option2 = ${b}" if [ -z "${b}" ]; then usage fi ;; up) echo "up" ##logic ;; down) echo "down" ##logic ;; *) echo "Usage: $0 {up | down} dev" exit 1 ;; esac Without first case .. in .... esac, it works fine. With case ... in ... esac, it gives blank option for -b, $ ./nepleaks_upd.sh bootstrap -b off option1 = ? option2 = Usage: ./nepleaks_upd.sh [bootstrap | up | down] [-b <on/off>]
getopts starts parsing at the first argument and stops at the first non-option arguments. That's the standard convention — some GNU utilities accept options after arguments, but the normal thing is that in somecommand foo -bar qux, -bar is not parsed as an option. If you want to start parsing options after bootstrap, you need to indicate that. getopts uses the OPTIND variable to remember what position it's at. OPTIND starts out with the value 1. To skip the first argument, set it to 2. case "$1" in bootstrap) OPTIND=2 while getopts ":b:" o; do … Alternatively, you could shift the arguments that you've already processed. subcommand=$1; shift case "$subcommand" in bootstrap) while getopts ":b:" o; do …
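A minimal runnable sketch of the shift-based variant (the dispatch wrapper is illustrative):

```shell
#!/bin/sh
# Consume the subcommand first, then let getopts parse what remains.
dispatch() {
    OPTIND=1 b=
    subcommand=$1; shift          # "$@" now starts at the options
    case $subcommand in
        bootstrap)
            while getopts ':b:' o; do
                case $o in
                    b) b=$OPTARG ;;
                    *) echo "usage: $0 bootstrap [-b <on/off>]" >&2; return 1 ;;
                esac
            done ;;
    esac
}
```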
Using getopts to parse options after a non-option argument
1,359,459,458,000
In below code when I give option r then getopts requires one arguments: while getopts ":hr::l:" opt; do case $opt in r ) echo "Run Numbers - argument = $OPTARG " ;; l ) echo "Latency range - argument = $OPTARG" ;; h ) helptext graceful_exit ;; * ) usage clean_up exit 1 esac done But I need to pass two arguments after -r option, instead of one. Is there an easy way to do this?
You cannot pass two arguments with single option using getopts. I recommend the following alternatives: Put quotes around multiple arguments In this case getopts will treat them as one argument, but you will be able to split it later on. You can even put all arguments in the array at once: #!/bin/bash while getopts ":hr:l:" opt; do case $opt in r ) echo "Run Numbers - argument = $OPTARG " set -f # disable glob IFS=' ' # split on space characters array=($OPTARG) ;; # use the split+glob operator l ) echo "Latency range - argument = $OPTARG" ;; h ) helptext graceful_exit ;; * ) usage clean_up exit 1 esac done echo "Number of arguments: ${#array[@]}" echo -n "Arguments are:" for i in "${array[@]}"; do echo -n " ${i}," done printf "\b \n" The example of run: ./script -r "123 456 789" And output: Run Numbers - argument = 123 456 789 Number of arguments: 3 Arguments are: 123, 456, 789 Use comma (or other preferred character) as a delimiter ./script -r 123,456,789 and you just replace IFS=" " with IFS=, in the code above. That one has the advantage of allowing empty elements. As pointed out in the comments section this solution is chosen by some common programs e.g. lsblk -o NAME,FSTYPE,SIZE. Allow multiple -r options Multiple -r, but each taking only one argument: ./script -r 123 -r 456 -r 789 Then arguments would be added to array one by one array+=("$OPTARG") That one has the advantage of not having limitations on what characters the elements may contain. This one is also used by some standard linux tools e.g. awk -v var1=x -v var2=y.
Provide two arguments to one option using getopts
1,359,459,458,000
I defined the function f in Bash based on the example here (under "An option with an argument"): f () { while getopts ":a:" opt; do case $opt in a) echo "-a was triggered, Parameter: $OPTARG" >&2 ;; \?) echo "Invalid option: -$OPTARG" >&2 return 1 ;; :) echo "Option -$OPTARG requires an argument." >&2 return 1 ;; esac done } Whereas they use a script, I directly define the function in the shell. When I first launch Bash and define the function, everything works: f -a 123 prints -a was triggered, Parameter: 123. But when I run the exact same line a second time, nothing is printed. What's causing this behavior? It happens in Bash 3.2 and 4.3, but it works fine in Zsh 5.1. This is surprising because the example was supposed to be for Bash, not for Zsh.
bash getopts uses the variable OPTIND to keep track of the last option argument processed. OPTIND is not automatically reset each time you call getopts in the same shell session; it is only initialized when the shell is invoked. So the second time you call getopts with the same arguments in the same session, OPTIND hasn't changed, so getopts thinks it has already done the job and does nothing. You can reset OPTIND manually to make it work: $ OPTIND=1 $ f -a 123 -a was triggered, Parameter: 123 or just put the function into a script and call the script multiple times. zsh getopts is slightly different: OPTIND is normally reset to 1 each time upon exit from a shell function.
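If the parsing lives in a function, a common bash idiom is to declare OPTIND local so it is reset automatically on every call, with no manual OPTIND=1 needed between invocations (sketch based on the question's function):

```shell
#!/bin/bash
f() {
  local OPTIND=1 opt    # local OPTIND: starts fresh on every call
  while getopts ":a:" opt; do
    case $opt in
      a)  echo "-a was triggered, Parameter: $OPTARG" ;;
      \?) echo "Invalid option: -$OPTARG" >&2; return 1 ;;
      :)  echo "Option -$OPTARG requires an argument." >&2; return 1 ;;
    esac
  done
}

f -a 123   # works the first time...
f -a 123   # ...and every time after, because OPTIND is local
```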
Bash function with `getopts` only works the first time it's run
1,359,459,458,000
I am working with a pretty simple bash script, but I am facing a problem which I can't resolve: I have myscript.sh with three parameters, -u, -h, and -p. Both -u and -h are mandatory, needed for the script to run. What I would like to do is, if myscript.sh -u User1 and nothing more it should terminate with exit 1. I want my script to analyze and just run the script if myscript.sh -u User1 -h Localhost are on the options, otherwise it should exit. Many thanks
Maybe something like this? #!/bin/bash unset -v host unset -v port unset -v user while getopts h:p:u: opt; do case $opt in h) host=$OPTARG ;; p) port=$OPTARG ;; u) user=$OPTARG ;; *) echo 'Error in command line parsing' >&2 exit 1 esac done shift "$(( OPTIND - 1 ))" if [ -z "$host" ] || [ -z "$user" ]; then echo 'Missing -h or -u' >&2 exit 1 fi # The rest of the script continues here and may make use of # "$host" and "$user", and possibly "$port". The main bit here is the short if statement at the end that tests whether anything was ever assigned to the host or user variables in the while loop. If either of these variables is empty, then the code treats it as an error and exits after outputting a short diagnostic message. The script shown above should also be able to run under /bin/sh as it does not contain any bashisms. Instead of the if statement, you could use : ${host:?Missing -h} : ${user:?Missing -u} or, even shorter, : ${host:?Missing -h} ${user:?Missing -u} The : command is a utility that doesn't do anything, but its arguments would still be processed by the calling shell, like for all commands. With the two lines above, the shell would try to expand the host and user variables, and if the variable is unset, or if the expansion results in an empty string, then the string to the right after ? will be output and the script will terminate with a non-zero exit status: $ bash script.sh -h somehost script.sh: line 21: user: Missing -u $ echo $? 1 The ${variable:?text} expansion is standard and therefore supported by bash and all other POSIX sh-like shells.
Bash getopts, mandatory arguments
1,359,459,458,000
As I was looking this answer https://stackoverflow.com/a/11065196/4706711 in order to figure out on how to use parameters like --something or -s some questions rised regarding the answer's script : #!/bin/bash TEMP=`getopt -o ab:c:: --long a-long,b-long:,c-long:: \ -n 'example.bash' -- "$@"` if [ $? != 0 ] ; then echo "Terminating..." >&2 ; exit 1 ; fi # Note the quotes around `$TEMP': they are essential! eval set -- "$TEMP" while true ; do case "$1" in -a|--a-long) echo "Option a" ; shift ;; -b|--b-long) echo "Option b, argument \`$2'" ; shift 2 ;; -c|--c-long) # c has an optional argument. As we are in quoted mode, # an empty parameter will be generated if its optional # argument is not found. case "$2" in "") echo "Option c, no argument"; shift 2 ;; *) echo "Option c, argument \`$2'" ; shift 2 ;; esac ;; --) shift ; break ;; *) echo "Internal error!" ; exit 1 ;; esac done echo "Remaining arguments:" for arg do echo '--> '"\`$arg'" ; done First of all what does the shift program in the following line: -a|--a-long) echo "Option a" ; shift ;; Afterwards what is the purpose to use the eval command in the following line: eval set -- "$TEMP" I tried to comment the line in script mentioned above and I got the following response: $ ./getOptExample2.sh -a 10 -b 20 --a-long 40 -charem --c-long=echi Param: -a Option a Param: 10 Internal error! But if I uncomment it it runs like a charm: Option a Option b, argument `20' Option a Option c, argument `harem' Option c, argument `echi' Remaining arguments: --> `10' --> `40'
One of the many things that getopt does while parsing options is to rearrange the arguments, so that non-option arguments come last, and combined short options are split up. From man getopt: Output is generated for each element described in the previous section. Output is done in the same order as the elements are specified in the input, except for non-option parameters. Output can be done in compatible (unquoted) mode, or in such way that whitespace and other special characters within arguments and non-option parameters are preserved (see QUOTING). When the output is processed in the shell script, it will seem to be composed of distinct elements that can be processed one by one (by using the shift command in most shell languages). [...] Normally, no non-option parameters output is generated until all options and their arguments have been generated. Then '--' is generated as a single parameter, and after it the non-option parameters in the order they were found, each as a separate parameter. This effect is reflected in your code, where the option-handling loop assumes that all option arguments (including arguments to options) come first, and come separately, and are finally followed by non-option arguments. So, TEMP contains the rearranged, quoted, split-up options, and using eval set makes them script arguments. Why eval? You need a way to safely convert the output of getopt to arguments. That means safely handling special characters like spaces, ', " (quotes), *, etc. To do that, getopt escapes them in the output for interpretation by the shell. Without eval, the only option is set $TEMP, but you're limited to what's possible by field splitting and globbing instead of the full parsing ability of the shell. Say you have two arguments. There is no way to get those two as separate words using just field splitting without additionally restricting the characters usable in arguments (e.g., say you set IFS to :, then you cannot have : in the arguments). 
So, you need to be able to escape such characters and have the shell interpret that escaping, which is why eval is needed. Barring a major bug in getopt, it should be safe. As for shift, it does what it always does: remove the first argument, and shift all arguments (so that what was $2 will now be $1). This eliminates the arguments that have been processed, so that, after this loop, only non-option arguments are left and you can conveniently use $@ without worrying about options.
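Here is a small sketch of that behaviour, assuming the enhanced (util-linux) getopt is installed; the option letter -f and the function name parse are illustrative:

```shell
#!/bin/bash
# Requires the enhanced (util-linux) getopt.
parse() {
  local TEMP
  TEMP=$(getopt -o f: -n demo -- "$@") || return 1
  # A plain 'set -- $TEMP' would word-split a quoted argument like
  # "two words" and keep the literal quote characters; eval makes the
  # shell honour the quoting that getopt emitted.
  eval set -- "$TEMP"
  while true; do
    case "$1" in
      -f) printf 'file argument: <%s>\n' "$2"; shift 2 ;;
      --) shift; break ;;
    esac
  done
}

parse -f "two words"
```

Replacing the eval set -- "$TEMP" line with a plain set -- $TEMP and re-running shows the failure mode: the argument arrives split into two words with literal single quotes attached.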
Bash: Why is eval and shift used in a script that parses command line arguments?
1,359,459,458,000
I use getopts to parse arguments in bash scripts as while getopts ":hd:" opt; do case $opt in d ) echo "directory = $OPTARG"; mydir="$OPTARG"; shift $((OPTIND-1)); OPTIND=1 ;; h ) helptext graceful_exit ;; * ) usage clean_up exit 1 esac done exeparams="$*" exeparams will hold any unparsed options/arguments. Since I want to use exeparams to hold options for a command to be executed within the script (which can overlap with the scripts own options), I want to use -- to end the options passed to the script. If I pass e.g. myscript -d myscriptparam -- -d internalparam exeparams will hold -- -d internalparam I now want to remove the leading -- to pass these arguments to the internal command. Is there an elegant way to do this or can I obtain a string which holds just the remainder without -- from getopts?
How about: # ... getopts processing ... [[ $1 = "--" ]] && shift exeparams=("$@") Note, you should use an array to hold the parameters. That will properly handle any arguments containing whitespace. Dereference the array with "${exeparams[@]}"
How to deal with end of options -- in getopts
1,359,459,458,000
The Linux foundation list of standard utilities includes getopts but not getopt. Similar for the Open Group list of Posix utilities. Meanwhile, Wikipedia's list of standard Unix Commands includes getopt but not getopts. Similarly, the Windows Subsystem for Linux (based on Ubuntu based on Debian) also includes getopt but not getopts (and it is the GNU Enhanced version). balter@spectre:~$ which getopt /usr/bin/getopt balter@spectre:~$ getopt -V getopt from util-linux 2.27.1 balter@spectre:~$ which getopts balter@spectre:~$ So if I want to pick one that I can be the most confident that anyone using one of the more standard Linux distros (e.g. Debian, Red Hat, Ubuntu, Fedora, CentOS, etc.), which should I pick? Note: thanks to Michael and Muru for explaining about builtin vs executable. I had just stumbled across this as well which lists bash builtins.
which is the wrong tool. getopts is usually also a builtin: Since getopts affects the current shell execution environment, it is generally provided as a shell regular built-in. ~ for sh in dash ksh bash zsh; do "$sh" -c 'printf "%s in %s\n" "$(type getopts)" "$0"'; done getopts is a shell builtin in dash getopts is a shell builtin in ksh getopts is a shell builtin in bash getopts is a shell builtin in zsh If you're using a shell script, you can safely depend on getopts. There might be other reasons to favour one or the other, but getopts is standard. See also: Why not use "which"? What to use then?
Which is the more standard package, getopt or getopts (with an "s")?
1,359,459,458,000
I'm trying to build up a shell script that accepts various options and getopts seems like a good solution as it can handle the variable ordering of the options and arguments (I think!). I'll only be using short options and each short options will require a corresponding value eg: ./command.sh -a arga -g argg -b argb but I would like to allow the options to be entered in a non-specific order, as is the way most people are accustomed to working with shell commands. The other point is that I would like to do my own checking of option argument values, ideally within the case statements. The reason for this is that my testing of :) in my case statement has yielded inconsistent results (probably through lack of understanding on my part). For example: #!/bin/bash OPTIND=1 # Reset if getopts used previously if (($# == 0)); then echo "Usage" exit 2 fi while getopts ":h:u:p:d:" opt; do case "$opt" in h) MYSQL_HOST=$OPTARG ;; u) MYSQL_USER=$OPTARG ;; p) MYSQL_PASS=$OPTARG ;; d) BACKUP_DIR=$OPTARG ;; \?) echo "Invalid option: -$OPTARG" >&2 exit 2;; :) echo "Option -$OPTARG requires an argument" >&2 exit 2;; esac done shift $((OPTIND-1)) echo "MYSQL_HOST='$MYSQL_HOST' MYSQL_USER='$MYSQL_USER' MYSQL_PASS='$MYSQL_PASS' BACKUP_DIR='$BACKUP_DIR' Additionals: $@" Was failing for occurrences like this... ./command.sh -d -h When I want it to flag -d as requiring an argument but I get the value of -d=-h which is not what I need. So I figured it would be easier to run my own validation within the case statements to ensure that each option is set and set only once. I'm trying to do the following but my if [ ! "$MYSQL_HOST" ]; then blocks are not triggered. OPTIND=1 # Reset if getopts used previously if (($# == 0)); then echo "Usage" exit 2 fi while getopts ":h:u:p:d:" opt; do case "$opt" in h) MYSQL_HOST=$OPTARG if [ ! "$MYSQL_HOST" ]; then echo "host not set" exit 2 fi ;; u) MYSQL_USER=$OPTARG if [ ! "$MYSQL_USER" ]; then echo "username not set" exit 2 fi ;; p) MYSQL_PASS=$OPTARG if [ ! 
"$MYSQL_PASS" ]; then echo "password not set" exit 2 fi ;; d) BACKUP_DIR=$OPTARG if [ ! "$BACKUP_DIR" ]; then echo "backup dir not set" exit 2 fi ;; \?) echo "Invalid option: -$OPTARG" >&2 exit 2;; #:) # echo "Option -$opt requires an argument" >&2 # exit 2;; esac done shift $((OPTIND-1)) echo "MYSQL_HOST='$MYSQL_HOST' MYSQL_USER='$MYSQL_USER' MYSQL_PASS='$MYSQL_PASS' BACKUP_DIR='$BACKUP_DIR' Additionals: $@" Is there a reason that I'm unable to check if an OPTARG has zero-length from within getopts ... while ... case? What's the better way to run my own argument validation with getopts in a case where I don't want to be relying on the :). Perform my argument validation outside of the while ... case ... esac? Then I could end up with argument values of -d etc and not catching a missing option.
When you call your second script (I saved it as getoptit) with: getoptit -d -h This will print: MYSQL_HOST='' MYSQL_USER='' MYSQL_PASS='' BACKUP_DIR='-h' Additionals: So BACKUP_DIR is set, and you are testing with if [ ! "$BACKUP_DIR" ]; then if it is not set, so it is normal that the code inside of it is not triggered. If you want to test if each option is set once, you have to do that before you do the assignment from the $OPTARG value. And you should probably also check whether $OPTARG starts with a '-' (for the -d -h error) before assigning: ... d) if [ ! -z "$BACKUP_DIR" ]; then echo "backup dir already set" exit 2 fi if [ z"${OPTARG:0:1}" == "z-" ]; then echo "backup dir starts with option string" exit 2 fi BACKUP_DIR=$OPTARG ;; ...
bash getopts, short options only, all require values, own validation
1,359,459,458,000
When parsing command line arguments with GNU getopt command, how do I (if possible) do recognize -? as another option? Is there a way to escape it in the opstring?
The GNU getopt command uses the GNU getopt() library function to do the parsing of the arguments and options. The man page getopt(3) states: If getopt() does not recognize an option character, it prints an error message to stderr, stores the character in optopt, and returns ?. The calling program may prevent the error message by setting opterr to 0. Therefore ? is used to signal "unknown option" and cannot be used as an option value. (It would be impossible to tell the option -? from an unknown option.)
How to specify -? option with GNU getopt
1,359,459,458,000
I'm using getopts for all of my scripts that require advanced option parsing, and It's worked great with dash. I'm familiar with the standard basic getopts usage, consisting of [-x] and [-x OPTION]. Is it possible to parse options like this? dash_script.sh FILE -x -z -o OPTION ## Or the inverse? dash_script.sh -x -z -o OPTION FILE
Script arguments usually come after options. Take a look at any other commands such as cp or ls and you will see that this is the case. So, to handle: dash_script.sh -x -z -o OPTION FILE you can use getopts as shown below: while getopts xzo: option do case "$option" in x) echo "x";; z) echo "z";; o) echo "o=$OPTARG";; esac done shift $(($OPTIND-1)) FILE="$1" After processing the options, getopts sets $OPTIND to the index of the first non-option argument which in this case is FILE.
Getopts option processing, Is it possible to add a non hyphenated [FILE]?
1,359,459,458,000
Problem I have a script that accepts a few different (optional) command line arguments. For one particular argument, I'm getting the value "less" appear but I don't know why. Bash Code while getopts ":d:f:p:e:" o; do case "${o}" in d) SDOMAIN=${OPTARG} ;; f) FROM=${OPTARG} ;; p) PAGER=${OPTARG} ;; e) DESTEXT=${OPTARG} ;; *) show_usage ;; esac done source "./utils.sh" test #test includes echo "$SDOMAIN is the sdomain" echo "$FROM is the from" echo "$PAGER is the pager" echo "$DESTEXT is the extension" exit Output When I run my script, this is what I see: lab-1:/tmp/jj# bash mssp.sh -d testdomain.net Utils include worked! testdomain.net is the sdomain is the from less is the pager is the extension I can't see why I'm getting the "less" value in pager. I was hoping to see empty string. If you can see my bug, please let me know. I've been looking at this too long.
Your script never sets PAGER, but that variable is likely exported from your current environment; check with declare -p PAGER. I would recommend using a different variable name inside your script; this is why there's a general recommendation against using upper-case variables as your own.
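A quick way to both observe and avoid the problem is to inspect the inherited value with declare -p and to initialize a lower-case variable before parsing (sketch; the -p option letter is illustrative):

```shell
#!/bin/bash
# PAGER is commonly exported by the login environment (e.g. PAGER=less),
# so an unset script variable of the same name silently inherits it.
declare -p PAGER 2>/dev/null   # show any inherited value, if present

pager=""                       # lower-case and explicitly initialized
while getopts ":p:" o; do
  case "$o" in
    p) pager=$OPTARG ;;
  esac
done
printf 'pager is: "%s"\n' "$pager"
```

With no -p on the command line, pager now stays empty regardless of what the environment exports.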
bash script: incorrect argument value being set
1,359,459,458,000
By convention -- signals that there is no more options after it. It seems to me that when using getopts with case clause, -) pattern subclause doesn't match --. So what is the behavior of getopts when it meets --? Does it treat -- as an option, a nonoption argument, or neither? Thanks.
The behaviour is that it stops parsing the command line and leaves the rest of the arguments as is. The -- itself is removed (or rather $OPTIND will indicate that it was processed but $opt in the code below will never be -, and if you shift "$(( OPTIND - 1 ))" as one usually does, you'll never see it). Example: #!/bin/bash while getopts 'a:b:' opt; do case "$opt" in a) printf 'Got a: "%s"\n' "$OPTARG" ;; b) printf 'Got b: "%s"\n' "$OPTARG" ;; *) echo 'error' >&2 exit 1 esac done shift "$(( OPTIND - 1 ))" printf 'Other argument: "%s"\n' "$@" Running it: $ bash script.sh -a hello -- -b world Got a: "hello" Other argument: "-b" Other argument: "world" As you can see, the -b world bit of the command line was not processed by getopts. It stops parsing the command line at -- or at the first non-option argument: $ bash script.sh something -a hello -- -b world Other argument: "something" Other argument: "-a" Other argument: "hello" Other argument: "--" Other argument: "-b" Other argument: "world" In this case, -- was not "removed" since getopts never got that far.
What is the behavior of `getopts` when it meets `--`?
1,359,459,458,000
I am trying to run the following script using getopts to parse the options but it does not seem to work: #!/bin/bash set -x echo $@ while getopts "rf" opt do case "${opt}" in r) ropt=${OPTARG} ;; f) fopt=${OPTARG} ;; esac done shift $((OPTIND -1)) echo $fopt $ropt The output I get is: $ ./myscript.sh -f opt2 -r opt1 + echo -f opt2 -r opt1 -f opt2 -r opt1 + getopts rf opt + case "${opt}" in + fopt= + getopts rf opt + shift 1 + echo + set +x Do you have any ideas on what am I doing wrong?
You expect your options to take option-arguments, but you don't let getopts know about this. You should use while getopts "r:f:" opt; do ...; done i.e., each option that takes an argument should have : after it in the argument string to getopts. You'll probably also want a default case branch at the end to handle invalid options: *) usage >&2 exit 1 (the error message (about invalid option or missing option argument) will be displayed by getopts itself, usage is expected to be a function that you will have defined that prints a short help message to standard output). Also, don't forget to double quote all expansions, even $(( OPTIND - 1 )). Related to that last point: When is double-quoting necessary?
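Putting those fixes together, a corrected sketch of the question's loop (wrapped in a function here so it can be called repeatedly; variable names follow the question):

```shell
#!/bin/bash
usage() { echo "usage: myscript -f FOPT -r ROPT"; }

parse() {
  local OPTIND=1 opt ropt="" fopt=""
  while getopts "r:f:" opt; do    # note the colons: both options take arguments
    case "$opt" in
      r) ropt=$OPTARG ;;
      f) fopt=$OPTARG ;;
      *) usage >&2; return 1 ;;   # invalid option or missing argument
    esac
  done
  shift "$(( OPTIND - 1 ))"
  echo "$fopt $ropt"
}

parse -f opt2 -r opt1
```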
getopts does not seem to work
1,359,459,458,000
I am new to bash scripting and I am trying to write a script with getopts so that when script.sh -sp is is invoked the URL and column and row count is printed out. And when script.sh -r option is invoked only the file type is printed. Script : #!/bin/bash #define the URL URL='https://github.com/RamiKrispin/coronavirus/tree/master/data_raw' #define file name filename=$(basename "$URL") #dowload the file wget -q "${URL}" while getopts ":sp:r" o; do case $o in sp ) echo URL: $URL #print the URL address awk 'BEGIN{FS=","}END{print "COLUMN NO: "NF " ROWS NO: "NR}' $filename #print the column count and row count r ) file $filename #print the file type exit 1 esac done Can anyone help me understand how to use getopts correctly?
getopts only supports single-character option names, and supports clustering: -sp is equivalent to passing -s and -p separately if -s doesn't take an argument, and it's the option -s with the argument p if -s takes an argument. This is the normal convention on Unix-like systems: after a single dash -, each character is a separate option (until an option that takes an argument); after a double dash --, everything (up to =) is the option name. So getopts ":sp:r" o declares three options: -s and -r with no argument, and -p with an argument. You seem to only want two options and neither expects an argument, so the correct specification would be sr, and the usage would be script.sh script.sh -s or script.sh -r or script.sh -rs or script.sh -r -s etc. while getopts sr o; do case $o in s) echo "URL: $URL" #print the URL address awk 'BEGIN{FS=","}END{print "COLUMN NO: "NF " ROWS NO: "NR}' -- "$filename" #print the column count and row count ;; r) file -- "$filename";; #print the file type \?) exit 3;; #invalid option esac done shift $((OPTIND - 1)) # remove options, keep non-option arguments if [ $# -ne 0 ]; then echo >&2 "$0: unexpected argument: $1" exit 3 fi The leading : says not to report unknown options to the user. Unknown options still cause $o to be ?. If you do that, you should print a message when $o is ?. And either way you should exit with a failure status if an option is unknown. Other errors I fixed: Added missing ;; in the case syntax. Removed exit 1 in a success case. Added missing double quotes around variable substitutions. Added missing -- in front of command arguments that might start with -. Added error handling. Note that it's spelled getopts, not getopt. There is a separate utility called getopt which has the same purpose but works differently and is rather harder to use. It doesn't support multiple-character options introduced by a single dash either, but it does support long options introduced by double dash.
How to use getopts in bash
1,359,459,458,000
I ran into an interesting scenario last night and, so far, my google foo has been unable to find a work around. I have a script that supports a number of arguments. A user (damn those users) didn't specify an argument for an option and the results were ... unexpected. The code: while getopts "a:c:d:De:rs:" arg do case ${arg} in a) app=${OPTARG} ;; c) cmd=${OPTARG} ;; d) domain=${OPTARG} ;; D) Debug=1 ;; e) env=${OPTARG} ;; r) Doit=1 ;; s) subapp=${OPTARG} ;; *) echo "You are DISCREPANT!!";; # *) usage "Invalid argument ${arg}::${OPTARG}" ;; esac done if [ ${Debug} -gt 0 ] then echo "Env: ${env}" echo "App: ${app}" echo "Subapp: ${subapp}" echo "Cmd: ${cmd}" echo "Doit: ${Doit}" echo "Debug: ${Debug}" exit 1 fi Specifying all the args correctly results in: $ ./mwctl -a weblogic -c start -s admin -e trn -r -D Env: trn App: weblogic Subapp: admin Cmd: start Doit: 1 Debug: 1 Forgetting the '-s' results in: $ ./mwctl -D -a weblogic -c start admin -e trn -r Env: App: weblogic Subapp: Cmd: start Doit: 0 Debug: 1 Similar results for skipping other args with options. It seems that 'case' loses its mind when presented with an OPTARG that doesn't have an OPT... I'm at a bit of a loss as to how to catch this.
I would use getopt instead of getopts: #!/usr/bin/env bash OPT=$(getopt \ --options a:c:d:De:rs: \ --name "$0" \ -- "$@" ) if [ $? -ne 0 ]; then echo You are doing it wrong! exit 1 fi eval set -- "${OPT}" while true; do case "$1" in -a) app=${2}; shift 2;; -c) cmd=${2}; shift 2;; -d) domain=${2}; shift 2;; -D) Debug=1; shift;; -e) env=${2}; shift 2;; -r) Doit=1; shift;; -s) subapp=${2}; shift 2;; --) break;; esac done echo "Env: ${env}" echo "App: ${app}" echo "Subapp: ${subapp}" echo "Cmd: ${cmd}" echo "Doit: ${Doit}" echo "Debug: ${Debug}" $ ./mwctl -a weblogic -c start -s admin -e trn -r -D > Env: trn > App: weblogic > Subapp: admin > Cmd: start > Doit: 1 > Debug: 1 $ ./mwctl -D -a weblogic -c start admin -e trn -r > Env: trn > App: weblogic > Subapp: > Cmd: start > Doit: 1 > Debug: 1 Note that when you are googling getopts vs. getopt, you will find many people complaining about getopt. As far as I can tell, this is always about an older version of getopt, which indeed was very buggy. My experience is that getopt has more options and is also more robust than getopts. To check if you have the enhanced getopt version, you can run getopt -T echo $? If the output is 4, you have the enhanced version.
bash case && extra OPTARG
1,359,459,458,000
I'm trying to create a script that has an option that will contain arbitrary text (including spaces) surrounded by quotes and this is proving difficult to search for and implement. Basically the behavior I would like to have is docker_build_image.sh -i "image" -v 2.0 --options "--build-arg ARG=value", this will be a helper script for simplifying versioning docker images with our build server. The closest I've come to successfully grabbing the --options value gives me an error from getopt, "unrecognized option '--build-arg ARG=value'. The full script is below #!/usr/bin/env bash set -o errexit -o noclobber -o nounset -o pipefail params="$(getopt -o hi:v: -l help,image:,options,output,version: --name "$0" -- "$@")" eval set -- "$params" show_help() { cat << EOF Usage: ${0##*/} [-i IMAGE] [-v VERSION] [OPTIONS...] Builds the docker image with the Dockerfile located in the current directory. -i, --image Required. Set the name of the image. --options Set the additional options to pass to the build command. --output (Default: stdout) Set the output file. -v, --version Required. Tag the image with the version. -h, --help Display this help and exit. EOF } while [[ $# -gt 0 ]] do case $1 in -h|-\?|--help) show_help exit 0 ;; -i|--image) if [ -n "$2" ]; then IMAGE=$2 shift else echo -e "ERROR: '$1' requires an argument.\n" >&2 exit 1 fi ;; -v|--version) if [ -n "$2" ]; then VERSION=$2 shift else echo -e "ERROR: '$1' requires an argument.\n" >&2 exit 1 fi ;; --options) echo -e "OPTIONS=$2\n" OPTIONS=$2 ;; --output) if [ -n "$2" ]; then BUILD_OUTPUT=$2 shift else BUILD_OUTPUT=/dev/stderr fi ;; --) shift break ;; *) echo -e "Error: $0 invalid option '$1'\nTry '$0 --help' for more information.\n" >&2 exit 1 ;; esac shift done echo "IMAGE: $IMAGE" echo "VERSION: $VERSION" echo "" # Grab the SHA-1 from the docker build output ID=$(docker build ${OPTIONS} -t ${IMAGE} . 
| tee $BUILD_OUTPUT | tail -1 | sed 's/.*Successfully built \(.*\)$/\1/') # Tag our image docker tag ${ID} ${IMAGE}:${VERSION} docker tag ${ID} ${IMAGE}:latest
Given you seem to be using the "enhanced" getopt (from util-linux or Busybox), just handle options like you handle the others that take an argument (image and version). I.e. add the colon marking a mandatory argument to the option string that goes to getopt, and pick the value off of $2. I think the error you get comes from getopt, since it's not told that options takes an argument, and so it tries to interpret --build-arg ARG=value as a long option (it does start with a double-dash). $ cat opt.sh #!/bin/bash getopt -T if [ "$?" -ne 4 ]; then echo "wrong version of 'getopt' installed, exiting..." >&2 exit 1 fi params="$(getopt -o hv: -l help,options:,version: --name "$0" -- "$@")" eval set -- "$params" while [[ $# -gt 0 ]] ; do case $1 in -h|-\?|--help) echo "help" ;; -v|--version) if [ -n "$2" ]; then echo "version: <$2>" shift fi ;; --options) if [ -n "$2" ]; then echo "options: <$2>" shift fi ;; esac shift done $ bash opt.sh --version 123 --options blah --options "foo bar" version: <123> options: <blah> options: <foo bar>
How to parse Command Line Arguments with arbitrary string
1,359,459,458,000
I have the following commands in my script: set -- `getopt -q agvc:l:t:i: "$@"` ... while [ -n "$1" ] do -i) TIME_GAP_BOOT=$2 shift ;; ... sleep $TIME_GAP_BOOT When invoking the script with -i 2, I get the error sleep: invalid time interval `\'2\'' What am I doing wrong? How do I correctly format the argument?
The bash builtin getopts is a lot easier to use. If you're using bash, you should use it instead of getopt. GNU getopt is designed to work with arguments which have whitespace and other metacharacters in them. In order to do that, it produces a result string with bash-style quotes (or csh-style quotes, depending on the -s option.) You need to arrange to have the quotes interpreted, which requires the use of eval. (Did I mention that the bash builtin getopts is better?). The following example is from the getopt distribution; I had nothing to do with it. (It should be present on your machine somewhere; with ubuntu and debian, it shows up as /usr/share/doc/util-linux/examples/getopt-parse.bash. I'm only quoting a few lines: # Note that we use `"$@"' to let each command-line parameter expand to a # separate word. The quotes around `$@' are essential! # We need TEMP as the `eval set --' would nuke the return value of getopt. TEMP=`getopt -o ab:c:: --long a-long,b-long:,c-long:: \ -n 'example.bash' -- "$@"` if [ $? != 0 ] ; then echo "Terminating..." >&2 ; exit 1 ; fi # Note the quotes around `$TEMP': they are essential! eval set -- "$TEMP" In addition to the quotes which the example's comment points at, it's important to look at the eval, which is generally frowned upon. By contrast, the bash builtin getopts requires no eval, and is quite straightforward; it basically emulates the standard C library call: while getopts agvc:l:t:i: opt; do case "$opt" in i) TIME_GAP_BOOT=$OPTARG;; # ... esac done
How do I fix the argument received from getopt?
1,359,459,458,000
I am trying to call a function in a while loop passing some arguments. However, getopts can only get the arguments for the first call. Here's a minimal example: function add_all_external_services() { env | sed -n "s/^EXTERNAL_SERVICE_OPTIONS_\(.*\)$/\1/p" > options while read -r line do key="${line%%=*}" opt="${line#*=}" if [[ -n "$key" && -n "$opt" ]]; then echo "Adding external service \"$key\" with options: \"$opt\"" add_external_service $opt else echo "Missing one or more variables: - Key: \"$key\" - Options: \"$opt\" " fi done < options rm options } function add_external_service() { local local_service_name="" local external_service_name="" local external_service_namespace="" local service_url="" echo " Options: $@" while getopts l:s:n:-: OPT; do if [[ "$OPT" = "-" ]]; then # long option: reformulate OPT and OPTARG OPT="${OPTARG%%=*}" # extract long option name OPTARG="${OPTARG#$OPT}" # extract long option argument (may be empty) OPTARG="${OPTARG#=}" # if long option argument, remove assigning `=` fi case "$OPT" in l | local-service-name) needs_arg; local_service_name="$OPTARG" ;; s | external-service-name) needs_arg; external_service_name="$OPTARG" ;; n | external-service-namespace) needs_arg; external_service_namespace="$OPTARG" ;; external-name) needs_arg; service_url="$OPTARG" ;; ??* ) die "Illegal option --$OPT" ;; # bad long option \? 
) exit 2 ;; # bad short option (error reported via getopts) esac done echo " - local $local_service_name" echo " - name $external_service_name" echo " - namespace $external_service_namespace" echo " - url $service_url" } Then, when calling: export EXTERNAL_SERVICE_OPTIONS_A="-l local_a -s rasa -n botinstance-12424526-review-feature-ce-swdjtf" export EXTERNAL_SERVICE_OPTIONS_B="-l local_b -s review-feature-ce-swdjtf -n botinstance-12424526-review-feature-ce-swdjtf" ventury-deploy add_all_external_services I get this: Adding external service "B" with options: "-l local_b -s name_b -n namespace_b" Options: -l local_b -s name_b -n namespace_b - local local_b - name name_b - namespace namespace_b - url Adding external service "A" with options: "-l local_a -s name_a -n namespace_a" Options: -l local_a -s name_a -n namespace_a - local - name - namespace - url I got the getopts part from here and it works fine whenever I call functions outside loops. After reading this question I tried adding a & after calling the function inside the loop, and it works... all the args are read by getopts. I can't understand why running the commands in the background would make it work. If I echo $@ right before the getopts, I can see that all arguments are passed correctly, if I call the second function manually, once for every env variable, it also works correctly. So, how is running these commands in the background different? I mean, what's different for getopts? Also, why echo $@ can see the arguments while getopts can't?
This is because you are not resetting OPTIND. According to the manual:

Each time it is invoked, getopts places the next option in the shell variable name, initializing name if it does not exist, and the index of the next argument to be processed into the variable OPTIND. OPTIND is initialized to 1 each time the shell or a shell script is invoked.

So OPTIND is used to keep track of the next argument to process, and it's automatically set to 1 when your script starts but NOT reset when your function ends. To fix your script, just add OPTIND=1 at the start of your function:

function add_external_service() {
    OPTIND=1
    local local_service_name=""
    local external_service_name=""
    local external_service_namespace=""
    local service_url=""

    echo " Options: $@"
    while getopts l:s:n:-: OPT; do
        if [[ "$OPT" = "-" ]]; then    # long option: reformulate OPT and OPTARG
            OPT="${OPTARG%%=*}"        # extract long option name
            OPTARG="${OPTARG#$OPT}"    # extract long option argument (may be empty)
            OPTARG="${OPTARG#=}"       # if long option argument, remove assigning `=`
        fi
        case "$OPT" in
            l | local-service-name) needs_arg; local_service_name="$OPTARG" ;;
            s | external-service-name) needs_arg; external_service_name="$OPTARG" ;;
            n | external-service-namespace) needs_arg; external_service_namespace="$OPTARG" ;;
            external-name) needs_arg; service_url="$OPTARG" ;;
            ??* ) die "Illegal option --$OPT" ;;  # bad long option
            \? ) exit 2 ;;  # bad short option (error reported via getopts)
        esac
    done

    echo " - local $local_service_name"
    echo " - name $external_service_name"
    echo " - namespace $external_service_namespace"
    echo " - url $service_url"
}
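The effect is easy to reproduce in isolation. In this minimal sketch (the function and flag names are invented), the first call leaves OPTIND pointing past the end of the argument list, so a second call parses nothing unless the index is reset:

```shell
show_flag() {
    [ -n "$RESET" ] && OPTIND=1   # the fix: reset the parser index per call
    flag=''
    while getopts f: opt; do
        [ "$opt" = f ] && flag=$OPTARG
    done
    printf 'got=[%s]\n' "$flag"
}

show_flag -f one             # first call: OPTIND starts at 1, parsing works
show_flag -f two             # second call: OPTIND is stale, nothing is parsed
RESET=1 show_flag -f three   # with the reset, parsing works every time
```

The second call prints an empty value even though -f two was passed, which is exactly the symptom in the question.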
getopts gets no arguments when function is called inside while loop
1,359,459,458,000
According to several sources, the UNIX utility guidelines specify that operands should always come after options:

utility_name [OPTIONS] [operands...]

Some older UNIX utilities are known not to follow these conventions, e.g. find, but newer and well-established utilities also break the rules without an apparent explanation, e.g. curl <url>. I would like to know if there is a good reason for this and what the community's general consensus is on this.
The normal convention is that arguments always follow options. The first non-option (the first string on the command line that does not start with -) terminates the options and begins the arguments.

Some tools, notably the build tools (compilers, linkers), have always gone against this convention. Another example that you note is find. Sometimes this is done because the options take effect at the point on the command line where they appear, so you need a way to specify arguments both before and after the option, where the option applies to that argument only if the argument appears after the option.

This convention allows you to write a shell script that contains a line like this:

rm foobar ${more_things_to_remove}

...and guarantee that you can't accidentally add options to the rm command even if the shell variable more_things_to_remove has a nasty value like "-rf".

That convention predates the more recent convention of using the special option -- to terminate option processing. -- is a much better way of marking the end of options explicitly:

rm -- foobar ${more_things_to_remove}
# and it works even if you don't need to delete something called "foobar":
rm -- ${more_things_to_remove}

So lately (and by lately, I mean this has already been going on for many, many years) lots more command line parsers appear to have been moving toward breaking the earlier convention and allowing options and arguments to be mixed apparently everywhere (subject always to -- forcing the end of options), even if they don't have any special reason to break the convention like compilers and some other tools did.

Personally I never know which utilities still adhere to the convention and which don't, so I always place options before arguments as before, and I am mildly surprised when I see someone else's working code which does it in the opposite order!
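The protection can be demonstrated with a stand-in command that merely reports what it receives. This is a sketch, not rm itself; the function name and option letters are invented:

```shell
# A stand-in that classifies each word as an option or an operand,
# the way a conventional getopts-based parser would.
classify() {
    OPTIND=1
    while getopts rf opt; do
        printf 'option: -%s\n' "$opt"
    done
    shift "$((OPTIND - 1))"
    printf 'operand: %s\n' "$@"
}

things='-rf'                # a nasty value, as in the rm example above
classify $things foobar     # without --, -rf is parsed as options
classify -- $things foobar  # with --, -rf is just an operand
```

The first call reports -r and -f as options; the second call, thanks to --, reports -rf as a plain operand.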
Why do some utilities parse operands before options?
1,359,459,458,000
I want to parse multiple arguments using getopts in a bash script using the code below. while getopts b:B:m:M:T flag do case "${flag}" in b) rbmin=${OPTARG};; B) rbmax=${OPTARG};; m) mbmin=${OPTARG};; M) mbmax=${OPTARG};; T) sigType=${OPTARG};; esac done echo $rbmin,$rbmax,$mbmin,$mbmax, $sigType [amit@amitk]$ sh pass.sh -b 0.1 -B 0.3 -m 10 -M 11 -T sig 0.1,0.3,10,11, I don't know why I cannot pass more than four arguments. Any suggestions?
You seem to be missing the : after T in the option string given to getopts. This : would indicate that -T takes an option-argument. Without the :, -T would be an option with no argument, and your invocation would leave sig as an operand at the end of the command line rather than as an option-argument. while getopts b:B:m:M:T: flag do case $flag in b) rbmin=$OPTARG ;; B) rbmax=$OPTARG ;; m) mbmin=$OPTARG ;; M) mbmax=$OPTARG ;; T) sigType=$OPTARG ;; *) echo error >&2 exit 1 esac done shift "$(( OPTIND - 1 ))" echo "$rbmin,$rbmax,$mbmin,$mbmax, $sigType" if [ "$#" -gt 0 ]; then printf 'Other operands: %s\n' "$*" fi Testing: $ sh script -b 0.1 -B 0.3 -m 10 -M 11 -T sig 0.1,0.3,10,11, sig $ sh script -b 0.1 -B 0.3 -m 10 -M 11 -T sig hello bumblebee 0.1,0.3,10,11, sig Other operands: hello bumblebee Also note that if you run the script by using an explicit interpreter like sh, you may not be running the script with bash. I only mention this because you mentioned "bash script" in the question. In this instance, it's ok since the script does not require bash, but it would be better to use an executable file with the proper #!-line at the top.
getopts for more than 4 parser arguments
1,359,459,458,000
I'm taking a look at the optparse library for bash option parsing, specifically this bit in the generated code: params="" while [ $# -ne 0 ]; do param="$1" shift case "$param" in --my-long-flag) params="$params -m";; --another-flag) params="$params -a";; "-?"|--help) usage exit 0;; *) if [[ "$param" == --* ]]; then echo -e "Unrecognized long option: $param" usage exit 1 fi params="$params \"$param\"";; ##### THIS LINE esac done eval set -- "$params" ##### AND THIS LINE # then a typical while getopts loop Would there be any real reason to use eval here? The input to eval seems to be properly sanitized. But wouldn't it work the same to use: params=() # ... --my-long-flag) params+=("-m");; --another-flag) params+=("-a");; # ... params+=("$param");; # ... set -- "${params[@]}" That seems cleaner to me. In fact, wouldn't this allow options to be parsed directly out of the params array (without even using set) by using while getopts "ma" option "${params[@]}"; do instead of while getopts "ma" option; do?
You don't need to use a bash array here (but do so if it feels better). Here's how to do it for /bin/sh:

#!/bin/sh

for arg do
    shift
    case "$arg" in
        --my-long-flag)
            set -- "$@" -m ;;
        --another-flag)
            set -- "$@" -a ;;
        "-?"|--help)
            usage
            exit 0 ;;
        --*)
            printf 'Unrecognised long option: %s\n' "$arg" >&2
            usage
            exit 1 ;;
        *)
            set -- "$@" "$arg"
    esac
done

This is cleaner than the bash array solution (personal opinion) since it doesn't need to introduce another variable. It's also better than the auto-generated code that you show, as it retains each command line argument as a separate item in "$@". This is good, because it allows the user to pass arguments containing whitespace characters as long as they are quoted (which the auto-generated code does not do).

Style comments: The loop above is supposed to translate long options into short options for a loop over getopts later. As such, it breaks from that task by actually acting on some options, such as -? and --help. IMHO, these should instead be translated to -h (or some suitable short option). It also does the translation of long options past the point where options should not be accepted. Calling the script as ./script.sh --my-long-flag -- -? should not interpret -? as an option due to the -- (meaning "options end here"). Likewise, ./script.sh filename --my-long-flag should not interpret --my-long-flag as an option, as the parsing of options should stop at the first non-option.
Here's a variant that takes the above into account:

#!/bin/sh

parse=YES

for arg do
    shift
    if [ "$parse" = YES ]; then
        case "$arg" in
            --my-long-flag)
                set -- "$@" -m ;;
            --another-flag)
                set -- "$@" -a ;;
            --help)
                set -- "$@" -h ;;
            --)
                parse=NO
                set -- "$@" -- ;;
            --*)
                printf 'Unrecognised long option: %s\n' "$arg" >&2
                usage
                exit 1 ;;
            *)
                parse=NO
                set -- "$@" "$arg"
        esac
    else
        set -- "$@" "$arg"
    fi
done

What this does not allow is long options with separate option arguments, such as --option hello (the hello would be treated as a non-option and the option parsing would end). Something like --option=hello would be fairly easy to handle with a bit of extra tinkering though.
Can a bash array be used in place of eval set -- "$params"?
1,359,459,458,000
I am writing a script which can choose a file and print specific content. For example, san#./script.sh Expected Usage : ./script.sh --file1 --dns (Here it checks for file1, search for dns name and prints. Basically there are sub-parameters under a parameter) I tried for single parameter/Option as below : options=$@ arguments=($options) index=0; for argument in $options do index=`expr $index + 1`; case $argument in -a | --fun1 ) run_function1 ;; -b | --fun2 ) run_function2 ;; -c | --fun3 ) run_function3 ;; esac done exit; [ ${1} ] || helpinfo Can any one suggest for double parameter(sub options) ? Expected target options : ./script.sh OPTIONS : ./script.sh -h ./script --fun1 stackoverflow microsoft Google --fun2 Yahoo Basically each function will look into one file. I have looked into getopt or getopts, But it doesn't have long option (--long is not possible, instead we can use only -l). But again not sure of sub parameters. Can any one help on this ? I don't want to use getopt or getopts.
This is a version that is more convenient to use than the first one I gave here, in particular it avoids duplicated code for equivalent long and short options. It should handle anything you ever want for options: short options (-q), long options (--quiet), options with arguments, accumulated short options (-qlfinput instead of -q -l -f input), uniquely abbreviated long options (--qui instead of --quiet), end of options by --. Most of the code is fixed; you only have to modify the marked parts. #!/bin/bash # Update USAGE (USAGE1, USAGE2, USAGE3 may remain unchanged): USAGE='Usage: prog [-q|--quiet] [-l|--list] [-f file|--file file] [-Q arg|--query arg] args' USAGE1=' Ambiguously abbreviated long option:' USAGE2=' No such option:' USAGE3=' Missing argument for' # List all long options here (including leading --): LONGOPTS=(--quiet --list --file --query) # List all short options that take an option argument here # (without separator, without leading -): SHORTARGOPTS=fQ while [[ $# -ne 0 ]] ; do # This part remains unchanged case $1 in --) shift ; break ;; ### no more options -) break ;; ### no more options -*) ARG=$1 ; shift ;; *) break ;; ### no more options esac # This part remains unchanged case $ARG in --*) FOUND=0 for I in "${LONGOPTS[@]}" ; do case $I in "$ARG") FOUND=1 ; OPT=$I ; break ;; "$ARG"*) (( FOUND++ )) ; OPT=$I ;; esac done case $FOUND in 0) echo "$USAGE$USAGE2 $ARG" 1>&2 ; exit 1 ;; 1) ;; *) echo "$USAGE$USAGE1 $ARG" 1>&2 ; exit 1 ;; esac ;; -["$SHORTARGOPTS"]?*) OPT=${ARG:0:2} set dummy "${ARG:2}" "$@" shift ;; -?-*) echo "$USAGE" 1>&2 ; exit 1 ;; -??*) OPT=${ARG:0:2} set dummy -"${ARG:2}" "$@" shift ;; -?) OPT=$ARG ;; *) echo "OOPS, this can't happen" 1>&2 ; exit 1 ;; esac # Give both short and long form here. # Note: If the option takes an option argument, it it found in $1. # Copy the argument somewhere and shift afterwards! 
case $OPT in -q|--quiet) QUIETMODE=yes ;; -l|--list) LISTMODE=yes ;; -f|--file) [[ $# -eq 0 ]] && { echo "$USAGE$USAGE3 $OPT" 1>&2 ; exit 1 ; } FILE=$1 ; shift ;; -Q|--query) [[ $# -eq 0 ]] && { echo "$USAGE$USAGE3 $OPT" 1>&2 ; exit 1 ; } QUERYARG=$1 ; shift ;; *) echo "$USAGE$USAGE2 $OPT" 1>&2 ; exit 1 ;; esac done # Remaining arguments are now in "$@": echo "QUIETMODE = $QUIETMODE" echo "LISTMODE = $LISTMODE" echo "FILE = $FILE" echo "QUERYARG = $QUERYARG" echo "REMAINING ARGUMENTS:" "$@"
Including sub-parameters in help options to execute wisely without getopt or getopts?
1,359,459,458,000
I am attempting to do copy a file (or rename a file) by running the script with flags/parameters to give both the source and the destination file name: #!/bin/bash/ while getopts s:d flag do case "${flag}" in s) copy_source=${OPTARG};; d) copy_dest=${OPTARG};; esac done echo "Copy a file input with argument to another file input with argument" cp $copy_source $copy_dest The output is an error: sh test_cp.sh -s file1.txt -d file2.txt Copy a file input with argument to another file input with argument cp: missing destination file operand after ‘file1.txt’ Try 'cp --help' for more information. Does cp (and mv) not accept parametrized destination? What am I doing wrong?
You are missing the mandatory : after the d in your while getopts line if the -d is to accept a parameter. Therefore your copy_dest is empty, and hence cp complains about the "missing operand". If you add "debug" lines such as echo "Source parameter: $copy_source" echo "Destination parameter: $copy_dest" after your loop, you will see the problem. To solve, simply add the :: while getopts s:d: flag do ... done Also, please note that in particular when dealing with filenames, you should always quote shell variables, as in cp "$copy_source" "$copy_dest" In addition, be aware that running a script as sh test_cp.sh will override the shebang-line #!/bin/bash and you cannot be sure that it is run under bash! If you want to ensure the correct shell is being used, you could either explicitly state bash test_cp.sh arguments or make the script file executable and run it as ./test_cp.sh arguments
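Putting those fixes together, a consolidated sketch of the corrected script might look like this. It is wrapped in a hypothetical copyfile function so it is easy to reuse; the usage message and the check that both options were supplied are additions, not part of the original script:

```shell
#!/bin/bash
# Corrected sketch: note the colon after d in the option string (so -d takes
# an argument) and the quoted expansions in the cp call.
copyfile() {
    local OPTIND=1 flag copy_source='' copy_dest=''
    while getopts s:d: flag; do
        case $flag in
            s) copy_source=$OPTARG ;;
            d) copy_dest=$OPTARG ;;
            *) echo "usage: copyfile -s source -d dest" >&2; return 2 ;;
        esac
    done
    if [ -z "$copy_source" ] || [ -z "$copy_dest" ]; then
        echo "usage: copyfile -s source -d dest" >&2
        return 2
    fi
    cp -- "$copy_source" "$copy_dest"
}

# Example: copyfile -s file1.txt -d file2.txt
```

The -- before the filenames additionally protects against source or destination names that begin with a dash.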
Using command-line parameters as destination for cp and mv in bash script
1,359,459,458,000
Considering: #!/bin/sh while getopts ":h" o; do case "$o" in h ) "Usage: sh $(basename "$0") -h Displays help message sh $(basename "$0") arg Outputs ... where: -h help option arg argument." exit 0 ;; \? ) echo "Invalid option -$OPTARG" 1>&2 exit 1 ;; : ) echo "Invalid option -$OPTARG requires argument" 1>&2 exit 1 ;; esac done This invocation returns not found why? $ sh getopts.sh -h getopts.sh: 12: getopts.sh: Usage: sh getopts.sh -h Displays help message sh getopts.sh arg Outputs ... where: -h help option arg argument.: not found This is ok: $ sh getopts.sh arg For this one I was expecting 'Invalid option': $ sh getopts.sh This is ok: $ sh getopts.sh -s x Invalid option -s
You seem to have missed the echo: instead of printing the message, you are passing the whole string to the shell as a command to run. Add an echo before the string:

case "$o" in
    h ) echo "Usage: sh $(basename "$0") -h Displays help message sh $(basename "$0") arg Outputs ... where: -h help option arg argument."
        exit 0
        ;;

But generally, prefer printing a multi-line string with a here-document:

show_help() {
cat <<EOF
Usage:
  sh $(basename "$0") -h     Displays help message
  sh $(basename "$0") arg    Outputs ...
where:
  -h    help option
  arg   argument
EOF
}

and use the function show_help for the -h flag.

Also, when no arguments are passed at all, the very first call to getopts exits the loop, so you cannot handle that case inside the loop. Add a generic check for an empty argument list before invoking getopts:

if [ "$#" -eq 0 ]; then
    printf 'no argument flags provided\n' >&2
    exit 1
fi

Your option string :h says that -h does not take an argument. The : ) clause only applies when you define -h to take an argument, i.e. when the option string is :h:. Only then does running -h without an argument execute the code under : ).

Putting together the whole script:

#!/usr/bin/env bash

if [ "$#" -eq 0 ]; then
    printf 'no argument flags provided\n' >&2
    exit 1
fi

show_help() {
cat <<EOF
Usage:
  sh $(basename "$0") -h     Displays help message
  sh $(basename "$0") arg    Outputs ...
where:
  -h    help option
  arg   argument
EOF
}

while getopts ":h:" opt; do
    case "$opt" in
        h )
            show_help
            exit 1
            ;;
        \? )
            echo "Invalid option -$OPTARG" 1>&2
            exit 1
            ;;
        : )
            echo "Invalid option -$OPTARG requires argument" 1>&2
            exit 1
            ;;
    esac
done

and running it now:

$ bash script.sh
no argument flags provided
$ bash script.sh -h
Invalid option -h requires argument
$ bash script.sh -s
Invalid option -s
Unexpected behavior from getopts
1,359,459,458,000
I want to write a shell script which will take some arguments with some options and print that arguments. Suppose the name of that script is abc.ksh. Usage of that script is - ./abc.ksh -[a <arg>|b <arg>|c|d] <some_string> Now I write a shell script which will take options and arguments #!/bin/ksh # Default Values vara="," varb=false varbname="" varc=false # Scanning inputs while getopts :a:b:cd option do case $option in a) vara=$OPTARG;; #shift $((OPTIND-1));; b) varb=true varbname=$OPTARG;; #shift $((OPTIND-1));; c) varc=true;; #shift $((OPTIND-1));; d) echo "Usage $0 \-[a|b|c|d] <filename>" exit 0;; \?) echo "Invalid option -$OPTARG. Please run '$0 -h' for help" exit 1;; :) echo "Option -$OPTARG requires an argument. Please run '$0 -d' for help" exit 1;; esac done print "Args: $* \nvara: $vara \noptfile: $varb \nvarbname: $varbname \nvarc: $varc" Examples of Correct Inputs: ./abc.ksh -a "sample text" "some_string" ./abc.ksh "some_string" -a "sample text" ./abc.ksh -asample\ text some_string etc... some_string input is not catch in my script. How can I catch that?
It is typical for programs to force the "some_string" part to be the last argument, so that

./abc.ksh "some_string" -a "sample text"

is an error. If you do this, then after parsing the options, $OPTIND holds the index of the last argument (the "some_string" part).

If that is not acceptable, then you can check at the beginning (before you enter the while loop) whether there is a non-prefixed argument. This will let you have "some_string" at the beginning and at the end. If you needed to have it in the middle, you could either not use getopts, or you could have two sets of getopts: when the first one errors out, it could be due to the non-prefixed argument; get it and start a new getopts to get the remaining args. Or you can skip getopts altogether and roll your own solution.
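As a sketch of the "operand must come last" approach, using the option letters from the question (error handling is simplified and the needs_arg-style helpers are omitted; the parse function name is invented):

```shell
parse() {
    OPTIND=1
    vara=',' varb=false varbname='' varc=false
    while getopts :a:b:cd option; do
        case $option in
            a) vara=$OPTARG ;;
            b) varb=true; varbname=$OPTARG ;;
            c) varc=true ;;
            d) echo "Usage: parse [-a arg] [-b arg] [-c] [-d] <some_string>"; return 0 ;;
            *) echo "bad option" >&2; return 1 ;;
        esac
    done
    shift "$((OPTIND - 1))"       # $1 is now the non-prefixed operand
    [ $# -eq 1 ] || { echo "expected exactly one operand" >&2; return 1; }
    printf 'vara=%s operand=%s\n' "$vara" "$1"
}

parse -a 'sample text' some_string
```

If the operand appears anywhere but last (e.g. parse some_string -a x), getopts stops at it immediately, so the count check catches the misplacement and reports an error.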
How to catch optioned and non optioned arguments correctly?
1,359,459,458,000
I have the following bash function: lscf() { while getopts f:d: opt ; do case $opt in f) file="$OPTARG" ;; d) days="$OPTARG" ;; esac done echo file is $file echo days is $days } Running this with arguments does not output any values. Only after running the function without arguments, and then again with arguments does it output the correct values: -bash-4.1$ lscf -d 10 -f file.txt file is days is -bash-4.1$ lscf file is days is -bash-4.1$ lscf -d 10 -f file.txt file is file.txt days is 10 Am I missing something?
Though I can't reproduce the initial run of the function that you have in your question, you should reset OPTIND to 1 in your function to be able to process the function's command line in repeated invocations of it. From the bash manual: OPTIND is initialized to 1 each time the shell or a shell script is invoked. When an option requires an argument, getopts places that argument into the variable OPTARG. The shell does not reset OPTIND automatically; it must be manually reset between multiple calls to getopts within the same shell invocation if a new set of parameters is to be used. From the POSIX standard: If the application sets OPTIND to the value 1, a new set of parameters can be used: either the current positional parameters or new arg values. Any other attempt to invoke getopts multiple times in a single shell execution environment with parameters (positional parameters or arg operands) that are not the same in all invocations, or with an OPTIND value modified to be a value other than 1, produces unspecified results. The "shell invocation" that the bash manual mentions is the same as the "single execution environment" that the POSIX text mentions, and both refer to your shell script or interactive shell. Within the script or interactive shell, multiple calls to your lscf will invoke getopts in the same environment, and OPTIND will need to be reset to 1 before each such invocation. Therefore: lscf() { OPTIND=1 while getopts f:d: opt ; do case $opt in f) file="$OPTARG" ;; d) days="$OPTARG" ;; esac done echo file is $file echo days is $days } If the variables file and days should not be set in the calling shell's environment, they should be local variables. Also, quote variable expansions and use printf to output variable data: lscf() { local file local days OPTIND=1 while getopts f:d: opt ; do case $opt in f) file="$OPTARG" ;; d) days="$OPTARG" ;; esac done printf 'file is %s\n' "$file" printf 'days is %s\n' "$days" }
bash function arguments strange behaviour
1,359,459,458,000
I have a shell script testShell.sh which uses getopts as below: #!/bin/bash while getopts ":j:e:" option; do case "$option" in j) MYHOSTNAME=$OPTARG ;; e) SCRIPT_PATH=$OPTARG ;; *) ;; esac done echo "j=$MYHOSTNAME" echo "e=$SCRIPT_PATH" shift $((OPTIND - 1)) echo "remaining=$@" When I test run it like following: $ testShell.sh -jvalue1 -evalue4 -Djvalue3 -pvalue2 The output which I get is following: j=value3 e=2 remaining= But I would like the output as: j=value1 e=value4 remaining=-Djvalue3 -pvalue2 Is it possible to make sure that getopts only looks at first character post - symbol? so that it doesn't interpret -Djvalue3 as -jvalue3 and -pvalue2 as -e2.
After posting it on 3 forums and searching everywhere... eventually I tried the following and it worked... testShell.sh -jvalue1 -evalue4 -- -Djvalue3 -pvalue2 Notice -- after -evalue4 And the output was j=value1 e=value4 remaining=-Djvalue3 -pvalue2 I believe -- asks getopts to stop processing options.
how to make getopts just read the first character post `-`
1,359,459,458,000
So I am writing a script that mixes options with arguments with options that don't. From research I have found that getopts is the best way to do this, and so far it has been simple to figure out and setup. The problem I am having is figuring out how to set this up so that if no options or arguments are supplied, for it to run a separate set of commands. This is what I have: while getopts ":n:h" opt; do case $opt in n) CODEBLOCK >&2 ;; h) echo "script [-h - help] [-n <node> - runs commands on specified node]" >&2 exit 1 ;; \?) echo "Invalid option: -$OPTARG" >&2 exit 1 ;; :) echo "Option -$OPTARG requires an argument." >&2 exit 1 ;; esac done I have tried adding something like this to the top of the code to catch no arguments, but it then runs the same code even when options and arguments are supplied (something is probably wrong in my syntax here): [[ -n "$1" ]] || { CODEBLOCK1 } while getopts ":n:h" opt; do case $opt in n) CODEBLOCK2 >&2 ;; h) echo "script [-h - help] [-n <node> - runs commands on specified node]" >&2 exit 1 ;; \?) echo "Invalid option: -$OPTARG" >&2 exit 1 ;; :) echo "Option -$OPTARG requires an argument." >&2 exit 1 ;; esac done The man page for getopts was sparse and I have found relatively few examples on searches that provide any insight into getopts, let alone all the various features of it.
You can use any of the following to run commands when $1 is empty: [[ ! $1 ]] && { COMMANDS; } [[ $1 ]] || { COMMANDS; } [[ -z $1 ]] && { COMMANDS; } [[ -n $1 ]] || { COMMANDS; } Also, you don't need to quote the expansion in this particular example, as no word splitting is performed. If you're wanting to check if there are arguments, though, you'd be better to use (( $# )). If I've understood your intentions, here is how your code could be written with getopts: #!/bin/bash (( $# )) || printf '%s\n' 'No arguments' while getopts ':n:h' opt; do case "$opt" in n) [[ $OPTARG ]] && printf '%s\n' "Commands were run, option $OPTARG, so let's do what that says." [[ ! $OPTARG ]] && printf '%s\n' "Commands were run, there was no option, so let's run some stuff." ;; h) printf '%s\n' 'Help printed' ;; *) printf '%s\n' "I don't know what that argument is!" ;; esac done
How to run a specified codeblock with getopts when no options or arguments are supplied?
1,359,459,458,000
I have copied code from tutorialspoint's getopt article and got the following script to work (sort of): ##argument_script.sh VARS=`getopt -o i::o:: --long input::,output:: -- "$@"` eval set -- "$VARS" # extract options and their arguments into variables. while true ; do case "$1" in -i|--input) case "$2" in "") MAPPE='/default/input/here/' ; shift 2 ;; *) MAPPE=$2 ; shift 2 ;; esac ;; -o|--output) case "$2" in "") OUTPUTFOLDER='/default/input/here/' ; shift 2 ;; *) OUTPUTFOLDER=$2 ; shift 2 ;; esac ;; --) shift ; break ;; esac done echo "${MAPPE}" echo "${OUTPUTFOLDER}" #do something here.. that is, I have two optional argument flags -i/--input and -o/-output. I have a problem with the script currently: To overwrite the default value of a flag, you need to write the value you want right after the flag, without any spaces. example: if i wanted to pass /c/ into -i and /f/ into -o, i would need to call the script as: bash argument_script.sh -i/c/ -o/f/. Notice the missing spaces. If I were to write bash argument_script.sh -i /c/ -o /f/ the variables MAPPE and OUTPUTFOLDER would be using the default values. Can the script be rewritten, so the arguments passed into -i/-o needs to be written after a space (example: bash argument_script.sh -i /c/ -o /f/)
The behavior you describe is because you have an extra : in your getopt. Just change your getopt line to this and it will work: VARS=`getopt -o i:o: --long input:,output: -- "$@"` However, this is a very, very convoluted way of writing your script. Here is a simpler version (also correcting some bad practices like capitalized variables): #!/bin/bash ##argument_script.sh vars=$(getopt -o i:o: --long input:,output: -- "$@") eval set -- "$vars" mappe='/default/input/here/' outputFolder='/default/input/here/' # extract options and their arguments into variables. for opt; do case "$opt" in -i|--input) mappe=$2 shift 2 ;; -o|--output) outputFolder=$2 shift 2 ;; esac done echo "mappe: $mappe" echo "out: $outputFolder" You can now do: $ ./argument_script.sh -i /c/ -o /f/ mappe: /c/ out: /f/ Note that it also works if you run ./argument_script.sh -i/c/ -o/f/. The space is not required, it is allowed.
bash script with optional input arguments using getopt
1,359,459,458,000
I have a script where I've implemented switches using getopts. However, I'm having trouble referencing the next argument. My script is for backporting a backup of our website on a local development environment. I've added a -p switch to run some post-deploy steps. Here's my syntax: backport -p /path/to/website_backup.sql.gz So, before the switch, I was testing that a file was specified, and that it's a proper file. Since a filename path was the only argument, I could assume that it was necessary, and also that it would be the first argument ($1). if [[ $# -eq 0 ]] ; then echo 'Specifcy the sql file to backport.'; exit 0; fi if [[ ! -f "$1" ]]; then echo "$1 is not a valid file."; exit 0; fi I found this answer which demonstrated an example of how to use getopts to parse switch arguments: while getopts "p" opt; do case $opt in p) p_post_deploy=true ;; # Handle -a esac done Of course, using $1 as the argument didn't work after implementing getopts. Without the switch it was fine. But, when I added the switch, it was, of course, the first argument, which my script was testing for being a file. $ backport -p /path/to/website_backup.sql.gz -p is not a valid file. So, with the introduction of switches, I can't rely on any argument appearing at particular position in the command. Hard-coding $2 won't work, because the filename argument won't be the second argument if there is no switch. I want a solution that will allow me to accept arguments after switches, and allow me to introduce new switches in the future while only updating the code to handle the new switches themselves (and not have to re-shuffle later arguments that may be moved after the introduction of a new switch). I looked at the answers to the unix.stackexchange question that instructed me how to use getopts to parse switches. One of the answers mentioned $* as a variable representing the remaining arguments. 
if [[ "$*" -eq 0 ]] ; then
    echo 'Specifcy the sql file to backport.';
    exit 0;
fi

However, when I try to use it, I'm not expressing the syntax correctly, and I get a parse error.

~/scripts/backport: line 14: [[: /d/Downloads/database.sql.gz: syntax error: operand expected (error token is "/d/Downloads/database.sql.gz")

How do I test the filename argument after getopts? Here is the script in its current version:

$ cat backport
#!/bin/bash
set -e

while getopts "p" opt; do
  case $opt in
    p) p_post_deploy=true ;;
  esac
done
shift $(($OPTIND - 1))

# testing what this variable looks like
printf "Remaining arguments are: %s\n" "$*"

if [[ "$*" -eq 0 ]] ; then
    echo 'Specifcy the sql file to backport.';
    exit 0;
fi
if [[ ! -f "$*" ]]; then
    echo "$* is not a valid file.";
    exit 0;
fi

drush @local.dev sql-drop -y ;
zcat $1 | drush @local.dev sqlc ;
drush @local.dev cr;

if [ ! -z "$p_post_deploy" ] ; then
    echo "Running post-deploy..."
    SCRIPT_PATH=$(dirname "$BASH_SOURCE")
    source "$SCRIPT_PATH/post-deploy"
    post_deploy
fi
I can only blame myself for poor communication, but Kusalananda has given the answer I was looking for in a comment. After using getopts to parse switches while getopts "p" opt; do case $opt in p) p_post_deploy=true ;; esac done this line shift "$((OPTIND - 1))" will remove all switches from the list of arguments, so that you can use positional arguments again, just as you would without switches. The shell does not reset OPTIND automatically. You have to reset it manually.
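Applied to the question's backport scenario, the whole pattern might look like this sketch. The deployment commands are replaced by an echo, only the -p switch from the question is handled, and the function form (with an OPTIND reset) is an addition for easy testing:

```shell
backport() {
    OPTIND=1                      # reset between calls in the same shell
    p_post_deploy=''
    while getopts p opt; do
        case $opt in
            p) p_post_deploy=true ;;
            *) return 2 ;;
        esac
    done
    shift "$((OPTIND - 1))"       # switches are gone; $1 is the filename again
    [ -n "$1" ] || { echo 'Specify the sql file to backport.'; return 1; }
    echo "file=$1 post_deploy=${p_post_deploy:-false}"
}

backport -p /path/to/website_backup.sql.gz
backport /path/to/website_backup.sql.gz
```

After the shift, $1 refers to the filename whether or not the -p switch was given, so the file checks from the original script work unchanged.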
Parsing script arguments after getopts
1,359,459,458,000
I have two scripts, ins.sh and variables.sh. variables.sh holds various key-value pairs.

ins.sh:

#!/usr/bin/env bash
set -e  # Exit upon error

# This script generates a 64-bit system

source variables.sh

# Parse options
while getopts ":t:" opt; do
  case $opt in
    t )
      if [ $OPTARG = TRUE ] || [ $OPTARG = FALSE ]; then
        sed -i "s/*.MAKE_TESTS=.*/MAKE_TESTS=${OPTARG}/" variables.sh
      else
        echo "Invalid argument. -t only takes either 'TRUE' or 'FALSE'."
        exit 1
      fi
      ;;
    \? )
      echo "Invalid option: -$OPTARG" >&2
      ;;
    : )
      echo "Option -$OPTARG requires an argument."
      ;;
  esac
done

variables.sh:

MAKE_TESTS=TRUE
MAKE_PARALLEL=-j4
INSTALL_DIR=/tmp/install-dir

To change the value I issue the following command:

bash ins.sh -t FALSE

This command should change the line to MAKE_TESTS=FALSE, but this doesn't happen at all. I just want to toggle the value from TRUE to FALSE and vice versa. To achieve this, I was replacing the whole string with the value provided by the user.

UPDATE

For the time being, I've found a way to accomplish my task. First, I delete the whole string and then add the new string:

sed -i "/MAKE_TESTS/d" variables.sh
echo "MAKE_TESTS=${OPTARG}" >> variables.sh

But I would still like to know why my string replacement isn't working.
Replace: sed -i "s/*.MAKE_TESTS=.*/MAKE_TESTS=${OPTARG}/" variables.sh with: sed -i "s/.*MAKE_TESTS=.*/MAKE_TESTS=${OPTARG}/" variables.sh .* means zero or more of any character. By contrast, the meaning of *. may likely vary from one implementation of sed to another. In GNU sed, it means a literal star, *, followed by any one character. Observe: $ echo 'aa' | sed 's/*./HI/' aa $ echo '*a' | sed 's/*./HI/' HI
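A related caveat (my addition, not part of the answer above): .*MAKE_TESTS= also matches lines that merely contain the text, such as a commented-out copy. Anchoring the pattern at the start of the line is safer. The input lines here are made up for illustration:

```shell
# Anchored substitution: only lines that start with MAKE_TESTS= are rewritten.
result=$(printf '%s\n' '# MAKE_TESTS=TRUE by default' 'MAKE_TESTS=TRUE' |
    sed 's/^MAKE_TESTS=.*/MAKE_TESTS=FALSE/')
echo "$result"
```

This prints the comment line untouched and only rewrites the real assignment line.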
sed whole string replacement not working
1,359,459,458,000
I'm trying to use the ksh built-in getopts to manage runtime options for my ksh code. I keep on getting the error: "unknown option argument value" when using an option that requires an argument. Here's the offending code: $ cat usage.sh #!/bin/ksh #set -xv USAGE=$'[-?\n@(#)$Id: '"script_name" USAGE+=$'\n'"script_version"$' $\n]' USAGE+="[m:mode?Sets notification mode.]:[mode:=ALL]" USAGE+="{[mode=SMS?SMS notification][mode=MAIL?EMAIL notification][mode=ALL?EMAIL and SMS notification]}" while getopts "$USAGE" optchar; do case $optchar in m) case "$OPTARG" in MAIL) echo -e "-m MAIL:\tOK!" ;; SMS) echo -e "-m SMS:\tOK!" ;; ALL) echo -e "-m ALL:\tOK!" ;; esac ;; esac done And here is some output: $ ./usage.sh --man SYNOPSIS ./usage.sh [ options ] OPTIONS -m, --mode=mode Sets notification mode. mode=SMS SMS notification mode=MAIL EMAIL notification mode=ALL EMAIL and SMS notification The default value is ALL. IMPLEMENTATION version script_name script_version $ ./usage.sh -m SMS ./usage.sh: -m: SMS: unknown option argument value Usage: ./usage.sh [-m mode] $ ./usage.sh -m pippo ./usage.sh: -m: pippo: unknown option argument value Usage: ./usage.sh [-m mode] I came up with that horribly complex optstring following O'Reilly's - Learning the Korn Shell. If I comment out the fourth USAGE definition line (the one listing the allowed option argument values), this is what I get: $ ./usage.sh --man SYNOPSIS ./usage.sh [ options ] OPTIONS -m, --mode=mode Sets notification mode. The default value is ALL. IMPLEMENTATION version script_name script_version $ ./usage.sh -m SMS -m SMS: OK! $ ./usage.sh -m pippo (nothing) Which I understand as getopts not checking the allowed argument values. How can I have getopts check against disallowed argument values, in a way that it doesn't block allowed ones? $ ksh --version version sh (AT&T Research) 93u+ 2012-08-01
Posting this as memo, the following code works as intended: #!/bin/ksh #set -xv USAGE=$'[-?\n@(#)$Id: '"script_name" USAGE+=$'\n'"script_version"$' $\n]' USAGE+="[m:mode?Sets notification mode.]:[mode:=ALL]" USAGE+="{[S:SMS?SMS notification][M:MAIL?EMAIL notification][A:ALL?EMAIL and SMS notification]}" while getopts "$USAGE" optchar; do case $optchar in m) case "$OPTARG" in M) echo -e "-m MAIL:\tOK!" ;; S) echo -e "-m SMS:\tOK!" ;; A) echo -e "-m SA:\tOK!" ;; esac ;; esac done Here's the output: $ ./usage.sh --man SYNOPSIS ./usage.sh [ options ] OPTIONS -m, --mode=mode Sets notification mode. SMS SMS notification MAIL EMAIL notification ALL EMAIL and SMS notification The default value is ALL. IMPLEMENTATION version script_name script_version $ ./usage.sh -m SMS -m SMS: OK! $ ./usage.sh -m pippo ./usage.sh: -m: pippo: unknown option argument value Usage: ./usage.sh [-m mode] This way -m SMS is equivalent to -m S.
KSH - built-in getopts unknown option argument value
1,359,459,458,000
I spent quite a while researching the problem I encountered, but none of the getopts tutorials say anything about the leading whitespace in OPTARG when using getopts. In bash (on Ubuntu and OSX), executing the below commands: OPTIND=1 && getopts ":n:" opt "-n 1" && echo "OPTARG: '$OPTARG'" and it echoes: OPTARG: ' 1' However, if I execute this: OPTIND=1 && getopts ":n:" opt "-n1" && echo "OPTARG: '$OPTARG'" then I will get what I expect: OPTARG: '1' From what I read online: Normally one or more blanks separate the value from the option letter; however, getopts also handles values that follow the letter immediately [Reference] If the above quote is universally right for getopts, what am I doing wrong that I get that leading whitespace in OPTARG?
You should just leave out the double quotes around "-n 1", as the quoting is what preserves the space before the 1: OPTIND=1 && getopts ":n:" opt -n 1 && echo "OPTARG: '$OPTARG'" gives: OPTARG: '1'
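To make the difference visible, here is a small wrapper (my own, for demonstration) around the question's getopts call. When "-n 1" is quoted it arrives as one argument, so getopts treats everything after the n, including the space, as the option argument:

```shell
#!/bin/sh
# Compare quoted vs unquoted arguments passed explicitly to getopts.
check() {
    OPTIND=1
    getopts ":n:" opt "$@" && echo "OPTARG: '$OPTARG'"
}

check "-n 1"    # one word  -> OPTARG: ' 1'
check -n 1      # two words -> OPTARG: '1'
```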
Strange leading whitespace in OPTARG when using getopts
1,359,459,458,000
for instance, gcc accepts the input file without any flag and the output file with the -o flag in: gcc input.c -o output.out or gcc -o output.out input.c I am creating a random password generator bash script, the user should be able to specify the number of special chars, lowercase chars and uppercase chars using the -s, -l, -u flags respectively. They should also be able to specify the length of the password without any flag. example usage: ./randompassword.sh -s2 -u2 -l3 16 should mean a password with 2 special, 2 uppercase, 3 lowercase chars and a length of 16. I am able to parse the flags using getopts like so: while getopts s:l:u: flag do case "${flag}" in s) special=${OPTARG};; l) lower=${OPTARG};; u) upper=${OPTARG};; esac done However I can't figure out how to get the length argument which does not have a flag. I tried using $1 but this argument should be able to be at any position in the command just like the gcc example.
getopts stops parsing the argument list as soon as it reaches a non-option argument. That is, it returns success as it iterates left-to-right over the option arguments, then returns failure on the first non-option. So the proper use case is to put options and their arguments on the command line before any non-option arguments. The getopts tutorials then show a shift $((OPTIND - 1)) command. The while loop iterates through the option arguments, then the shift removes them from the positional variables, leaving the non-option arguments (if any) in $@. In code, using variables like those in tutorials: while getopts ':s:l:u:' OPTION do case "${OPTION}" in s) special=${OPTARG};; l) lower=${OPTARG};; u) upper=${OPTARG};; \?) echo "$0: Error: Invalid option: -${OPTARG}" >&2; exit 1;; :) echo "$0: Error: option -${OPTARG} requires an argument" >&2; exit 1;; esac done shift $((OPTIND - 1)) # now the positional variables have the non-option arguments echo "${1}" With your example command (./randompassword.sh -s2 -u2 -l3 16), the output is 16. I added a leading : to the getopts string because it gives you "silent error reporting", which lets your script catch errors and issue a friendlier complaint. That's the function of the extra two options I added to the case statement. If the user broke the rule and typed option arguments after non-option arguments, the trailing option argument(s) would not be parsed in the while loop above. It/they would appear in the $@ array holding the remaining positional arguments. If the user followed the guidelines and typed all the option arguments before the non-option arguments, the $@ array will have the non-option arguments after the while loop and shift commands are done.
Per your concern about the order in which non-option arguments are typed, either your script expects these in a strict order, which your user must follow (else they get errors), or your script doesn't require a particular order to them, and must use other parsing methods to figure out how to use them. But in my view, un-ordered arguments are why we have option letters. The remaining arguments are ordered, else they would have option letters. That's just my opinion, though.
How to mix plain arguments with flagged arguments in bash scripting?
1,359,459,458,000
I'm using this, for example: ./imgSorter.sh -d directory -f format The script's content is: #!/bin/bash while getopts ":d:f:" opt; do case $opt in d) echo "-d was triggered with $OPTARG" >&2 ;; f) echo "-f was triggered with $OPTARG" >&2 ;; \?) echo "Invalid option: -$OPTARG" >&2 exit 1 ;; :) echo "Option -$OPTARG requires an argument." >&2 exit 1 ;; esac done Use cases: $ ./imgSorter.sh -d myDir -d was triggered with myDir OK $ ./imgSorter.sh -d -f myFormat -d was triggered with -f NOK: how is it that a string beginning with - is not detected as a flag?
You have told getopts that the -d option should take an argument, and in the command line you use -d -f myformat which clearly (?) says "-f is the argument I'm giving to the -d option". This is not an error in the code, but in the usage of the script on the command line. Your code needs to verify that the option-arguments are correct and that all options are set in an appropriate way. Possibly something like while getopts "d:f:" opt; do case $opt in d) dir=$OPTARG ;; f) format=$OPTARG ;; *) echo 'error' >&2 exit 1 esac done # If -d is *required* if [ ! -d "$dir" ]; then echo 'Option -d missing or designates non-directory' >&2 exit 1 fi # If -d is *optional* if [ -n "$dir" ] && [ ! -d "$dir" ]; then echo 'Option -d designates non-directory' >&2 exit 1 fi If the -d option is optional, and if you want to use a default value for the variable dir in the code above, you would start by setting dir to that default value before the while loop. A command line option can not both take and not take an argument.
how to properly parse shell script flags and arguments using getopts
1,359,459,458,000
Hey, is there any difference between $OPTIND and $#? Is there a particular reason that you use $OPTIND with getopts, not $#?
$OPTIND indicates how far you have progressed through parsing the parameter list (i.e., for options), while $# is simply the number of parameters. They are not really related, because $OPTIND changes, while $# does not (unless you use shift). The POSIX description of getopts goes into some detail.
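A small demonstration of the difference (the flag names here are made up): $# stays constant while OPTIND advances as getopts consumes arguments.

```shell
#!/bin/sh
# -a is a plain flag, -b takes an argument.
demo() {
    OPTIND=1
    while getopts "ab:" opt; do :; done
    echo "\$#=$# OPTIND=$OPTIND"
}

demo -a -b value file    # $#=4 OPTIND=4  (OPTIND now points at "file")
```

After the loop, $# is still 4, while OPTIND has advanced past -a, -b, and value to the first non-option argument.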
Difference between $OPTIND and $#
1,359,459,458,000
If getopts is a bash function then, by my understanding, you need to pass it "$@" (the whole argument list) to let the function know what arguments you have in order to proceed, right? It seems that you don't need to, so how does getopts get to know what arguments I have in the current scope? Does it have a way to trace the previous call like other high-level languages do? while getopts abcde opt; do ˄˄˄˄˄ <-- you only pass the option labels here, so how does getopts know what arguments I have case $opt in ... esac done
getopts is a shell built-in, so it can reference $@ directly. It also sets the shell variables OPTARG and OPTIND. (Note that inside a function, getopts will reference that function's $@ rather than the global arguments. You should localise OPTIND if you want a repeatable (idempotent) function call.) The synopsis (summary) is getopts optstring name [arg ...] and the description is such that, getopts is used by shell procedures to parse positional parameters. optstring contains the option characters to be recognized; if a character is followed by a colon, the option is expected to have an argument, which should be separated from it by white space. Note that (at least) the bash documentation (man bash) also notes, getopts normally parses the positional parameters, but if more arguments are supplied as arg values, getopts parses those instead. References How do I handle switches in a shell script?
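A sketch of the localisation point (the function and option names are made up): getopts inside a function parses that function's "$@", and localising OPTIND makes repeated calls start parsing from scratch.

```shell
#!/bin/bash
# Localising OPTIND lets greet() be called more than once.
greet() {
    local OPTIND=1 name="world"
    while getopts "n:" opt; do
        case $opt in
            n) name=$OPTARG ;;
            *) return 1 ;;
        esac
    done
    echo "hello, $name"
}

greet -n alice    # hello, alice
greet             # hello, world  (the second call still parses correctly)
```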
How bash getopts knows what arguments the call has
1,359,459,458,000
This is my assignment. The task is to output the n longest lines from the input file(s). If there is no argument for n, the default value of n is 5. If there are no files in the parameters, the standard input is used. If there are at least 2 files, output the file names. The output lines should be in the same order as in the original files. I've asked a question here to deal with the optional arguments: How to deal with optional input in shell script? However, I'm stuck with another problem with the new shell script. #!/bin/sh while getopts “n” arg; do case $arg in n) # Check next positional parameter eval nextopt=\${$OPTIND} # existing or starting with dash? if [[ -n $nextopt && $nextopt != -* ]] ; then OPTIND=$((OPTIND + 1)) level=$nextopt else level=5 fi ;; esac done for name do if [ "$#" -gt 1 ]; then printf 'File: %s\n' "$name" fi awk '{ print length(), NR, $0 | "sort -rn" }' $name | head -n $level | sed 's/[^ ]* //' | sort -n done I run it like this sh ex1.sh -n 10 unix1.txt unix1.1.txt and this is the output File: -n awk: can't open file -n source line number 1 File: 10 awk: can't open file 10 source line number 1 File: unix1.txt 2 kbjkbkbbnbnmbnmnmmnbmnbmjbjkb 3 asjdsakdbakjsdbasbkj 4 asjdsakdbakjsdbasbkj 5 asjdsakdbakjsdbasbkj 10 ppûunsdj 11 tieutuvi 13 sdbhsdbjhdsvfdsvfgj 14 avavdvas 16 ffdsdfggdgdgdfgdfgdf112233 17 qwertyuiopsdfghjklxcvbnm,fghjk File: unix1.1.txt 1 csdkbfsdk 2 fskjfnjkfnkjdsndjks 3 fsnjfnsjkf 4 snjfndsjknskjdfbnjksfdsfn 5 323124 6 jknjkkjnk4n4jn2 7 kjnjkb423 13 423b2j3kb4jk23bkb234kb32 14 234jb32jk43b 15 331 "-n" and "10" are not files. Also, if I run like this sh ex1.sh -n unix1.txt unix1.1.txt the output should be the 5 longest lines from the files, but instead: File: -n head: illegal line count -- unix1.txt awk: can't open file -n source line number 1 File: unix1.txt head: illegal line count -- unix1.txt File: unix1.1.txt head: illegal line count -- unix1.txt So how can I fix this?
Although this is not the goal, this would work while getopts “n” arg; do case $arg in n) # Check next positional parameter eval nextopt=\${$OPTIND} # existing or starting with dash? if [[ -n $nextopt && $nextopt != -* ]] ; then OPTIND=$((OPTIND + 1)) level=$nextopt else level=5 fi ;; esac done awk '{ print length(), NR, $0 | "sort -rn" }' unix1.txt | head -n $level | sed 's/[^ ]* //' | sort -n if I run sh ex1.sh -n I've got 2 kbjkbkbbnbnmbnmnmmnbmnbmjbjkb 4 asjdsakdbakjsdbasbkj 5 asjdsakdbakjsdbasbkj 16 ffdsdfggdgdgdfgdfgdf112233 17 qwertyuiopsdfghjklxcvbnm,fghjk or sh ex1.sh -n 10 and I've got 2 kbjkbkbbnbnmbnmnmmnbmnbmjbjkb 3 asjdsakdbakjsdbasbkj 4 asjdsakdbakjsdbasbkj 5 asjdsakdbakjsdbasbkj 10 ppûunsdj 11 tieutuvi 13 sdbhsdbjhdsvfdsvfgj 14 avavdvas 16 ffdsdfggdgdgdfgdfgdf112233 17 qwertyuiopsdfghjklxcvbnm,fghjk which are correct. Also, how to deal with 'If there is no files in the parameter, the standard input is used' ?
Your -n option takes an argument, so you need getopts 'n:' arg. The option argument is found in $OPTARG. Don't touch OPTIND in the while getopts loop. After the loop, shift "$(( OPTIND - 1 ))". This leaves the filenames in the positional parameters. That is, #!/bin/sh level=5 while getopts 'n:' arg; do case $arg in n) level=$OPTARG ;; *) echo 'Error in command line parsing' >&2 esac done shift "$(( OPTIND - 1 ))" for name do # stuff done Next, you never handle the case of no input files. The following makes the script use standard input as a filename if there are none given: if [ "$#" -eq 0 ]; then # handle no filenames, for example: set -- /dev/stdin fi for name do # stuff done I'll leave the rest to you (but I would strongly suggest moving the sort out of awk and instead run that as its own stage in the pipeline, for clarity).
How to correctly identify the order of parameters?
1,507,439,499,000
I'm trying to get a script to: set a variable with -q option show help for -h option, and fail for other options -*, but allow positional arguments Here is the getopts snippet I'm using: while getopts qh opt; do case "${opt}" in q) quiet="true" ;; h) usage exit 1 ;; \?) echo "unrecognized option -- ${OPTARG}" exit 1 ;; esac shift done echo "unparsed: $*" This seems pretty straightforward. However it only works if I provide a single argument (a.sh -q or a.sh -h do what's expected). However, it does not do anything if I provide both arguments, or provide an unrecognized argument as at $2: $ ./a.sh -b unrecognized option -- b $ ./a.sh -q -b unparsed: -b $ ./a.sh -h -k this is my help message unparsed: -k Any ideas why the second argument ($2) is not handled in the getopts loop?
The shift command is misplaced. It should be outside of the while loop. Try: while getopts :qh opt; do case "${opt}" in q) quiet="true" ;; h) usage exit 1 ;; \?) echo "unrecognized option -- ${OPTARG}" exit 1 ;; esac done shift $((OPTIND - 1)) echo "unparsed: $*" Examples If we add the following line to the beginning of the code: usage() { echo "this is my help message"; } Then we can do these tests: $ ./a.sh -q -foo unrecognized option -- f $ ./a.sh -q -h this is my help message
getopts does not match the second argument
1,507,439,499,000
I'm coding a script that goes and searches for files on a remote server and transfers them back to my local computer. I want to be able to do a dry run first, so I know which files I'm bringing back. I'm currently using a mix of getopts and output redirection from some code I found here. It seems to me, through my research, that it's impractical to return arrays from ZSH or Bash functions. To me, that makes it hard to understand how I would code this script up without having to repeat myself a ton. Here is my current script: EDIT: Please forgive me mixing some bashisms with zsh things, I started writing this script using #!/bin/bash but switched to zsh. #!/usr/local/bin/zsh RED='\033[0;31m' NC='\033[0m' GREEN='\033[0;32m' YELLOW='\033[0;33m' dry_run=0 yesterday=1 # Establish -n flag means to do a dry run. while getopts "ny:" flag; do case "$flag" in n) dry_run=1 ;; y) yesterday=${OPTARG} ;; *) echo 'error in command line parsing' >&2 exit 1 esac done shift $(($OPTIND-1)) # This is the folder I'm interested in getting files from folder=${1:?"You must define a folder of interest"} # Check to see if dry-run, if not proceed with copying the files over. if [ "$dry_run" -eq 1 ]; then print -Pn "\n%S%11F%{Initiating Dry-Run%}%s%f" # SSH onto server and find the most recently updated folder. # Then place that newest folder and folder of interest into the absolute file path. 
# Then SSH again and use find with that file-path # Return array of file paths # TODO: **THIS IS THE SECTION I NEED TO REFACTOR INTO A FUNCTION** bison_remote_files=($( { { bison_latest_run=$(ssh -qn falcon1 'find /projects/bison/git/* -mindepth 0 -maxdepth 0 -type d -printf "%T@\t%f\n"' | sort -t$'\t' -r -nk1,5 | sed -n "$yesterday"p | cut -f2-) bison_remote_path=$( echo $bison_latest_run | awk -v folder="$folder" '{print "/projects/bison/git/"$1"/assessment/LWR/validation/"folder}') ssh -qn falcon1 \ "find $bison_remote_path -type f -name '*_out.csv' -not -path '*/doc/*' 2>/dev/null" >&3 3>&-; echo "$?" print -Pn "\n\n%U%B%13F%{Fetching data from:%}%u %B%12F%{ /projects/bison/git/${bison_latest_run}%}%b%f\n" >&2 } | { until read -t1 ret; do print -Pn "%S%11F%{.%}%s%f" >&2 done exit "$ret" } } 3>&1)) # Manipulate remote file paths to match the local machine directory local_file_path=($(for i in "${bison_remote_files[@]}"; do echo $i | gsed -E "s|/projects/bison/git/bison_[0-9]{8}|$HOME/Documents/projects/bison|g" done )) # Loop through remote and local and show where they will be placed for ((i=1; i<=${#bison_remote_files[@]}; i++)); do print -P "\u251C\U2500%B%1F%{Remote File ->%}%b%f ${bison_remote_files[i]}" print -P "\u251C\U2500%B%10F%{Local File ->%}%b%f ${local_file_path[i]}" if [[ $i -lt ${#bison_remote_files[@]} ]]; then print -Pn "\U2502\n" else print -Pn "\U2514\U2500\U2504\U27E2\n" fi done # If it's not a dry run, grab all the files using scp # This is the part where I am stuck... # All my defined variables are un-run in the scope above # How do I craft a function (or something else) so I don't have to do all the above all over again? else printf "${YELLOW}Fetching Data from ${NC}(${GREEN}${bison_latest_run}${NC})${YELLOW}...${NC}\n" for ((i=0; i<${#NEW_RFILEP[@]}; i++)); do scp -qp mcdodyla@falcon1:"${NEW_RFILEP[i]}" "${LOCAL_FILEP[i]}" # Check if scp was successful; if it was, show green.
if [ ${PIPESTATUS[0]} -eq 0 ]; then printf "${GREEN}File Created/Updated at:${NC} ${LOCAL_FILEP[i]}\n" else printf "${RED}Error Fetching File:${NC} ${NEW_RFILEP[i]}\n" fi done printf "${YELLOW}Bison Remote Fetch Complete!${NC}\n" fi As you can see, all my data gets stuck in the first if statement case, and so if I don't want to do a dry run then I have to run all of that code again. Since bash/zsh doesn't really return arrays, how do I refactor this code? EDIT: Here is an example use-case: > bfetch -n "HBEP" Initiating Dry-Run... Fetching data from:  /projects/bison/git/bison_20190827 ├─Remote File -> /projects/bison/git/bison_20190827/assessment/LWR/validation/HBEP/analysis/BK370/HBEP_BK370_out.csv ├─Local File -> /Users/mcdodj/Documents/projects/bison/assessment/LWR/validation/HBEP/analysis/BK370/HBEP_BK370_out.csv │ ├─Remote File -> /projects/bison/git/bison_20190827/assessment/LWR/validation/HBEP/analysis/BK363/HBEP_BK363_out.csv ├─Local File -> /Users/mcdodj/Documents/projects/bison/assessment/LWR/validation/HBEP/analysis/BK363/HBEP_BK363_out.csv │ ├─Remote File -> /projects/bison/git/bison_20190827/assessment/LWR/validation/HBEP/analysis/BK365/HBEP_BK365_out.csv ├─Local File -> /Users/mcdodj/Documents/projects/bison/assessment/LWR/validation/HBEP/analysis/BK365/HBEP_BK365_out.csv
I don't know zsh, but: 1) first ensure that all your "conversational" print statements go to stderr, NOT to stdout, such as: print -Pn "\n%S%11F%{Initiating Dry-Run%}%s%f" >&2 and many others. 2) Instead of executing your scp statements, printf them to stdout, such as: printf 'scp -qp mcdodyla@falcon1:"%s" "%s"\n' "${NEW_RFILEP[i]}" "${LOCAL_FILEP[i]}" This pertains to all statements that modify the filesystem, such as cp, rm, rsync, mkdir, touch, whatever. From a brief inspection of your script, scp was the only one that jumped out at me, but you know your code better than I do. Inspect your code again, and triple-check that all fs-modifying ("irreversible") commands are converted to printf's. You don't want to miss any. Now, just to test that you've converted your script correctly, run it, and throw away stderr: ./myscript 2>/dev/null That should display only the stdout from your script. You must ensure that ALL of that output is valid shell syntax. All of the informational messages should have gone to stderr, and all of the "action" statements should be printf'ed to stdout. If you've still got some informational messages leaking into stdout, go back and edit your script again and ensure that the print statements are redirected >&2. Once you've definitively proven that you've got the info messages going to stderr, and the actual work going to stdout, your conversion is done. To dry-run, simply run the script: ./myscript To actually perform the work, run the script again and pipe stdout to a shell: ./myscript | zsh -v
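Another way to avoid duplicating the logic (my own sketch, not derived from the script above) is a run() wrapper that either prints or executes each filesystem-modifying command, so the surrounding code is written only once:

```shell
#!/bin/sh
# run() echoes the command in dry-run mode and executes it otherwise.
dry_run=1

run() {
    if [ "$dry_run" -eq 1 ]; then
        printf 'would run: %s\n' "$*"
    else
        "$@"
    fi
}

run scp -qp 'host:/remote/a_out.csv' '/local/a_out.csv'
```

Note that "$*" flattens the arguments for display only; the real execution path uses "$@", which preserves each argument's quoting.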
Proper way to code dry run option without having to repeat myself?
1,507,439,499,000
I have a script that starts with getopts and looks as follows: USAGE() { echo -e "Usage: bash $0 [-w <in-dir>] [-o <out-dir>] [-c <template1>] [-t <template2>] \n" 1>&2; exit 1; } if (($# == 0)) then USAGE fi while getopts ":w:o:c:t:h" opt do case $opt in w ) BIGWIGS=$OPTARG ;; o ) OUTDIR=$OPTARG ;; c ) CONTAINER=$OPTARG ;; t ) TRACK=$OPTARG ;; h ) USAGE ;; \? ) echo "Invalid option: -$OPTARG exiting" >&2 exit ;; : ) echo "Option -$OPTARG requires an argument" >&2 exit ;; esac done more commands etc echo $OUTDIR echo $CONTAINER I was doing some testing on this script and at some stage, I didn't need/want to use the -c argument [-c ]. In other words, I was trying to test another specific part of the script not involving the $CONTAINER variable at all. Therefore, I simply added # in front of all commands with the $CONTAINER and did some testing, which was fine. When testing the script without using $CONTAINER, I typed: bash script.bash -w mydir -o myoutdir -t mywantedtemplate However, I was wondering why, given my getopts command, I didn't get a warning. In other words, why did I not get a warning asking for the -c argument? Is this possible? Does the warning only occur if I type: bash script.bash -w mydir -o myoutdir -t mywantedtemplate -c UPDATE After doing some testing, I think this is it: If you don't explicitly write "-c", getopts won't "ask" you for it and give you an error (unless your script is doing something with it - i.e. if you haven't put # in front of each command using this argument) You only get an error if you put "-c " and nothing else Is this correct? Presumably what I did was "bad practice" and should be avoided: when testing, I should just remove the c: from the getopts command entirely. I guess what I am asking is: when you tell getopts about the arguments (the "while" line in my script), are we saying: these are the options you can expect, and the ones followed by a ":" should have an argument with them, BUT they don't HAVE to be given. I.e.
you can expect a c option with an argument, but don't throw an error if the c option is not given at all.
The getopts utility does not know about mandatory options, only about what options are allowed (and what options out of these should take an option argument). If you want to enforce mandatory options, you would have to do so with your own tests in or after the option parsing loop. The getopts utility does not do this because options can have more complex relationships such as some options conflicting, some options requiring the presence of other options etc. This is left to the script author to sort out with their own logic.
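For example, a required -c could be checked after the loop. This is a sketch using the question's option letter; the error text and function wrapper are mine:

```shell
#!/bin/sh
# Enforce a "mandatory" option by testing the variable after parsing.
main() {
    OPTIND=1
    container=""
    while getopts "c:" opt; do
        case $opt in
            c) container=$OPTARG ;;
            *) return 2 ;;
        esac
    done
    if [ -z "$container" ]; then
        echo "error: -c <template> is required" >&2
        return 1
    fi
    echo "container: $container"
}

main -c mytemplate                    # container: mytemplate
main 2>/dev/null || echo "rejected"   # rejected
```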
Handling unused getopts argument (are options not mandatory?)
1,507,439,499,000
I am working on a script that needs to take two arguments and use them as variables in the script. I couldn't get this working and I'm unable to find out what I am doing wrong. I see the issue is with the second argument; as some of my tests (mentioned at the bottom of this post) show, it is not being read. Here is the code: #!usr/bin/bash help() { echo "" echo "Usage: $0 -p patch_level -e environment" echo -e "\t-p Patch level that you are trying to update the server" echo -e "\t-e Environment that the patch need to be run on" exit 1 # Exit script after printing help } while getopts "p:e" opt do case "${opt}" in p ) patch_level="$OPTARG" ;; e ) _env="$OPTARG" ;; ? ) help ;; # Print help in case parameter is non-existent esac done if [ -z "$patch_level" ] || [ -z "$_env" ]; # checking for null parameter then echo "Some or all of the parameters are empty"; help else echo "$patch_level and $_env" fi When I run the script like below, I get this. > ./restart.sh -p 2021 -e sbx > Some or all of the parameters are empty > Usage: ./restart.sh -p patch_level -e environment > -p Patch level that you are trying to update the server > -e Environment that the patch need to be run on Note: I modeled my code based on the third answer in this How can I pass a command line argument into a shell script? I see the issue is with the second variable (-e), because if I change the last if statement from "or" to "and", the script runs but doesn't print anything for the second variable. Here is what I am talking about: if [ -z "$patch_level" ] && [ -z "$_env" ]; the output is ./restart.sh -p 2021 -e sbx 2021 and This server will be patched to 2021 I am on Ubuntu if that matters.
$_env is unset due to how you pass arguments to getopts. You need to add a colon after the e to tell getopts the option takes an argument. while getopts "p:e:" op ^ mandatory here Check: help getopts | less
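With that one-character fix, the question's parsing works. A trimmed sketch keeping the question's variable names (the function wrapper is mine, for demonstration):

```shell
#!/bin/sh
# "p:e:" -- both -p and -e now take an argument.
parse() {
    OPTIND=1
    patch_level="" _env=""
    while getopts "p:e:" opt; do
        case $opt in
            p) patch_level=$OPTARG ;;
            e) _env=$OPTARG ;;
            *) return 1 ;;
        esac
    done
    echo "$patch_level and $_env"
}

parse -p 2021 -e sbx    # 2021 and sbx
```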
script arguments in bash
1,507,439,499,000
I have a script to get an LDAP user's name, email and mobile number: #!/bin/bash echo -n "Enter Unix id > " read UNIXID ldapsearch -x "(cn=$UNIXID)" | awk '/givenName/||/mobile/||/mail/' Here is the output of the script: #./lsearch Enter Unix id > in15004 givenName: Mr. Xyz mail: [email protected] mobile: 9xxxxxxxx1 Now I want to modify the script so that I can run it in non-interactive mode, e.g.: #./lsearch -i in15004 # (i means ID) givenName: Mr. Xyz mail: [email protected] mobile: 9xxxxxxxx1 or: #./lsearch -n Xyz* # (n means givenName) givenName: Mr. Xyz mail: [email protected] mobile: 9xxxxxxxx1 or: ./lsearch -e x@*.com # (e means email id) givenName: Mr. Xyz mail: [email protected] mobile: 9xxxxxxxx1 How can I do that? I tried the below: #!/bin/bash while getopts "i:" OPTION; do case $OPTION in i) UNIXID=$OPTARG ;; esac done ldapsearch -x "(cn=$UNIXID)" | awk '/givenName/||/mobile/||/mail/' #ldapsearch -x "(mail=$MAIL)" | awk '/givenName/||/mobile/||/mail/ #ldapsearch -x "(givenName=$NAME)" | awk '/givenName/||/mobile/||/mail/ exit 0; Here is the output of the script: #./lsearch -i in15004 givenName: Mr. Xyz mail: [email protected] mobile: 9xxxxxxxx1 I think something similar to the above will do, but I'm not sure how to make the loop.
If you want to use getopts (note the "s") to get the command line arguments you can do something like while getopts "i:n:e:" OPT; do case "$OPT" in i) # do stuff with the i option ID="$OPTARG" ;; n) # do stuff with the n option ;; e) # do stuff with the e option ;; esac done getopts takes 2 arguments: a string saying what options it should look for, and the name of the variable to store the current option in. In the string telling it what options to look for, each letter is a short option, and if that letter is followed by a colon it means the option takes an argument; it isn't just a flag that is set.
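Putting it together for the question's script, here is a sketch that builds the LDAP filter from whichever option was given. The attribute names cn/givenName/mail are taken from the question's examples, and the ldapsearch call itself is left as in the original (shown only as a comment so the sketch stands alone):

```shell
#!/bin/sh
# Map -i/-n/-e to the corresponding LDAP filter string.
build_filter() {
    OPTIND=1
    filter=""
    while getopts "i:n:e:" opt; do
        case $opt in
            i) filter="(cn=$OPTARG)" ;;
            n) filter="(givenName=$OPTARG)" ;;
            e) filter="(mail=$OPTARG)" ;;
            *) return 1 ;;
        esac
    done
    echo "$filter"
}

build_filter -i in15004     # (cn=in15004)
build_filter -e 'x@*.com'   # (mail=x@*.com)
# The real script would then run:
#   ldapsearch -x "$(build_filter "$@")" | awk '/givenName/||/mobile/||/mail/'
```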
Command line options with argument in shell script