| date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,361,291,930,000 |
I'm trying to format a supposedly defective hard disk using "mkfs.ext3 -cc /dev/sda1" on a partition that spans over the entire disk.
I wish to understand the meaning of the ongoing error report in mkfs.ext3's command output, on the last line: "...(109/0/0 errors)". I didn't find information about these three values in man pages and other sources.
This is the ongoing output of the running command:
# mkfs.ext3 -cc /dev/sda1
mke2fs 1.42.4 (12-June-2012)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
61054976 inodes, 244190390 blocks
12209519 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
7453 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups saved in blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848
Testing with pattern 0xaa: done
Reading and comparing: 94.30% done, 24:09:03 elapsed. (109/0/0 errors)
|
When all else fails, use the actual sources! There, we see that the fields being printed are:
fprintf(stderr,
_("Pass completed, %u bad blocks found. (%d/%d/%d errors)\n"),
bb_count, num_read_errors, num_write_errors, num_corruption_errors);
In other words, they are the number of read errors, write errors, and corruption errors.
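As an illustration of that format, the three counters can be split out of such a progress line with a bit of shell. The parsing below is purely illustrative (it is not part of e2fsprogs), applied to a line like the one in the question:

```shell
# Split the "(R/W/C errors)" triple out of a badblocks/mke2fs progress line.
# The sample line mirrors the one in the question; the sed/parameter
# expansion here is just an illustration, not part of e2fsprogs.
line='Reading and comparing: 94.30% done, 24:09:03 elapsed. (109/0/0 errors)'
triple=$(printf '%s\n' "$line" | sed -n 's/.*(\([0-9]*\/[0-9]*\/[0-9]*\) errors).*/\1/p')
read_errs=${triple%%/*}
rest=${triple#*/}
write_errs=${rest%%/*}
corrupt_errs=${rest#*/}
echo "read=$read_errs write=$write_errs corruption=$corrupt_errs"
```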
| Meaning of "mkfs.ext3 -cc" error report |
1,562,066,920,000 |
I'm trying to run badblocks on a drive with a single partition. The drive contains a FreeBSD file system on it.
I boot up using a Linux live USB drive. The drive is unmounted. The output of fdisk -l is:
Device Boot Start End Id System
/dev/sda1 * 63 976773167+ a5 FreeBSD
So I run:
# badblocks -v /dev/sda1
And it says:
badblocks: invalid last block - /dev/sda1
I can't find any useful information about this. Am I using the badblocks utility correctly here? Or is this an indication that something is wrong with the drive?
|
No, this isn't an indication that something is wrong with the drive. You are getting this error because badblocks is treating /dev/sda1 as the last-block argument instead of as the device.
The syntax in your question looks correct to me. Try specifying the last-block argument after the device:
badblocks -v /dev/sda1 976773167
If that doesn't work, try adding the first-block to that as well:
badblocks -v /dev/sda1 976773167 63
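One subtlety worth noting when supplying last-block by hand (the answer above passes the fdisk sector number directly): badblocks counts in its own block size, which defaults to 1024 bytes (see the -b option), while fdisk reports 512-byte sectors. A sketch of the conversion, using the partition numbers from the question:

```shell
# badblocks numbers blocks in its own block size (default 1024 bytes,
# see -b), while fdisk reports 512-byte sectors. Converting the
# partition's sector range from the question (start 63, end 976773167):
start_sector=63
end_sector=976773167
sector_size=512
bb_block_size=1024
part_sectors=$((end_sector - start_sector + 1))
last_block=$((part_sectors * sector_size / bb_block_size))
echo "badblocks -b $bb_block_size -v /dev/sda1 $last_block"
```

Alternatively, passing `-b 512` lets you keep the raw sector count.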
Just to assure you that this does not indicate something is wrong with your drive, here is the output when I add an invalid last-block argument "nope":
sudo badblocks -v /dev/sdb1 nope
badblocks: invalid last block - nope
Here is an example from my bash history of the last time I used badblocks (sudo access is required to access these drives on my system):
sudo badblocks -v /dev/sdb1
Output:
Checking blocks 0 to 976751967
Checking for bad blocks (read-only test):
If I cancel the process after a while with Ctrl+C the output is:
Interrupted at block 7470720
Here is the syntax to resume the process (see man badblocks):
badblocks -v device [ last-block ] [ first-block ]
The "last-block" is the last block to be read on the device and "first-block" is where it should start reading. Example:
sudo badblocks -v /dev/sdb1 976751967 7470720
Output:
Checking blocks 7470720 to 976751967 Checking for bad blocks
(read-only test):
| badblocks utility keeps reporting "invalid last block" |
1,562,066,920,000 |
My external hard drive was acting strangely, so I ran badblocks, and it seemed that nearly every block was bad from the first minute. If I did badblocks -v > file, the file was over 100 MB after only seconds of running.
Then, for the hell of it, I ran badblocks on the same drive without the 10-foot USB 3 extension cable I've been using, and it's at 5% with no errors.
Also, if I interrupt badblocks while using the cord, the drive shows up under a different name (it's /dev/sdb, I run badblocks and quit, and the drive is now /dev/sdc). I haven't been able to reproduce this without the cord.
Is it possible for badblocks to be wrong and complain about a perfect drive?
|
Sounds like you have a bad cable. The drive renaming itself is indicative of the USB connection being dropped and restarted and the kernel assigning the next device name to the subsequent connection.
I'd watch dmesg for USB errors while accessing the drive. If it works with the short cable, that further suggests your long cable is bad. Also keep in mind that 10 feet is essentially the maximum length for a USB 3 cable, and any deficiency in the electronics on the motherboard or in the USB hard drive is going to be amplified by a cable that long. So it could be a bad cable, or it could be a cheap USB controller in the drive. The recommendation is the same either way: use a shorter cable.
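For live monitoring you would run `dmesg --follow` and watch for disconnect/reset messages while accessing the drive. The filter itself can be demonstrated on sample lines resembling typical kernel USB messages (exact wording varies by kernel version, so treat these lines as illustrative):

```shell
# Filtering for USB disconnect/reset events; live, you would pipe
# `dmesg --follow` into the same grep. The sample lines below imitate
# common kernel messages and are for illustration only.
sample='usb 2-1: USB disconnect, device number 5
usb 2-1: new SuperSpeed USB device number 6 using xhci_hcd
sd 6:0:0:0: [sdb] Attached SCSI disk'
matches=$(printf '%s\n' "$sample" | grep -c -E 'disconnect|reset')
echo "$matches suspicious USB event(s)"
```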
| Bad blocks only with extension cord? |
1,562,066,920,000 |
I have a failed hard drive from which I need to extract data. My dd kung fu is failing me right now. I know that the drive is failing at sectors 60515007 to 60517093 (512 bytes per sector), and at multiple other locations, and I need to skip those areas. How do I do that with dd? I also need to compress the image on the fly (piping maybe?). Can you recommend a good compression algorithm for that?
|
If you really want to do this with dd, you need to split your reads up:
dd if=/dev/sda bs=512 count=60515006 | gzip -9 > dump1.gz
will dump the first 60515006 sectors of /dev/sda to dump1.gz, compressing with gzip. Then
dd if=/dev/sda bs=512 skip=60517093 count=... | gzip -9 > dump2.gz
will skip the failed part and dump the next however many sectors you need to dump2.gz.
If you can spare the disk capacity somewhere, I would highly recommend using ddrescue instead; it can copy failed disks automatically (it doesn't stop on I/O errors). It will work much faster than dd (it starts with large block reads and only reads smaller amounts where necessary to retrieve data around failed sections) and avoid your having to figure out all the skips etc. It doesn't support compressed output though since it needs to seek around the output file.
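The split-dump approach can be rehearsed safely on a scratch file before touching the failing disk. Here a 300-sector temp file stands in for the drive, and sectors 100-199 play the role of the bad region; all file names are throwaway temps:

```shell
# Rehearsing the split dump on a scratch file instead of a failing disk;
# sectors 100-199 of the scratch file stand in for the bad region
# 60515007-60517093. All names here are throwaway temp files.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=512 count=300 2>/dev/null
dd if="$img" bs=512 count=100 2>/dev/null | gzip -9 > "$img.part1.gz"   # sectors 0-99
dd if="$img" bs=512 skip=200 2>/dev/null | gzip -9 > "$img.part2.gz"    # sectors 200-299
size1=$(gzip -dc "$img.part1.gz" | wc -c)
size2=$(gzip -dc "$img.part2.gz" | wc -c)
echo "part1=$size1 bytes, part2=$size2 bytes"   # 100 sectors each; bad region skipped
rm -f "$img" "$img.part1.gz" "$img.part2.gz"
```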
| How to image certain portions of hard drive only |
1,562,066,920,000 |
I am developing for an embedded Linux application using friendlyARM's micro2440.
It runs on a Samsung s3c2440 ARM processor and uses squashfs in its NAND flash.
Recently, some flash blocks went bad. u-Boot correctly finds them and creates a bad block table with the offsets given by the nand bad command:
Device 0 bad blocks:
01340000
0abc0000
0f080000
0ff80000
0ffa0000
0ffc0000
0ffe0000
When I try to boot the kernel, it correctly scans the bad blocks and creates its bad block table, as seen in the following messages:
Scanning device for bad blocks
Bad eraseblock 154 at 0x000001340000
Bad eraseblock 1374 at 0x00000abc0000
Bad eraseblock 1924 at 0x00000f080000
But when the time comes for the kernel to mount the filesystem in the partition containing the bad block at 0x000001340000, it seems unable to skip the bad blocks and panics. The error messages were:
SQUASHFS error: squashfs_read_data failed to read block 0xd0e24b
SQUASHFS error: Unable to read metadata cache entry [d0e24b]
SQUASHFS error: Unable to read inode 0x3d1d0f68
------------[ cut here ]------------
WARNING: at fs/inode.c:712 unlock_new_inode+0x20/0x3c()
Modules linked in:
[<c0037750>] (unwind_backtrace+0x0/0xcc) from [<c0044994>] (warn_slowpath_null+0x34/0x4c)
[<c0044994>] (warn_slowpath_null+0x34/0x4c) from [<c00a42c8>] (unlock_new_inode+0x20/0x3c)
[<c00a42c8>] (unlock_new_inode+0x20/0x3c) from [<c00a61b8>] (iget_failed+0x14/0x20)
[<c00a61b8>] (iget_failed+0x14/0x20) from [<c00f75cc>] (squashfs_fill_super+0x3c8/0x508)
[<c00f75cc>] (squashfs_fill_super+0x3c8/0x508) from [<c0095990>] (get_sb_bdev+0x110/0x16c)
[<c0095990>] (get_sb_bdev+0x110/0x16c) from [<c00f7164>] (squashfs_get_sb+0x18/0x20)
[<c00f7164>] (squashfs_get_sb+0x18/0x20) from [<c0095008>] (vfs_kern_mount+0x44/0xd8)
[<c0095008>] (vfs_kern_mount+0x44/0xd8) from [<c00950e0>] (do_kern_mount+0x34/0xe0)
[<c00950e0>] (do_kern_mount+0x34/0xe0) from [<c00a9084>] (do_mount+0x5d8/0x658)
[<c00a9084>] (do_mount+0x5d8/0x658) from [<c00a9330>] (sys_mount+0x84/0xc4)
[<c00a9330>] (sys_mount+0x84/0xc4) from [<c0008c60>] (mount_block_root+0xe4/0x20c)
[<c0008c60>] (mount_block_root+0xe4/0x20c) from [<c00090fc>] (prepare_namespace+0x160/0x1c0)
[<c00090fc>] (prepare_namespace+0x160/0x1c0) from [<c00089c8>] (kernel_init+0xd8/0x104)
[<c00089c8>] (kernel_init+0xd8/0x104) from [<c0033738>] (kernel_thread_exit+0x0/0x8)
---[ end trace c21b44698de8995c ]---
VFS: Cannot open root device "mtdblock5" or unknown-block(31,5)
Please append a correct "root=" boot option; here are the available partitions:
1f00 256 mtdblock0 (driver?)
1f01 128 mtdblock1 (driver?)
1f02 640 mtdblock2 (driver?)
1f03 5120 mtdblock3 (driver?)
1f04 5120 mtdblock4 (driver?)
1f05 40960 mtdblock5 (driver?)
1f06 40960 mtdblock6 (driver?)
1f07 167936 mtdblock7 (driver?)
1f08 1024 mtdblock8 (driver?)
Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(31,5)
[<c0037750>] (unwind_backtrace+0x0/0xcc) from [<c02fdd40>] (panic+0x3c/0x114)
[<c02fdd40>] (panic+0x3c/0x114) from [<c0008d44>] (mount_block_root+0x1c8/0x20c)
[<c0008d44>] (mount_block_root+0x1c8/0x20c) from [<c00090fc>] (prepare_namespace+0x160/0x1c0)
[<c00090fc>] (prepare_namespace+0x160/0x1c0) from [<c00089c8>] (kernel_init+0xd8/0x104)
[<c00089c8>] (kernel_init+0xd8/0x104) from [<c0033738>] (kernel_thread_exit+0x0/0x8)
I tried mounting the filesystem in the mtdblock6 partition and everything worked as expected, as there are no bad blocks in that part of the memory.
I investigated the mtd source files responsible for bad block management, but I couldn't find anything useful about how the kernel skips bad blocks.
|
We've figured out that the problem is with squashfs itself. It has no support for bad block detection, as stated here:
http://elinux.org/Support_read-only_block_filesystems_on_MTD_flash
So the possible solution is to use another filesystem or use UBI to manage the bad blocks and then keep using squashfs.
| Kernel does not skip bad blocks when mounting filesystem |
1,562,066,920,000 |
The badblocks utility allows one to find bad blocks on a device, and e2fsck -c allows one to add such bad blocks to the bad block inode so that they will not be used for actual data. But for SSD, it is known that bad sectors are normally reallocated (remapped) transparently by the drive (however, only when a write occurs). So, does it make any sense to use badblocks / e2fsck -c on a SSD?
I suppose that
badblocks alone can make sense to get information on the health of the SSD, e.g. by considering the total number of bad blocks (I don't know whether smartctl from smartmontools can do the same thing... perhaps with a long test smartctl -t long, but I haven't seen any clear documentation);
using e2fsck -c (which adds bad blocks to the bad block inode) should be discouraged, because due to possible reallocation, the associated numbers (logical addresses?) may become obsolete.
But there isn't any warning about the case of SSD in the man pages of these utilities. So I'm wondering...
|
Hard drives also remap failing sectors on writes, and have done so for decades; this isn’t specific to SSDs. The main wrinkle with badblocks and SSDs compared to hard drives is the amount of wear that writing an entire drive entails (but even that’s not necessarily significant).
This remapping (which doesn’t affect externally-visible block identifiers) means that using badblocks to avoid writing to a block is useless — when a bad block is encountered, it’s actually better to write to it, so that the drive can remap it if necessary. And using badblocks to identify blocks which can’t be read is also not particularly useful; if the data is important, it’s better to use a tool such as ddrescue to try to recover it, and if the data isn’t important, it’s better to overwrite the block so that the drive can remap it if necessary.
The drives’ own tests can be used to identify bad blocks; for that, the best option is the offline test, since that is what updates the most error tracking fields (and thus checks for the most errors). If you run that periodically and look for non-zero “offline uncorrectable” sector counts, you should get the same result as running badblocks. (Run smartctl -a and look at the fields which have “Offline” in their “Updated” column.)
In any case on modern drives, if the drive gets bad enough that its remapping can’t cope and thus excluding blocks from a file system would be useful, then it’s time to recycle it.
See also the Arch wiki for a discussion of badblocks v. remapping. badblocks can be used to force identification of bad blocks by the drive firmware, but I suspect that a more targeted approach (at least when writing) would be preferable on an SSD.
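To spot the relevant attributes without reading the whole table, the "Updated" column can be filtered. A sketch using a shortened sample in the usual `smartctl -a` attribute-table layout (real output differs per drive and firmware, so the two sample rows below are illustrative):

```shell
# Pick out SMART attributes updated by the offline test. The two rows
# below are a shortened sample in the usual `smartctl -a` layout
# (ID NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE);
# real output varies per drive.
sample='  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0'
out=$(printf '%s\n' "$sample" | awk '$8 == "Offline" { print $2, $NF }')
echo "$out"
```

Live, you would pipe `smartctl -a /dev/sdX` into the same awk filter and check that the raw values are zero.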
| SSD: `badblocks` / `e2fsck -c` vs reallocated/remapped sectors |
1,562,066,920,000 |
man badblocks says:
-n Use non-destructive read-write mode. By default only a non-
destructive read-only test is done. This option must not be
combined with the -w option, as they are mutually exclusive.
This answer says:
The non-destructive read-write test works by overwriting data, then reading to verify, and then writing the original data back afterwards.
What pattern(s) are used by -n if none are explicitly specified by -t?
|
The default pattern with -n is a random pattern:
const unsigned int patterns[] = { ~0 };
(see pattern_fill for the equivalence to “random”).
In destructive mode, four patterns are used:
const unsigned int patterns[] = {0xaa, 0x55, 0xff, 0x00};
| What pattern(s) does non-destructive badblocks -n write? |
1,562,066,920,000 |
Since btrfs doesn't maintain a list of badblocks, I'm looking for a work-around at a lower layer.
(I'm mining burstcoin and don't mind losing a few blocks here and there.)
It seems that LVM doesn't maintain a badblocks list either.
There is an ingenious workaround with dmsetup: creating a table that avoids the current bad blocks, with an unallocated pool of spare good blocks to fill in for bad ones as they occur. However, I want something more set-and-forget.
This btrfs mailing list post suggested it may be possible to use btrfs over mdadm 3.1+ (which supports badblocks) with RAID0.
How would one use mdadm with the intent of providing a badblocks "layer"?
|
I asked on the linux-raid mailing list if this were possible, and the answer was "no".
| Use mdadm as workaround for lack of badblocks support in btrfs |
1,562,066,920,000 |
I have a 16G pendrive that has some bad blocks:
# f3read /media/morfik/224e0447-1b26-4c3e-a691-5bf1db650d21
SECTORS ok/corrupted/changed/overwritten
Validating file 1.h2w ... 2097112/ 40/ 0/ 0
Validating file 2.h2w ... 2097120/ 32/ 0/ 0
Validating file 3.h2w ... 2097098/ 54/ 0/ 0
Validating file 4.h2w ... 2097148/ 4/ 0/ 0
Validating file 5.h2w ... 2097114/ 38/ 0/ 0
Validating file 6.h2w ... 2097152/ 0/ 0/ 0
Validating file 7.h2w ... 2097152/ 0/ 0/ 0
Validating file 8.h2w ... 2097152/ 0/ 0/ 0
Validating file 9.h2w ... 2097152/ 0/ 0/ 0
Validating file 10.h2w ... 2097152/ 0/ 0/ 0
Validating file 11.h2w ... 2097152/ 0/ 0/ 0
Validating file 12.h2w ... 2097152/ 0/ 0/ 0
Validating file 13.h2w ... 2097152/ 0/ 0/ 0
Validating file 14.h2w ... 2097152/ 0/ 0/ 0
Validating file 15.h2w ... 90664/ 0/ 0/ 0
Data OK: 14.05 GB (29450624 sectors)
Data LOST: 84.00 KB (168 sectors)
Corrupted: 84.00 KB (168 sectors)
Slightly changed: 0.00 Byte (0 sectors)
Overwritten: 0.00 Byte (0 sectors)
Average reading speed: 18.77 MB/s
As you can see, only the first five gigabytes have damaged sectors. The rest is fine. The problem is that when I try to burn a live image to this pendrive, the transfer stops after 50 MiB.
Is there a way to skip the first 5 GB and place the image after the damaged area, so it can boot without a problem?
|
I've managed to solve this problem, but I still wonder if there's a better and easier solution.
Anyway, if you have bad blocks at the beginning of the device and you are unable to burn a live image, you should make two partitions: a first partition covering the damaged area at the start of the drive (left unused), and a second partition on the good area to hold the image's files.
Then you download an image and check its first partition's offset:
# parted /home/morfik/Desktop/debian-live-8.1.0-amd64-mate-desktop.iso
(parted) unit s
(parted) print
Model: (file)
Disk /home/morfik/Desktop/debian-live-8.1.0-amd64-mate-desktop.iso: 2015232s
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
1 64s 2015231s 2015168s primary boot, hidden
So it's 64 sectors, which means 64 × 512 = 32768 bytes. Now we can mount this image:
# mount -o loop,offset=32768 /home/morfik/Desktop/debian-live-8.1.0-amd64-mate-desktop.iso /mnt
mount: /dev/loop0 is write-protected, mounting read-only
# ls -al /mnt
total 593K
dr-xr-xr-x 1 root root 2.0K 2015-06-06 16:09:57 ./
drwxr-xr-x 24 root root 4.0K 2015-06-08 20:54:43 ../
dr-xr-xr-x 1 root root 2.0K 2015-06-06 16:08:34 .disk/
dr-xr-xr-x 1 root root 2.0K 2015-06-06 15:59:10 dists/
dr-xr-xr-x 1 root root 2.0K 2015-06-06 16:09:41 install/
dr-xr-xr-x 1 root root 2.0K 2015-06-06 16:08:29 isolinux/
dr-xr-xr-x 1 root root 2.0K 2015-06-06 16:08:29 live/
dr-xr-xr-x 1 root root 2.0K 2015-06-06 15:59:00 pool/
dr-xr-xr-x 1 root root 2.0K 2015-06-06 16:09:37 tools/
-r--r--r-- 1 root root 133 2015-06-06 16:09:44 autorun.inf
lr-xr-xr-x 1 root root 1 2015-06-06 15:59:10 debian -> ./
-r--r--r-- 1 root root 177K 2015-06-06 16:09:44 g2ldr
-r--r--r-- 1 root root 8.0K 2015-06-06 16:09:44 g2ldr.mbr
-r--r--r-- 1 root root 28K 2015-06-06 16:09:57 md5sum.txt
-r--r--r-- 1 root root 360K 2015-06-06 16:09:44 setup.exe
-r--r--r-- 1 root root 228 2015-06-06 16:09:44 win32-loader.ini
We have access to the files, so we can copy them to the pendrive's second partition:
# cp -a /mnt/* /media/morfik/good
The following command will hardcode the second partition into MBR in order to boot from it:
printf '\x2' | cat /usr/lib/SYSLINUX/altmbr.bin - | dd bs=440 count=1 iflag=fullblock conv=notrunc of=/dev/sdb
I'm using an ext4 filesystem on the second partition, so I have to use extlinux, but the image ships with isolinux. I don't have to remove this folder; I can rename it instead:
# mv isolinux extlinux
The same goes for the config file inside that folder:
# mv isolinux.cfg extlinux.conf
I'm not sure whether this step is necessary, but I always copy all the files anyway:
# cp /usr/lib/syslinux/modules/bios/* /media/morfik/good/extlinux/
The last thing is to install extlinux's VBR on the second partition:
# extlinux -i /media/morfik/good/extlinux/
/media/morfik/good/extlinux/ is device /dev/sdb2
And that's pretty much it. I tested the image; it boots and the live system works well. This solution should work for all kinds of live images.
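The offset arithmetic used in this answer, spelled out (the sector number and sector size come from the parted output above):

```shell
# The mount offset for the ISO's first partition: parted reported it
# starting at sector 64, with 512-byte logical sectors.
start_sector=64
sector_size=512
offset=$((start_sector * sector_size))
echo "mount -o loop,offset=$offset image.iso /mnt"
```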
| Is it possible to burn a live image to a damaged pendrive? |
1,562,066,920,000 |
I am facing a similar problem to this one: Kernel does not recognize nand bad blocks marked by u-boot
I'm using a friendlyARM micro2440 board with the s3c2440 ARM processor. u-Boot has found some bad blocks and written their positions in the bad block table, but when I boot the kernel it seems unable to find those bad blocks and then crashes.
I wanted to try the obscure solution that user found, but I can't figure out how: determining the BBT offset (maybe the s3c2440's BBT offset is also an unusual value, not the one u-Boot uses). Also, if that's the case, how would I change u-Boot's BBT offset?
|
It was found that the problem did not reside in the BBT offset as previously stated. The source of the problem was the usage of squashfs, as said in this link:
http://elinux.org/Support_read-only_block_filesystems_on_MTD_flash
The solutions would be to either use another filesystem or to use UBI to detect the bad blocks.
| How to find out the bad block table offset and how to change it in u-Boot |
1,562,066,920,000 |
I have an SSD that I suspect is failing silently now and then. I have run badblocks on it, and it is clear that the problem is not bad sectors; it might instead be some race condition in the electronics, in which case a retry would probably read the data correctly.
Normal magnetic disks have some ECC to correct errors by taking up more space. Can Linux add an ECC layer on top of my block device?
I am thinking of something similar to device mapper, so maybe:
dmsetup create-ecc /dev/orig /dev/mapper/with_ecc
so any read and write to /dev/mapper/with_ecc will be converted to an ecc-read/write on /dev/orig.
Edit:
It seems others have been looking for it, too:
http://permalink.gmane.org/gmane.linux.kernel.device-mapper.devel/8756
|
btrfs and zfs are engineered for data integrity.
By default, btrfs duplicates metadata in single-device configurations. I think you can duplicate data too, although I've never done it.
zfs has copies=n - which I think of as RAID1 for a single-disk. Consider that the amount of redundancy chosen will negatively impact usable device space as well as the device's performance. Fortunately you can specify replication/copies on a per partition/volume basis.
Check this blog post from Richard Elling / Oracle regarding zfs on single device. Unfortunately none of the graph images are loading for me.
Both real and anecdotal evidence suggests that unrecoverable errors
can occur while the device is still largely operational. ZFS has the
ability to survive such errors without data loss. Very cool. Murphy's
Law will ultimately catch up with you, though. In the case where ZFS
cannot recover the data, ZFS will tell you which file is corrupted.
You can then decide whether or not you should recover it from backups
or source media.
| ECC on a single block device |
1,562,066,920,000 |
Gentlemen,
I need some fatherly advice about e2fsck: I have a disk that has been getting cranky, and "e2fsck -ccv" was indeed showing bad blocks. However, I repartitioned the disk, and now the same command reports that the disk is in perfect health! What happened to my bad blocks? Of course the partitions are now all empty, but surely a bad block is still a bad block? Has the disk's internal housekeeping somehow flagged those blocks off to the point that even e2fsck doesn't get a look at them? Or does e2fsck not work on empty partitions? Or has a repair somehow been made? How can I find out?
And: what are the practicalities of using '-c' vs. '-cc', that is, when and where do I want a read-write test vs. a read-only test?
And: after repartitioning, I tried this: "mkfs.ext4 -vcc ..." in the hopes of checking the disk at the same time as creating the FS, but it took hours and hours. In contrast, "e2fsck -ccvy ..." after the FS was created was much faster: less than an hour for a 500 GB disk with 12 partitions. Why? One needs to know the facts of life before one starts fscking.
|
Filesystem badblock lists are obsolete (ignoring flash filesystems, since you're talking about ext4): bad blocks are remapped by the drive. Look for errors instead; there should be a permanent log of these in the SMART counters. If you see one or more errors / "bad blocks" / "bad sectors", you should consider the disk untrustworthy.
If your valued data is saved redundantly (RAID, backups), some people develop methods to re-establish trust in the drive over a testing period.[*] You aren't using RAID to start with, so I'm not able to recommend this.
Those are the facts of life. The behaviour of mkfs vs. fsck is unfortunate. A read-write test is still potentially useful to stress-test a newly acquired drive. It should take more than one hour, because disk I/O speed is around 100 MB/s and you want to both write and read the whole disk. (The relative performance of modern disks also affects the viability of certain RAID modes.) I also notice that badblocks -w runs several passes with different patterns, which would explain why it takes so long. Since badblock lists are obsolete, you can run badblocks directly and just look for any error.
However, given how long this would take, and that you could not use the disk during this period, you might prefer to use the longest available SMART test, or simply dd if=/dev/sdX bs=10M of=/dev/null and see if you get any read errors.
SMART features are available in GNOME Disks. (It also has a benchmark feature). The error counters are measured in sectors; you can just look at all the counters that say "sectors" and check that they're all zero. It sounds like you might have some under "reallocated sectors".
[*] Writing new data to a bad sector will clear the error. This works by writing the logical sector to a different physical sector in a "spare area", and the drive will make sure to remap future reads of the logical sector.
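The suggested full-surface read check can be rehearsed on a scratch file; against a real disk, `if=` would be the device node (run as root), and any unreadable sector makes dd exit non-zero:

```shell
# Rehearsing the full-surface read check on a scratch file. Against a
# real disk, if= would be /dev/sdX and a read error makes dd fail.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=4 2>/dev/null
if dd if="$img" bs=1M of=/dev/null 2>/dev/null; then
    status=ok
else
    status="read errors"
fi
echo "surface read: $status"
rm -f "$img"
```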
| e2fsck: bad blocks disappearing! |
1,562,066,920,000 |
This command:
badblocks -svn /dev/sda
What does it do? Does it just report the bad blocks? Or does it somehow handle the bad blocks so that I don't need to be worried about them?
I read the manual by man badblocks, but I don't get the -n option:
-s Show the progress of the scan by writing out rough percentage completion of
the current badblocks pass over the disk. Note that badblocks may do multiple
test passes over the disk, in particular if the -p or -w option is requested
by the user.
-v Verbose mode. Will write the number of read errors, write errors and data-
corruptions to stderr.
-n Use non-destructive read-write mode. By default only a non-destructive read-
only test is done. This option must not be combined with the -w option, as
they are mutually exclusive.
The output of running badblocks -svn /dev/sda which lasted for almost two days:
Update
Some posts suggest that after running badblocks -svn /dev/sda, the hard disk controller would take care of bad blocks. Not sure.
to have the hard disk controller replace bad blocks by spare blocks.
https://askubuntu.com/a/490552/507217
If you have fully processed your disk this way, the disk controller should have replaced all bad blocks by working ones and the reallocated count will be increased in the SMART log.
https://askubuntu.com/a/490549/507217
SMART
I checked the SMART table after running the badblocks command by:
smartctl --all /dev/sda
Note that the Current_Pending_Sector raw value is 56, exactly twice the 28 reported by badblocks (badblocks uses 1024-byte blocks by default, i.e. two 512-byte sectors each), so they are very likely related.
Error interpretation
According to this:
How to interpret badblocks output
badblocks' error log is in the form reading/writing/comparing. In my case, all 28 errors are read errors, meaning no application can read those blocks.
OS logs
I looked at OS logs by sudo journalctl -xe. Actually, SMART is throwing errors about those 56 bad sectors (28 bad blocks):
smartd[1243]: Device: /dev/sda [SAT], 56 Currently unreadable (pending) sectors
Conclusion
I'd rather backup the data and replace the hard disk before it's too late.
|
The "non-destructive read-write mode" triggered by the -n option writes the test data to each block, just as -w does, and forces the disk either to accept the write, to reallocate a faulty block, or to return a write error.
However, its big win is that it first reads the block it's about to overwrite, and re-writes that data after the test data has been written. This means that after badblocks has completed, the disk should contain the same data as it did before it started running.
Process
Read block and save
Write block of test data
Capture status result and report if necessary
Rewrite saved block
Repeat with next block until done
Caveat
Writing a good block of data to a disk will result in expected operation: the block will be written. However, if the write fails, the disk firmware will automatically and transparently remap the block address to one of its spare blocks and retry the write for you at that new location on the disk. Provided that that write is successful you won't know anything different and the disk will seem perfectly normal. (In the SMART table, the Sector Reallocated counter will be increased by one.) Eventually as time progresses the set of spare blocks may get used up, and from this point disk writes that would have been remapped will simply fail.
A full disk write test such as one provided by badblocks with either -w or -n will force writes to all disk blocks, ensuring that they are all available to you, or else highlighting disk blocks that cannot be remapped.
Notice that badblocks does not guarantee you haven't lost data: if it cannot read a block it cannot rewrite it after the test, so it doesn't perform the write test (but does report the block as bad). If badblocks cannot read a block then neither would any other application have been able to do so, and your data is lost.
My recommendation would be that if you get any disk blocks that cannot be remapped you replace the disk as soon as possible because you no longer have any safety net. (Personally, I would replace such a disk before reaching this stage.) The ddrescue tool may help in copying data from this broken disk to a new one.
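The save/test/restore cycle listed above can be sketched against one block of a scratch file. This only illustrates the sequence of steps; it is not badblocks' actual implementation, and all file names are throwaway temps:

```shell
# A sketch of the save/test/restore cycle, applied to one 1024-byte
# block (block 2) of a scratch file. Illustration only, not badblocks'
# actual code.
img=$(mktemp); saved=$(mktemp); testdata=$(mktemp)
dd if=/dev/urandom of="$img" bs=1024 count=4 2>/dev/null
before=$(cksum < "$img")
# 1. read the block under test and save it
dd if="$img" of="$saved" bs=1024 skip=2 count=1 2>/dev/null
# 2. overwrite it with test data (badblocks would read this back and compare)
printf '\252%.0s' $(seq 1024) > "$testdata"      # 1024 bytes of 0xaa
dd if="$testdata" of="$img" bs=1024 seek=2 count=1 conv=notrunc 2>/dev/null
# 3. write the saved original data back
dd if="$saved" of="$img" bs=1024 seek=2 count=1 conv=notrunc 2>/dev/null
after=$(cksum < "$img")
[ "$before" = "$after" ] && echo "block restored intact"
rm -f "$img" "$saved" "$testdata"
```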
| What does the command `badblocks -svn /dev/sda` do? Does it just report the bad blocks? |
1,562,066,920,000 |
Suppose there's a hard drive /dev/sda, and both that:
/dev/sda1 is a single ext4 partition taking up the whole disk, and it's mostly empty of data.
dumpe2fs -b /dev/sda1 outputs the badblocks list, which in this case outputs single high number b representing a bad block near the end of /dev/sda; b is fortunately not part of any file.
Now a swap partition needs to be added to the beginning of /dev/sda1, and gparted (v0.30.0-3ubuntu1) is used to:
Resize (shrink) sda1, so that it starts several gigabytes later, but ends at the same place.
Add a swap partition in the gap left by shrinking sda1.
So gparted finishes the job and we run dumpe2fs -b /dev/sda1 again. What happens? Does it...?
Output nothing, meaning the resize forgot the bad block.
Output b unchanged.
Output b - o, where o is the number of blocks by which the start of the shrunk /dev/sda1 moved forward.
NOTE: To simplify the question, suppose that the hard disk in question has no S.M.A.R.T. firmware. (Comments about firmware are off-topic.)
|
GParted doesn’t take any ext2/3/4 badblocks list into account; I checked this by creating an ext4 file system with a forced bad block, then moving it using GParted. Running dumpe2fs -b on the moved partition shows the bad block at the same offset.
The result is 2, so the bad block ignored by the file system no longer corresponds to the real bad block on the medium. This means that the file system ignores a block it could safely use, and is liable to use the bad block it should avoid.
This does make sense, at some level. When GParted (or any other tool) moves a partition, it doesn’t use a file system-specific tool, it moves the container. This works in general because file system data is relative to its container; usually the file system data structures don’t need to be updated as a result of a move. However bad block lists describe features which don’t move with their container... Making GParted handle this would be quite complex: not only would it have to update the bad blocks list itself, it would also have to move data out of the way so that the bad block’s new position in the moved file system is unused.
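The bookkeeping behind this can be spelled out with made-up numbers (both values below are hypothetical, chosen only to illustrate the stale entry):

```shell
# Illustrating the stale bad-block entry after a resize. b is the block
# recorded in the bad block inode before the resize; o is the number of
# filesystem blocks by which the start of sda1 moved forward. Both
# numbers are made up for illustration.
b=244000000
o=1048576
stale=$b              # what dumpe2fs -b still reports after the move
correct=$((b - o))    # where the physical defect now falls, fs-relative
echo "stale entry: $stale, actual defect now at: $correct"
```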
| Does gparted make good use of badblocks lists? |
1,562,066,920,000 |
I got a soft-bricked/hard-bricked 1 TB WD Passport HDD; it happened while transferring an 11 GB PS3 game file from my Mac. My Mac cannot do anything with the HDD, so I'm trying to repair it on a Linux machine.
Running sudo badblocks -v /dev/sdb > badsectors.txt gave me a seemingly endless stream of lines in badsectors.txt, which takes a very long time.
I tried sudo dd if=/dev/zero of=/dev/sdb; it completed after a while and left me with unpartitioned space. I thought that had fixed it, so I formatted the drive and checked whether it still had problems, but unfortunately it just blinks forever and never mounts. Scanning it from the Thunar file manager freezes Thunar, and GParted scans forever; the only way to tell whether it's connected is sudo fdisk -l.
so what should I do?
zero out the bad blocks one by one using dd seek command?
|
You have the highest possible read error rate:
1 Raw_Read_Error_Rate 0x002f 001 001 051 Pre-fail Always FAILING_NOW 72289
which means there's some hardware defect somewhere.
This drive is dead.
| If dd zero does not "format" my disk what should i do? |
1,562,066,920,000 |
I'm using f3 to test hundreds of USB flash memory sticks for errors.
Here's an example output from a faulty drive. First writing test files with f3write:
Free space: 3.74 GB
Creating file 1.h2w ... OK!
Creating file 2.h2w ... OK!
Creating file 3.h2w ... OK!
Creating file 4.h2w ... OK!
Free space: 0.00 Byte
Average writing speed: 2.22 MB/s
Then reading back with f3read:
SECTORS ok/corrupted/changed/overwritten
Validating file 1.h2w ... 2030944/ 0/ 0/ 66208
Validating file 2.h2w ... 2032136/ 0/ 0/ 65016
Validating file 3.h2w ... 2031920/ 0/ 0/ 65232
Validating file 4.h2w ... 1509112/ 0/ 0/ 48376
Data OK: 3.63 GB (7604112 sectors)
Data LOST: 119.55 MB (244832 sectors)
Corrupted: 0.00 Byte (0 sectors)
Slightly changed: 0.00 Byte (0 sectors)
Overwritten: 119.55 MB (244832 sectors)
Average reading speed: 3.23 MB/s
Typically if a USB drive contains errors, they come up in the corrupted column. Recently I've got drives that report errors in the "overwritten" column. I wonder what is the difference between the three.
I've also noticed that the badblocks utility also reports errors in three columns; I wonder if it's the same scheme? EDIT: no, it's not - How to interpret badblocks output
|
The f3 documentation says:
When f3read reads a sector (i.e. 512 bytes, the unit of communication with the card), f3read can check if the sector was correctly written by f3write, and figure out in which file the sector should be and in which position in that file the sector should be. Thus, if a sector is well formed, or with a few bits flipped, but read in an unexpected position, f3read counts it as overwritten. Slightly changed sectors are sectors at the right position with a few bits flipped.
The three types of errors mean:
changed: the sector was written by f3write, and read in the expected position, with some changes (less than the “tolerance”, which allows for two errors);
overwritten: the sector read contains data written by f3write to another sector, possibly with some changes (within the tolerance);
corrupted: the sector doesn’t match data written by f3write (the changes exceed the tolerance).
All three are bad news, but of different kinds. Overwritten sectors indicate that the drive is lying about its capacity and is wrapping writes.
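The classification rules above can be sketched as a simple decision procedure. This is only an illustrative sketch, not f3's actual code; the tolerance of two flipped bits is taken from the documentation quoted above:

```shell
# Hypothetical per-sector classifier mirroring the f3read rules above:
# where the sector was expected, where its payload says it belongs,
# and how many bits differ from the expected pattern.
classify_sector() {
  expected_pos=$1 found_pos=$2 bitflips=$3 tolerance=2
  if [ "$bitflips" -gt "$tolerance" ]; then echo corrupted
  elif [ "$found_pos" -ne "$expected_pos" ]; then echo overwritten
  elif [ "$bitflips" -gt 0 ]; then echo changed
  else echo ok
  fi
}
classify_sector 100 100 0   # right place, intact        -> ok
classify_sector 100 200 1   # wrong place, few flips     -> overwritten
classify_sector 100 100 1   # right place, few flips     -> changed
classify_sector 100 100 5   # too many flips to identify -> corrupted
```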
| f3read - what is the difference between corrupted, changed and overwritten sectors? |
1,562,066,920,000 |
From the manpage:
badblocks - search a device for bad blocks
but as I try to isolate between software and hardware, I might need a bit more context.
Does badblocks scan for software (filesystem) or hardware (ssd) failures?
See also Ubuntu manpage entry at: https://manpages.ubuntu.com/manpages/focal/man8/badblocks.8.html
|
The answer lies in the definition of a badblock. A working definition may be:
Bad Block is an area of storing media that is no longer reliable for the storage of data because it is completely damaged or corrupted.
It is not the best definition to use with the program badblocks, but gives a general idea of what it means.
It is not correct in that it defines the area (sector) as damaged. From the point of view of badblocks it doesn't matter if the sector is damaged, broken or burnt: it just tries to read the block, and, if there is an ECC (Error Checking and Correction) error, the sector is deemed bad.
The ECC is a method to ensure (most of the time) that what was read is consistent (and valid). It is based on redundant error-correcting codes, somewhat like a checksum stored alongside each sector.
An ECC error might be transient; after a couple of retries the error may clear. That is very usual in SSDs because there is a (dynamic) mapping of physical sectors to logical sectors. As soon as a sector gets an ECC error and is later read correctly, the disk firmware will replace the physical sector with a different one.
A sector could give an error and, on the next read, be perfectly fine.
A deeper test is to write each sector with some patterns and ensure that what is read back is the pattern itself (this is what badblocks -w does). That erases the data on the sector, but if correct, the sector can not only be read from but also written to.
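The write-mode test of badblocks -w uses the patterns 0xaa, 0x55, 0xff and 0x00 on the raw device. Its principle can be sketched harmlessly on a scratch file instead of a real disk:

```shell
# Write a pattern, read it back, and compare - the essence of a
# write-mode surface test, done here on a temporary file.
img=$(mktemp)
# fill 512 bytes (one "sector") with the 0xaa pattern (octal 252)
printf '\252%.0s' $(seq 512) > "$img"
# read the first byte back and check it survived intact
readback=$(od -An -tx1 -N1 "$img" | tr -d ' \n')
if [ "$readback" = "aa" ]; then
  echo "pattern verified"
else
  echo "mismatch: got $readback"
fi
rm -f "$img"
```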
So, to answer your specific question:
The program badblocks will try to find sectors that (repeatedly) fail the ECC and therefore should be deemed as bad. That is a hardware failure.
After a disk has been checked by badblocks and found "correct", there might still be filesystem, OS, or other errors.
| Does `badblocks` scan for software or hardware failures? |
1,562,066,920,000 |
I am running
$ uname -a
Linux myhostname 4.14.15-041415-generic #201801231530 SMP Tue Jan 23 20:33:21 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
$ lsb_release -a
No LSB modules are available.
Distributor ID: Nitrux
Description: Nitrux 1.1.4
Release: 1.1.4
Codename: nxos
It has a single hard disk with a system ext4 partition and a swap partition. The hard disk can't complete either the SMART short test or the long one.
SMART Self-test log structure revision number 1
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# 1 Short offline Completed: read failure 90% 32232 11202419
# 2 Extended offline Completed: read failure 90% 32229 11202419
Maybe the disk should be replaced.
In the meanwhile, is it possible to simply instruct the filesystem to avoid the block corresponding to that LBA? So that no further read/write errors are generated from there. In fact, it seems to be an isolated error and the hard disk (except, of course, for that area) is still able to work.
The SMART parameters are weird, because there are pending sectors to be re-allocated, but there are also 0 reallocated sectors. Note that this hard disk is about 10 years old.
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 19
3 Spin_Up_Time 0x0027 140 139 021 Pre-fail Always - 3966
4 Start_Stop_Count 0x0032 098 098 000 Old_age Always - 2058
5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0
7 Seek_Error_Rate 0x002e 100 253 000 Old_age Always - 0
9 Power_On_Hours 0x0032 056 056 000 Old_age Always - 32232
10 Spin_Retry_Count 0x0032 100 100 000 Old_age Always - 0
11 Calibration_Retry_Count 0x0032 100 100 000 Old_age Always - 0
12 Power_Cycle_Count 0x0032 098 098 000 Old_age Always - 2001
192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 206
193 Load_Cycle_Count 0x0032 200 200 000 Old_age Always - 1851
194 Temperature_Celsius 0x0022 103 086 000 Old_age Always - 40
196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0
197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 78
198 Offline_Uncorrectable 0x0030 200 200 000 Old_age Offline - 70
199 UDMA_CRC_Error_Count 0x0032 200 200 000 Old_age Always - 0
200 Multi_Zone_Error_Rate 0x0008 200 200 000 Old_age Offline - 89
In the linked page there is no chosen answer. I must keep the system up and I would like to avoid dd (and there is no clear example of how to use it in this case). Can I run fsck.ext2 -c on a mounted filesystem?
|
From the e2fsck man page (e2fsck is also linked to the names fsck.ext2, fsck.ext3 and fsck.ext4):
Note that in general it is not safe to run e2fsck on mounted filesystems. The only exception is if the -n option is specified, and -c, -l, or -L options are not specified. However, even if it is safe to do so, the results printed by e2fsck are not valid if the filesystem is mounted. If e2fsck asks whether or not you should check a filesystem which is mounted, the only correct answer is "no". Only experts who really know what they are doing should consider answering this question in any other way.
So the answer is "no, you cannot run fsck on a mounted ext2/3/4 filesystem in any mode that would make any changes to the filesystem at all".
At boot time, the root filesystem may be checked while it's mounted in read-only mode or the system is still running on initramfs. But in this situation, the system should be rebooted immediately afterwards if the fsck indicates it had to make any changes.
If a disk block has totally failed so that even repeated retries won't result in the disk being confident that the data has been read correctly, the disk cannot automatically reallocate that block until its contents are overwritten by the OS - because doing the reallocation without having the correct data is equivalent to silently corrupting the data (by replacing a block of data with zeroes). That is worse than a file that simply produces a read error, because the corrupt data may be used in further processing and silently cause other results to be corrupted until it is finally noticed.
A file that produces read errors is usually pretty straightforward to restore from backups, unless it is a critical system file and the system crashes or is unable to run the restore tool if that file is missing.
The fact that SMART indicates there are sectors pending to be re-allocated but no actual re-allocations might mean just that the failed sectors are occupied by system files that are normally only ever read and practically never written. If you can figure out which package those files belong to, you can instruct the package management system to reinstall that package; Nitrux seems to use .deb packages, so apt-get reinstall <package name> would be the command to run. This would cause the file to be rewritten, allowing the disk to complete the re-allocation.
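To find out which file occupies the failing sector, the SMART-reported LBA can be translated into a filesystem block and looked up with debugfs. This is a sketch: the partition start below is an assumed value - take the real one from fdisk -l, and the block size from tune2fs -l.

```shell
lba=11202419        # LBA_of_first_error from the self-test log above
part_start=2048     # assumed partition start sector; check fdisk -l
sector=512 blocksize=4096
fs_block=$(( (lba - part_start) * sector / blocksize ))
echo "filesystem block: $fs_block"
# Then (on the real system, as root):
#   debugfs -R "icheck $fs_block" /dev/sdXN   # block -> inode
#   debugfs -R "ncheck <inode>"   /dev/sdXN   # inode -> file path
```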
Unfortunately, some disk manufacturers have created disks with incomplete SMART implementations, so you can only really trust SMART if it's telling bad news; if it says things are OK but the operating system is reporting read/write errors, then something is bad regardless of what SMART says - and since HDDs are a wear item, in most cases it's the disk that is faulty.
I've worked in various roles in server administration for a living for more than 20 years now. Through all that time, our team's reaction on seeing a 10+ years old disk still in use would have been - and still is:
"Holy ****! If that disk spins down for any reason, there is practically no guarantee at all that it will ever restart again. Can we even get spare parts for old hardware like that with any reasonable price and response time? At the very least, we need to make a very realistic plan on what to do when (not if) that thing fails, preferably get the ball rolling right now on either replacement or virtualization of that old thing ASAP."
Granted, we deal with servers that are almost always running 24 hours a day, every day of every year, through their whole lifetime - and that might not be the case with your system.
But a 10 year old disk, if used anywhere near the "typical" way for the market segment it's designed for, is definitely well into the rising edge of the bathtub curve: its design lifetime has been exceeded and it's wearing out.
| Avoid damaged block in ext4 |
1,562,066,920,000 |
The tool badblocks can give a list of unreadable LBAs, including logical errors I guess.
How can I differentiate between logical (soft) bad blocks and physical (hard) bad blocks?
List logical and physical errors seperately or marked as.
Indicate type of error for any given LBA.
|
As far as the harddisk is concerned, the LBA (logical block address) is supposed to be the "physical" address of the block.
For modern harddisks this is no longer true; there is an additional level of indirection which maps bad LBAs to blocks from a spare list. There is no way to get at this list, unless you hack the harddrive's firmware. However, SMART values will tell you how many blocks are mapped this way, and how many spares are left.
This is also the reason badblocks is basically useless for modern harddisks: The harddisk itself will transparently remap the block on the next write (or whenever it feels like it) as soon as it discovers a problem. So badblocks will nearly always tell you "there are no bad blocks", and the harddisk will remap them until it runs out of spares, at which point you'll be in trouble, because by then the harddisk is at the end of its life, and will fail completely and catastrophically very soon.
I am not sure what you mean by "logical errors" and "physical errors": The harddisk doesn't distinguish between different kinds of bad blocks in the error messages you'll see from the harddisk controller.
If this is an XY problem, and your Y is "I need to distinguish between logical and physical bad blocks", please edit the question and describe the X you want to achieve.
| Differentiate bad logical and physical blocks? (list seperately) |
1,562,066,920,000 |
Recently my hard disk showed some error messages in a SMART utility; I have taken a screenshot of the error messages. They are something like: Current Pending Sector Count error and Reallocated Sector Count. Can someone explain to me how to fix such bad sectors and errors?
Notice: sometimes the system hangs and syslog shows "Kernel:Journal commit I/O error"
|
The best way to fix these bad sectors and to get rid of the warnings is by backing up, replacing the hardware and restoring (if the drive is part of a RAID-5, you should just swap the drive and let the RAID software reconstruct the contents).
Although you could get rid of the problems with these sectors by remapping (or having the drive remap them for you if it is smart enough), that doesn't take away the cause of the problems. For me these error counts are too high to trust the system to continue working.
| Current pending sector count error |
1,514,163,228,000 |
Is there a simple way to reverse an array?
#!/bin/bash
array=(1 2 3 4 5 6 7)
echo "${array[@]}"
so I would get: 7 6 5 4 3 2 1
instead of: 1 2 3 4 5 6 7
|
I have answered the question as written, and this code reverses the array. (Printing the elements in reverse order without reversing the array is just a for loop counting down from the last element to zero.) This is a standard "swap first and last" algorithm.
array=(1 2 3 4 5 6 7)
min=0
max=$(( ${#array[@]} -1 ))
while [[ min -lt max ]]
do
# Swap current first and last elements
x="${array[$min]}"
array[$min]="${array[$max]}"
array[$max]="$x"
# Move closer
(( min++, max-- ))
done
echo "${array[@]}"
It works for arrays of odd and even length.
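The count-down loop mentioned above, for the common case where you only need the elements printed in reverse order without touching the array, can be sketched like this:

```shell
# Non-mutating alternative: walk the indices from last to first
array=(1 2 3 4 5 6 7)
reversed=""
for (( i=${#array[@]}-1; i>=0; i-- )); do
  reversed+="${array[i]} "
done
reversed=${reversed% }   # trim the trailing space
echo "$reversed"
```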
| Bash - reverse an array |
1,514,163,228,000 |
VAR=a,b,c,d
# VAR=$(echo $VAR|tr -d '\n')
echo "[$VAR]"
readarray -td, ARR<<< "$VAR"
declare -p ARR
Result:
[a,b,c,d]
declare -a ARR=([0]="a" [1]="b" [2]="c" [3]=$'d\n')
How can I tell readarray not to add the final newline \n? What is the meaning of that $ symbol in the output?
|
The implicit trailing newline character is not added by the readarray builtin, but by the here-string (<<<) of bash, see Why does a bash here-string add a trailing newline char?. (The $'d\n' in the declare -p output is bash's ANSI-C quoting, which shows that the last element contains a literal newline character.) You can get rid of it by printing the string without the newline using printf and reading it via process substitution (< <(...)):
readarray -td, ARR < <(printf '%s' "$VAR")
declare -p ARR
would properly generate now
declare -a ARR=([0]="a" [1]="b" [2]="c" [3]="d")
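An alternative sketch (requiring bash 4.3+ for negative array indices) keeps the here-string and strips the newline from the last element afterwards:

```shell
VAR=a,b,c,d
readarray -td, ARR <<< "$VAR"
ARR[-1]=${ARR[-1]%$'\n'}   # remove the newline added by <<<
declare -p ARR
```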
| How to remove new line added by readarray when using a delimiter? |
1,514,163,228,000 |
Suppose I have a graphical program named app. Usage example: app -t 'first tab' -t 'second tab' opens two 'tabs' in that program.
The question is: how can I execute the command (i.e. app) from within a bash script if the number of arguments can vary?
Consider this:
#!/bin/bash
tabs=(
'first tab'
'second tab'
)
# Open the app (starting with some tabs).
app # ... How to get `app -t 'first tab' -t 'second tab'`?
I would like the above script to have an effect equivalent to app -t 'first tab' -t 'second tab'. How can such a bash script be written?
Edit: note that the question is asking about composing command line arguments on the fly using an array of arguments.
|
Giving the arguments from an array is easy, "${array[@]}" expands to the array entries as distinct words (arguments). We just need to add the -t flags. To do that, we can loop over the first array, and build another array for the full list of arguments, adding the -t flags as we go:
#!/bin/bash
tabs=("first tab" "second tab")
args=()
for t in "${tabs[@]}" ; do
args+=(-t "$t")
done
app "${args[@]}"
Use "$@" instead of "${tabs[@]}" to take the command line arguments of the script instead of a hard coded list.
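For instance, wrapped in a function that takes the tab names as its own arguments (a sketch; printf stands in for the hypothetical app so the generated argument list is visible):

```shell
open_tabs() {
  local args=()
  for t in "$@"; do
    args+=(-t "$t")
  done
  # app "${args[@]}"   # the real call; printf below just shows the words
  printf '<%s>' "${args[@]}"; echo
}
open_tabs "first tab" "second tab"
```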
Related: How can we run a command stored in a variable?
| Run a command using arguments that come from an array |
1,514,163,228,000 |
The following script fails when run with bash 4.4.20(1)
#!/bin/bash
bar() {
local args=("y")
}
foo() {
local -r args=("x")
bar
}
foo
with error line 3: args: readonly variable but succeeds when run with bash 4.2.46(2), which makes sense after reading 24.2. Local Variables.
The following script with non-array variables runs without any issues:
#!/bin/bash
bar() {
local args="y"
}
foo() {
local -r args="x"
bar
}
foo
I could not find any changes that explain the difference between bash 4.2.46(2) and bash 4.4.20(1).
Q: is this a bug in bash 4.4.20(1)? if this is expected behavior then why does the second script not fail?
|
Your script runs correctly with release 5.1 of the bash shell, but not with intermediate releases after 4.3.
The bug or bugs might have been introduced around release 4.3 or 4.4. Multiple changes touched on how read-only declarations and variables work in the development leading up to both releases.
There are at least two entries in the change log that may relate to fixing the involved bugs:
For bash-5.0-alpha, the change log has this to say (I'm assuming that referencing the readonly builtin also means local -r was affected):
Fixed a bug that could cause builtins like readonly to behave differently when applied to arrays and scalar variables within functions.
For bash-5.1-alpha, an additional bug that could be related to this is also mentioned:
Fixed a bug that caused local variables with the same name as variables appearing in a function's temporary environment to not be marked as local.
Bisecting using the shell's Git repository and your script, we arrive at the bug being introduced with bash-4.4-rc1, and then later fixed in bash-5.1-alpha. Since both are big commits, it's difficult to point to any particular changes in the code.
| Bash 4.4 local readonly array variable scoping: bug? |
1,514,163,228,000 |
Is there a way to find the length of the array *(files names) in zsh without using a for loop to increment some variable?
I naively tried echo ${#*[@]} but it didn't work. (bash syntax is welcome as well)
|
${#*[@]} would be the length of the $* array also known as $@ or $argv, which is the array of positional parameters (in the case of a script or function, that's the arguments the script or function received). Though you'd rather use $# for that.
* alone is just a glob pattern. In list context, that's expanded to the list of files in the current directory that match that pattern. As * is a pattern that matches any string, it would expand to all file names in the current directory (except for the hidden ones).
Now you need to find a list context for that * to be expanded, and then somehow count the number of resulting arguments. One way could be to use an anonymous function:
() {echo There are $# non hidden files in the current directory} *(N)
Instead of *, I used *(N) which is * but with the N (for nullglob) globbing qualifier which makes it so that if the * pattern doesn't match any file, instead of reporting an error, it expands to nothing at all.
The expansion of *(N) is then passed to that anonymous function. Within that anonymous function, that list of file is available in the $@/$argv array, and we get the length of that array with $# (same as $#argv, $#@, $#* or even the awkward ksh syntax like ${#argv[@]}).
| Find array length in zsh script |
1,514,163,228,000 |
The following post solution works as expected:
How to pass an array as function argument?
Therefore - from his answer:
function copyFiles() {
arr=("$@")
for i in "${arr[@]}";
do
echo "$i"
done
}
array=("one 1" "two 2" "three 3")
copyFiles "${array[@]}"
The reason of this post is what if the following case happens:
copyFiles "${array[@]}" "Something More"
copyFiles "Something More" "${array[@]}"
Problem: I realized that when the arguments are received as parameters in the function, they are practically merged, so $1 and $2 no longer work as expected; the "array" argument blends into the other argument.
I already did do a research:
How do I pass an array as an argument?
Sadly typeset -n does not work
and in:
How to pass an array to a function as an actual parameter rather than a global variable
does not work as expected - in that answer there is a comment indicating an issue - with a link with a demo testing/verification - about the array size (${#array[@]}) being different within the function.
So how can I accomplish this goal?
|
It is not possible to pass an array as an argument like that. Even though it looks like you do that, it does not work as you expect.
Your shell (e.g. here: bash) will expand "${array[@]}" to the individual items before executing the function!
So, this
copyFiles "Something More" "${array[@]}"
will actually call
copyFiles "Something More" "one 1" "two 2" "three 3"
So, inside the function it is not possible to distinguish the array from other arguments.
(You could add a reference to an array, but I would argue against using it, as it doesn't seem to be very portable, also you won't want to mix scopes if not necessary).
You can use shift, e.g.
copyFiles() {
var1=$1
shift
for i in "$@"; do ... ; done
}
(Note that arr=("$@") is superfluous; "$@" already behaves like an array. You don't even need to specify "$@" - you could also use for i; do ...; done.)
or parse arguments with something like getopts.
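Putting the shift approach together for the question's second call form - the extra parameter first, the array elements after it - looks like this:

```shell
copyFiles() {
  extra=$1   # the single extra argument
  shift      # everything that remains is the array's elements
  echo "extra: $extra"
  for i in "$@"; do
    echo "file: $i"
  done
}
array=("one 1" "two 2" "three 3")
copyFiles "Something More" "${array[@]}"
```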
| How to pass an array as function argument but with other extra parameters? |
1,514,163,228,000 |
I have a piece of code which works, something like this (note this is inside CloudFormation Template for AWS auto deployment):
EFS_SERVER_IPS_ARRAY=( $(aws efs describe-mount-targets --file-system-id ${SharedFileSystem} | jq '.MountTargets[].IpAddress' -r) )
echo "IPs in EFS_SERVER_IPS_ARRAY:"
for element in "${EFS_SERVER_IPS_ARRAY[@]}"
do
echo "$element"
echo "$element $MOUNT_SOURCE" >> /etc/hosts
done
This works but looks ugly. I want to avoid the array variable and the for loop (basically I don't care about the first echo command).
Can I somehow use the output ($element, which is 1 or more, currently 2 lines of IPs) and funnel it into two executions of something like:
long AWS command >> echo $element $MOUNT_SOURCE >> /etc/hosts
with echo executing as many times as there are elements in the array in the current implementation? How would I rewrite this?
The output of the AWS command is like this:
10.10.10.10
10.22.22.22
Then, the added lines in /etc/hosts look like:
10.10.10.10 unique-id.efs.us-east-1.amazonaws.com
10.22.22.22 unique-id.efs.us-east-1.amazonaws.com
|
aws efs describe-mount-targets --file-system-id ${SharedFileSystem} \
| jq --arg mntsrc "$MOUNT_SOURCE" '.MountTargets[].IpAddress | . + " " + $mntsrc' -r >> /etc/hosts
or, if you prefer,
aws efs describe-mount-targets --file-system-id ${SharedFileSystem} \
| jq '.MountTargets[].IpAddress' -r | sed -e "s~\$~ $MOUNT_SOURCE~" >> /etc/hosts
All that's happening is adding some extra fixed text to the end of each line, which can happen either in jq (top) or in various ways outside (bottom). There's not really any array context here or anything being repeated, so you don't need a loop.
| How to pipe multiple results into a command? |
1,514,163,228,000 |
In reading through the source to fff to learn more about Bash programming, I saw a timeout option passed to read as an array here:
read "${read_flags[@]}" -srn 1 && key "$REPLY"
The value of read_flags is set like this:
read_flags=(-t 0.05)
(The resulting read invocation intended is therefore read -t 0.05 -srn 1).
I can't quite figure out why a string could not have been used, i.e.:
read_flags="-t 0.05"
read "$read_flags" -srn 1 && key "$REPLY"
This string based approach results in an "invalid timeout specification".
Investigating, I came up with a test script parmtest:
show() {
for i in "$@"; do printf '[%s]' "$i"; done
printf '\n'
}
opt_string="-t 1"
opt_array=(-t 1)
echo 'Using string-based option...'
show string "$opt_string" x y z
read "$opt_string"
echo
echo 'Using array-based option...'
show array "${opt_array[@]}" x y z
read "${opt_array[@]}"
Running this, with bash parmtest ($BASH_VERSION is 5.1.4(1)-release), gives:
Using string-based option...
[string][-t 1][x][y][z]
parmtest: line 11: read: 1: invalid timeout specification
Using array-based option...
[array][-t][1][x][y][z]
(1 second delay...)
I can see from the debug output that the value of 1 in the array based approach is separate and without whitespace. I can also see from the error message that there's an extra space before the 1: read: 1: invalid timeout specification. My suspicions are in that area.
The strange thing is that if I use this approach with another command, e.g. date, the problem doesn't exist:
show() {
for i in "$@"; do printf '[%s]' "$i"; done
printf '\n'
}
opt_string="-d 1"
opt_array=(-d 1)
echo 'Using string-based option...'
show string "$opt_string" x y z
date "$opt_string"
echo
echo 'Using array-based option...'
show array "${opt_array[@]}" x y z
date "${opt_array[@]}"
(The only differences are the opt_string and opt_array now specify -d not -t and I'm calling date not read in each case).
When run with bash parmtest this produces:
Using string-based option...
[string][-d 1][x][y][z]
Wed Sep 1 01:00:00 UTC 2021
Using array-based option...
[array][-d][1][x][y][z]
Wed Sep 1 01:00:00 UTC 2021
No error.
I've searched, but in vain, to find an answer to this. Moreover, the author wrote this bit directly in one go and used an array immediately, which makes me wonder.
Thank you in advance.
Update 03 Sep : Here's the blog post where I've written up what I've learned so far from reading through fff, and I've referenced this question and the great answers in it too: Exploring fff part 1 - main.
|
The reason is a difference in how the read builtin function and the date command interpret their command-line arguments.
But, first things first. In both of your examples, you place - as is recommended - quotes around the dereferencing of your shell variables, be it "${read_flags[@]}" in the array case or "$read_flags" in the scalar case.
The main reason why it is recommended to always quote your shell variables is to prevent unwanted word splitting. Consider the following
You have a file called My favorite songs.txt with spaces in it, and want to move it to the directory playlists/.
If you store the filename in a variable $fname and call
mv $fname playlists/
the mv command will see four arguments: My, favorite, songs.txt and playlists/ and try to move the three nonexistent files My, favorite and songs.txt to the directory playlists/. Obviously not what you want.
Instead, if you place the $fname reference in double-quotes, as in
mv "$fname" playlists/
it makes sure the shell passes this entire string including the spaces as one word to mv, so that it recognizes it is just one file (albeit with spaces in its name) that needs to be moved.
Now you have a situation in which you want to store option arguments in a shell variable. These are tricky, because sometimes they are long, sometimes short, and sometimes they take a value. There are numerous ways to specify options that take arguments, and usually how they are parsed is left entirely at the discretion of the programmer (see this Q&A for a discussion). The reason why Bash's read builtin and the date command react differently therefore likely lies in the internal workings of how these two parse their command-line arguments. However, we may speculate a little.
When storing -t 0.05 in a scalar shell variable and passing it as "$opt_string", the recipient will see this as one string containing a space (see above).
When storing -t and 0.05 in an array variable and passing it as "${opt_array[@]}" the recipient will see this as two separate items, the -t and the 0.05.(1)(2)
Many programs will use the getopt() function from the GNU C library for parsing command-line arguments, as is recommended by the POSIX guidelines.
getopt() distinguishes between "short" and "long" option formats, e.g. date -u or date --utc in the case of the date command. The way option values for an option (say, -o / --option) are interpreted by getopt() is usually -ovalue or -o value for short options and --option=value or --option value for long options.
When passing -t 0.05 as two words to a tool that uses getopt(), it will take the first character after the - as being the option name and the next word as the option value (the -o value syntax). So, read would take t as option name and 0.05 as option value.
When passing -t 0.05 as one word, it will be interpreted as the -ovalue syntax: getopt() will take (again) the first character after the - as the option name and the remainder of the string as option value, so the value would be 0.05 with a leading space.
The read command apparently doesn't accept timeout specifications with a leading space. And indeed, if you call
read -t " 0.05" -srn 1
where the value is explicitly a string with leading space, read also complains about this.
As a conclusion, the date command is obviously written in a more lenient way when it comes to the option value for -d and doesn't care if the value string starts with a space. This is perhaps not unexpected, as the values that the date specifications can take on are very diverse, as opposed to the case of a timeout specification that (clearly) needs to be a number.
(1) Note that using the @ (as opposed to *) makes a great difference here, because when the array reference is quoted, all array elements will then appear as if they were individually quoted and thus could contain spaces themselves without being split further.
(2) In principle, there is a third option: Store -t 0.05 in a scalar variable $opt_string, but pass it as $opt_string without the quotes. In this case, we would have word-splitting at the space, and again two items, -t and 0.05, would be passed separately to the program. However, this is not the recommended way because sometimes your argument value will have explicit whitespaces that need preserving.
| Bash's read builtin errors on a string-based timeout option specification but not an array-based one. Why? |
1,514,163,228,000 |
I don't understand why "${ARRAY[@]}" gets expanded to multiple words, when it's quoted ("...")?
Take this example:
IFS=":" read -ra ARRAY <<< "foo:bar:baz"
for e in "${ARRAY[@]}"; do echo $e; done
foo
bar
baz
Any other variable that I expand in quotes, say "${VAR}", results in a single word:
VAR="foo bar baz"
for a in "${VAR}"; do echo $a; done
foo bar baz
Can anyone explain this to a novice Linux user?
|
Because arrays when indexed with @ and double quoted expand to a list of the elements. It's documented in man bash under "Arrays":
If the word is double-quoted, ... ${name[@]} expands each element
of name to a separate word.
This behaviour is required if you don't want each element to be subject to word splitting on $IFS characters and globbing (i.e. expansion of *, ?, or [...]).
#!/bin/bash
arr=(a 'b c' d)
for el in ${arr[@]} ; do echo "1: $el" ; done
for el in "${arr[@]}" ; do echo "2: $el" ; done
Output:
1: a
1: b
1: c
1: d
2: a
2: b c
2: d
If you want the array expanded as a single string (with the elements of the array delimited by the first character of $IFS, by default a space), then use "${arr[*]}".
The "${arr[@]}" and "${arr[*]}" syntax for array expansion is analogous to "$@" and "$*" for the positional parameters.
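To see the joining behaviour of the [*] form with a non-default $IFS, a small sketch:

```shell
arr=(a 'b c' d)
IFS=:
joined="${arr[*]}"   # elements joined with the first character of IFS
echo "$joined"
unset IFS            # restore default word-splitting behaviour
```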
| Why is "${ARRAY[@]}" expanded into multiple words, when it's quoted? |
1,514,163,228,000 |
#!/usr/bin/bash
ARGENT=("Nous devons économiser de l'argent."
"Je dois économiser de l'argent.")
BIENETRE=("Comment vas-tu?" "Tout va bien ?")
aoarrs=("${ARGENT}" "${BIENETRE}")
select arr in "${aoarrs[@]}"; do
for el in "${arr[@]}"; do
echo "$el"
done
break
done
I want this script to print the array names to the user, ARGENT and BIENETRE,
so that the user can select one of them. After the user's input the script is meant
to print every element of the selected array. I want to use select to pick which array (from an array of arrays, aoarrs) to loop through. The reason I want to use select is that in the real world my array of arrays may have many more than just two arrays in it. How might I accomplish that?
|
You'll store the array names in aoarrs, and inside the select body declare a nameref to the chosen name:
ARGENT=("Nous devons économiser de l'argent."
"Je dois économiser de l'argent.")
BIENETRE=("Comment vas-tu?" "Tout va bien ?")
aoarrs=(ARGENT BIENETRE)
PS3='Which array? '
select arr in "${aoarrs[@]}"; do
[[ $arr ]] || continue
declare -n ref=$arr
for i in "${!ref[@]}"; do
printf '%d\t%s\n' $i "${ref[i]}"
done
break
done
Running might look like
1) ARGENT
2) BIENETRE
Which array? 3
Which array? 4
Which array? 5
Which array? 2
0 Comment vas-tu?
1 Tout va bien ?
| How do I select an array to loop through from an array of arrays? |
1,514,163,228,000 |
With the following code:
#! /bin/bash
declare -a arr=("element1"
"element2" "element3"
"element4" )
echo "1"
echo "${arr[@]}"
echo "2"
echo ${arr[*]}
The output is:
1
element1 element2 element3 element4
2
element1 element2 element3 element4
So the output is the same.
So when is it mandatory to use one approach over the other?
|
Compare the output of these three loops:
#!/bin/bash
declare -a arr=("this is" "a test" "of bash")
echo "LOOP 1"
for x in ${arr[*]}; do
echo "item: $x"
done
echo
echo "LOOP 2"
for x in "${arr[*]}"; do
echo "item: $x"
done
echo
echo "LOOP 3"
for x in "${arr[@]}"; do
echo "item: $x"
done
The above script will produce this output:
LOOP 1
item: this
item: is
item: a
item: test
item: of
item: bash
LOOP 2
item: this is a test of bash
LOOP 3
item: this is
item: a test
item: of bash
The use of "${array[@]}" in double quotes preserves the items in the array, even if they contain whitespace, whereas you lose that information using either "${array[*]}" or ${array[*]}.
This is explained in the "Arrays" section of the bash(1) man page, which says:
Any element of an array may be referenced using ${name[subscript]}. The braces are required to avoid conflicts with pathname expansion. If subscript is @ or *, the word expands to all members of name. These subscripts differ only when the word appears within double quotes. If the word is double-quoted, ${name[*]} expands to a single word with the value of each array member separated by the first character of the IFS special variable, and ${name[@]} expands each element of name to a separate word...
| What is the difference between ${array[*]} and ${array[@]}? When use each one over the other? [duplicate] |
1,514,163,228,000 |
I have an array like this:
array=(1 2 7 6)
and would like to search for the second largest value, with the output being
secondGreatest=6
Is there any way to do this in bash?
|
printf '%s\n' "${array[@]}" | sort -n | tail -2 | head -1
Print each value of the array on its own line, sort it, get the last 2 values, remove the last value
secondGreatest=$(printf '%s\n' "${array[@]}" | sort -n | tail -2 | head -1)
Set that value to the secondGreatest variable.
Glenn Jackman had an excellent point about duplicate numbers that I didn't consider. If you only care about unique values you can use the -u flag of sort:
secondGreatest=$(printf '%s\n' "${array[@]}" | sort -nu | tail -2 | head -1)
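If the values are whole numbers, a pure-shell one-pass alternative is possible. A sketch: it assumes at least two elements, and treats duplicate maxima as ordinary values (like sort without -u):

```bash
#!/usr/bin/env bash
array=(1 2 7 6)
# seed max/second from the first two elements
if (( array[0] >= array[1] )); then
  max=${array[0]} second=${array[1]}
else
  max=${array[1]} second=${array[0]}
fi
for v in "${array[@]:2}"; do
  if (( v > max )); then
    second=$max
    max=$v
  elif (( v > second && v < max )); then
    second=$v
  fi
done
echo "$second"   # 6
```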
| Find second largest value in array |
1,514,163,228,000 |
I have an input file, names.txt, with the 1 word per line:
apple
abble
aplle
With my bash script I am trying to achieve the following output:
apple and apple
apple and abble
apple and aplle
abble and apple
abble and abble
abble and aplle
aplle and apple
aplle and abble
aplle and aplle
Here is my bash script
#!/usr/bin bash
readarray -t seqcol < names.txt
joiner () {
val1=$1
val2=$2
echo "$val1 and $val2"
}
export -f joiner
parallel -j 20 '
line=($(echo {}))
for word in "${line[@]}"; do
joiner "${line}" "${word}"
done
' ::: "${seqcol[@]}"
but it is only outputting the following 3 lines comparing identical elements from the array
apple and apple
abble and abble
aplle and aplle
I have a script that uses a while read line loop, but it is too slow (my actual datafile has about 200k lines). That is why I want to use array elements and GNU parallel at the same time to speed the process up.
I have tried different ways of accessing the array elements within the parallel ' ' command (by mainly modifying this loop - for word in "${line[@]}", or by supplying the array to parallel via printf '%s\n' "${seqcol[@]}") but they are either leading to errors or output blank lines.
I would appreciate any help!
|
GNU Parallel can generate all combinations of input sources.
In your case you simply use names.txt twice:
parallel -k echo {1} and {2} :::: names.txt names.txt
Or (if you really have an array):
readarray -t seqcol < names.txt
parallel -kj 20 echo {1} and {2} ::: "${seqcol[@]}" ::: "${seqcol[@]}"
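If GNU parallel is not available, the same cross product can be generated serially with a plain nested loop over the array; a sketch for comparison:

```bash
#!/usr/bin/env bash
seqcol=(apple abble aplle)
pairs=()
for a in "${seqcol[@]}"; do
  for b in "${seqcol[@]}"; do
    pairs+=( "$a and $b" )
  done
done
printf '%s\n' "${pairs[@]}"
```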
| Iterating over array elements with gnu parallel |
1,514,163,228,000 |
I have 2 arrays to process in a bash script simultaneously.
The first array contains labels.
Second array contains values, as under
LABELS=(label1 label2 label3 label4 )
VALUES=(91 18 7 4)
What's required is:
a loop that will echo the indexed-item from LABELS array & and in front of that corresponding value for that item from VALUES array, as under
label1 91
label2 18
label3 7
label4 4
I guess nested loop will not work, I tried below, but it won't work by syntax
for label in {LABELS[@]} && value in {VALUES[@]}
do
echo ${label} ${value}
done
|
Just loop over the indices. e.g.
for (( i = 0; i < "${#LABELS[@]}"; i++ ))
do echo "${LABELS[$i]} ${VALUES[$i]}"
done
Instead of echo, you can use printf for more format control, e.g.
printf '%6s: %3d\n' "${LABELS[$i]}" "${VALUES[$i]}"
to line up labels with up to 6 letters and numbers with up to 3 digits.
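If the arrays could be sparse (missing indices), iterating over the actual keys of one array is safer than counting from 0. A sketch, assuming both arrays use the same set of indices:

```bash
#!/usr/bin/env bash
LABELS=(label1 label2 label3 label4)
VALUES=(91 18 7 4)
# "${!LABELS[@]}" expands to the indices that actually exist
for i in "${!LABELS[@]}"; do
  printf '%s %s\n' "${LABELS[i]}" "${VALUES[i]}"
done
```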
| echoing values at the same indexes of 2 arrays simultaneously |
1,514,163,228,000 |
I need to process some strings containing paths. How do I split such a string by / as delimiter resulting in an unknown number of path-parts and how do I, in the end, extract the resulting path-parts?
cut is obviously not the tool of choice as it needs you to know the number of parts beforehand and it also doesn't output each part such that I could use readarray or mapfile to collect them into an array.
|
In Bash, you can use read -a and a here-string to split the string into an array:
path=/foo/bar/doo
IFS=/ read -r -a parts <<< "$path"
That would give an array with the four elements (empty), foo, bar, and doo.
That doesn't work with paths containing newlines, since read treats the newline as a separator by default. To prevent that, you'd need to add -d '', but then there's the problem that the here-string adds a newline, which then must be removed from the last element:
path=$'/path/with/new\nlines'
IFS=/ read -d '' -r -a parts <<< "$path"
parts[-1]=${parts[-1]%$'\n'}
(parts[-1] refers to the last element of the array, and ${var%text} expands to the value of var with the trailing part matching text removed.)
Also note that if the path can contain duplicate slashes, e.g. foo//bar, you'll get empty array elements in the middle. Similarly if the path ends with a slash, you'll get an empty element at the end.
You could either ignore them, or preprocess the path to remove them, with something like this, to remove duplicate slashes
shopt -s extglob
path="${path//+('/')/'/'}"
and to remove trailing slashes:
shopt -s extglob
path="${path%+('/')}"
But then again, note that at the start of a pathname, a double slash //foo is a reserved special notation, different from a single (or triple etc.) slash, but you're not likely to see that in practice, so I'll ignore it.
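Putting the pieces together, a sketch of the whole cleanup-then-split sequence. It uses tr -s to squeeze duplicate slashes, a simpler (if external-command-based) alternative to the extglob substitution:

```bash
#!/usr/bin/env bash
raw='/foo//bar//doo/'
path=$(printf '%s' "$raw" | tr -s /)   # collapse runs of slashes: /foo/bar/doo/
path="${path%/}"                       # drop the trailing slash:  /foo/bar/doo
IFS=/ read -r -a parts <<< "$path"
printf '%s\n' "${parts[@]}"            # (empty), foo, bar, doo
```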
| How do I split a string by a delimiter resulting in an unknown number of parts and how can I collect the results in an array? |
1,514,163,228,000 |
I have an array
snapshots=(1 2 3 4)
When I run
printf "${snapshots[*]}\n"
It prints as expected
1 2 3 4
But when I run
printf "${snapshots[@]}\n"
It just prints
1
without a newline. My understanding is that accessing an array with @ is supposed to expand the array so each element is on a newline but it does not appear to do this with printf while it does do this with echo. Why is this?
|
printf interprets its first argument as a format string, and prints that; any further arguments are only used as required in the format string.
With printf "${snapshots[*]}\n", the first argument is the elements of the array joined with the first character of $IFS (space by default) followed by backslash and n: "1 2 3 4\n". Printing that shows all the values in the array separated by spaces and followed by a newline.
With printf "${snapshots[@]}\n", the first argument is the first entry in the array, "1", and the rest of the array is provided as separate arguments for the format string to use. The last argument has \n appended: "2" "3" "4\n". Since the format string doesn’t reference any additional arguments, they are all ignored. All that is output is the first value, with no following newline.
To see all the values when using @, you need to provide an actual format string:
printf "%s\n" "${snapshots[@]}"
A format string containing a reference to arguments is repeated as many times as necessary to consume all the arguments. So a single reference is sufficient to print all the values in the array, here with each followed by a newline character.
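The same reuse rule means a format string with two references consumes the arguments in pairs; a quick sketch:

```bash
#!/usr/bin/env bash
snapshots=(1 2 3 4)
# the format is repeated until all arguments are consumed
printf '%s-%s\n' "${snapshots[@]}"   # 1-2 then 3-4
```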
| Why does printing an array with @ using printf in bash only print the first element? |
1,514,163,228,000 |
Consider the following example, it seems it's working fine with the index 0:
$ a1=(1 2 3)
$ a2=(a b c)
$ for x in a1 a2; do echo "${!x}"; done
1
a
$ for x in a1 a2; do echo "${!x[0]}"; done
1
a
However with the index 1 it prints nothing:
$ for x in a1 a2; do echo "${!x[1]}"; done
Arrays just by themselves are fine:
$ echo "${a1[1]} ${a2[1]}"
2 b
Edit - A real life use case based on ilkkachu answer
SHIBB=(https://shibboleth.net/downloads/service-provider/3.0.2/ shibboleth-sp-3.0.2 .tar.gz)
XERCES=(http://apache.mirrors.nublue.co.uk//xerces/c/3/sources/ xerces-c-3.2.1 .tar.gz)
XMLSEC=(http://apache.mirror.anlx.net/santuario/c-library/ xml-security-c-2.0.1 .tar.gz)
XMLTOOL=(http://shibboleth.net/downloads/c++-opensaml/latest/ xmltooling-3.0.2 .tar.gz)
OPENSAML=(http://shibboleth.net/downloads/c++-opensaml/latest/ opensaml-3.0.0 .tar.gz)
typeset -n x
for x in XERCES XMLSEC XMLTOOL OPENSAML SHIBB; do
url="${x[0]}" app="${x[1]}" ext="${x[2]}"
[ -f "./${app}${ext}" ] || wget "${url}${app}${ext}"
tar -xf "./${app}${ext}"
cd "./${app}" && ./configure && make -j2 && make install && ldconfig
cd ..
done
|
"${!x[1]}" is an indirect reference using the element at index 1 of the array x.
$ foo=123; bar=456; x=(foo bar); echo "${!x[1]}"
456
In current versions of Bash (4.3 and above), you can use namerefs to get what you want:
$ a=(00 11 22 33 44)
$ typeset -n y=a
$ echo "${y[3]}"
33
that is, with the nameref set up, "${y[3]}" is a reference to element 3 in the array named by y.
To loop over the arrays as you do in your question, you'd simply make x a nameref.
a1=(1 2 3); a2=(a b c)
typeset -n x;
for x in a1 a2; do
echo "${x[1]}"
done
The assignments done by the for loop change the value of x itself (changing what the reference points to). A regular assignment (x=123, or x[1]=123) changes the variable currently referenced by x. So this would change both a1[1] and a2[1] to foo:
typeset -n x;
for x in a1 a2; do
x[1]=foo
done
The reason "${!x[0]}" seems to work is that x and x[0] are equivalent. If you had echo "${x[0]}" inside your loop (without the bang), you'd get a1, a2, the same as with echo "$x".
| How to access further members of an array when using bash variable indirection? |
1,514,163,228,000 |
I was trying to create a bash "multidimensional" array, I saw the ideas on using associative arrays, but I thought the simplest way to do it would be the following:
for i in 0 1 2
do
for j in 0 1 2
do
a[$i$j]="something"
done
done
It is easy to set and get values, but the jumps the indexes might make it terrible for arrays a bit bigger if bash allocates space for the elements from index 00 to 22 sequentially (I mean allocating positions {0,1,2,3,4,...,21,22}), instead of just the elements which were actually set: {00,01,02,10,11,...,21,22}.
This made me wonder, what happens when we start a bash array with an index 'n'? Does it allocate enough space for indexes 0 to n, or does it allocate the nth element sort of individually?
|
Array indices in bash like in ksh (whose array design bash copied) can be any arithmetic expression.
In a[$i$j]="something", the $i and $j variables are expanded, so with i=0 j=1, that becomes a[01]="something", 01 as an arithmetic expression means octal number 1 in bash. With i=0 j=10, that would be a[010]="something" same as a[8]="something". And you'd get a[110]="something" for both i=11 j=0 and i=1 j=10.
It should be obvious by now that it's not what you want.
Instead, you'd do like you do in C for bidimensional arrays (matrices):
matrix_size=3
for (( i = 0; i < matrix_size; i++ )) {
for (( j = 0; j < matrix_size; j++ )) {
a[i * matrix_size + j]="something"
}
}
(the for (( ...; ...; ...)) C-like construct copied from ksh93).
Or switch to ksh93 which has multidimensional array support:
for (( i = 0; i < 3; i++ )) {
for (( j = 0; j < 3; j++ )) {
a[i][j]="something"
}
}
It's also somewhat possible to implement multidimensional arrays using associative arrays whose keys are just strings:
typeset -A a
for (( i = 0; i < 3; i++ )) {
for (( j = 0; j < 3; j++ )) {
a[$i,$j]="something"
}
}
The resulting variable you get in all three as reported by typeset -p:
declare -a a=([0]="something" [1]="something" [2]="something" [3]="something" [4]="something" [5]="something" [6]="something" [7]="something" [8]="something")
typeset -a a=((something something something) (something something something) (something something something) )
declare -A a=([0,2]="something" [0,1]="something" [0,0]="something" [2,1]="something" [2,0]="something" [2,2]="something" [1,2]="something" [1,0]="something" [1,1]="something" )
Now to answer the question in the subject, in bash like in ksh, plain arrays are sparse, which means you can have a[n] defined without a[0] to a[n-1] being defined, so in that sense they are not like the arrays of C or most other languages or shells.
Initially in ksh, array indices were limited to 4095, so you could have matrices at most 64x64 large, that limit has been raised to 4,194,303 since. In ksh93, I see doing a[4194303]=1 does allocate over 32MiB of memory I guess to hold 4194304 64bit pointers and some overhead, while that doesn't seem to happen in bash, where array indices can go up to 9223372036854775807 (at least on GNU/Linux amd64 here) without allocating more memory than that is needed to store the elements that are actually set.
In all other shells with array support ((t)csh, zsh, rc, es, fish...), array indices start at 1 instead of 0 and arrays are normal non-sparse arrays where you can't have a[2] set without a[1] also set even if that's to the empty string.
Like in most programming languages, associative arrays in bash are implemented as hash tables with no notion of order or rank (you'll notice typeset -p shows them in seemingly random order above).
For more details on array design in different shells, see this answer to Test for array support by shell.
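The sparseness is easy to verify in bash itself; a quick sketch:

```bash
#!/usr/bin/env bash
unset a
a[1000000]=x        # huge index; intermediate elements are not created
echo "${#a[@]}"     # 1  (only one element exists)
echo "${!a[@]}"     # 1000000  (the only index)
```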
| What happens if I start a bash array with a big index? |
1,514,163,228,000 |
I have two different arrays with the same length:
s=(c d e f g a b c)
f=(1 2 3 1 2 3 4 5)
how can I mix/merge/combine this two arrays, so I would get this output:
c1 d2 e3 f1 g2 a3 b4 c5
|
Something like this: build a counter from 0 to array length - 1, then combine the corresponding elements from the two arrays. Free-hand:
#!/bin/bash
...
len=${#s[@]}
for (( idx = 0; idx < len; idx++ ));
do
echo "${s[idx]}${f[idx]}"
done
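Filled in with the arrays from the question, a runnable version of that sketch, collecting the results into a third array:

```bash
#!/usr/bin/env bash
s=(c d e f g a b c)
f=(1 2 3 1 2 3 4 5)
out=()
for (( idx = 0; idx < ${#s[@]}; idx++ )); do
  out+=( "${s[idx]}${f[idx]}" )   # pair up same-index elements
done
echo "${out[@]}"   # c1 d2 e3 f1 g2 a3 b4 c5
```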
| Bash - mix/merge/combine two different arrays with same length |
1,514,163,228,000 |
I am working with a server running Ubuntu 18.04 LTS and I'm trying to automate the backup of multiple virtual machines.
I have the VM names in an array and then a for loop to shut down, backup and then restart each VM. I ran this over the weekend, came in today and all the commands seem to have run, but only for the first index of the array and the script doesn't exit.
Here is my script.
#!/bin/bash
######################
#
# Shut down and back up select VMs
#
#####################
#make new date formatted directory
sudo mkdir /mnt/md1/VirtualMachines/bak/$(date +%Y_%m_%d) |& tee -a /mnt/md1/Scripts/log_vboxBak_$(date +%Y_%m_%d).txt;
sudo chown bvserv /mnt/md1/VirtualMachines/bak/$(date +%Y_%m_%d) |& tee -a /mnt/md1/Scripts/log_vboxBak_$(date +%Y_%m_%d).txt;
#Array of VMs
declare -a VM=("Win-10-POS-1" "Win-10-POS-2" "Desktop_Neil")
#loop through array of VMs
for i in "${VM[@]}"
do
# Shut down virtual machine
sudo -u bvserv VBoxManage controlvm "$i" poweroff |& tee -a /mnt/md1/Scripts/log_vboxBak_$(date +%Y_%m_%d).txt;
# Export virtual machine to dated file
sudo -u bvserv VBoxManage export "$i" -o /mnt/md1/VirtualMachines/bak/$(date +%Y_%m_%d)/"$i".ova |& tee -a /mnt/md1/Scripts/log_vboxBak_$(date +%Y_%m_%d).txt;
# Restart virtual machine
sudo -u bvserv VBoxHeadless --startvm "$i" |& tee -a /mnt/md1/Scripts/log_vboxBak_$(date +%Y_%m_%d).txt
done
|
The issue turns out to be that the VBoxHeadless command starts each VM as a foreground process, so execution of the loop does not continue to the next VM until the previous one exits.
For the restart portion of the script I had to use VBoxManage instead of VBoxHeadless to start the machines. After making that change everything is working. Here is the updated script now loading an external array for reference.
#!/bin/bash
######################
#
# Shut down and back up select VMs
#
#####################
#make new date formatted directory
sudo mkdir /mnt/md1/VirtualMachines/bak/$(date +%Y_%m_%d) |& tee -a /mnt/md1/Scripts/log_vboxBak_$(date +%Y_%m_%d).txt;
sudo chown bvserv /mnt/md1/VirtualMachines/bak/$(date +%Y_%m_%d) |& tee -a /mnt/md1/Scripts/log_vboxBak_$(date +%Y_%m_%d).txt;
#Read array of virtual machines from file
readarray -t VM < /mnt/md1/VirtualMachines/auto-start_list.txt
#loop through array of VMs
for i in "${VM[@]}"
do
# Shut down virtual machine
sudo -u bvserv VBoxManage controlvm "$i" poweroff |& tee -a /mnt/md1/Scripts/log_vboxBak_$(date +%Y_%m_%d).txt;
# Export virtual machine to dated file
sudo -u bvserv VBoxManage export "$i" -o /mnt/md1/VirtualMachines/bak/$(date +%Y_%m_%d)/"$i".ova |& tee -a /mnt/md1/Scripts/log_vboxBak_$(date +%Y_%m_%d).txt;
# Restart virtual machine
sudo -u bvserv VBoxManage startvm "$i" --type headless |& tee -a /mnt/md1/Scripts/log_vboxBak_$(date +%Y_%m_%d).txt
#echo "$i"
done
Reference:
How can I send VBoxHeadless to the background so I can close the Terminal?
| Bash array only executes first index |
1,514,163,228,000 |
I have a array:
ARRAY=(12.5 6.2)
I wish to return the maximum value in ARRAY; the output here should be 12.5
Anyone can share me ideas?
I have try this:
max=0
for v in ${ARRAY[@]}; do
if (( $v > $max )); then max=$v; fi;
done
echo $max
But it return me:
((: 12.5 > 0 : syntax error: invalid arithmetic operator (error token is ".5 > 0 ")
((: 6.2 > 0 : syntax error: invalid arithmetic operator (error token is ".2 > 0 ")
|
printf '%s\n' "${ARRAY[@]}" |
awk '$1 > m || NR == 1 { m = $1 } END { print m }'
Since the bash shell does not do floating point arithmetics, it's easier to compare floating point numbers in another language. Here I'm using awk to find the maximum of all the elements in the ARRAY array.
The printf command will output each element of the array on its own line and the awk code will update its m value to be the maximum of the values seen so far. At the end, the m value is printed.
The test on NR == 1 will be true for the first line read by the awk program and would initialise the value of m to the first value of the array (something that you fail to do, which means that your code would have returned 0 for an array with all negative numbers, had it worked).
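For completeness, a sort-based alternative sketch (sort -g is GNU general-numeric ordering; -n would also do for values like these):

```bash
#!/usr/bin/env bash
ARRAY=(12.5 6.2)
max=$(printf '%s\n' "${ARRAY[@]}" | sort -g | tail -n 1)
echo "$max"   # 12.5
```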
| Shell script-How to return maximum value in array? |
1,514,163,228,000 |
The following code is meant to look for subdirectories in ~/Downloads. I run it with . ./script.sh. It will find them even when the user submits an incomplete name.
#!/usr/bin/bash
echo -e "\nGive me the name of a nonexistent directory and I will look for it in ~/Downloads?\n"
read word
TOGOs=("$(find ~/Downloads -maxdepth 1 -iname "*$word*" -type d -execdir echo {} + | sed 's;./;/home/smith/Downloads/;g')"
"")
for togo in ${TOGOs[@]}
do
if [[ $togo != "" ]]; then
echo $togo
export togo && cd $togo && return 0
else
echo "Haven't found it in ~/Downloads ." && cd ~/Downloads #This line does not work
fi
done
The if part works as expected - when I give it a name/part of the name of a subdirectory of ~/Downloads/, but the else part of the block never gets executed when I give it a nonexisting directory. I can get the else part executed when I get rid of the loop, like so:
#!/usr/bin/bash
echo -e "\nGive me the name of a nonexistent directory and I will look for it in ~/Downloads?\n"
read word
TOGO=$(find ~/Downloads -maxdepth 1 -iname "*$word*" -type d -execdir echo {} + | sed 's;./;/home/smith/Downloads/;g')
if [[ $TOGO != "" ]]; then
echo $TOGO
export TOGO
cd $TOGO && return 0
else
echo "Haven't found it in ~/Downloads." && cd ~/Downloads
fi
Why is it that the else arm gets executed only when I get rid of the loop? How might I get
my code working while preserving the loop?
|
${TOGOs[@]} means to take the elements of the array, break them into separate words at whitespace (assuming the default IFS), and interpret each word as a glob pattern. It's the same as $var for a scalar variable, except that it lists all the elements in turn.
If there's an empty element in the array, the splitting step turns it into a list of 0 words, so it's effectively removed.
As always: use double quotes around variable and command substitutions unless you know that you need to leave them out. To list the elements of an array, the correct syntax is "${TOGOs[@]}". The array access “wins” over the double quotes: each element of the array is placed into a separate word.
Using the correct syntax for array listing won't help you, though, because the way you construct the array doesn't make sense. You're putting the whole output of find into a single array element. You can't parse the output of find that way: there's no way to distinguish between a newline that's part of a file name and a newline that find uses to distinguish file names.
Instead of parsing the output of find, do the processing in bash, or use find -exec …. Bash has recursive globbing (enabled by shopt -s globstar) if you need it, but since you're using find -maxdepth 1 you actually don't need it, and find isn't really doing anything useful.
shopt -s dotglob nullglob
TOGOs=(~/Downloads/*$word*/)
if [[ ${#TOGOs[@]} -eq 0 ]]; then
echo "Haven't found it in ~/Downloads."
fi
for togo in "${TOGOs[@]}"; do
…
done
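A reduced, self-contained sketch of the glob-into-array pattern, using a temporary directory so it can be run anywhere (the directory names are made up for illustration):

```bash
#!/usr/bin/env bash
shopt -s dotglob nullglob
tmp=$(mktemp -d)                 # hypothetical stand-in for ~/Downloads
mkdir "$tmp/alpha" "$tmp/beta"
word=alp
TOGOs=( "$tmp"/*"$word"*/ )      # directories matching the partial name
echo "${#TOGOs[@]}"              # 1  (only alpha matches)
rm -rf "$tmp"
```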
| Why isn't the `else` arm executed in this Bash script (for loop through an array)? |
1,514,163,228,000 |
I'm trying to build a basic REPL in bash.
The script dynamically populates a list of files in a directory for the user to run.
File space:
|
|\ scripts/
|| script1.sh
|| script2.sh
|
\ shell/
| shell.bashrc
| shell.desktop
From shell.bashrc, I'm using the following command to get an array of filenames:
readarray -d " " filenames < <(find ../bin -type f -executable)
I can print the array filenames just fine and it contains a space separated string that holds "script1.sh script2.sh" as expected.
But when I try to access the first element of the array with echo ${filenames[0]} it prints every element. Any other index besides 0 returns an empty string.
My bash version is 5.0.17, and the first line of the file is #!/bin/bash
I moved to using "readarray" after trying the following led to similar results:
filenames=($(find "../bin" -type f -executable))
Edit: Found a dumb workaround and would still like to know where the original post is messing up.
Workaround:
readarray -d " " filenames < <(find ../bin -type f -executable)
arr=($filenames)
echo ${arr[1]}
Which prints the 6th element of the array as expected.
|
By default, find outputs results separated by newlines. By setting -d " " in the mapfile/readarray command, you are causing (assuming none of the names contains a space character) all of the results to be concatenated into a single string - newlines and all. When you then echo ${filenames[0]} (with unquoted variable expansion ${filenames[0]} and the default space-tab-newline value of IFS), the shell splits on newline, and echo reassembles the result using spaces1.
Instead use
readarray -t filenames < <(find ../bin -type f -executable)
which will parse the input as newline separated data, but strip the trailing newlines from the stored elements. Or - better - if your bash version supports it,
readarray -t -d '' filenames < <(find ../bin -type f -executable -print0)
which uses null bytes instead of newlines (making it safe for all legal filenames, even those that contain newlines).
1 See When is double-quoting necessary?
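The failure mode is easy to reproduce without find, using a here-string (readarray -d needs bash 4.4 or later):

```bash
#!/usr/bin/env bash
input=$'script1.sh\nscript2.sh'      # stand-in for the output of find
readarray -d " " bad <<< "$input"    # no space in the input: one big element
readarray -t good <<< "$input"       # split on newlines, newlines stripped
echo "${#bad[@]} ${#good[@]}"        # 1 2
```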
| Can't access elements of an array built from readarray |
1,514,163,228,000 |
I am trying to run the command recon-all (FreeSurfer preprocessing) with GNU parallel. I have a bash array holding a list of patients, and I want to run 8 patients simultaneously:
root@4d8896dfec6c:/tmp# echo ${ids[@]}
G001 G002 G003 G004 G005 G006 G007 G008
and try to run with command:
echo ${ids[@]} | parallel --jobs 28 recon-all -s {.} -all -qcache
It doesn't work; I suppose I need to have the bash array in an ls-like representation, something like:
ls ${ids[@]} | parallel --jobs 28 recon-all -s {.} -all -qcache
How can i do that?
|
The problem is that parallel wants the input to be separated by newlines but when you use echo it is separated by spaces. In order to print some words separated by newlines you can try one of these
echo one two three | tr ' ' '\n' # in case your input can not be controlled by you
printf '%s\n' one two three # if you can control the words eg if you have an array
So you should probably do it like this:
printf '%s\n' "${ids[@]}" | parallel --jobs 28 recon-all -s {.} -all -qcache
Remember to quote your array substitutions and variables in general in order to prevent accidental word splitting and other side effects if your values contain special characters.
| gnu parallel with bash array |
1,514,163,228,000 |
Perhaps this is a stupid question but two hours on Google hasn't turned up anything on point.
Simply, does a difference exist in Bash between:
X="
a
b
c
"
and
X=(
a
b
c
)
The former conforms with the definition of a variable, the latter, the definition of an array.
An array is a multi-element variable, so is this to say that the former also is an array for all purposes?
If the former is an array, is the only difference in operation as between (a) the double quotes and (b) the parentheses, the operation of quoting rules on the array's elements?
Many thanks for any insights.
|
A string of newline-delimited substrings is not the same thing as an array of strings. One is a string; the other contains strings.
The fact that your string is divided into lines by the inclusion of newline characters in the string has no particular significance to the storage of the string. The shell can not index it on lines, and a single line can't contain an embedded newline character without encoding it somehow.
The array is an ordered set of separate strings. Each string is immediately accessible via an index into the array. A single array element may contain any standard string, with or without newlines or other delimiting characters (except the nul character in the bash shell). However, an array element can't be another array, as bash does not support multi-dimensional arrays.
string1='Hello World'
string2="'Twas brillig, and the slithy toves
Did gyre and gimble in the wabe:
All mimsy were the borogoves,
And the mome raths outgrabe."
array=( "$string1" "$string2" )
printf '%s\n' "${array[1]}"
The above script fragment prints the first verse from the poem Jabberwocky by Lewis Carroll. It does not print Hello World as we choose to output the array's second element, not the first. The second element is a single string made up of characters. Some of those characters happen to be newlines and blanks, but this is done only for presentation purposes.
To output only a single line, or any other substring, from the poem in the second array element, we need to use some utility to parse the string. Extracting individual newline-delimited substrings from a string does not have anything to do with the concept of arrays in the shell.
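One such utility is mapfile itself, reading back from a here-string; a sketch that splits the second array element into its lines:

```bash
#!/usr/bin/env bash
string1='Hello World'
string2="'Twas brillig, and the slithy toves
Did gyre and gimble in the wabe:
All mimsy were the borogoves,
And the mome raths outgrabe."
array=( "$string1" "$string2" )
mapfile -t lines <<< "${array[1]}"   # split the element on newlines
printf '%s\n' "${lines[1]}"          # second line of the verse
```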
| Array Declaration: Double Quotes & Parentheses |
1,514,163,228,000 |
I am trying to add an element to a bash array. I looked at this question and tried to follow its advice.
This is my code:
selected_projects=()
for project_num in ${project_numbers[@]}; do
selected_project=${projects[$project_num]}
echo "selected project: $project_num $selected_project"
$selected_projects+="$selected_project"
done
When I do this, I get an error:
line 88: +=someProject: command not found
I tried many different alternatives to that line with lots of parenthesis and dollar signs, but I cannot figure out what I'm doing wrong and what it should be. Any ideas?
Thanks!
|
Use
selected_projects+="$selected_project"
instead of
$selected_projects+="$selected_project"
A variable assignment in bash never has a $ at the beginning of the variable name.
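Note that with an array on the left-hand side, += still has two distinct behaviours; a sketch contrasting string append (which modifies element 0) with the parenthesised form that adds a new element (the latter is usually what a growing list wants):

```bash
#!/usr/bin/env bash
a=(one)
a+="two"        # string append: concatenates onto a[0]
b=(one)
b+=("two")      # element append: adds a new array element
declare -p a b  # a=([0]="onetwo")  b=([0]="one" [1]="two")
```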
| Why isn't $ARRAY+=$var working for me? |
1,514,163,228,000 |
we want to set variable that includes words as array
folder_mount_point_list="sdb sdc sdd sde sdf sdg"
ARRAY=( $folder_mount_point_list )
but when we want to print the first array value we get all words
echo ${ARRAY[0]}
sdb sdc sdd sde sdf sdg
expected results
echo ${ARRAY[0]}
sdb
echo ${ARRAY[1]}
sdc
how to convert variable to array?
|
It seems that you have (maybe unintentionally) changed the important shell variable IFS in the script. Restoring it to its usual value or unsetting it (i.e. activating its default value) solves the problem:
IFS=$' \t\n'
or
unset IFS
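A sketch reproducing the symptom and the fix: with IFS changed to something containing no space, the unquoted expansion is not split; restoring IFS makes the assignment behave as expected:

```bash
#!/usr/bin/env bash
folder_mount_point_list="sdb sdc sdd"
IFS=':'                                   # simulate the accidental change
BROKEN=( $folder_mount_point_list )       # not split: one element
IFS=$' \t\n'                              # restore the default value
FIXED=( $folder_mount_point_list )        # split on spaces: three elements
echo "${#BROKEN[@]} ${#FIXED[@]}"         # 1 3
```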
| linux + how to convert variable to array |
1,514,163,228,000 |
Is there a case where mapfile has benefits over arr+=(input)?
Simple examples
mapfile array name, arr:
mkdir {1,2,3}
mapfile -t arr < <(ls)
declare -p arr
output:
declare -a arr=([0]="1" [1]="2" [2]="3")
Edit:
changed title for below; the body had y as the array name, but the title had arr as the name, which this could lead to confusion.
y+=(input)
IFS=$'\n'
y+=($(ls))
declare -p y
output:
declare -a y=([0]="1" [1]="2" [2]="3")
An advantage to mapfile is you don't have to worry about word splitting I think.
For the other way you can avoid word splitting by setting IFS=$'\n' although for this example it's nothing to worry about.
The second example just seems easier to write, anything I'm missing out on?
|
They're not the same thing at all, even after IFS=$'\n'.
In bash specifically (though that syntax was borrowed from zsh¹):
arr=( $(cmd) )
(arr+=( $(cmd) ) would be used to append elements to the array; so would be compared with keys=( -1 "${!arr[@]}" ); readarray -tO "$(( ${keys[@]: -1} + 1))" arr < <(cmd)²).
Does:
Run cmd in a subshell with its stdout open on the writing end a pipe.
Simultaneously, the parent shell process reads from the other end of the pipe and:
removes the NUL characters and trailing newline characters
splits the resulting string based on the contents of the $IFS special variable. For those characters in $IFS that are whitespace characters such as newline, the behaviour is more complex in that:
leading and trailing ones are removed (in the case of newline, they've been removed by command substitution already as seen above)
sequences of one or more are treated as one separator. As an example, the output of printf '\n\n\na\n\n\nb\n\n\n' is split into two elements only: a and b.
each of these words is then subject to filename generation aka globbing, whose behaviour is affected by a number of options including noglob, nullglob, failglob, extglob, globasciiranges, globstar, nocaseglob. That applies to those words that contain characters such as *, ?, [, and with some bash versions \, and more if extglob is enabled.
Then the resulting words are assigned as elements to the $arr array.
Example:
bash-5.1$ touch x '\x' '?x' aX $'foo\n\n\n\n*'
bash-5.1$ IFS=$'\n'
bash-5.1$ ls | cat
aX
foo
*
?x
\x
x
bash-5.1$ arr=( $(ls) )
bash-5.1$ typeset -p arr
declare -a arr=([0]="aX" [1]="foo" [2]="aX" [3]=$'foo\n\n\n\n*' [4]="?x" [5]="\\x" [6]="x" [7]="?x" [8]="\\x" [9]="\\x" [10]="x")
As you can see, the $'foo\n\n\n\n*' file was split into foo and * and * was expanded to the list of files in the current working directory which explains why we get both foo and $'foo\n\n\n\n*', same for ?x which explains why we get \x (shown as "\\x") 3 times as there's the \x line in the output of ls and it's matched by both * and ?x.
With bash 5.0, we get:
bash-5.0$ arr=( $(ls) )
bash-5.0$ typeset -p arr
declare -a arr=([0]="aX" [1]="foo" [2]="aX" [3]=$'foo\n\n\n\n*' [4]="?x" [5]="\\x" [6]="x" [7]="?x" [8]="\\x" [9]="x" [10]="x")
With \x only twice but x three times as in that version, backslash was a globbing operator even when not followed by a globbing operator so \x as a glob matches x.
After shopt nocaseglob, we get:
bash-5.1$ shopt -s nocaseglob
bash-5.1$ arr=( $(ls) )
bash-5.1$ typeset -p arr
declare -a arr=([0]="aX" [1]="foo" [2]="aX" [3]=$'foo\n\n\n\n*' [4]="?x" [5]="\\x" [6]="x" [7]="aX" [8]="?x" [9]="\\x" [10]="\\x" [11]="x")
With aX shown 3 times as it matches ?x as well.
After shopt -s failglob:
bash-5.0$ shopt -s failglob
bash-5.0$ arr=( $(printf '\\z\n') )
bash: no match: \z
bash-5.0$ arr=( $(printf 'WTF\n?') )
bash: no match: WTF?
And arr=( $(echo '/*/*/*/*/../../../../*/*/*/*/../../../../*/*/*/*') )
Runs out of memory after having made your system unusable for several minutes.
So, to sum up, IFS=$'\n'; arr=( $(cmd) ) doesn't store the lines of the output of cmd in the array, but the filenames resulting from the expansion of the non-empty lines of the output of cmd which are treated as glob patterns.
With mapfile or its less misleading readarray alias:
readarray -t arr < <(cmd)
as above runs cmd in a subshell with its stdout open on the writing end of a pipe.
the <(...) is expanded to something like /dev/fd/63 or /proc/self/fd/63 where 63 is a file descriptor of the parent shell open on the reading end of that pipe.
with the < redirection short for 0<, that /dev/fd/63 is opened for reading on fd 0, which means the stdin of readarray will also be the reading end of that pipe.
readarray reads each line from that pipe (simultaneously from cmd writing to it), discards the line delimiter (-t), and stores it (up to the first NUL if it contains any, at least in current versions of bash) in a new element of the $arr.
So in the end $arr, assuming cmd outputs no NUL, will contain the contents of each line of the output of cmd, regardless of whether they're empty and of whether they contain glob characters.
With the example above:
bash-5.1$ readarray -t arr < <(ls)
bash-5.1$ typeset -p arr
declare -a arr=([0]="aX" [1]="foo" [2]="" [3]="" [4]="" [5]="*" [6]="?x" [7]="\\x" [8]="x")
That's consistent with what we saw in the output of ls | cat earlier, but that's still wrong if the intention was to get the list of files in the current working directory. The output of ls cannot be post-processed unless you use some extensions of the GNU implementation of ls such as --quoting-style=shell-always or the --zero of recent versions (9.0 or above):
bash-5.2$ readarray -td '' arr < <(ls --zero)
bash-5.2$ typeset -p arr
declare -a arr=([0]="aX" [1]=$'foo\n\n\n\n*' [2]="?x" [3]="\\x" [4]="x")
This time, readarray stores the contents of the NUL-delimited records into $arr. IFS=$'\0' can't be used in bash as bash can't store NULs in its variables.
Or:
bash-5.1$ eval "arr=( $(ls --quoting-style=shell-always) )"
bash-5.1$ typeset -p arr
declare -a arr=([0]="aX" [1]=$'foo\n\n\n\n*' [2]="?x" [3]="\\x" [4]="x")
In any case, the correct way to get the list of files in the current working directory into an array would be with:
shopt -s nullglob
arr=( * )
You'd only resort to ls --zero if you wanted for instance the list to be sorted by size or modification time which bash globs (contrary to zsh's) cannot do.
As in:
zsh
recent GNU bash + GNU coreutils
new_to_old=( *.txt(Nom) )
readarray -td '' new_to_old < <(ls -td --zero -- *.txt)
four_largest=( *.txt(NOL[1,4]) )
readarray -td '' four_largest < <(ls -tdrS --zero -- *.txt | head -zn4)
Another difference between a=($(cmd)) and readarray < <(cmd) is the exit status, which in the former is that of cmd and in the latter that of readarray. With recent versions of bash, you can get the exit status of cmd in the latter with wait "$!"; cmd_status=$?.
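A sketch of that difference (using exit 3 as a stand-in for any failing cmd; note that waiting for the process substitution via "$!" requires a reasonably recent bash):

```shell
# Former: $? reflects the command substitution's exit status
a=( $(exit 3) )
echo "$?"    # 3

# Latter: $? is readarray's status, not cmd's
readarray -t b < <(exit 3)
echo "$?"    # 0
wait "$!"    # wait for the process substitution, recover its status
echo "$?"    # 3
```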
¹ the arr=( ... ) syntax comes from zsh (bash didn't have arrays until 2.0 in 1996), but note that in zsh, command substitution, while it's also stripping trailing newlines and subject to $IFS-stripping, does not discard NULs (NUL is even in the default value of $IFS there) and is not subject to globbing like in other Bourne-like shells, contributing to making it a safer shell in general.
² readarray aka mapfile doesn't have an append mode, but in recent versions you can tell it the index of the first element where to start storing the elements with -O as shown here. To find out the index of the last element in bash (where arrays are sparse like in ksh!), it's awfully difficult. Here to append the lines of the output of cmd to $arr, instead of that very convoluted code, you might as well read those lines into a temporary array with readarray -t tmp < <(cmd) and append the elements to $arr with arr+=( "${tmp[@]}" ). Also note that if the arr variable was declared as scalar or assoc, the behaviour will vary between those approaches.
| Creating and appending to an array, mapfile vs arr+=(input) same thing or am I missing something? |
1,514,163,228,000 |
I have a function (not created by me) that outputs a series of strings inside of quotes:
command <args>
“Foo”
“FooBar”
“Foo Bar”
“FooBar/Foo Bar”
When I try to assign it to an array (Bash; BSD/Mac), instead of 4 elements, I get 7. For example, for ${array[2]} I should get “Foo Bar”, but instead, I get ”Foo with the next element being Bar”. Any element without the space works correctly (i.e. ${array[0]} = “Foo”)
How can I assign each of these elements, between the quotes and including the space, to an array whose elements are separated by spaces(?) themselves?
Right now, I am thinking of using sed/awk to “strip” out the quotes, but I think there should be a better and more efficient way.
Currently, I am assigning the output of the command (looks exactly like the output above including the quotes) to a temporary variable then assigning it to an array.
_tempvar=“$(command <args>)”
declare -a _array=(${_tempvar})
|
You get 7 elements because word splitting is occurring, caused by the spaces.
Set IFS=$'\n' before adding the strings to the array then you'll get 4 elements but with double quotes.
Example:
IFS=$'\n'
arr=($(command <args>))
If you want 4 elements without quotes do this:
IFS=$'\n'
arr=($(command <args> | sed s'#"##'g))
Full example:
IFS=$'\n'
# tst.txt has your strings:
arr=($(cat tst.txt | sed s'#"##'g))
declare -p arr
Output:
declare -a arr=([0]="Foo" [1]="FooBar" [2]="Foo Bar" [3]="FooBar/Foo Bar")
| Bash: converting a string with both spaces and quotes to an array |
1,514,163,228,000 |
I'm having a bit of difficulty understanding parallel procedures. Atm I'm trying to mass wipe hard drives, so have created a script, however it won't run in parallel.
for i in "${!wipe[@]}"; do
dd if=/dev/zero of=/dev/${wipe[$i]} &
wait
The dd zeros the disks but it does this one after the other so when doing 8 disks, can be very time consuming.
Thanks
|
The script as given shouldn't run at all, because you are missing the done on the for loop. This must be an excerpt, and you've left out important parts.
Assuming the missing done is after this snippet, the wait is inside the for loop, so you start the dd in the background and then wait for it to finish before going to the next iteration.
Basically, your indentation doesn't match the code shown, and this isn't Python. Unlike Python, bash ignores indentation. I'm sure the indentation matches what you want, but it's meaningless without the done that belongs before the wait.
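For reference, a runnable sketch of the intended structure: every dd is backgrounded inside the loop, and a single wait comes after done. It writes to files in a temporary directory instead of real devices (the names in wipe are stand-ins) so it is safe to try:

```shell
tmpdir=$(mktemp -d)
wipe=( disk1 disk2 disk3 )   # stand-ins for sdb, sdc, ...

for i in "${!wipe[@]}"; do
    # in the real script this would be: dd if=/dev/zero of=/dev/${wipe[i]} &
    dd if=/dev/zero of="$tmpdir/${wipe[i]}" bs=1M count=1 2>/dev/null &
done
wait   # one wait, after the loop, collects all background dd jobs
```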
| Getting an array into a parallel bash script |
1,514,163,228,000 |
For example, in the snippet below, (how) is it possible to make array2 identical to array1 while still using a str variable?
~$ { str='a "b c" d'; array1=(a "b c" d); array2=( $str )
echo "${array1[1]} ${array1[2]}"
echo "${array2[1]} ${array2[2]}"; }
b c d
"b c"
|
When you run str='a "b c" d', the quotes are taken literally and have no special meaning afterwards; they are just characters like any other and do not prevent word splitting later.
When assigning the array using quotes, on the other hand, the quotes are evaluated by your shell before the assignment and prevent word splitting:
array1=(a "b c" d);
Btw: Using printf is a bit easier to showcase the issue than setting up an array and using a loop to echo the elements:
printf '%s\n' $str
You might use eval as a workaround, but I would not recommend doing that for any input you cannot 100% control or trust (user input, webscraping stuff, etc.):
eval "printf '%s\n' $str"
#or
eval "array2=( $str )"
Anyways, from your example, I see no reason to use an intermediate variable, just use arrays directly.
| Why can't I convert a string variable into an array when some items include spaces? |
1,514,163,228,000 |
I'm trying to generate a script that ftps some files to a server using lftp. When I run these commands in a shell:
DBNAME=TESTDB
ls -t /data*/${DBNAME,,}Backup/$DBNAME.0.db21.DBPART000.`date +%Y%m%d`*
I get 2 paths:
/data4/testdbBackup/TESTDB.0.db1.DBPART000.20191007010004.001
/data5/testdbBackup/TESTDB.0.db1.DBPART000.20191007010004.002
But when I use this command to create an array and loop through it, I only get the first element. Here is the script:
echo "lftp -u $FTPUSER,$FTPPASSWD $FTPSRV <<end_script
mkdir BackUp
cd BackUp
mkdir $CURRENTDATE
cd $CURRENTDATE
mkdir $IP
cd $IP " >> $FTPFILES
for DBNAME in "${DBNAME_ARRAY[@]}"
do
BACKUP_FILE_COUNT=$(ls -t /data*/${DBNAME,,}Backup/$DBNAME.0.db21.DBPART000.`date +%Y%m%d`*|wc -l)
COUNTER=($(echo $COUNTER + $BACKUP_FILE_COUNT | bc))
mapfile -t BACKUP_FILE_ARRAY < <(ls -t /data*/${DBNAME,,}Backup/$DBNAME.0.db21.DBPART000.`date +%Y%m%d`*)
for BACKUP_FILE in "${BACKUP_FILE_ARRAY=[@]}"
do
echo "lcd $(dirname $BACKUP_FILE)" >> $FTPFILES
echo "put $(basename $BACKUP_FILE)" >> $FTPFILES
done
done
echo "quit
end_script
exit 0 " >> $FTPFILES
The output of this script is:
lftp -u someuser,somepassword 1.1.1.1 <<end_script
mkdir BackUp
cd BackUp
mkdir 19-10-07
cd 19-10-07
mkdir 192.168.22.22
cd 192.168.22.22
lcd /data4/testdbBackup
put TETSTDB.0.db21.DBPART000.20191007010004.001
quit
end_script
exit 0
In the part that changes directories, I expect this:
lcd /data4/testdbBackup
put TETSTDB.0.db21.DBPART000.20191007010004.001
lcd /data5/testdbBackup
put TETSTDB.0.db21.DBPART000.20191007010004.002
I also added an echo "${BACKUP_FILE_ARRAY=[@]}" to my script and it only has one element.
I had this problem before in this question and I used the solution in many scripts and they worked perfectly. What am I missing here?
|
"${BACKUP_FILE_ARRAY=[@]}" has one too many = in it.
Also, to set the array, don't use mapfile. Just use the shell pattern:
BACKUP_FILE_ARRAY=( /data*/"${DBNAME,,}"Backup/"$DBNAME.0.db21.DBPART000.$(date +%Y%m%d)"* )
(if $CURRENTDATE is set with CURRENTDATE=$(date +%Y%m%d) somewhere at the top of the script, then use that variable instead of the command substitution with date, so that the script is not confused if it happens to run across midnight).
Using the output of ls is a bit problematic in the general case, as it disqualifies the script from working with some filenames. It also makes the script difficult to read.
To count the number of files that match that pattern, first create the array, then use
BACKUP_FILE_COUNT=${#BACKUP_FILE_ARRAY[@]}
to get the number of elements in it.
To add this number to COUNTER, use a standard arithmetic expansion:
COUNTER=$(( COUNTER + BACKUP_FILE_COUNT ))
If you don't know when you need to double quote a variable's expansion and when it's not necessary to do so, opt for using double quotes (as in "$myvar"), or you will likely run into issues when using variables whose values contain whitespace or shell globbing patterns.
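A quick illustration of why the double quotes matter:

```shell
myvar='a b c'
printf '<%s>\n' $myvar     # three separate arguments: <a> <b> <c>
printf '<%s>\n' "$myvar"   # one argument: <a b c>
```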
Related:
Why *not* parse `ls` (and what to do instead)?
Why does my shell script choke on whitespace or other special characters?
When is double-quoting necessary?
| Array only returns one element |
1,514,163,228,000 |
I've got an array that contains duplicate items, e.g.
THE_LIST=(
"'item1' 'data1 data2'"
"'item1' 'data2 data3'"
"'item2' 'data4'"
)
Based on the above, I want to create an associative array that would assign itemN as key and dataN as value.
My code iterates over the list, and assigns key => value like this (the additional function is shortened, as it performs some additional jobs on the list):
function get_items(){
KEY=$1
VALUES=()
shift $2
for VALUE in "$@"; do
VALUES[${#VALUES[@]}]="$VALUE"
done
}
declare -A THE_LIST
for ((LISTID=0; LISTID<${#THE_LIST[@]}; LISTID++)); do
eval "LISTED_ITEM=(${THE_LIST[$LISTID]})"
get_items "${LISTED_ITEM[@]}"
THE_LIST=([$KEY]="${VALUES[@]}")
done
when I print the array, I'm getting something like:
item1: data1 data2
item1: data2 data3
item2: data4
but instead, I want to get:
item1: data1 data2 data3
item2: data4
Cannot find a way of merging the duplicate keys as well as removing duplicate values for the key.
What would be the approach here?
UPDATE
The actual code is:
THE_LIST=(
"'item1' 'data1 data2'"
"'item1' 'data2 data3'"
"'item2' 'data4'"
)
function get_backup_locations () {
B_HOST="$2"
B_DIRS=()
B_DIR=()
shift 2
for B_ITEM in "$@"; do
case "$B_ITEM" in
-*) B_FLAGS[${#B_FLAGS[@]}]="$B_ITEM" ;;
*) B_DIRS[${#B_DIRS[@]}]="$B_ITEM" ;;
esac
done
for ((B_IDX=0; B_IDX<${#B_DIRS[@]}; B_IDX++)); do
B_DIR=${B_DIRS[$B_IDX]}
...do stuff here...
done
}
function get_items () {
for ((LOCIDY=0; LOCIDY<${#LOCATIONS[@]}; LOCIDY++)); do
eval "LOCATION=(${LOCATIONS[$LOCIDY]})"
get_backup_locations "${LOCATION[@]}"
THE_LIST=([$B_HOST]="${B_DIR[@]}")
done | sort | uniq
}
when printing the array with:
for i in "${!THE_LIST[@]}"; do
echo "$i : ${THE_LIST[$i]}"
done
I get
item1: data1 data2
item1: data2 data3
item2: data4
|
If the keys and values are guaranteed to be purely alphanumerical, something like this might work:
declare -A output
make_list() {
local IFS=" "
declare -A keys # variables declared in a function are local by default
for i in "${THE_LIST[@]}"
do
i=${i//\'/} # since everything is alphanumeric, the quotes are useless
declare -a keyvals=($i) # split the entry, filename expansion isn't a problem
key="${keyvals[0]}" # get the first value as the key
keys["$key"]=1 # and save it in keys
for val in "${keyvals[@]:1}"
do # for each value
declare -A "$key[$val]=1" # use it as the index to an array.
done # Duplicates just get reset.
done
for key in "${!keys[@]}"
do # for each key
declare -n arr="$key" # get the corresponding array
output["$key"]="${!arr[*]}" # and the keys from that array, deduplicated
done
}
make_list
declare -p output # print the output to check
With the example input, I get this output:
declare -A output=([item1]="data3 data2 data1" [item2]="data4" )
The data items are out of order, but deduplicated.
Might be best to use Python with the csv module instead.
| Merge duplicate keys in associative array BASH |
1,514,163,228,000 |
I have a long line that comes as output from a git command: a=$(git submodule foreach git status). It looks like this:
a = "Entering 'Dir1/Subdir' On branch master Your branch is up to date with 'origin/master'. nothing to commit, working tree clean Entering 'Dir2' HEAD detached at xxxxxx nothing to commit, working tree clean Entering 'Dir3' On branch master Your branch is up to date with 'origin/master'. nothing to commit, working tree clean Entering 'Dir4' On branch master Your branch is up to date with 'origin/master'. nothing to commit, working tree clean"
I want to separate it into an array:
ARR[0] = "'Dir1/Subdir' On branch master ..."
ARR[1] = "'Dir2' HEAD detached at ..."
etc.
To do that, I have tried to substitute "Entering " for a symbol (I have tried # $ % & \t ...) with a=${a//Entering /$} and it works alright. Then, I try to use IFS and read to separate it into an array: IFS='$' read -ra ARR <<< "$a"
It's here where I am facing problems.
The output that I get of echo ${ARR[@]} is "Dir1/Subdir1" so I think that read is being affected by spaces or by how the output from git is, but I don't understand what is happening and how to fix it. Could you please give me any suggestions?
Thank you.
|
You can use readarray bash builtin and specify the delimiter within the same command:
readarray -d 'char delimiter' array <<< $variable
For example:
readarray -d '@' array <<< ${a//Entering /@}
Finally when you print each result you might want to remove the @ (or any other character used as delimiter):
echo ${array[1]%@}
echo ${array[2]%@}
echo ${array[@]%@}
If you want to delete the index 0 (because it contains @) you can reassign the array by copying the items from index 1 to last index:
array=("${array[@]:1}")
Tip: If you want to avoid use ${array[index]%@} each time you want to get some item, you can reassign the array again by removing the @ with:
array=("${array[@]/@}")
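Putting the whole recipe together on a shortened version of the question's string:

```shell
a="Entering 'Dir1' On branch master Entering 'Dir2' HEAD detached"

readarray -d '@' array <<< "${a//Entering /@}"
array=("${array[@]:1}")   # drop index 0 (the text before the first '@')
array=("${array[@]/@}")   # strip the '@' delimiter from each element

echo "${#array[@]}"       # 2
```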
| How to separate long string into a string array with IFS and read, or any other method |
1,514,163,228,000 |
I'm trying to learn more bash by updating my bash_profile so that I can quickly do some adb commands that I usually have to copy-paste. I found I was creating many similar functions that all looked like this:
function andVid() {
minInputs=0
fileName="$(filNamInc $MEDIA_DIR/Videos/aaaAndroidVideo mp4)"
origCmd="adb shell screenrecord --time-limit 60 /sdcard/screenrecord.mp4; sleep 3; adb pull /sdcard/screenrecord.mp4 $fileName"
cmd="$(andAddSer $minInputs "$origCmd" "$@")"
echo "Use ctrl+c to stop recording"
eval $cmd
}
Usually, when I see a bunch of similar functions, I try to combine them into 1 function. So I made a function that would accept an array of arguments and would do the same actions just dependent on the array:
andVid=(4 'adb shell screenrecord --time-limit 60 /sdcard/screenrecord.mp4; sleep 3; adb pull /sdcard/screenrecord.mp4' '/Videos/aaaAndroidVideo' 'mp4')
function adbRnr() {
minInputs=$1
cmd=$2
if (( $# > 3 )); then
fileTarget=$3
fileExtension=$4
fileName="$(filNamInc $MEDIA_DIR$fileTarget $fileExtension)"
cmd="$cmd $fileName"
fi
if (( $# > $minInputs )); then
cmd="${cmd:0:4} -s ${@: -1} ${cmd:4}"
fi
eval $cmd
}
(Note: here you see what andAddSer was doing in the first function.) This means that in order to run the function, you need to use a command line entry like this:
adbRnr "${andVid[@]}"
Which is both slow to type and hard to remember. I'd rather enter just the name of the array, and then do the whole "${[@]}" part once it's in the function, such that the command line input would look like this:
adbRnr andVid
However... passing the array name has proved a significant problem. I've tried pretty much every combination of calling the argument with "!", and it hasn't worked. Example:
andVid=(4 'adb shell screenrecord --time-limit 60 /sdcard/screenrecord.mp4; sleep 3; adb pull /sdcard/screenrecord.mp4' '/Videos/aaaAndroidVideo' 'mp4')
function arrayParser() {
echo "${andVid[*]}" # echos as expected
echo $# # echos "1" as expected
param=$1
echo $param # echos "andVid" as expected
cmd=("${!param[3]}")
echo $cmd # expected "mp4", nothing printed
}
arrayParser andVid
I know that you can't just pass arrays to functions in bash, but the array I'm referencing is already part of the profile. How do I reference the array using the argument?
|
Using a name reference variable in the function:
arrayParser () {
declare -n arr="$1"
printf 'Array: %s\n' "${arr[*]}"
printf 'Array element at index 3: %s\n' "${arr[3]}"
}
myarray=( alpha beta gamma "bumbling bee" )
arrayParser myarray
Inside the function, any reference to the name reference variable arr will reference the variable passed to the function as its 1st argument.
Name reference variables were introduced in bash release 4.3.
| Pass the name of an array in command line to reference the array in a function |
1,514,163,228,000 |
Can the below code be easily achieved with minimal coding?
$ cluster1=(x y)
$ cluster2=(a b)
$ cluster3=(m)
$ my=$((${cluster1[0]+1}+${cluster2[0]+1}+${cluster2[0]+1}))
$ echo $my
3
$ my=$((${cluster1[1]+1}+${cluster2[1]+1}+${cluster3[1]+1}))
-bash: 1+1+: syntax error: operand expected (error token is "+")
|
Your code is generating a syntax error for each element that is not set.
$ echo "${cluster1[0]+1}+${cluster2[0]+1}+${cluster2[0]+1}"
1+1+1
$ echo "${cluster1[1]+1}+${cluster2[1]+1}+${cluster3[1]+1}"
1+1+
It would be better to count the set elements instead of trying to calculate with a generated expression in this case:
#!/bin/bash
cluster1=(x y)
cluster2=(a b)
cluster3=(m)
for (( i = 0; i < 3; ++i )); do
is_set=( ${cluster1[i]+"1"} ${cluster2[i]+"1"} ${cluster3[i]+"1"} )
printf 'i=%d:\t%d\n' "$i" "${#is_set[@]}"
done
This creates a new array, is_set, that will contain a 1 for each array that contains an element at index i. The 1 is unimportant and could be any string. The number of elements in the is_set array (${#is_set[@]}) is then the number of set elements from the cluster arrays at that index.
Testing:
$ bash script.sh
i=0: 3
i=1: 2
i=2: 0
| Counting and adding if some arrays has element at some index |
1,514,163,228,000 |
I've some associative arrays in a bash script which I need to pass to a function in which I need to access the keys and values as well.
declare -A gkp=( \
["arm64"]="ARM-64-bit" \
["x86"]="Intel-32-bit" \
)
fv()
{
local entry="$1"
echo "keys: ${!gkp[@]}"
echo "vals: ${gkp[@]}"
local arr="$2[@]"
echo -e "\narr entries: ${!arr}"
}
fv $1 gkp
Output for above:
kpi: arm64 x86
kpv: ARM-64-bit Intel-32-bit
arr entries: ARM-64-bit Intel-32-bit
I could get values of array passed to function, but couldn't figure out how to print keys (i.e. "arm64" "x86") in the function.
Please help.
|
You need to make the arr variable a nameref. From man bash:
A variable can be assigned the nameref attribute using the -n option to the declare or local builtin commands (see the descriptions of declare and local below) to create a nameref, or a reference to another variable. This allows variables to be manipulated indirectly. Whenever the nameref variable is referenced, assigned to, unset, or has its attributes modified (other than using or changing the nameref attribute itself), the operation is actually performed on the variable specified by the nameref variable's value. A nameref is commonly used within shell functions to refer to a variable whose name is passed as an argument to the function. For instance, if a variable name is passed to a shell function as its first argument, running
declare -n ref=$1
inside the function creates a nameref variable ref whose value is the variable name passed as the first argument. References and assignments to ref, and changes to its attributes, are treated as references, assignments, and attribute modifications to the variable whose name was passed as $1. If the control variable in a for loop has the nameref attribute, the list of words can be a list of shell variables, and a name reference will be established for each word in the list, in turn, when the loop is executed. Array variables cannot be given the nameref attribute. However, nameref variables can reference array variables and subscripted array variables. Namerefs can be unset using the -n option to the unset builtin. Otherwise, if unset is executed with the name of a nameref variable as an argument, the variable referenced by the nameref variable will be unset.
In practice, this would look like:
#!/bin/bash
declare -A gkp=(
["arm64"]="ARM-64-bit"
["x86"]="Intel-32-bit"
)
fv()
{
local entry="$1"
echo "keys: ${!gkp[@]}"
echo "vals: ${gkp[@]}"
local -n arr_name="$2"
echo -e "\narr entries: ${!arr_name[@]}"
}
fv "$1" gkp
And running it gives:
$ foo.sh foo
keys: x86 arm64
vals: Intel-32-bit ARM-64-bit
arr entries: x86 arm64
Obligatory warning: if you find yourself needing to do something like this in a shell script, it is usually a strong indication that you might want to switch to a proper scripting language like Perl or Python or anything else.
| Access values of associative array whose name is passed as argument inside bash function |
1,514,163,228,000 |
I am trying to multiply array values with values derived from the multiplication of a loop index using bc.
#!/bin/bash
n=10.0
bw=(1e-3 2.5e-4 1.11e-4 6.25e-5 4.0e-5 2.78e-5 2.04e-5 1.56e-5 1.29e-5 1.23e-5 1.0e-5)
for k in {1..11};do
a=$(echo "$n * $k" | bc)
echo "A is $a"
arn=${bw[k-1]}
echo "Arn is $arn"
b=$(echo "$arn * $a" | bc -l)
echo "b is $b"
#echo $a $b
done
I am able to echo the array values by assigning it to a new variable within the loop, but when I use that to multiply using bc, I get (standard_in) 1: syntax error.
I searched for clues and tried some but none helped. The expected output is as follows.
10 1.00E-02
20 5.00E-03
30 3.33E-03
40 2.50E-03
50 2.00E-03
60 1.67E-03
70 1.43E-03
80 1.25E-03
90 1.16E-03
100 1.23E-03
110 1.10E-03
All help is greatly appreciated.
|
bc doesn't support the scientific format. Use something that does.
For example, Perl:
a=$(perl -e "print $n * $k" )
arn=${bw[k-1]}
b=$(perl -e "printf '%.2E', $arn * $a")
echo $a $b
Output:
10 1.00E-02
20 5.00E-03
30 3.33E-03
40 2.50E-03
50 2.00E-03
60 1.67E-03
70 1.43E-03
80 1.25E-03
90 1.16E-03
100 1.23E-03
110 1.10E-03
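If Perl isn't available, awk also understands the scientific format (an alternative sketch, not part of the answer above; shown for the first two loop iterations only):

```shell
n=10.0
bw=(1e-3 2.5e-4)
for k in 1 2; do
    a=$(awk -v n="$n" -v k="$k" 'BEGIN { print n * k }')
    arn=${bw[k-1]}
    b=$(awk -v x="$arn" -v y="$a" 'BEGIN { printf "%.2E", x * y }')
    echo "$a $b"
done
```

This prints 10 1.00E-02 and 20 5.00E-03, matching the first lines of the expected output.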
| bash array multiplication using bc |
1,514,163,228,000 |
So, let's say I have an array arr, with two elements in it:
read -a arr <<< "$@"
where I would then either use it in a function or script and input two strings or elements like so:
read_me() {
read -a arr <<< "$@"
}
read_me "first test"
Now I already know how to get through all the elements of an array:
for i in "${arr[@]}"
do
echo "$i" # where i do something with the respective element of said array.
done
But that only does it using the normal/original order in which the elements were added to the previously mentioned array...
Of course, I also know how to get the elements of an array in reverse order:
indices=( ${!arr[@]} )
for ((i=${#indices[@]} - 1; i >= 0; i--)) ; do
echo "${arr[indices[i]]}"
done
Both of these ways work as intended. The problem, though, is that I need both the normal and the reverse order in the same loop. Mostly so I wouldn't need to do this:
echo "${arr[0]}" "${arr[1]}"
echo "${arr[1]}" "${arr[0]}"
How could I do this in a single loop?
|
array=( 1 2 3 a b c )
for i in "${!array[@]}"; do
j=$(( ${#array[@]} - i - 1 ))
printf '%s\t%s\n' "${array[i]}" "${array[j]}"
done
Output:
1 c
2 b
3 a
a 3
b 2
c 1
In short, there is nothing stopping you from traversing the array in any order and at the same time calculate a new index from the current index and use that too.
In comments to the question, there is a suggestion for the following arithmetic loop:
for (( i = 0, j = ${#array[@]} - 1; i < ${#array[@]}; ++i, --j ))
do
printf '%s\t%s\n' "${array[i]}" "${array[j]}"
done
This uses the fact that the comma operator can be used in the initialization and update parts of the loop header to maintain two separate loop variables.
Depending on what you want to achieve and depending on what your actual array values are, you may also get away with using tac:
$ paste <( printf '%s\n' "${array[@]}" ) <( printf '%s\n' "${array[@]}" | tac )
1 c
2 b
3 a
a 3
b 2
c 1
| How to both get the original and reverse order of an array? |
1,514,163,228,000 |
I can't append to an array when I use parallel, no issues using a for loop.
Parallel example:
append() { arr+=("$1"); }
export -f append
parallel -j 0 append ::: {1..4}
declare -p arr
Output:
-bash: declare: arr: not found
For loop:
for i in {1..4}; do arr+=("$i"); done
declare -p arr
Output:
declare -a arr=([0]="1" [1]="2" [2]="3" [3]="4")
I thought the first example is a translation of the for loop in functional style, so what's going on?
|
Your parallel appears to be the GNU one, which is a perl script that runs commands in parallel.
It tries very hard to tell what shell it is being invoked from so that the command that you pass to it is interpreted by that shell, but to do that it runs a new invocation of that shell in separate processes.
If you run:
bash-5.2$ env SHELLOPTS=xtrace PS4='bash-$$> ' strace -qqfe /exec,/exit -e signal=none parallel -j 0 append ::: {1..4}
execve("/usr/bin/parallel", ["parallel", "-j", "0", "append", ":::", "1", "2", "3", "4"], 0x7ffe5e848c90 /* 56 vars */) = 0
[...skipping several commands run by parallel during initialisation...]
[pid 7567] execve("/usr/bin/bash", ["/usr/bin/bash", "-c", "append 1"], 0x55a2615f03e0 /* 67 vars */) = 0
bash-7567> append 1
bash-7567> arr+=("$1")
[pid 7567] exit_group(0) = ?
[pid 7568] execve("/usr/bin/bash", ["/usr/bin/bash", "-c", "append 2"], 0x55a2615f03e0 /* 67 vars */) = 0
[pid 7568] exit_group(0) = ?
[pid 7569] execve("/usr/bin/bash", ["/usr/bin/bash", "-c", "append 3"], 0x55a2615f03e0 /* 67 vars */) = 0
bash-7568> append 2
bash-7568> arr+=("$1")
[pid 7569] exit_group(0) = ?
[pid 7570] execve("/usr/bin/bash", ["/usr/bin/bash", "-c", "append 4"], 0x55a2615f03e0 /* 67 vars */) = 0
bash-7569> append 3
bash-7569> arr+=("$1")
[pid 7570] exit_group(0) = ?
bash-7570> append 4
bash-7570> arr+=("$1")
exit_group(0) = ?
Where strace shows what commands are executed by what process and the xtrace option causes the shell to show what it does.
You'll see each bash shell appending an element to their own $arr, and then exit, and of course their own memory space including their individual $arr array is gone, the $arr array is not automagically shared between all bash shell invocations on your system.
In any case, running commands concurrently implies running them in different processes, so there's no way it can run those functions in the invoking shell, those functions will be run in new shell instances in separate processes and they will update the arr variables of those shells, not the one of the shell you run parallel from.
Given that bash has not builtin multithreading support, even if parallel was an internal command of the shell or implemented as a shell function, it would still need to run the commands in separate processes each process having their own memory. You'll find that in:
append 1 & append 2 & append 3 & wait
Or:
append 1 | append 2 | append 3
The $arr array of the parent shell is not modified either.
If you want to collect the result of each job started by parallel, you can do it via stdout or via files.
For instance:
#! /bin/bash -
do_something() {
output=$(
echo "$1: some complex computation or otherwise there would
be no point using GNU parallel and its big overhead"
)
# output the result NUL delimited.
printf '%s\0' "$output"
}
export -f do_something
readarray -td '' arr < <(
PARALLEL_SHELL=/bin/bash parallel do_something ::: {1..4}
)
typeset -p arr
(here telling parallel which shell to use for it to avoid having to guess).
Note that parallel stores the output of each shell in a temporary file and dumps them in order on stdout so you get the elements of the array in correct order.
| Unable to append to array using parallel |
1,327,107,993,000 |
I have been looking at a few scripts other people wrote (specifically Red Hat), and a lot of their variables are assigned using the following notation
VARIABLE1="${VARIABLE1:-some_val}"
or some expand other variables
VARIABLE2="${VARIABLE2:-`echo $VARIABLE1`}"
What is the point of using this notation instead of just declaring the values directly (e.g., VARIABLE1=some_val)?
Are there benefits to this notation or possible errors that would be prevented?
Does the :- have specific meaning in this context?
|
This technique allows for a variable to be assigned a value if another variable is either empty or is undefined. NOTE: This "other variable" can be the same or another variable.
excerpt
${parameter:-word}
If parameter is unset or null, the expansion of word is substituted.
Otherwise, the value of parameter is substituted.
NOTE: This form also works, ${parameter-word}. According to the Bash documentation, for all such expansions:
Omitting the colon results in a test only for a parameter that is unset. Put another way, if the colon is included, the operator tests for both parameter’s existence and that its value is not null; if the colon is omitted, the operator tests only for existence.
If you'd like to see a full list of all forms of parameter expansion available within Bash then I highly suggest you take a look at this topic in the Bash Hacker's wiki titled: "Parameter expansion".
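To make the colon's effect concrete, here is a small sketch using a deliberately set-but-null variable:

```shell
VAR=                          # set, but null
echo "${VAR:-fallback}"       # colon form: null counts as missing -> prints "fallback"
echo "${VAR-fallback}end"     # no colon: VAR is set, its null value is used -> prints "end"
```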
Examples
variable doesn't exist
$ echo "$VAR1"
$ VAR1="${VAR1:-default value}"
$ echo "$VAR1"
default value
variable exists
$ VAR1="has value"
$ echo "$VAR1"
has value
$ VAR1="${VAR1:-default value}"
$ echo "$VAR1"
has value
The same thing can be done by evaluating other variables, or running commands within the default value portion of the notation.
$ VAR2="has another value"
$ echo "$VAR2"
has another value
$ echo "$VAR1"
$
$ VAR1="${VAR1:-$VAR2}"
$ echo "$VAR1"
has another value
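The default can equally well come from a command substitution, which only runs if no value is already set; a small sketch (BUILD_HOST is just an invented variable name here):

```shell
unset BUILD_HOST
BUILD_HOST="${BUILD_HOST:-$(uname -n)}"   # the command runs only because BUILD_HOST is unset
echo "$BUILD_HOST"                        # prints this machine's hostname
```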
More Examples
You can also use a slightly different notation where it's just VARX=${VARX-<def. value>}.
$ echo "${VAR1-0}"
has another value
$ echo "${VAR2-0}"
has another value
$ echo "${VAR3-0}"
0
In the above $VAR1 & $VAR2 were already defined with the string "has another value" but $VAR3 was undefined, so the default value was used instead, 0.
Another Example
$ VARX="${VAR3-0}"
$ echo "$VARX"
0
Checking and assigning using := notation
Lastly I'll mention the handy operator, :=. This will do a check and assign a value if the variable under test is empty or undefined.
Example
Notice that $VAR1 is now set. The operator := did the test and the assignment in a single operation.
$ unset VAR1
$ echo "$VAR1"
$ echo "${VAR1:=default}"
default
$ echo "$VAR1"
default
However if the value is set prior, then it's left alone.
$ VAR1="some value"
$ echo "${VAR1:=default}"
some value
$ echo "$VAR1"
some value
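A common companion idiom uses the ':' no-op builtin, so the expansion performs the assignment without printing anything:

```shell
unset EDITOR
: "${EDITOR:=vi}"     # ':' discards the substituted value; the assignment sticks
echo "$EDITOR"        # vi
```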
Handy Dandy Reference Table
                      Parameter set          Parameter set         Parameter
                      and not null           but null              unset
${parameter:-word}    substitute parameter   substitute word       substitute word
${parameter-word}     substitute parameter   substitute null       substitute word
${parameter:=word}    substitute parameter   assign word           assign word
${parameter=word}     substitute parameter   substitute null       assign word
${parameter:?word}    substitute parameter   error, exit           error, exit
${parameter?word}     substitute parameter   substitute null       error, exit
${parameter:+word}    substitute word        substitute null       substitute null
${parameter+word}     substitute word        substitute word       substitute null
This makes the difference between assignment and substitution explicit: Assignment sets a value for the variable whereas substitution doesn't.
References
Parameter Expansions - Bash Hackers Wiki
10.2. Parameter Substitution
Bash Parameter Expansions
| Using "${a:-b}" for variable assignment in scripts |
1,327,107,993,000 |
This question is a sequel of sorts to my earlier question. The users on this site kindly helped me determine how to write a bash for loop that iterates over string values. For example, suppose that a loop control variable fname iterates over the strings "a.txt" "b.txt" "c.txt". I would like to echo "yes!" when fname has the value "a.txt" or "c.txt", and echo "no!" otherwise. I have tried the following bash shell script:
#!/bin/bash
for fname in "a.txt" "b.txt" "c.txt"
do
echo $fname
if [ "$fname" = "a.txt" ] | [ "$fname" = "c.txt" ]; then
echo "yes!"
else
echo "no!"
fi
done
I obtain the output:
a.txt
no!
b.txt
no!
c.txt
yes!
Why does the if statement apparently yield true when fname has the value "a.txt"? Have I used | incorrectly?
|
If you want to say OR use double pipe (||).
if [ "$fname" = "a.txt" ] || [ "$fname" = "c.txt" ]
(The original OP code using | was simply piping the output of the left side to the right side, in the same way any ordinary pipe works.)
After many years of comments and misunderstanding, allow me to clarify.
To do OR you use ||.
Whether you use [ or [[ or test or (( all depends on what you need on a case by case basis. It's wrong to say that one of those is preferred in all cases. Sometimes [ is right and [[ is wrong. But that's not what the question was. OP asked why | didn't work. The answer is because it should be || instead.
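To see concretely why | appeared to half-work in the original script: the exit status of a pipeline is that of its last command, so the left-hand test's result is simply thrown away:

```shell
fname=a.txt
if [ "$fname" = "a.txt" ] | [ "$fname" = "c.txt" ]; then
    echo "yes!"
else
    echo "no!"
fi
# prints "no!": only the right-hand test's (false) status reaches 'if'
```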
| In a bash script, using the conditional "or" in an "if" statement |
1,327,107,993,000 |
I know that shell scripts just run commands as if they were executed in at the command prompt. I'd like to be able to run shell scripts as if they were functions... That is, taking an input value or string into the script. How do I approach doing this?
|
The shell command and any arguments to that command appear as numbered shell variables: $0 has the string value of the command itself, something like script, ./script, /home/user/bin/script or whatever. Any arguments appear as "$1", "$2", "$3" and so on. The count of arguments is in the shell variable "$#".
Common ways of dealing with this involve shell commands getopts and shift. getopts is a lot like the C getopt() library function. shift moves the value of $2 to $1, $3 to $2, and so on; $# gets decremented. Code ends up looking at the value of "$1", doing things using a case…esac to decide on an action, and then doing a shift to move $1 to the next argument. It only ever has to examine $1, and maybe $#.
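A minimal sketch of that shift/case pattern (the option names -v and -o are invented for the example):

```shell
parse_demo() {
    verbose=0 output=
    while [ "$#" -gt 0 ]; do
        case "$1" in
            -v) verbose=1 ;;
            -o) output=$2; shift ;;          # option taking a value
            --) shift; break ;;              # explicit end of options
            -*) echo "unknown option: $1" >&2; return 1 ;;
            *)  break ;;                     # first non-option argument
        esac
        shift
    done
    printf 'verbose=%s output=%s rest=%s\n' "$verbose" "$output" "$*"
}
parse_demo -v -o out.txt file1 file2
# verbose=1 output=out.txt rest=file1 file2
```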
| How can I pass a command line argument into a shell script? |
1,327,107,993,000 |
I know this question has probably been answered before. I have seen many threads about this in various places, but the answers are usually hard to extract for me. I am looking for help with an example usage of the 'sed' command.
Say I wanted to act upon the file "hello.txt" (in same directory as prompt). Anywhere it contained the phrase "few", it should be changed to "asd". What would the command look like?
|
sed is the stream editor, in that you can use | (pipe) to send standard streams (STDIN and STDOUT specifically) through sed and alter them programmatically on the fly, making it a handy tool in the Unix philosophy tradition; but can edit files directly, too, using the -i parameter mentioned below.
Consider the following:
sed -i -e 's/few/asd/g' hello.txt
s/ is used to substitute the found expression few with asd:
The few, the brave.
The asd, the brave.
/g stands for "global", meaning to do this for the whole line. If you leave off the /g (with s/few/asd/, there always needs to be three slashes no matter what) and few appears twice on the same line, only the first few is changed to asd:
The few men, the few women, the brave.
The asd men, the few women, the brave.
This is useful in some circumstances, like altering special characters at the beginnings of lines (for instance, replacing the greater-than symbols some people use to quote previous material in email threads with a horizontal tab while leaving a quoted algebraic inequality later in the line untouched), but in your example where you specify that anywhere few occurs it should be replaced, make sure you have that /g.
Two options (flags) are used in the command above:
-i option is used to edit in place on the file hello.txt.
-e option indicates the expression/command to run, in this case s/.
Note: It's important that you use -i -e to search/replace. If you do -ie, you create a backup of every file with the letter 'e' appended.
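Relatedly, a suffix attached directly to -i (supported by both GNU sed and BSD/macOS sed) makes a deliberate backup before editing in place; a self-contained demo:

```shell
tmp=$(mktemp)
printf 'The few, the brave.\n' > "$tmp"
sed -i.bak 's/few/asd/g' "$tmp"    # edits "$tmp", keeps the original as "$tmp.bak"
cat "$tmp"                         # The asd, the brave.
cat "$tmp.bak"                     # The few, the brave.
rm -f "$tmp" "$tmp.bak"
```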
| Using 'sed' to find and replace [duplicate] |
1,327,107,993,000 |
I would like to change a file extension from *.txt to *.text. I tried using the basename command, but I'm having trouble changing more than one file.
Here's my code:
files=`ls -1 *.txt`
for x in $files
do
mv $x "`basename $files .txt`.text"
done
I'm getting this error:
basename: too many arguments
Try 'basename --help' for more information.
|
Straight from Greg's Wiki:
# Rename all *.txt to *.text
for file in *.txt; do
mv -- "$file" "${file%.txt}.text"
done
*.txt is a globbing pattern, using * as a wildcard to match any string. *.txt matches all filenames ending with '.txt'.
-- marks the end of the option list. This avoids issues with filenames starting with hyphens.
${file%.txt} is a parameter expansion, replaced by the value of the file variable with .txt removed from the end.
Also see the entry on why you shouldn't parse ls.
If you have to use basename, your syntax would be:
for file in *.txt; do
mv -- "$file" "$(basename -- "$file" .txt).text"
done
| How do I change the extension of multiple files? |
1,327,107,993,000 |
No one should need 10 years to ask this question, like I did. If I were just starting out with Linux, I'd want to know: When to alias, when to script and when to write a function?
Where aliases are concerned, I use aliases for very simple operations that don't take arguments.
alias houston='cd /home/username/.scripts/'
That seems obvious. But some people do this:
alias command="bash bashscriptname"
(and add it to the .bashrc file).
Is there a good reason to do that? I didn't come across a circumstance for this. If there is an edge case where that would make a difference, please answer below.
That's where I would just put something in my PATH and chmod +x it, which is another thing that came after years of Linux trial-and-error.
Which brings me to the next topic. For instance, I added a hidden folder (.scripts/) in the home directory to my PATH by just adding a line to my .bashrc (PATH=$PATH:/home/username/.scripts/), so anything executable in there automagically autocompletes.
I don't really need that, do I? I would only use that for languages which are not the shell, like Python. If it's the shell, I can just write a function inside the very same .bashrc:
funcname () {
somecommand -someARGS "$@"
}
Did I miss anything?
What would you tell a beginning Linux user about when to alias, when to script and when to write a function?
If it's not obvious, I'm assuming the people who answer this will make use of all three options. If you only use one or two of these three (aliases, scripts, functions), this question isn't really aimed at you.
|
An alias should effectively not (in general) do more than change the default options of a command. It is nothing more than simple text replacement on the command name. It can't do anything with arguments but pass them to the command it actually runs. So if you simply need to add an argument at the front of a single command, an alias will work. Common examples are
# Make ls output in color by default.
alias ls="ls --color=auto"
# make mv ask before overwriting a file by default
alias mv="mv -i"
A function should be used when you need to do something more complex than an alias but that wouldn't be of use on its own. For example, take this answer on a question I asked about changing grep's default behavior depending on whether it's in a pipeline:
grep() {
if [[ -t 1 ]]; then
command grep -n "$@"
else
command grep "$@"
fi
}
It's a perfect example of a function because it is too complex for an alias (requiring different defaults based on a condition), but it's not something you'll need in a non-interactive script.
If you get too many functions or functions too big, put them into separate files in a hidden directory, and source them in your ~/.bashrc:
if [ -d ~/.bash_functions ]; then
for file in ~/.bash_functions/*; do
. "$file"
done
fi
A script should stand on its own. It should have value as something that can be re-used, or used for more than one purpose.
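For the third case, a trivial standalone script might look like this (the name and location are invented; save it as e.g. ~/bin/greet and chmod +x it):

```shell
#!/bin/sh
# Unlike an alias or function, this works from any shell, from cron,
# and from other scripts.
printf 'Hello, %s!\n' "${1:-world}"
```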
| In Bash, when to alias, when to script and when to write a function? |
1,327,107,993,000 |
Take the following script:
#!/bin/sh
sed 's/(127\.0\.1\.1)\s/\1/' [some file]
If I try to run this in sh (dash here), it'll fail because of the parentheses, which need to be escaped. But I don't need to escape the backslashes themselves (between the octets, or in the \s or \1). What's the rule here? What about when I need to use {...} or [...]? Is there a list of what I do and don't need to escape?
|
There are two levels of interpretation here: the shell, and sed.
In the shell, everything between single quotes is interpreted literally, except for single quotes themselves. You can effectively have a single quote between single quotes by writing '\'' (close single quote, one literal single quote, open single quote).
Sed uses basic regular expressions. In a BRE, in order to have them treated literally, the characters $.*[\^ need to be quoted by preceding them by a backslash, except inside character sets ([…]). Letters, digits and (){}+?| must not be quoted (you can get away with quoting some of these in some implementations). The sequences \(, \), \n, and in some implementations \{, \}, \+, \?, \| and other backslash+alphanumerics have special meanings. You can get away with not quoting $^ in some positions in some implementations.
Furthermore, you need a backslash before / if it is to appear in the regex outside of bracket expressions. You can choose an alternative character as the delimiter by writing, e.g., s~/dir~/replacement~ or \~/dir~p; you'll need a backslash before the delimiter if you want to include it in the BRE. If you choose a character that has a special meaning in a BRE and you want to include it literally, you'll need three backslashes; I do not recommend this, as it may behave differently in some implementations.
In a nutshell, for sed 's/…/…/':
Write the regex between single quotes.
Use '\'' to end up with a single quote in the regex.
Put a backslash before $.*/[\]^ and only those characters (but not inside bracket expressions). (Technically you shouldn't put a backslash before ] but I don't know of an implementation that treats ] and \] differently outside of bracket expressions.)
Inside a bracket expression, for - to be treated literally, make sure it is first or last ([abc-] or [-abc], not [a-bc]).
Inside a bracket expression, for ^ to be treated literally, make sure it is not first (use [abc^], not [^abc]).
To include ] in the list of characters matched by a bracket expression, make it the first character (or first after ^ for a negated set): []abc] or [^]abc] (not [abc]] nor [abc\]]).
In the replacement text:
& and \ need to be quoted by preceding them by a backslash,
as do the delimiter (usually /) and newlines.
\ followed by a digit has a special meaning. \ followed by a letter has a special meaning (special characters) in some implementations, and \ followed by some other character means \c or c depending on the implementation.
With single quotes around the argument (sed 's/…/…/'), use '\'' to put a single quote in the replacement text.
If the regex or replacement text comes from a shell variable, remember that
The regex is a BRE, not a literal string.
In the regex, a newline needs to be expressed as \n (which will never match unless you have other sed code adding newline characters to the pattern space). But note that it won't work inside bracket expressions with some sed implementations.
In the replacement text, &, \ and newlines need to be quoted.
The delimiter needs to be quoted (but not inside bracket expressions).
Use double quotes for interpolation: sed -e "s/$BRE/$REPL/".
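Putting the rules above into practice, here is a hypothetical helper (not part of the original answer; single-line strings only) that backslash-escapes a literal string for use in a BRE with / as the delimiter:

```shell
escape_bre() {
    # escape ] [ \ . * ^ $ and the / delimiter with a backslash
    printf '%s\n' "$1" | sed 's/[][\.*^$/]/\\&/g'
}
escape_bre 'a.b*c/d'    # prints: a\.b\*c\/d
```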
| What characters do I need to escape when using sed in a sh script? |
1,327,107,993,000 |
I would like to remove all leading and trailing spaces and tabs from each line in an output.
Is there a simple tool like trim I could pipe my output into?
Example file:
test space at back
test space at front
TAB at end
TAB at front
sequence of some space in the middle
some empty lines with differing TABS and spaces:
test space at both ends
|
awk '{$1=$1;print}'
or shorter:
awk '{$1=$1};1'
Would trim leading and trailing space or tab characters¹ and also squeeze sequences of tabs and spaces into a single space.
That works because when you assign something to any one field and then try to access the whole record ($0, the thing print prints by default), awk needs to rebuild that record by joining all fields ($1, ..., $NF) with OFS (space by default).
To also remove blank lines, change it to awk 'NF{$1=$1;print}' (where NF as a condition selects the records for which the Number of Fields is non-zero). Do not use awk '$1=$1' as sometimes suggested, since that would also remove lines whose first field is any representation of 0 supported by awk (0, 00, -0e+12...).
¹ and possibly other blank characters depending on the locale and the awk implementation
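A quick check of both variants on sample input:

```shell
printf '  hello   world \n\n\tend\n' | awk '{$1=$1};1'         # keeps the blank line
printf '  hello   world \n\n\tend\n' | awk 'NF{$1=$1;print}'   # drops the blank line
```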
| How do I trim leading and trailing whitespace from each line of some output? |
1,327,107,993,000 |
… or an introductory guide to robust filename handling and other string passing in shell scripts.
I wrote a shell script which works well most of the time. But it chokes on some inputs (e.g. on some file names).
I encountered a problem such as the following:
I have a file name containing a space hello world, and it was treated as two separate files hello and world.
I have an input line with two consecutive spaces and they shrank to one in the input.
Leading and trailing whitespace disappears from input lines.
Sometimes, when the input contains one of the characters \[*?, they are replaced by some text which is actually the names of some files.
There is an apostrophe ' (or a double quote ") in the input, and things got weird after that point.
There is a backslash in the input (or: I am using Cygwin and some of my file names have Windows-style \ separators).
What is going on, and how do I fix this?
|
Always use double quotes around variable substitutions and command substitutions: "$foo", "$(foo)"
If you use $foo unquoted, your script will choke on input or parameters (or command output, with $(foo)) containing whitespace or \[*?.
There, you can stop reading. Well, ok, here are a few more:
read — To read input line by line with the read builtin, use while IFS= read -r line; do …
Plain read treats backslashes and whitespace specially.
xargs — Avoid xargs. If you must use xargs, make that xargs -0. Instead of find … | xargs, prefer find … -exec ….
xargs treats whitespace and the characters \"' specially.
This answer applies to Bourne/POSIX-style shells (sh, ash, dash, bash, ksh, mksh, yash…). Zsh users should skip it and read the end of When is double-quoting necessary? instead. If you want the whole nitty-gritty, read the standard or your shell's manual.
Note that the explanations below contains a few approximations (statements that are true in most conditions but can be affected by the surrounding context or by configuration).
Why do I need to write "$foo"? What happens without the quotes?
$foo does not mean “take the value of the variable foo”. It means something much more complex:
First, take the value of the variable.
Field splitting: treat that value as a whitespace-separated list of fields, and build the resulting list. For example, if the variable contains foo * bar then the result of this step is the 3-element list foo, *, bar.
Filename generation: treat each field as a glob, i.e. as a wildcard pattern, and replace it by the list of file names that match this pattern. If the pattern doesn't match any files, it is left unmodified. In our example, this results in the list containing foo, followed by the list of files in the current directory, and finally bar. If the current directory is empty, the result is foo, *, bar.
Note that the result is a list of strings. There are two contexts in shell syntax: list context and string context. Field splitting and filename generation only happen in list context, but that's most of the time. Double quotes delimit a string context: the whole double-quoted string is a single string, not to be split. (Exception: "$@" to expand to the list of positional parameters, e.g. "$@" is equivalent to "$1" "$2" "$3" if there are three positional parameters. See What is the difference between $* and $@?)
The same happens to command substitution with $(foo) or with `foo`. On a side note, don't use `foo`: its quoting rules are weird and non-portable, and all modern shells support $(foo) which is absolutely equivalent except for having intuitive quoting rules.
The output of arithmetic substitution also undergoes the same expansions, but that isn't normally a concern as it only contains non-expandable characters (assuming IFS doesn't contain digits or -).
See When is double-quoting necessary? for more details about the cases when you can leave out the quotes.
Unless you mean for all this rigmarole to happen, just remember to always use double quotes around variable and command substitutions. Do take care: leaving out the quotes can lead not just to errors but to security holes.
How do I process a list of file names?
If you write myfiles="file1 file2", with spaces to separate the files, this can't work with file names containing spaces. Unix file names can contain any character other than / (which is always a directory separator) and null bytes (which you can't use in shell scripts with most shells).
Same problem with myfiles=*.txt; … process $myfiles. When you do this, the variable myfiles contains the 5-character string *.txt, and it's when you write $myfiles that the wildcard is expanded. This example will actually work, until you change your script to be myfiles="$someprefix*.txt"; … process $myfiles. If someprefix is set to final report, this won't work.
To process a list of any kind (such as file names), put it in an array. This requires mksh, ksh93, yash or bash (or zsh, which doesn't have all these quoting issues); a plain POSIX shell (such as ash or dash) doesn't have array variables.
myfiles=("$someprefix"*.txt)
process "${myfiles[@]}"
Ksh88 has array variables with a different assignment syntax set -A myfiles "someprefix"*.txt (see assignation variable under different ksh environment if you need ksh88/bash portability). Bourne/POSIX-style shells have a single array, the array of positional parameters "$@", which you set with set and which is local to a function:
set -- "$someprefix"*.txt
process -- "$@"
What about file names that begin with -?
On a related note, keep in mind that file names can begin with a - (dash/minus), which most commands interpret as denoting an option. Some commands (like sh, set or sort) also accept options that start with +. If you have a file name that begins with a variable part, be sure to pass -- before it, as in the snippet above. This indicates to the command that it has reached the end of options, so anything after that is a file name even if it starts with - or +.
Alternatively, you can make sure that your file names begin with a character other than -. Absolute file names begin with /, and you can add ./ at the beginning of relative names. The following snippet turns the content of the variable f into a “safe” way of referring to the same file that's guaranteed not to start with - nor +.
case "$f" in -* | +*) "f=./$f";; esac
On a final note on this topic, beware that some commands interpret - as meaning standard input or standard output, even after --. If you need to refer to an actual file named -, or if you're calling such a program and you don't want it to read from stdin or write to stdout, make sure to rewrite - as above. See What is the difference between "du -sh *" and "du -sh ./*"? for further discussion.
How do I store a command in a variable?
“Command” can mean three things: a command name (the name as an executable, with or without full path, or the name of a function, builtin or alias), a command name with arguments, or a piece of shell code. There are accordingly different ways of storing them in a variable.
If you have a command name, just store it and use the variable with double quotes as usual.
command_path="$1"
…
"$command_path" --option --message="hello world"
If you have a command with arguments, the problem is the same as with a list of file names above: this is a list of strings, not a string. You can't just stuff the arguments into a single string with spaces in between, because if you do that you can't tell the difference between spaces that are part of arguments and spaces that separate arguments. If your shell has arrays, you can use them.
cmd=(/path/to/executable --option --message="hello world" --)
cmd=("${cmd[@]}" "$file1" "$file2")
"${cmd[@]}"
What if you're using a shell without arrays? You can still use the positional parameters, if you don't mind modifying them.
set -- /path/to/executable --option --message="hello world" --
set -- "$@" "$file1" "$file2"
"$@"
What if you need to store a complex shell command, e.g. with redirections, pipes, etc.? Or if you don't want to modify the positional parameters? Then you can build a string containing the command, and use the eval builtin.
code='/path/to/executable --option --message="hello world" -- /path/to/file1 | grep "interesting stuff"'
eval "$code"
Note the nested quotes in the definition of code: the single quotes '…' delimit a string literal, so that the value of the variable code is the string /path/to/executable --option --message="hello world" -- /path/to/file1. The eval builtin tells the shell to parse the string passed as an argument as if it appeared in the script, so at that point the quotes and pipe are parsed, etc.
Using eval is tricky. Think carefully about what gets parsed when. In particular, you can't just stuff a file name into the code: you need to quote it, just like you would if it was in a source code file. There's no direct way to do that. Something like code="$code $filename" breaks if the file name contains any shell special character (spaces, $, ;, |, <, >, etc.). code="$code \"$filename\"" still breaks on "$\`. Even code="$code '$filename'" breaks if the file name contains a '. There are two solutions.
Add a layer of quotes around the file name. The easiest way to do that is to add single quotes around it, and replace single quotes by '\''.
quoted_filename=$(printf %s. "$filename" | sed "s/'/'\\\\''/g")
code="$code '${quoted_filename%.}'"
Keep the variable expansion inside the code, so that it's looked up when the code is evaluated, not when the code fragment is built. This is simpler but only works if the variable is still around with the same value at the time the code is executed, not e.g. if the code is built in a loop.
code="$code \"\$filename\""
Finally, do you really need a variable containing code? The most natural way to give a name to a code block is to define a function.
What's up with read?
Without -r, read allows continuation lines — this is a single logical line of input:
hello \
world
read splits the input line into fields delimited by characters in $IFS (without -r, backslash also escapes those). For example, if the input is a line containing three words, then read first second third sets first to the first word of input, second to the second word and third to the third word. If there are more words, the last variable contains everything that's left after setting the preceding ones. Leading and trailing whitespace are trimmed.
Setting IFS to the empty string avoids any trimming. See Why is `while IFS= read` used so often, instead of `IFS=; while read..`? for a longer explanation.
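A sketch of what the IFS= and -r guards preserve:

```shell
printf '  one\\two  \nthree\n' | while IFS= read -r line; do
    printf '<%s>\n' "$line"    # whitespace and the backslash survive intact
done
# <  one\two  >
# <three>
```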
What's wrong with xargs?
The input format of xargs is whitespace-separated strings which can optionally be single- or double-quoted. No standard tool outputs this format.
xargs -L1 or xargs -l does not split the input on lines; it runs one command per line of input (each line is still split to make up the arguments, and continued on the next line if it ends in blanks).
xargs -I PLACEHOLDER does use one line of input to substitute the PLACEHOLDER but quotes and backslashes are still processed and leading blanks trimmed.
You can use xargs -r0 where applicable (and where available: GNU (Linux, Cygwin), BusyBox, BSDs, OSX, but it isn't in POSIX). That's safe, because null bytes can't appear in most data, in particular in file names and external command arguments. To produce a null-separated list of file names, use find … -print0 (or you can use find … -exec … as explained below).
How do I process files found by find?
find … -exec some_command a_parameter another_parameter {} +
some_command needs to be an external command, it can't be a shell function or alias. If you need to invoke a shell to process the files, call sh explicitly.
find … -exec sh -c '
for x do
… # process the file "$x"
done
' find-sh {} +
I have some other question
Browse the quoting tag on this site, or shell or shell-script. (Click on “learn more…” to see some general tips and a hand-selected list of common questions.) If you've searched and you can't find an answer, ask away.
| Why does my shell script choke on whitespace or other special characters? |
1,327,107,993,000 |
I need to find my external IP address from a shell script. At the moment I use this function:
myip () {
lwp-request -o text checkip.dyndns.org | awk '{ print $NF }'
}
But it relies on perl-libwww, perl-html-format, and perl-html-tree being installed.
What other ways can I get my external IP?
|
I'd recommend getting it directly from a DNS server.
Most of the other answers involve going over HTTP to a remote server. Some of them require parsing of the output, or rely on the User-Agent header to make the server respond in plain text. Such services change quite frequently: they go down, change their name, put up ads, and might change their output format.
The DNS response protocol is standardised (the format will stay compatible).
Historically, DNS services (Akamai, Google Public DNS, OpenDNS, ..) tend to survive much longer and are more stable, more scalable, and generally more looked-after than whatever new hip whatismyip dot-com HTTP service is hot today.
This method is inherently faster (be it only by a few milliseconds!).
Using dig with an OpenDNS resolver:
$ dig @resolver4.opendns.com myip.opendns.com +short
Perhaps alias it in your bashrc so it's easy to remember
# https://unix.stackexchange.com/a/81699/37512
alias wanip='dig @resolver4.opendns.com myip.opendns.com +short'
alias wanip4='dig @resolver4.opendns.com myip.opendns.com +short -4'
alias wanip6='dig @resolver1.ipv6-sandbox.opendns.com AAAA myip.opendns.com +short -6'
Responds with a plain ip address:
$ wanip # wanip4, or wanip6
80.100.192.168 # or, 2606:4700:4700::1111
Syntax
(Abbreviated from https://ss64.com/bash/dig.html):
usage: dig [@global-dnsserver] [q-type] <hostname> <d-opt> [q-opt]
q-type one of (A, ANY, AAAA, TXT, MX, ...). Default: A.
d-opt ...
+[no]short (Display nothing except short form of answer)
...
q-opt one of:
-4 (use IPv4 query transport only)
-6 (use IPv6 query transport only)
...
The ANY query type returns either an AAAA or an A record. To prefer IPv4 or IPv6 connection specifically, use the -4 or -6 options accordingly.
To require the response be an IPv4 address, replace ANY with A; for IPv6, replace it with AAAA. Note that it can only return the address used for the connection. For example, when connecting over IPv6, it cannot return the A address.
Alternative servers
Various DNS providers offer this service, including OpenDNS, Akamai, and Google Public DNS:
# OpenDNS (since 2009)
$ dig @resolver3.opendns.com myip.opendns.com +short
$ dig @resolver4.opendns.com myip.opendns.com +short
80.100.192.168
# OpenDNS IPv6
$ dig @resolver1.ipv6-sandbox.opendns.com AAAA myip.opendns.com +short -6
2606:4700:4700::1111
# Akamai (since 2009)
$ dig @ns1-1.akamaitech.net ANY whoami.akamai.net +short
80.100.192.168
# Akamai approximate
# NOTE: This returns only an approximate IP from your block,
# but has the benefit of working with private DNS proxies.
$ dig +short TXT whoami.ds.akahelp.net
"ip" "80.100.192.160"
# Google (since 2010)
# Supports IPv6 + IPv4, use -4 or -6 to force one.
$ dig @ns1.google.com TXT o-o.myaddr.l.google.com +short
"80.100.192.168"
Example alias that specifically requests an IPv4 address:
# https://unix.stackexchange.com/a/81699/37512
alias wanip4='dig @resolver4.opendns.com myip.opendns.com +short -4'
$ wanip4
80.100.192.168
And for your IPv6 address:
# https://unix.stackexchange.com/a/81699/37512
alias wanip6='dig @ns1.google.com TXT o-o.myaddr.l.google.com +short -6'
$ wanip6
"2606:4700:4700::1111"
Troubleshooting
If the command is not working for some reason, there may be a network problem. Try one of the alternatives above first.
If you suspect a different issue (with the upstream provider, the command-line tool, or something else) then run the command without the +short option to reveal the details of the DNS query. For example:
$ dig @resolver4.opendns.com myip.opendns.com
;; Got answer: ->>HEADER<<- opcode: QUERY, status: NOERROR
;; QUESTION SECTION:
;myip.opendns.com. IN A
;; ANSWER SECTION:
myip.opendns.com. 0 IN A 80.100.192.168
;; Query time: 4 msec
| How can I get my external IP address in a shell script? |
1,327,107,993,000 |
While running a script, I want to create a temporary file in /tmp directory.
After execution of that script, that will be cleaned by that script.
How to do that in shell script?
|
tmpfile=$(mktemp /tmp/abc-script.XXXXXX)
: ...
rm "$tmpfile"
You can make sure that a file is deleted when the script exits (including kills and crashes) by opening a file descriptor to the file and deleting it. The file remains available (for the script; not really for other processes, but /proc/$PID/fd/$FD is a work-around) as long as the file descriptor is open. When it gets closed (which the kernel does automatically when the process exits) the filesystem deletes the file.
# create temporary file
tmpfile=$(mktemp /tmp/abc-script.XXXXXX)
# create file descriptor 3 for writing to a temporary file so that
# echo ... >&3 writes to that file
exec 3>"$tmpfile"
# create file descriptor 4 for reading from the same file so that
# the file seek positions for reading and writing can be different
exec 4<"$tmpfile"
# delete temp file; the directory entry is deleted at once; the reference counter
# of the inode is decremented only after the file descriptor has been closed.
# The file content blocks are deallocated (this is the real deletion) when the
# reference counter drops to zero.
rm "$tmpfile"
# your script continues
: ...
# example of writing to file descriptor
echo foo >&3
# your script continues
: ...
# reading from that file descriptor
head -n 1 <&4
# close the file descriptor (done automatically when script exits)
# see section 2.7.6 of the POSIX definition of the Shell Command Language
exec 3>&-
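A simpler (though slightly less robust) alternative sketch is to register the cleanup with trap, so the file is removed whenever the script exits normally:

```shell
# create the temporary file (mktemp generates a unique name)
tmpfile=$(mktemp /tmp/abc-script.XXXXXX) || exit 1

# remove it when the script exits; add INT/TERM/HUP traps as well if you
# want cleanup when the script is interrupted by those signals
trap 'rm -f "$tmpfile"' EXIT

echo "some data" > "$tmpfile"
head -n 1 "$tmpfile"      # -> some data
```

Unlike the open-file-descriptor trick above, a trap cannot fire on SIGKILL or if the shell itself crashes, so the descriptor approach is stronger when that matters.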
| How create a temporary file in shell script? |
1,327,107,993,000 |
The following bash syntax verifies if param isn't empty:
[[ ! -z $param ]]
For example:
param=""
[[ ! -z $param ]] && echo "I am not zero"
No output, and that's fine.
But when param is empty except for one (or more) space characters, then the case is different:
param=" " # one space
[[ ! -z $param ]] && echo "I am not zero"
"I am not zero" is output.
How can I change the test to consider variables that contain only space characters as empty?
|
First, note that the -z test is explicitly for:
the length of string is zero
That is, a string containing only spaces should not be true under -z, because it has a non-zero length.
What you want is to remove the spaces from the variable using the pattern replacement parameter expansion:
[[ -z "${param// }" ]]
This expands the param variable and replaces all matches of the pattern (a single space) with nothing, so a string that has only spaces in it will be expanded to an empty string.
The nitty-gritty of how that works is that ${var/pattern/string} replaces the first longest match of pattern with string. When pattern starts with / (as above) then it replaces all the matches. Because the replacement is empty, we can omit the final / and the string value:
${parameter/pattern/string}
The pattern is expanded to produce a pattern just as in filename expansion. Parameter is expanded and the longest match of pattern against its value is replaced with string. If pattern begins with ‘/’, all matches of pattern are replaced with string. Normally only the first match is replaced. ... If string is null, matches of pattern are deleted and the / following pattern may be omitted.
After all that, we end up with ${param// } to delete all spaces.
Note that though present in ksh (where it originated), zsh and bash, that syntax is not POSIX and should not be used in sh scripts.
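If the script must stay POSIX sh compatible, a different sketch (not from the answer above) is a case pattern with a character class, which treats tabs as blank too:

```shell
# true (exit status 0) if the argument is empty or contains only whitespace
is_blank() {
  case $1 in
    (*[![:space:]]*) return 1 ;;   # found at least one non-whitespace character
    (*) return 0 ;;
  esac
}

param="   "
if is_blank "$param"; then
  echo "param is effectively empty"
fi
```

Character classes like `[:space:]` in patterns are POSIX, though some very old shells may not support them.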
| How can I test if a variable is empty or contains only spaces? |
1,327,107,993,000 |
I have 2 graphics cards on my laptop. One is IGP and another discrete.
I've written a shell script to turn off the discrete graphics card.
How can I convert it to systemd script to run it at start-up?
|
There are mainly two approaches to do that:
With script
If you have to run a script, you don't convert it but rather run the script via a systemd service:
Therefore you need two files: the script and the .service file (unit configuration file).
Make sure your script is executable and the first line (the shebang) is #!/bin/sh. Then create the .service file in /etc/systemd/system (a plain text file, let's call it vgaoff.service).
For example:
the script: /usr/bin/vgaoff
the unit file: /etc/systemd/system/vgaoff.service
Now, edit the unit file. Its content depends on how your script works:
If vgaoff just powers off the gpu, e.g.:
exec blah-blah pwrOFF etc
then the content of vgaoff.service should be:
[Unit]
Description=Power-off gpu
[Service]
Type=oneshot
ExecStart=/usr/bin/vgaoff
[Install]
WantedBy=multi-user.target
If vgaoff is used to power off the GPU and also to power it back on, e.g.:
start() {
exec blah-blah pwrOFF etc
}
stop() {
exec blah-blah pwrON etc
}
case $1 in
start|stop) "$1" ;;
esac
then the content of vgaoff.service should be:
[Unit]
Description=Power-off gpu
[Service]
Type=oneshot
ExecStart=/usr/bin/vgaoff start
ExecStop=/usr/bin/vgaoff stop
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
Without script
For the most trivial cases, you can do without the script and execute a certain command directly:
To power off:
[Unit]
Description=Power-off gpu
[Service]
Type=oneshot
ExecStart=/bin/sh -c "echo OFF > /whatever/vga_pwr_gadget/switch"
[Install]
WantedBy=multi-user.target
To power off and on:
[Unit]
Description=Power-off gpu
[Service]
Type=oneshot
ExecStart=/bin/sh -c "echo OFF > /whatever/vga_pwr_gadget/switch"
ExecStop=/bin/sh -c "echo ON > /whatever/vga_pwr_gadget/switch"
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
Enable the service
Once you're done with the files, enable the service:
systemctl enable vgaoff.service
It will start automatically on next boot. You could even enable and start the service in one go with
systemctl enable --now vgaoff.service
as of systemd v.220 (on older setups you'll have to start it manually).
For more details see systemd.service manual page.
Troubleshooting
How to see full log of a systemd service?
systemd service exit codes and status information explanation
| How to write startup script for Systemd? |
1,327,107,993,000 |
I'm reading an example bash shell script:
#!/bin/bash
# This script makes a backup of my home directory.
cd /home
# This creates the archive
tar cf /var/tmp/home_franky.tar franky > /dev/null 2>&1
# First remove the old bzip2 file. Redirect errors because this generates some if the archive
# does not exist. Then create a new compressed file.
rm /var/tmp/home_franky.tar.bz2 2> /dev/null
bzip2 /var/tmp/home_franky.tar
# Copy the file to another host - we have ssh keys for making this work without intervention.
scp /var/tmp/home_franky.tar.bz2 bordeaux:/opt/backup/franky > /dev/null 2>&1
# Create a timestamp in a logfile.
date >> /home/franky/log/home_backup.log
echo backup succeeded >> /home/franky/log/home_backup.log
I'm trying to understand the use of /dev/null 2>&1 here. At first, I thought this script uses /dev/null in order to gracefully ignore errors, without causing the script to crash (kind of like try catch exception handling in programming languages). Because I don't see how using tar to compress a directory into a tar file could possibly cause any type of errors.
|
No, this will not prevent the script from crashing. If any errors occur in the tar process (e.g.: permission denied, no such file or directory, ...) the script will still crash.
This is because using > /dev/null 2>&1 redirects all your command output (both stdout and stderr) to /dev/null, meaning no output is printed to the terminal.
By default:
stdin ==> fd 0
stdout ==> fd 1
stderr ==> fd 2
In the script, you use > /dev/null causing:
stdin ==> fd 0
stdout ==> /dev/null
stderr ==> fd 2
And then 2>&1 causing:
stdin ==> fd 0
stdout ==> /dev/null
stderr ==> stdout
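Note that the order matters, because redirections are processed left to right: 2>&1 > /dev/null is not the same thing. A small demonstration, using a throwaway helper function that writes one line to each stream:

```shell
# helper writing to both streams
both() { echo out; echo err >&2; }

both > /dev/null 2>&1   # prints nothing: stdout -> /dev/null, then stderr -> same place
both 2>&1 > /dev/null   # prints "err": stderr is first duplicated to where stdout
                        # pointed at that moment (the terminal), then stdout is discarded
```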
| redirecting to /dev/null |
1,327,107,993,000 |
I am aware of three methods to delete all entries from a file.
They are
>filename
touch filename¹
filename < /dev/null
Of these three I abuse >filename the most as that requires the least number of keystrokes.
However, I would like to know which is the most efficient of the three (if there are any more efficient methods) with respect to large log files and small files.
Also, how does the three codes operate and delete the contents?
¹ Edit: as discussed in this answer, this actually does not clear the file!
|
Actually, the second form touch filename doesn't delete anything from the file - it only creates an empty file if one did not exist, or updates the last-modified date of an existing file.
And the third filename < /dev/null tries to run filename with /dev/null as input.
cp /dev/null filename works.
As for efficiency, the most efficient would be truncate -s 0 filename (see here).
Otherwise, cp /dev/null filename or > filename are both fine. They both open the file with the truncate-on-open flag and then close it. cp also opens /dev/null, so that makes it marginally slower.
On the other hand, truncate would likely be slower than > filename when run from a script since running the truncate command requires the system to open the executable, load it, and then run it.
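A quick way to convince yourself the working methods end up equivalent (the file name here is a throwaway created with mktemp; note that truncate is a GNU coreutils command and may be absent on e.g. macOS):

```shell
f=$(mktemp)

echo 'some log data' > "$f"
: > "$f"                    # redirection alone truncates the file
[ -s "$f" ] || echo "empty after : >"

echo 'some log data' > "$f"
cp /dev/null "$f"
[ -s "$f" ] || echo "empty after cp"

echo 'some log data' > "$f"
truncate -s 0 "$f"          # GNU coreutils
[ -s "$f" ] || echo "empty after truncate"

rm -f "$f"
```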
| Most efficient method to empty the contents of a file |
1,327,107,993,000 |
My problem:
I'm writing a bash script and in it I'd like to check if a given service is running.
I know how to do this manually, with $ service [service_name] status.
But (especially since the move to systemd) that prints a whole bunch of text that's a little messy to parse. I assumed there's a command made for scripts with simple output or a return value I can check.
But Googling around only yields a ton of "Oh, just ps aux | grep -v grep | grep [service_name]" results. That can't be the best practice, is it? What if another instance of that command is running, but not one started by the SysV init script?
Or should I just shut up and get my hands dirty with a little pgrep?
|
systemctl has an is-active subcommand for this:
systemctl is-active --quiet service
will exit with status zero if service is active, non-zero otherwise, making it ideal for scripts:
systemctl is-active --quiet service && echo Service is running
If you omit --quiet it will also output the current status to its standard output.
Some units can be active even though nothing is running to provide the service: units marked as “RemainAfterExit” are considered active if they exit successfully, the idea being that they provide a service which doesn’t need a daemon (e.g. they configure some aspect of the system). Units involving daemons will however only be active if the daemon is still running.
Oneshot units without “RemainAfterExit” never enter the active unit state, so is-active never succeeds; to handle such units, is-active’s text output can be analysed instead:
systemctl is-active service
This will output “activating” for a oneshot unit that’s currently running, “inactive” for a oneshot unit that’s currently not running but was successful the last time it ran (if any), and “failed” for a oneshot unit that’s currently not running and failed the last time it ran. is-active always returns a non-zero status with these units, so run
systemctl is-active service ||:
if you need to ignore that.
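In a script this fits naturally into a small wrapper function; the unit name below is just a hypothetical example:

```shell
# returns success (exit 0) if the given unit is active
service_running() {
    systemctl is-active --quiet "$1"
}

# hypothetical unit name; replace with your own
if service_running nginx.service; then
    echo "nginx is up"
else
    echo "nginx is not running"
fi
```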
| The "proper" way to test if a service is running in a script |
1,327,107,993,000 |
If you've been following unix.stackexchange.com for a while, you
should hopefully know by now that leaving a variable
unquoted in list context (as in echo $var) in Bourne/POSIX
shells (zsh being the exception) has a very special meaning and
shouldn't be done unless you have a very good reason to.
It's discussed at length in a number of Q&A here (Examples: Why does my shell script choke on whitespace or other special characters?, When is double-quoting necessary?, Expansion of a shell variable and effect of glob and split on it, Quoted vs unquoted string expansion)
That has been the case since the initial release of the Bourne
shell in the late 70s and hasn't been changed by the Korn shell
(one of David Korn's biggest
regrets (question #7)) or bash which mostly
copied the Korn shell, and that's how that has been specified by POSIX/Unix.
Now, we're still seeing a number of answers here and even
occasionally publicly released shell code where
variables are not quoted. You'd have thought people would have
learnt by now.
In my experience, there are mainly 3 types of people who omit to
quote their variables:
beginners. Those can be excused as admittedly it's a
completely unintuitive syntax. And it's our role on this site
to educate them.
forgetful people.
people who are not convinced even after repeated hammering,
who think that surely the Bourne shell author did not
intend us to quote all our variables.
Maybe we can convince them if we expose the risk associated with
this kind of behaviours.
What's the worst thing that can possibly happen if you
forget to quote your variables. Is it really that bad?
What kind of vulnerability are we talking of here?
In what contexts can it be a problem?
|
Preamble
First, I'd say it's not the right way to address the problem.
It's a bit like saying "you should not murder people because
otherwise you'll go to jail".
Similarly, you don't quote your variable because otherwise
you're introducing security vulnerabilities. You quote your
variables because it is wrong not to (but if the fear of the jail can help, why not).
A little summary for those who've just jumped on the train.
In most shells, leaving a variable expansion unquoted (though
that (and the rest of this answer) also applies to command
substitution (`...` or $(...)) and arithmetic expansion ($((...)) or $[...])) has a very special
meaning. The best way to describe it is that it is like
invoking some sort of implicit split+glob operator¹.
cmd $var
in another language would be written something like:
cmd(glob(split($var)))
$var is first split into a list of words according to complex
rules involving the $IFS special parameter (the split part)
and then each word resulting of that splitting is considered as
a pattern which is expanded to a list of files that match it
(the glob part).
As an example, if $var contains *.txt,/var/*.xml and $IFS
contains ,, cmd would be called with a number of arguments,
the first one being cmd and the next ones being the txt
files in the current directory and the xml files in /var.
If you wanted to call cmd with just the two literal arguments cmd
and *.txt,/var/*.xml, you'd write:
cmd "$var"
which would be in your other more familiar language:
cmd($var)
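The difference is easy to observe in a scratch directory (created here with mktemp -d purely for illustration):

```shell
demo=$(mktemp -d)
cd "$demo"
touch a.txt b.txt

var='*.txt'
printf '<%s>\n' $var      # unquoted: split+glob -> <a.txt> and <b.txt>
printf '<%s>\n' "$var"    # quoted: one literal argument -> <*.txt>
```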
What do we mean by vulnerability in a shell?
After all, it's been known since the dawn of time that shell
scripts should not be used in security-sensitive contexts.
Surely, OK, leaving a variable unquoted is a bug but that can't
do that much harm, can it?
Well, despite the fact that anybody would tell you that shell
scripts should never be used for web CGIs, or that thankfully
most systems don't allow setuid/setgid shell scripts nowadays,
one thing that shellshock (the remotely exploitable bash bug
that made the headlines in September 2014) revealed is that
shells are still extensively used where they probably shouldn't:
in CGIs, in DHCP client hook scripts, in sudoers commands,
invoked by (if not as) setuid commands...
Sometimes unknowingly. For instance system('cmd $PATH_INFO')
in a php/perl/python CGI script does invoke a shell to interpret that command line (not to
mention the fact that cmd itself may be a shell script and its
author may have never expected it to be called from a CGI).
You've got a vulnerability when there's a path for privilege
escalation, that is when someone (let's call him the attacker)
is able to do something he is not meant to.
Invariably that means the attacker providing data, that data
being processed by a privileged user/process which inadvertently
does something it shouldn't be doing, in most of the cases because
of a bug.
Basically, you've got a problem when your buggy code processes
data under the control of the attacker.
Now, it's not always obvious where that data may come from,
and it's often hard to tell if your code will ever get to
process untrusted data.
As far as variables are concerned, In the case of a CGI script,
it's quite obvious, the data are the CGI GET/POST parameters and
things like cookies, path, host... parameters.
For a setuid script (running as one user when invoked by
another), it's the arguments or environment variables.
Another very common vector is file names. If you're getting a
file list from a directory, it's possible that files have been
planted there by the attacker.
In that regard, even at the prompt of an interactive shell, you
could be vulnerable (when processing files in /tmp or ~/tmp
for instance).
Even a ~/.bashrc can be vulnerable (for instance, bash will
interpret it when invoked over ssh to run a ForcedCommand
like in git server deployments with some variables under the
control of the client).
Now, a script may not be called directly to process untrusted
data, but it may be called by another command that does. Or your
incorrect code may be copy-pasted into scripts that do (by you 3
years down the line or one of your colleagues). One place where it's
particularly critical is in answers in Q&A sites as you'll
never know where copies of your code may end up.
Down to business; how bad is it?
Leaving a variable (or command substitution) unquoted is by far
the number one source of security vulnerabilities associated
with shell code. Partly because those bugs often translate to
vulnerabilities but also because it's so common to see unquoted
variables.
Actually, when looking for vulnerabilities in shell code, the
first thing to do is look for unquoted variables. It's easy to
spot, often a good candidate, generally easy to track back to
attacker-controlled data.
There's an infinite number of ways an unquoted variable can turn
into a vulnerability. I'll just give a few common trends here.
Information disclosure
Most people will bump into bugs associated with unquoted
variables because of the split part (for instance, it's
common for files to have spaces in their names nowadays and space
is in the default value of IFS). Many people will overlook the
glob part. The glob part is at least as dangerous as the
split part.
Globbing done upon unsanitised external input means the
attacker can make you read the content of any directory.
In:
echo You entered: $unsanitised_external_input
if $unsanitised_external_input contains /*, that means the
attacker can see the content of /. No big deal. It becomes
more interesting though with /home/* which gives you a list of
user names on the machine, /tmp/*, /home/*/.forward for
hints at other dangerous practises, /etc/rc*/* for enabled
services... No need to name them individually. A value of /* /*/* /*/*/*... will just list the whole file system.
Denial of service vulnerabilities.
Taking the previous case a bit too far and we've got a DoS.
Actually, any unquoted variable in list context with unsanitized
input is at least a DoS vulnerability.
Even expert shell scripters commonly forget to quote things
like:
#! /bin/sh -
: ${QUERYSTRING=$1}
: is the no-op command. What could possibly go wrong?
That's meant to assign $1 to $QUERYSTRING if $QUERYSTRING
was unset. That's a quick way to make a CGI script callable from
the command line as well.
That $QUERYSTRING is still expanded though and because it's
not quoted, the split+glob operator is invoked.
Now, there are some globs that are particularly expensive to
expand. The /*/*/*/* one is bad enough as it means listing
directories up to 4 levels down. In addition to the disk and CPU
activity, that means storing tens of thousands of file paths
(40k here on a minimal server VM, 10k of which directories).
Now /*/*/*/*/../../../../*/*/*/* means 40k x 10k and
/*/*/*/*/../../../../*/*/*/*/../../../../*/*/*/* is enough to
bring even the mightiest machine to its knees.
Try it for yourself (though be prepared for your machine to
crash or hang):
a='/*/*/*/*/../../../../*/*/*/*/../../../../*/*/*/*' sh -c ': ${a=foo}'
Of course, if the code is:
echo $QUERYSTRING > /some/file
Then you can fill up the disk.
Just do a google search on shell
cgi or bash
cgi or ksh
cgi, and you'll find
a few pages that show you how to write CGIs in shells. Notice
how half of those that process parameters are vulnerable.
Even David Korn's
own
one
is vulnerable (look at the cookie handling).
up to arbitrary code execution vulnerabilities
Arbitrary code execution is the worst type of vulnerability,
since if the attacker can run any command, there's no limit on
what he may do.
That's generally the split part that leads to those. That
splitting results in several arguments to be passed to commands
when only one is expected. While the first of those will be used
in the expected context, the others will be in a different context
so potentially interpreted differently. Better with an example:
awk -v foo=$external_input '$2 == foo'
Here, the intention was to assign the content of the
$external_input shell variable to the foo awk variable.
Now:
$ external_input='x BEGIN{system("uname")}'
$ awk -v foo=$external_input '$2 == foo'
Linux
The second word resulting of the splitting of $external_input
is not assigned to foo but considered as awk code (here that
executes an arbitrary command: uname).
That's especially a problem for commands that can execute other
commands (awk, env, sed (GNU one), perl, find...) especially
with the GNU variants (which accept options after arguments).
Sometimes, you wouldn't suspect commands to be able to execute
others like ksh, bash or zsh's [ or printf...
for file in *; do
[ -f $file ] || continue
something-that-would-be-dangerous-if-$file-were-a-directory
done
If we create a directory called x -o yes, then the test
becomes positive, because it's a completely different
conditional expression we're evaluating.
Worse, if we create a file called x -a a[0$(uname>&2)] -gt 1,
with all ksh implementations at least (which includes the sh
of most commercial Unices and some BSDs), that executes uname
because those shells perform arithmetic evaluation on the
numerical comparison operators of the [ command.
$ touch x 'x -a a[0$(uname>&2)] -gt 1'
$ ksh -c 'for f in *; do [ -f $f ]; done'
Linux
Same with bash for a filename like x -a -v a[0$(uname>&2)].
Of course, if they can't get arbitrary execution, the attacker may
settle for lesser damage (which may help to get arbitrary
execution). Any command that can write files or change
permissions, ownership or have any main or side effect could be exploited.
All sorts of things can be done with file names.
$ touch -- '-R ..'
$ for file in *; do [ -f "$file" ] && chmod +w $file; done
And you end up making .. writeable (recursively with GNU
chmod).
Scripts doing automatic processing of files in publicly writable areas like /tmp are to be written very carefully.
What about [ $# -gt 1 ]
That's something I find exasperating. Some people go down all
the trouble of wondering whether a particular expansion may be
problematic to decide if they can omit the quotes.
It's like saying. Hey, it looks like $# cannot be subject to
the split+glob operator, let's ask the shell to split+glob it.
Or Hey, let's write incorrect code just because the bug is
unlikely to be hit.
Now how unlikely is it? OK, $# (or $!, $? or any
arithmetic substitution) may only contain digits (or - for
some²) so the glob part is out. For the split part to do
something though, all we need is for $IFS to contain digits (or -).
With some shells, $IFS may be inherited from the environment,
but if the environment is not safe, it's game over anyway.
Now if you write a function like:
my_function() {
[ $# -eq 2 ] || return
...
}
What that means is that the behaviour of your function depends
on the context in which it is called. Or in other words, $IFS
becomes one of the inputs to it. Strictly speaking, when you
write the API documentation for your function, it should be
something like:
# my_function
# inputs:
# $1: source directory
# $2: destination directory
# $IFS: used to split $#, expected not to contain digits...
And code calling your function needs to make sure $IFS doesn't
contain digits. All that because you didn't feel like typing
those 2 double-quote characters.
Now, for that [ $# -eq 2 ] bug to become a vulnerability,
you'd need somehow for the value of $IFS to become under
control of the attacker. Conceivably, that would not normally
happen unless the attacker managed to exploit another bug.
That's not unheard of though. A common case is when people
forget to sanitize data before using it in arithmetic
expression. We've already seen above that it can allow
arbitrary code execution in some shells, but in all of them, it allows
the attacker to give any variable an integer value.
For instance:
n=$(($1 + 1))
if [ $# -gt 2 ]; then
echo >&2 "Too many arguments"
exit 1
fi
And with a $1 with value (IFS=-1234567890), that arithmetic
evaluation has the side effect of settings IFS and the next [
command fails which means the check for too many args is
bypassed.
What about when the split+glob operator is not invoked?
There's another case where quotes are needed around variables and other expansions: when it's used as a pattern.
[[ $a = $b ]] # a `ksh` construct also supported by `bash`
case $a in ($b) ...; esac
do not test whether $a and $b are the same (except with zsh) but if $a matches the pattern in $b. And you need to quote $b if you want to compare as strings (same thing in "${a#$b}" or "${a%$b}" or "${a##*$b*}" where $b should be quoted if it's not to be taken as a pattern).
What that means is that [[ $a = $b ]] may return true in cases where $a is different from $b (for instance when $a is anything and $b is *) or may return false when they are identical (for instance when both $a and $b are [a]).
Can that make for a security vulnerability? Yes, like any bug. Here, the attacker can alter your script's logical code flow and/or break the assumptions that your script are making. For instance, with a code like:
if [[ $1 = $2 ]]; then
echo >&2 '$1 and $2 cannot be the same or damage will incur'
exit 1
fi
The attacker can bypass the check by passing '[a]' '[a]'.
Now, if neither that pattern matching nor the split+glob operator apply, what's the danger of leaving a variable unquoted?
I have to admit that I do write:
a=$b
case $a in...
There, quoting doesn't harm but is not strictly necessary.
However, one side effect of omitting quotes in those cases (for instance in Q&A answers) is that it can send a wrong message to beginners: that it may be all right not to quote variables.
For instance, they may start thinking that if a=$b is OK, then export a=$b would be as well (which it's not in many shells as it's in arguments to the export command so in list context) or env a=$b.
There are a few places though where quotes are not accepted. The main one being inside Korn-style arithmetic expressions in many shells like in echo "$(( $1 + 1 ))" "${array[$1 + 1]}" "${var:$1 + 1}" where the $1 must not be quoted (being in a list context --the arguments to a simple command-- the overall expansions still needs to be quoted though).
Inside those, the shell understands a separate language altogether inspired from C. In AT&T ksh for instance $(( 'd' - 'a' )) expands to 3 like it does in C and not the same as $(( d - a )) would. Double quotes are ignored in ksh93 but cause a syntax error in many other shells. In C, "d" - "a" would return the difference between pointers to C strings. Doing the same in shell would not make sense.
What about zsh?
zsh did fix most of those design awkwardnesses. In zsh (at least when not in sh/ksh emulation mode), if you want splitting, or globbing, or pattern matching, you have to request it explicitly: $=var to split, and $~var to glob or for the content of the variable to be treated as a pattern.
However, splitting (but not globbing) is still done implicitly upon unquoted command substitution (as in echo $(cmd)).
Also, a sometimes unwanted side effect of not quoting variable is the empties removal. The zsh behaviour is similar to what you can achieve in other shells by disabling globbing altogether (with set -f) and splitting (with IFS=''). Still, in:
cmd $var
There will be no split+glob, but if $var is empty, instead of receiving one empty argument, cmd will receive no argument at all.
That can cause bugs (like the obvious [ -n $var ]). That can possibly break a script's expectations and assumptions and cause vulnerabilities.
As the empty variable can cause an argument to be just removed, that means the next argument could be interpreted in the wrong context.
As an example,
printf '[%d] <%s>\n' 1 $attacker_supplied1 2 $attacker_supplied2
If $attacker_supplied1 is empty, then $attacker_supplied2 will be interpreted as an arithmetic expression (for %d) instead of a string (for %s) and any unsanitized data used in an arithmetic expression is a command injection vulnerability in Korn-like shells such as zsh.
$ attacker_supplied1='x y' attacker_supplied2='*'
$ printf '[%d] <%s>\n' 1 $attacker_supplied1 2 $attacker_supplied2
[1] <x y>
[2] <*>
fine, but:
$ attacker_supplied1='' attacker_supplied2='psvar[$(uname>&2)0]'
$ printf '[%d] <%s>\n' 1 $attacker_supplied1 2 $attacker_supplied2
Linux
[1] <2>
[0] <>
The uname arbitrary command was run.
Also note that while zsh doesn't do globbing upon substitutions by default, as globs in zsh are much more powerful than in other shells, that means they can do a lot more damage if ever you enabled the globsubst option at the same time of the extendedglob one, or without disabling bareglobqual and left some variables unintentionally unquoted.
For instance, even:
set -o globsubst
echo $attacker_controlled
Would be an arbitrary command execution vulnerability, because commands can be executed as part of glob expansions, for instance with the evaluation glob qualifier:
$ set -o globsubst
$ attacker_controlled='.(e[uname])'
$ echo $attacker_controlled
Linux
.
emulate sh # or ksh
echo $attacker_controlled
doesn't cause an ACE vulnerability (though it is still a DoS one, as in sh) because bareglobqual is disabled in sh/ksh emulation. There's no good reason to enable globsubst other than in those sh/ksh emulations when wanting to interpret sh/ksh code.
What about when you do need the split+glob operator?
Yes, that's typically when you do want to leave your variable unquoted. But then you need to make sure you tune your split and glob operators correctly before using it. If you only want the split part and not the glob part (which is the case most of the time), then you do need to disable globbing (set -o noglob/set -f) and fix $IFS. Otherwise you'll cause vulnerabilities as well (like David Korn's CGI example mentioned above).
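As a minimal sketch of that recipe (the option string here is made up for illustration): disable globbing first, split on the default IFS, then restore the setting:

```shell
# deliberately unquoted expansion, but with the glob part disabled first
opts='-a -b --verbose'
set -f            # turn off filename generation (globbing)
set -- $opts      # split+glob: only the split part can now take effect
set +f            # restore globbing
printf 'arg: %s\n' "$@"
```

Each word of $opts becomes one positional parameter, and nothing in it can accidentally expand as a glob.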
Conclusion
In short, leaving a variable (or command substitution or
arithmetic expansion) unquoted in shells can be very dangerous
indeed especially when done in the wrong contexts, and it's very
hard to know which are those wrong contexts.
That's one of the reasons why it is considered bad practice.
Thanks for reading so far. If it goes over your head, don't
worry. One can't expect everyone to understand all the implications of
writing their code the way they write it. That's why we have
good practice recommendations, so they can be followed without
necessarily understanding why.
(and in case that's not obvious yet, please avoid writing
security sensitive code in shells).
And please quote your variables on your answers on this site!
¹In ksh93 and pdksh and derivatives, brace expansion is also performed unless globbing is disabled (in the case of ksh93 versions up to ksh93u+, even when the braceexpand option is disabled).
² In ksh93 and yash, arithmetic expansions can also include things like 1,2, 1e+66, inf, nan. There are even more in zsh, including # which is a glob operator with extendedglob, but zsh never does split+glob upon arithmetic expansion, even in sh emulation
| Security implications of forgetting to quote a variable in bash/POSIX shells |
1,327,107,993,000 |
Is there any easy way to pass (receive) named parameters to a shell script?
For example,
my_script -p_out '/some/path' -arg_1 '5'
And inside my_script.sh receive them as:
# I believe this notation does not work, but is there anything close to it?
p_out=$ARGUMENTS['p_out']
arg1=$ARGUMENTS['arg_1']
printf "The Argument p_out is %s" "$p_out"
printf "The Argument arg_1 is %s" "$arg1"
Is this possible in Bash or Zsh?
|
If you don't mind being limited to single-letter argument names i.e. my_script -p '/some/path' -a5, then in bash you could use the built-in getopts, e.g.
#!/bin/bash
while getopts ":a:p:" opt; do
case $opt in
a) arg_1="$OPTARG"
;;
p) p_out="$OPTARG"
;;
\?) echo "Invalid option -$OPTARG" >&2
exit 1
;;
esac
case $OPTARG in
-*) echo "Option $opt needs a valid argument"
exit 1
;;
esac
done
printf "Argument p_out is %s\n" "$p_out"
printf "Argument arg_1 is %s\n" "$arg_1"
Then you can do
$ ./my_script -p '/some/path' -a5
Argument p_out is /some/path
Argument arg_1 is 5
There is a helpful Small getopts tutorial, or you can type help getopts at the shell prompt.
Edit: The second case statement in the while loop triggers if an option that requires an argument (such as -p) is instead followed directly by another option, e.g. my_script -p -a5; it prints a message and exits the program.
| Passing named arguments to shell scripts |
1,327,107,993,000 |
I would like to delete the last character of a string, I tried this little script :
#! /bin/sh
t="lkj"
t=${t:-2}
echo $t
but it prints "lkj", what I am doing wrong?
|
In a POSIX shell, the syntax ${t:-2} means something different - it expands to the value of t if t is set and non null, and otherwise to the value 2. To trim a single character by parameter expansion, the syntax you probably want is ${t%?}
Note that in ksh93, bash or zsh, ${t:(-2)} or ${t: -2} (note the space) are legal as a substring expansion but are probably not what you want, since they return the substring starting at a position 2 characters in from the end (i.e. it removes the first character i of the string ijk).
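For example, a short sketch you can paste into any POSIX shell:

```shell
t="lkj"
echo "${t%?}"     # ? matches any single character: removes the last one -> lk
echo "${t%??}"    # the pattern can be longer: removes the last two -> l
```

${t%pattern} strips the shortest match of the pattern from the end of $t without modifying t itself.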
See the Shell Parameter Expansion section of the Bash Reference Manual for more info:
Bash Reference Manual – Shell Parameter Expansion
| Delete the last character of a string using string manipulation in shell script |
1,327,107,993,000 |
I have been trying to parallelize the following script, specifically each of the three FOR loop instances, using GNU Parallel but haven't been able to. The 4 commands contained within the FOR loop run in series, each loop taking around 10 minutes.
#!/bin/bash
kar='KAR5'
runList='run2 run3 run4'
mkdir normFunc
for run in $runList
do
fsl5.0-flirt -in $kar"deformed.nii.gz" -ref normtemp.nii.gz -omat $run".norm1.mat" -bins 256 -cost corratio -searchrx -90 90 -searchry -90 90 -searchrz -90 90 -dof 12
fsl5.0-flirt -in $run".poststats.nii.gz" -ref $kar"deformed.nii.gz" -omat $run".norm2.mat" -bins 256 -cost corratio -searchrx -90 90 -searchry -90 90 -searchrz -90 90 -dof 12
fsl5.0-convert_xfm -concat $run".norm1.mat" -omat $run".norm.mat" $run".norm2.mat"
fsl5.0-flirt -in $run".poststats.nii.gz" -ref normtemp.nii.gz -out $PWD/normFunc/$run".norm.nii.gz" -applyxfm -init $run".norm.mat" -interp trilinear
rm -f *.mat
done
|
Why don't you just fork (aka. background) them?
foo () {
local run=$1
fsl5.0-flirt -in $kar"deformed.nii.gz" -ref normtemp.nii.gz -omat $run".norm1.mat" -bins 256 -cost corratio -searchrx -90 90 -searchry -90 90 -searchrz -90 90 -dof 12
fsl5.0-flirt -in $run".poststats.nii.gz" -ref $kar"deformed.nii.gz" -omat $run".norm2.mat" -bins 256 -cost corratio -searchrx -90 90 -searchry -90 90 -searchrz -90 90 -dof 12
fsl5.0-convert_xfm -concat $run".norm1.mat" -omat $run".norm.mat" $run".norm2.mat"
fsl5.0-flirt -in $run".poststats.nii.gz" -ref normtemp.nii.gz -out $PWD/normFunc/$run".norm.nii.gz" -applyxfm -init $run".norm.mat" -interp trilinear
}
for run in $runList; do foo "$run" & done
In case that's not clear, the significant part is here:
for run in $runList; do foo "$run" & done
^
Causing the function to be executed in a forked shell in the background. That's parallel.
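One common follow-up (not part of the original answer): if the script must pause until every run has finished, add a wait after the loop. A toy sketch with a stand-in function:

```shell
foo() {
    sleep 1                 # stand-in for the long-running fsl5.0 commands
    echo "finished $1"
}
for run in run2 run3 run4; do foo "$run" & done
wait                        # block until all background jobs are done
echo "all runs complete"
```

wait with no arguments blocks until all child jobs of the current shell have exited, so anything after it can safely consume the results.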
| Parallelize a Bash FOR Loop |
1,327,107,993,000 |
I'm working on a simple bash script that should be able to run on Ubuntu and CentOS distributions (support for Debian and Fedora/RHEL would be a plus) and I need to know the name and version of the distribution the script is running on (in order to trigger specific actions, for instance the creation of repositories). So far what I've got is this:
OS=$(awk '/DISTRIB_ID=/' /etc/*-release | sed 's/DISTRIB_ID=//' | tr '[:upper:]' '[:lower:]')
ARCH=$(uname -m | sed 's/x86_//;s/i[3-6]86/32/')
VERSION=$(awk '/DISTRIB_RELEASE=/' /etc/*-release | sed 's/DISTRIB_RELEASE=//' | sed 's/[.]0/./')
if [ -z "$OS" ]; then
OS=$(awk '{print $1}' /etc/*-release | tr '[:upper:]' '[:lower:]')
fi
if [ -z "$VERSION" ]; then
VERSION=$(awk '{print $3}' /etc/*-release)
fi
echo $OS
echo $ARCH
echo $VERSION
This seems to work, returning ubuntu or centos (I haven't tried others) as the release name. However, I have a feeling that there must be an easier, more reliable way of finding this out -- is that true?
It doesn't work for RedHat.
/etc/redhat-release contains :
Redhat Linux Entreprise release 5.5
So, the version is not the third word, you'd better use :
OS_MAJOR_VERSION=`sed -rn 's/.*([0-9])\.[0-9].*/\1/p' /etc/redhat-release`
OS_MINOR_VERSION=`sed -rn 's/.*[0-9].([0-9]).*/\1/p' /etc/redhat-release`
echo "RedHat/CentOS $OS_MAJOR_VERSION.$OS_MINOR_VERSION"
|
To get OS and VER, the latest standard seems to be /etc/os-release.
Before that, there was lsb_release and /etc/lsb-release. Before that, you had to look for different files for each distribution.
Here's what I'd suggest
if [ -f /etc/os-release ]; then
# freedesktop.org and systemd
. /etc/os-release
OS=$NAME
VER=$VERSION_ID
elif type lsb_release >/dev/null 2>&1; then
# linuxbase.org
OS=$(lsb_release -si)
VER=$(lsb_release -sr)
elif [ -f /etc/lsb-release ]; then
# For some versions of Debian/Ubuntu without lsb_release command
. /etc/lsb-release
OS=$DISTRIB_ID
VER=$DISTRIB_RELEASE
elif [ -f /etc/debian_version ]; then
# Older Debian/Ubuntu/etc.
OS=Debian
VER=$(cat /etc/debian_version)
elif [ -f /etc/SuSe-release ]; then
# Older SuSE/etc.
...
elif [ -f /etc/redhat-release ]; then
# Older Red Hat, CentOS, etc.
...
else
# Fall back to uname, e.g. "Linux <version>", also works for BSD, etc.
OS=$(uname -s)
VER=$(uname -r)
fi
I think uname to get ARCH is still the best way. But the example you gave obviously only handles Intel systems. I'd either call it BITS like this:
case $(uname -m) in
x86_64)
BITS=64
;;
i*86)
BITS=32
;;
*)
BITS=?
;;
esac
Or change ARCH to be the more common, yet unambiguous versions: x86 and x64 or similar:
case $(uname -m) in
x86_64)
ARCH=x64 # or AMD64 or Intel64 or whatever
;;
i*86)
ARCH=x86 # or IA32 or Intel32 or whatever
;;
*)
# leave ARCH as-is
;;
esac
but of course that's up to you.
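Note that . /etc/os-release runs the file in the current shell, so its variables (NAME, VERSION_ID, ...) stay behind afterwards. If that bothers you, source it inside a command substitution, which runs in a subshell. A small self-contained demonstration against a fake file (the path and contents here are made up):

```shell
# fake os-release so the example doesn't depend on the host system
cat > /tmp/fake-os-release <<'EOF'
NAME="Ubuntu"
VERSION_ID="22.04"
EOF

# sourcing inside $( ) keeps NAME/VERSION_ID out of the current shell
os_info=$( . /tmp/fake-os-release; printf '%s %s' "$NAME" "$VERSION_ID" )
echo "$os_info"
```

On a real system you would point the dot command at /etc/os-release instead.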
| How can I get distribution name and version number in a simple shell script? |
1,327,107,993,000 |
I have came across this script:
#! /bin/bash
if (( $# < 3 )); then
echo "$0 old_string new_string file [file...]"
exit 0
else
ostr="$1"; shift
nstr="$1"; shift
fi
echo "Replacing \"$ostr\" with \"$nstr\""
for file in $@; do
if [ -f $file ]; then
echo "Working with: $file"
eval "sed 's/"$ostr"/"$nstr"/g' $file" > $file.tmp
mv $file.tmp $file
fi
done
What is the meaning of the lines where they use shift? I presume the script should be used with at least three arguments, so...?
|
shift is a bash built-in which kind of removes arguments from the beginning of the argument list. Given that the 3 arguments provided to the script are available in $1, $2, $3, then a call to shift will make $2 the new $1.
A shift 2 will shift by two making new $1 the old $3.
For more information, see here:
http://ss64.com/bash/shift.html
http://www.tldp.org/LDP/Bash-Beginners-Guide/html/sect_09_07.html
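A small sketch of the pattern used in the question's script: consume the two leading strings with shift, then everything that remains in "$@" is the file list:

```shell
set -- old new file1 "file 2"    # simulate the script's positional parameters
ostr=$1; shift                   # $1 was 'old'; now 'new' becomes $1
nstr=$1; shift                   # $1 was 'new'; now only the files remain
echo "replacing '$ostr' with '$nstr'"
printf 'file: %s\n' "$@"
```

After the two shifts, a loop like for file in "$@" sees only the file arguments.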
| What is the purpose of using shift in shell scripts? |
1,327,107,993,000 |
Most languages have naming conventions for variables, the most common style I see in shell scripts is MY_VARIABLE=foo. Is this the convention or is it only for global variables? What about variables local to the script?
|
Environment variables or shell variables introduced by the operating system, shell startup scripts, or the shell itself, etc., are usually all in CAPITALS1.
To prevent your variables from conflicting with these variables, it is a good practice to use lower_case variable names.
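A contrived example of the kind of clash this avoids (the variable name below is made up):

```shell
# HOSTNAME, PATH, HOME etc. are already taken by the environment;
# a lower_case name cannot collide with them
hostname_label="db01"
echo "$hostname_label"
```

Had the script assigned to HOSTNAME or PATH instead, it could silently change behaviour of the shell and of child processes.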
1A notable exception that may be worth knowing about is the path array, used by the zsh shell. This is the same as the common PATH variable but represented as an array.
| Are there naming conventions for variables in shell scripts? |
1,327,107,993,000 |
Can I redirect output to a log file and a background process at the same time?
In other words, can I do something like this?
nohup java -jar myProgram.jar 2>&1 > output.log &
Or, is that not a legal command? Or, do I need to manually move it to the background, like this:
java -jar myProgram.jar 2>$1 > output.log
jobs
[CTRL-Z]
bg 1
|
One problem with your first command is that you redirect stderr to where stdout is (if you changed the $ to a & as suggested in the comment) and then, you redirected stdout to some log file, but that does not pull along the redirected stderr. You must do it in the other order, first send stdout to where you want it to go, and then send stderr to the address stdout is at
some_cmd > some_file 2>&1 &
and then you could throw the & on to send it to the background. Jobs can be accessed with the jobs command. jobs will show you the running jobs, and number them. You could then talk about the jobs using a % followed by the number like kill %1 or so.
Also, without the & on the end you can suspend the command with Ctrlz, use the bg command to put it in the background and fg to bring it back to the foreground. In combination with the jobs command, this is powerful.
To clarify the above part about the order in which you write the commands: suppose stderr is address 1002, stdout is address 1001, and the file is 1008. The command reads left to right, so the first thing the shell sees in yours is 2>&1, which moves stderr to address 1001; it then sees > file, which moves stdout to 1008 but keeps stderr at 1001. It does not pull everything pointing at 1001 over to 1008; it simply takes stdout and moves it to the file.
The other way around, it moves stdout to 1008, and then moves stderr to the point that stdout is pointing to, 1008 as well. This way both can point to the single file.
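The ordering rule is easy to verify without any long-running program; this sketch sends both streams of a compound command to one file (the path is made up):

```shell
# redirect stdout to the file first, then point stderr at stdout
{ echo to-stdout; echo to-stderr >&2; } > /tmp/both.log 2>&1
cat /tmp/both.log
```

Swap the two redirections and only the stdout line ends up in the file, with the stderr line appearing on the terminal.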
| Can I redirect output to a log file and background a process at the same time? |
1,327,107,993,000 |
I have a script which runs rsync with a Git working directory as destination. I want the script to have different behavior depending on if the working directory is clean (no changes to commit), or not. For instance, if the output of git status is as below, I want the script to exit:
git status
Already up-to-date.
# On branch master
nothing to commit (working directory clean)
Everything up-to-date
If the directory is not clean then I would like it to execute some more commands.
How can I check for output like the above in a shell script?
|
Parsing the output of git status is a bad idea because the output is intended to be human readable, not machine-readable. There's no guarantee that the output will remain the same in future versions of Git or in differently configured environments.
UVVs comment is on the right track, but unfortunately the return code of git status doesn't change when there are uncommitted changes. It does, however, provide the --porcelain option, which causes the output of git status --porcelain to be formatted in an easy-to-parse format for scripts, and will remain stable across Git versions and regardless of user configuration.
We can use empty output of git status --porcelain as an indicator that there are no changes to be committed:
if [ -z "$(git status --porcelain)" ]; then
# Working directory clean
else
# Uncommitted changes
fi
If we do not care about untracked files in the working directory, we can use the --untracked-files=no option to disregard those:
if [ -z "$(git status --untracked-files=no --porcelain)" ]; then
# Working directory clean excluding untracked files
else
# Uncommitted changes in tracked files
fi
To make this more robust against conditions which actually cause git status to fail without output to stdout, we can refine the check to:
if output=$(git status --porcelain) && [ -z "$output" ]; then
# Working directory clean
else
# Uncommitted changes
fi
It's also worth noting that, although git status does not give meaningful exit code when the working directory is unclean, git diff provides the --exit-code option, which makes it behave similar to the diff utility, that is, exiting with status 1 when there were differences and 0 when none were found.
Using this, we can check for unstaged changes with:
git diff --exit-code
and staged, but not committed changes with:
git diff --cached --exit-code
Although git diff can report on untracked files in submodules via appropriate arguments to --ignore-submodules, unfortunately it seems that there is no way to have it report on untracked files in the actual working directory. If untracked files in the working directory are relevant, git status --porcelain is probably the best bet.
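Putting the git diff variants together, here is a sketch that builds a throw-away repository to show the exit codes (the paths and identity values are made up):

```shell
repo=$(mktemp -d)
cd "$repo"
git -c init.defaultBranch=main init -q
echo hello > f.txt
git add f.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm init

git diff --exit-code >/dev/null && echo "no unstaged changes"
echo change >> f.txt
git diff --exit-code >/dev/null || echo "unstaged changes present"
```

The first git diff exits 0 (clean tree), the second exits 1 after the file is modified.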
| Determine if Git working directory is clean from a script |
1,327,107,993,000 |
I can write
VAR=$VAR1
VAR=${VAR1}
VAR="$VAR1"
VAR="${VAR1}"
the end result to me all seems about the same. Why should I write one or the other? are any of these not portable/POSIX?
|
VAR=$VAR1 is a simplified version of VAR=${VAR1}. There are things the second can do that the first can't, for instance reference an array index (not portable) or remove a substring (POSIX-portable). See the More on variables section of the Bash Guide for Beginners and Parameter Expansion in the POSIX spec.
Using quotes around a variable as in rm -- "$VAR1" or rm -- "${VAR}" is a good idea. This makes the contents of the variable an atomic unit. If the variable value contains blanks (well, characters in the $IFS special variable, blanks by default) or globbing characters and you don't quote it, then each word is considered for filename generation (globbing) whose expansion makes as many arguments to whatever you're doing.
$ find .
.
./*r*
./-rf
./another
./filename
./spaced filename
./another spaced filename
./another spaced filename/x
$ var='spaced filename'
# usually, 'spaced filename' would come from the output of some command and you weren't expecting it
$ rm $var
rm: cannot remove 'spaced': No such file or directory
# oops! I just ran 'rm spaced filename'
$ var='*r*'
$ rm $var
# expands to: 'rm' '-rf' '*r*' 'another spaced filename'
$ find .
.
./another
./spaced filename
./another spaced filename
$ var='another spaced filename'
$ rm -- "$var"
$ find .
.
./another
./spaced filename
On portability:
According to POSIX.1-2008 section 2.6.2, the curly braces are optional.
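A short sketch of the main thing ${VAR1} buys you in portable code, namely disambiguating the variable name from the text that follows it:

```shell
var=dir
echo "$var/file"     # a delimiter follows, so braces are optional: dir/file
echo "${var}1"       # braces required: $var1 would be a different variable
echo "${var%r}"      # POSIX substring removal needs braces too: di
```

Without the braces in the second line, the shell would look up a variable named var1, which is unset, and print just "1".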
| $VAR vs ${VAR} and to quote or not to quote |
1,327,107,993,000 |
I want to write logic in shell script which will retry it to run again after 15 sec upto 5 times based on "status code=FAIL" if it fails due to some issue.
|
This script uses a counter n to limit the attempts at the command to five.
If the command is successful, break ends the loop.
n=0
until [ "$n" -ge 5 ]
do
command && break # substitute your command here
n=$((n+1))
sleep 15
done
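To see the loop in action, here is the same skeleton with a made-up stand-in command that fails twice before succeeding (the sleep is shortened for the demo):

```shell
tmp=$(mktemp)
echo 0 > "$tmp"
flaky() {                       # stand-in for the real command
    i=$(cat "$tmp")
    echo $((i+1)) > "$tmp"
    [ "$i" -ge 2 ]              # non-zero (FAIL) status on the first two calls
}

n=0
until [ "$n" -ge 5 ]
do
    flaky && break
    n=$((n+1))
    sleep 0                     # would be 'sleep 15' in the real script
done
echo "succeeded on attempt $((n+1))"
```

The loop runs flaky three times: the first two attempts fail, the third succeeds and break ends the loop.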
| How do I write a retry logic in script to keep retrying to run it upto 5 times? |
1,327,107,993,000 |
I tried to check if the PHONE_TYPE variable contains one of three valid values.
if [ "$PHONE_TYPE" != "NORTEL" ] || [ "$PHONE_TYPE" != "NEC" ] ||
[ "$PHONE_TYPE" != "CISCO" ]
then
echo "Phone type must be nortel,cisco or nec"
exit
fi
The above code did not work for me, so I tried this instead:
if [ "$PHONE_TYPE" == "NORTEL" ] || [ "$PHONE_TYPE" == "NEC" ] ||
[ "$PHONE_TYPE" == "CISCO" ]
then
: # do nothing
else
echo "Phone type must be nortel,cisco or nec"
exit
fi
Are there cleaner ways for this type of task?
|
I guess you're looking for:
if [ "$PHONE_TYPE" != "NORTEL" ] && [ "$PHONE_TYPE" != "NEC" ] &&
[ "$PHONE_TYPE" != "CISCO" ]
The rules for these equivalents are called De Morgan's laws and in your case meant:
not(A || B || C) => not(A) && not(B) && not(C)
Note the change in the boolean operator or and and.
Whereas you tried to do:
not(A || B || C) => not(A) || not(B) || not(C)
Which obviously doesn't work.
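Plugging a sample value into the corrected test (the value here is made up):

```shell
PHONE_TYPE="NEC"
if [ "$PHONE_TYPE" != "NORTEL" ] && [ "$PHONE_TYPE" != "NEC" ] &&
   [ "$PHONE_TYPE" != "CISCO" ]
then
    echo "Phone type must be nortel, cisco or nec"
else
    echo "valid phone type: $PHONE_TYPE"
fi
```

With PHONE_TYPE=NEC all three inequality tests cannot be simultaneously true, so the else branch runs; for any other value the error message prints.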
| Using the not equal operator for string comparison |
1,327,107,993,000 |
I have a Linux (RHEL 5.3) machine.
I need to add 10 days to a date to get a new date (an expiration date).
for example
# date
Sun Sep 11 07:59:16 IST 2012
So I need to get
NEW_expration_DATE = Sun Sep 21 07:59:16 IST 2012
Please advice how to calculate the new expiration date ( with bash , ksh , or manipulate date command ?)
|
You can just use the -d switch and provide a date to be calculated
date
Sun Sep 23 08:19:56 BST 2012
NEW_expration_DATE=$(date -d "+10 days")
echo $NEW_expration_DATE
Wed Oct 3 08:12:33 BST 2012
-d, --date=STRING
display time described by STRING, not ‘now’
This is quite a powerful tool as you can do things like
date -d "Sun Sep 11 07:59:16 IST 2012+10 days"
Fri Sep 21 03:29:16 BST 2012
or
TZ=IST date -d "Sun Sep 11 07:59:16 IST 2012+10 days"
Fri Sep 21 07:59:16 IST 2012
or
prog_end_date=`date '+%C%y%m%d' -d "$end_date+10 days"`
So if $end_date = 20131001 then $prog_end_date = 20131011.
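For scripting, it is often handier to pin the output format and base the arithmetic on a fixed timestamp rather than "now" (GNU date syntax, as used throughout this answer):

```shell
# add 10 days to a fixed timestamp; -u avoids timezone surprises
date -u -d '2012-09-11 07:59:16 UTC +10 days' '+%Y-%m-%d %H:%M:%S'
```

This prints 2012-09-21 07:59:16 regardless of when or where it is run, which makes such expiry calculations reproducible.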
| How do I add X days to date and get new date? |
1,327,107,993,000 |
I have multiple files that contain ascii text information in the first 5-10 lines, followed by well-tabulated matrix information. In a shell script, I want to remove these first few lines of text so that I can use the pure matrix information in another program. How can I use bash shell commands to do this?
If it's any help, I'm using Red Hat and Ubuntu Linux systems.
|
You can use sed, tail, or awk; examples below. (One caveat: in-place editing with sed -i writes a new file, so it breaks hard links and replaces a symlink with a regular file.) Example below.
$ cat t.txt
12
34
56
78
90
sed
$ sed -e '1,3d' < t.txt
78
90
You can also use sed in-place without a temp file: sed -i -e 1,3d yourfile. This won't echo anything, it will just modify the file in-place. If you don't need to pipe the result to another command, this is easier.
tail
$ tail -n +4 t.txt
78
90
awk
$ awk 'NR > 3 { print }' < t.txt
78
90
| How do I delete the first n lines of an ascii file using shell commands? |
1,327,107,993,000 |
The old advice used to be to double-quote any expression involving a $VARIABLE, at least if one wanted it to be interpreted by the shell as one single item, otherwise, any spaces in the content of $VARIABLE would throw off the shell.
I understand, however, that in more recent versions of shells, double-quoting is no longer always needed (at least for the purpose described above). For instance, in bash:
% FOO='bar baz'
% [ $FOO = 'bar baz' ] && echo OK
bash: [: too many arguments
% [[ $FOO = 'bar baz' ]] && echo OK
OK
% touch 'bar baz'
% ls $FOO
ls: cannot access bar: No such file or directory
ls: cannot access baz: No such file or directory
In zsh, on the other hand, the same three commands succeed. Therefore, based on this experiment, it seems that, in bash, one can omit the double quotes inside [[ ... ]], but not inside [ ... ] nor in command-line arguments, whereas, in zsh, the double quotes may be omitted in all these cases.
But inferring general rules from anecdotal examples like the above is a chancy proposition. It would be nice to see a summary of when double-quoting is necessary. I'm primarily interested in zsh, bash, and /bin/sh.
|
First, separate zsh from the rest. It's not a matter of old vs modern shells: zsh behaves differently. The zsh designers decided to make it incompatible with traditional shells (Bourne, ksh, bash), but easier to use.
Second, it is far easier to use double quotes all the time than to remember when they are needed. They are needed most of the time, so you'll need to learn when they aren't needed, not when they are needed.
In a nutshell, double quotes are necessary wherever a list of words or a pattern is expected. They are optional in contexts where a single raw string is expected by the parser.
What happens without quotes
Note that without double quotes, two things happen.
First, the result of the expansion (the value of the variable for a parameter substitution like $foo or ${foo}, or the output of the command for a command substitution like $(foo)) is split into words wherever it contains whitespace.
More precisely, the result of the expansion is split at each character that appears in the value of the IFS variable (separator character). If a sequence of separator characters contains whitespace (space, tab, or newline), the whitespace counts as a single character; leading, trailing, or repeated non-whitespace separators lead to empty fields. For example, with IFS=" :" (space and colon), :one::two : three: :four produces empty fields before one, between one and two, and (a single one) between three and four.
Each field that results from splitting is interpreted as a glob (a wildcard pattern) if it contains one of the characters [*?. If that pattern matches one or more file names, the pattern is replaced by the list of matching file names.
An unquoted variable expansion $foo is colloquially known as the “split+glob operator”, in contrast with "$foo" which just takes the value of the variable foo. The same goes for command substitution: "$(foo)" is a command substitution, $(foo) is a command substitution followed by split+glob.
Where you can omit the double quotes
Here are all the cases I can think of in a Bourne-style shell where you can write a variable or command substitution without double quotes, and the value is interpreted literally.
On the right-hand side of a scalar (not array) variable assignment.
var=$stuff
a_single_star=*
Note that you do need the double quotes after export or readonly, because in a few shells, they are still ordinary builtins, not a keyword. This is only true in some shells such as some older versions of dash, older versions of zsh (in sh emulation), yash, or posh; in bash, ksh, newer versions of dash and zsh export / readonly and co are treated specially as dual builtin / keyword (under some conditions) as POSIX now more clearly requires.
export VAR="$stuff"
In a case statement.
case $var in …
Note that you do need double quotes in a case pattern. Word splitting doesn't happen in a case pattern, but an unquoted variable is interpreted as a glob-style pattern whereas a quoted variable is interpreted as a literal string.
a_star='a*'
case $var in
"$a_star") echo "'$var' is the two characters a, *";;
$a_star) echo "'$var' begins with a";;
esac
Within double brackets. Double brackets are shell special syntax.
[[ -e $filename ]]
Except that you do need double quotes where a pattern or regular expression is expected: on the right-hand side of = or == or != or =~ (though for the latter, behaviour varies between shells).
a_star='a*'
if [[ $var == "$a_star" ]]; then echo "'$var' is the two characters a, *"
elif [[ $var == $a_star ]]; then echo "'$var' begins with a"
fi
You do need double quotes as usual within single brackets [ … ] because they are ordinary shell syntax (it's a command that happens to be called [). See Why does parameter expansion with spaces without quotes work inside double brackets "[[" but not inside single brackets "["?.
In a redirection in non-interactive POSIX shells (not bash, nor ksh88).
echo "hello world" >$filename
Some shells, when interactive, do treat the value of the variable as a wildcard pattern. POSIX prohibits that behaviour in non-interactive shells, but a few shells including bash (except in POSIX mode) and ksh88 (including when found as the (supposedly) POSIX sh of some commercial Unices like Solaris) still do it there (bash does also attempt splitting and the redirection fails unless that split+globbing results in exactly one word), which is why it's better to quote targets of redirections in a sh script in case you want to convert it to a bash script some day, or run it on a system where sh is non-compliant on that point, or it may be sourced from interactive shells.
Inside an arithmetic expression. In fact, you need to leave the quotes out in order for a variable to be parsed as an arithmetic expression in several shells.
expr=2*2
echo "$(($expr))"
However, you do need the quotes around the arithmetic expansion as it is subject to word splitting in most shells as POSIX requires (!?).
In an associative array subscript.
typeset -A a
i='foo bar*qux'
a[foo\ bar\*qux]=hello
echo "${a[$i]}"
An unquoted variable and command substitution can be useful in some rare circumstances:
When the variable value or command output consists of a list of glob patterns and you want to expand these patterns to the list of matching files.
When you know that the value doesn't contain any wildcard character, that $IFS was not modified and you want to split it at whitespace (well, only space, tab and newline) characters.
When you want to split a value at a certain character: disable globbing with set -o noglob / set -f, set IFS to the separator character (or leave it alone to split at whitespace), then do the expansion.
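For instance, the last recipe, splitting at a chosen character with globbing off, looks like this (the sample value is made up):

```shell
value='one:two*:three'
set -f               # no globbing, so 'two*' stays literal
IFS=:                # split at colons only
set -- $value        # deliberate split (no glob) into "$@"
set +f; unset IFS    # restore defaults
printf '<%s>\n' "$@"
```

The value splits into three fields, and the * survives untouched because filename generation was disabled before the unquoted expansion.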
Zsh
In zsh, you can omit the double quotes most of the time, with a few exceptions.
$var never expands to multiple words (assuming var isn't an array), but it expands to the empty list (as opposed to a list containing a single, empty word) if the value of var is the empty string. Contrast:
var=
print -l -- $var foo # prints just foo
print -l -- "$var" foo # prints an empty line, then foo
Similarly, "${array[@]}" expands to all the elements of the array, while $array only expands to the non-empty elements.
Like in ksh and bash, inside [[ … ]], a variable in the right-hand side of a ==, != or =~ operator needs to be double-quoted if it contains a string, and not quoted if it contains a pattern/regex: p='a*'; [[ abc == $p ]] is true but p='a*'; [[ abc == "$p" ]] is false.
The @ parameter expansion flag sometimes requires double quotes around the whole substitution: "${(@)foo}".
Command substitution undergoes field splitting if unquoted: echo $(echo 'a'; echo '*') prints a * (with a single space) whereas echo "$(echo 'a'; echo '*')" prints the unmodified two-line string. Use "$(somecommand)" to get the output of the command in a single word, sans final newlines. Use "${$(somecommand; echo .)%?}" to get the exact output of the command including final newlines. Use "${(@f)$(somecommand)}" to get an array of lines from the command's output (removing trailing empty lines if any though).
| When is double-quoting necessary? |
1,327,107,993,000 |
I have written a script that runs fine when executed locally:
./sysMole -time Aug 18 18
The arguments "-time", "Aug", "18", and "18" are successfully passed on to the script.
Now, this script is designed to be executed on a remote machine but, from a local directory on the local machine. Example:
ssh root@remoteServer "bash -s" < /var/www/html/ops1/sysMole
That also works fine. But the problem arises when I try to include those aforementioned arguments (-time Aug 18 18), for example:
ssh root@remoteServer "bash -s" < /var/www/html/ops1/sysMole -time Aug 18 18
After running that script I get the following error:
bash: cannot set terminal process group (-1): Invalid argument
bash: no job control in this shell
Please tell me what I'm doing wrong; this is greatly frustrating.
|
You were pretty close with your example. It works just fine when you use it with arguments such as these.
Sample script:
$ more ex.bash
#!/bin/bash
echo $1 $2
Example that works:
$ ssh serverA "bash -s" < ./ex.bash "hi" "bye"
hi bye
But it fails for these types of arguments:
$ ssh serverA "bash -s" < ./ex.bash "--time" "bye"
bash: --: invalid option
...
What's going on?
The problem you're encountering is that the argument, -time, or --time in my example, is being interpreted as a switch to bash -s. You can pacify bash by terminating it from taking any of the remaining command line arguments for itself using the -- argument.
Like this:
$ ssh root@remoteServer "bash -s" -- < /var/www/html/ops1/sysMole -time Aug 18 18
Examples
#1:
$ ssh serverA "bash -s" -- < ./ex.bash "-time" "bye"
-time bye
#2:
$ ssh serverA "bash -s" -- < ./ex.bash "--time" "bye"
--time bye
#3:
$ ssh serverA "bash -s" -- < ./ex.bash --time "bye"
--time bye
#4:
$ ssh < ./ex.bash serverA "bash -s -- --time bye"
--time bye
NOTE: Just to make it clear: where the redirection appears on the command line makes no difference, because ssh calls a remote shell with the concatenation of its arguments anyway. Quoting doesn't make much difference either, except when you need quoting for the remote shell, as in example #4:
$ ssh < ./ex.bash serverA "bash -s -- '<--time bye>' '<end>'"
<--time bye> <end>
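The same mechanism can be tried locally without an SSH server, since ssh just hands the command string to a remote shell anyway (the file path below is made up):

```shell
printf '#!/bin/bash\necho "$1 $2"\n' > /tmp/ex.bash
# without --, bash -s would try to parse --time as its own option
bash -s -- --time bye < /tmp/ex.bash
```

With the -- terminator in place, --time and bye reach the script as $1 and $2, so it prints "--time bye".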
| How can I execute local script on remote machine and include arguments? |
1,327,107,993,000 |
How can I delete the first line of a file and keep the changes?
I tried this but it erases the whole content of the file.
$sed 1d file.txt > file.txt
|
The reason file.txt is empty after that command is the order in which the shell does things. The first thing that happens with that line is the redirection. The file "file.txt" is opened and truncated to 0 bytes. After that the sed command runs, but at the point the file is already empty.
There are a few options, most involve writing to a temporary file.
sed '1d' file.txt > tmpfile; mv tmpfile file.txt # POSIX
sed -i '1d' file.txt # GNU sed only, creates a temporary file
perl -ip -e '$_ = undef if $. == 1' file.txt # also creates a temporary file
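Demonstrating the safe temporary-file variant end to end (file paths made up):

```shell
printf 'line1\nline2\nline3\n' > /tmp/file.txt
# write to a temp file first, then move it over the original
sed '1d' /tmp/file.txt > /tmp/file.tmp && mv /tmp/file.tmp /tmp/file.txt
cat /tmp/file.txt
```

Because sed reads the original before the shell truncates anything, the file ends up containing line2 and line3 as intended.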
| Delete First line of a file |
1,327,107,993,000 |
From what I've read, putting a command in parentheses should run it in a subshell, similar to running a script. If this is true, how does it see the variable x if x isn't exported?
x=1
Running (echo $x) on the command line results in 1
Running echo $x in a script results in nothing, as expected
|
A subshell starts out as an almost identical copy of the original shell process. Under the hood, the shell calls the fork system call1, which creates a new process whose code and memory are copies2. When the subshell is created, there are very few differences between it and its parent. In particular, they have the same variables. Even the $$ special variable keeps the same value in subshells: it's the original shell's process ID. Similarly $PPID is the PID of the parent of the original shell.
A few shells change a few variables in the subshell. Bash ≥4.0 sets BASHPID to the PID of the shell process, which changes in subshells. Bash, zsh and mksh arrange for $RANDOM to yield different values in the parent and in the subshell. But apart from built-in special cases like these, all variables have the same value in the subshell as in the original shell, the same export status, the same read-only status, etc. All function definitions, alias definitions, shell options and other settings are inherited as well.
A subshell created by (…) has the same file descriptors as its creator. Some other means of creating subshells modify some file descriptors before executing user code; for example, the left-hand side of a pipe runs in a subshell3 with standard output connected to the pipe. The subshell also starts out with the same current directory, the same signal mask, etc. One of the few exceptions is that subshells do not inherit custom traps: ignored signals (trap '' SIGNAL) remain ignored in the subshell, but other traps (trap CODE SIGNAL) are reset to the default action4.
A subshell is thus different from executing a script. A script is a separate program. This separate program might coincidentally be also a script which is executed by the same interpreter as the parent, but this coincidence doesn't give the separate program any special visibility on internal data of the parent. Non-exported variables are internal data, so when the interpreter for the child shell script is executed, it doesn't see these variables. Exported variables, i.e. environment variables, are transmitted to executed programs.
Thus:
x=1
(echo $x)
prints 1 because the subshell is a replication of the shell that spawned it.
x=1
sh -c 'echo $x'
happens to run a shell as a child process of a shell, but the x on the second line has no more connection with the x on the first line than in
x=1
perl -le 'print $x'
or
x=1
python -c 'print x'
1 Unless the shell optimizes the forking out, but emulates forking as much as necessary to preserve the behavior of the code that it's executing. Ksh93 optimizes a lot, other shells mostly don't.
2 Semantically, they're copies. From an implementation perspective, there's a lot of sharing going on.
3 For the right-hand side, it depends on the shell.
4 If you test this out, note that things like $(trap) may report the traps of the original shell. Note also that many shells have bugs in corner cases involving traps. For example ninjalj notes that as of bash 4.3, bash -x -c 'trap "echo ERR at \$BASH_SUBSHELL \$BASHPID" ERR; set -E; false; echo one subshell; (false); echo two subshells; ( (false) )' runs the ERR trap from the nested subshell in the “two subshells” case, but not the ERR trap from the intermediate subshell — set -E option should propagate the ERR trap to all subshells but the intermediate subshell is optimized away and so isn't there to run its ERR trap.
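The contrast between a subshell and a separately executed program can be seen directly in a short session (bash assumed; the variable name is just an example):

```shell
x=1
( echo "subshell: $x" )        # a subshell is a copy of the parent: prints "subshell: 1"
bash -c 'echo "child: $x"'     # a separate program: x is not exported, prints "child: "
export x
bash -c 'echo "child: $x"'     # after export, the child sees it: prints "child: 1"
```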
| Do parentheses really put the command in a subshell? |
1,327,107,993,000 |
I want to decode URL encoding, is there any built-in tool for doing this or could anyone provide me with a sed code that will do this?
I did search a bit through unix.stackexchange.com and on the internet but I couldn't find any command line tool for decoding url encoding.
What I want to do is simply in place edit a txt file so that:
%21 becomes !
%23 becomes #
%24 becomes $
%26 becomes &
%27 becomes '
%28 becomes (
%29 becomes )
And so on.
|
Found these Python one liners that do what you want:
Python2
$ alias urldecode='python -c "import sys, urllib as ul; \
print ul.unquote_plus(sys.argv[1])"'
$ alias urlencode='python -c "import sys, urllib as ul; \
print ul.quote_plus(sys.argv[1])"'
Python3
$ alias urldecode='python3 -c "import sys, urllib.parse as ul; \
print(ul.unquote_plus(sys.argv[1]))"'
$ alias urlencode='python3 -c "import sys, urllib.parse as ul; \
print (ul.quote_plus(sys.argv[1]))"'
Example
$ urldecode 'q+werty%3D%2F%3B'
q werty=/;
$ urlencode 'q werty=/;'
q+werty%3D%2F%3B
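If Python is not available, a bash-only sketch can do the decoding with no external tools, relying on printf's %b escape interpretation (bash assumed; the function name is illustrative):

```shell
# Decode '+' as space and %XX escapes via printf's %b expansion.
urldecode() {
    local data=${1//+/ }           # '+' becomes a space
    printf '%b' "${data//%/\\x}"   # '%XX' becomes '\xXX', which %b interprets
}
urldecode 'q+werty%3D%2F%3B'; echo   # q werty=/;
```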
References
Urlencode and urldecode from a command line
| Decoding URL encoding (percent encoding) |
1,327,107,993,000 |
How do you check if $* is empty? In other words, how to check if there were no arguments provided to a command?
|
To check if there were no arguments provided to the command, check value of $# variable then,
if [ $# -eq 0 ]; then
>&2 echo "No arguments provided"
exit 1
fi
If you want to use $*(not preferable) then,
if [ "$*" == "" ]; then
>&2 echo "No arguments provided"
exit 1
fi
Some explanation:
The second approach is not preferable because in positional parameter expansion * expands to the positional parameters, starting from one. When the expansion occurs within double quotes, it expands to a single word with the value of each parameter separated by the first character of the IFS special variable. That means a string is constructed. So there is extra overhead.
On the other hand # expands to the number of positional parameters.
Example:
$ command param1 param2
Here,
The value of $# is 2 and the value of $* is the string "param1 param2" (without quotes) when IFS is unset, because with IFS unset the parameters are joined with spaces.
For more details man bash and read topic named Special Parameters
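The difference is easy to see with set --, which sets the positional parameters directly:

```shell
set -- param1 param2
echo "$#"    # 2  (number of positional parameters)
echo "$*"    # param1 param2  (one string, joined by the first character of IFS)
set --       # clear the positional parameters
[ $# -eq 0 ] && echo "No arguments provided"   # the $# test from above
```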
| How to check if there are no parameters provided to a command? |
1,327,107,993,000 |
I've seen this comment many times on Unix & Linux as well as on other sites that use the phrasing "backticks have been deprecated", with respect to shells such as Bash & Zsh.
Is this statement true or false?
|
There are two different meanings of "deprecated."
be deprecated: (chiefly of a software feature) be usable but regarded as obsolete and best avoided, typically due to having been superseded.
—New Oxford American Dictionary
By this definition backticks are deprecated.
Deprecated status may also indicate the feature will be removed in the future.
—Wikipedia
By this definition backticks are not deprecated.
Still supported:
Citing the Open Group Specification on Shell Command Languages,
specifically section 2.6.3 Command Substitution, it can be seen that both forms of command substitution, backticks (`..cmd..`) or dollar parens ($(..cmd..)) are still supported insofar as the specification goes.
Command substitution allows the output of a command to be substituted in place
of the command name itself. Command substitution shall occur when the command
is enclosed as follows:
$(command)
or (backquoted version):
`command`
The shell shall expand the command substitution by executing command in a
subshell environment (see Shell Execution Environment) and replacing the
command substitution (the text of command plus the enclosing $() or
backquotes) with the standard output of the command, removing sequences of one
or more <newline> characters at the end of the substitution. Embedded <newline> characters before the end of the output shall not be removed; however,
they may be treated as field delimiters and eliminated during field splitting,
depending on the value of IFS and quoting that is in effect. If the output
contains any null bytes, the behavior is unspecified.
Within the backquoted style of command substitution, <backslash> shall retain
its literal meaning, except when followed by: '$', '`', or <backslash>. The
search for the matching backquote shall be satisfied by the first unquoted
non-escaped backquote; during this search, if a non-escaped backquote is
encountered within a shell comment, a here-document, an embedded command
substitution of the $(command) form, or a quoted string, undefined results
occur. A single-quoted or double-quoted string that begins, but does not end,
within the "`...`" sequence produces undefined results.
With the $(command) form, all characters following the open parenthesis to
the matching closing parenthesis constitute the command. Any valid shell
script can be used for command, except a script consisting solely of
re-directions which produces unspecified results.
So then why does everyone say that backticks have been deprecated?
Because most of the use cases should be making use of the dollar parens form instead of backticks. (Deprecated in the first sense above.) Many of the most reputable sites (including U&L) often state this as well, throughout, so it's sound advice. This advice should not be confused with some non-existent plan to remove support for backticks from shells.
BashFAQ #082 - Why is $(...) preferred over `...` (backticks)?
`...` is the legacy syntax required by only the very oldest of
non-POSIX-compatible bourne-shells. There are several reasons to always
prefer the $(...) syntax:
...
Bash Hackers Wiki - Obsolete and deprecated syntax
This is the older Bourne-compatible form of the command substitution.
Both the `COMMANDS` and $(COMMANDS) syntaxes are specified by POSIX,
but the latter is greatly preferred, though the former is unfortunately
still very prevalent in scripts. New-style command substitutions are widely
implemented by every modern shell (and then some). The only reason for using
backticks is for compatibility with a real Bourne shell (like Heirloom).
Backtick command substitutions require special escaping when nested, and
examples found in the wild are improperly quoted more often than not. See:
Why is $(...) preferred over `...` (backticks)?.
POSIX standard rationale
Because of these inconsistent behaviors, the backquoted variety of command substitution is not recommended for new applications that nest command substitutions or attempt to embed complex scripts.
NOTE: This third excerpt (above) goes on to show several situations where backticks simply won't work, but the newer dollar parens method does, beginning with the following paragraph:
Additionally, the backquoted syntax has historical restrictions on the contents of the embedded command. While the newer "$()" form can process any kind of valid embedded script, the backquoted form cannot handle some valid scripts that include backquotes.
If you continue reading that section the failures are highlighted showing how they would fail using backticks, but do work using the newer dollar parens notation.
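The nesting problem in particular is easy to reproduce. Both forms below compute the same value, but the backtick version needs each nested backquote backslash-escaped:

```shell
# $(...) nests naturally:
v1=$(echo "outer $(echo inner)")
# backticks require escaping the nested pair:
v2=`echo "outer \`echo inner\`"`
echo "$v1"   # outer inner
echo "$v2"   # outer inner
```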
Conclusions
So it's preferable that you use dollar parens instead of backticks but you aren't actually using something that's been technically "deprecated" as in "this will stop working entirely at some planned point."
After reading all this you should have the take away that you're strongly encouraged to use dollar parens unless you specifically require compatibility with a real original non-POSIX Bourne shell.
| Have backticks (i.e. `cmd`) in *sh shells been deprecated? |
1,327,107,993,000 |
I just saw this in an init script:
echo $"Stopping Apache"
What is that dollar-sign for?
My research so far:
I found this in the bash manual:
extquote
If set, $'string' and $"string" quoting is performed within ${parameter} expansions enclosed in double quotes. This option is enabled by default.
...but I'm not finding any difference between strings with and without the $ prefix:
$ echo "I am in $PWD"
I am in /var/shared/home/southworth/qed
$ echo $"I am in $PWD"
I am in /var/shared/home/southworth/qed
$ echo $"I am in ${PWD}"
I am in /var/shared/home/southworth/qed
$ echo "I am in ${PWD}"
I am in /var/shared/home/southworth/qed
$ echo 'I am in ${PWD}'
I am in ${PWD}
$ echo $'I am in ${PWD}'
I am in ${PWD}
$ echo $'I am in $PWD'
I am in $PWD
|
There are two different things going on here, both documented in the bash manual
$'
Dollar-sign single quote is a special form of quoting:
ANSI C Quoting
Words of the form $'string' are treated specially. The word expands to string, with backslash-escaped characters replaced as specified by the ANSI C standard.
$"
Dollar-sign double-quote is for localization:
Locale translation
A double-quoted string preceded by a dollar sign (‘$’) will cause the string to be translated according to the current locale. If the current locale is C or POSIX, the dollar sign is ignored. If the string is translated and replaced, the replacement is double-quoted.
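Of the two, $'...' is the form whose effect is easy to demonstrate locally; $"..." only changes output once a message catalog for the current locale is installed (bash assumed):

```shell
echo $'line1\nline2'   # $'...' expands \n into a real newline
echo 'line1\nline2'    # plain single quotes keep the backslash-n literal in bash
```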
| What does it mean to have a $"dollarsign-prefixed string" in a script? |
1,327,107,993,000 |
The Windows dir directory listing command has a line at the end showing the total amount of space taken up by the files listed. For example, dir *.exe shows all the .exe files in the current directory, their sizes, and the sum total of their sizes. I'd love to have similar functionality with my dir alias in bash, but I'm not sure exactly how to go about it.
Currently, I have alias dir='ls -FaGl' in my .bash_profile, showing
drwxr-x---+ 24 mattdmo 4096 Mar 14 16:35 ./
drwxr-x--x. 256 root 12288 Apr 8 21:29 ../
-rw------- 1 mattdmo 13795 Apr 4 17:52 .bash_history
-rw-r--r-- 1 mattdmo 18 May 10 2012 .bash_logout
-rw-r--r-- 1 mattdmo 395 Dec 9 17:33 .bash_profile
-rw-r--r-- 1 mattdmo 176 May 10 2012 .bash_profile~
-rw-r--r-- 1 mattdmo 411 Dec 9 17:33 .bashrc
-rw-r--r-- 1 mattdmo 124 May 10 2012 .bashrc~
drwx------ 2 mattdmo 4096 Mar 24 20:03 bin/
drwxrwxr-x 2 mattdmo 4096 Mar 11 16:29 download/
for example. Taking the answers from this question:
dir | awk '{ total += $4 }; END { print total }'
which gives me the total, but doesn't print the directory listing itself. Is there a way to alter this into a one-liner or shell script so I can pass any ls arguments I want to dir and get a full listing plus sum total? For example, I'd like to run dir -R *.jpg *.tif to get the listing and total size of those file types in all subdirectories. Ideally, it would be great if I could get the size of each subdirectory, but this isn't essential.
|
The following function does most of what you're asking for:
dir () { ls -FaGl "${@}" | awk '{ total += $4; print }; END { print total }'; }
... but it won't give you what you're asking for from dir -R *.jpg *.tif, because that's not how ls -R works. You might want to play around with the find utility for that.
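For the recursive dir -R *.jpg *.tif case, a find-based sketch can sum matching files across subdirectories (GNU find assumed for -printf; the patterns are examples):

```shell
# List matching files with their sizes, then print the grand total in bytes.
find . \( -name '*.jpg' -o -name '*.tif' \) -type f -printf '%s\t%p\n' |
    awk '{ total += $1; print }; END { print total " total bytes" }'
```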
| Show sum of file sizes in directory listing |
1,327,107,993,000 |
I am having a hard time defining and running my own shell functions in zsh. I followed the instructions on the official documentation and tried with easy example first, but I failed to get it work.
I have a folder:
~/.my_zsh_functions
In this folder I have a file called functions_1 with rwx user permissions. In this file I have the following shell function defined:
my_function () {
echo "Hello world";
}
I defined FPATH to include the path to the folder ~/.my_zsh_functions:
export FPATH=~/.my_zsh_functions:$FPATH
I can confirm that the folder .my_zsh_functions is in the functions path with echo $FPATH or echo $fpath
However, if I then try the following from the shell:
> autoload my_function
> my_function
I get:
zsh: my_function: function definition file not found
Is there anything else I need to do to be able to call my_function ?
Update:
The answers so far suggest sourcing the file with the zsh functions. This makes sense, but I am bit confused. Shouldn't zsh know where those files are with FPATH? What is the purpose of autoload then?
|
In zsh, the function search path ($fpath) defines a set of directories, which contain files that can be marked to be loaded automatically when the function they contain is needed for the first time.
Zsh has two modes of autoloading files: Zsh's native way and another mode that resembles ksh's autoloading. The latter is active if the KSH_AUTOLOAD option is set. Zsh's native mode is the default and I will not discuss the other way here (see "man zshmisc" and "man zshoptions" for details about ksh-style autoloading).
Okay. Say you got a directory `~/.zfunc' and you want it to be part of the function search path, you do this:
fpath=( ~/.zfunc "${fpath[@]}" )
That adds your private directory to the front of the search path. That is important if you want to override functions from zsh's installation with your own (like, when you want to use an updated completion function such as `_git' from zsh's CVS repository with an older installed version of the shell).
It is also worth noting, that the directories from `$fpath' are not searched recursively. If you want your private directory to be searched recursively, you will have to take care of that yourself, like this (the following snippet requires the `EXTENDED_GLOB' option to be set):
fpath=(
~/.zfuncs
~/.zfuncs/**/*~*/(CVS)#(/N)
"${fpath[@]}"
)
It may look cryptic to the untrained eye, but it really just adds all directories below `~/.zfunc' to `$fpath', while ignoring directories called "CVS" (which is useful if you're planning to check out a whole function tree from zsh's CVS into your private search path).
Let's assume you got a file `~/.zfunc/hello' that contains the following line:
printf 'Hello world.\n'
All you need to do now is mark the function to be automatically loaded upon its first reference:
autoload -Uz hello
"What is the -Uz about?", you ask? Well, that's just a set of options that will cause `autoload' to do the right thing, no matter what options are being set otherwise. The `U' disables alias expansion while the function is being loaded and the `z' forces zsh-style autoloading even if `KSH_AUTOLOAD' is set for whatever reason.
After that has been taken care of, you can use your new `hello' function:
zsh% hello
Hello world.
A word about sourcing these files: That's just wrong. If you'd source that `~/.zfunc/hello' file, it would just print "Hello world." once. Nothing more. No function will be defined. And besides, the idea is to only load the function's code when it is required. After the `autoload' call the function's definition is not read. The function is just marked to be autoloaded later as needed.
And finally, a note about $FPATH and $fpath: Zsh maintains those as linked parameters. The lower case parameter is an array. The upper case version is a string scalar, that contains the entries from the linked array joined by colons in between the entries. This is done, because handling a list of scalars is way more natural using arrays, while also maintaining backwards compatibility for code that uses the scalar parameter. If you choose to use $FPATH (the scalar one), you need to be careful:
FPATH=~/.zfunc:$FPATH
will work, while the following will not:
FPATH="~/.zfunc:$FPATH"
The reason is that tilde expansion is not performed within double quotes. This is likely the source of your problems. If echo $FPATH prints a tilde and not an expanded path then it will not work. To be safe, I'd use $HOME instead of a tilde like this:
FPATH="$HOME/.zfunc:$FPATH"
That being said, I'd much rather use the array parameter like I did at the top of this explanation.
You also shouldn't export the $FPATH parameter. It is only needed by the current shell process and not by any of its children.
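Putting the pieces above together, a minimal setup might look like this (paths and the function name are examples; these lines belong in ~/.zshrc):

```shell
# ~/.zshrc -- autoload functions from a private directory
fpath=( ~/.zfunc "${fpath[@]}" )   # prepend to the array form; don't export
autoload -Uz hello                 # mark ~/.zfunc/hello for autoloading
```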
Update
Regarding the contents of files in `$fpath':
With zsh-style autoloading, the content of a file is the body of the function it defines. Thus a file named "hello" containing a line echo "Hello world." completely defines a function called "hello". You're free to put
hello () { ... } around the code, but that would be superfluous.
The claim that one file may only contain one function is not entirely correct, though.
Especially if you look at some functions from the function based completion system (compsys) you'll quickly realise that that is a misconception. You are free to define additional functions in a function file. You are also free to do any sort of initialisation, that you may need to do the first time the function is called. However, when you do you will always define a function that is named like the file in the file and call that function at the end of the file, so it gets run the first time the function is referenced.
If - with sub-functions - you didn't define a function named like the file within the file, you'd end up with that function having function definitions in it (namely those of the sub-functions in the file). You would effectively be defining all your sub-functions every time you call the function that is named like the file. Normally, that is not what you want, so you'd re-define a function, that's named like the file within the file.
I'll include a short skeleton, that will give you an idea of how that works:
# Let's again assume that these are the contents of a file called "hello".
# You may run arbitrary code in here, that will run the first time the
# function is referenced. Commonly, that is initialisation code. For example
# the `_tmux' completion function does exactly that.
echo initialising...
# You may also define additional functions in here. Note, that these
# functions are visible in global scope, so it is paramount to take
# care when you're naming these so you do not shadow existing commands or
# redefine existing functions.
hello_helper_one () {
printf 'Hello'
}
hello_helper_two () {
printf 'world.'
}
# Now you should redefine the "hello" function (which currently contains
# all the code from the file) to something that covers its actual
# functionality. After that, the two helper functions along with the core
# function will be defined and visible in global scope.
hello () {
printf '%s %s\n' "$(hello_helper_one)" "$(hello_helper_two)"
}
# Finally run the redefined function with the same arguments as the current
# run. If this is left out, the functionality implemented by the newly
# defined "hello" function is not executed upon its first call. So:
hello "$@"
If you'd run this silly example, the first run would look like this:
zsh% hello
initialising...
Hello world.
And consecutive calls will look like this:
zsh% hello
Hello world.
I hope this clears things up.
(One of the more complex real-world examples that uses all those tricks is the already mentioned `_tmux' function from zsh's function based completion system.)
| How to define and load your own shell function in zsh |