How do I set things up so that the clear command also clears the output Konsole has stored in RAM? From my understanding, Konsole keeps screen output in RAM, and I want to clear that when I clear the visible part of the screen with the clear command.
Clearing the RAM used by a running process is a facility rarely offered to the user. Moreover, unless you know precisely what code the process is running, it is impossible to know what is stored where.

The visible part of the screen, as well as a variable number of previously displayed lines (1000 by default), are kept in the scrollback buffer. That buffer can be cleared altogether via the View > Clear Scrollback and Reset menu, or by pressing Ctrl+Shift+K if you kept the default shortcuts (see §2.1.3).

Keep in mind that while Konsole keeps no log, the user might well have:

- copied parts of the screen into the clipboard, (*1)
- saved parts of the screen into some file via the File > Save Output As menu option, or by whatever other means the shell offers.

Clearing those copies obviously cannot be achieved by Konsole.

(*1) Selectively clearing the clipboard history is a topic of its own. It is actually possible from the command line thanks to D-Bus. For example, if Klipper is running, firing

    qdbus org.kde.klipper /klipper org.kde.klipper.klipper.clearClipboardHistory

would wipe it entirely.
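For completeness, Konsole (like most xterm-compatible emulators) also honors the "erase saved lines" control sequence, CSI 3 J, so the scrollback can be cleared from the shell without touching the menus. This is a hedged sketch: the clearall name is just an illustrative choice, and support for the escape varies between emulators and versions.

```shell
# Clear the visible screen, then ask the emulator to erase its
# scrollback buffer with the xterm "erase saved lines" sequence.
clearall() {
    clear              # clears the visible part of the screen
    printf '\033[3J'   # CSI 3 J: erase the saved (scrolled-off) lines
}
```

Whether the emulator actually releases the memory backing the scrollback is up to its implementation; the sequence only tells it to discard the saved lines.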
Have the clear command also clear Konsole's in-RAM scrollback
I am working on an ARM-based processor (OS version: Linux 3.4.35) and I need to analyze the processor's performance while some processes are running. By typing the top command I can see some statistics, but I do not understand the details there. What information should I look for? Here are the details I need to understand (the difference between CPU usr and CPU sys; what nic, idle, io, irq and sirq mean; and how to clear cached RAM):

    Mem: 32184K used, 648K free, 0K shrd, 676K buff, 7536K cached
    CPU: 11.7% usr 29.4% sys 0.0% nic 41.1% idle 11.7% io 0.0% irq 5.8% sirq
The best place to get started with learning about a given Linux/Bash command is the manual page (manpage) of that command. Here is a link to a top manpage; in a shell, you should be able to read it by simply executing man top. I will also include a link to a blog explaining top.

The part relevant to your question can be found in section 2b. TASK and CPU States of the manpage:

    As a default, percentages for these individual categories are displayed. Where two labels are shown below, those for more recent kernel versions are shown first.
    us, user    : time running un-niced user processes
    sy, system  : time running kernel processes
    ni, nice    : time running niced user processes
    id, idle    : time spent in the kernel idle handler
    wa, IO-wait : time waiting for I/O completion
    hi          : time spent servicing hardware interrupts
    si          : time spent servicing software interrupts

us and ni are the percentages of CPU time spent on un-niced and niced processes respectively. Niced processes are user-space processes that have been given a priority ("nice") value, so that they either cooperate and get out of the way of more important kernel or system work, or do not. Here is a link to a fairly straightforward explanation of niceness and priority.

The others should be rather straightforward: idle is how much of the processor's capacity is idle (unused); io is time spent waiting for input/output; irq and sirq are hardware and software interrupts respectively.

If you want more information on how to sort top output, here is a relevant Stack Overflow post. Additionally, if you want to know more about clearing cached memory/buffers, here is a U&L Stack Exchange post. Please read over all the links I have provided, and if needed, dig a little deeper into how Linux process and memory handling works. There is a wealth of information out there online.
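As a rough illustration of where these numbers come from: top derives the usr/sys/nic/idle/io/irq/sirq percentages from the cumulative tick counters on the first line of /proc/stat (field order per proc(5)). A minimal sketch that labels those fields:

```shell
# The first line of /proc/stat holds cumulative CPU ticks in the order:
# user nice system idle iowait irq softirq ... - the same categories
# that top turns into percentages between two samples.
head -1 /proc/stat | awk '{printf "user=%s nice=%s system=%s idle=%s iowait=%s irq=%s softirq=%s\n", $2, $3, $4, $5, $6, $7, $8}'
```

For the "clear cached RAM" part, the usual diagnostic (as root, and only for measurement, since it hurts performance) is: sync; echo 3 > /proc/sys/vm/drop_caches.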
How to analyze top command results: CPU & RAM consumption
I have just upgraded my PC with another 32GB of RAM. Both the BIOS and lshw acknowledge the existence of all four 16GB RAM modules, but neither free, top nor htop sees the actual memory.

    $ sudo lshw
    adam-potwor
        width: 64 bits
        capabilities: smbios-3.0 dmi-3.0 smp vsyscall32
        configuration: boot=normal chassis=desktop family=To Be Filled By O.E.M. sku=To Be Filled By O.E.M. uuid=00020003-0004-0005-0006-000700080009
      *-core
           description: Motherboard
           product: X399 Taichi
           vendor: ASRock
           physical id: 0
           serial: M80-AA002300154
         *-firmware
              description: BIOS
              vendor: American Megatrends Inc.
              physical id: 0
              version: P1.50
              date: 09/05/2017
              size: 64KiB
              capacity: 15MiB
              capabilities: pci upgrade shadowing cdboot bootselect socketedrom edd int13floppy1200 int13floppy720 int13floppy2880 int5printscreen int9keyboard int14serial int17printer acpi usb biosbootspecification uefi
         *-memory
              description: System Memory
              physical id: 10
              slot: System board or motherboard
              size: 64GiB
            *-bank:0
                 description: DIMM DDR4 Synchronous Unbuffered (Unregistered) 2134 MHz (0.5 ns)
                 product: F4-3000C16-16GISB
                 vendor: Unknown
                 physical id: 0
                 serial: 00000000
                 slot: DIMM 0
                 size: 16GiB
                 width: 64 bits
                 clock: 2134MHz (0.5ns)
            *-bank:1
                 description: DIMM DDR4 Synchronous Unbuffered (Unregistered) 2134 MHz (0.5 ns)
                 product: F4-3000C16-16GISB
                 vendor: Unknown
                 physical id: 1
                 serial: 00000000
                 slot: DIMM 1
                 size: 16GiB
                 width: 64 bits
                 clock: 2134MHz (0.5ns)
            *-bank:2
                 description: DIMM DDR4 Synchronous Unbuffered (Unregistered) 2134 MHz (0.5 ns)
                 product: F4-3000C16-16GISB
                 vendor: Unknown
                 physical id: 2
                 serial: 00000000
                 slot: DIMM 0
                 size: 16GiB
                 width: 64 bits
                 clock: 2134MHz (0.5ns)
            *-bank:3
                 description: DIMM DDR4 Synchronous Unbuffered (Unregistered) 2134 MHz (0.5 ns)
                 product: F4-3000C16-16GISB
                 vendor: Unknown
                 physical id: 3
                 serial: 00000000
                 slot: DIMM 1
                 size: 16GiB
                 width: 64 bits
                 clock: 2134MHz (0.5ns)
            *-bank:4
                 description: [empty]
                 product: Unknown
                 vendor: Unknown
                 physical id: 4
                 serial: Unknown
                 slot: DIMM 0
            *-bank:5
                 description: [empty]
                 product: Unknown
                 vendor: Unknown
                 physical id: 5
                 serial: Unknown
                 slot: DIMM 1
            *-bank:6
                 description: [empty]
                 product: Unknown
                 vendor: Unknown
                 physical id: 6
                 serial: Unknown
                 slot: DIMM 0
            *-bank:7
                 description: [empty]
                 product: Unknown
                 vendor: Unknown
                 physical id: 7
                 serial: Unknown
                 slot: DIMM 1
         *-cache:0
              description: L1 cache
              physical id: 12
              slot: L1 - Cache
              size: 1536KiB
              capacity: 1536KiB
              clock: 1GHz (1.0ns)
              capabilities: pipeline-burst internal write-back unified
              configuration: level=1
         *-cache:1
              description: L2 cache
              physical id: 13
              slot: L2 - Cache
              size: 8MiB
              capacity: 8MiB
              clock: 1GHz (1.0ns)
              capabilities: pipeline-burst internal write-back unified
              configuration: level=2
         *-cache:2
              description: L3 cache
              physical id: 14
              slot: L3 - Cache
              size: 32MiB
              capacity: 32MiB
              clock: 1GHz (1.0ns)
              capabilities: pipeline-burst internal write-back unified
              configuration: level=3
         *-cpu
              description: CPU
              product: AMD Ryzen Threadripper 1950X 16-Core Processor
              vendor: Advanced Micro Devices [AMD]
              physical id: 15
              bus info: cpu@0
              version: AMD Ryzen Threadripper 1950X 16-Core Processor
              serial: Unknown
              slot: SP3r2
              size: 1888MHz
              capacity: 4200MHz
              width: 64 bits
              clock: 100MHz
              capabilities: x86-64 fpu fpu_exception wp vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid amd_dcm aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb hw_pstate sme ssbd vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 xsaves clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif overflow_recov succor smca cpufreq
              configuration: cores=16 enabledcores=16 threads=32

    $ free -h
                  total        used        free      shared  buff/cache   available
    Mem:            31G        1.8G         27G         18M        1.8G         29G
    Swap:           14G          0B         14G

I use Ubuntu 18.04, which comes with the kernel:

    $ uname -r
    4.15.0-33-generic

I could post this question on AskUbuntu, but I have a feeling that it is kernel-related rather than Ubuntu-specific. My current grub boot entry reads:

    menuentry 'Ubuntu' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-9c791297-4f61-471a-ac23-6228987c316e' {
            recordfail
            load_video
            gfxmode $linux_gfx_mode
            insmod gzio
            if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
            insmod part_gpt
            insmod ext2
            set root='hd0,gpt1'
            if [ x$feature_platform_search_hint = xy ]; then
              search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt1 --hint-efi=hd0,gpt1 --hint-baremetal=ahci0,gpt1 b0fddb5e-191e-4de0-b8bd-543c3e22a22a
            else
              search --no-floppy --fs-uuid --set=root b0fddb5e-191e-4de0-b8bd-543c3e22a22a
            fi
            linux /vmlinuz-4.15.0-33-generic root=UUID=9c791297-4f61-471a-ac23-6228987c316e ro rootflags=subvol=@ quiet splash $vt_handoff
            initrd /initrd.img-4.15.0-33-generic
    }

I also installed Windows to diagnose the problem, and Windows 10 64-bit also sees only 32GB of RAM. It reports the other 32GB as "hardware reserved", which points strongly to a bug in the motherboard's firmware. I also confirmed that my motherboard has the most recent BIOS version (as of 7 September 2018).

    $ dmesg | grep Memory
    [    0.000000] Memory: 32676352K/33432868K available (12300K kernel code, 2470K rwdata, 4244K rodata, 2408K init, 2416K bss, 756516K reserved, 0K cma-reserved)
    [    0.132315] x86/mm: Memory block size: 128MB
The problem was caused by improper memory-slot population. Moving the modules into the A2, B2, C2 and D2 slots, just as Mark Patrick described, solved all the problems. So the moral of the story is: when Linux (or Windows) does not use all the installed memory (even if it sees the chips in the slots), check whether the memory modules are inserted into the right slots.
Ubuntu 18.04 does not use more than 32GB of RAM [closed]
How do I set up a limit on a process's memory usage, similar to the open-files limit in /etc/security/limits.conf?

    ubuntu soft nofile 4096
    ubuntu hard nofile 8192

For example, when I launch a Python script that does a raw eval of JSON data from a 1.1G file, Python takes all of the RAM while creating objects for each dict and list in json.txt. It hung my machine for 20-30 minutes, after which:

    # python read_data.py
    Killed

The Ubuntu system is very stable today: it recovered from the hang, swap went up to 8GB of usage, RAM emptied completely, and the script was gone. I'm trying to find out whether the limit that killed my script is configurable. Can I tune my system so that every process that takes more than 70% of the current RAM size is killed, stopped, or something similar?
Using the same limits.conf file you can limit:

- data - max data size (KB)
- memlock - max locked-in-memory address space (KB)
- rss - max resident set size (KB)
- as - address space limit (KB)
- stack - max stack size (KB)

So that's pretty much all you can do there. After setting up limits you can check them with ulimit -a; you can also use ulimit itself to set them. For limits.conf to take effect you need pam_limits.so.

The last thing is what killed your script. I think it was the OOM killer, a feature in the Linux kernel which kills misbehaving processes when they eat all the real memory ;) Use dmesg to find out whether the killer was invoked; there will be lots of information about the candidates for killing and exactly which process was killed. Cheers,
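A quick way to see such a limit in action, without editing limits.conf, is ulimit in a subshell. The number below is illustrative: a 1 GiB soft cap on the address space, expressed in KB just like the limits.conf values.

```shell
# Cap the address space ("as" in limits.conf terms) for everything
# started inside this subshell, then print the cap back.
( ulimit -S -v 1048576; ulimit -S -v )

# Equivalent per-user lines in /etc/security/limits.conf (values in KB):
#   ubuntu soft as 1048576
#   ubuntu hard as 2097152
```

Any process started under that cap (e.g. your python read_data.py) would get a MemoryError or allocation failure instead of dragging the whole machine into swap before the OOM killer steps in.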
Memory Usage in Linux [duplicate]
I've never seen or heard of anything like this before, and I can't find anything similar online. I upgraded my network to gigabit and have been transferring large files lately (the set in question is a bunch of DVD images totaling over 200GB). Whenever I try to copy a set of files a few gigs or larger, I've noticed this odd behavior: Mint will load a chunk of the data into RAM - usually about 1.2 GB or smaller, sometimes only a few hundred megs - and then start transferring. When it finishes transferring that chunk, it will literally halt the transfer, spit out the old chunk of data, and wait to continue until the next chunk is loaded into RAM. Then it resumes transferring across the network. Then it repeats. And repeats. And repeats. Until the data is all done.

I took a screenshot of the System Monitor during one of these weird moments: you can see the death of the transfer at the precise moment the RAM dumps the data, and then you see the RAM level out at the same moment the transfer resumes. What's also funny is that I actually have six gigs of RAM, not the 3.2 that System Monitor would have you believe - this is the second time Mint has suddenly stopped reporting it all. But that's a question for another day.

It's not the worst thing in the world, but it is a little annoying when every other OS I've used loads data in and out of RAM simultaneously while transferring across the network, without pausing to think about it. It would save me time while moving these large sets of data if I could remedy this. Are there any suggestions, remedies, diagnoses, or theories?
Marco's comment inspired me to try a few things I hadn't thought of, and I discovered the answer. Well, I guess I discovered an alternative; if anyone knows more about this, please add an answer.

I ought to have specified beforehand how I was transferring the file: over the network (of course) via a WebDAV connection to my Synology NAS. After Marco's comment, I tested copying about 11.7 GB to the NAS using several different methods:

- Samba: Not only was the average speed much faster, but it didn't have the waiting-for-data-to-load problem.
- FTP: The average speed was faster and the transfer didn't stop to wait for data to load into RAM, but sometimes the CPU would get a little funny... by which I mean it maxed out one of the cores, and I had to kill the FTP process because it kept eating the CPU even after I cancelled the transfer.
- WebDAV: Same as before: the RAM would grab a bunch of data, the data would transfer, then the RAM would dump it and grab more, transfer that, etc.

So I have discovered that Samba is the better method in this instance. A little Googling showed that some people feel WebDAV is a clunky protocol, especially for LANs. Still, I don't know whether this is just the way WebDAV is (whether other people have the same problem), or whether it's something wrong with Mint, or just my particular setup of Mint. So I'll give this a few days before selecting this as the best answer, to see if others have better solutions or more to add.
Linux Mint Stops Network File Transfers to Load Data into RAM
I tried Peppermint OS (a distro based on Lubuntu) on a live USB. I checked RAM with the command free -h and the result is:

    peppermint@peppermint ~ $ free -h
                  total        used        free      shared  buff/cache   available
    Mem:           3.8G        1.0G        1.0G        866M        1.8G        1.8G
    Swap:          5.7G         51M        5.7G

My USB is 4G and my RAM is 4G. Why is swap 5.7G? And does this happen on other Ubuntu-based distros? As I understand it, trying Ubuntu distros on a live USB does not create swap on the hard disk, so I considered 3 cases:

1. I already created a swap of 4G before (when I created a dual boot), and this swap is used; but here swap is 5.7G.
2. Another swap is created on the USB; but my USB only has 4G, so why is swap 5.7G?
3. A swap of 1.7G is created on the USB, and 5.7G is the result of combining the two swaps. But I checked: my USB has 2.5G free and the Linux ISO on it is about 1.3G, so it cannot hold a swap of 1.7G.

As sourcejedi suggested, here is the result of cat /proc/swaps:

    peppermint@peppermint ~ $ cat /proc/swaps
    Filename      Type       Size     Used   Priority
    /dev/sda5     partition  3998716  0      -2
    /dev/zram0    partition  1006892  26320  5
    /dev/zram1    partition  1006892  26044  5
So it has detected your 4GB (decimal) swap partition and automatically enabled it. I think this is fairly common; it is probably from common Ubuntu code.

It is also using 2 x 1GB of zram swap. Data swapped to zram is compressed and stored in RAM. This can be particularly useful on low-memory systems. Android can use a similar (same?) approach, and so does Windows 10 on low-memory systems.

This is the result of Peppermint including the zram-config package from Ubuntu. As far as I know, the original Ubuntu Desktop does not install zram-config by default. You can see the code in /usr/bin/init-zram-swapping. It currently creates one zram device per CPU (for parallel compression or decompression):

    # Calculate memory to use for zram (1/2 of ram)
    totalmem=`LC_ALL=C free | grep -e "^Mem:" | sed -e 's/^Mem: *//' -e 's/ *.*//'`
    mem=$(((totalmem / 2 / ${NRDEVICES}) * 1024))

    # initialize the devices
    for i in $(seq ${NRDEVICES}); do
      DEVNUMBER=$((i - 1))
      echo $mem > /sys/block/zram${DEVNUMBER}/disksize
      mkswap /dev/zram${DEVNUMBER}
      swapon -p 5 /dev/zram${DEVNUMBER}
    done
Why is swap so large when I try Peppermint OS on live usb? (The usb's size is smaller than the swap's size)
I have attempted to make remastered live CDs of Tiny Core Linux, Archboot (didn't get very far), and SliTaz with lsdvd included, in order to create a lightweight transcoding solution that dedicates as much of the processing to the transcoding as I can manage. Additionally, I opted for these run-from-RAM distributions so that I would be able to swap the live CD out for a DVD without problems.

I have two virtual machines set up, one for Tiny Core Linux and the other for SliTaz. Within the respective operating systems, lsdvd seems to work just fine (I installed libdvdcss and libdvdread on both). For each, I remastered a live CD so that all three of these packages are installed, and both behave in a similar way: although they work on the installed OSes, they produce similar errors in a live CD environment. Here is the error output for each (this occurs after the libdvdcss version and before the DVD title table are displayed):

Tiny Core Linux:

    libdvdread: Can't seek to block 100301
    libdvdread: Can't seek to block 100301
    libdvdread: Can't seek to block 4096128
    libdvdread: Can't seek to block 4096128

SliTaz:

    hdc: command error: status=0x41 { DriveReady Error }
    hdc: command error: status=0x50 { LastFailedSense=0x05}
    hdc: possibly failed opcode: 0xa0

What interests me is that the problem seems to be distribution-independent. Is there something on my installed VMs that I should be including in order to mitigate this error? Googling suggested that setting a region might help, but I am unsure how to do that in a portable way. If there is a simpler way to accomplish what I am trying to make, I would be grateful if you could let me in on it! Learning the remastering processes for these different systems is intuitive, but it does take some time.
It turned out to be an issue with VirtualBox's "Passthrough" feature for the Host IDE Disc Drive. Without it, lsdvd cannot fully function.
Can't Get "lsdvd" To Work On Remastered Live CD's
After working for more than a day on my machine, swap usage grows to about 1GB. Some of my panel plugins get swapped out, so they become laggy. Moreover, the system doesn't unswap until I force it to with swapoff --all; swapon --all. Are there mechanisms in the Linux kernel to unswap when the load is low, or something like this? Sometimes the amount of used RAM is 90%, so writing a script which runs swapoff --all; swapon --all every hour is a bad idea.
When Linux needs to find RAM to store something, it looks for the pages in RAM that have gone unused the longest. If those pages belong to files, they're freed; if they are process memory, they're moved to swap. Linux doesn't know which pages are going to be used soon, nor which pages need to be available quickly (e.g. for interactive programs to stay responsive).

I don't think there's any way to give a particular process priority to stay in RAM. Pages can be locked into RAM (this requires root or the appropriate capability), but locking things into RAM is not recommended since it leaves less room for everything else. You can force a specific process to be loaded back into RAM by reading its memory: see my unswap script.

You can reduce the propensity to swap by setting the vm.swappiness sysctl parameter. However, beware that reducing swappiness is by no means guaranteed to make your system snappier. There's no miracle: if your system swaps less, it spends more time loading data from files (such as program code).

If you have a relatively large amount of memory, one setting that I've found not to be well tuned by default in 3.0-3.16+ kernels is another vm sysctl parameter: vm.vfs_cache_pressure. This parameter is somewhat similar to swappiness, but concerns kernel objects, especially the inode and dentry caches. Increasing the value effectively reduces the amount of memory devoted to this cache. You can see how much memory is used by the inode and dentry caches with slabtop, or with:

    </proc/slabinfo awk '{print $1, $3*$4}' | sort -k2n | tail

If you find your system sluggish in the morning, this may be because nightly cron jobs such as updatedb filled the memory with inode cache entries; try something like sysctl vm.vfs_cache_pressure=500. You can do a one-time flush of the cache with echo 2 >|/proc/sys/vm/drop_caches (don't do this on a regular basis as it can kill performance).
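To make the two parameters discussed above persist across reboots, they can go in a sysctl configuration file. A sketch, with a hypothetical filename and deliberately illustrative values (not recommendations):

```
# /etc/sysctl.d/99-memory-tuning.conf  (hypothetical path)
vm.swappiness = 10          # lower the propensity to swap process memory
vm.vfs_cache_pressure = 500 # reclaim inode/dentry cache more eagerly
```

Apply without rebooting via sysctl --system, or per key, e.g. sysctl vm.swappiness=10.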
Is there a way to force Linux to unswap when CPU usage is low?
We have a Linux machine that uses the disk /dev/sdb to save data; the mount point is configured in /etc/fstab. The disk is 100G, and the mount point is the folder /data. Now we want to use memory (we have 256G) instead of the disk. So is it possible to use the 256G of RAM instead of mounting the disk? If yes, how do we mount the folder /data in RAM?

What is a RAM disk? Basically, a RAM-based file system is something that creates storage in memory as if it were a partition on a disk; it's called a RAM disk. Note that RAM is volatile, and data is lost at system restart or after a crash. The most important benefit of RAM drives is their speed: even over 10 times faster than SSDs. These very fast storage types are ideal for applications that need speed and fast cache access. To repeat: data written to this type of file system is lost when the machine restarts.
More or less like you would for /dev/sdb1.

First of all, unmount /dev/sdb1 (you can mount it somewhere else):

    umount /dev/sdb1

You can create another directory and mount the disk there:

    mkdir /physical-data
    mount /dev/sdb1 /physical-data

...(plus the other options you already have). In /etc/fstab, rename "/data" to "/physical-data". Now the hard disk is mounted as before, but /data is free to use.

With the /data directory available as a mount point, you can associate an instance of a RAM-backed tmpfs with it, creating a ramdisk there called, for example, "ramdisk1" (or whatever else):

    mount -t tmpfs -o size=100G ramdisk1 /data

You can set up fstab to automatically remount it upon boot:

    ramdisk1 /data tmpfs nodev,nosuid,noexec,nodiratime,size=100G 0 0

You could then, for example, set up a script to run at boot that would do:

    rsync -a /physical-data/ /data/

assuming your /dev/sdb1 is mounted on /physical-data, and a script to be run before shutdown that copies the content of the modified RAM disk back to /dev/sdb1:

    rsync -a --delete /data/ /physical-data/

The --delete option ensures that if you delete a file from the RAM disk, it will also be deleted later from the hard disk. This way, your data resides on the hard disk when power is off, and reappears on the much faster RAM disk after powering on. Needless to say, experiment first and use caution: you could easily lose all of the data on the hard drive if something goes wrong.
RAM disk: is it possible to mount /data in RAM instead of on disk?
I use GNU make to start benchmarks of different write/read calls in R. In some cases this results in my RAM being fully used and the process being killed (which is fine, since I want to see the limits). The issue, however, is that the whole make process is killed, not just the specific script call.

For example, say I have the following Makefile. The target small.csv works fine, large.csv crashes (kind of expected), but then the other ... targets are not built. The question is: how can I start the process (Rscript tester.R --size large) in a way that lets make continue with the other (...) benchmarks after one process has been killed?

    # Makefile
    .PHONY: all
    all: small.csv large.csv ...

    small.csv: tester.R
        Rscript tester.R --size small

    large.csv: tester.R
        Rscript tester.R --size large

    ...

Note that the script touches the target in its first lines, so the target is always created, regardless of whether the run was killed.
This happens because make stops execution by default if any command indicates that it failed. You can disable this per command by prefixing the command with -:

    small.csv: tester.R
        -Rscript tester.R --size small

    large.csv: tester.R
        -Rscript tester.R --size large

Alternatively, you can run make with the -k option to tell it to keep going as far as possible after errors occur.
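A tiny self-contained demonstration of -k (the Makefile is a hypothetical throwaway written to /tmp; target a fails the way a killed Rscript would, yet b is still built):

```shell
# Write a two-target Makefile where "a" fails, then run make -k:
# despite the failure, make keeps going and still builds "b".
printf 'all: a b\na:\n\tfalse\nb:\n\t@echo b ran\n' > /tmp/keepgoing.mk
make -k -f /tmp/keepgoing.mk || true   # make exits non-zero, but "b ran" is printed
```

A process killed by the OOM killer also reports a non-zero status to make, so this models your benchmark scenario exactly.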
Make: start process that if being killed, does not kill make
I've been having an issue for many years: I have vm.swappiness=1 in sysctl.conf, but even though there's plenty of RAM available (possibly almost full in terms of cache, although there are usually still 4GB free, and "available" is still quite high), swap keeps getting used all the time and is sometimes almost full. I simply don't want swap to be used at all unless absolutely necessary.

I've just read in this answer from 2019 that:

    swappiness=0 tells the kernel to avoid swapping processes out of physical memory for as long as possible

However, I also read in this answer from 2012 that:

    swappiness=0: Kernel version 3.5 and newer: disables swapiness. Kernel version older than 3.5: avoids swapping processes out of physical memory for as long as possible.

So I'm confused... does vm.swappiness=0 fully disable swap, or is swap still used when absolutely necessary?

Kernel versions of my servers:

- 4.18 (AlmaLinux 8)
- 3.10 (CentOS 7)
The only way to fully disable swap is to not set it up in the first place. Setting vm.swappiness to 0 will cause the kernel to use swap only as a last resort; it is currently documented as:

    At 0, the kernel will not initiate swap until the amount of free and file-backed pages is less than the high watermark in a zone.

As far as I'm aware, setting swappiness to 0 has never disabled swap entirely, so you'll see swap used in both your environments if necessary.
On AlmaLinux 8 or CentOS 7, does vm.swappiness=0 mean that Swap is fully disabled?
    without overcommitment, every fork() would require enough free storage to duplicate the address space, even though the vast majority of pages never undergo copy-on-writes.

The above statement is taken from Robert Love's book (Linux System Programming, 2nd edition, Memory Management chapter, "Overcommitting and OOM" section). If I turn off swap, I can't overcommit main memory. In that scenario, will copy-on-write in main memory still work (i.e. for fork, malloc, mmap, etc.), or will the kernel try to pre-allocate all the data in memory without any lazy-allocation mechanism? Kindly correct me if I am missing anything.

Update: Friends, initially I thought I couldn't overcommit once swap was turned off. As the discussion below makes clear, we can overcommit even when swap is turned off.
You can overcommit without swap. The word "storage" in Robert Love's book refers in this context to physical RAM. The kernel sets up a memory space for the process that does not yet contain mappings to physical RAM (or that points to shared page frames, in the case of copy-on-write). The mappings are created on demand, when the pages are accessed. The assumption is that not all mappings are needed at the same time, so it is relatively safe to overcommit.
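One way to see overcommit at work on a swapless (or any) machine is the kernel's commit accounting in /proc/meminfo: Committed_AS, the total virtual memory promised to processes, regularly exceeds what could be backed all at once, which is exactly the lazy allocation described above. A sketch:

```shell
# Committed_AS = total virtual memory handed out to processes; with the
# default heuristic overcommit (mode 0) it may exceed MemTotal + swap,
# because pages are only backed by physical frames when first touched.
grep -E '^(MemTotal|CommitLimit|Committed_AS):' /proc/meminfo
cat /proc/sys/vm/overcommit_memory   # 0=heuristic, 1=always, 2=never
```

Setting vm.overcommit_memory=2 ("never") is the mode in which a swapless fork() really would need enough free memory up front, which is the scenario the book's quote is warning about.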
Will copy-on-write in main memory work while swap is off?
I'm using the rosbag tool from ROS. rosbag can record lots of data from other ROS nodes; in short, it generates a huge file, such as a file of 200MB. After generating such a file, I found that the buff/cache of the system had increased a lot. Here are my captures (screenshots of free output, before and after running rosbag).

You can see that after rosbag, buff/cache increased a lot, and at the same time free decreased from 19983 to 10896. I can't understand this. To my understanding, free is the free size of RAM and buff/cache is the size of the cache. Why can the cache use RAM? I've also found that if I delete the file generated by rosbag, buff/cache goes back from 17999 to 8925. Is this the system's doing? When does buff/cache increase, and when does it decrease?
In free's output:

- "free" represents memory which is unused;
- "buff/cache" represents memory used by buffers (data in memory which is waiting to be written to disk) and cache (data on disk which is also available in memory);
- "available" represents the amount of memory a process can allocate and use without hitting swap (most of the time it's this value you should care about).

When a process writes a large amount of data, that data goes to buffers before being written to disk. Those buffers take up room in memory, so the amount of unused memory goes down. But that memory can be made available again, so the amount of available memory doesn't go down. When you delete the file, the buffers are no longer needed and are reclaimed (there's no point in even keeping them as cache).

Buffers increase when processes write data; they decrease when that data is written to disk. Cache increases when on-disk data is made available in memory (either by reading it from disk, or by converting buffers into cache); it decreases when memory pressure means it's no longer useful to keep it around. Both live in memory, so their size affects the amount of free memory; but both can be reclaimed, so their size doesn't affect the amount of available memory.
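This behaviour is easy to reproduce: write a throwaway file and watch the Cached figure in /proc/meminfo (the same pool free reports under buff/cache) grow, then shrink again once the file is deleted. The path below is an arbitrary example:

```shell
# Snapshot the page cache, create a 100 MiB file, snapshot again,
# then delete the file - its cached pages become reclaimable.
awk '/^Cached:/ {print "cached before (kB):", $2}' /proc/meminfo
dd if=/dev/zero of=/tmp/cache-demo.bin bs=1M count=100 status=none
awk '/^Cached:/ {print "cached after (kB):", $2}' /proc/meminfo
rm /tmp/cache-demo.bin
```

Under normal conditions the second figure is roughly 100 MB higher; under memory pressure the kernel may have already reclaimed some of it, which is precisely the point: cached memory is still available memory.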
free -m: free memory reduced because of buff/cache
I feel that with all the additional pre-installed apps, and its heavy reliance on elements derived from Ubuntu and GNOME, elementary must be heavy. So I would like an as-quantitative-as-possible analysis of how heavy the OS is on RAM (and perhaps on the system overall) compared to classic Ubuntu / GNOME Ubuntu. This is a question that every user who is stunned by the awesomeness of Pantheon would like answered before switching their OS/desktop, so that they don't hesitate for fear it will slow down their PC.

PS: Please feel free to add external links that may be helpful; as of now I haven't found a proper place with a sane discussion of this technical overhead comparison.
Which applications are preinstalled is completely irrelevant: an application that is installed but not running costs nothing but disk space. I don't know why you feel that elementary must be heavier than GNOME; GNOME itself is pretty heavy.

There was an article comparing Linux desktop environments on the Layer 3 Networking Blog in April 2013. elementary didn't exist yet, but the author tested a lot of environments. While figures can vary quite a bit depending on what applets, widgets and so on are loaded, the order of magnitude is telling: ~200MB for KDE, slightly less for Unity and GNOME 3, ~50MB for lightweight desktop environments, a few MB for heavyweight window managers, <1MB for lightweight window managers. (Those figures are for the WM/DE only, not the base system.)

Hectic Geek compared elementary Luna with GNOME in August 2013 and found no significant difference in memory usage or boot times. Brendan Ingram compared the RAM usage of several configurations in July 2016. He found that GNOME on Ubuntu 16.04 took 700MB (that's total RAM, not just the desktop environment) while Pantheon on the elementary OS version based on 14.04 took 600MB; i.e. Pantheon used slightly less memory, but that was an older base version.

The upshot is that Pantheon as configured by elementary, and GNOME or Unity as configured by Ubuntu, use similar amounts of memory. elementary's default setup requires roughly the same amount of resources as Ubuntu's.
How heavy is elementaryOS over classic Ubuntu? [closed]
1,532,520,757,000
I have some old PCs running in my business. And by old I mean PENTIUM-3 800 MHZ and AMD DURON 950 MHZ, with 256MB RAM. They are running my business software today on Windows XP. I want to run Linux on them. But today's Linux versions require at least 512MB of RAM. Side note: I know that SWAP and RAM have a HUGE speed difference. But with Windows, the performance is already bad. Here goes my question :) Can I run these new Linuxes (Lubuntu, for example, which is lightweight) with low RAM but with large SWAP areas? For this, consider that I have no disk space problem. If I do, here comes a general question (for the sake of the question, forget about performance here): how low can I go on RAM so that the SWAP area will cover the RAM deficiency? Can I run a Linux only on SWAP?
You can use lightweight distributions that require less RAM; Linux doesn't need 512MB. SWAP is a kind of temporary substitute for RAM. You could use it, but it's much slower than RAM, and could make your experience unpleasant. Still, you can check out Puppy Linux's requirements. It's known for being lightweight. Also, here you can see a list of distributions aiming for speed by using RAM, while remaining lightweight. Moreover, this and this topics can provide some information for you.
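Since disk space isn't the constraint here, swap can also be added as a plain file rather than a partition; a minimal sketch (the path and size are arbitrary, and the final swapon needs root):

```shell
# Create and format a small swap file (16 MiB here, just for illustration)
dd if=/dev/zero of=/tmp/swapfile bs=1M count=16 status=none
chmod 600 /tmp/swapfile        # swap files must not be world-readable
mkswap /tmp/swapfile           # write the swap signature
# Activating it requires root:
# swapon /tmp/swapfile
```

In practice you would size the file in gigabytes and point it somewhere permanent, then list it in /etc/fstab so it survives reboots.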
How low can I go on RAM?
1,532,520,757,000
I recently installed a copy of Ubuntu Server 14.04.2 LTS on a cluster. All appears to be working fine, but a large portion (around half) of our total available RAM is being used. I.e., when I run free I get the following output (the buffers/cache line being the relevant one): total used free shared buffers cached Mem: 251 215 36 0 2 70 -/+ buffers/cache: 141 110 Swap: 22 0 22 I checked the outputs of both ps and lsof to get the total memory usage per user, and nobody is using more than 1% of the RAM. I've read that Linux does not immediately free up memory used by exited processes, but is it feasible that it would continue using this much memory? If not, is there anything else that could be taking up all of this memory?
I figured out that a large portion of the memory usage was, in fact, attributable to inactive memory used by exited processes. The most accurate way to determine how much memory is available post-January 2014 is to look at MemAvailable in /proc/meminfo. You can also see the amount of inactive memory in this file.
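Both figures can be read directly (assuming a kernel new enough to expose MemAvailable, i.e. 3.14 or later):

```shell
# MemAvailable is the kernel's own estimate of memory usable without swapping;
# the Inactive lines show memory that can be reclaimed if needed
grep -E '^(MemAvailable|Inactive)' /proc/meminfo
```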
Substantial portion of memory used is not accounted for by user processes
1,532,520,757,000
Someone told me that there is a sub folder off of dev that basically allows you to tell the OS to keep the contents of that folder cached in RAM. So if I put some files & directories in /dev/somefolder the OS would keep this folder's content cached. What folder is that? I'm on ubuntu 12.04, in case that matters.
There is /dev/shm, which is a RAM-backed filesystem. This isn't the same as caching as a cache means the file also resides on disk. With /dev/shm, which is a tmpfs filesystem, the files exist in memory only. Note that you can mount tmpfs volumes anywhere: mount -t tmpfs none /foo/bar There is just one usually mounted at /dev/shm.
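A quick way to see this in action (the file name is arbitrary):

```shell
df -h /dev/shm                 # shows the tmpfs mount and its size cap
echo "hello" > /dev/shm/demo   # this file occupies memory, not disk
cat /dev/shm/demo
rm /dev/shm/demo
```

Note that because tmpfs pages count as ordinary memory, they can still be swapped out under pressure, unlike ramfs.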
What dev folder allows you tell the OS to cache something?
1,532,520,757,000
I do not think that this is Linux's disk cache. In htop, the memory bar is green (not orange for cache) and I removed the files stored in zram. No processes seem to be using a lot of memory. The load was compiling software with its build files stored in zram (PORTAGE_TMPDIR, which is /var/tmp/portage in Gentoo), with the swap file on zram too. It had zram writeback configured so that it would write to disk if there was not much RAM left. I compiled 2 pieces of software; after the first one, there still seemed to be about half the memory used, zramctl said that total data used was near 0G, no process was using much memory, and Linux's disk cache wasn't the issue. With kswapd continuously at 100% CPU utilization, the kernel OOM-killed the process consuming too much RAM. After this there was still RAM being used but nothing I could find was using it. If it were disk cache, the kernel would have handed the space over to the memory-consuming process. But it didn't, so this is most likely NOT a disk cache issue. I rebooted the computer and the 2nd piece of software compiled quickly without an issue! Does anyone know what could be the case? Is there any way I could further identify what is using the memory?
Short answer: use the discard mount option when mounting file systems or turning on swap created on the Zram devices. Extended: when mounting a file system, use discard as a mount option; you can set mount options with -o, with options separated by a , and no space in between. It should be supported on most Linux file systems; I use it on Btrfs. On swap, use -d when running swapon. You could also, in addition to this, periodically run fstrim on the directory that the file system is mounted at, but from what I've seen in the output of zramctl this isn't necessary and the discard mount option is good enough. Edit: Actually, after some further testing, I think it's a good idea to periodically run fstrim on the Zram mount. After compiling Firefox with its build directory in Zram, there was about 1.1GB of RAM usage. Not nearly as bad as without the discard mount option, but there is room for improvement. Running fstrim on the Zram mount (which only took a couple of seconds) caused the RAM usage to go down to 400MB, which is normal. I'd probably put it in a cron job or run it after a portage compile. Explanation: when files are removed, Zram doesn't remove the compressed pages in memory, because it's not notified that the space no longer holds data. The discard option performs a discard when a file is removed. If you use the discard mount option, Zram will be notified about the unused pages and will resize accordingly.
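Collected into one sketch (the zram device names are examples, and all of this runs as root):

```shell
# mount the zram-backed build filesystem with discard enabled
mount -o discard /dev/zram0 /var/tmp/portage
# swap on zram, discarding freed pages as they are released (-d / --discard)
swapon -d /dev/zram1
# trim periodically, or right after a big compile
fstrim -v /var/tmp/portage
```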
After a heavy I/O load, and storing many things in Zram, used space is close to total in `free`
1,532,520,757,000
I have just installed Linux Mint about 5 days ago, and then I upgraded my RAM from 8 GB to 12 GB, and I have 4 GB more on the way, so it will get to 16 GB of RAM. The only thing is, I don't understand how the free -g command works. Here's a picture of Stacer, in which I can see my RAM up to 12 GB: But while I use the terminal free -g command, it shows me this:
dragos@madscientistlab ~ $ free -g
              total        used        free      shared  buff/cache   available
Mem:             11           3           6           0           2           7
Swap:             7           0           7
It has only 9 GB of RAM if we add the USED RAM and FREE RAM, and it has only 11 GB in the total column, out of 12. Is something wrong with my RAM? Or is it something that I'm not understanding? Also, one more question: If I have 12 GB of RAM, why does Stacer say I only have 11.6 GB?
Your system is fine. You need to add “available” and “used” memory, rather than “free” and “used” memory. You also need to take truncation into account: you have somewhere over 3GiB of memory used (by programs), somewhere over 6GiB of memory completely unused, somewhere over 2GiB of memory used in buffers and caches, and altogether, somewhere over 7GiB of memory available. Somewhere over 3, plus somewhere over 7, ends up giving a total somewhere over 10, or even 11 in your case. You should use free -m to get a better picture. You’ll find out more about available memory in How can I get the amount of available memory portably across distributions? Regarding your 11.6GiB v. 12GiB, you “lose” some memory because it’s set aside for the system’s purposes: your firmware, integrated GPU, and the kernel all keep some memory for their own purposes, leaving 11.6GiB usable by programs.
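To see this without truncation, and to compute the available percentage straight from the same source that free uses:

```shell
free -m   # mebibyte granularity instead of whole gibibytes

# percentage of RAM currently available, from /proc/meminfo
awk '/^MemTotal:/ {t=$2} /^MemAvailable:/ {a=$2}
     END {printf "%.1f%% of RAM available\n", a / t * 100}' /proc/meminfo
```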
Linux Mint RAM Memory - Showing different usage and free space
1,532,520,757,000
I am interested in the totals of the 3 lines. Specifically, if the "used" values can be counted as what's going on in real time, or if that's just a running total since the OS was started? How does this compare to the vmstat si and so output as opposed to free's representation of swap?
'Used' is real-time (or at least, close to it). It's important to note that the value for 'used' on the first line includes buffered and cached memory, and that even the value for 'used' on the second line includes file-backed (i.e. non-anonymous) pages that can be dropped without swapping if needed. Generally, these numbers should (roughly) match what you see in vmstat. They both read the basic memory info from /proc/meminfo. vmstat additionally reads data from /proc/stat and /proc/vmstat, but its basic memory usage stuff comes from meminfo. You can verify this with: strace free 2>&1 | grep open strace vmstat 2>&1 | grep open
How are the values represented with the "free" command
1,532,520,757,000
For a while now on my system browsers crash very often (between 3 times in a row when opening and once every hour). Sometimes the entire browser crashes, sometimes its only one tab. I normally use Firefox but Chromium Browsers crash in seemingly the same way. I assume this happens because of Nvidia proprietary Drivers because everything works before installing them or when using Nouveau Drivers. Browsers: Firefox, Brave, Chromium OS: Fedora, EndeavourOS, PopOS with Nvidia Drivers preinstalled Kernels: everything from Stable older kernels to newest ones Hardware: Nvidia GTX 1650 Super AMD Ryzen 7 5700g ASRock Fatal1ty B450 Motherboard Crucial P3 1 TB SSD Crash reasons given by Firefox: index out of bounds: the len is 63 but the index is 4103 no entry found for key running Firefox from CLI gives me this: signal 11: file /builds/worker/checkouts/gecko/ipc/chromium/src/base/process_util_posix.cc:265 [Parent 13845, IPC I/O Parent] WARNING: process 14620 exited on signal 11: file /builds/worker/checkouts/gecko/ipc/chromium/src/base/process_util_posix.cc:265 ExceptionHandler::GenerateDump cloned child 14783 ExceptionHandler::WaitForContinueSignal waiting for continue signal... ExceptionHandler::SendContinueSignalToChild sent continue signal to child Exiting due to channel error. Exiting due to channel error. Exiting due to channel error. Exiting due to channel error. Exiting due to channel error. Exiting due to channel error. Exiting due to channel error. Exiting due to channel error. Failed to open curl lib from binary, use libcurl.so instead I have tried switching OS and browsers, running Firefox in troubleshoot mode, installing the Driver in different ways, running different Kernels, understanding error messages. Can I somehow troubleshoot this issue? 
Edit 1: Output of dmesg [ 25.094695] audit: type=1400 audit(1704318740.132:3): apparmor="STATUS" operation="profile_load" profile="unconfined" name="nvidia_modprobe" pid=930 comm="apparmor_parser" [ 25.094700] audit: type=1400 audit(1704318740.132:4): apparmor="STATUS" operation="profile_load" profile="unconfined" name="nvidia_modprobe//kmod" pid=930 comm="apparmor_parser" [ 25.207561] nvidia: module license 'NVIDIA' taints kernel. [ 25.207575] nvidia: module license taints kernel. [ 25.226226] input: HDA NVidia HDMI/DP,pcm=3 as /devices/pci0000:00/0000:00:01.1/0000:01:00.1/sound/card0/input11 [ 25.226525] input: HDA NVidia HDMI/DP,pcm=7 as /devices/pci0000:00/0000:00:01.1/0000:01:00.1/sound/card0/input12 [ 25.226663] input: HDA NVidia HDMI/DP,pcm=8 as /devices/pci0000:00/0000:00:01.1/0000:01:00.1/sound/card0/input13 [ 25.226777] input: HDA NVidia HDMI/DP,pcm=9 as /devices/pci0000:00/0000:00:01.1/0000:01:00.1/sound/card0/input14 [ 25.319491] nvidia-nvlink: Nvlink Core is being initialized, major device number 511 [ 25.320706] nvidia 0000:01:00.0: vgaarb: VGA decodes changed: olddecodes=io+mem,decodes=none:owns=io+mem [ 25.368293] NVRM: loading NVIDIA UNIX x86_64 Kernel Module 545.29.06 Thu Nov 16 01:59:08 UTC 2023 [ 25.378744] nvidia-modeset: Loading NVIDIA Kernel Mode Setting Driver for UNIX platforms 545.29.06 Thu Nov 16 01:47:29 UTC 2023 [ 25.382645] [drm] [nvidia-drm] [GPU ID 0x00000100] Loading driver [ 26.447537] [drm] Initialized nvidia-drm 0.0.0 20160202 for 0000:01:00.0 on minor 1 [ 26.459014] nvidia_uvm: module uses symbols nvUvmInterfaceDisableAccessCntr from proprietary module nvidia, inheriting taint. [ 26.505991] nvidia-uvm: Loaded the UVM driver, major device number 509. Edit 2: Memtest Memtest gave me about 20000 errors. I updated my BIOS and used the BIOS default settings for RAM instead of the XMP profile. Both steps showed the same errors. Now I am testing with only 1 of 2 16GB RAM sticks which seems to work. 
So either the one that I took out is faulty, or having both in results in problems. Memtests are still running and I will update later. Edit 3: One of my RAM sticks resulted in errors; the other didn't in the same socket. I removed it and right now Firefox is running fine. Edit 4: Everything works perfectly. Browsers don't crash, the system in general runs way better, and I haven't had a single unexpected thing happen since fixing this.
Since it's not your GPU, please test your RAM using either memtest86+ or memtest86 for at least a couple of cycles.
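If rebooting into memtest86+ is awkward, the userspace memtester tool (a separate package) can exercise RAM from the running system; the size and pass count here are arbitrary:

```shell
# lock and test 1 GiB for 2 passes; run as root so the region can be mlock()ed
sudo memtester 1024M 2
```

It can only test memory it can allocate, so a full memtest86+ run is still the more thorough check.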
Why do Browsers crash on my machine with Nvidia Drivers?
1,607,669,470,000
I'm learning how to use Message Queues in Linux and I've found a simple example: https://www.geeksforgeeks.org/ipc-using-message-queues/. With the reader and writer in this link, I can read and write messages through the Message Queue on my Ubuntu. Everything is fine. Well, if I'm right, when we write some messages into a Message Queue, the messages are stored in the Kernel, meaning that the Kernel will allocate some RAM to store them. Let's say I keep writing many messages into a Message Queue but never consume them. As I understand it, more and more RAM will be used. In this case, can I use the command top or ps aux to monitor the increasing usage of RAM? The columns VIRT and RES of the command top are about RAM usage, and the columns VSZ and RSS of the command ps aux are about RAM usage too. In the case above, can I see some of the four numbers (VIRT, RES, VSZ and RSS) increasing? Or can top and ps aux not show us the RAM usage of the Kernel, which is used by MQ, FIFO, SHM, domain sockets or other IPC mechanisms?
IPC resources aren’t tied to a given process, so they don’t show up in the data displayed by top, ps etc. You can see this in the example you’re referring to: the message queue is created by the writer but deleted by the reader. To monitor IPC resources, you can use lsipc: lsipc will provide an overview, and lsipc -q will show details of the message queues.
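For example (lsipc comes with util-linux; ipcs is the traditional equivalent):

```shell
lsipc       # overview: queues, shared memory, semaphores, and their limits
lsipc -q    # per-queue detail: key, owner, bytes in use, number of messages
ipcs -q     # roughly the same information from the older tool
```

Watching the "used bytes" column grow while the writer runs is the direct way to observe the kernel-side memory the question asks about.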
Is RAM usage of IPC a part of the RAM usage of a program
1,607,669,470,000
So I noticed how Linux sometimes uses swap memory (located on the internal HDD/SSD). When RAM is overloaded, the PC stores some RAM data in swap to make things faster, I guess. So if you can store RAM data on the HD, can you do the opposite and store HD data in RAM (maybe temporarily)? I know this is impractical but still...
Can you store files in RAM? Yes you can, using something like ramfs (as mentioned in a comment), but this is unrelated to how and why the system swaps. In fact it could happen that some of your files stored in RAM end up swapped to disk. Linux (and Windows) have a concept of virtual memory, where the operating system loads and saves pages into memory. There is a process in the background that moves pages from RAM into swap (disk) if RAM space starts to run out, using particular criteria. This is a tradeoff of speed versus stability, since if you ran out of RAM space, the system would halt (actually it doesn't: the OOM killer will kill processes until you have more RAM, but this is destructive). So you only have data in swap if you start consuming a significant amount of RAM. You cannot decide what to swap and what to keep in RAM.
Can We save files in RAM?
1,607,669,470,000
Performance aside, is it possible to avoid RAM and use an SD card instead of it in Linux? Linux might be using the allocated address space as RAM. Can we ask Linux to use an SD card as RAM? I will use a class 10 SD card, which is of the highest quality. Thanks in advance.
Yes, as long as you don't try to eliminate all RAM. You need some RAM, as the CPU needs to access RAM. It is how it works. The TLB and a lot of other stuff have to be in primary memory. In Gnu/Linux you can set up the SD card as swap, and use very little RAM. However this could lead to a lot of wear on the SD card. SD cards have a limited life, measured in number of writes. You need to ask: can you get all the essentials into RAM, with enough left over for swapping? Then, will it be fast enough? I doubt 32k is enough to run a Unix-like system. ls is 128k on debian, and debian is good at not wasting memory. You will have to get the whole kernel into RAM, and the kernel named Linux is huge (not as huge as NT, but huge) see https://stackoverflow.com/q/27941775/537980.
Linux: Replace RAM with SDCard
1,607,669,470,000
In windows, we can enable page heap verification using gflags to catch memory corruption bugs more easily. Is there a similar service/program for Linux and FreeBSD operating systems?
See https://en.wikibooks.org/wiki/Linux_Applications_Debugging_Techniques/Heap_corruption E.g., electric fence will do this. Valgrind will also do this and much more http://valgrind.org/docs/manual/mc-manual.html
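As a sketch of the Valgrind workflow, here is a deliberately broken program (a two-byte heap overrun) that Memcheck flags; the file paths are arbitrary:

```shell
cat > /tmp/demo.c <<'EOF'
#include <stdlib.h>
#include <string.h>
int main(void) {
    char *p = malloc(8);
    memcpy(p, "overflow!", 10);   /* writes 2 bytes past the allocation */
    free(p);
    return 0;
}
EOF
gcc -g -O0 /tmp/demo.c -o /tmp/demo
# --error-exitcode makes detection visible to scripts
valgrind --tool=memcheck --error-exitcode=1 /tmp/demo; echo "valgrind exit: $?"
```

Electric Fence works differently: linking with -lefence makes the overrun segfault immediately at the faulting instruction, which is handy under a debugger.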
Finding memory corruption bugs in Linux and FreeBSD
1,607,669,470,000
Is there a way to get more out of your limited RAM on a VM? I have a VM running at a cloud hoster and am trying to optimize a machine that is quite low on RAM. I heard there is a way to compress parts of the memory if all free memory is in use, called zram. How do I get this running?
As explained on the Zram Wiki: zram (previously called compcache) can create RAM based block devices. It is an experimental (staging) module of the Linux kernel since 3.2. So If you are using a kernel before 3.2 you need to copy the following script (taken from here) to /etc/init.d/zram: ### BEGIN INIT INFO # Provides: zram # Required-Start: $local_fs # Required-Stop: $local_fs # Default-Start: S # Default-Stop: 0 1 6 # Short-Description: Use compressed RAM as in-memory swap # Description: Use compressed RAM as in-memory swap ### END INIT INFO # Author: Antonio Galea <[email protected]> # Thanks to Przemysław Tomczyk for suggesting swapoff parallelization FRACTION=75 MEMORY=`perl -ne'/^MemTotal:\s+(\d+)/ && print $1*1024;' < /proc/meminfo` CPUS=`grep -c processor /proc/cpuinfo` SIZE=$(( MEMORY * FRACTION / 100 / CPUS )) case "$1" in "start") param=`modinfo zram|grep num_devices|cut -f2 -d:|tr -d ' '` modprobe zram $param=$CPUS for n in `seq $CPUS`; do i=$((n - 1)) echo $SIZE > /sys/block/zram$i/disksize mkswap /dev/zram$i swapon /dev/zram$i -p 10 done ;; "stop") for n in `seq $CPUS`; do i=$((n - 1)) swapoff /dev/zram$i && echo "disabled disk $n of $CPUS" & done wait sleep .5 modprobe -r zram ;; *) echo "Usage: `basename $0` (start | stop)" exit 1 ;; esac give it executable rights with chmod +x /etc/init.d/zram then instruct you system to start it at boot time, with the command insserv zram After the next reboot you will see the swap with swapon -s which will look like: Filename Type Size Used Priority /dev/zram0 partition 381668 380716 10
Compress memory on low Ram VM
1,607,669,470,000
I'm trying to monitor the CPU and RAM usage (in % of total for example) of a given process wich may spawn several processes. The parent process is /bin/rscw so I get its pid by ppid_bl=$(ps -ef | grep [b]in/rscw | awk '{print $2}') and then I try something like ps -ppid $ppid_bl S (1) because in man ps it appears -ppid Select by parent process ID. This selects the processes with a parent process ID in pidlist. That is, it selects processes that are children of those listed in pidlist. Output format S Sum up some information, such as CPU usage, from dead child processes into their parent. This is useful for examining a system where a parent process repeatedly forks off short-lived children to do work. My question is, is my approach right? I'm getting ps error with (1), and this it's because I'm not using the right ps syntax, but maybe I'm not doing things right even with a correct syntax. Thanks for your time.
I have to use two dashes for this parameter, like $ ps --ppid 1 My version: $ ps --version procps-ng version 3.3.4
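With the double dash in place, summing CPU and memory over a parent's children can look like this (PID 1 is used here just as a stand-in for the `$ppid_bl` from the question):

```shell
# blank `=` headers suppress the column titles so awk sees pure numbers
ps --ppid 1 -o %cpu=,%mem= |
  awk '{cpu += $1; mem += $2} END {printf "cpu=%.1f%% mem=%.1f%%\n", cpu, mem}'
```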
CPU and RAM monitorization by parent id
1,607,669,470,000
Sometimes my computer starts to behave sluggishly after running too many programs/processes simultaneously, at points almost looking crashed/frozen. Using Debian Linux, is there a way to automatically kill some processes before memory gets too scarce for smooth operation?
Basically, you want a daemon that monitors the free memory, and if it falls below a given threshold, it chooses some processes and kills them to free up some memory.
while (true) {
    size_t free_memory = get_free_memory();
    if (free_memory < free_memory_threshold) {
        pid_t pid = choose_a_process_to_kill();
        kill(pid, SIGTERM);
    }
}
An obvious question is: how do you choose processes to kill? An easy answer would be the one with the biggest memory usage, since it's likely that that is the misbehaving "memory hog", and killing that one process will free up enough memory for many other processes. However, a more fundamental question is: is it really okay to kill such a process to free up memory for others? How do you know that the one big process is less important than the others? There's no general answer. Moreover, if you later try to run that big process again, will you allow it to kick out many other processes? If you do, won't there be an endless loop of revenge? Actually, the virtual memory mechanism is already doing similar things for you. Instead of killing processes, it swaps out some portion of their memory to disk so that others can use it. When the former process tries to use that portion of the memory later, the virtual memory mechanism swaps the pages back in. When this is happening contentiously across different processes (which is called thrashing), you need to terminate some processes to free up the memory, or preferably, supply more memory. When the system runs out of memory entirely, the kernel's OOM killer already performs exactly this kind of forced termination as a last resort.
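A minimal shell sketch of such a watchdog (the threshold is arbitrary, and the actual kill is left commented out; a maintained tool that does this properly is earlyoom):

```shell
#!/bin/sh
threshold_kb=262144   # hypothetical: act when under 256 MiB is available

check_memory() {
    avail=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
    if [ "$avail" -lt "$threshold_kb" ]; then
        # the biggest resident-set consumer is the victim candidate
        victim=$(ps -eo pid= --sort=-rss | head -n 1)
        echo "low memory (${avail} kB): would kill PID ${victim}"
        # kill -TERM "$victim"
    else
        echo "ok: ${avail} kB available"
    fi
}

check_memory
# as a daemon: while sleep 5; do check_memory; done
```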
How to make the system automatically kill some processes?
1,607,669,470,000
I think my MacBook with soldered RAM has a RAM issue.  With memtest86+, I figured out which BadRAM pattern I have, but I cannot interpret the result correctly.  How should I read the range to set up the right exclusion in GRUB? Here are my memtest results: BadRAM Patterns --------------- badram=0x0000000058cb4000,0xfffffffffffffc00, 0x0000000058cb4400,0xfffffffffffffc00, 0x0000000058cb4800,0xfffffffffffffc00, 0x0000000058cb4c00,0xfffffffffffffc00, 0x0000000058cb5000,0xfffffffffffff800, 0x0000000058cb5800,0xfffffffffffff800, 0x0000000058cb6000,0xfffffffffffff800, 0x0000000058cb6800,0xfffffffffffff800, 0x0000000058cb7000,0xfffffffffffff800, 0x0000000058cb7800,0xfffffffffffff800     [Manually transcribed from this image.] Would memmap=64K$0x58cb0000 be correct?
Yes, you can try it. Check /proc/cmdline to see if it's passed correctly, to make sure Grub doesn't mess with the $ characters, otherwise add \ escape characters. There is also badram support in Grub (GRUB_BADRAM in /etc/default/grub if you use grub-mkconfig). However it's also necessary to test whether it's effective. The reserved range should show up in /proc/iomem (ranges shown only for root). Another option, if your kernel has CONFIG_MEMTEST=y, is to try memtest=17 parameter. Then check dmesg for test results, and EarlyMemtestBad in /proc/meminfo. This way the kernel tests memory every time you boot up and automatically reserves bad ranges on its own. However, this only works if your RAM is faulty in a certain way that is always detected reliably. It would also slow down the boot process some. (The kernel only does a simple pattern test, which takes a few seconds). You can also test memory in userspace using memtester.
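If the memmap test works out, the equivalent permanent setting for Grub lives in /etc/default/grub. The single address/mask pair below is a consolidation of the question's ranges into one 16 KiB block (0x58cb4000–0x58cb7fff); double-check it against your own memtest output before relying on it:

```shell
# /etc/default/grub
GRUB_BADRAM="0x0000000058cb4000,0xffffffffffffc000"

# then regenerate the config, e.g.:
# sudo update-grub                          # Debian/Ubuntu wrapper
# sudo grub-mkconfig -o /boot/grub/grub.cfg
```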
BadRAM Range: cannot set up the right range
1,607,669,470,000
I have a Lenovo IdeaPad 3-15ADA6 Laptop, Type 82KR, with Debian 11 Bullseye. I have just installed a brand new RAM chip, Corsair Vengeance 8Gb DDR4 2400MHz, as from Lenovo specifications. If I run sudo lshw I can see all the installed RAM correctly: *-memory description: System Memory physical id: 1 slot: System board or motherboard size: 12GiB *-bank:0 description: Row of chips DDR4 Synchronous Unbuffered (Unregistered) 2400 MHz (0,4 ns) product: CMSX8GX4M1A2400C16 vendor: Unknown physical id: 0 serial: 00000000 slot: DIMM 0 size: 8GiB width: 64 bits clock: 2400MHz (0.4ns) *-bank:1 description: SODIMM DDR4 Synchronous Unbuffered (Unregistered) 2400 MHz (0,4 ns) product: HMA851S6DJR6N-XN vendor: Hynix physical id: 1 serial: 00000000 slot: DIMM 0 size: 4GiB width: 64 bits clock: 2400MHz (0.4ns) I expected to have 12 GB of RAM, but if I run htop, the graphical system monitor or simply the free command, this is what I get: $ free -ht total used free shared buff/cache available Mem: 2,8Gi 1,8Gi 260Mi 45Mi 713Mi 700Mi Swap: 976Mi 973Mi 3,0Mi Total: 3,8Gi 2,8Gi 264Mi The system tends to freeze with a small number of open applications. I thought the memory might be broken, but I should be able to see 4 GB anyway, the amount of fixed installed memory on the motherboard, not 2.8 GB!
From what I've read, you should be using 8GB of DDR4 RAM at 3200MHz (PC4-25600) instead of the slower memory you've used.
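To cross-check what the firmware actually reports for each module (needs root; it's the same data lshw summarized above, just from a second tool):

```shell
sudo dmidecode -t memory | grep -E 'Size|Speed|Part Number|Locator'
```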
Why did 10 GB of RAM disappear?
1,607,669,470,000
I would like to copy the content of a file (a string) to my RAM memory, as in a copy of text so I can paste it later. Example: I have a file named my_pub_key.pub and inside there's a big amount of chars. Every time I highlight the text, copy it and later paste it, I get only part of the string. Is there a (theoretical) way to do something like this: root@my-ip: copy-to-ram < ~/.ssh/my_pub_key.pub
to avoid any installation one can do: cat ~/.ssh/my_pub_key.pub > /dev/clipboard and you'll have it ready for a paste.
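Note that /dev/clipboard is provided by Cygwin; on a typical Linux desktop you reach the clipboard through a small helper instead (one of these must be installed, depending on your display server):

```shell
# X11
xclip -selection clipboard < ~/.ssh/my_pub_key.pub
# or
xsel --clipboard --input < ~/.ssh/my_pub_key.pub
# Wayland
wl-copy < ~/.ssh/my_pub_key.pub
```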
how to copy text of file to ram (to do paste later)
1,607,669,470,000
I have 8 GB of RAM, an SSD and the very RAM-greedy Android Studio/Gradle, and sometimes when Gradle builds a project (eats RAM) it hangs all of Ubuntu (and Xubuntu). The UI becomes so laggy that the mouse cursor moves 1 cm per 10 seconds. I don't know why this happens, and it seems that no one knows, because there are a couple of questions like mine on the internet about "studio freezes ubuntu", "heavy for ram application freezes ubuntu", "ubuntu hangs when full ram", etc., and the symptoms are the same. There is also a bug tracked since 2007. It seems to be a very specific situation, because at one moment Gradle needs a lot of RAM for operating on big files on disk. It seems like it needs both RAM and storage at the same time for operating on big files, and somehow that hangs the system. So my question is: is there any option to disable buff/cache? Or maybe a param like swappiness (0..100) but for buff/cache? I know that I shouldn't care about RAM used for cache because it is available at any time, but I think in this case it doesn't work as it should. Maybe I am drastically wrong)
I found out that I had only a 2 GB swap file and my 8 GB swap partition wasn't enabled (it's lame, but I'm new to Linux). The swap file was full under high load, and I assume that the Ubuntu freeze was just a lack of virtual memory. I am still not sure about that, because I remember that there were still 400-700 MB of buff/cache available before the freeze, but anyway I mark this question as solved due to enabling the 8 GB swap partition. I will reopen it if the freeze occurs again.
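For anyone in the same spot, checking and enabling an existing swap partition looks like this (the partition name is hypothetical):

```shell
swapon --show           # what swap is active right now
sudo swapon /dev/sda3   # enable the partition for this session
# to make it permanent, add a line like this to /etc/fstab:
# UUID=<partition-uuid>  none  swap  sw  0  0
```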
Disable buff/cache linux for fixing ubuntu hangs on full ram
1,607,669,470,000
As we know, cgroups can limit the CPU and memory usage of processes. Here is an example:
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
30142 root      20   0  104m 2520 1024 R 99.7  0.1  14:38.97 sh
I have a process whose pid is 30142. I can limit it as below:
mkdir -p /sys/fs/cgroup/memory/foo
echo 1048576 > /sys/fs/cgroup/memory/foo/memory.limit_in_bytes
echo 30142 > /sys/fs/cgroup/memory/foo/tasks
As we see, if I want to limit a process, I have to first execute it and only then can I limit it according to its pid. Is it possible to limit a process according to its name? Is it possible to limit a process before executing it?
Control groups are pid-based, and there is no direct way of limiting processes by name. (Since control groups are hierarchical, this makes sense: a group also contains its member processes’ future children, by default, and having them re-attach to another group based on their name would be surprising.) The typical way to use control groups is to attach a parent process to them, and then rely on the fact that children inherit their parent’s group. However there is a tool which will allow you to start a process in a given group, cgexec: cgexec -g memory:foo yourcommand On Debian you’ll find this in cgroup-tools.
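The same cgroup-tools package also lets you create and configure the group by name first, so the question's whole setup becomes (v1 memory controller as in the question; the 1 MiB limit matches the original example):

```shell
sudo cgcreate -g memory:foo                          # create the group
sudo cgset -r memory.limit_in_bytes=1048576 foo      # set the cap
sudo cgexec -g memory:foo yourcommand                # launch inside it
```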
cgroups: Is it possible to limit cpu usage by process name instead of by pid
1,487,415,674,000
Here is the script: TYPE="${BLOCK_INSTANCE:-mem}" awk -v type=$TYPE ' /^MemTotal:/ { mem_total=$2 } /^MemFree:/ { mem_free=$2 } /^Buffers:/ { mem_free+=$2 } /^Cached:/ { mem_free+=$2 } /^SwapTotal:/ { swap_total=$2 } /^SwapFree:/ { swap_free=$2 } END { if (type == "swap") { free=swap_free/1024/1024 used=(swap_total-swap_free)/1024/1024 total=swap_total/1024/1024 } else { free=mem_free/1024/1024 used=(mem_total-mem_free)/1024/1024 total=mem_total/1024/1024 } pct=used/total*100 # full text printf("%.1fG/%.1fG (%.f%)\n", used, total, pct) # short text printf("%.f%\n", pct) # color if (pct > 90) { print("#FF0000\n") } else if (pct > 80) { print("#FFAE00\n") } else if (pct > 70) { print("#FFF600\n") } } ' /proc/meminfo Here is the error when I try to run it: $ ./memory awk: run time error: not enough arguments passed to printf("%.1fG/%.1fG (%.f%) ") FILENAME="/proc/meminfo" FNR=46 NR=46 1.1G/15.3G (7 It prints what I want (the memory usage) but also has an error. Can anyone help?
Awk's printf is treating your trailing % as the start of a fourth format specifier. If you want to print a literal % sign you need %%, for example $ awk 'BEGIN{printf("%.1fG/%.1fG (%.f%%)\n", 1.2, 3.4, 5.6)}' 1.2G/3.4G (6%)
Awk run time error: not enough arguments passed
1,487,415,674,000
This question is two-fold: I have a crappy little netbook with just 4GB of RAM, I've been running Linux on there for a couple of years, with just 4GB of ram. It's functioned well under the circumstances, but freezes from time to time, usually because of Firefox. I've set up a swap partition today, and that's helped drastically, but I was wondering if there's any alternative way to set up a USB stick as external RAM or the equivalent to it? Also, why did I not get asked about creating a swap partition when I originally installed this distro a couple of years ago? Seems like a strange question to have been left out of the installation process. Finally, as a pre-emptive answer to the potential question of "why have you had this installed for a couple of years, but are only now setting up a swap partition?": I've simply not given it any thought.
Is it possible to use a USB stick as additional RAM, excluding the use of swap partitions? No. From a system's point of view, memory is either RAM (the main memory, physically attached to the processor bus) or swap, the extended memory mapped to another device (usually disk). There is no other type of general memory. But you can have multiple swap devices and assign a priority to each one, so that the faster ones are used first. Then your USB stick (slower than a local disk partition) would be used only as a last resort. For example, create the swap space (preferably a partition) on the USB disk, run mkswap and then swapon with --priority (and change /etc/fstab to adjust the priority of the on-disk swap space). You will then have the main memory, a high-priority swap on disk and a lower-priority swap on the USB stick. The system might not be very responsive when it starts to use the slower swap, but it should keep on running. swapon --show will let you know the usage, priority and type of each individual swap space.
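Concretely, the steps described above (the device name is hypothetical; higher-priority swap is used first):

```shell
sudo mkswap /dev/sdb1                # format the stick's partition as swap
sudo swapon --priority 0 /dev/sdb1   # keep on-disk swap higher, e.g. pri=10 in /etc/fstab
swapon --show                        # verify priority, usage and type
```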
Is it possible to use a USB stick as additional RAM, excluding the use of swap partitions?
1,487,415,674,000
I'm planning to use one of two old computers as a low-volume backup server. The machine will be online and files will be backed up over ssh/rsync. I will do a minimal install of Ubuntu or Debian server. The specifications of the machines are as follows: Pentium3 1GHz Coppermine CPU with two 168-pin SDRAM DIMM slots with 320MB RAM (256MB and 64MB) Pentium3 450MHz Katmai CPU with three 168-pin SDRAM DIMM slots with 384MB RAM (256MB and two 64MB). The motherboard in this machine seems to be a little more reliable. I only have 384MB of 168-pin SDRAM DIMMs available in total. For the task of low-volume backup server, does it really matter which one I use? The difference in RAM is small and a previous minimal install of Ubuntu 8.04 server used <20MB RAM anyway. However, the Coppermine CPU is more than double the speed if the Katmai, so I should use that one?
I'd say that the most important thing for a backup server is reliability, so would tend to go for that machine. You say it's low-volume so the CPU would be idle most of the time anyway.
CPU or RAM better in old machine?
1,487,415,674,000
I have a Debian box, where I am doing some data recovery using ddrescue on a SATA SSD. The process has been running for 24 hours, and has 24 to go (at least). In any event, the PC has 16GB RAM and 10GB swap. For some reason, there is 8GB of swap in use, and 2GB of RAM in use. This seems like an inefficient use of resources. I'd like to avoid this behavior in the future. Why is memory being utilized in this way? And what can be done to avoid this sort of operation in the future?
Swap is being used instead of ram because the memory pushed to swap was inactive, not being used, and likely your ddrescue is pulling in a lot of data that is filling up cache in ram. This is not really an efficiency problem, it should be able to pull the data out of swap fairly quickly when it needs it. It is in swap because the system thinks growing cache is a better use of the ram. Generally this is true, but ddrescue is probably going to use those disk blocks only once anyway. This is a mostly harmless situation, but if it really bothers you, you could create a cgroup, move ddrescue into it, and then update the memory.max parameter for that cgroup to limit how much ram ddrescue can use. Note that if ddrescue is rereading the blocks more than once, this could potentially make it much slower. And if it is not, changing this parameter will not make the rest of the system significantly faster, as if the pages that are swapped out were being used, they would not have been swapped out.
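A sketch of that cgroup approach on a cgroup-v2 system; the group name "rescue", the 1G cap and the PID are all made-up example values:

```shell
# Run as root on a cgroup-v2 system; "rescue" and the 1G cap are examples.
#   mkdir /sys/fs/cgroup/rescue
#   echo 1G    > /sys/fs/cgroup/rescue/memory.max
#   echo <pid> > /sys/fs/cgroup/rescue/cgroup.procs   # ddrescue's PID
#
# On a systemd system the same cap can be applied when launching:
#   systemd-run --scope -p MemoryMax=1G ddrescue /dev/sdX /dev/sdY rescue.log
#
# Confirm cgroup support is present (no root needed):
grep cgroup /proc/filesystems
```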
why is all this swap space being used?
1,487,415,674,000
What happens if a Linux, let’s say Arch Linux or Debian, is installed with no swap partition or swap file. Then, when running the OS while almost out of RAM, the user opens a new application. Considering that this new application needs more RAM than is available, what will happen? What part of the operating system handles RAM management operations, and can I configure it to behave differently?
The Linux kernel has a component called the OOM killer (out of memory). As Patrick pointed out in the comments the OOM killer can be disabled but the default setting is to allow overcommit (and thus enable the OOM killer). Applications ask the kernel for more memory and the kernel can refuse to give it to them (because there is not enough memory or because ulimit has been used to deny more memory to the process). If overcommit is enabled then an application may be granted memory that is not actually backed by anything yet; when the application later writes to a new memory page for the first time, the kernel has to allocate real memory for it, and if it cannot do that, it has to decide which process to kill in order to free memory. The kernel will rather kill new processes than old ones, especially those that (together with their children) consume a lot of memory. So in your case the new process might start but would probably be the one which gets killed. You can use the files /proc/self/oom_adj /proc/self/oom_score /proc/self/oom_score_adj to check the current settings and to tell the kernel in which order it shall kill processes if necessary.
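Those /proc files can be inspected for any process; for the current shell, for example:

```shell
# The OOM killer's view of the current shell (readable without root):
cat /proc/self/oom_score        # badness score the killer compares
cat /proc/self/oom_score_adj    # adjustment, -1000 (never kill) .. +1000

# Raising the adjustment (volunteering to be killed first) is unprivileged;
# lowering it below its previous value needs root (CAP_SYS_RESOURCE):
echo 500 > /proc/self/oom_score_adj || true   # write may be refused in containers
cat /proc/self/oom_score_adj
```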
What happens if a Linux distro is installed with no swap and when it’s almost out of RAM executes a new application? [duplicate]
1,487,415,674,000
I just upgraded from two 2GB RAM cards to two 8GB ram cards. I've read that my laptop (T420) officially supports 8GB but can work with up to 16GB. I'm running free -mh which returns: total used free shared buff/cache available Mem: 7.7G 1.0G 5.3G 143M 1.4G 6.2G Swap: 3.9G 0B 3.9G I want to figure out if I installed them wrong or if it's a software configuration problem. Is there a series of commands I could run to check that both ram cards are seen?
sudo dmidecode --type 17 will return the physical RAM information.
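For example (the per-slot details need root; the totals do not):

```shell
# Per-slot details (root): size, slot, speed and vendor of each module.
#   sudo dmidecode --type 17 | grep -E 'Size|Locator|Speed|Manufacturer'
#   sudo lshw -short -C memory          # an alternative view
#
# Without root, the total at least confirms both cards are counted:
grep MemTotal /proc/meminfo
free -h 2>/dev/null || true   # same figure, human-readable
```

If MemTotal is roughly the sum of both modules, the hardware is seen; if a stick is missing from dmidecode's slot list, it is a seating or compatibility problem rather than a software one.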
How to check that ram is plugged in?
1,487,415,674,000
I have a java application that runs on a Linux server with physical memory (RAM) allocated as 12GB, where I would see the normal utilization over a period of time as below. sys> free -h total used free shared buff/cache available Mem: 11G 7.8G 1.6G 9.0M 2.2G 3.5G Swap: 0B 0B 0B Recently on increasing the load of the application, I could see the RAM utilization is almost full, and available space is very low, where I could face some slowness but the application still continues to work fine. sys> free -h total used free shared buff/cache available Mem: 11G 11G 134M 17M 411M 240M Swap: 0B 0B 0B sys> free -h total used free shared buff/cache available Mem: 11G 11G 145M 25M 373M 204M Swap: 0B 0B 0B I referred to https://www.linuxatemyram.com/ where it suggested the below points. Warning signs of a genuine low memory situation that you may want to look into: available memory (or "free + buffers/cache") is close to zero swap used increases or fluctuates. dmesg | grep oom-killer shows the OutOfMemory-killer at work From the above points, I don't see any OOM issue at the application level and the swap was also disabled, so I am neglecting those two points. One point which troubles me is that available memory is close to zero, where I need clarification. Questions: In case available is close to 0, will it end up in a System crash? Does it mean I need to upgrade the RAM when available memory goes low? On what basis should the RAM memory be allocated/increased? Do we have any official recommendations/guidelines that need to be followed for RAM memory allocation?
In case available is close to 0, will it end up in a System crash? Testing on one of my servers, where I loaded the memory almost full as below sys> free -h total used free shared buff/cache available Mem: 11G 11G 135M 25M 187M 45M Swap: 0B 0B 0B I was able to see that my application (which consumed the most memory) got killed by the Out of memory killer, which can be seen in the kernel logs via dmesg -e [355623.918401] [21805] 553000 21805 69 21 2 0 0 rm [355623.921381] Out of memory: Kill process 11465 (java) score 205 or sacrifice child [355623.925379] Killed process 11465 (java), UID 553000, total-vm:6372028kB, anon-rss:2485580kB, file-rss:0kB, shmem-rss:0kB https://www.kernel.org/doc/gorman/html/understand/understand016.html The Out Of Memory Killer or OOM Killer is a process that the linux kernel employs when the system is critically low on memory. This situation occurs because the linux kernel has over allocated memory to its processes. ... This means that the running processes require more memory than is physically available.
When to upgrade RAM based on free output [closed]
1,487,415,674,000
I have this question that I need an answer to. What makes programs like st, zathura, sxiv, and feh load instantly and what makes programs like VS Code and Google Chrome load so slowly in low-spec computers? For example, I have a low-spec laptop running Linux Mint. And when I execute st, it instantly opens an st instance, but when I execute Google Chrome, it takes a long time to open a Google Chrome instance. What makes st load faster than Google Chrome and what makes Google Chrome load slower than st. Thank you! :)
feh does not do much compared to Google Chrome. Just compare the file sizes and the number of dependencies: $ ls -l /usr/lib/chromium/chromium -rwxr-xr-x 1 root root 187751032 May 13 05:50 /usr/lib/chromium/chromium $ ls -l $(which feh) -rwxr-xr-x 1 root root 207280 Feb 2 21:03 /usr/bin/feh $ ldd /usr/lib/chromium/chromium | wc -l 178 $ ldd $(which feh) | wc -l 49 Besides, Google Chrome includes its own modified versions of many open source libraries that it depends on. It is huge and it takes much longer to load.
What makes a program load so fast? [closed]
1,487,415,674,000
I build some software on an armv7 with 1 GB RAM installed. It seems that some builds need too much RAM: the build dies with internal compiler error: Killed (program cc1plus). So I enlarged the swap by adding a swapfile, like it is described here http://www.thegeekstuff.com/2010/08/how-to-add-swap-space/ in method 2. But as you can see in the picture, the RAM is filled up to nearly 100% but the system doesn't swap. Is there a possibility to correct this or to force it? Thanks in advance Alex
echo 100 > /proc/sys/vm/swappiness https://en.wikipedia.org/wiki/Swappiness
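For reference, the sysctl equivalent, and how to make the setting survive a reboot (the file name under /etc/sysctl.d is an arbitrary choice):

```shell
# Read the current value (no root needed):
cat /proc/sys/vm/swappiness

# Set it for the running system (root) -- same effect as the echo above:
#   sysctl vm.swappiness=100
#
# Persist it across reboots:
#   echo 'vm.swappiness=100' > /etc/sysctl.d/99-swappiness.conf
#   sysctl --system
```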
Why the system don't swap?
1,487,415,674,000
Since the physical size of the file would be less than the logical file size, is it possible to create a sparse file with a size bigger than the available ram?
Files can naturally be bigger than RAM, sparse or no. I have a terabyte harddisk but not terabytes of RAM. If you meant bigger than the filesystem / partition, then sure, you can create a sparse file that has exabytes instead of terabytes in size, this is only limited by the maximum file size of a given filesystem and you can Google for these limits, for example the Wikipedia entry for a given filesystem usually lists them. Actually writing data to such a sparse file will eventually yield the common no space left on device error.
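This is easy to verify without root; the sizes below are arbitrary:

```shell
# Create a 1 TiB sparse file in a scratch directory:
d=$(mktemp -d)
truncate -s 1T "$d/sparse.img"

ls -lh "$d/sparse.img"   # apparent (logical) size: 1 TiB
du -h "$d/sparse.img"    # blocks actually allocated: ~0

rm -r "$d"
```

Only once data is written into the file do real blocks get allocated, which is when the "no space left on device" error can eventually appear.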
Can a sparse file go beyond the ram size?
1,487,415,674,000
So I have a Scientific Linux LiveDVD. And I have it installed on a PC that has no hard drives configured and avaliable at all. I want to install some applications that would allow me to configure system before I would be able to install OS. So I wonder: how to create a temporary installation folder that would exist only while OS is running in RAM, install applications into it (using standart installer yum) and be able to run them?
Yum will do that by default in Live mode; anything you install whilst running off a live optical disc is installed to RAM because you are running off of RAM as it is. If you want to do it explicitly, though, you can create a RAM disk: mkdir foo mount -t tmpfs -o size=4096M bar /foo where: mount is the command. -t tmpfs specifies the type of filesystem. In this case, the filesystem type is tmpfs -o size=4096M is for options and in this case, define the size as roughly 4gb. You can obviously make it larger or smaller depending on your needs and available RAM. bar is the label of the filesystem that you are creating. Name it whatever you please; you'll rarely see it. /foo is the location you want to mount the RAM disk. I do not see an advantage to doing it this way; the live environment's default should work just as well.
How to install applications temporary into RAM on LiveCD?
1,487,415,674,000
As a side project, I am thinking about adding a NTP server to pool.ntp.org. I would like to use CentOS basic, but how much RAM and CPU should I assign to a machine that will only be running ntpd (plus all the basic OS services, of course)?
I've run these services on a machine with 2x1GHz cores with 256MB RAM but I would expect that if this is your only service you have, you would only need something like dual-core 500MHz+ (one would work but needs to be a bit faster) CPU and 128MB RAM.
How much RAM and CPU for a NTP server? [closed]
1,487,415,674,000
TLDR; If /tmp in mounted as tmpfs, in the presence of swap, is there any kind of priority when swapping ? Does tmpfs start to swap before applications ? FULL STORY I have a laptop with 32G of RAM (Debian), and 32G of swap. I plan to mount /tmp with tmpfs. But I am concerned with the behavior if the system needs to swap : what will swap first ? I guess (and I hope) that applications will have priority for RAM usage over tmpfs (e.g., I guess tmpfs will swap first). But I couldn't find any confirmation for this. More broadly, are there any scenarios where mounting /tmp as tmpfs can slow down the system ?
I’m not aware of any priority applied here. Pages will be swapped out based on their level of use — pages which haven’t been used recently will be swapped out first. If that happens to be pages used in a tmpfs, or pages used by applications, doesn’t make any difference. See How does the kernel decide between disk-cache vs swap? for details of the process. So you’ll generally see applications which don’t run much swapped out first, then tmpfs content which hasn’t been touched in a while. In my experience using a tmpfs for /tmp increases system responsiveness overall, for /tmp-intensive workloads, and the improvement is greater than potential slowdown caused by increased memory use. It is however important to keep track of /tmp use — it’s easy to end up with obsolete files there which take up room in memory and/or swap without actually being necessary.
Do applications have priority over tmpfs for RAM usage (in the presence of swap)?
1,487,415,674,000
Background: I purchased a new computer, and during the past two months, I experienced the computer freezing, thrice. When it freezes, I have no option other than to reboot the computer. last -x showed: nav tty7 :0 Fri Jun 16 20:36 - crash (00:19) On running Memtest86+, it passed during the first run, but I let it run for another pass, and then it failed. With XMP switched off there were no failures. I reduced the RAM frequency from 3200MHz to 3000MHz, and there were no failures even after 6 passes. Question: I've been told that since the RAM was running on XMP for the two months, and since even the OS I downloaded and installed was while this RAM was in use, there's no telling how many files could have been corrupted by the flipped bits in the RAM. So I've been advised to reinstall my OS. The way I see it, it's not just the OS I'll have to reinstall. I'll also have to re-download all installer files of every other software I use. That part is ok. But all my personal files, I'll have to copy to an external disk and copy back. Is there a chance that those personal files could have been corrupted too? After installing the OS, I had copied those files to my SDD from an external HDD. But even though it passed through RAM, wouldn't there be a CRC check which ensures integrity of the files? So which files could have got corrupted, and do I really need to reinstall the OS and other softwares?
A Superuser post with an accepted answer tells you that unless you have ECC ram, disk files can be corrupted. That post was for Windows but the conclusion should hold for any operating system. I don't know of a distribution that does a CRC check of every file after installation as this would add a huge amount of time to complete. Your personal files certainly don't get CRC checks unless the individual apps include the feature. So there is a finite chance that any given file written since XMP was enabled is corrupt. I think the likelihood of a file corruption would be low. Unless there is some evidence that files are corrupt I personally would not re-install.
What files can an XMP-enabled RAM corrupt, if the RAM fails Memtest?
1,487,415,674,000
I ran some commands (in a script to be fast) and got this: $ ps -A | wc -l 513 $ echo "$((`ps -A -o rss |tr "\n" +`0))" 4368208 $ free total used free shared buff/cache available Mem: 5993608 5157844 132848 42616 702916 519028 Swap: 21030892 5276136 15754756 $ cat /proc/meminfo MemTotal: 5993608 kB MemFree: 132996 kB MemAvailable: 519176 kB Buffers: 83384 kB Cached: 514368 kB SwapCached: 422808 kB Active: 392060 kB Inactive: 1572336 kB Active(anon): 106632 kB Inactive(anon): 1312656 kB Active(file): 285428 kB Inactive(file): 259680 kB Unevictable: 27084 kB Mlocked: 27084 kB SwapTotal: 21030892 kB SwapFree: 15754756 kB Dirty: 264 kB Writeback: 0 kB AnonPages: 1190852 kB Mapped: 1107036 kB Shmem: 42616 kB KReclaimable: 105164 kB Slab: 291468 kB SReclaimable: 105164 kB SUnreclaim: 186304 kB KernelStack: 19376 kB PageTables: 58636 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB CommitLimit: 24027696 kB Committed_AS: 14543756 kB VmallocTotal: 34359738367 kB VmallocUsed: 117308 kB VmallocChunk: 0 kB Percpu: 7072 kB HardwareCorrupted: 0 kB AnonHugePages: 0 kB ShmemHugePages: 0 kB ShmemPmdMapped: 0 kB FileHugePages: 0 kB FilePmdMapped: 0 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kB Hugetlb: 0 kB DirectMap4k: 5936768 kB DirectMap2M: 335872 kB DirectMap1G: 0 kB where did 789636 kB (5157844-4368208) go? My final goal is to determine what is using that RAM and if it can be freed. I need to be able to list what is using that RAM if possible. Is there a better ps command parameters for that? related: RAM usage doesn't add up? (Free+used < total) Substantial portion of memory used is not accounted for by user processes
The kernel itself uses some ram, but most of it is in that 702,916k of buffers/cache.
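To see where the remainder sits, sorting processes by resident size and checking the kernel's own counters (the Slab, KernelStack and PageTables lines from the meminfo dump above) usually accounts for it:

```shell
# Biggest resident (RSS) consumers first:
ps -eo rss,pid,comm --sort=-rss 2>/dev/null | head

# Kernel-side memory that no process "owns":
grep -E '^(Slab|SUnreclaim|KernelStack|PageTables)' /proc/meminfo
```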
RAM used by apps doesn't sum up vs free RAM, why?
1,487,415,674,000
I usually monitor my server resource usage. I am using htop for monitoring. When running htop the memory usage is 1.3G, which is fine for me. But I tried to view the memory used by each process. For this, I am using the ps command. To view the memory used by MySQL, I am running ps aux | grep mysqld. It prints these lines: root 13908 0.0 0.0 112660 972 pts/0 S+ 11:12 0:00 grep --color=auto mysql mysql 17984 2.6 3.1 2845500 387676 ? Ssl 2017 2974:34 /usr/sbin/mysqld So, it seems that MySQL uses 2845500 of memory, which means around 2.7G of memory, which is much higher than the full system memory usage (1.3G) shown by htop. Is the number shown by ps a number of bytes, instead of kilobytes? PS: I am using CentOS 7, 64 bit version.
2845500 (the VSZ column) is the amount of virtual memory allocated by the process, not the amount it’s actually using. The latter, RSS, is given by the next column: 387676. Both values are measured in kilobytes.
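Those two columns are VSZ and RSS in ps terms, and the same pair can be inspected for any process, e.g. the current shell:

```shell
# VSZ = virtual size allocated, RSS = resident in RAM (both in KiB):
ps -o pid,vsz,rss,comm -p $$ 2>/dev/null | cat

# The same numbers straight from /proc, if ps is unavailable:
grep -E 'VmSize|VmRSS' /proc/self/status
```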
`ps` showing much higher ram usage than `htop`
1,487,415,674,000
I have been developing a MySQL database that makes some use of BLObs. This has now been transferred on to a virtual machine (VM) in preparation for going fully into the 'cloud'. However some of the larger and more complex queries have caused MySQL to drop out! These were not a problem on my laptop. Investigating this it seems to occur when RAM usage in approaching max and Swap is maxing out. Looking at the configuration my humble laptop has twice as much RAM and fives times more Swap. The perceived wisdom seems to be that for servers the ratio of Swap to RAM is lower than for PCs. The question is how do I go about calculatng the RAM/Swap requirements? Thank you...
There's not really any meaningful formula here; there's just arbitrary rules of thumb that get thrown around. E.g., "The perceived wisdom seems to be that for servers the ratio of Swap to RAM is lower than for PCs." Perhaps that assumes a gargantuan amount of RAM, as servers are more likely to have. But this is still mostly meaningless. It says nothing about how much swap you actually need. Swap is compensation for not having enough RAM. Ideally, you have enough RAM, so you don't use any swap at all. Meaning, you don't need any, but if you want some "just in case", you might as well go with RAM * 2 or some other arbitrary figure. Since storage is much cheaper than RAM, allocating 25 or 50 or 100 MB to swap doesn't matter -- even if it is never used. But if you do not have enough RAM, then your swap usage is not theoretical. In this case, you are not just dealing with some arbitrary number to cover some abstract general use case. You have an actual requirement. If you aren't sure what that is and RAM * 2 turns out to be not enough, double it again until you are happy with the outcome.
Calculating RAM/Swap Space Requirements [duplicate]
1,487,415,674,000
I have a system with 125G ram available and a swap partition with 4G, which is constantly full. On a disk with 223 GiB available, I would like to increase the swap partition. Is it safe to just swapoff /dev/sda2, extend the partition table for /dev/sda2 and resize the swap file system and turn it back on again? This here is the partition layout of the disk. sda1 8:1 0 512M 0 part /boot sda2 8:2 0 4G 0 part [SWAP]
Assuming you have enough RAM free to absorb the 4GB then it is safe to just swapoff and resize if that's what you want. You can of course also add a second partition and add that too; there's unlikely to be any real loss to having two partitions instead of one. Just remember to add it in /etc/fstab. Note that having "too much" swap can lead to situations where your system suddenly requires all of that swapped-out data back in RAM, which will leave processes paused for a long time, because even on an NVMe, 8GB takes a noticeable amount of time to load. Obviously though, only you know the specifics of your system.
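The sequence sketched out (run as root); note that swap has no filesystem to resize: after growing the partition you simply re-create the swap signature with mkswap.

```shell
# Run as root.  Assumes /dev/sda2 can grow into free space directly after it.
#   swapoff /dev/sda2
#   parted /dev/sda resizepart 2 100%   # or grow the partition in fdisk/gparted
#   mkswap /dev/sda2                    # re-create the swap signature
#   swapon /dev/sda2
#
# State before/after can be checked without root:
cat /proc/swaps
```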
Extend Swap partition
1,487,415,674,000
I am trying to mount a filesystem with DAX feature on RAM. I used this answer to load brd module and get /dev/ram0. Then I tried all 3 filesystems that support DAX: ext2, ext4 and xfs. However, when I use -o dax flag with mount, I get this error: wrong fs type, bad option, bad superblock on /dev/ram0, missing codepage or helper program, or other error With dmesg | tail I see the following issue: (ram0): DAX unsupported by block device. Is this an incorrect way to achieve my initial goal, mounting a filesystem with DAX on RAM, or what could I be doing wrong? I saw this question but I do not think -t ramfs does what I want, it won't appear in df -h list. I use 5.10.0-14-amd64 linux kernel version.
It seems that DAX support was removed from brd module. There is a patch from 2017.
Mount filesystem with DAX enabled on RAM
1,487,415,674,000
No idea how to create a bash script, without using sudo, that shows Memory Info like 8 x 16384 MB DIMM-1600 MT/s Samsung. number of memory + size of each + type + manufacturer
sudo lshw -class memory | sed -n '/-bank/,$p' | sed -n '/cache/q;p' | egrep "bank|description|product|size|clock|vendor" *-bank:0 description: DIMM DDR4 Synchronous Unbuffered (Unregistered) 3600 MHz (0.3 ns) product: XXXXX vendor: CRUCIAL size: 16GiB clock: 3600MHz (0.3ns) *-bank:1 description: DIMM DDR4 Synchronous Unbuffered (Unregistered) 3600 MHz (0.3 ns) product: XXXXX vendor: CRUCIAL size: 16GiB clock: 3600MHz (0.3ns) *-bank:2 description: DIMM DDR4 Synchronous Unbuffered (Unregistered) 3600 MHz (0.3 ns) product: XXXXX vendor: CRUCIAL size: 16GiB clock: 3600MHz (0.3ns) *-bank:3 description: DIMM DDR4 Synchronous Unbuffered (Unregistered) 3600 MHz (0.3 ns) product: XXXXX vendor: CRUCIAL size: 16GiB clock: 3600MHz (0.3ns) The first sed omits everything in the input unless -bank is found, the second truncates the output as soon as cache is found. egrep only outputs the strings which contain any of the words separated by the pipe symbol. Masters of shell could probably reduce these three processing commands down to one but I'm not so good. You could just run sudo lshw -class memory and see everything.
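For what it's worth, the two seds and the egrep can indeed be folded into a single awk pass; a sketch, here fed a canned sample standing in for the real `sudo lshw -class memory` output:

```shell
# Canned stand-in for: sudo lshw -class memory
sample='  *-bank:0
       description: DIMM DDR4 Synchronous Unbuffered (Unregistered)
       product: XXXXX
       vendor: CRUCIAL
       size: 16GiB
       clock: 3600MHz (0.3ns)
  *-cache:0
       description: L1 cache'

# Start printing at the first -bank line, stop at cache, keep wanted fields:
printf '%s\n' "$sample" |
  awk '/-bank/{p=1} /cache/{p=0} p && /bank|description|product|size|clock|vendor/'
```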
How to get memory info without sudo command on bash
1,602,259,652,000
I have a problem with a new RAM that I bought. My system is a Thinkpad T410 with Debian 10 stable with 2+2GB of RAM. Today I replaced a 2GB board with a 8GB. I checked with htop my total amount of RAM and it's correct (9.6GB) but when my pc is stressed (lot of programs open and CPU under stress too) the system suddenly reboots. This is the new RAM that I bought: https://www.amazon.it/Timetec-PC3-12800-Unbuffered-Computer-Portatile/dp/B0145WDNI4/ref=sr_1_1?__mk_it_IT=%C3%85M%C3%85%C5%BD%C3%95%C3%91&dchild=1&keywords=ram+thinkpad+8gb&qid=1602259408&sr=8-1 This computer used to work perfectly until now, so I'm almost sure that's a RAM problem, does anyone know how to fix this? I'm not an expert in changing hardware.
I'll leave a solution for anyone who has the same problem. In my case I was trying to install 8+2 GB of RAM with the 8 GB module at 1600 MHz. My T410 only supports 2, 4 or 8 GB per module for a maximum of 8 GB in total. And as if that weren't enough, the T410 only supports 1066 MHz, and this could cause conflicts as well. Solution: check the frequency, maximum total RAM and maximum per-module size of your computer before buying new RAM. If you don't know your RAM specs you can use dmidecode -t17 to print them in the terminal.
Changed RAM and now my system is malfunctioning
1,602,259,652,000
I am trying to know how to enable ZRAM for my PC running Ubuntu. I have heard that the ZRAM is speeding up some laptop and PCs, so will the ZRAM help My PC too? PC specs: CPU: Intel Core 2 Duo E4500 (2) @ 1.770GHz GPU: Intel 965Q Memory: 684MiB / 2992MiB Can someone tell me how to enable ZRAM?
Just run sudo apt-get install zram-config and everything will be handled for you. After that, if you need to customize the settings, you can use zramctl or write some options to /dev/zramX.
How to Enable ZRAM on PC?
1,602,259,652,000
Using this script taken from this site #!/usr/bin/ksh # Available memory memory=`prtconf | grep Memory | head -1 | awk 'BEGIN {FS=" "} {print $3}'` gb_memory=`echo "scale=2; $memory/1024" | bc -l` # Free memory pagesize=`pagesize` kb_pagesize=`echo "scale=2; $pagesize/1024" | bc -l` sar_freemem=`sar -r 1 1 | tail -1 | awk 'BEGIN {FS=" "} {print $2}'` gb_freemem=`echo "scale=2; $kb_pagesize*$sar_freemem/1024/1024" | bc -l` # Used Memory gb_usedmem=`echo "scale=2; $gb_memory-$gb_freemem" | bc -l` # Conclusion echo "Avai Mem: $gb_memory GB" echo "Free Mem: $gb_freemem GB" echo "Used Mem: $gb_usedmem GB" I see a lot of ram used Avai Mem: 7.25 GB Free Mem: .62 GB Used Mem: 6.63 GB With top command I see the most "ram eating" process use 144M of ram,but only 690M of ram is free,this is a little strange,even calculate the other rss process I still not understand how a system can use over 4GB of ram. last pid: 15109; load avg: 0.08, 0.08, 0.07; up 2+00:33:21 15:25:40 88 processes: 87 sleeping, 1 on cpu CPU states: 94.4% idle, 1.3% user, 4.3% kernel, 0.0% stolen, 0.0% swap Kernel: 1031 ctxsw, 633 trap, 769 intr, 2053 syscall, 617 flt Memory: 7430M phys mem, 690M free mem, 1024M total swap, 1024M free swap PID USERNAME NLWP PRI NICE SIZE RES STATE TIME CPU COMMAND 879 root 17 59 0 209M 142M sleep 8:11 0.32% sstored 1267 root 43 59 0 129M 30M sleep 0:04 0.01% fmd 14774 root 64 59 0 90M 64M sleep 0:04 0.24% pkg.depotd 14810 root 64 59 0 90M 63M sleep 0:04 0.25% pkg.depotd 14792 root 64 59 0 87M 60M sleep 0:04 0.24% pkg.depotd 15 root 36 59 0 80M 47M sleep 1:23 0.67% svc.configd 13 root 14 59 0 39M 7272K sleep 0:07 0.01% svc.startd 1448 root 12 59 0 28M 7152K sleep 0:14 0.01% sysstatd 1483 _polkitd 6 59 0 25M 196K sleep 0:00 0.00% polkitd 1465 pkg5srv 27 59 0 24M 2228K sleep 0:00 0.00% httpd 4962 pkg5srv 27 59 0 24M 4672K sleep 0:00 0.00% httpd 1461 pkg5srv 27 59 0 24M 4032K sleep 0:00 0.00% httpd 1464 pkg5srv 27 59 0 24M 3516K sleep 0:00 0.00% httpd 1032 webservd 27 59 0 24M 
3668K sleep 0:00 0.00% httpd 1045 webservd 27 59 0 24M 3484K sleep 0:00 0.00% httpd 1041 webservd 27 59 0 24M 3480K sleep 0:00 0.00% httpd 1458 pkg5srv 24 59 0 24M 4656K sleep 0:04 0.00% httpd 1012 webservd 18 59 0 23M 7620K sleep 0:04 0.00% httpd 280 root 11 59 0 23M 8360K sleep 0:00 0.00% rad 658 root 31 59 0 23M 5168K sleep 0:07 0.04% nscd 359 daemon 3 59 0 21M 4K sleep 0:00 0.00% colord 560 netadm 8 59 0 18M 4352K sleep 0:02 0.00% nwamd 838 root 3 59 0 18M 3064K sleep 0:00 0.00% zoneadmd 338 root 7 59 0 17M 10M sleep 0:02 0.00% devfsadm 4350 root 1 59 0 17M 4972K sleep 0:01 0.00% sendmail 1469 root 3 59 0 16M 3776K sleep 0:00 0.00% accounts-daemon p.s=there is one zone active,but..is not running
As @Fox noted in a comment, it may be ZFS which by default uses free memory to cache IO. There are various tweaks, but just upgrading Solaris may help as they've added a few features to help ZFS performance in Solaris 11. Ref MOS doc: Memory Management Between ZFS and Applications in Oracle Solaris 11.x (Doc ID 1663862.1) for some information and guidance. Ref the following for some ZFS reco's outside of Oracle: https://constantin.glez.de/2010/04/28/ten-ways-to-easily-improve-oracle-solaris-zfs-filesystem-performance/#ssdread Also, if you're using dedup on any datasets, it will use a fair amount of memory.
Solaris11: use a lot of ram
1,602,259,652,000
I'm sorry for asking something so elementary, but the information is very confusing and, even when there are ISO recommendations, not everybody follows them. The manufacturer says my laptop has "8GB DDR4 system memory" and I want to set a swap partition with EXACTLY that size + a small margin, but the version of gparted my distro uses (Ubuntu 16.04) requires me to enter a size in "MB". To make things worse, that same gparted says my hard drive is "1000204 MB" in size whereas the manufacturer says "1 TB", and I have tried to derive from that the exact definition of a MB that Ubuntu is using, but without success.
The command free -m will give you the size in MiB (2^20 bytes per MiB) and the command free --mega will give you the size of your RAM in MB (10^6 bytes per MB). There are other options: see man free. You will probably want MiB for gparted, as it doesn't use MB. An alternative to using gparted is to create a swapfile, which may be more convenient (a guide by digital ocean). Perhaps you should finally note that on a system with 8GB of RAM, you probably won't need much swap (though it depends on what you do with your computer). There is no general reason that the swap size should match the physical memory size exactly or even roughly. This is discussed elsewhere.
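The digital ocean swap-file route boils down to a few commands (run as root; the 8G size is chosen here to match the RAM), and the GiB-to-MiB conversion gparted wants is just a factor of 1024:

```shell
# Swap file sketch (run as root); 8G matches the installed RAM here.
#   fallocate -l 8G /swapfile
#   chmod 600 /swapfile
#   mkswap /swapfile
#   swapon /swapfile
#   echo '/swapfile none swap sw 0 0' >> /etc/fstab
#
# 8 GiB of RAM expressed in the MiB units gparted uses:
echo $((8 * 1024))   # 8192
```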
How to convert each GB "manufacturer" RAM to the EXACT MB equivalent gparted uses?
1,602,259,652,000
I have 16GB of RAM with Debian 9 in my notebook but almost all of it is unused by the kernel. It uses only 1GB for buffer/cache. Where can I tell the kernel to use more of the free RAM for caching? My FS is Ext4. Thanks!
The kernel only caches data that has been accessed. If you have not read more than 1GB of data from the disk since the last boot, then it will not have more than 1GB cached.
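This is easy to watch: the Cached figure in /proc/meminfo only grows as file data is actually read or written. A small demonstration, no root needed:

```shell
grep '^Cached' /proc/meminfo          # page cache before

# Create and read ~50 MB of file data:
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=50 2>/dev/null
cat "$f" > /dev/null

grep '^Cached' /proc/meminfo          # typically larger now
rm -f "$f"
```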
Where to set kernels buffer/cache size
1,602,259,652,000
My problem: if I transfer files with rsync using tapes or disks (USB, e-SATA, FireWire), Linux hangs, with no way to resume except the power button (brutal shutdown!). sysrq-trigger doesn't work, ssh doesn't answer, keyboard and screen take no input. I have an Asus M5A97 R2.0 board with 16G of Crucial RAM. I've ordered a couple of other RAM sticks by Kingston. In your opinion, can it be a hardware problem, or a RAM problem? Using the program ramtest, no error was given. I have also tried this solution... but it hangs anyway. Your opinion? I forgot: the hang happens on every transfer, especially with big files (over 10G). I tried different kernel versions... same problem.
Changed the RAM and... it works fine. A transfer of over 2TB completed without panic. So the problem was my old RAM.
linux hang on transfer with rsync,can be a ram error?
1,602,259,652,000
I'm wondering if our system has lots of free RAM, or if it's almost out. I read here regarding MemAvailable but I am wondering how it applies to VirtualBox, as I am sure that's the reason the numbers I get from the following commands differ so much. cat /proc/meminfo | grep Mem && free -lg MemTotal: 32771584 kB MemFree: 203372 kB MemAvailable: 27739104 kB total used free shared buffers cached Mem: 31 31 0 0 0 25 Low: 31 31 0 High: 0 0 0 -/+ buffers/cache: 5 25 Swap: 31 0 31 We have only allocated about 15GB of RAM to our VM's and the system has 32GB. Does the above output seem normal, do we have 27 GB of free RAM to allocate? Or are we almost out? Or perhaps a memory leak? Any ideas are welcome! Thank you in advance Ubuntu 14.04 Virtualbox 5 x64 32 GB RAM
You have 25Gb free; it's all being used as cache. The free output is most telling: total used free shared buffers cached [...] -/+ buffers/cache: 5 25 See http://www.linuxatemyram.com/ for more details.
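As a quick sanity check, kernels from 3.14 on expose MemAvailable directly, which already accounts for reclaimable cache and buffers, so you can skip the buffers/cache arithmetic:

```shell
# MemAvailable is the kernel's own estimate of memory available for new
# workloads without swapping, cache reclaim already factored in.
awk '/^MemAvailable:/ {printf "available: %.1f GiB\n", $2 / 1048576}' /proc/meminfo
```

On the system in the question this prints roughly 26 GiB, matching the "-/+ buffers/cache" free column.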
VirtualBox Host - Ram Question - Mem Free Vs Mem Available - Possible Mem Leak?
1,602,259,652,000
I have a laptop with 8GB of RAM running 4.0.4-2-ARCH. Recently, I installed Android Studio, and bam, my normally pristine system was suddenly stuttering and all out freezing (more than once). Before Android Studio, I comfortably ran SMB, Minidlna, Plex, MySQL, PostgreSQL, Apache and Chrome simultaneously with no issues. Now I struggled to run even Chrome with Android Studio. In both cases, both free and System Monitor reported only 6.5G used RAM! So I did a little digging and enabled a swap of 5G (swapfile). I was surprised by the performance improvement! No more lags. But still, during peak load, (Studio and Chrome) the usage was 5G RAM + 1.5G swap. This confused me a bit and I have two questions. Firstly, if used memory was only 6G, why the stutter and especially, why the freezes? Secondly, my hard drive (1TB) is about 3 years old, and I would rather keep swap disabled. Is there some other way to achieve swap-like performance, without stressing the hard drive. I have already set swappiness to a low value of 10, which uses about 1G normally, and my laptop is already at it's maximum RAM limit. I have already read these excellent answers, but I am asking this because I am not satisfied by them. EDIT: These answers state that Linux will use all available memory and hence, for new programs, paging will slow down the computer. But if a lot the content Linux put into memory is stuff that can be managed without having done so, why is performance penalized when programs requiring large RAM are run? I mean, at best program boot up should be slow (paging). Do I need swap space if I have more than enough amount of RAM? Why is swap used when a lot of memory is still free? Is swap an anachronism?
While it may appear you have more than enough RAM, Linux buffers file data in memory. It is also common to place file systems like /tmp in memory to speed up access. If you don't have swap enabled, there are a lot of things that may end up stuck in memory that may prevent caching of frequently accessed files. Your choice is really: page out unused memory to disk; or repeatedly read files from disk. You have no options that won't require IO once memory (including buffers) gets filled. These days it is common to page memory out to swap that hasn't been recently accessed rather than swap out whole programs. Things that might be paged include: Non-PIC (position-independent-code) that has been loaded into a program and modified for its memory location; Data read into programs that is not being actively used; temporary files that are not being actively used (may be paged); and any other memory that isn't being actively used and has no alternate backing store. PIC and other unmodified data read from disk may use the file from which it is read as a backing store rather than using swap. You can use a program like sar to monitor paging, swapping, and disk I/O. I would expect you will see less use of disk when you have swap enabled. If you want to suspend to disk, it is common that you need a fairly large swap space into which memory can be copied when you suspend.
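If sar isn't installed, the kernel's raw counters in /proc/vmstat give a rough view of swap activity; a minimal sketch (the 2-second sample window is arbitrary):

```shell
# Cumulative pages swapped in/out since boot
grep -E '^pswp(in|out) ' /proc/vmstat

# Per-interval view: sample the counter twice and take the difference
a=$(awk '/^pswpout / {print $2}' /proc/vmstat)
sleep 2
b=$(awk '/^pswpout / {print $2}' /proc/vmstat)
echo "pages swapped out in 2s: $((b - a))"
```

A steadily climbing pswpout while you work is the signature of the thrashing described above.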
Why and how is swap improving performance, and can I disable it? [closed]
1,602,259,652,000
I have found on the web these two command lines to see the top 10 CPU- and RAM-eating processes: ram echo " SZ PID RUSER COMMAND";UNIX95= ps -ef -o 'sz pid ruser args' |sort -nr|head -10 cpu echo " SZ PID RUSER COMMAND";UNIX95= ps -ef -o 'sz pid ruser args' |sort -nr|head -10 Does anyone know a command line to see how many MB of RAM the top 10 processes are consuming? Thanks
Solution found in a very good script http://www.cfg2html.com/ echo "VSZ(KB) PID RUSER CPU TIME COMMAND" UNIX95= ps -e -o 'vsz pid ruser cpu time args' |sort -nr|head -25
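If you want resident memory in MB rather than raw kB, you can pipe the same ps output through awk. A Linux-flavoured sketch (HP-UX accepts the same UNIX95= XPG4 switch, but verify the column names on your box):

```shell
# Top 10 processes by resident set size, converted from kB to MB
echo "RSS(MB) PID RUSER COMMAND"
UNIX95= ps -e -o 'rss pid ruser args' | sort -nr | head -10 |
awk '{printf "%7.1f %6s %-8s", $1 / 1024, $2, $3
      for (i = 4; i <= NF; i++) printf " %s", $i
      print ""}'
```

Use vsz instead of rss if you care about virtual size, as in the script above.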
hpux: how much memory process eating?
1,602,259,652,000
We have VM machines (the machines are RHEL 7.9), and recently the machines' RAM was upgraded from 32G to 64G. Is it possible to verify on which date the memory was upgraded?
Your kernel logs at boot should give you the total memory. Did you search through them? See this previous post in stack exchange. On my system, I kind of just shotgun with grep -i ram /var/log/* Bib in comments notes: grep -Ei memory.+available /var/log/* may work better. It does on my test cases. But on RHEL it may well be /var/log/messages
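To illustrate what you're grepping for, here is a made-up sample of the kernel boot line as it would appear in /var/log/messages (real timestamps and figures will differ); the log timestamp on the matching line is the date you are after:

```shell
# Hypothetical sample in the format of /var/log/messages
cat > /tmp/messages.sample <<'EOF'
Jan 10 09:12:01 host kernel: Memory: 65830412K/67108864K available (14339K kernel code, 2400K rwdata)
EOF
grep -Ei 'memory.+available' /tmp/messages.sample
rm -f /tmp/messages.sample
```

Compare the reported total across boots: the first boot that shows ~64G instead of ~32G brackets the upgrade date, assuming the logs reach back that far.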
how to verify when RAM memory was update on RHEL machine
1,602,259,652,000
I am using nginx + varnish. I noticed that I have ~ 22GB / 32GB in use. I looked in htop and noticed that there are about 170 duplicate varnish processes. What could be the problem? Where should I start looking?
Threads What you're seeing in htop are Varnish threads, not Varnish processes. Varnish only has 2 long-running processes: The main process (owned by the varnish user) The worker process (owned by the vcache user) The reason why you're seeing so many of them is that Varnish wants the system to be responsive enough to handle a spike in traffic. Creating new threads is costly in terms of resources, and will slow the system down. That's why Varnish has 2 thread pools by default that contain 100 threads each. As demand grows, Varnish can scale this to 5000 threads per thread pool. These numbers are configurable via the thread_pool_min and thread_pool_max runtime settings. At the minimum 200 threads will be active, at the maximum 10000. Here's how you can see those values: varnishadm param.show thread_pool_min varnishadm param.show thread_pool_max You can add -p thread_pool_min=x and -p thread_pool_max=x to the varnishd process if you want to change the default values. Don't forget that these values apply per thread pool, and as there are 2 thread pools, the minimum and maximum value are to be multiplied by 2. If you run varnishstat -f MAIN.threads, you'll see the amount of currently active threads. Memory Varnish's memory consumption partly depends on the objects in the cache, and partly on activity inside worker threads. Object storage The -s runtime parameter in varnishd limits the size of the object cache. However, there's also an unbound transient storage that stores short-lived content and that temporarily hosts uncacheable data.
The following command will allow you to monitor object & transient storage: varnishstat -f "SMA.*.g_bytes" -f "SMA.*.g_space" SMA.s0.g_bytes monitors the amount of object memory that is in use SMA.s0.g_space monitors the amount of space that is left in the object storage SMA.Transient.g_bytes monitors the amount of transient memory storage that is in use SMA.Transient.g_space monitors the amount of transient memory storage that is available. This will be zero most of the time, because by default transient storage is unbounded. Each object in cache also has some overhead in terms of memory consumption. This is rather small, but the larger the number of objects in cache, the bigger the overhead. The following command allows you to determine the number of objects in cache: varnishstat -f MAIN.n_object Workspace memory Memory is also consumed by threads. Each thread has workspace memory available to keep track of state: The client workspace is set to 64KB per thread, and is used by worker threads that deal with client requests The backend workspace is used for worker threads involved in backend connections. Each of these threads can also consume 64KB per thread The session workspace is used for storing TCP session information; each session consumes some workspace memory as well. The following commands can be used to display workspace limits per worker thread type: varnishadm param.show workspace_client varnishadm param.show workspace_backend varnishadm param.show workspace_session Stack space Each worker thread can also consume stack space. This is used by libraries that Varnish depends on. The default limit per thread is 48KB. The following command allows you to check the limit on your system: varnishadm param.show thread_pool_stack Conclusion Varnish Cache, the open source version of Varnish, cannot limit its own total memory footprint. You can limit the object storage. You can also limit the size of the transient storage, but that may result in some annoying side effects.
It basically comes down to the size of the cache, transient storage, and traffic patterns. Varnish Enterprise is able to limit the total memory footprint. This is the commercial version of Varnish, which offers a proprietary storage engine called MSE (Massive Storage Engine). MSE has a feature called the Memory Governor that takes object storage, transient storage, workspace memory and stack space into account. If there is an imbalance, MSE will shrink the size of the cache to cater for extra memory needs.
Varnish duplicate process 170 once
1,602,259,652,000
(Manjaro 20, Linux 5.8.3, KDE, this laptop) When reasonably much is going on in my system, the RAM and swap usage often goes much higher than it should. For example currently I have a VM with a VM in it running and two instances of Minecraft, also some smaller stuff like music. That might sound like much, but my CPU is totally fine and even the sum of all the RAM usage shown in the task manager seems to be less than 2GB. Despite this, almost all of my 16GB RAM and 16GB swap are in use by… something. This is the output of free: total used free shared buff/cache available Mem: 15898 15218 151 305 527 92 Swap: 17490 16442 1047 This much calmer picture is seen in the task manager: I've read here that sometimes virtualisation causes weird RAM issues, but my outer VM is restricted to 8GB. Even if it somehow used all of that without showing it in the task manager, it would still not explain about a quarter of my RAM usage and none of the swap usage. free shows that I should not blame caching (for once). I've heard that disk I/O that can't be done in time gets queued up in RAM, but iotop shows nothing overly active. This is also not just a measurement error, I do indeed get lag spikes in pretty much everything due to this. So quite a portion of memory of the programs I'm actively using is in swap. What uses this additional RAM and why so much of it?
Following comments, ps aux --sort -vsize helped locate the big spender, baloo. I propose disabling it; it's an indexing service, and perhaps you can live without it. Control commands: balooctl status balooctl disable Edit ~/.config/baloofilerc and set Indexing-Enabled=false (perhaps needs a reboot). You could also try to purge and rebuild the cache, or narrow the directories it indexes. I can't tell why it's not shown in the graphical task manager; I never rely on or use those tools. top, or the more modern htop (interactive tools), and of course the ps command are your friends.
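If you prefer not to depend on ps column names, the same ranking can be read straight from /proc (Linux only); a sketch:

```shell
# Top 5 processes by virtual size, taken directly from /proc/<pid>/status.
# Kernel threads have no VmSize line and are silently skipped.
for s in /proc/[0-9]*/status; do
  awk -v pid="${s#/proc/}" '
    /^Name:/   { name = $2 }
    /^VmSize:/ { printf "%10d kB  %-8s %s\n", $2, substr(pid, 1, index(pid, "/") - 1), name }
  ' "$s" 2>/dev/null
done | sort -rn | head -n 5
```

Note that virtual size (VmSize) overstates real usage; swap VmRSS into the awk pattern if resident memory is what you want to rank by.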
High RAM and swap usage out of seemingly nowhere
1,427,416,989,000
In order to understand another answer (by glenn jackman): find / -type d -print0 | while read -r -d '' dir; do ls -ltr "$dir" | sed '$!d'; done the first step is to understand the usage of the option -r of the read command. First, I thought, it would be sufficient to simply execute man read to look up the meaning of the -r option, but I realized the man page does not contain any explanation for options at all, so I Googled for it.  I got some read -t, read -p examples, but no read -r.
There is no stand-alone read command: instead, it is a shell built-in, and as such is documented in the man page for bash: read [-ers] [-a aname] [-d delim] [-i text] [-n nchars] [-N nchars] [-p prompt] [-t timeout] [-u fd] [name ...]     ︙ -r Backslash does not act as an escape character.  The backslash is considered to be part of the line.  In particular, a backslash-newline pair may not be used as a line continuation. So, to summarize, read normally allows long lines to be broken using a trailing backslash character, and normally reconstructs such lines. This slightly surprising behavior can be deactivated using -r.
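A small demonstration of both behaviours, using printf to feed read a literal backslash:

```shell
# Without -r, the backslash escapes the next character and disappears
printf 'a\\tb\n' | { read x; echo "without -r: $x"; }       # without -r: atb
# With -r, the backslash is kept literally
printf 'a\\tb\n' | { IFS= read -r x; echo "with -r: $x"; }  # with -r: a\tb
# Without -r, backslash-newline acts as line continuation
printf 'one\\\ntwo\n' | { read x; echo "joined: $x"; }      # joined: onetwo
```

This is why robust scripts almost always use IFS= read -r: it delivers the line exactly as it appears in the input.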
What is the meaning of read -r?
1,427,416,989,000
I read that it is bad to write things like for line in $(command); the correct way seems to be instead: command | while IFS= read -r line; do echo $line; done This works great. But what if what I want to iterate on is the contents of a variable, not the direct result of a command? For example, imagine that you create the following file quickfox: The quick brown foxjumps\ over - the lazy , dog. I would like to be able to do something like this: # This is just for the example, # I could of course stream the contents to `read` variable=$(cat quickfox); while IFS= read -r line < $variable; do echo $line; done; # this is incorrect
In modern shells like bash and zsh, you have a very useful `<<<` redirector that accepts a string as an input. So you would do while IFS= read -r line ; do echo $line; done <<< "$variable" Otherwise, you can always do echo "$variable" | while IFS= read -r line ; do echo $line; done
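For example, with a multi-line variable (using bash's $'...' quoting for the embedded newline):

```shell
# The here string feeds the variable's content to read, one line per iteration
variable=$'first line\nsecond line'
while IFS= read -r line; do
  echo "got: $line"
done <<< "$variable"
# got: first line
# got: second line
```

The here string form has the advantage that the loop runs in the current shell, so any variables you set inside it survive, unlike the pipe version in bash.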
Iterating over multiple-line string stored in variable
1,427,416,989,000
How do I get read and write IOPS separately in Linux, using command line or in a programmatic way? I have installed sysstat package. Please tell me how do I calculate these separately using sysstat package commands. Or, is it possible to calculate them using file system? ex: /proc or /sys or /dev
iostat is part of the sysstat package, which is able to show overall iops if desired, or show them separated by reads/writes. Run iostat with the -d flag to only show the device information page, and -x for detailed information (separate read/write stats). You can specify the device you want information for by simply adding it afterwards on the command line. Try running iostat -dx and looking at the summary to get a feel for the output. You can also use iostat -dx 1 to show a continuously refreshing output, which is useful for troubleshooting or live monitoring, Using awk, field 4 will give you reads/second, while field 5 will give you writes/second. Reads/second only: iostat -dx <your disk name> | grep <your disk name> | awk '{ print $4; }' Writes/sec only: iostat -dx <your disk name> | grep <your disk name> | awk '{ print $5; }' Reads/sec and writes/sec separated with a slash: iostat -dx <your disk name> | grep <your disk name> | awk '{ print $4"/"$5; }' Overall IOPS (what most people talk about): iostat -d <your disk name> | grep <your disk name> | awk '{ print $2; }' For example, running the last command with my main drive, /dev/sda, looks like this: dan@daneel ~ $ iostat -dx sda | grep sda | awk '{ print $4"/"$5; }' 15.59/2.70 Note that you do not need to be root to run this either, making it useful for non-privileged users. TL;DR: If you're just interested in sda, the following command will give you overall IOPS for sda: iostat -d sda | grep sda | awk '{ print $2; }' If you want to add up the IOPS across all devices, you can use awk again: iostat -d | tail -n +4 | head -n -1 | awk '{s+=$2} END {print s}' This produces output like so: dan@daneel ~ $ iostat -d | tail -n +4 | head -n -1 | awk '{s+=$2} END {print s}' 18.88
How to get total read and total write IOPS in Linux?
1,427,416,989,000
With ksh I'm using read as a convenient way to separate values: $ echo 1 2 3 4 5 | read a b dump $ echo $b $a 2 1 $ But it fails in Bash: $ echo 1 2 3 4 5 | read a b dump $ echo $b $a $ I didn't find a reason in the man page why it fails, any idea?
bash runs the right-hand side of a pipeline in a subshell context, so changes to variables (which is what read does) are not preserved — they die when the subshell does, at the end of the command. Instead, you can use process substitution: $ read a b dump < <(echo 1 2 3 4 5) $ echo $b $a 2 1 In this case, read is running within our primary shell, and our output-producing command runs in the subshell. The <(...) syntax creates a subshell and connects its output to a pipe, which we redirect into the input of read with the ordinary < operation. Because read ran in our main shell the variables are set correctly. As pointed out in a comment, if your goal is literally to split a string into variables somehow, you can use a here string: read a b dump <<<"1 2 3 4 5" I assume there's more to it than that, but this is a better option if there isn't.
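An aside beyond the original answer: bash 4.2 added the lastpipe shell option, which makes bash behave like ksh here by running the last element of the pipeline in the current shell. It only takes effect when job control is off, i.e. in scripts, not at an interactive prompt:

```shell
#!/bin/bash
shopt -s lastpipe            # requires bash >= 4.2, non-interactive shell
echo 1 2 3 4 5 | read a b dump
echo "$b $a"                 # prints: 2 1
```

Process substitution and here strings remain the portable-within-bash answers for interactive use, since lastpipe is ignored there.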
In bash, read after a pipe is not setting values
1,427,416,989,000
In some Bourne-like shells, the read builtin can not read the whole line from file in /proc (the command below should be run in zsh, replace $=shell with $shell with other shells): $ for shell in bash dash ksh mksh yash zsh schily-sh heirloom-sh "busybox sh"; do printf '[%s]\n' "$shell" $=shell -c 'IFS= read x </proc/sys/fs/file-max; echo "$x"' done [bash] 602160 [dash] 6 [ksh] 602160 [mksh] 6 [yash] 6 [zsh] 6 [schily-sh] 602160 [heirloom-sh] 602160 [busybox sh] 6 The read standard requires the standard input to be a text file; does that requirement cause the varied behaviors? Reading the POSIX definition of text file, I did some verification: $ od -t a </proc/sys/fs/file-max 0000000 6 0 2 1 6 0 nl 0000007 $ find /proc/sys/fs -type f -name 'file-max' /proc/sys/fs/file-max There's no NUL character in the content of /proc/sys/fs/file-max, and find also reported it as a regular file (Is this a bug in find?). I guess the shell did something under the hood, like file: $ file /proc/sys/fs/file-max /proc/sys/fs/file-max: empty
The problem is that those /proc files on Linux appear as text files as far as stat()/fstat() is concerned, but do not behave as such. Because it's dynamic data, you can only do one read() system call on them (for some of them at least). Doing more than one could get you two chunks of two different contents, so instead it seems a second read() on them just returns nothing (meaning end-of-file) (unless you lseek() back to the beginning (and to the beginning only)). The read utility needs to read the content of files one byte at a time to be sure not to read past the newline character. That's what dash does: $ strace -fe read dash -c 'read a < /proc/sys/fs/file-max' read(0, "1", 1) = 1 read(0, "", 1) = 0 Some shells like bash have an optimisation to avoid having to do so many read() system calls. They first check whether the file is seekable, and if so, read in chunks as then they know they can put the cursor back just after the newline if they've read past it: $ strace -e lseek,read bash -c 'read a' < /proc/sys/fs/file-max lseek(0, 0, SEEK_CUR) = 0 read(0, "1628689\n", 128) = 8 With bash, you'd still have problems for proc files that are more than 128 bytes large and can only be read in one read system call. bash also seems to disable that optimization when the -d option is used. ksh93 takes the optimisation even further so much as to become bogus. ksh93's read does seek back, but remembers the extra data it has read for the next read, so the next read (or any of its other builtins that read data like cat or head) doesn't even try to read the data (even if that data has been modified by other commands in between): $ seq 10 > a; ksh -c 'read a; echo test > a; read b; echo "$a $b"' < a 1 2 $ seq 10 > a; sh -c 'read a; echo test > a; read b; echo "$a $b"' < a 1 st
Why some shells `read` builtin fail to read the whole line from file in `/proc`?
1,427,416,989,000
I have a script which connects to a remote server and check if some package is installed: ssh root@server 'bash -s' < myscript.sh myscript.sh: OUT=`rpm -qa | grep ntpdate` if [ "$OUT" != "" ] ; then echo "ntpdate already installed" else yum install $1 fi This example could be simplified. Here is myscript2.sh which has same problem: read -p "Package is not installed. Do you want to install it (y/n)?" choise My problem is that bash can not read my answers interactively. Is there a way to execute local script remotely without losing ability to prompt user?
Try something like this: $ ssh -t yourserver "$(<your_script)" The -t forces a tty allocation, $(<your_script) reads the whole file and in this cases passes the content as one argument to ssh, which will be executed by the remote user's shell. If the script needs parameters, pass them after the script: $ ssh -t yourserver "$(<your_script)" arg1 arg2 ... Works for me, not sure if it's universal though.
Bash: interactive remote prompt
1,427,416,989,000
I have a use case where I need to read in multiple variables at the start of each iteration and read in an input from the user into the loop. Possible paths to solution which I do not know how to explore -- For assignment use another filehandle instead of stdin Use a for loop instead of ... | while read ... ... I do not know how to assign multiple variables inside a for loop echo -e "1 2 3\n4 5 6" |\ while read a b c; do echo "$a -> $b -> $c"; echo "Enter a number:"; read d ; echo "This number is $d" ; done
If I got this right, I think you want to basically loop over lists of values, and then read another within the loop. Here's a few options, 1 and 2 are probably the sanest. 1. Emulate arrays with strings Having 2D arrays would be nice, but not really possible in Bash. If your values don't have whitespace, one workaround to approximate that is to stick each set of three numbers into a string, and split the strings inside the loop: for x in "1 2 3" "4 5 6"; do read a b c <<< "$x"; read -p "Enter a number: " d echo "$a - $b - $c - $d "; done Of course you could use some other separator too, e.g. for x in 1:2:3 ... and IFS=: read a b c <<< "$x". 2. Replace the pipe with another redirection to free stdin Another possibility is to have the read a b c read from another fd and direct the input to that (this should work in a standard shell): while read a b c <&3; do printf "Enter a number: " read d echo "$a - $b - $c - $d "; done 3<<EOF 1 2 3 4 5 6 EOF And here you can also use a process substitution if you want to get the data from a command: while read a b c <&3; ...done 3< <(echo $'1 2 3\n4 5 6') (process substitution is a bash/ksh/zsh feature) 3. Take user input from stderr instead Or, the other way around, using a pipe like in your example, but have the user input read from stderr (fd 2) instead of stdin where the pipe comes from: echo $'1 2 3\n4 5 6' | while read a b c; do read -u 2 -p "Enter a number: " d echo "$a - $b - $c - $d "; done Reading from stderr is a bit odd, but actually often works in an interactive session. (You could also explicitly open /dev/tty, assuming you want to actually bypass any redirections, that's what stuff like less uses to get the user's input even when the data is piped to it.) Though using stderr like that might not work in all cases, and if you're using some external command instead of read, you'd at least need to add a bunch of redirections to the command. 
Also, see Why is my variable local in one 'while read' loop, but not in another seemingly similar loop? for some issues regarding ... | while. 4. Slice parts of an array as needed I suppose you could also approximate a 2D-ish array by copying slices of a regular one-dimensional one: data=(1 2 3 4 5 6) n=3 for ((i=0; i < "${#data[@]}"; i += n)); do a=( "${data[@]:i:n}" ) read -p "Enter a number: " d echo "${a[0]} - ${a[1]} - ${a[2]} - $d " done You could also assign ${a[0]} etc. to a, b etc if you want names for the variables, but Zsh would do that much more nicely.
Use read as a prompt inside a while loop driven by read?
1,427,416,989,000
In zsh, running the command read -p 'erasing all directories (y/n) ?' ans, throws the error, read: -p: no coprocess But in bash, it prints a prompt. How do I do this in zsh?
You can still use read, you just need to print a prompt first. In zsh, -p indicates that input should be read from a coprocess instead of indicating the prompt to use. You can do the following instead, which is POSIX-compliant: printf >&2 '%s ' 'erase all directories? (y/n)' read ans Like for ksh/zsh's read 'var?prompt' or bash's read -p prompt var, the prompt is issued on stderr so as not to pollute the normal output of your script.
read command in zsh throws error
1,427,416,989,000
How do I handle the backspaces entered, it shows ^? if tried & how read counts the characters, as in 12^?3 already 5 characters were complete(though all of them were not actual input), but after 12^?3^? it returned the prompt, weird. Please help! -bash-3.2$ read -n 5 12^?3^?-bash-3.2$
When you read a whole line with plain read (or read -r or other options that don't affect this behavior), the kernel-provided line editor recognizes the Backspace key to erase one character, as well as a very few other commands (including Return to finish the input line and send it). The shortcut keys can be configured with the stty utility. The terminal is said to be in cooked mode when its line editor is active. In raw mode, each character typed on the keyboard is transmitted to the application immediately. In cooked mode, the characters are stored in a buffer and only complete lines are transmitted to the application. In order to stop reading after a fixed number of characters so as to implement read -n, bash has to switch to raw mode. In raw mode, the terminal doesn't do any processing of the Backspace key (by the time you press Backspace, the preceding character has already been sent to bash), and bash doesn't do any processing either (presumably because this gives the greater flexibility of allowing the script to do its own processing). You can pass the option -e to enable bash's own line editor (readline, which is a proper line editor, not like the kernel's extremely crude one). Since bash is doing the line edition, it can stop reading once it has the requested number of characters.
How to handle backspace while reading?
1,427,416,989,000
I need to run a script by piping it through bash with wget(rather than running it directly with bash). $ wget -O - http://example.com/my-script.sh | bash It's not working because my script has read statements in it. For some reason these don't work when piping to bash: # Piping to bash works in general $ echo 'hi' hi $ echo "echo 'hi'" | bash hi # `read` works directly $ read -p "input: " var input: <prompt> # But not when piping - returns immediately $ echo 'read -p "input: " var' | bash $ Instead of prompting input: and asking for a value as it should, the read command just gets passed over by bash. Does anyone know how I can pipe a script with read to bash?
read reads from standard input. But the standard input of the bash process is already taken by the script. Depending on the shell, either read won't read anything because the shell has already read and parsed the whole script, or read will consume unpredictable lines in the script. Simple solution: bash -c "$(wget -O - http://example.com/my-script.sh)" More complex solution, more for education purposes than to illustrate a good solution for this particular scenario: echo '{ exec </dev/tty; wget -O - http://example.com/my-script.sh; }' | bash
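A minimal reproduction of both the failure and the fix, with no network involved (the two-line script below stands in for my-script.sh):

```shell
script='read x
echo "x=[$x]"'

# Piped in: read consumes the script's own second line, so the echo
# is never executed and nothing is printed
printf '%s\n' "$script" | bash

# Passed as an argument: stdin stays free for read to use
echo hello | bash -c "$script"      # prints: x=[hello]
```

The same reasoning explains why the exec </dev/tty variant works: it reattaches the script's stdin to the terminal before read runs.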
Piping a script with "read" to bash
1,427,416,989,000
I'm used to bash's builtin read function in while loops, e.g.: echo "0 1 1 1 1 2 2 3" |\ while read A B; do echo $A + $B | bc; done I've been working on some make project, and it became prudent to split files and store intermediary results. As a consequence I often end up shredding single lines into variables. While the following example works pretty well, head -n1 somefile | while read A B C D E FOO; do [... use vars here ...]; done it's sort of stupid, because the while loop will never run more than once. But without the while, head -n1 somefile | read A B C D E FOO; [... use vars here ...] The read variables are always empty when I use them. I never noticed this behaviour of read, because usually I'd use while loops to process many similar lines. How can I use bash's read builtin without a while loop? Or is there another (or even better) way to read a single line into multiple (!) variables? Conclusion The answers teach us, it's a problem of scoping. The statement cmd0; cmd1; cmd2 | cmd3; cmd4 is interpreted such that the commands cmd0, cmd1, and cmd4 are executed in the same scope, while the commands cmd2 and cmd3 are each given their own subshell, and consequently different scopes. The original shell is the parent of both subshells.
It's because the part where you use the vars is a new set of commands. Use this instead: head somefile | { read A B C D E FOO; echo $A $B $C $D $E $FOO; } Note that, in this syntax, there must be a space after the { and a ; (semicolon) before the }.  Also -n1 is not necessary; read only reads the first line. For better understanding, this may help you; it does the same as above: read A B C D E FOO < <(head somefile); echo $A $B $C $D $E $FOO Edit: It's often said that the next two statements do the same: head somefile | read A B C D E FOO read A B C D E FOO < <(head somefile) Well, not exactly. The first one is a pipe from head to bash's read builtin. One process's stdout to another process's stdin. The second statement is redirection and process substitution. It is handled by bash itself. It creates a FIFO (named pipe, <(...)) that head's output is connected to, and redirects (<) it to the read process. So far these seem equivalent. But when working with variables it can matter. In the first one the variables are not set after executing. In the second one they are available in the current environment. Every shell has another behavior in this situation. See that link for which they are. In bash you can work around that behavior with command grouping {}, process substitution (< <()) or Here strings (<<<).
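A compact check that the grouped read really does see the variables, plus the here-string form that keeps them in the current shell:

```shell
# Command grouping: read and the code using the variables share one subshell
echo "1 2 3" | { read a b c; echo "sum: $((a + b + c))"; }   # prints: sum: 6

# Here string (bash): no subshell at all, variables persist afterwards
read A B C <<< "5 7 11"
echo "$A + $B + $C = $((A + B + C))"                         # prints: 5 + 7 + 11 = 23
```

Prefer the here-string or process-substitution form whenever you need the variables after the read, since the grouped-pipe variables vanish with their subshell.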
Use bash's read builtin without a while loop
1,427,416,989,000
I can't figure this out. I need to look at every line in a file and check whether it matches a word that is given in a variable. I started with command read, but I don't know what I am supposed to use after that. I tried grep, but I probably used it wrongly. while read line; do if [ $condition ] ;then echo "ok" fi done < file.txt
Here's a quickie for you; simply what we're doing is Line 1: While reading file into variable line Line 2: If the line matches the regex "bird", echo that line. Do whatever actions you need here, in this if statement. Line 3: End of while loop, which reads the file foo.text via redirection. #!/bin/bash while read line; do if [[ $line =~ bird ]] ; then echo $line; fi done <foo.text Note that "bird" is a regex. So you could replace it with, for example: bird.*word to match the same line with a regular expression. Try it with a file like so, called foo.text with the contents: my dog is brown her cat is white the bird is the word
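Since the question says the word arrives in a variable, here is the same loop matching against $word; with =~ the variable must be unquoted so its content is treated as a regex:

```shell
#!/bin/bash
word="bird"
while IFS= read -r line; do
  if [[ $line =~ $word ]]; then    # $word unquoted: treated as a regex
    echo "$line"
  fi
done < foo.text
```

Run against the sample foo.text above, this prints only the line "the bird is the word".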
Read lines and match against pattern
1,427,416,989,000
The bash man page says the following about the read builtin: The exit status is zero, unless end-of-file is encountered This recently bit me because I had the -e option set and was using the following code: read -rd '' json <<EOF { "foo":"bar" } EOF I just don't understand why it would be desirable to exit non successfully in this scenario. In what situation would this be useful?
read reads a record (line by default, but ksh93/bash/zsh allow other delimiters with -d, even NUL with zsh/bash) and returns success as long as a full record has been read. read returns non-zero when it finds EOF while the record delimiter has still not been encountered. That allows you do do things like while IFS= read -r line; do ... done < text-file Or with zsh/bash while IFS= read -rd '' nul_delimited_record; do ... done < null-delimited-list And that loop to exit after the last record has been read. You can still check if there was more data after the last full record with [ -n "$nul_delimited_record" ]. In your case, read's input doesn't contain any record as it doesn't contain any NUL character. In bash, it's not possible to embed a NUL inside a here document. So read fails because it hasn't managed to read a full record. It stills stores what it has read until EOF (after IFS processing) in the json variable. In any case, using read without setting $IFS rarely makes sense. For more details, see Understanding "IFS= read -r line".
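This exit status is also what the common idiom below exploits to still process a final line that lacks a trailing newline:

```shell
# read returns non-zero on the last (unterminated) chunk, but has still
# stored it in $line, so the [ -n "$line" ] test lets the loop body run once more
printf 'complete line\npartial' | while IFS= read -r line || [ -n "$line" ]; do
  echo "got: $line"
done
# got: complete line
# got: partial
```

Without the || [ -n "$line" ] clause, the "partial" record would be read into the variable but the loop would exit before printing it.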
For what purpose does "read" exit 1 when EOF is encountered?
1,427,416,989,000
I have a local machine which is supposed to make an SSH session to a remote master machine and then another inner SSH session from the master to each of some remote slaves, and then execute 2 commands i.e. to delete a specific directory and recreate it. Note that the local machine has passwordless SSH to the master and the master has passwordless SSH to the slaves. Also all hostnames are known in .ssh/config of the local/master machines and the hostnames of the slaves are in slaves.txt locally and I read them from there. So what I do and works is this: username="ubuntu" masterHostname="myMaster" while read line do #Remove previous folders and create new ones. ssh -n $username@$masterHostname "ssh -t -t $username@$line "rm -rf Input Output Partition"" ssh -n $username@$masterHostname "ssh -t -t $username@$line "mkdir -p EC2_WORKSPACE/$project Input Output Partition"" #Update changed files... ssh -n $username@$masterHostname "ssh -t -t $username@$line "rsync --delete -avzh /EC2_NFS/$project/* EC2_WORKSPACE/$project"" done < slaves.txt This cluster is on Amazon EC2 and I have noticed that there are 6 SSH sessions created at each iteration which induces a significant delay. I would like to combine these 3 commands into 1 to get fewer SSH connections. So I tried to combine the first 2 commands into ssh -n $username@$masterHostname "ssh -t -t $username@$line "rm -rf Input Output Partition && mkdir -p EC2_WORKSPACE/$project Input Output Partition"" But it doesn't work as expected. It seems to execute the first one (rm -rf Input Output Partition) and then exits the session and goes on. What can I do?
Consider that && is a logical operator. It does not mean "also run this command"; it means "run this command if the other succeeded". That means if the rm command fails (which will happen if any of the three directories don't exist) then the mkdir won't be executed. This does not sound like the behaviour you want; if the directories don't exist, it's probably fine to create them. Use ; The semicolon ; is used to separate commands. The commands are run sequentially, waiting for each before continuing onto the next, but their success or failure has no impact on each other. Escape inner quotes Quotes inside other quotes should be escaped, otherwise you're creating an extra end point and start point. Your command: ssh -n $username@$masterHostname "ssh -t -t $username@$line "rm -rf Input Output Partition && mkdir -p EC2_WORKSPACE/$project Input Output Partition"" Becomes: ssh -n $username@$masterHostname "ssh -t -t $username@$line \"rm -rf Input Output Partition && mkdir -p EC2_WORKSPACE/$project Input Output Partition\"" Your current command, because of the lack of escaped quotes, should be executing: ssh -n $username@$masterHostname "ssh -t -t $username@$line "rm -rf Input Output Partition if that succeeds: mkdir -p EC2_WORKSPACE/$project Input Output Partition"" # runs on your local machine You'll notice the syntax highlighting shows the entire command as red on here, which means the whole command is the string being passed to ssh. Check your local machine; you may have the directories Input Output and Partition where you were running this.
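A minimal sketch of the `&&` versus `;` difference, runnable anywhere bash is available:

```shell
# With '&&' the second command only runs if the first succeeded
out_and=$(bash -c 'false && echo "second ran"; echo "done"')
# With ';' both commands run regardless of success or failure
out_semi=$(bash -c 'false; echo "second ran"')
printf '%s / %s\n' "$out_and" "$out_semi"
```

In the first case "second ran" never appears because false failed; with the semicolon it always does.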
Multiple commands during an SSH inside an SSH session
1,427,416,989,000
I need to write a shell script that find and print all files in a directory which starts with the string: #include. Now, I know how to check if a string is in the file, by using: for f in `ls`; do if grep -q 'MyString' $f; then: #DO SOMETHING fi but how can I apply this to the first line? I thought to maybe create a variable of the first line and check if it starts with #include, but I'm not sure how to do this. I tried the read command but I fail to read into a variable. I'd like to hear other approaches to this problem; maybe awk? Anyway, remember, I need to check if the first line starts with #include, not if it contains that string. That's why I found those questions: How to print file content only if the first line matches a certain pattern? https://stackoverflow.com/questions/5536018/how-to-print-matched-regex-pattern-using-awk they are not completely helping.
It is easy to check if the first line starts with #include in (GNU and AT&T) sed: sed -n '1{/^#include/p};q' file Or simplified (and POSIX compatible): sed -n '/^#include/p;q' file That will have an output only if the file contains #include in the first line. That only needs to read the first line to make the check, so it will be very fast. So, a shell loop for all files (with sed) should be like this: for file in * do [ "$(sed -n '/^#include/p;q' "$file")" ] && printf '%s\n' "$file" done If there are only files (not directories) in the pwd. If what you need is to print all lines of the file, a solution similar to the first code posted will work (GNU & AT&T version): sed -n '1{/^#include/!q};p' file Or, (BSD compatible POSIXfied version): sed -ne '1{/^#include/!q;}' -e p file Or: sed -n '1{ /^#include/!q } p ' file
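Since the question invites other approaches: the test can also be done by the shell alone, with no external commands at all. A sketch, where starts_with_include is a made-up helper name:

```shell
# Succeeds only if the first line of the given file begins with #include
starts_with_include() {
    IFS= read -r first < "$1" && case $first in
        '#include'*) true ;;
        *)           false ;;
    esac
}

tmp=$(mktemp)
printf '#include <stdio.h>\nint main(void) { return 0; }\n' > "$tmp"
if starts_with_include "$tmp"; then out=match; else out=nomatch; fi
echo "$out"
rm -f "$tmp"
```

Like the sed version, only the first line is ever read, so it stays fast even on large files.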
how to check if the first line of a file contains a specific string? [duplicate]
1,427,416,989,000
I have this while loop and here-document combo which I run in Bash 4.3.48(1) and I don't understand its logic at all. while read file; do source ~/unwe/"$file" done <<-EOF x.sh y.sh EOF My question is comprised of these parts: What does the read do here (I always use read to declare a variable and assign its value interactively, but I'm missing what it's supposed to do here). What is the meaning of while read? Where does the concept of while come in here? If the here-document itself comes after the loop, how is it even affected by the loop? I mean, it comes after done, and not inside the loop, so what's the actual association between these two structures? Why does this fail? while read file; do source ~/unwe/"$file" done <<-EOF x.sh y.sh EOF I mean, done is done... So why does it matter if done <<-EOF is on the same line as the loop? If I recall correctly, I did have a case in which a for loop was one-liner and still worked.
The read command reads from its standard input stream and assigns what's read to the variable file (it's a bit more complicated than that, see long discussion here). The standard input stream is coming from the here-document redirected into the loop after done. If not given data from anywhere, it will read from the terminal, interactively. In this case though, the shell has arranged to connect its input stream to the here-document. while read will cause the loop to iterate until the read command returns a non-zero exit status. This will happen if there are any errors, or (most commonly) when there is no more data to be read (its input stream is in an end-of-file state). The convention is that any utility that wishes to signal an error or "false" or "no" to the calling shell does so by returning a non-zero exit status. A zero exit status signals "true" or "yes" or "no error". This status, would you wish to inspect it, is available in $? (only from the last executed utility). The exit status may be used in if statements and while loops or anywhere where a test is required. For example if grep -q 'pattern' file; then ...; fi A here-document is a form of redirection. In this case, it's a redirection into the loop. Anything inside the loop could read from it but in this case it's only the read command that does. Do read up on here-documents. If the input was coming from an ordinary file, the last line would have been done <filename Seeing the loop as one single command may make this more intuitive: while ...; do ...; done <filename which is one case of somecommand <filename Some shells also support "here-strings" with <<<"string": cat <<<"This is the here-string" DavidFoerster points out that if any of the two scripts x.sh and y.sh reads from standard input, without explicitly being given data to read from a file or from elsewhere, the data read will actually come from the here-document.
With an x.sh that contains only read a, this would make the variable a contain the string y.sh, and the y.sh script would never run. This is due to the fact that the standard input is redirected for all commands in the while loop (and also "inherited" by any invoked script or command) and the second line is "consumed" by x.sh before the while loop's read can read it. If this behaviour is unwanted, it can be avoided, but it's a bit tricky. It fails because there is no ; or newline before done. Without ; or newline before done, the word done will be taken as an argument of source, and the loop will additionally not be properly closed (this is a syntax error). It is almost true that any ; may be replaced by a newline (at least when it's a command delimiter). It signals the end of a command, as does |, &, && and || (and probably others that I have forgotten).
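The standard-input gotcha described above can be demonstrated without any interactivity; a sketch using two throwaway scripts, where x.sh deliberately contains a bare read a:

```shell
tmp=$(mktemp -d)
printf 'read a\necho "x.sh consumed: $a"\n' > "$tmp/x.sh"
printf 'echo "y.sh ran"\n' > "$tmp/y.sh"

# x.sh's own 'read' swallows the second here-document line,
# so the loop never sees "y.sh" and y.sh never runs
out=$(
while read -r f; do
    . "$tmp/$f"
done <<EOF
x.sh
y.sh
EOF
)
echo "$out"
rm -rf "$tmp"
```

The only output is from x.sh, which reports that it read "y.sh" from the here-document itself.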
A while loop and an here-document - what happens when?
1,427,416,989,000
General problem I want to write a script that interacts with the user even though it is in the middle of a chain of pipes. Concrete example Concretely, it takes a file or stdin, displays lines (with line numbers), asks the user to input a selection or line numbers, and then prints the corresponding lines to stdout. Let's call this script selector. Then basically, I want to be able to do grep abc foo | selector > myfile.tmp If foo contains blabcbla foo abc bar quux xyzzy abc then selector presents me (on the terminal, not in myfile.tmp!) with options 1) blabcbla 2) foo abc bar 3) xyzzy abc Select options: after which I type in 2-3 and end up with foo abc bar xyzzy abc as contents of myfile.tmp. I've got a selector script up and running, and basically it is working perfectly if I don't redirect input and output. So selector foo behaves like I want. However, when piping things together as in the above example, selector prints the presented options to myfile.tmp and tries to read a selection from the grepped input. My approach I've tried to use the -u flag of read, as in exec 4< /proc/$PPID/fd/0 exec 4> /proc/$PPID/fd/1 nl $INPUT >4 read -u4 -p"Select options: " but this doesn't do what I hoped it would. Q: How do I get actual user interaction?
Using /proc/$PPID/fd/0 is unreliable: the parent of the selector process may not have the terminal as its input. There is a standard path that always refers to the current process's terminal: /dev/tty. nl "$INPUT" >/dev/tty read -p"Select options: " </dev/tty or exec </dev/tty >/dev/tty nl "$INPUT" read -p"Select options: "
How to read user input when using script in pipe
1,427,416,989,000
The following commands seem to be roughly equivalent: read varname varname=$(head -1) varname=$(sed 1q) One difference is that read is a shell builtin while head and sed aren't. Besides that, is there any difference in behavior between the three? My motivation is to better understand the nuances of the shell and key utilities like head,sed. For example, if using head is an easy replacement for read, then why does read exist as a builtin?
Neither efficiency nor builtinness is the biggest difference. All of them will return different output for certain input. head -n1 will provide a trailing newline only if the input has one. sed 1q will always provide a trailing newline, but otherwise preserve the input. read will never provide a trailing newline, and will interpret backslash sequences. Additionally, read has additional options, such as splitting, timeouts, and input history, some of which are standard and others vary between shells.
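A sketch of the first two differences (the byte count relies on head copying the input bytes through without adding a newline, which both GNU and BSD head do):

```shell
input='a\b'
# Without -r, read treats the backslash as an escape and removes it
via_read=$(printf '%s\n' "$input" | { read line; printf '%s' "$line"; })
# head -1 passes the bytes through untouched
via_head=$(printf '%s\n' "$input" | head -n 1)
# head also preserves a *missing* final newline: 1 byte in, 1 byte out
bytes=$(printf 'x' | head -n 1 | wc -c)
printf '%s %s %s\n' "$via_read" "$via_head" "$bytes"
```

So the three are interchangeable only for input with no backslashes and a well-formed final newline.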
Is there a difference between read, head -1, and sed 1q?
1,427,416,989,000
I want to simply wait for the user to acknowledge a message by pressing Return. In bash, I am able to call $ read $ However, in sh (dash in my case), I get $ read sh: 1: read: arg count $ It seems like I must provide an argument? Where does that difference come from?
The standard read utility takes at least one variable's name. Some shell's read implementation uses a default variable, like REPLY, to store the read data if no name is supplied, but dash, aiming to be a POSIX compliant shell, does not (as it's not required to do so by the standard). The equivalent in the dash shell would be read REPLY The bash shell, even in its POSIX mode, does keep some non-POSIX features enabled. This is one of them, which means that read with no variable's name will work even if you run a bash --posix shell. For a full list of things that happens when you enable POSIX mode in bash (which this question really isn't about), see https://www.gnu.org/software/bash/manual/html_node/Bash-POSIX-Mode.html
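A minimal, portable sketch:

```shell
# Works in dash, bash, ksh and zsh alike: always give read a variable name
out=$(printf 'world\n' | { read -r answer; echo "hello, $answer"; })
echo "$out"
```

Naming the variable explicitly (here `answer`) avoids relying on the non-POSIX default `REPLY` behaviour.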
Error sh: 1: read: arg count
1,427,416,989,000
I have a bash script that is named reader. It reads user input: #!/bin/bash read -p "What is your name?" username echo "Hello, ${username}" Running the script by source reader (EDIT: from the zsh shell), I get the error reader:read:2: -p: no coprocess. It doesn't give this error when I run it as ./reader. Other read options do not produce this error. For example, I could have done: #!/bin/bash echo -n "What is your name?" read username echo "Hello, ${username}" Where does the no coprocess error come from? What does it mean? And what should I do about it?
When you use source, it's the current shell that reads the file, not the shell mentioned on the #! line. And I assume that your shell is either zsh or ksh93 which uses read -p to read from a co-process. An example of that in ksh93: cat /etc/passwd |& while IFS=":" read -p user rest; do printf 'There is a user called %s\n' "$user" done To run your script, either explicitly mention the interpreter: $ bash script.sh ... or make the script executable and run it: $ chmod +x script.sh $ ./script.sh To get read to use a custom prompt in both zsh and ksh93: read username"?What's you name? " printf 'Hello %s!\n' "$username"
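If the same script must work under several shells, the prompt can be printed explicitly instead of relying on read -p at all; a sketch where ask is a made-up helper:

```shell
# Prints the prompt on stderr (as bash's read -p does) and reads a reply
ask() {
    printf '%s' "$2" >&2
    read -r "$1"
}

out=$(printf 'Alice\n' | { ask name "What is your name? " 2>/dev/null; echo "Hello, ${name}"; })
echo "$out"
```

This sidesteps the zsh/ksh93 meaning of -p entirely, since printf and read varname are POSIX.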
"no coprocess" error when using read [duplicate]
1,427,416,989,000
The output of the command below is weird to me. Why does it not give me back element 5? $ echo '0,1,2,3,4,5' | while read -d, i; do echo $i; done 0 1 2 3 4 I would expect '5' to be returned as well. Running GNU bash, version 4.2.46(2)-release (x86_64-redhat-linux-gnu). Adding a comma works, but my input data does not have a comma. Am I missing something?
With read, -d is used to terminate the input lines (i.e. not to separate input lines). Your last "line" contains no terminator, so read returns false on EOF and the loop exits (even though the final value was read). echo '0,1,2,3,4,5' | { while read -d, i; do echo "$i"; done; echo "last value=$i"; } (Even with -d, read also uses $IFS, absorbing whitespace including the trailing \n on the final value that would appear using other methods such as readarray) The Bash FAQ discusses this, and how to handle various similar cases: Bash Pitfalls #47 IFS=, read [...] BashFAQ 001 How can I read a file [...] line-by-line BashFAQ 005 How can I use array variables?
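A sketch of the || [ -n "$i" ] idiom from the FAQ entries above, which rescues the final unterminated field:

```shell
# The "|| [ -n ... ]" test runs the body once more for the field
# that read stored before hitting end-of-file without a comma
out=$(echo '0,1,2,3,4,5' | bash -c '
    while read -d, i || [ -n "$i" ]; do
        printf "[%s]" "$i"
    done')
echo "$out"
```

The loop still terminates, because on the iteration after the last field read fails and leaves $i empty.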
Bash Read: Reading comma separated list, last element is missed
1,427,416,989,000
I know this question has been already asked & answered, but the solution I found listens for space and enter: while [ "$key" != '' ]; do read -n1 -s -r key done Is there a way (in bash) to make a script that will wait only for the space bar?
I suggest to use only read -d ' ' key. -d delim: continue until the first character of DELIM is read, rather than newline See: help read
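Fed non-interactively, a piped space can stand in for the key press; a sketch:

```shell
# read -d " " succeeds only once a space arrives; Enter alone leaves it failing
pressed_space=$(printf ' ' | bash -c 'read -d " " key && echo "continuing"; :')
pressed_enter=$(printf '\n' | bash -c 'read -d " " key && echo "continuing"; :')
printf '%s|%s\n' "$pressed_space" "$pressed_enter"
```

The trailing `:` just keeps the subshell's exit status clean when the read fails.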
Press SPACE to continue (not ENTER)
1,427,416,989,000
If I try to execute read -a fooArr -d '\n' < bar the exit code is 1 -- even though it accomplishes what I want it to; put each line of bar in an element of the array fooArr (using bash 4.2.37). Can someone explain why this is happening I've found other ways to solve this, like the ones below, so that's not what I'm asking for. for ((i=1;; i++)); do read "fooArr$i" || break; done < bar or mapfile -t fooArr < bar
What needs to be explained is that the command appeared to work, not its exit code. '\n' is two characters: a backslash \ and a letter n. What you thought you needed was $'\n', which is a linefeed (but that wouldn't be right either, see below). The -d option does this: -d delim continue until the first character of DELIM is read, rather than newline So without that option, read would read up to a newline, split the line into words using the characters in $IFS as separators, and put the words into the array. If you specified -d $'\n', setting the line delimiter to a newline, it would do exactly the same thing. Setting -d '\n' means that it will read up to the first backslash (but, once again, see below), which is the first character in delim. Since there is no backslash in your file, the read will terminate at the end of file, and: Exit Status: The return code is zero, unless end-of-file is encountered, read times out, or an invalid file descriptor is supplied as the argument to -u. So that's why the exit code is 1. From the fact that you believe that the command worked, we can conclude that there are no spaces in the file, so that read, after reading the entire file in the futile hope of finding a backslash, will split it by whitespace (the default value of $IFS), including newlines. So each line (or each word, if a line contains more than one word) gets stashed into the array. The mysterious case of the purloined backslash: Now, how did I know the file didn't contain any backslashes? Because you didn't supply the -r flag to read: -r do not allow backslashes to escape any characters So if you had any backslashes in the file, they would have been stripped, unless you had two of them in a row. And, of course, there is the evidence that read had an exit code of 1, which demonstrates that it didn't find a backslash, so there weren't two of them in a row either.
Takeaways Bash wouldn't be bash if there weren't gotchas hiding behind just about every command, and read is no exception. Here are a couple: Unless you specify -r, read will interpret backslash escape sequences. Unless that's actually what you want (which it occasionally is, but only occasionally), you should remember to specify -r to avoid having characters disappear in the rare case that there are backslashes in the input. The fact that read returns an exit code of 1 does not mean that it failed. It may well have succeeded, except for finding the line terminator. So be careful with a loop like this: while read -r LINE; do something with LINE; done because it will fail to do something with the last line in the rare case that the last line doesn't have a newline at the end. read -r LINE preserves backslashes, but it doesn't preserve leading or trailing whitespace.
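The practical difference between the question's read -a attempt and mapfile is easy to show; a sketch (bash 4+ assumed for mapfile):

```shell
tmp=$(mktemp)
printf 'one two\nthree\n' > "$tmp"

# mapfile: one array element per line of the file
by_line=$(bash -c 'mapfile -t arr < "$1"; echo "${#arr[@]}:${arr[0]}"' _ "$tmp")
# read -a: reads a single line and splits it into words on $IFS
by_word=$(bash -c 'read -r -a arr < "$1"; echo "${#arr[@]}:${arr[0]}"' _ "$tmp")

printf '%s / %s\n' "$by_line" "$by_word"
rm -f "$tmp"
```

mapfile keeps "one two" together as the first element; read -a only ever sees the first line and splits it.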
read -a array -d '\n' < foo, exit code 1
1,427,416,989,000
(For simplicity I'll assume the file to read is the first argument - $1.) I can do what I want externally with: tempfile=$(mktemp) awk '/^#/ {next}; NF == 0 {next}; {print}' "$1" > $tempfile while read var1 var2 var3 var4 < $tempfile; do # stuff with var1, etc. done However, it seems absurd to need to call awk every time I parse the config file. Is there a way to make read ignore commented or whitespace-only lines in a file, without external binaries/potential performance issues? Answers so far are quite helpful! To clarify, I don't want to use a temp file, but I do want to read the config from a file, not from standard in. I'm well aware that I can use an input redirection when I call the script, but for various reasons that won't work in my circumstance. I want to softcode the input to read from, e.g.: configfile="/opt/myconfigfile.txt" [ $# -gt 0 ] && [ -r "$1" ] && configfile="$1" while read var1 var2 var3 var4 < "$configfile" ; do ... But when I try this, it just reads the first line of configfile over and over until I kill the process. Maybe this should be its own question...but it's probably a single line change from what I'm doing. Where's my error?
You don't need a tempfile to do this, and sed (or awk) are far more flexible in comment processing than a shell case statement. For example: configfile='/opt/myconfigfile.txt' [ $# -gt 0 ] && [ -r "$1" ] && configfile="$1" sed -e 's/[[:space:]]*#.*// ; /^[[:space:]]*$/d' "$configfile" | while read var1 var2 var3 var4; do # stuff with var1, etc. done # Note: var1 etc are not available to the script at this # point. They are only available in the sub-shell running # the while loop, and go away when that sub-shell ends. This strips comments (with or without leading whitespace) and deletes empty lines from the input before piping it into the while loop. It handles comments on lines by themselves and comments appended to the end of the line: # full-line comment # var1 var2 var3 var4 abc 123 xyz def # comment here Calling sed or awk for tasks like this isn't "absurd", it's perfectly normal. That's what these tools are for. As for performance, I'd bet that in anything but very tiny input files, the sed version would be much faster. Piping to sed has some startup overhead but runs very fast, while shell is slow. Update 2022-05-03: Note that the variables (var1, var2, var3, etc) which are set in the while read loop will "go out of scope" when the while loop ends. They can only be used inside that while loop. The while loop is being run in a sub-shell because the config file is being piped into it. When that sub-shell dies, its environment goes with it - and a child process cannot change the environment of its parent process. If you want the variables to retain their values after the while loop, you need to avoid using a pipe. For example, use input redirection (<) and process substitution (<(...)): while read var1 var2 var3 var4; do # stuff with var1, etc. done < <(sed -e 's/[[:space:]]*#.*// ; /^[[:space:]]*$/d' "$configfile") # remainder of script can use var1 etc if and as needed.
With this process substitution version, the while loop runs in the parent shell and the sed script is run as a child process (with its output redirected into the while loop). sed and its environment go away when it finishes, while the shell running the while loop retains the variables created/changed by the loop.
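The scope difference between the piped and redirected variants can be verified directly; a sketch:

```shell
tmp=$(mktemp)
printf 'alpha\n' > "$tmp"

# Piped into the loop: the loop runs in a subshell, the variable is lost
piped=$(bash -c 'last=unset; cat "$1" | while read -r w; do last=$w; done; echo "$last"' _ "$tmp")
# Redirected into the loop: the loop runs in the current shell
redirected=$(bash -c 'last=unset; while read -r w; do last=$w; done < "$1"; echo "$last"' _ "$tmp")

printf '%s / %s\n' "$piped" "$redirected"
rm -f "$tmp"
```

Only the redirected form carries the value out of the loop (bash's default behaviour; shells with lastpipe enabled differ).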
How to make bash built-in "read" ignore commented or empty lines?
1,427,416,989,000
This script takes the user input line after line, and executes myfunction on every line #!/bin/bash SENTENCE="" while read word do myfunction "$word" done echo $SENTENCE To stop the input, the user has to press [ENTER] and then Ctrl+D. How can I rebuild my script to end only with Ctrl+D and process the line where Ctrl+D was pressed?
To do that, you'd have to read character by character, not line by line. Why? The shell very likely uses the standard C library function read() to read the data that the user is typing in, and that function returns the number of bytes actually read. If it returns zero, that means it has encountered EOF (see the read(2) manual; man 2 read). Note that EOF isn't a character but a condition, i.e. the condition "there is nothing more to be read", end-of-file. Ctrl+D sends an end-of-transmission character (EOT, ASCII character code 4, $'\04' in bash) to the terminal driver. This has the effect of sending whatever there is to send to the waiting read() call of the shell. When you press Ctrl+D halfway through entering the text on a line, whatever you have typed so far is sent to the shell1. This means that if you enter Ctrl+D twice after having typed something on a line, the first one will send some data, and the second one will send nothing, and the read() call will return zero and the shell interpret that as EOF. Likewise, if you press Enter followed by Ctrl+D, the shell gets EOF at once as there wasn't any data to send. So how to avoid having to type Ctrl+D twice? As I said, read single characters. When you use the read shell built-in command, it probably has an input buffer and asks read() to read a maximum of that many characters from the input stream (maybe 16 kb or so). This means that the shell will get a bunch of 16 kb chunks of input, followed by a chunk that may be less than 16 kb, followed by zero bytes (EOF). Once encountering the end of input (or a newline, or a specified delimiter), control is returned to the script. If you use read -n 1 to read a single character, the shell will use a buffer of a single byte in its call to read(), i.e. it will sit in a tight loop reading character by character, returning control to the shell script after each one. 
The only issue with read -n is that it sets the terminal to "raw mode", which means that characters are sent as they are without any interpretation. For example, if you press Ctrl+D, you'll get a literal EOT character in your string. So we have to check for that. This also has the side-effect that the user will be unable to edit the line before submitting it to the script, for example by pressing Backspace, or by using Ctrl+W (to delete the previous word) or Ctrl+U (to delete to the beginning of the line). To make a long story short: The following is the final loop that your bash script needs to do to read a line of input, while at the same time allowing the user to interrupt the input at any time by pressing Ctrl+D: while true; do line='' while IFS= read -r -N 1 ch; do case "$ch" in $'\04') got_eot=1 ;& $'\n') break ;; *) line="$line$ch" ;; esac done printf 'line: "%s"\n' "$line" if (( got_eot )); then break fi done Without going into too much detail about this: IFS= clears the IFS variable. Without this, we would not be able to read spaces. I use read -N instead of read -n, otherwise we wouldn't be able to detect newlines. The -r option to read enables us to read backslashes properly. The case statement acts on each read character ($ch). If an EOT ($'\04') is detected, it sets got_eot to 1 and then falls through to the break statement which gets it out of the inner loop. If a newline ($'\n') is detected, it just breaks out of the inner loop. Otherwise it adds the character to the end of the line variable. After the loop, the line is printed to standard output. This would be where you call your script or function that uses "$line". If we got here by detecting an EOT, we exit the outermost loop. 1 You may test this by running cat >file in one terminal and tail -f file in another, and then enter a partial line into the cat and press Ctrl+D to see what happens in the output of tail. 
For ksh93 users: The loop above will read a carriage return character rather than a newline character in ksh93, which means that the test for $'\n' will need to change to a test for $'\r'. The shell will also display these as ^M. To work around this: stty_saved="$( stty -g )" stty -echoctl # the loop goes here, with $'\n' replaced by $'\r' stty "$stty_saved" You might also want to output a newline explicitly just before the break to get exactly the same behaviour as in bash.
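The loop can also be exercised non-interactively: in a pipe the EOT byte reaches the script as data (on a real terminal the driver intercepts it), which is exactly the case the case statement handles. A sketch:

```shell
tmp=$(mktemp)
cat > "$tmp" <<'EOS'
line=''
while IFS= read -r -N 1 ch; do
    case $ch in
        $'\004'|$'\n') break ;;
        *) line="$line$ch" ;;
    esac
done
printf 'line: "%s"\n' "$line"
EOS

# \004 is the EOT byte, standing in for Ctrl+D pressed mid-line
out=$(printf 'par\004tial' | bash "$tmp")
echo "$out"
rm -f "$tmp"
```

The partial line "par" typed before the simulated Ctrl+D is still processed, which is the behaviour the question asks for.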
How to read the user input line by line until Ctrl+D and include the line where Ctrl+D was typed
1,427,416,989,000
I found some code for reading input from a file a while ago, I believe from Stack Exchange, that I was able to adapt for my needs: while read -r line || [[ -n "$line" ]]; do if [[ $line != "" ]] then ((x++)) echo "$x: $line" # <then do something with $line> fi done < "$1" I'm reviewing my script now & trying to understand what it's doing ...I don't understand what this statement is doing: while read -r line || [[ -n "$line" ]]; I understand that the -r option says that we're reading raw text into $line, but I'm confused about the || [[ -n "$line" ]] portion of the statement. Can someone please explain what that is doing?
[[ -n "$line" ]] tests if $line (the variable just read by read) is not empty. It's useful since read returns a success if and only if it sees a newline character before the end-of-file. If the input contains a line fragment without a newline in the end, this test will catch that, and the loop will process that final incomplete line, too. Without the extra test, such an incomplete line would be read into $line, but ignored by the loop. I said "incomplete line", since the POSIX definitions of a text file and a line require a newline at the end of each line. Other tools than read can also care, e.g. wc -l counts the newline characters, and so ignores a final incomplete line. See e.g. What's the point in adding a new line to the end of a file? and Why should text files end with a newline? on SO. The cmd1 || cmd2 construct is of course just like the equivalent in C. The second command runs if the first returns a falsy status, and the result is the exit status of the last command that executed. Compare: $ printf 'foo\nbar' | ( while read line; do echo "in loop: $line" done echo "finally: $line" ) in loop: foo finally: bar and $ printf 'foo\nbar' | ( while read line || [[ -n $line ]]; do echo "in loop: $line" done echo "finally: $line" ) in loop: foo in loop: bar finally:
What does `while read -r line || [[ -n $line ]]` mean?
1,427,416,989,000
With the below function signature ssize_t read(int fd, void *buf, size_t count); While I do understand based off the man page that on a success case, return value can be lesser than count, but can the return value exceed count at any instance?
A call to read() might result in more data being read behind the scenes than was requested (e.g. to read a full block from storage, or read ahead the following blocks), but read() itself never returns more data than was requested (count). If it did, the consequence could well be a buffer overflow since buf is often sized for only count bytes. POSIX (see the link above) specifies this limit explicitly: Upon successful completion, where nbyte is greater than 0, read() shall mark for update the last data access timestamp of the file, and shall return the number of bytes read. This number shall never be greater than nbyte. The Linux man page isn’t quite as explicit, but it does say read() attempts to read up to count bytes from file descriptor fd into the buffer starting at buf. (Emphasis added.)
Can read() return value exceed the count value?
1,427,416,989,000
I have a following small bash script var=a while true : do echo $var sleep 0.5 read -n 1 -s var done It just prints the character entered by the user and waits for the next input. What I want to do is actually not block on the read, i.e. every 0.5 second print the last character entered by the user. When user presses some key then it should continue to print the new key infinitely until the next key press and so on. Any suggestions?
From help read: -t timeout time out and return failure if a complete line of input is not read within TIMEOUT seconds. The value of the TMOUT variable is the default timeout. TIMEOUT may be a fractional number. If TIMEOUT is 0, read returns immediately, without trying to read any data, returning success only if input is available on the specified file descriptor. The exit status is greater than 128 if the timeout is exceeded So try: while true do echo "$var" IFS= read -r -t 0.5 -n 1 -s holder && var="$holder" done The holder variable is used since a variable loses its contents when used with read unless it is readonly (in which case it's not useful anyway), even if read timed out: $ declare -r a $ read -t 0.5 a bash: a: readonly variable code 1 I couldn't find any way to prevent this.
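A sketch showing the holder pattern keeping the old value when read times out (the input is deliberately stalled for longer than the timeout; the exact timings are arbitrary):

```shell
out=$(bash -c '
    var="last key"
    # The writer stays silent longer than the timeout, so read fails...
    if IFS= read -r -t 0.2 -n 1 holder < <(sleep 1); then
        var=$holder
    fi
    # ...and var keeps its previous value, because only holder was clobbered
    echo "kept: $var"')
echo "$out"
```

Had read been given var directly, the timeout would have emptied it.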
bash: non blocking read inside a loop
1,427,416,989,000
My script should act differently, depending on the presence of the data in the input stream. So I can invoke it like this: $ my-script.sh or: $ my-script.sh <<-MARK Data comes... ...data goes. MARK or: $ some-command | my-script.sh where the last two cases should read the data, while the first case should notice the data is missing and act accordingly. The crucial part (excerpt) of the script is: #!/bin/bash local myData; read -d '' -t 0 myData; [ -z "${myData}" ] && { # Notice the lack of the data. } || { # Process the data. } I use read to read input data, then option -d '' to read multiple lines, as this is expected, and the -t 0 to set timeout to zero. Why the timeout? According to help read (typing left unchanged; bold is mine): -t timeout time out and return failure if a complete line of input is not read withint TIMEOUT seconds. The value of the TMOUT variable is the default timeout. TIMEOUT may be a fractional number. If TIMEOUT is 0, read returns success only if input is available on the specified file descriptor. The exit status is greater than 128 if the timeout is exceeded So in cases 2 and 3 it should read the data immediately, as I understand it. Unfortunately it doesn't. As -t can take fractional values (according to above man page), changing the read line to: read -d '' -t 0.01 myData; actually reads the data when data is present and skips it (after 10ms timeout) if it is not. But it should also work when TIMEOUT is set to real 0. Why it actually doesn't? How can this be fixed? And is there, perhaps, alternative solution to the problem of "act differently depending on the presence of the data"? UPDATE Thanks to @Isaac I found a misleading discrepancy between the quoted on-line version and my local one (normally I do not have locale set to en_US, so help read gave me a translation which I couldn't paste here, and looking up the on-line translation was faster than setting a new env, but that caused the whole problem).
So for 4.4.12 version of Bash it says: If TIMEOUT is 0, read returns immediately, without trying to read any data, returning success only if input is available on the specified file descriptor. This gives a little bit different impression than "If TIMEOUT is 0, read returns success only if input is available on the specified file descriptor"---for me it implied actually an attempt to read the data. So finally I tested this and it worked perfectly: read -t 0 && read -d '' myData; The meaning: see if there's anything to read and if it succeed, just read it. So as to base question, the correct answer was provided by Isaac. And as to alternative solution I prefer the above "read && read" method.
No, read -t 0 will not read any data. You are reading the wrong manual. The man read will give the manual of a program in PATH called read. That is not the manual for the builtin bash read. To read bash man page use man bash and search for read [-ers] or simply use: help read Which contains this (on version 4.4): If timeout is 0, read returns immediately, without trying to read any data. So, no, no data will be read with -t 0. Q1 Why it actually doesn't? Because that is the documented way it works. Q2 How can this be fixed? Only if that is accepted as a bug (I doubt it will) and the bash source code is changed. Q3 And is there, perhaps, alternative solution to the problem of "act differently depending on the presence of the data"? Yes, actually, there is a solution. The next sentence of help read after what I quoted above reads: returning success only if input is available on the specified file descriptor. Which means that even if it doesn't read any data, it could be used to trigger an actual read of available data: read -t 0 && read -d '' myData [ "$myData" ] && echo "got input" || echo "no input was available" That will have no delay.
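The two-step pattern from the end of this answer is easy to verify in isolation. Below is a small sketch (bash ≥ 4 assumed; the function name probe_stdin is made up for illustration, not from the original post):

```shell
#!/usr/bin/env bash
# `read -t 0` only polls whether the descriptor is readable; a second
# read does the actual work, mirroring the answer's final snippet.
probe_stdin() {
    local myData=
    # read -d '' hits EOF and returns non-zero, but still fills the variable
    read -t 0 && read -r -d '' myData
    if [ -n "$myData" ]; then
        printf 'got input: %s\n' "$myData"
    else
        printf 'no input was available\n'
    fi
}
```

Feeding it a file prints the data immediately; running it against /dev/null (always readable, but empty) takes the "no input" branch, as the answer's `[ "$myData" ]` guard is there for.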
Why bash "read -t 0" does not see input?
1,427,416,989,000
The command df . can show us which device we are on. For example, me@ubuntu1804:~$ df . Filesystem 1K-blocks Used Available Use% Mounted on /dev/sdb1 61664044 8510340 49991644 15% /home Now I want to get the string /dev/sdb1. I tried like this but it didn't work: df . | read a; read a b; echo "$a", this command gave me an empty output. But df . | (read a; read a b; echo "$a") will work as expected. I'm kind of confused now. I know that (read a; read a b; echo "$a") is a subshell, but I don't know why I have to make a subshell here. As my understanding, x|y will redirect the output of x to the input of y. Why read a; read a b; echo $a can't get the input but a subshell can?
The main problem here is grouping the commands correctly. Subshells are a secondary issue. x|y will redirect the output of x to the input of y Yes, but x | y; z isn't going to redirect the output of x to both y and z. In df . | read a; read a b; echo "$a", the pipeline only connects df . and read a, the other commands have no connection to that pipeline. You have to group the reads together: df . | { read a; read a b; } or df . | (read a; read a b) for the pipeline to be connected to both of them. However, now comes the subshell issue: commands in a pipeline are run in a subshell, so setting a variable in them doesn't affect the parent shell. So the echo command has to be in the same subshell as the reads. So: df . | { read a; read a b; echo "$a"; }. Now whether you use ( ... ) or { ...; } makes no particular difference here since the commands in a pipeline are run in subshells anyway.
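The grouping fix can be checked without depending on real df output. Here a made-up two-line input stands in for it, and the helper name is purely illustrative:

```shell
#!/usr/bin/env bash
# Mirrors `df . | { read a; read a b; echo "$a"; }` from the answer:
# skip the header line, then take the first field of the second line.
second_line_first_field() {
    { read -r _header; read -r first _rest; printf '%s\n' "$first"; }
}

printf 'Filesystem 1K-blocks\n/dev/sdb1 61664044\n' | second_line_first_field
# prints /dev/sdb1
```

Both reads sit inside one `{ ...; }` group, so they consume the same pipe, and the result is printed from inside that group before the subshell exits.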
Why must I put the command read into a subshell while using pipeline [duplicate]
1,427,416,989,000
File.tsv is a tab delimited file with 7 columns: cat File.tsv 1 A J 1 2 B K N 1 3 C L O P Q 1 The following reads File.tsv which is a tab delimited file with 7 columns, and stores the entries in an Array A. while IFS=$'\t' read -r -a D; do A=("${A[@]}" "${D[i]}" "${D[$((i + 1))]}" "${D[$((i + 2))]}" "${D[$((i + 3))]}" "${D[$((i + 4))]}" "${D[$((i + 5))]}" "${D[$((i + 6))]}") done < File.tsv nA=${#A[@]} for ((i = 0; i < nA; i = i + 7)); do SlNo="${A[i]}" Artist="${A[$((i + 1))]}" VideoTitle="${A[$((i + 2))]}" VideoId="${A[$((i + 3))]}" TimeStart="${A[$((i + 4))]}" TimeEnd="${A[$((i + 5))]}" VideoSpeed="${A[$((i + 6))]}" done Issue Certain entries are empty in tsv files, but the empty values are skipped while reading the file. Note In the tsv file, empty values are preceded and succeeded by a tab character. Desired Solution Empty values should be read and stored in the array.
As I said in my comments, this is not a job for a shell script. bash (and similar shells) are for co-ordinating the execution of other programs, not for processing data. Use any other language instead - awk, perl, and python are good choices. It will be easier to write, easier to read and maintain, and much faster. Here's an example of how to read your text file into an Array of Hashes (AoH) in perl, and then use the data in various print statements. An AoH is a data structure that is exactly what its name says it is - an array where each element is an associative array (aka hash). BTW, this could also be done with an Array of Arrays (AoA) data structure (also known as a List of Lists or LoL), but it's convenient to be able to access fields by their field name instead of having to remember their field number. You can read more about perl data structures in the Perl Data Structures Cookbook which is included with perl. Run man perldsc or perldoc perldsc. You probably also want to read perllol and perlreftut too. and perldata if you're not familiar with perl variables ("Perl has three built-in data types: scalars, arrays of scalars, and associative arrays of scalars, known as hashes". A "scalar" is any single value, like a number or a string or a reference to another variable) Perl comes with a lot of documentation and tutorials - run man perl for an overview. The included perl docs come to about 14MB, so it's often in a separate package in case you don't want to install it. On debian: apt install perl-doc. Also, each library module has its own documentation. #!/usr/bin/perl -l use strict; # Array to hold the hashes for each record my @data; # Array of field header names. 
This is used to insert the # data into the %record hash with the right key AND to # ensure that we can access/print each record in the right # order (perl hashes are inherently unordered so it's useful # and convenient to use an indexed array to order it) my @headers=qw(SlNo Artist VideoTitle VideoId TimeStart TimeEnd VideoSpeed); # main loop, read in each line, split it by single tabs, build into # a hash, and then push the hash onto the @data array. while (<>) { chomp; my %record = (); my @line = split /\t/; # iterate over the indices of the @line array so we can use # the same index number to look up the field header name foreach my $i (0..$#line) { # insert each field into the hash with the header as key. # if a field contains only whitespace, then make it empty ($record{$headers[$i]} = $line[$i]) =~ s/^\s+$//; } push @data, \%record ; } # show how to access the AoH elements in a loop: print "\nprint \@data in a loop:"; foreach my $i (0 .. $#data) { foreach my $h (@headers) { printf "\$data[%i]->{%s} = %s\n", $i, $h, $data[$i]->{$h}; } print; } # show how to access individual elements print "\nprint some individual elements:"; print $data[0]->{'SlNo'}; print $data[0]->{'Artist'}; # show how the data is structured (requires Data::Dump # module, comment out if not installed) print "\nDump the data:"; use Data::Dump qw(dd); dd \@data; FYI, as @Sobrique points out in a comment, the my @line =... and the entire foreach loop inside the main while (<>) loop can be replaced with just a single line of code (perl has some very nice syntactic sugar): @record{@headers} = map { s/^\s+$//, $_ } split /\t/; Note: Data::Dump is a perl module for pretty-printing entire data-structures. Useful for debugging, and making sure that the data structure actually is what you think it is. And, not at all co-incidentally, the output is in a form that can be copy-pasted into a perl script and assigned directly to a variable. 
It's available for debian and related distros in the libdata-dump-perl package. Other distros probably have it packaged too. Otherwise get it from CPAN. Or just comment out or delete the last three lines of the script - it's not necessary to use it here, it's just another way of printing the data that's already printed in the output loop. Save it as, say, read-tsv.pl, make it executable with chmod +x read-tsv.pl and run it: $ ./read-tsv.pl file.tsv print @data in a loop: $data[0]->{SlNo} = 1 $data[0]->{Artist} = A $data[0]->{VideoTitle} = J $data[0]->{VideoId} = $data[0]->{TimeStart} = $data[0]->{TimeEnd} = $data[0]->{VideoSpeed} = 1 $data[1]->{SlNo} = 2 $data[1]->{Artist} = B $data[1]->{VideoTitle} = K $data[1]->{VideoId} = N $data[1]->{TimeStart} = $data[1]->{TimeEnd} = $data[1]->{VideoSpeed} = 1 $data[2]->{SlNo} = 3 $data[2]->{Artist} = C $data[2]->{VideoTitle} = L $data[2]->{VideoId} = O $data[2]->{TimeStart} = P $data[2]->{TimeEnd} = Q $data[2]->{VideoSpeed} = 1 print some individual elements: 1 A Dump the data: [ { Artist => "A", SlNo => 1, TimeEnd => "", TimeStart => "", VideoId => "", VideoSpeed => 1, VideoTitle => "J", }, { Artist => "B", SlNo => 2, TimeEnd => "", TimeStart => "", VideoId => "N", VideoSpeed => 1, VideoTitle => "K", }, { Artist => "C", SlNo => 3, TimeEnd => "Q", TimeStart => "P", VideoId => "O", VideoSpeed => 1, VideoTitle => "L", }, ] Notice how the nested for loops print the data structure in the exact order we want (because we iterated over the @headers array), while just dumping it with the dd function from Data::Dump outputs the records sorted by key name (which is how Data::Dump deals with the fact that hashes in perl aren't ordered). Other comments Once you have your data in a data structure like this, it's easy to insert it into an SQL database like mysql/mariadb or postgresql or sqlite3. Perl has database modules (see DBI) for all of those and more. 
(In debian, etc, these are packaged as libdbd-mysql-perl, libdbd-mariadb-perl, libdbd-pg-perl, libdbd-sqlite3-perl, and libdbi-perl. Other distros will have different package names) BTW, the main parsing loop could also be implemented using another perl module called Text::CSV, which can parse CSV and similar file formats like Tab separated. Or with DBD::CSV which builds on Text::CSV to allow you to open a CSV or TSV file and run SQL queries against it as if it were an SQL database. In fact, it's a fairly trivial 10-15 line script to use these modules to import a CSV or TSV file into an SQL database, and most of that is boilerplate setup stuff...the actual algorithm is a simple while loop to run a SELECT query on the source data and an INSERT statement into the destination. Both of these modules are packaged for debian, etc, as libtext-csv-perl and libdbd-csv-perl. Probably packaged for other distros too. and, as always, available on CPAN.
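For completeness, the skipping behaviour the question hits is reproducible in pure bash: tab counts as "IFS whitespace", so runs of tabs collapse into a single delimiter and the empty fields vanish. If you do stay in the shell, one workaround (a sketch; it assumes the data never contains the substitute character) is to swap tabs for a non-whitespace delimiter first:

```shell
#!/usr/bin/env bash
line=$'1\tA\tJ\t\t\t\t1'    # 7 fields: 1, A, J, three empties, 1

# Tab is IFS whitespace, so consecutive tabs merge: the empties are lost.
IFS=$'\t' read -r -a bad <<< "$line"
echo "${#bad[@]}"    # 4

# Substitute a non-whitespace delimiter (ASCII unit separator, assumed
# absent from the data); each occurrence then delimits its own field.
us=$'\x1f'
IFS=$us read -r -a good <<< "${line//$'\t'/$us}"
echo "${#good[@]}"   # 7
```

That said, the answer's point stands: once you care about empty fields, quoting, and the like, a real TSV/CSV parser is the less fragile choice.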
Read file and store as array without skipping empty strings
1,427,416,989,000
How can I give a space " " as input in a shell script? Ex: echo " Enter date for grep... Ex: Oct  6 [***Double space is important for single date***] (or) Oct 12 " read mdate echo $mdate I get the output as Oct 6 (single space) but I want Oct  6 (double space).
You already have Oct  6 in $mdate, your problem is that you're expanding it when printing. Always use double quotes around variable substitutions. Additionally, to retain leading and trailing whitespace (those are stripped by read), set IFS to the empty string. IFS= read -r mdate echo "$mdate" The -r (raw) option to read tells it not to treat backslashes specially, which is what you want most of the time (but it's not the problem here).
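The two pitfalls — read's whitespace trimming and the unquoted expansion — can be seen side by side; printf stands in for the interactive input here:

```shell
#!/usr/bin/env bash
# Unquoted $mdate lets the shell word-split, squeezing the inner
# run of spaces down to one.
printf 'Oct  6\n' | { read -r mdate; echo $mdate; }        # Oct 6

# IFS= stops read trimming leading/trailing whitespace; quoting
# stops echo from word-splitting the value.
printf 'Oct  6\n' | { IFS= read -r mdate; echo "$mdate"; } # Oct  6
```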
Read space as input in shell script [duplicate]
1,427,416,989,000
I'm not sure how to explain the problem in general, so I'll just use this example: #!/bin/bash cleanup() { rm "$myfifo" rm "$mylock" kill '$(jobs -p)' } writer() { for i in $(seq 0 100); do echo "$(date -R) writing \"$i\"." echo "$i" > "$myfifo" done } reader() { while true; do flock 3 read -st 1 line status=$? if [ $status -eq 0 ]; then echo "$(date -R) reading \"$line\" in thread $1." else echo "$(date -R) status $status in thread $1." break fi flock -u 3 sleep 10 done 3<"$mylock" <"$myfifo" } trap cleanup EXIT myfifo="$(mktemp)" mylock="$(mktemp)" rm "$myfifo" mkfifo "$myfifo" writer & for i in $(seq 1 10); do reader $i & sleep 1 done wait Now I would expect the reading threads to each take a line (or a few lines) but the first reading process will take all the lines (in a random order which I don't understand but that's ok), put it in a buffer somewhere and all the other reading processes will not get any line. Also the timeout parameter supplied to the read command doesn't seem to work because the readers 2-10 do not exit. Why? How can I fix this so the lines get (somewhat) evenly distributed among the readers?
Letting read timeout read timeout actually works. The problem here is that opening a FIFO in reading mode blocks until the FIFO is opened in writing mode. And in this case, this is not read that is blocked, this is bash, when redirecting your FIFO to stdin. Once some other process opens the FIFO for write, bash will successfully open the FIFO for read and will execute the read command (which will timeout as expected). If you are using Linux, the man page for fifo tells us that "opening a FIFO for read and write will succeed both in blocking and nonblocking mode". Therefore, the following command will timeout even when no other process opens the FIFO for write: read -st 1 data <> "$fifo" Beware of the race condition Once your shell process opens the FIFO for read, the writer(s) will then be unlocked and, by the time bash redirects the FIFO to stdin and calls read, the writer may be able to open the FIFO and write into it several times. Since you read only one line at a time, any line remaining to be read while the FIFO is closed at both ends will be lost. A better solution would be to keep the FIFO open by redirecting it to stdin for the whole while...done loop, as you did for fd 3. Something like: while ...; do ... read -st 1 data ... done 3<"$lock" < "$fifo" Or even at an upper level, if you have several readers in parallel. What matters is to keep the FIFO open. Same for the writer side. For example, with the code you posted with your update, the upper level would be: # Writer writer > "$myfifo" & # Reader for i in $(seq 1 10); do reader $i & sleep 1 done < "$myfifo" Of course, remove the redirections to/from $myfifo everywhere else in your code, and remove the echo "$(date -R) writing \"$i\"." in your writer, or redirect it to stderr, else it would go to the FIFO.
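The `<>` trick from this answer is easy to check in isolation. A sketch (Linux-specific behaviour per fifo(7); the one-second timeout and function name are just for the demo):

```shell
#!/usr/bin/env bash
try_fifo_read() {
    local fifo line
    fifo=$(mktemp -u) && mkfifo "$fifo"
    # <> opens the FIFO read-write, so the open itself cannot block
    # waiting for a writer; it is read's own timeout that then fires.
    if read -r -t 1 line <> "$fifo"; then
        echo "read: $line"
    else
        echo "timed out"
    fi
    rm -f "$fifo"
}

try_fifo_read    # with no writer around, prints "timed out" after ~1s
```

With a plain `< "$fifo"` redirection instead, the script would hang in the open itself and the timeout would never get a chance to apply, which is exactly the symptom described in the question.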
read timeout parameter (-t) not working?
1,427,416,989,000
I have a long running Bash script that I don't want to run as root, but it needs root access periodically throughout. I solved this by asking for the user for the root password using sudo -v and then I backgrounded a process that would loop and reset the sudo timer using sudo -n true I then started to have weird problems when using read in the main process. Here is a minimal script exhibiting this problem. If you run it and don't input anything before the sudo -n true is run, the read gets a read error: 0: Resource temporarily unavailable #!/usr/bin/env bash sudo -v # ask user for password sleep 1 && sudo -n true & # background a process to reset sudo printf "Reading text: " read -n 1 TEXT echo "Read text: $TEXT" I haven't been able to replicate this behavior with any other command besides sudo. How can I run sudo -n true in the background without interfering with read? Edit: I only get this problem on Ubuntu not macOS.
I get the same behaviour with: sleep 1 && true < /dev/tty & read var sudo opens /dev/tty to query the current foreground process group, that causes the read() system call made by bash's read to return with EAGAIN with Ubuntu 18.04's Linux kernel 4.15.0-45-generic and 4.18.0-14-generic at least causing the read utility to return with that error. That seems to be caused by a bug in recent versions of the Ubuntu variants of the Linux kernel. I can't reproduce it on Solaris nor FreeBSD, nor in any version of Linux on Debian (though I can reproduce it if I boot Debian on Ubuntu's 4.18 kernel). https://bugs.launchpad.net/ubuntu/+source/linux-signed-hwe/+bug/1815021 seems to be another manifestation of that bug. That's introduced by https://lkml.org/lkml/2018/11/1/663 which ubuntu backported to both its 4.15 and 4.18 kernels at least. But Ubuntu had not backported another change that fixes a regression introduced by that patch until 2 hours ago. 4.18.0-15-generic has now landed in the Ubuntu repositories and fixes the issue. I suppose the one for 4.15 will follow shortly. ksh93 doesn't have the problem for that same code as its read builtin uses select() first to wait for input, and select() doesn't return when the other process opens /dev/tty. So here you could use ksh93 instead of bash or wait for the fixed kernel or go back to 4.15.0-43 until 4.15.0-46 is released. Alternatively, you could use zsh which has builtin support for changing uids (via the EUID/UID/USERNAME special variables) provided you start the script as root so you wouldn't need to run sudo within the script (it's also potentially dangerous to extend the life of the sudo token longer than the user would expect).
Bash: how can I run `sudo -n true` in the background without interfering with `read`?
1,427,416,989,000
I want to download a provisioning script that reads some configuration parameters via read, and execute it: curl http://example.com/provisioning.sh | sh The problem is, that the read command in the script is call with the -i parameter to provide a default: read -p "Name: " -i joe name echo $name If I download the script, set the +x permission and run it, everything's fine. If I run it with cat provisioning.sh | sh or sh provisioning.sh, it fails with: read: Illegal option -i Why wouldn't read support providing a default if it's run via sh? But whatever, I'll remove the -i, being left with read -p "Name: " name echo $name Now if I run the script via cat provisioning.sh | sh, it won't do anything. Why is that? Ubuntu 14.04.
When you pipe the output of curl into sh you're making the script text be standard input of the shell, which takes it in as commands to run. After that, there's nothing left to read. Even if it were to try, it wouldn't get anything from the terminal input, because it's not connected to it. The pipe has replaced standard input for the sh process. The next problem is that read -i is not a POSIX sh feature, but rather an extension supported by bash. Ubuntu uses dash, a minimal POSIX-compliant shell with minimal extensions, as /bin/sh by default. That's why it rejects the -i option specifically (although it does support -p). If you're using a more capable shell yourself, you can try something like: bash <(curl http://example.com/provisioning.sh) which creates a pipe for bash to read the output of curl from and provides it as the script file argument. In this case, the standard input of the script is still connected to the terminal, and read will work (but note the big caveat below the line). I'll note also that "curl | sh" is generally frowned upon as an obvious security problem, but you know best the situation your script is in.
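The stdin competition is easy to demonstrate without curl: feed a one-line script that tries to read to bash in both ways (everything here is illustrative):

```shell
#!/usr/bin/env bash
script='read -r line; printf "script read: [%s]\n" "$line"'

# Piped: the script text itself occupies stdin, so by the time `read`
# runs there is nothing left on it (and no terminal behind it anyway).
printf '%s\n' "$script" | bash                      # script read: []

# Process substitution: the script arrives as a file argument, leaving
# stdin free for `read` (pre-loaded here with a here-string).
bash <(printf '%s\n' "$script") <<< 'hello'         # script read: [hello]
```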
Why does the read command not take interactive options when run by sh?